Matchbook Blog: Company news, release notes, and all things data
Reference Based Mastering (RBM) uses reference sources such as Dun & Bradstreet to match and master records for accuracy. By letting data users inform the process, the system is both powerful and flexible: unlike traditional MDM, users can set the rules they want applied to their data, process it through the solution, and consistently end up with an accurate data set.
Mastered data has long been considered the holy grail by CDOs and CIOs. It is no surprise that enterprises spend millions of dollars on software, services, and people to deliver mastered data effectively across the organization. The economic impact far exceeds a “feather in the cap” achievement; it goes to the core of an organization’s competitiveness and its agility in responding to changes in the business environment.
MDM solutions provide a robust approach to achieving mastered data. However, the conventional approach to MDM has pitfalls, well documented by the MDM community, that enterprises should heed when embarking on the journey. Key among them are data ownership, lack of knowledge of intent, and false positives in mastering.
The conventional approach to mastering data involves, among other things, the capability to match and merge data. Matching and merging have become far more sophisticated as MDM vendors introduce complex rules-based criteria and AI/ML-based solutions.
In my conversations with organizations using these MDM capabilities to master their data, an unsettled feeling sets in as they try to balance false positives, leaving too much data un-mastered, and dedicating ever more manual effort to mastering and stewarding the data. Regardless of the approach, these processes are tightly controlled by a central group within the enterprise rather than by the true owners of the data. This leads to lower adoption of mastered data within individual business units.
How do you democratize the process of mastering data so that data owners have more control over how their data is mastered?
For one, it involves a distributed governance model that gives owners the tools to master their own data. In my 20 years of experience in the data, analytics, and MDM space, a robust way to accomplish distributed yet controlled mastering is to use a common reference source to master data at the individual business units, and then use the identifiers provided by that reference source to master the data across the enterprise.
For business data (customers, accounts, vendors), an established reference source such as Dun & Bradstreet can provide a relevant identifier, such as the DUNS Number, that uniquely identifies an individual company or company location. If you are a salesperson selling services to two different Starbucks locations on the same street a few blocks apart, you won’t run the risk of an MDM solution falsely merging those two customers into a single record purely on the basis of a match-merge capability. Instead, as the salesperson responsible for these two accounts, you match the data to Dun & Bradstreet’s reference data set and attach a unique DUNS Number to each record.
Reference sources don’t necessarily have to be external. They can also be an enterprise’s own MDM record identifier or an internal reference dataset that has been vetted for your organization.
Mapping your data to a reference data set in this way gives business units complete control and removes any ambiguity in mastering a record, whether within a single system or across the organization, using that reference identifier. I call this approach Reference Based Mastering (RBM). The benefit at the business unit level is more control; at the organization level, it is a true “golden record” that is trusted across the organization. Another benefit of adopting RBM techniques is that no coordinated approach across the enterprise is required to start mastering data: individual business units and systems can begin integrating with the reference source and mastering their data right away.
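To make the idea concrete, here is a minimal sketch of RBM in Python. The reference table, normalization logic, and record fields are illustrative assumptions on my part; in practice the reference source would be something like Dun & Bradstreet and the identifier a DUNS Number.

```python
# A minimal sketch of Reference Based Mastering (RBM). All names, IDs, and
# fields below are hypothetical for illustration.

def normalize(value: str) -> str:
    """Crude normalization for illustration only."""
    return " ".join(value.lower().replace(",", " ").replace(".", " ").split())

# Hypothetical shared reference source: (name, address) -> reference ID.
REFERENCE = {
    ("starbucks", "101 main st"): "REF-000000001",
    ("starbucks", "350 main st"): "REF-000000002",
}

def match_to_reference(record: dict) -> dict:
    """Attach the shared reference identifier to a business unit's record."""
    key = (normalize(record["name"]), normalize(record["address"]))
    record["ref_id"] = REFERENCE.get(key)  # None => route to a data steward
    return record

# Two business units independently match their own copies of the same account...
sales = match_to_reference({"name": "Starbucks", "address": "101 Main St."})
billing = match_to_reference({"name": "Starbucks", "address": "101 Main St"})

# ...and enterprise-wide mastering becomes a simple group-by on ref_id.
golden: dict = {}
for rec in (sales, billing):
    if rec["ref_id"]:
        golden.setdefault(rec["ref_id"], []).append(rec)

print(len(golden["REF-000000001"]))  # 2: both records resolve to one entity
```

Note the key property: neither business unit needed to coordinate with the other; each matched against the common reference independently, and the enterprise-level golden record falls out of the shared identifier.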
I will caution that Reference Based Mastering (RBM) techniques are a means to an end and not a replacement for MDM or the benefits that MDM solutions provide enterprises. I consider RBM a starting point for a broader MDM strategy.
Over the past few years, I have seen this approach succeed with companies large and small as they work on building their master data strategy and solution. I would love to hear your experience on your MDM journey and your thoughts on using Reference Based Mastering (RBM) techniques as a lead-in to a larger MDM strategy and implementation.
A primary goal of any data management strategy is to provide a clear understanding of the identities and key attributes of the many businesses that the organization engages. Optimally, this information is available across the enterprise, providing a 360 degree view of customers, prospects, suppliers, and other key relationships. The availability of high-quality data that provides this level of actionable insight restores trust in the data, increases its utilization and supports good business decisions.
But in the real world, holistic data management strategies take a back seat to the near-term needs of the functional teams that own their own data silos and enterprise applications. CRM, sales force automation, and spend management applications each have their own ways of building databases that are not easily shared.
Moreover, in the absence of a company-wide data stewardship discipline, these siloed databases tend to accumulate a profusion of redundant, unverified, and incomplete records. Users of a CRM application, for example, may skip search-before-create and simply create a new record for each contact who is new to the sales rep but belongs to an organization that is not new to the company. Many duplication issues result from this kind of gap in data governance processes for data entry.
To make data non-redundant, structured, and usable, companies have two sources of information to leverage: their own internal data and the data they acquire from external sources. Each is mission-critical to arriving at data that is trusted, complete, and actionable.
Data quality starts with data matching: eliminating redundancies among records and accurately linking the multiple contacts associated with a single business entity. Internal data is the foundation of inferential matching. Third-party data must then be leveraged to complete the process through referential matching.
What is inferential data matching?
Inferential data matching is the process of working with a company’s own data to identify multiple records that should be associated with a single business entity and either linking them, collapsing them into a single record, or creating pointers to indicate that they are contacts within a single organization. But this matching process can be fraught with errors due to inadequate data management, incomplete data validation, or the challenge of rationalizing data across multiple systems, each with its own sources of data entry and its own ways of structuring data.
Even within a single enterprise application, companies struggle with duplicated data resulting from error-prone data entry practices. In a CRM application, for example, rather than search-before-create, users may simply generate a new record instead of updating or appending an existing one. Resolving inferential matching issues requires aggregating data from multiple systems and determining which data components are duplicated, how duplications should be resolved, which data elements are valid, and which should be present in the enterprise’s single source of truth.
Inferential matching is a complex but necessary step toward clean, reliable data. Matching records that are alike or differ only slightly in order to discover duplicates is a hard problem, especially when executing against complex objectives such as standardizing an address in a specified way.
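The core of inferential matching can be sketched in a few lines. The sample records, similarity threshold, and fields below are illustrative assumptions; production solutions use far richer rules (phonetic keys, address standardization, ML models) than this simple string-similarity clustering.

```python
# A minimal sketch of inferential matching: cluster records whose normalized
# name and address are similar enough. Threshold and data are illustrative.
from difflib import SequenceMatcher

def normalize(value: str) -> str:
    return " ".join(value.lower().replace(".", "").replace(",", "").split())

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

records = [
    {"id": 1, "name": "Acme Corp.", "address": "12 Oak Street"},
    {"id": 2, "name": "ACME Corp",  "address": "12 Oak St"},
    {"id": 3, "name": "Globex LLC", "address": "99 Pine Ave"},
]

# Greedy single-pass clustering: each record joins the first cluster whose
# representative it matches on both name and address, else starts a new one.
clusters = []
for rec in records:
    for cluster in clusters:
        rep = cluster[0]
        if similar(rec["name"], rep["name"]) and similar(rec["address"], rep["address"]):
            cluster.append(rec)
            break
    else:
        clusters.append([rec])

print([len(c) for c in clusters])  # [2, 1]: the two Acme variants collapse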
Inferential matching moves the enterprise closer to a 360-degree view of its contacts and commercial relationships by increasing the trustworthiness and usability of data.
With clear, deduplicated data, the sales and marketing teams can better assess risks and opportunities with each buyer and make business decisions based on a clear picture of relationships across all internal teams.
Join Matchbook Services’ Rushabh Mehta for a live webinar on May 25th, 2017 at 1 pm EDT. Learn about the data optimization tool set that Matchbook has developed in partnership with Dun & Bradstreet. Rushabh will point the way to the clean, trusted, enriched, high-quality data your teams need to build strategic analytics and grow process efficiencies. Click here to register.
ABOUT US: Matchbook Services provides data quality and data mastering solutions that offer comprehensive and multi-step inferential and referential data matching. It’s no surprise that data quality remains a key aspiration and critical need for organizations. Achieving and maintaining data quality requires processes, governance, and active oversight. The combination of trusted third-party pre-mastered data with the tools and technology that best fit your organization will set you on the right path.
Matchbook Services’ April release is full of enhancements, features, and bug fixes. Most excitingly, Monitoring is now live! Matchbook Monitoring services notify you when your data changes in the D&B database. You can choose to monitor all of your data or subsets of your records. See below for more details on this release:
- Real-Time Single Transaction API (Live API) – Enterprise edition users can now make real-time calls to Matchbook Services in order to get match and enrichment data. This feature is available for limited preview only and will be fully available to all enterprise customers by end of May. If you are interested in testing out this feature, please contact us.
- End User Licensing Agreement – When users log in, they will first be asked to accept the EULA before they can continue to use the portal. On the login page, as well as at the bottom of the portal, there is a link to the Licensing Agreement.
- REST APIs for Data Download – Users can now use REST APIs to download results from Matchbook Services for the Match Output as well as the Data Enrichment Output for Detailed Company Profile. To use this feature and get your API credentials, please contact our support team.
- Show Tags – We now show data tags as tooltips when tags are present in the data
- Data Stewardship Statistics – Enhanced the pie chart on the dashboard to show actual counts by data steward in addition to percentages
- New count indicators on the Export Page – New indicators display count of records in the export queue
- Remove informational Tooltips – We have removed informational tooltips from the dashboard to make way for new help features that will be rolled out in April
- Multiple tags when adding new company records – Users can now add multiple tags to new records being added via the UI into the database
- Import/Export Additional Auto-acceptance rules – Users can now import new rules from Excel file or export rules out to Excel
- Manage Tags – Users can now manage tags from the Configuration Settings page
- Dashboard Refresh – Users can now click refresh on the new refresh icon on the dashboard to refresh the information on the dashboard
- Redirect on Logout – Users are now redirected to Matchbook’s News and Releases page on logout where users can get the latest information from Matchbook Services
- About Us page – We added more licensing details on the about us page for easy viewing
- Update standardized data in output when searching via the Clean Data/Search page
- Import Data Page – Change label “Text with Delimited” to “Custom Delimiter”
- Monitoring Profile IDs and Notification Profile IDs received from D&B are now used as the unique identifiers in the configuration tables
- Data Import File Name – Now stores the correct file name based on the import process
- Fixed the Review Matches confidence code filter, which was not filtering data when no confidence codes were selected
- SearchByDUNS feature now accepts only numbers
- Fixed the message box popup when accessing the base report
- Fixed an issue where the one-time verification code was not being emailed to first-time users
- Fixed an issue where the Investigation Report was not showing certain Investigation Remarks
The releases are coming fast and furious now, with our development team incorporating more feedback from our customers and executing the product road map. See below for an outline of the new version: 2.17.02.10
Release Date: Mar 10, 2017
Major Release Features
- Export Enhanced Match Output: Additional metadata and D&B data added to the match output
- Export Enrichment Data: Users can export enriched data from Detailed Company Profile via the UI
- Tag based Export for Match Output: Users can export data based on tags via the UI. This is only available in the Professional and Enterprise SKUs
- REST API for Data Upload: Matchbook Services now enables Professional and Enterprise edition customers to upload data to Matchbook Services using a REST API call. To get your API keys, please contact firstname.lastname@example.org
- License Based Feature Entitlements: Matchbook Services’ features can now be controlled based on SKU
- Tag Based Data Security: Restrict data access for Data Stewards through the use of Tags
- Fine-Grained Feature Access: Administrators can now restrict data stewards’ access to certain Matchbook features through user security. The following features can be controlled:
- Open investigations
- Search by DUNS
- Add/Edit Additional Auto-acceptance criteria
- Review Matches Page Re-design: The review matches page now has enhanced filtering ability for match candidates
- Monitoring DUNS Registration: Once monitoring rules are implemented, the background process can automatically register DUNS Numbers for monitoring
- Support for D&B Direct+ Credentials: Matchbook allows users to add credentials for D&B Direct+ API access. Future releases of Matchbook Services will allow users to seamlessly integrate D&B Direct+ into their cleanse/match and enrichment process.
- Exclusion Filters for Interactive Search: When performing interactive searches from the UI, users can now pass in Exclusion clauses to the cleanse/match calls
Bugs Fixes and Minor Enhancements
- Fix Tag Storage – Empty tags are now stored as nulls in configuration tables for streamlined filtering
- Excel upload failure in case of duplicate column names – The issue with unhandled exception due to duplicate column names is now fixed and a valid error message is shown to the users
- Session level filter searches based on special characters – This has now been addressed to allow “[“ and “]” characters
- When looking at a single match record detail, users have the ability to see additional fields and keep that option open as they cycle through multiple match candidates
- On the Import / Single Entry form, Country has been defaulted to US
- On all screens that list countries, the sort order has been changed to show “US” first, “CA” second, “GB” third and then order the remaining by country names
- Session filters now allow users to filter data by tags
- Fixed refresh performance for dashboard
- Modified reports
- Allow users to add match as a new company and open investigations by right-clicking a match candidate on the Match Data page
For more details on Features/Pricing contact us at 805-325-3893 or send a web request
Matchbook Services releases tag-based rules among other advanced functionality in new release.
We’re excited to announce the new Matchbook Services release. We’ve added a whole set of new features and functionality in match, enrichment and administration. We’re also setting the stage for releases later in 2017 with monitoring set-up support. Please take a moment to review the new feature list. You may also need to clear your browser cache in order for the new release to work properly.
New version: 2.17.01.10
Release Date: Feb 8, 2017
Major Release Features
Tag Based Rules: This release adds the ability to isolate data processing and auto-acceptance rules using Tags. This allows users to fine-tune how different sets of data are processed through the system for match and auto-acceptance, what data enrichments are applied, and what monitoring is done, based on the tags on the input data. Tags will also be used to enable data security for data stewards in future releases. Tag based rules are only supported in the Professional and Enterprise SKUs.
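Conceptually, tag-based rules amount to selecting a rule set per record based on its tags. The sketch below is a hypothetical illustration of that idea; the tag names, rule fields, and fallback behavior are my assumptions, not Matchbook’s actual schema.

```python
# Hypothetical sketch of tag-based rule selection: rule sets are keyed by
# tag, records carry tags, and each record is processed with the first
# matching rule set (falling back to a default). All names are illustrative.
DEFAULT_RULES = {"min_confidence": 8, "enrichment": "basic"}

TAG_RULES = {
    "emea-vendors": {"min_confidence": 9, "enrichment": "detailed-profile"},
    "us-prospects": {"min_confidence": 7, "enrichment": "basic"},
}

def rules_for(record: dict) -> dict:
    """Return the rule set for the first recognized tag, else the default."""
    for tag in record.get("tags", []):
        if tag in TAG_RULES:
            return TAG_RULES[tag]
    return DEFAULT_RULES

rec = {"name": "Example GmbH", "tags": ["emea-vendors"]}
print(rules_for(rec)["min_confidence"])  # 9
```

The same lookup can then drive match settings, enrichment choices, and monitoring registration for each tagged subset of data.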
Exclusions for Cleanse Match API Calls: These new cleanse match settings allow users to fine-tune the calls made to the D&B Direct Cleanse and Match API to only retrieve relevant results. These settings can also be selectively applied to inputted data via the new Tag Based Rule feature.
Auto-Accept Directives: Auto-accept directives allow administrators to control what type of record can be auto-accepted by the system. These directives can be fine-tuned using the new Tag based rule feature.
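An auto-accept directive boils down to a routing decision on each match candidate. The sketch below assumes a simple confidence-code threshold (D&B confidence codes run 1–10); the threshold value and field names are illustrative assumptions, not the product’s actual configuration.

```python
# Illustrative sketch of an auto-accept directive: a candidate match is
# auto-accepted only if its confidence code meets an administrator-set
# threshold; everything else is routed to a data steward for review.
AUTO_ACCEPT_MIN_CONFIDENCE = 8  # hypothetical threshold; codes run 1-10

def route_match(candidate: dict) -> str:
    """Decide whether a match candidate is auto-accepted or reviewed."""
    if candidate["confidence_code"] >= AUTO_ACCEPT_MIN_CONFIDENCE:
        return "auto-accept"
    return "steward-review"

print(route_match({"duns": "000000001", "confidence_code": 9}))  # auto-accept
print(route_match({"duns": "000000002", "confidence_code": 5}))  # steward-review
```

Combined with tag-based rules, the threshold can differ per tagged data set, so that, say, vendor records require a higher confidence than prospect records before bypassing stewardship.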
Data Enrichment Settings: Data Enrichment settings allow administrators to specify the types of data enrichment that should automatically be applied to data once a valid DUNS Number is selected. Application of data enrichment rules can be fine-tuned using the new Tag based rule feature
Additional Company Fields: Users can now see new additional fields uploaded through the input file when making match decisions.
Edition Based Entitlements: Features available to a customer are determined by the edition to support our new feature based pricing model.
Import and Match Data Export Changes: We are enabling new processes and views for customers connecting to the back-end database, to support custom integration solutions and to provide additional data for the match candidate and input record. This limited direct database access is only supported in the Standard, Professional, and Enterprise editions.
Monitoring Setup Support: Start creating all your monitoring elements with this version, in anticipation of the full monitoring capability coming in the next release. This version includes the ability for administrators to create and manage monitoring configurations, including synching and setting up Monitoring Profiles, User Preferences, and Notification Profiles using D&B Direct 2.0 monitoring services. Users can also set up rules to automatically register DUNS Numbers for monitoring. The current release only features monitoring setup for DCP_PREM; future releases will add support for other monitoring products. This feature is only supported in our Professional and Enterprise editions.
Bugs Fixes and Minor Enhancements
- Login – verify using Email: Added the ability to “Resend email” if email was not received. Also fixed a bug related to the verification using email where a user was able to bypass the 2-step verification
- Re-activate an inactive user: Fixed a bug that caused a re-activated user to revert to an inactive state
- Issue with duplicate records with the same SrcRecordId being exported – this is an edge case scenario that should not exist in actual implementations since SrcRecordId is required to be unique for each input record
- Quick Search not bringing back search results – this was due to incorrect http encoding of the input data. This has been resolved
- Dashboard Data Queue Statistics has changed to show the Active Data Queue Statistics. We now only show unprocessed (or un-exported) records in each queue
- Header and label changes in Cleanse Match Settings
- Removed phone number validation for Import single record UI
- Revamped the sign-in experience via www.matchbookservices.com to remember user domain and login information for subsequent visits.