Matchbook Blog: Company news, release notes, and all things data
Use Case #1: When match rates are low, enrichment is impossible.
Diversity is an important indicator of a healthy supply chain. Companies are realizing that by demonstrating ethical compliance, they experience better financial health and contribute to a thriving global economy. But diversity indicators are difficult to measure and track.
Our client, a large multinational organization, struggled to track their supplier diversity, which is mandatory for reporting on government contracts. Dun & Bradstreet and other third-party vendors could supply the data, but match rates were low and the business didn’t have confidence in the mastering process.
When you don’t trust the data you’re looking at, it’s difficult to use it confidently to make business decisions. They needed to integrate multiple lines of business while maintaining fine-grained control over unique data use cases, but traditional MDM just wasn’t returning results that were up to their standards.
We partnered closely with their application and quality team to completely integrate Matchbook Services into their systems within three months. Together we processed over 150,000 supplier records immediately upon going live. Today, data stewardship tools within the Matchbook portal give their supplier insights teams continuous control over matching that far exceeds previous batch processing. Through automated acceptance rules and a comprehensive data stewardship console, the quality and quantity of diversity measurements increased threefold, at a lower cost. The high degree of fidelity and trust in the data transformed their business units' ability to make decisions and stay compliant across their supply chain.
Create a Data Set You Can Trust.
A high match rate means nothing if you can’t trust your data.
Matchbook delivers over 3,000 valuable business attributes to meet your unique business operations, risk, or compliance needs, and then monitors the data 24/7 for changes.
Use Case #2: Integrating MDM, a lesson in manual mastering
Our client, a Fortune 100 company, struggled to harmonize the integrity of data from multiple subsidiaries spanning medical devices, pharmaceuticals, and consumer-packaged goods. To use the D&B D-U-N-S Number as their own global key, they needed to execute resource-intensive integrations and manual processing. Nothing was easy.
Master Data Management can get political. Individual business units within the project were attempting to match with DUNS on their own and were unwilling to cede control of their data points. This, coupled with differing enrichment requirements, created a huge risk of time and budget overruns.
This company was faced with a “build it or buy it” dilemma. Do they spend a year or more building the integration functionality only to have the requirements change and priorities shift, or do they buy a solution that can give them the functionality they need?
Matchbook Services delivered a plug and play solution they could easily integrate into each of their applications. With team-based data stewardship functionality, they can empower each business unit to match and enrich their own data.
Use the Right Tool For the Right Job
Giving up data control is daunting.
Matchbook facilitates the flow of data from every integration point in a bottom-up approach, empowering the stakeholders who know the data best through our comprehensive data stewardship toolbox.
Want to share these Use Cases with your organization? Download them here.
Dan Laverentz joined Matchbook Services this month as VP Customer Success. We couldn't be more excited, nor more confident, that he will be an incredible asset to the team.
Dan comes to us after six years as Director of Service Delivery at SolidQ North America. In that role, he gained critical acumen as a primary liaison between customer leadership and the delivery teams. That experience, coupled with 23 years of technical IT, project management, and business consulting work, has given Dan the insight into customer relationships that will ensure Matchbook delivers the solutions and service customers need to successfully integrate with commercial business data, as well as master their data.
“I am thrilled to bring Dan on board,” says Rushabh Mehta, Matchbook Founder and CTO. “We built Matchbook Services because we want to make sure that our customers can take advantage of the full potential of our solutions’ capabilities to meet their unique business challenges. Dan is highly adept at understanding customers’ unique pain points and delivering the right level of services and support. Dan is also highly experienced in building teams and processes, which will be invaluable to us as we grow our market footprint exponentially. Dan is absolutely the right person to ensure that Matchbook Services never wavers in its commitment to be a customer-focused company.”
“I’m honored to be joining the Matchbook Services team and looking forward to spearheading our efforts around customer success,” says Dan Laverentz. “It’s an exciting and unique opportunity to help ensure that we deliver the best possible product and service to our customers, and partner with them to achieve success in today’s data-driven world.”
“Dan’s addition to Matchbook Services’ team is part of the company’s plans to strengthen our leadership. His prior experience with many of the team members of Matchbook is beneficial and allows us to align from day one with the customer-focused approach that makes us a disruptive company,” shares Brenda McCabe, CEO.
Connect with our team on LinkedIn for the latest updates about Matchbook Services, or to book a call today.
Reference Based Mastering (RBM) makes use of reference sources like Dun & Bradstreet to match and master records for accuracy. By enabling data users to inform the process, the system is powerful and flexible. Users can set the rules they wish to use for their data, process it through the solution, and always end up with an accurate data set, unlike with traditional MDM.
Mastered data has long been considered the holy grail by CDOs and CIOs. It is no surprise that enterprises spend millions of dollars on software, services, and people in order to effectively deliver mastered data across the organization. The economic impact far exceeds a “feather in the cap” achievement and goes to the core of an organization’s competitiveness and agility in responding to changes in the business environment.
MDM solutions provide a robust approach to achieving mastered data. However, the conventional approach to MDM has pitfalls, well documented by the MDM community, that enterprises should heed when embarking on the journey. Key among them are data ownership, lack of knowledge of intent, and false positives in mastering.
The conventional approach to mastering data involves, among other things, the capability to match and merge data. The approach to match and merge has become far more sophisticated as MDM vendors provide new ways to match and merge with complex rules-based criteria and AI/ML based solutions.
In my conversations with organizations that are using these MDM capabilities to master their data, an unsettled feeling sets in when trying to balance receiving false positives, leaving too much data un-mastered, or dedicating ever more manual effort to mastering and stewarding this data. Regardless of the approach, these processes are tightly controlled by a central group within the enterprise rather than by the true owners of the data. This leads to lower adoption of mastered data within individual business units.
How do you democratize the process of mastering data so that data owners have more control over how their data is mastered?
For one, it involves a distributed governance model that gives owners the tools to master their own data. In my 20 years of experience in the data, analytics, and MDM space, a robust way to accomplish distributed yet controlled mastering is to use a common reference source to master data at the individual business units, then use the identifiers that reference source provides to master the data across the enterprise.
For business data (customers, accounts, vendors), an established reference source such as Dun & Bradstreet can provide the relevant identifier, such as the DUNS Number, that uniquely identifies an individual company or a location of a company. If you are a salesperson selling services to two different Starbucks locations on the same street a few blocks apart, you won’t run the risk of an MDM solution falsely merging those two customers into a single customer record purely on the basis of a match-merge capability. Instead, as the salesperson responsible for these two accounts, you match this data to Dun & Bradstreet’s reference data set and attach a unique DUNS Number to each record.
Reference sources don’t necessarily have to be external. They can also be an enterprise’s own MDM record identifier or an internal reference dataset that has been vetted for your organization.
Mapping your data to a reference data set in this way gives complete control to the business units and removes any ambiguity in mastering the record, whether from a single system or from across the organization, using that reference identifier. I call this approach Reference Based Mastering (RBM). The benefit at the business unit level is more control; at the organization level, it is a true “golden record” that is trusted across the organization. Another benefit of adopting RBM techniques is that no coordinated approach across the enterprise is required to start mastering the data: individual business units and systems can begin integrating with the reference source and start mastering their data right away.
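The mechanics of this approach can be sketched in a few lines of Python. Everything here is illustrative, not Matchbook's actual implementation: the reference table, the DUNS-style identifiers, and the crude normalization all stand in for a real reference source and a far more sophisticated matching engine. Each business unit attaches the reference identifier to its own records, and enterprise-wide mastering then reduces to grouping on that key:

```python
from collections import defaultdict

# Illustrative reference set: normalized name/address -> DUNS-style ID.
# A real reference source (e.g. Dun & Bradstreet) would be queried instead.
REFERENCE = {
    ("starbucks", "100 main st"): "06-989-3888",
    ("starbucks", "450 main st"): "15-048-3782",
}

def normalize(record):
    """Crude normalization; real matching is far more sophisticated."""
    return (record["name"].strip().lower(), record["address"].strip().lower())

def attach_reference_id(record):
    """Each business unit matches its own records to the reference source."""
    # None means unmatched -> route to a data steward, never force a merge.
    record["ref_id"] = REFERENCE.get(normalize(record))
    return record

def master_across_units(*unit_datasets):
    """Enterprise-wide mastering reduces to grouping on the reference key."""
    golden = defaultdict(list)
    for dataset in unit_datasets:
        for record in map(attach_reference_id, dataset):
            if record["ref_id"]:
                golden[record["ref_id"]].append(record)
    return golden

sales = [{"name": "Starbucks", "address": "100 Main St"}]
procurement = [{"name": "STARBUCKS", "address": "100 Main St "}]
masters = master_across_units(sales, procurement)
print(len(masters))                  # 1 golden record across both units
print(len(masters["06-989-3888"]))   # 2 source records linked to it
```

Note how the two Starbucks locations stay distinct because they carry different reference identifiers, while the same location seen by two business units collapses into one golden record.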
I will caution that Reference Based Mastering (RBM) techniques are a means to an end, not a replacement for MDM or the benefits that MDM solutions provide enterprises. I consider RBM a starting point for a broader MDM strategy.
Over the past few years, I have seen this approach succeed with companies large and small as they work on building their master data strategy and solution. I would love to hear your experience on your MDM journey and your thoughts on using Reference Based Mastering (RBM) techniques as a lead-in to a larger MDM strategy and implementation.
A primary goal of any data management strategy is to provide a clear understanding of the identities and key attributes of the many businesses that the organization engages. Optimally, this information is available across the enterprise, providing a 360 degree view of customers, prospects, suppliers, and other key relationships. The availability of high-quality data that provides this level of actionable insight restores trust in the data, increases its utilization and supports good business decisions.
But in the real world, holistic data management strategies take a back seat to the near-term needs of the functional teams that own their own data silos and enterprise applications. CRM, sales force automation, and spend management applications each have their own ways of building databases that are not easily shared.
Moreover, in the absence of a company-wide data stewardship discipline, these siloed databases tend to accumulate a profusion of redundant, unverified, and incomplete records. Users of a CRM application, for example, may skip search-before-create and simply create a new record for each contact who may be new to the sales rep but belongs to an organization that is not new to the company. Many duplication issues are a result of this kind of gap in data governance processes for data entry.
To make data non-redundant, structured, and usable, companies have two sources of information to leverage: their own internal data and the data that they acquire from external sources. Each of these is mission-critical to arriving at data that is trusted, complete, and actionable.
Data quality starts with data matching: the elimination of redundancies among records and the accurate linking of multiple contacts that are associated with a single business entity. Internal data is the foundation of inferential matching. Third-party data must then be leveraged to complete the process, using referential matching.
What is inferential data matching?
Inferential data matching is the process of working with a company’s data to identify multiple records that should be associated with a single business entity and either linking them, collapsing them into a single record, or creating pointers to indicate that they are contacts within a single organization. But this matching process can be fraught with errors due to inadequate data management, incomplete data validation processes, or the challenges of rationalizing data across multiple systems, each with its own sources of data entry and its own ways of structuring data.
Even within a single enterprise application, companies struggle with duplicated data resulting from error-prone data entry practices. In a CRM application, for example, rather than search-before-create, users may simply generate a new record rather than update or append an existing one. Resolving inferential matching issues requires aggregating data from multiple systems and determining which data components are duplicated, how duplications should be resolved, which data elements are valid, and which should be present in the enterprise’s single source of truth.
Inferential matching is a complex but necessary step toward clean, reliable data. Matching records that are alike or differ only slightly in order to discover duplicates is a hard problem, especially when executing against demanding objectives, such as standardizing an address in a specified way.
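To make the idea concrete, here is a minimal sketch of inferential matching using nothing but an edit-distance similarity from Python's standard library. The records, field names, and threshold are all assumptions for illustration; production matchers layer on field-specific standardization, phonetic and token-based comparisons, and rules-based or ML scoring:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Normalized edit-similarity between two strings (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def inferential_match(records, threshold=0.8):
    """Cluster records whose name + address are nearly identical.
    Each cluster represents one inferred business entity."""
    clusters = []
    for rec in records:
        key = f'{rec["name"]} {rec["address"]}'
        for cluster in clusters:
            ref = f'{cluster[0]["name"]} {cluster[0]["address"]}'
            if similarity(key, ref) >= threshold:
                cluster.append(rec)  # duplicate of an existing entity
                break
        else:
            clusters.append([rec])   # new entity
    return clusters

records = [
    {"name": "Acme Corp",  "address": "12 Oak Street"},
    {"name": "ACME Corp.", "address": "12 Oak St."},
    {"name": "Globex Inc", "address": "900 Elm Ave"},
]
clusters = inferential_match(records)
print(len(clusters))     # 2 — the two Acme variants collapse into one cluster
print(len(clusters[0]))  # 2 records inferred to be the same entity
```

The threshold is the crux: set it too high and duplicates slip through un-mastered; set it too low and distinct entities are falsely merged, which is exactly the tension described above.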
Inferential matching moves the enterprise closer to a 360-degree view of its contacts and commercial relationships by increasing the trustworthiness and usability of data.
With clear, deduplicated data, the sales and marketing teams can better assess risks and opportunities with each buyer and make business decisions based on a clear picture of relationships across all internal teams.
Join Matchbook Services’ Rushabh Mehta for a live webinar on May 25th, 2017 at 1 pm EDT. Learn about the data optimization tool set that Matchbook has developed in partnership with Dun and Bradstreet. Rushabh will point the way to the clean, trusted, enriched, high-quality data your teams need to build strategic analytics and grow process efficiencies. Click here to register.
ABOUT US: Matchbook Services provides data quality and data mastering solutions that offer comprehensive and multi-step inferential and referential data matching. It’s no surprise that data quality remains a key aspiration and critical need for organizations. Achieving and maintaining data quality requires processes, governance, and active oversight. The combination of trusted third-party pre-mastered data with the tools and technology that best fit your organization will set you on the right path.
Matchbook Services’ April release is full of enhancements, features, and bug fixes. Most exciting of all, Monitoring is now live! Matchbook Monitoring services now notify you when your data changes in the D&B database. You can choose to monitor all of your data or subsets of your records. See below for more details on this release:
- Real-Time Single Transaction API (Live API) – Enterprise edition users can now make real-time calls to Matchbook Services in order to get match and enrichment data. This feature is available for limited preview only and will be fully available to all enterprise customers by end of May. If you are interested in testing out this feature, please contact us.
- End User Licensing Agreement – When users log in, they will first be asked to accept the EULA before they can continue to use the portal. On the login page, as well as at the bottom of the portal, there is a link to the Licensing Agreement.
- REST APIs for Data Download – Users can now use REST APIs to download results from Matchbook Services for the Match Output as well as the Data Enrichment Output for Detailed Company Profile. To use this feature and get your API credentials, please contact our support team.
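For teams planning that integration, a download call through a REST API generally looks like the sketch below. The host, path, and auth scheme here are placeholders, not Matchbook's published endpoints; the actual values come with the API credentials from the support team:

```python
import json
import urllib.request

# Placeholder host: substitute the base URL provided with your credentials.
BASE_URL = "https://api.example-matchbook-host.com"

def download_results(job_id, api_key, output="match"):
    """Fetch a result set (e.g. Match Output or Detailed Company Profile).
    The /jobs/{id}/{output} path and bearer-token header are assumptions
    for illustration; confirm the real contract with Matchbook support."""
    req = urllib.request.Request(
        f"{BASE_URL}/jobs/{job_id}/{output}",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

Whatever the final endpoint shape, the pattern holds: authenticate each request, pull results per job, and parse the JSON payload into your downstream pipeline.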
- Show Tags – We now show data tags as tooltips when tags are present in the data
- Data Stewardship Statistics – Enhanced the pie chart on the dashboard to show actual counts by data steward in addition to percentages
- New count indicators on the Export Page – New indicators display count of records in the export queue
- Remove informational Tooltips – We have removed informational tooltips from the dashboard to make way for new help features that will be rolled out in April
- Multiple tags when adding new company records – Users can now add multiple tags to new records being added via the UI into the database
- Import/Export Additional Auto-acceptance rules – Users can now import new rules from Excel file or export rules out to Excel
- Manage Tags – Users can now manage tags from the Configuration Settings page
- Dashboard Refresh – Users can now click the new refresh icon on the dashboard to refresh its information
- Redirect on Logout – Users are now redirected on logout to Matchbook’s News and Releases page, where they can get the latest information from Matchbook Services
- About Us page – We added more licensing details to the About Us page for easy viewing
- Update standardized data in output when searching via the Clean Data/Search page
- Import Data Page – Change label “Text with Delimited” to “Custom Delimiter”
- Made the Monitoring Profile IDs and Notification Profile IDs received from D&B the unique identifiers in the configuration tables
- Data Import File Name – The correct file name is now stored based on the import process
- Fixed the Review Matches confidence code filter, which was not filtering data when no confidence codes were selected
- The SearchByDUNS feature now accepts only numbers
- Fixed the message box popup when accessing the base report
- Fixed an issue where the one-time verification code was not being emailed to first-time users
- Fixed an issue where the Investigation Report was not showing certain Investigation Remarks