Provider Data Accuracy: Stop Chasing 95%—Define What Accuracy is Worth to Your Business (Part 2)

This is the second in a two-part series on the importance of measuring accuracy when it comes to provider data, and the challenges of doing exactly that. If you haven’t read the first post, you may want to start there (you’ll find it linked at the end of this article).

 

In our original post on the topic, Aaron Beach, Orderly’s VP of Data and Engineering, dives into the vexing question of how vendors or business leaders can claim specific accuracy metrics when there is no clear standard against which to measure that accuracy. That is, you can’t grade a test if you don’t know the answers to the questions it asks.

 

In response to our first article, then, the question remains: What now?

 

For starters, it’s important to establish a shared understanding of what anyone means when they talk about “accuracy.” If a vendor claims that they can achieve “95% accuracy,” you should immediately ask: “95% according to what? Measured how? Against which verification method? With what assumptions?”

 

Being armed with a bit of skepticism — and the ability to challenge claims that others might be making — will pay dividends to any leader looking to evaluate competing claims. Healthy skepticism is also generally good practice for responding to anyone trying to sell you something. 

 

Going a step further, there is a deeper, more valuable insight to be gained from Aaron’s explanation: accuracy is actually a distraction when it comes to making decisions for your business. The real issue is whether you have defined what accuracy is worth to your organization — operationally, financially, and legally.

 

Accuracy is not a vanity metric, per se. But accuracy itself cannot be the only, nor even necessarily the primary, consideration. As with all things in business and engineering, there are tradeoffs. Because engineering and financial resources are not infinite, you cannot optimize for accuracy alone without also considering the costs in terms of time, money, security, and other factors that might be crucial to your business.

 

The more strategic organizations recognize these tradeoffs. Those same organizations also generally have the most success engaging with the Orderly platform. 

The Mindset Divide: Defining Tradeoffs

When we speak with health plans, provider networks, and large healthcare organizations, we tend to see two distinct mindsets:

1. The Vanity Metric Organization

This organization asks vendors a single question:

 

“Can you get us to [ARBITRARY NUMBER]% accuracy?”

 

The answer is almost always “yes,” but then the conversation gets a bit more difficult when you have to explain the steeply increasing investment required for each marginal step up in accuracy.

 

When pressed further, a few things become clear:

  • These organizations cannot quantify the cost of inaccurate provider data.
  • They cannot define what level of accuracy would be defensible in an audit.
  • They cannot articulate the marginal benefit of moving from 80% to 85%.
  • They assume “higher is better,” without asking “at what cost?”

 

Representing a vendor in these conversations is a delicate dance. We always want to help, but as a business we can’t give away our services for free (despite popular perception, even Meta and Google don’t do that). At the same time, it’s hard to challenge a potential customer even when you think they might be going down the wrong path. When we try to steer a customer toward what we consider a more successful path, we sometimes bear the brunt of their resentment for explaining that what they claim to want might actually harm their business.

 

As a result, we regularly observe prospective customers turning to others who are all too willing to make false promises, knowingly or not, only to have that same organization circle back years later when they are back on the market looking for someone else who can “actually deliver.”

 

It’s hard to watch. To avoid these scenarios, we’ve tried to direct our attention to organizations with the second mindset we see: 

2. The Strategic Organization

Through pattern recognition, we’ve gotten better at identifying these more strategic organizations based on the different types of questions they ask early in the process: 

 

  • What does “defensible” look like?
  • What does inaccuracy cost us today?
  • How is accuracy measured?
  • What is the marginal cost of each additional percentage point of accuracy?
  • Are there other metrics we should be paying attention to besides accuracy?

 

The questions above underscore a deeper understanding of the nuance of their business. No one metric can stand alone. Each is interconnected and comes with its own set of limitations and consequences to consider. 

 

Organizations asking these questions understand that accuracy is not static. Data decays. Verification methods have error rates. Every improvement has a cost curve.

 

World-class organizations do not chase percentages. They define thresholds.

 

If you’re reading this and you get that sinking feeling in your stomach that your organization carries the first mindset rather than the second, take heart. Just like data, organizations aren’t static. Companies are not “mature” or “immature” by default. Sophistication and maturity in decision-making are skills that can be cultivated and practiced. Change in business is constant, and organizations capable of adaptation are best positioned for long-term success.

 

The first step is recognizing the need to improve provider data. From there, you need to have two conversations to determine how.

Conversation One: The Compliance Threshold

The first conversation is foundational. 

 

Don’t start with an arbitrary or hypothetical target. Instead, connect with your legal or compliance leader to have a candid discussion. 

 

Example questions to ask:

  • What level of provider data accuracy would be defensible if we were audited?
  • Are regulators expecting perfection, or do they just need to see demonstrable progress?
  • If we could show documented methodology, confidence scoring, and ongoing measurement, would that satisfy scrutiny?
  • What evidence would you need to feel comfortable defending our posture?

In the process, you’re likely to discover a key insight: compliance is rarely about perfection. It is about defensibility. 

 

There is a meaningful difference between:

  • “We believe our data is accurate.”

and,

  • “We can demonstrate our methodology, our verification processes, our confidence estimates, and our ongoing investment in maintaining them.”

 

If your current provider data accuracy is 65%, and you do nothing, that is a posture of risk. But what happens if you can demonstrate:

  • 75% with documented verification;
  • 80% with ongoing measurement; or
  • 85% with continuous decay management?

 

At each step, cost increases. The question is not “Can we reach 95%?” The question is “What level is defensible, and what does it cost to get there?”

 

If you walk away with guidance that “any number below X% accuracy is unacceptable!”, your immediate response should be to ask: 

  • Why? 
  • What are the implications of falling below that number?
  • How is that threshold calculated, and are we currently measuring our accuracy the same way?

 

Anything that cannot be rigorously defined cannot be rigorously defended. Anyone charged with making purchasing decisions at an organization should be able to stand before the Board and defend their decision with confidence. And nothing confers confidence like having the data to back up your decision. 

Conversation Two: The Cost of Inaccuracy

The second conversation is operational.

 

Now that you’ve established the what and the why, it’s time to move on to the how. Sit down with the Head of Claims, Provider Operations, or Network Management.

Ask straightforward questions that have clear, unambiguous answers:

  • How many FTE hours are spent correcting inaccurate provider data?
  • How often are claims delayed or reworked because of incorrect information?
  • How many calls are triggered by directory errors?
  • How often does inaccurate data create member abrasion or provider dissatisfaction?
  • What is the cost per manual correction?

 

Many organizations “know” that inaccurate provider data is a problem. Few can quantify it. This matters. If you cannot measure the cost of inaccuracy, you cannot rationally justify paying for higher accuracy.
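Even rough answers to the questions above can be combined into a back-of-envelope annual cost model. The sketch below is purely illustrative; every record count, rate, and dollar figure is an assumption to be replaced with your own numbers, not a benchmark:

```python
# Back-of-envelope model for the annual cost of inaccurate provider data.
# Every figure used below is an illustrative assumption -- substitute your own.

def annual_cost_of_inaccuracy(
    records: int,                # provider records under management
    error_rate: float,           # fraction of records with at least one error
    correction_rate: float,      # fraction of erroneous records corrected per year
    cost_per_correction: float,  # fully loaded cost of one manual correction ($)
    claims_rework: int,          # claims reworked per year due to bad data
    cost_per_rework: float,      # cost of reworking one claim ($)
) -> float:
    corrections = records * error_rate * correction_rate
    return corrections * cost_per_correction + claims_rework * cost_per_rework

# Hypothetical plan: 100k records, 20% error rate, half of the bad records
# corrected each year at $25 per fix, plus 5,000 reworked claims at $40 each.
total = annual_cost_of_inaccuracy(100_000, 0.20, 0.50, 25.0, 5_000, 40.0)
print(f"${total:,.0f} per year")  # -> $450,000 per year
```

Even a crude model like this turns “we know it’s a problem” into a number you can weigh against the price of improvement.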

 

Then push further:

  • What happens if we improve accuracy by 5%? By 10%?
  • How much rework would we eliminate?
  • How much time could we save working on this problem?
  • Would our NPS scores improve?
  • Could we reduce compliance exposure?
  • Would we reduce headcount?
  • Could we redeploy resources elsewhere in the organization?

 

You may discover that moving from 65% to 75% creates substantial operational savings. You may also discover that moving from 85% to 90% creates almost none.

 

Accuracy has a diminishing returns curve. The last five percentage points are almost always the most expensive — and frequently the least economically meaningful.

 

If you want to invest wisely, you must understand where that curve bends.
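Finding where the curve bends is simple once you have estimates: compare the marginal cost of each step up in accuracy to its marginal benefit, and stop investing where cost overtakes benefit. The dollar figures in this sketch are invented purely for illustration:

```python
# Sketch: locate where the accuracy cost curve bends by comparing the
# marginal cost of each step to its marginal benefit. All dollar figures
# below are invented for illustration only.

# (accuracy %, cumulative cost to reach it, cumulative annual savings)
steps = [
    (65, 0, 0),
    (75, 100_000, 400_000),
    (85, 300_000, 650_000),
    (90, 700_000, 680_000),
    (95, 1_500_000, 690_000),
]

results = []
for (a0, c0, s0), (a1, c1, s1) in zip(steps, steps[1:]):
    marginal_cost = c1 - c0
    marginal_benefit = s1 - s0
    results.append((a0, a1, marginal_benefit > marginal_cost))
    print(f"{a0}% -> {a1}%: cost ${marginal_cost:,}, "
          f"benefit ${marginal_benefit:,}, worth it: {marginal_benefit > marginal_cost}")
```

With these assumed numbers, the curve bends at 85%: the jump from 65% to 75% pays for itself four times over, while the jump from 90% to 95% costs $800,000 to recover $10,000.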

When It Makes Sense to Invest — and When It Doesn’t

At Orderly, we start almost every introductory or discovery call with a candid discussion about whether or not we can help. We wouldn’t be in business if we didn’t believe that we were providing a valuable and much-needed service to the world. More importantly, we wouldn’t be in business if our customers didn’t just believe in the value of the services we provide them, but were also able to calculate and quantify that value on an ongoing basis.

 

The simple fact is, it may not make sense for your business to invest in solutions to improve your provider data. 

 

To make it easier for you to determine for yourself whether a provider data solution might make sense, we’ve developed the following cheat sheet: 

 

Investing in provider data accuracy likely makes sense if:

  • You have defined a defensible compliance threshold.
  • You can quantify operational waste tied to inaccuracy.
  • You have executive sponsorship willing to treat accuracy as an ongoing discipline, not a one-time cleanse.
  • You understand that data decay requires continuous measurement and reinvestment.

 

It likely does not make sense if:

  • You are chasing a marketing number.
  • You believe a vendor can “guarantee” 95% without transparent methodology.
  • You cannot quantify the compliance or operational impact.
  • You expect a single project to permanently solve a dynamic problem.

 

If the first list of bullets seems a bit daunting, the good news is you’re not alone. Many, if not most, organizations are not going to know exactly where they stand or be able to answer all these questions on day one. The simple act of asking the questions is often the first push needed to get you started and a good indication that you might be ready to work with an outside vendor. 

 

And if you don’t have all the answers and don’t even know where to start, Orderly can help with that as well. We are comfortable asking the hard questions when it might be unpopular or risky to do so yourself, and we have a team of data engineers more than capable of providing guidance on where to start looking.

 

The goal isn’t to have your house in order before the cleaners arrive. You just need to know where you want them to start when they get there.

A New Mindset: Accuracy as a Managed Asset

Based on everything we’ve shared so far, you might be tempted to think that measuring accuracy at all is a fruitless exercise. That would be missing the point. The goal isn’t to stigmatize or marginalize any one metric. Rather, we intend to place accuracy in its proper context among the many other metrics and milestones that might be impactful to your business, and to show that the most sophisticated organizations have stopped thinking about accuracy as a single number.

 

Instead, they treat provider data accuracy as a managed asset:

  • Field-specific (addresses decay differently than specialties).
  • Time-sensitive (a verified phone number today is not equally reliable in six months).
  • Continuously measured.
  • Bounded by observable agreement rates.
  • Transparent about assumptions.

 

In this model, confidence scores become more useful than static percentages. Not because they inflate results, but because they explicitly acknowledge uncertainty and decay. A confidence score that can be traced to verification methodology, agreement rates, and time decay is far more defensible than a flat “95% accuracy.”
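As a sketch of what such a traceable score might look like: start from the verification method’s observed agreement rate and decay it with time since the last verification. The half-lives and rates below are illustrative assumptions, not Orderly’s actual model:

```python
# Sketch of a decaying confidence score: confidence starts at the
# verification method's observed agreement rate and decays exponentially
# with time since verification. Half-lives here are assumptions chosen
# to show that different field types go stale at different speeds.

HALF_LIFE_DAYS = {
    "phone": 180,      # phone numbers go stale quickly (assumed)
    "address": 365,    # practice addresses change less often (assumed)
    "specialty": 1460, # specialties are comparatively stable (assumed)
}

def confidence(field: str, agreement_rate: float, days_since_verified: int) -> float:
    """Confidence in a field value, given the verification method's
    agreement rate and the time elapsed since it was last verified."""
    half_life = HALF_LIFE_DAYS[field]
    decay = 0.5 ** (days_since_verified / half_life)
    return agreement_rate * decay

# A phone number verified 180 days ago by a method with 92% agreement
# has dropped to half its initial confidence:
print(round(confidence("phone", 0.92, 180), 2))  # -> 0.46
```

A score built this way can be traced back to its inputs, which is exactly what makes it defensible: you can show an auditor the agreement rate, the verification date, and the decay assumption behind every number.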

 

It is also more honest.

 

At the end of the day, you cannot have perfect accuracy, zero cost, and zero risk at the same time. That is not how engineering works. That is not how economics works. And it is not how compliance works.

 

The question isn’t “How do we achieve the greatest accuracy, period?”

 

Rather it’s a series of questions: 

  • What level of accuracy is defensible?
  • What level of inaccuracy is costing us money?
  • What is the marginal cost of improvement?
  • Where does return diminish?
  • And are we measuring this rigorously?

 

Organizations that win in provider data are not the ones chasing perfection. They are the ones who understand what accuracy costs, and balance those costs against what accuracy is worth to them.

 

If you have not had the two conversations outlined above, start there.

 

If you have, and you can clearly define the threshold you need and the economic value it creates, then investing in better provider data may make sense.

 

If you cannot, pause for a moment. Take a step back, and ask yourself any number of the many questions outlined in this article. Remind yourself that the goal is not to buy a number.

 

The goal is to build a defensible, economically rational, continuously measured accuracy posture — one that you can explain to your board, defend to regulators, and justify to your finance team.

 

It is a different standard. And it is a much more durable one.

Want to understand the math behind these ideas?

Read Part 1 of this series from Orderly VP of Data and Engineering Aaron Beach, where he breaks down the technical challenges of measuring provider data accuracy—and why many industry accuracy claims don’t hold up under scrutiny.

About Our Guest Author:

Kevin Krauth is the CEO and co-founder of Orderly Health, a provider data management platform recently acquired by First Choice Health. Prior to Orderly, Kevin has worked in Corporate M&A with Blackstone, helped scale a solar financing startup to exit, and built out data infrastructure at Electronic Arts. He holds a BA in Public Policy and Economics from Duke, and a certificate in Machine Learning from MIT Sloan.

Enjoyed this content and hungry for more?

Subscribe now to receive the monthly Orderly newsletter directly in your inbox, packed with insights and updates to keep you ahead of the curve in the world of healthcare.