Financial crime, fraud, waste, and abuse are surging in today's economy, and no industry is immune to the contagion. Healthcare is no exception.
The threat of healthcare fraud has only become more prevalent due to continued growth in the population of healthcare consumers, the increase in care delivered outside of traditional settings such as telehealth, and the rapid proliferation of health and wellness services. Moreover, as the Baby Boomer generation ages, the number of healthcare consumers — and thus the opportunities for fraud — is likely to increase even more in the next few years. Overall, opportunities to commit fraud, by parties both outside and within the healthcare network, have become more common, making it harder to distinguish good actors from bad.
The pandemic unleashed a torrent of fraudulent claims, driven (in part) by: the huge sums of money allocated by the federal government for testing, treatment, and economic subsidies; changes in employment patterns, resulting in people holding multiple jobs; and the remote work trend, which led to less stringent security measures by home-based workers. The result is a more sophisticated crop of fraudsters and fraud schemes, leaving companies exposed to heretofore unknown and unforeseen risks.
Regardless of the nature of the fraud, or the element of the healthcare ecosystem in which it occurs, the impact is significant. The National Health Care Anti-Fraud Association conservatively estimated that healthcare fraud costs the U.S. about $68 billion annually — about 3% of all healthcare spending in the country. Other estimates range as high as 10% of annual healthcare expenditure, or roughly $230 billion.1
With financial, regulatory, and reputational risks on the line, payers, providers, federal and state government agencies, and drug manufacturers must be vigilant about fraud risk management practices to prevent fraud and minimize impact.
AI and machine-learning (ML) technologies analyze vast amounts of data, making these systems extremely effective defenses in identifying and preventing fraud. Given that healthcare’s most widely used technology providers, such as Epic and Cerner, serve thousands of hospitals and payers and maintain healthcare data on hundreds of millions of patients, detecting potential fraud embedded in that data is essential.2
AI/ML can be applied to healthcare fraud detection and prevention in several ways. For example, AI/ML algorithms can analyze large volumes of healthcare data to identify patterns of both unintentional and intentional fraudulent activities, such as billing for services that were not provided or submitting duplicate claims. These algorithms can flag suspicious transactions for further investigation and help organizations detect fraud more quickly.
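One of the simplest patterns described above — catching duplicate claims — can be sketched in a few lines. The claim fields below (`patient_id`, `provider_id`, `procedure_code`, `service_date`) are illustrative, not a real claims schema; production systems would also handle near-duplicates and resubmission rules.

```python
from collections import defaultdict

def flag_duplicate_claims(claims):
    """Group claims on (patient, provider, procedure, service date);
    any group with more than one claim is flagged for review.
    Field names are hypothetical, for illustration only."""
    groups = defaultdict(list)
    for claim in claims:
        key = (claim["patient_id"], claim["provider_id"],
               claim["procedure_code"], claim["service_date"])
        groups[key].append(claim["claim_id"])
    return [ids for ids in groups.values() if len(ids) > 1]

claims = [
    {"claim_id": "C1", "patient_id": "P1", "provider_id": "PR9",
     "procedure_code": "99213", "service_date": "2023-04-01"},
    {"claim_id": "C2", "patient_id": "P1", "provider_id": "PR9",
     "procedure_code": "99213", "service_date": "2023-04-01"},
    {"claim_id": "C3", "patient_id": "P2", "provider_id": "PR9",
     "procedure_code": "99214", "service_date": "2023-04-02"},
]
print(flag_duplicate_claims(claims))  # [['C1', 'C2']]
```

Real deployments layer fuzzier matching and ML scoring on top of exact-key checks like this, but the grouping logic is the same starting point.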
AI/ML also can be used to develop predictive models that identify potential fraudsters or at-risk claims. These models can analyze patterns in data to predict which claims are most likely to be fraudulent, allowing organizations to take proactive measures to detect fraud.
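A predictive model of the kind described above can be illustrated with a toy logistic-regression trainer. The features here (a scaled billed amount and a count of prior flags) and the training data are invented for the sketch; a production claim-risk model would use far richer features and an established ML library.

```python
import math

def train_logistic(rows, labels, lr=0.1, epochs=500):
    """Tiny logistic-regression trainer (stochastic gradient descent),
    a stand-in for a production claim-risk model."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted fraud probability
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def risk_score(x, w, b):
    """Probability that a claim with features x is fraudulent."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy training data: [scaled billed amount, prior flags] -> fraud label
X = [[0.1, 0], [0.2, 0], [0.9, 3], [0.8, 2], [0.3, 0], [0.95, 4]]
y = [0, 0, 1, 1, 0, 1]
w, b = train_logistic(X, y)
print(round(risk_score([0.9, 3], w, b), 2))  # high score for a risky profile
```

Claims scoring above a chosen threshold would be routed to investigators rather than denied automatically, which is the "proactive measures" step the text describes.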
Additionally, AI/ML algorithms can identify unusual patterns — such as unexpected spikes in billing or atypical provider behavior — and flag these anomalies for further investigation. Similarly, AI/ML can analyze claims data for discrepancies, errors, and anomalies, flagging suspect claims for review and helping to stop fraud before it occurs.
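The billing-spike case can be sketched with a simple z-score check over a provider's monthly totals. This is a deliberately minimal stand-in for the statistical and ML-based anomaly detectors used in practice; the threshold and the sample figures are assumptions.

```python
from statistics import mean, stdev

def billing_spikes(monthly_totals, threshold=2.0):
    """Flag months whose billed total deviates from the provider's
    historical mean by more than `threshold` standard deviations."""
    mu = mean(monthly_totals)
    sigma = stdev(monthly_totals)
    return [i for i, total in enumerate(monthly_totals)
            if sigma > 0 and abs(total - mu) / sigma > threshold]

# Six typical months followed by a sudden tripling of billed volume
totals = [10200, 9800, 10500, 9900, 10100, 10300, 31000]
print(billing_spikes(totals))  # [6] -> the final month is flagged
```

An investigator would then examine the flagged month's individual claims, which is where the claims-level checks described above take over.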
Payers particularly stand to benefit from employing AI/ML to prevent fraud. Detection and prevention of payer fraud requires a combination of data analysis, investigation, and collaboration between providers, payers, and law enforcement agencies. AI/ML can assist payer programs by analyzing large volumes of data to identify patterns and anomalies that may indicate fraudulent activities.
Some of the more common vertical areas exposed to healthcare fraud are described below. In addition, both payers and providers may be subject to fraud in which perpetrators use another person's health insurance. AI/ML has effective applications in each of these areas.
The effectiveness of AI/ML is contingent on access to high volumes of quality and relevant data. In cases where access to real-world data is limited or restricted due to privacy concerns, synthetic data — artificially generated data that mimics the characteristics and patterns of real-world data — can be used to simulate, train, and test models in a controlled environment.
For example, if available real-world data is limited or biased, synthetic data can augment the dataset and increase its size and diversity. Data privacy is a critical concern in healthcare, of course, and regulations such as HIPAA can restrict sharing of real-world patient data. Synthetic data can be used to generate datasets that mimic the characteristics of real-world data, allowing researchers and data scientists to build and test models without accessing sensitive information.
Further, synthetic data can be used to simulate different scenarios and test the performance of models under varying conditions. This includes simulating fraud schemes, such as up-coding or billing for services not rendered, to test how well the models can detect and classify such fraud.
While synthetic data can be a valuable tool for building and testing AI/ML models for healthcare fraud detection, it is essential to ensure that the synthetic data is representative of real-world data and accurately captures its characteristics and patterns. This can be achieved through careful data-generation techniques and validation against real-world data to ensure the synthetic data is high quality and useful for model development.
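As a minimal sketch of the approach described above, synthetic claims can be generated from assumed distributions, with a known fraction simulating an up-coding scheme so a detector can be tested against labeled data. The procedure codes, billing ranges, and fraud rate below are all illustrative assumptions, not real-world figures.

```python
import random

random.seed(7)  # reproducible sketch

# Illustrative E/M procedure codes with assumed typical billing ranges ($)
PROCEDURES = {"99213": (75, 150), "99214": (110, 220), "99215": (160, 310)}

def synthetic_claim(fraudulent=False):
    """Generate one synthetic claim. Fraudulent claims simulate up-coding:
    the highest-level code billed at an inflated amount."""
    if fraudulent:
        code = "99215"
        lo, hi = PROCEDURES[code]
        amount = random.uniform(hi, hi * 2)  # inflated beyond normal range
    else:
        code = random.choice(list(PROCEDURES))
        amount = random.uniform(*PROCEDURES[code])
    return {"procedure_code": code, "billed_amount": round(amount, 2),
            "label": int(fraudulent)}

# Labeled test set: roughly 95% legitimate, 5% simulated up-coding
dataset = [synthetic_claim(fraudulent=(random.random() < 0.05))
           for _ in range(1000)]
print(sum(c["label"] for c in dataset), "simulated fraudulent claims")
```

Because no real patient records are involved, a dataset like this can be shared freely to train and validate detection models, then validated against (properly governed) real-world data before deployment.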
While AI and ML technologies are providing new and better tools to detect and prevent healthcare fraud, they can also be a double-edged sword: bad actors can leverage the same power to commit fraud at scale. For example, using natural language processing, bad actors can scan obituaries to assume the identities of the deceased, then submit forged medical expenses crafted with generative AI for reimbursement. It may take months or even years before Medicare or Medicaid systems are updated, resulting in thousands of dollars in fraud per day. The dual use of AI/ML to both commit and prevent fraud is a complex challenge that requires a comprehensive strategy with foundational people, process, and technology components.
Additionally, a major advantage of AI/ML is its ability to comb through large volumes of data quickly and precisely to identify fraud, limiting the hours employees spend reviewing cases manually — a source of tremendous potential cost savings. However, these systems do not eliminate the need for human oversight. AI/ML frees people to do more sophisticated, analytical work, but the technologies must be continuously monitored to ensure their enormous data-mining capacities lead to correct, actionable conclusions. Choosing the right AI/ML vendors and advisors, and implementing the AI/ML system effectively, are therefore important considerations.
The availability of AI and ML to address healthcare fraud could not come at a more critical time. A growing and aging population of healthcare consumers, the evolution of treatment beyond traditional settings, and continued increases in the financial resources allocated to healthcare are creating ever greater potential for fraud. In combating these new fraud threats, the weapons provided by AI/ML will be increasingly essential.
Co-authored by Ellen Zimiles and Rod Fontecilla.