Avoiding Liability in AI Healthcare Decision-Making
Over a half dozen states have already enacted legislation regulating the use of artificial intelligence (AI) in healthcare decision-making. The New York State Legislature, too, is now considering several bills that would limit AI use in healthcare and insurance decision-making (A1456, S7896/A8556, A3991), though none has been signed into law thus far.
For this reason, it is essential that physician groups and their attorneys keep a close eye on this developing legislation and the relevant precedent. Even though New York has no specific regulations in place yet, physician groups and other healthcare providers face potential liability when AI is used in decision-making and claims processes, particularly when the technology is found to conflict with existing policies and contracts.
Recent Litigation on AI-Enabled Healthcare Claims Processing
Three ongoing cases highlight the risks that healthcare providers and insurers face when using AI models and algorithms for decision-making.
- Kisting-Leung, et al. v. Cigna Corp., et al. (2023) – Three California plaintiffs filed a putative class action lawsuit against Cigna over claims they allege were denied through use of the company’s PXDX algorithm. Rather than invoking any AI-specific statutes, the plaintiffs asserted that use of the algorithm contradicted the company’s own terms requiring review by a medical director. While some of the plaintiffs’ initial claims have been dismissed, the case remains pending as the court considers claims for breach of fiduciary duty under 29 U.S.C. § 1132(a)(3) and alleged violations of California’s Unfair Competition Law (UCL).
- Estate of Lokken v. UnitedHealth Group, Inc., et al. (2023) – Similar to the Kisting-Leung case, the plaintiffs allege that UnitedHealth denied claims using its own AI model, nH Predict. The model purportedly told providers when post-acute care for Medicare Advantage patients should be cut off and offered generic recommendations that did not take a patient’s individual circumstances into account. The case has further mirrored Kisting-Leung in that some of the plaintiffs’ claims have been dismissed because of protections for insurers under the Medicare Act. Whether UnitedHealth violated its own policies by allowing an AI algorithm to make decisions that are required to be made by clinical staff or physicians will be considered under Minnesota state law.
- Barrows et al. v. Humana, Inc. (2023) – This case, filed in Kentucky federal court, also alleges wrongful denial of Medicare claims due to Humana’s use of the same nH Predict model at the center of the UnitedHealth case.
Since these cases were filed, California and several other states have enacted laws regulating the use of AI in processing healthcare claims and coverage determinations, most of which include provisions requiring individual review of cases by qualified human medical professionals. Nevertheless, the primary question posed by these lawsuits is not whether AI should be used in determining health claims, but whether its use violates the company’s existing operational terms.
How Physician Groups and Insurers Can Avoid Liability
In New York, physician groups and insurance companies must weigh potential liability if they choose to employ AI technology in their decision-making and claims processes. Public distrust of AI in highly sensitive matters like healthcare is a risk factor for litigation even when companies have taken steps to use the technology responsibly, transparently, and in concert with human decision makers.
In the absence of governing law, companies must first ensure that their policies and contracts allow for and regulate the use of AI, so they are not subject to the same legal challenges now faced by Cigna, UnitedHealth and Humana. In the long term, New York’s position as a forerunner in AI regulation suggests that physician groups and other healthcare companies should prepare for stricter governance. While 2025’s RAISE Act did not directly affect the healthcare sector, its passage established New York as a leader in AI regulation, and that leadership is likely to extend to the healthcare industry going forward.
Physician groups, insurance companies and other healthcare providers should secure legal counsel to answer questions about AI liability and review their current policies and procedures to shield themselves against potential litigation.
Bleakley Platt & Schmidt’s Health Care Litigation and Health Law practice groups are ready to guide you through this evolving set of challenges. Contact us today to schedule a consultation.