UnitedHealthcare, the largest health insurance provider in the US, is using a wildly inaccurate AI algorithm called nH Predict to deny coverage to severely ill patients by cutting short the time they can spend in extended care, a new lawsuit alleges.
The suit, filed this week in the US District Court for the District of Minnesota, was brought by the estates of two deceased patients who were denied coverage by UnitedHealth. The plaintiffs argue that the insurer should have known how inaccurate its AI was, and that it breached its contract by using it anyway.
Their grievances are corroborated by a Stat News investigation into internal practices at UnitedHealth’s subsidiary NaviHealth, which found that the company forced employees to adhere unwaveringly to the algorithm’s questionable projections of how long patients could stay in extended care.
At least there was a silver lining in the boardroom: the penny-pinching AI reportedly saved the company hundreds of millions of dollars that it would otherwise have been forced to spend on patients’ care, according to Stat.
Though these denials are rarely appealed, around 90 percent of those that are get reversed, according to the lawsuit. That suggests the AI is egregiously inaccurate, and that by placing undue trust in it, UnitedHealth is scamming countless vulnerable patients out of their healthcare.
“If UnitedHealth is using [NaviHealth’s] algorithms as gospel… that’s not clinical decision-making,” Spencer Perlman, a healthcare markets analyst, told Stat. “That’s aggregating data and using an algorithm to make a decision that has nothing to do with the individual themselves.”
UnitedHealth fired back in a statement to Stat.
“The assertions that NaviHealth uses or incentivizes employees to use a tool to deny care are false,” it read. “Adverse coverage decisions are made by medical directors and based on Medicare coverage criteria, not a tool or a performance goal tied to any single quality metric.”
Documents and employee testimony, though, seem to corroborate the questionable decision-making behind UnitedHealth’s AI.
In one case, the nH Predict system allotted a mere 20 days of rehab to an older woman found paralyzed after a stroke, just half the average for similarly impaired stroke patients, according to Stat. An elderly, legally blind man with a failing heart and kidneys received a shockingly inadequate 16 days to recover.
What could be making nH Predict so wrong? It bases its projections on the lengths of stay of some six million previous patients in the company’s database. On its face, that may sound reasonable, but it means the AI inherits the errors and cost-cutting of those earlier decisions, and above all fails to account for the pressing clinical and practical circumstances of the individual patient.
“Length of stay is not some biological variable,” Ziad Obermeyer, a physician at the University of California, Berkeley, who researches algorithmic bias, told Stat.
“People are being forced out of the [nursing home] because they can’t pay or because their insurance sucks,” he added. “And so the algorithm is basically learning all the inequalities of our current system.”
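To see the dynamic Obermeyer describes in concrete terms, here is a minimal, purely hypothetical sketch in Python. The data, the condition labels, and the predict_stay function are all invented for illustration, and nothing here reflects nH Predict’s actual internals, which haven’t been made public; the point is simply that a model trained on historical lengths of stay will reproduce whatever non-clinical pressures shortened those stays.

```python
# A purely hypothetical illustration of "learning the inequalities of the
# system": the records, condition labels, and predict_stay() are invented.
from statistics import mean

# Imaginary historical records of (condition, days actually stayed).
# Suppose stroke rehab clinically warrants ~40 days, but several past
# stays were cut short by coverage denials or inability to pay.
history = [
    ("stroke", 38), ("stroke", 42),                   # stays that ran their course
    ("stroke", 21), ("stroke", 19), ("stroke", 20),   # stays cut short early
]

def predict_stay(condition: str) -> float:
    """Project length of stay as the historical average for a condition.

    Averaging past outcomes means systematic under-treatment in the
    records becomes the model's target, not the patient's clinical need.
    """
    return mean(days for cond, days in history if cond == condition)

print(f"Projected stroke rehab: {predict_stay('stroke'):.0f} days")
```

Run it and the projection comes out to 28 days, well under the 40 these hypothetical patients actually needed. The model isn’t judging recovery; it’s echoing the cut-short stays baked into its own training data.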
Yet UnitedHealth would only make its standards more extreme. In 2022, case managers were instructed to keep nursing home stays within three percent of the AI’s projections.
The following year, that margin was narrowed to less than one percent, effectively giving employees zero leeway. Case managers who missed the target were disciplined or fired, according to Stat.
“By the end of my time at NaviHealth I realized — I’m not an advocate, I’m just a moneymaker for this company,” Amber Lynch, a former NaviHealth case manager who was fired earlier this year, told Stat. “It’s all about money and data points,” she added. “It takes the dignity out of the patient, and I hated that.”
All told, it sounds like a grim example of how the seeming objectivity of AI can be used to cover up shady practices and exploit people at their most vulnerable.
More on AI: In Huge Upset, OpenAI Fires Sam Altman