OpenAI Launches ChatGPT Health, Which Ingests All of Your Medical Records, But Warns Not to Use It for “Diagnosis or Treatment”

OpenAI's ChatGPT Health will ingest your medical records to generate more relevant responses that shouldn't be acted upon.

AI chatbots may be explosively popular, but they’re known to dispense some seriously wacky — and potentially dangerous — health advice, contributing to a flood of easily accessible misinformation that has alarmed experts.

Their advent has turned countless users into armchair experts, who often end up relying on obsolete, misattributed, or completely made-up advice.

A recent investigation by The Guardian, for instance, found that Google’s AI Overviews, which accompany most search results pages, doled out plenty of inaccurate health information that could pose grave risks if followed.

But seemingly unperturbed by experts’ repeated warnings that AI’s health advice shouldn’t be trusted, OpenAI is doubling down by launching a new feature called ChatGPT Health, which will ingest your medical records to generate responses “more relevant and useful to you.”

Yet despite being “designed in close collaboration with physicians” and built on “strong privacy, security, and data controls,” the feature is “designed to support, not replace, medical care.” In fact, it’s shipping with a ludicrously self-defeating caveat: that the bespoke health feature is “not intended for diagnosis or treatment.”

“ChatGPT Health helps people take a more active role in understanding and managing their health and wellness — while supporting, not replacing, care from clinicians,” the company’s website reads.

In reality, users are certain to turn to it for exactly the kind of health advice that OpenAI warns against in the fine print, which is likely to bring fresh embarrassments for the company.

The feature will only heighten existing problems. As Business Insider reports, ChatGPT is “making amateur lawyers and doctors out of everyone,” to the dismay of legal and medical professionals.

Miami-based medical malpractice attorney Jonathan Freidin told the publication that people have been using chatbots like ChatGPT to fill out his firm’s client contact sheet.

“We’re seeing a lot more callers who feel like they have a case because ChatGPT or Gemini told them that the doctors or nurses fell below the standard of care in multiple different ways,” he said. “While that may be true, it doesn’t necessarily translate into a viable case.”

Then there’s the fact that users are willing to surrender their medical histories, including highly sensitive personal information, a practice OpenAI is now actively encouraging with ChatGPT Health, even though federal privacy laws like HIPAA don’t apply to consumer AI products.

Case in point: billionaire Elon Musk encouraged people last year to upload their medical data to his ChatGPT competitor Grok, leading to a flood of confusion as users received hallucinated diagnoses after sharing their X-rays and PET scans.

Given the AI industry’s spotty track record on privacy protection and its history of significant data leaks, these risks are as pertinent as ever.

“New AI health tools offer the promise of empowering patients and promoting better health outcomes, but health data is some of the most sensitive information people can share and it must be protected,” Center for Democracy & Technology senior counsel Andrew Crawford told the BBC.

“Especially as OpenAI moves to explore advertising as a business model, it’s crucial that separation between this sort of health data and memories that ChatGPT captures from other conversations is airtight,” he added. “Since it’s up to each company to set the rules for how health data is collected, used, shared, and stored, inadequate data protections and policies can put sensitive health information in real danger.”

“ChatGPT is only bound by its own disclosures and promises, so without any meaningful limitation on that, like regulation or a law, ChatGPT can change the terms of its service at any time,” Electronic Privacy Information Center senior counsel Sara Geoghegan told The Record.

Then there are concerns over highly sensitive data, like reproductive health information, being passed on to the police against the user’s wishes.

“How does OpenAI handle [law enforcement] requests?” Crawford told The Record. “Do they just turn over the information? Is the user in any way informed?”

“There’s lots of questions there that I still don’t have great answers to,” he added.

More on AI and health advice: Google’s AI Overviews Caught Giving Dangerous “Health” Advice


