Even OpenAI’s latest AI model is still capable of making idiotic mistakes: after billions of dollars in funding, the model still can’t reliably count how many times the letter “r” appears in the word “strawberry.”
And while “hallucinations” — a conveniently anthropomorphizing word used by AI companies to denote bullshit dreamed up by their AI chatbots — aren’t a huge deal when, say, a student gets caught with wrong answers in their assignment, the stakes are a lot higher when it comes to medical advice.
The communications platform MyChart handles hundreds of thousands of messages between doctors and patients every day, and the company recently added an AI-powered feature that automatically drafts replies to patients’ questions on behalf of doctors and their assistants.
As the New York Times reports, roughly 15,000 doctors are already using the feature, despite the risk that the AI will introduce dangerous errors.
Case in point: UNC Health family medicine doctor Vinay Reddy told the NYT that an AI-generated draft message reassured one of his patients that she had gotten a hepatitis B vaccine, even though the system never had access to her vaccination records.
Worse yet, the new MyChart tool isn’t required to divulge that a given response was written by an AI. That could make it nearly impossible for patients to realize that they were given medical advice by an algorithm.
The tool relies on a version of GPT-4, the OpenAI large language model that powers ChatGPT, and pulls in data from sources including the patient’s medical records and prescriptions.
The tool even attempts to simulate the “voice” of the doctor, making it all the more insidious.
“The sales pitch has been that it’s supposed to save them time so that they can spend more time talking to patients,” Hastings Center bioethics researcher Athmeya Jayaram told the NYT. “In this case, they’re trying to save time talking to patients with generative AI.”
Critics worry that even though medical professionals are supposed to review these drafts, mistakes introduced by the AI could still slip through the cracks.
There’s plenty of evidence that could already be happening. In a July study, researchers found “hallucinations” in seven out of 116 AI-generated draft messages by MyChart’s tool.
While that may not sound like a lot, even a single error could have disastrous consequences.
A separate study found that GPT-4 repeatedly made errors when tasked with responding to patient messages.
Some patients may never find out that they’re getting advice from an AI. There are no federal rules about messages needing to be labeled as AI-generated.
“When you read a doctor’s note, you read it in the voice of your doctor,” Jayaram told the NYT. “If a patient were to know that, in fact, the message that they’re exchanging with their doctor is generated by AI, I think they would feel rightly betrayed.”
More on medical AI: An AI Event Hired John Mulaney to Do a Comedy Set and He Brutally Roasted Them Onstage