OpenAI Employee Says She’s Never Tried Therapy But ChatGPT Is Pretty Much a Replacement For It

“This is probably it?”

TheraGPT

A senior OpenAI employee opened a veritable can of worms this week when she claimed that the latest version of ChatGPT, which now has voice recognition capabilities, is akin to talking with a human therapist — even though, as she admitted, she’d never actually done therapy herself.

“Just had a quite emotional, personal conversation [with] ChatGPT in voice mode, talking about stress [and] work-life balance,” Lilian Weng, OpenAI’s head of safety systems, posted on the site formerly known as Twitter, adding that she felt “heard” and “warm” following the conversation.

“Never tried therapy before but this is probably it?” Weng continued. “Try it especially if you usually just use it as a productivity tool.”

The response to the AI safety worker’s suggestion — which she admitted was just her “personal take” — was swift and furious.

“This is not therapy,” one user posted, “and saying it is is dangerous.”

“Your personal take is not the right take for anyone in your position,” another added.

Long History

As outlandish as it is for a senior OpenAI safety employee to suggest that ChatGPT could be used in a therapeutic capacity, the concept of AI therapists actually goes way, way back to the swinging 60s, when a pioneering computer scientist created a dirt-simple chatbot called ELIZA, which would basically flip user queries back with questions of its own, the way therapists are wont to do.

Released in 1966 by MIT professor Joseph Weizenbaum, ELIZA was intended to showcase the superficiality of human-machine conversation but became the unwitting source of the now-famous “ELIZA effect,” in which people ascribe human qualities to machines that are capable of interacting with them.

“It’s essentially an illusion that a machine that you’re talking to has a larger, human-like understanding of the world,” Margaret Mitchell, chief ethics scientist at the AI company Hugging Face, told the startup website Built In earlier this year. “It’s having this sense that there’s a massive mind and intentionality behind a system that might not actually be there.”

While ELIZA was ahead of its time, so-called AI therapists have experienced something of a renaissance over the past decade. In 2015, researchers at the University of Southern California released “Ellie,” a chatbot with a virtual humanoid avatar that was trained to detect facial expressions and, purportedly, tell how the humans it interacted with were feeling.

In the ensuing years, there have been a few other attempts at launching AI therapists, but ChatGPT seems poised to change the therapy chatbot game — much to the chagrin of many human mental health professionals.

Weng is far from the first person to feel that they’ve gained some therapeutic benefit from ChatGPT, and she undoubtedly won’t be the last. But given both her position in AI safety and her inexperience with actual therapy, it feels like she got way out over her skis on this one.

More on AI: Google’s Search AI Says You Can Melt Eggs. Its Source Will Make You Facepalm
