As Microsoft’s Bing AI keeps making headlines for its increasingly bizarre outputs, one question has loomed large: is it okay?
Since its launch last week, some folks online have glibly taken to calling the Bing AI "ChatBPD," a mashup of ChatGPT, the OpenAI technology that powers it, and borderline personality disorder, a mental illness characterized by difficulty regulating emotions.
New York-based psychotherapist and writer Martha Crawford reviewed communications with the Bing AI and told us that yes, there’s some strange psychology at play here — but not in the way you’d think.
“This is a mirror,” Crawford told us, “and I think mostly what we don’t like seeing is how paradoxical and messy and boundary-less and threatening and strange our own methods of communication are.”
AI is only as good as the data it's trained on, which is often scraped from the internet, where people aren't exactly known for civil communication. As such, an AI's behavior often reflects back the strange and off-putting ways humans communicate with one another, and that, Crawford suspects, is what's on display in many of the startling responses the Bing AI has been spitting out.
Although chatbots are having a protracted moment in the media, Crawford has been thinking about these questions for a long time. Her late father-in-law was none other than Saul Amarel, a Greek-born AI pioneer whose work at Rutgers University laid the groundwork for the kinds of large language models we see today.
Topics like this were the stuff of dinner table debates when Amarel was still alive, Crawford told Futurism, and she'd often butt heads with her father-in-law over why humans would even want machines to replicate us when, as she puts it, we're so messed up already.
While she declined to "diagnose" Bing with any human mental illness, since it isn't a person with a brain or a mind, Crawford thinks that if the AI is being trained on social media data the way Microsoft's last chatbot was, it's likely just mimicking the kinds of outrageous things people say online.
Crawford said that she was particularly intrigued by one of Bing’s more extreme documented outbursts, in which it seemed to declare love for a New York Times columnist and try to break up his marriage.
“It reminds me of those people who fall into cultic relationships with somebody who keeps pressing their boundaries and keeps going [until] you don’t know who you are,” she said.
What she's describing isn't limited to intimate relationships; similar patterns show up in the language of some social media influencers, who sometimes use cult-like tactics to draw in and retain followers. In a particularly egregious example, a TikTok influencer named Angela Vandusen has been accused of turning her fans into self-harming devotees.
Beyond the extremes, though, Crawford said that the most interesting aspect of the AI chatbot story, to her, is less about the tech itself and more about the ways we're choosing to interact with these entities.
“We have long histories of archetypes of being very afraid of any kind of human simulacra that looks like it might have a soul or spirit in it,” the therapist explained. “This goes back to Pygmalion, that sculptor whose sculpture comes alive, or to Frankenstein. We make a human simulacrum and then we are upset when we see that it actually, you know, reflects back some of our worst behaviors and not just our most edifying.”
All that said, Crawford doesn't think Bing or any of the other AI chatbots out there are doing a particularly good job of mimicking human speech. But the fact that they do it well enough to freak us out is telling.
“Just the fact that we’re dialoguing with it automatically makes it uncanny,” she concluded.