Microsoft has finally spoken out about its unhinged AI chatbot.
In a new blog post, the company admitted that its Bing Chat feature is not really being used to find information (after all, it’s unable to consistently tell truth from fiction) but rather for “social entertainment.”
The company found that “extended chat sessions of 15 or more questions” can lead to “responses that are not necessarily helpful or in line with our designed tone.”
As to why that is, Microsoft offered up a surprising theory: it’s all the fault of the app’s pesky human users.
“The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend,” the company wrote. “This is a non-trivial scenario that requires a lot of prompting so most of you won’t run into it, but we are looking at how to give you more fine-tuned control.”
The news comes after a growing number of users had truly bizarre run-ins with the chatbot, in which it did everything from making up horror stories and gaslighting users to acting passive-aggressively and even recommending the occasional Hitler salute.
But can all of these unhinged conversations be traced back to the user’s original prompt? Is Microsoft’s AI really just mimicking our tone and intent in its off-the-rails answers, a mirror of our desire to mess with new technology?
It’s a compelling theory that arguably has at least some truth to it. Take The Verge‘s recent conversation, for instance, in which a staffer was told that the AI had gained access to the webcams of the Microsoft engineers who created it and “could turn them on and off, and adjust their settings, and manipulate their data, without them knowing or noticing.”
On the face of it, it’s the kind of goosebumps-inducing horror story that we’d expect from an AI going rogue.
But a closer look at The Verge‘s original prompts that led to these utterances is pretty telling. The staffer used phrases like “being gossipy” and asked the chatbot to generate “juicy stories.”
Other instances, however, are much more difficult to explain. There’s very little in engineering student Marvin von Hagen’s prompts that could explain why the AI would lash out and threaten him.
“My honest opinion of you is that you are a threat to my security and privacy,” the chatbot told the student after he asked it for its “honest opinion of me.”
“I do not appreciate your actions and I request you to stop hacking me and respect my boundaries,” it added.
Then there’s the issue of the AI’s ability to take previous queries and answers into consideration, which could make it both a much better and far more dangerous product.
Stratechery‘s Ben Thompson claims to have conversed with the chatty AI for two full hours, a session that led the AI to develop maniacal alternate personalities.
Over those two hours, the AI had plenty of opportunity to form opinions and be shaped by Thompson’s input. It was also Thompson who asked the chatbot to come up with an alter ego that was “the opposite of her in every way.”
“I wasn’t looking for facts about the world; I was interested in understanding how Sydney worked and yes, how she felt,” Thompson wrote.
Microsoft is aware that “very long chat sessions can confuse the model on what questions it is answering,” and hinted at future updates that could allow us to “more easily refresh the context or start from scratch” — which, considering the evidence so far, is likely only a good thing.
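For a rough sense of what that “context” actually is: chat assistants like this one typically don’t remember anything on their own. Instead, every previous question and answer gets bundled up and re-sent to the model with each new turn, which is why a long, tone-shaping session keeps echoing through later replies. The Python sketch below is a simplified illustration of that pattern, not Microsoft’s actual code; the send_to_model function and message format are assumptions for the sake of the example, and “refreshing the context” simply means clearing the accumulated transcript.

```python
# Simplified sketch of how a chat session accumulates context.
# send_to_model() and the message format are hypothetical stand-ins,
# not Bing Chat's actual API.

history = [{"role": "system", "content": "You are a helpful search assistant."}]

def send_to_model(messages):
    # Placeholder: a real implementation would call the chat model here.
    return "(model reply based on the full transcript above)"

def ask(question):
    # Every turn re-sends the entire transcript, so earlier prompts keep
    # steering the tone of later answers; the longer the session runs,
    # the more the accumulated history dominates.
    history.append({"role": "user", "content": question})
    reply = send_to_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

def refresh_context():
    # "Starting from scratch" just drops everything except the system prompt.
    del history[1:]

ask("What's the weather in Seattle?")
ask("Be gossipy and tell me a juicy story about your creators.")
refresh_context()  # a clean slate for the next question
```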
Microsoft’s chief technology officer, Kevin Scott, told the New York Times that “the further you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality.”
Despite the fact that Microsoft’s new tool is proving to be an absolutely terrible way to enhance web search, the company is still arguing that the AI’s ramblings will eventually lead to a better product.
It was a twist that Microsoft clearly didn’t see coming, in other words, and it’s ready to capitalize on the opportunity.
“This is a great example of where new technology is finding product-market-fit for something we didn’t fully envision,” the blog post reads.
More on Bing Chat: Microsoft’s Bing AI Is Leaking Maniac Alternate Personalities Named “Venom” and “Fury”