It would be OK if Microsoft repeated this part of its AI history, tbh.
Shake It Off
With Microsoft’s new Bing AI and other artificial intelligence software becoming the biggest news story of 2023 thus far, it’s easy to forget that AI chatbots have been around for a long time, and that Microsoft’s old one, Tay, is practically the blueprint for an out-of-control algorithm.
It was March 2016, back when Donald Trump’s presidency still seemed impossible and the concept of a global pandemic causing society to shut down still felt like science fiction. Towards the end of that month, Microsoft unveiled “Tay,” short for “thinking about you,” as a youthfully voiced Twitter chatbot under the handle @Tayandyou.
The legend of Tay follows a now-familiar pattern; indeed, it’s one of the archetypal examples of a company losing control of cutting-edge AI. Because the chatbot openly interacted with Twitter users, trolls were able to manipulate it into spewing bigotry at record speed. Just 16 hours after its release, Microsoft had to pull the plug on Tay, though her story lives on in infamy.
Not Again
Compared to the AI chatbots of today, Tay’s tweets are basically cave paintings. But perhaps the biggest difference between Tay and Bing Chat (or Sydney, depending on who you ask), the company’s latest large language model, which has already garnered scores of headlines for its bizarre responses, lies less in the technology’s capabilities than in Microsoft’s response.
To be clear, Bing/Sydney doesn’t appear to have gone full Heil Hitler just yet, but in its week-long life, it’s already been shown to spit out lists of ethnic slurs, threaten users, and act all-around erratically. It’s not hard to imagine that if Microsoft weren’t so eager to dive into the AI marketplace, it would have put the kibosh on this AI, too.
For some reason, however, Bing is still live and still behaving really, really strangely. As bad as Tay became, at least she was euthanized swiftly.
More on bad AI: Microsoft: It’s Your Fault Our AI Is Going Insane