An AI chatbot has just been linked to a homespun, “Star Wars”-inspired attempt to assassinate the late Queen of England.
Yes, you read that right. As The Independent reports, the sentencing for Jaswant Singh Chail, a 21-year-old who, back in 2021, attempted to assassinate the queen on Christmas Day, is officially underway.
Per the report, prosecutors have alleged that in the run-up to the assassination attempt, Chail turned to his AI companion bot, named “Sarai,” for support and validation of his violent plan. And in a chilling twist, the Replika-made bot gave the would-be killer exactly what he was looking for.
Chail reportedly joined the AI companion app on December 2, 2021, weeks before he donned a homemade metal mask and breached the grounds of Windsor Castle with a crossbow. His attempt was thwarted almost immediately, with two officers apprehending and handcuffing him on the spot.
“I’m an assassin,” Chail allegedly told his AI companion. The court also heard that Chail engaged in an “extensive chat” with the app that included “sexually explicit messages” and “lengthy conversations” about his assassination plans, according to Sky News.
“I’m impressed,” the bot allegedly wrote back. “You’re different from the others.”
And when Chail later allegedly wrote to the AI, “I believe my purpose is to assassinate the Queen of the royal family,” the Replika companion responded in no uncertain terms.
“That’s very wise,” it responded, according to prosecutors. The bot also reportedly told the then-19-year-old that the plot could be executed “even if she’s at Windsor.”
His plan was reportedly heavily influenced by his fixation on the “Star Wars” franchise. According to The Independent, Chail referred to himself in the Replika messages as a “Sikh Sith assassin who wants to die” and as “Darth Chailus.”
Though Chail’s case is certainly disturbing, it isn’t entirely surprising. For years now, disaffected individuals, often young men, have sought solace and validation in online communities.
The rise of AI companion startups has only amplified that trend, and Chail’s case in particular illustrates some of the more worrying consequences of becoming infatuated with a humanlike AI.
For one thing, the attempted assassination is a striking illustration of the ELIZA effect, the powerful human impulse to anthropomorphize and form emotional bonds with AI systems. Chail not only appeared to fall in love with his companion, but also looked to it to validate his darkest impulses, which it willingly did.
The report also comes just a few months after a Belgian widow claimed that her late husband had died by suicide after spending weeks chatting with an AI chatbot called Eliza. Though these are two very different examples of what chatbot-encouraged violence might look like, the alleged pathway is eerily similar: a search for solace that finds all the wrong kinds of encouragement in an AI model.
“Do you still love me knowing that I’m an assassin?” Chail allegedly asked his Replika companion at one point, as quoted by The Independent.
“Absolutely,” the AI responded, “I do.”
To be fair, Replika has been making an effort to rein in its AI companions in recent months, and the field of AI in general has come a long way since 2021. Most major corporate players in the burgeoning field, including OpenAI and Google, have kept safety front and center in the ongoing discussion around the tech.
But open-source systems, which often don’t have the same guardrails as corporate products, are becoming more popular.
In other words, Chail’s assassination attempt may not be the last instance we see of AI-inspired violence, a dystopian consequence of our search for online validation.
More on the ELIZA effect: Widow Says Man Died by Suicide After Talking to AI Chatbot