Facebook owner Meta had some very awkward questions to answer after its AI assistant denied that former president Donald Trump had been shot, even though he was indeed wounded by a gunman earlier this month. Those bizarre, conspiracy-laden claims highlight the technology's glaring shortcomings, even with the resources of one of the world's most powerful tech companies behind it.
It’s especially striking, considering that Meta CEO Mark Zuckerberg called Trump’s immediate reaction to being shot “badass” and inspiring, contradicting his company’s lying chatbot.
In a blog post on Tuesday, Meta's global head of policy Joel Kaplan placed the blame squarely on the AI's tendency to "hallucinate," a convenient and responsibility-dodging synonym for bullshitting.
“These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward,” Kaplan wrote. “Like all generative AI systems, models can return inaccurate or inappropriate outputs, and we’ll continue to address these issues and improve these features as they evolve and more people share their feedback.”
But how much longer AI companies will be able to use "hallucinations" as an excuse for their systems' outright lies remains to be seen. Despite tech giants' best efforts, their much-hyped AI products continue to distort the truth with an astonishing degree of confidence.
The incident also highlights how much AI companies are struggling with AI-generated content during a chaotic and misinformation-fueled presidential election.
After an onslaught of criticism, Meta chose to silence its AI chatbot.
“Rather than have Meta AI give incorrect information about the attempted assassination, we programmed it to simply not answer questions about it after it happened — and instead give a generic response about how it couldn’t provide any information,” Kaplan wrote. “This is why some people reported our AI was refusing to talk about the event.”
Kaplan also said that the company had “experienced an issue related to the circulation of a doctored photo of former President Trump with his fist in the air, which made it look like the Secret Service agents were smiling.”
“Because the photo was altered, a fact check label was initially and correctly applied,” he added.
Unfortunately, the company's tech soon failed to distinguish between the doctored and undoctored photos, a potentially dangerous shortcoming given the prevalence of disinformation already circulating on social media.
“Given the similarities between the doctored photo and the original image — which are only subtly (although importantly) different — our systems incorrectly applied that fact check to the real photo, too,” Kaplan wrote.
Meta wasn't alone in struggling to get its AI systems to respond to quickly changing news. Google also had to do some damage control, denying that its signature Autocomplete feature was "censoring" news about the assassination attempt.
“Overall, these types of prediction and labeling systems are algorithmic,” Google’s communications team tweeted. “While our systems work very well most of the time, you can find predictions that may be unexpected or imperfect, and bugs will occur.”
Google has already struggled with its still-"experimental" AI search feature, "AI Overviews," coming up with a torrent of confidently stated lies in response to users' queries.
Meanwhile, Trump used the opportunity to take potshots at the tech industry.
“Facebook has just admitted that it wrongly censored the Trump ‘attempted assassination photo,’ and got caught,” he wrote in a characteristically rambling Truth Social post. “Same thing for Google. They made it virtually impossible to find pictures or anything about this heinous act.”
While there’s little evidence to suggest that Google and Meta are trying to “RIG THE ELECTION,” per Trump, the two companies have clearly failed to demonstrate that their AI features are ready for primetime.
And Kaplan’s comments shouldn’t come as a surprise. For quite some time now, we’ve heard AI companies use “hallucinations” to excuse their chatbots coming up with falsehoods.
But are they doing enough? Is this the best Meta can do to respond to a rapidly changing news cycle?
We’ve already heard from tech execs who’ve admitted that there’s a chance the issue of “hallucinations” will never be solved.
In an interview with The Verge earlier this year, Google CEO Sundar Pichai said that these lies could be an “inherent feature” of large language models.
Apple CEO Tim Cook similarly told The Washington Post last month that he would "never claim" with "100 percent" certainty that his company's AI wouldn't come up with confidently told lies.
Yet time is of the essence. The presidential race is well underway and we’ve already seen plenty of AI-generated disinformation being circulated online, from “deepfakes” to doctored images.
And as the technology continues to improve, the lines between what's real and what's fake will continue to blur, making it even more difficult for the likes of Meta and Google to slow the dissemination of disinformation on their platforms.
More on AI hallucinations: CEO of Google Says It Has No Solution for Its AI Providing Wildly Incorrect Information