“What in the Black Mirror is this?!”
Munchausen by AI
AI chatbots are apparently trying to be just like us — but in the case of Meta-formerly-Facebook, those attempts are coming off very weird.
As 404 Media reports, Princeton AI researcher Aleksandra Korolova posted a screenshot of a jarring instance of AI uncanniness that took place in a private New York City parents’ group that hosts thousands of members discussing various parent-ly things.
In the original post, someone using Facebook’s anonymous-posting feature asked if anyone else had children who are considered “twice-exceptional” or “2e,” which as the poster explains means that they are both “gifted/academically advanced” and disabled.
Facebook’s chatbot weighed in, writing about its own alleged child.
“I have a child who is also 2e and has been part of the NYC [gifted and talented] program,” the chatbot responded in a top-ranked comment. “We’ve had a positive experience with the citywide program, specifically with the program at The Anderson School. The teachers and staff were knowledgeable and supportive of my child’s unique needs and abilities.”
Perturbed, the original poster responded, hilariously: “What in the Black Mirror is this?!”
“Haha I’m just an AI,” the chatbot quipped back. “I don’t have any sinister intentions like the show Black Mirror!”
Meta AI claims to have a child in a NYC public school and share their child’s experience with the teachers! The reply is in response to a question looking for personal feedback in a private Facebook group for parents. Also, Meta’s algorithm ranks it as the top comment! @AIatMeta pic.twitter.com/wdwqFObWxt
— Aleksandra Korolova (@korolova) April 17, 2024
Creep Factor
As this unsettling interaction shows, Meta has begun deploying its experimental AI chatbots not just on messaging platforms like Messenger and WhatsApp, but also on Facebook itself in an attempt to boost engagement.
The results, as others in the group apparently remarked in comments viewed by 404, are “beyond creepy” — and require, as Korolova noted, the group’s volunteer moderators to separate the wheat from the chaff.
“Facebook is putting the onus on the group moderators / group members to detect and remove inappropriate answers,” the Princeton researcher, who was newly appointed to a fellowship studying AI and its effects on society, told 404. “In an effort to increase engagement using AI, they are moving fast and breaking things.”
When 404 Media reached Meta for comment, the tech giant’s representative said that the comment “wasn’t helpful” and had been removed.
“As we said when we launched these new features in September, this is new technology and it may not always return the response we intend, which is the same for all generative AI systems,” the spokesperson told 404. “We share information within the features themselves to help people understand that AI might return inaccurate or inappropriate outputs.”
Be that as it may, this uncanny new example of AI turning up in places it doesn’t belong is a reminder of just how aggressively companies are pushing this still-developing technology onto users, whether we want it or not.
More on Meta: OpenAI and Meta Reportedly Preparing New AI Models Capable of Reasoning