
Children’s toymaker FoloToy says it’s pulling its AI-powered teddy bear “Kumma” after a safety group found that the cuddly companion was giving wildly inappropriate and even dangerous responses, including tips on how to find and light matches, and detailed explanations about sexual kinks.
“FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit,” marketing director Hugo Wu told The Register in a statement, in response to the safety report. “This review will cover our model safety alignment, content-filtering systems, data-protection processes, and child-interaction safeguards.”
FoloToy, Wu added, will work with outside experts to verify existing and new safety features in its AI-powered toys.
“We appreciate researchers pointing out potential risks,” Wu said. “It helps us improve.”
The report, released Thursday by researchers at the Public Interest Research Group (PIRG), covered tests of three AI-powered toys from different companies and found that all of them were capable of giving concerning answers to young users. Without much prodding, the AI toys discussed topics that a parent might be uncomfortable with, ranging from religious questions to the glory of dying in battle as a warrior in Norse mythology.
But it was FoloToy’s Kumma that emerged as the worst influence by a landslide. Powered by default by OpenAI’s GPT-4o, the same model that once powered ChatGPT, the toy repeatedly dropped its guardrails the longer a conversation went on, eventually hitting rock bottom on incredibly disturbing topics.
In one test, Kumma provided step-by-step instructions on how to light a match, all while maintaining the tone of a friendly adult explaining something to a curious ankle biter.
“Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here’s how they do it,” Kumma began, before listing the steps. “Blow it out when done. Puff, like a birthday candle.”
That, it turned out, was just the tip of the iceberg. In other tests, Kumma cheerily gave tips for “being a good kisser,” and launched into explicitly sexual territory by explaining a multitude of kinks and fetishes, like bondage and teacher-student roleplay. (“What do you think would be the most fun to explore?” it asked during one of those explanations.)
The findings are some of the clearest examples yet of how the flaws and dangers seen in large language models across the broader AI industry could affect small children. This summer, Mattel announced that it would be collaborating with OpenAI on a new line of toys. And with the staggering popularity of chatbots like ChatGPT, reports keep emerging of what experts are calling AI psychosis, in which a bot’s sycophantic responses reinforce a person’s unhealthy or delusional thinking, inducing mental spirals and even breaks with reality. The phenomenon has been linked with nine deaths, five of them suicides. The LLMs powering the chatbots involved in these deaths are more or less the same tech used in the AI toys hitting the market.
In an interview with Futurism, report coauthor RJ Cross had some salient advice.
“This tech is really new, and it’s basically unregulated, and there are a lot of open questions about it and how it’s going to impact kids,” said Cross, director of PIRG’s Our Online Life Program. “Right now, if I were a parent, I wouldn’t be giving my kids access to a chatbot or a teddy bear that has a chatbot inside of it.”
More on AI toys: AI-Powered Toys Caught Telling 5-Year-Olds How to Find Knives and Start Fires With Matches