Google is taking reservations to talk to its supposedly sentient chatbot

At the I/O 2022 conference this past May, Google CEO Sundar Pichai announced that the company would, in the coming months, gradually make its experimental LaMDA 2 conversational AI model available to select beta users. Those months have come. On Thursday, researchers at Google’s AI division announced that interested users can register to explore the model as access gradually opens up.

Regular readers will recognize LaMDA as the supposedly sentient natural language processing (NLP) model that a Google researcher got himself fired over. NLP models are a class of AI designed to parse human speech into actionable commands; they power digital assistants and chatbots like Siri and Alexa, and do the heavy lifting for real-time translation and subtitling apps. Basically, whenever you’re talking to a computer, NLP tech is doing the listening.

“I’m sorry, I didn’t quite get that” is a phrase that still haunts many early Siri adopters’ dreams, but NLP technology has advanced at a rapid pace over the past decade. Today’s models are built on hundreds of billions of parameters, can translate hundreds of languages in real time, and can even carry lessons learned in one conversation into subsequent chats.

Google’s AI Test Kitchen will let beta users explore and interact with the NLP model in a controlled, presumably supervised, sandbox. Access will begin rolling out to small groups of US Android users today before expanding to iOS devices in the coming weeks. The program offers a set of guided demos showcasing LaMDA’s capabilities.

“The first demo, ‘Imagine It,’ lets you name a place and offers paths to explore your imagination,” Tris Warkentin, Group Product Manager at Google Research, and Josh Woodward, Senior Director of Product Management for Labs at Google, wrote in a Google AI blog Thursday. “With the ‘List It’ demo, you can share a goal or topic, and LaMDA will break it down into a list of helpful subtasks. And in the ‘Talk About It (Dogs Edition)’ demo, you can have a fun, open-ended conversation about dogs and only dogs, which explores LaMDA’s ability to stay on topic even if you try to veer off-topic.”  

The focus on safe, responsible interactions is a common one in an industry where there’s already a name for chatbot AIs that go full Nazi, and that name is Tay. Thankfully, that exceedingly embarrassing incident was a lesson that Microsoft and much of the rest of the AI field have taken to heart, which is why we see such stringent restrictions on what users can have Midjourney or DALL-E 2 conjure, and on what topics Meta’s BlenderBot 3 can discuss.

That’s not to say the system is foolproof. “We’ve run dedicated rounds of adversarial testing to find additional flaws in the model,” Warkentin and Woodward wrote. “We enlisted expert red teaming members… who have uncovered additional harmful, yet subtle, outputs.” Those flaws include the model failing “to produce a response when [certain phrases are] used because it has difficulty differentiating between benign and adversarial prompts,” and producing “harmful or toxic responses based on biases in its training data,” as many AIs these days are wont to do.
