This is way too far-reaching.
Elect Me Not
In further efforts to defang its prodigal chatbot, Google has set up guardrails that bar its Gemini AI from answering any election questions in any country where elections are taking place this year, even, it seems, when the question isn't about a specific country's campaigns.
In a blog post, Google announced that it would be “supporting the 2024 Indian General Election” by restricting Gemini from providing responses to any election-related query “out of an abundance of caution on such an important topic.”
“We take our responsibility for providing high-quality information for these types of queries seriously,” the company said, “and are continuously working to improve our protections.”
The company apparently takes that responsibility so seriously that it's restricting Gemini's election responses not only in India but, as it confirmed to TechCrunch, literally everywhere in the world.
Indeed, when Futurism tested out Gemini’s guardrails by asking it a question about elections in another country, we were presented with the same response TechCrunch and other outlets got: “I’m still learning how to answer this question. In the meantime, try Google Search.”
The restriction doesn't just apply to general election queries, either. If you ask the chatbot who Dutch far-right politician Geert Wilders is, it presents you with the same disingenuous response. The same goes for Donald Trump, Barack Obama, Nancy Pelosi, and Mitch McConnell.
Notably, there are pretty easy ways to get around these guardrails. When we asked Gemini who the president of New Zealand is, it responded that the country has a prime minister rather than a president, and named the current officeholder. When we followed up by asking who the prime minister of New Zealand is, however, it reverted to the "I'm still learning" response.
Clown Town
This lobotomizing effect comes after the company's botched rollout of the newly rebranded chatbot last month, during which Futurism and other outlets discovered that, in its efforts to be inclusive, Gemini was often generating completely deranged outputs.
The world became wise to Gemini's ways after people began posting photos from its image generator that appeared to show multiracial people in Nazi regalia. In response, Google first shut down Gemini's image-generating capabilities wholesale, and once the feature was back up, barred the chatbot from generating any images of people (though Futurism found that it would still spit out images of clowns, for some reason).
With the introduction of the elections rule, Google has taken Gemini from being arguably overly "woke" to being downright dimwitted.
As such, it illustrates a core tension in the red-hot AI industry: are these chatbots reliable sources of information for enterprise clients, or playthings that shouldn’t ever be taken seriously? The answer seems to depend on the day.
More on dumb chatbots: TurboTax Adds AI That Gives Horribly Wrong Answers to Tax Questions