Google Staff Warned Its AI Was a “Pathological Liar” Before the Company Released It Anyway

“AI ethics has taken a back seat.”

Pathological Liar

Google asked around 80,000 of its employees to test its still-unreleased Bard AI chatbot before releasing it to the public last month, Bloomberg reports.

And the reviews, as it turns out, were absolutely scathing. The AI was a “pathological liar,” one worker concluded, according to screenshots obtained by Bloomberg. Another tester called its responses “cringe-worthy.” A different employee even received potentially life-threatening advice on how to land a plane and how to go scuba diving.

In a February note, which was seen by nearly 7,000 workers, another employee called Bard “worse than useless: please do not launch.”

In short, it was a complete disaster. Yet, as we all know, the company decided to launch it anyway, labeling it an “experiment” and adding prominent disclaimers.

Catch Up

Google’s decision was likely a desperate move to catch up with the competition, with OpenAI racing ahead thanks to its highly popular ChatGPT, even though Bard was still in a seemingly underdeveloped state.

According to Bloomberg, Google employees tasked with figuring out the safety and ethical implications of the company’s new products were told to stand aside as AI tools were being developed.

“AI ethics has taken a back seat,” former Google manager and president of the Signal Foundation Meredith Whittaker told Bloomberg. “If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.”

The tech giant, however, maintains that AI ethics remain a top priority.

Yet two Google employees told Bloomberg that AI ethics reviews at the company are almost entirely voluntary, outside of more sensitive areas like biometrics and identity features.

Google executive Jen Gennai said during a recent meeting that upcoming AI products could score an “80, 85 percent, or something” on “fairness,” a bias indicator, instead of having to aim for “99 percent,” according to the report.

Frustration

Worse yet, it sounds like Google’s upper ranks are actively siloing the teams working on new AI features, keeping the rest of the company in the dark about what they’re actually building.

“There is a great amount of frustration, a great amount of this sense of like, what are we even doing?” Margaret Mitchell, who used to lead Google’s AI ethics team before being pushed out over a fairness dispute in 2021, told Bloomberg.

“Even if there aren’t firm directives at Google to stop doing ethical work,” she added, “the atmosphere is one where people who are doing the kind of work feel really unsupported and ultimately will probably do less good work because of it.”

More on Bard: Google Surprised When Experimental AI Learns Language It Was Never Trained On
