Startup Shocked When 4Chan Immediately Abuses Its Voice-Cloning AI

On January 23, ElevenLabs — an AI startup founded by former Google and Palantir employees — announced two things: a $2 million funding round and the release of a beta for an AI voice generator called Eleven, described in a company press release as an “AI speech platform promising to revolutionize audio storytelling.”

“The most realistic and versatile AI speech software, ever,” reads the venture’s website. “Eleven brings the most compelling, rich and lifelike voices to creators and publishers seeking the ultimate tools for storytelling.”

Now, a little over a week later, ElevenLabs is already being forced to reckon with, as the company put it in a Monday Twitter thread, “an increasing number of voice cloning misuse cases.” And though the company didn’t offer any details about said misuse, a Motherboard deep dive into the 4chan gutters found that a number of the site’s chaos monsters appear to have abused the tech to produce phony clips of celebrities saying racist, violent, or otherwise terrible things.

Shocking stuff, we know. Surely nobody could have seen this coming.

Among other abuses, Motherboard reports that one 4chan user posted a clip of a voice that sounded very much like Emma Watson reading a section of Hitler’s “Mein Kampf,” while another used a Ben Shapiro-esque voice to make “racist remarks” about US Representative Alexandria Ocasio-Cortez.

In yet another dark turn, a user reportedly synthesized a fictional voice saying “trans rights are human rights” while making sounds indicating they were being strangled. And though no user outright said that they were using ElevenLabs’ tech, Motherboard noted that at least one post contained a link to the beta.

“The clips run the gamut from harmless,” reads the report, “to violent, to transphobic, to homophobic, to racist.”

To its credit, ElevenLabs does seem to be taking the misconduct seriously, claiming in that same thread that all generated audio can be traced back to individual users. The company also floated several ideas for tighter guardrails, a list that includes requiring additional account verification like “payment info or even full ID verification,” “verifying copyright,” and even dropping the feature altogether in favor of “manually [verifying] each cloning request.” That last proposal would certainly be the safest, albeit the least lucrative, at least in the short term.

And that, of course, continues to be the issue with generative AI betas. The generative AI marketplace, led by OpenAI, is absolutely red hot, and startups have a better chance of securing VC funding if they have a product — even a half-baked one — to bring to market. And while beta testing is normal in the tech industry, traditional betas’ biggest problems tend to be bugs and glitches; those of generative AI, on the other hand, have the power to violate and harm real human beings, as ElevenLabs’ blunder so clearly demonstrates.

It’s like bringing a car that can go 100 miles per hour to market sans seatbelts or airbags. People will almost certainly get hurt, but at least you’ll show investors that you have a really fast car. (Note: the ElevenLabs beta is still available.)

READ MORE: AI-Generated Voice Firm Clamps Down After 4chan Makes Celebrity Voices for Abuse

More on generative AI: Buzzfeed Columnist Tells CEO to “Get F*cked” For Move to AI Content
