Keep Quiet
Sayonara, scammers! The Federal Communications Commission announced Thursday a ban on unsolicited robocalls that use an AI-generated voice, sparing all of us from an infernally annoying form of phone spam. The decision was unanimous, CNN reports, and comes just weeks after AI was used to imitate President Joe Biden in bogus phone calls to New Hampshire voters.
Per the ruling, the FCC explained that AI-generated voices in scam robocalls are outlawed by the 1991 Telephone Consumer Protection Act (TCPA), because they count as “artificial.”
In the TCPA’s original wording, “any telephone call to any residential telephone line using an artificial or prerecorded voice to deliver a message without the prior express consent of the called party” is prohibited by the act.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters,” FCC chair Jessica Rosenworcel said in a statement, as quoted by CNN.
“We’re putting the fraudsters behind these robocalls on notice.” For now, anyone wanting to make robocalls with the technology must obtain the express prior consent of the called party.
Scamorama
It was a swift decision from the FCC, which only announced its intentions a week ago. The agency was almost certainly spurred on by an especially foreboding robocall scam last month, in which an AI-generated voice impersonating President Biden told New Hampshire residents not to vote in the state’s primary election.
The culprits have since been identified. The New Hampshire attorney general alleges that a Texas telecom company called Life Corp was behind the scheme; the company is now under investigation.
While it’s unclear if anyone was fooled by the calls, the fabricated voice of Biden is eerily convincing, as is the case with many of the scammy AI voice clones the world has seen a disturbing rise in over the past few years. Some are used to trick elderly citizens into forking over money by impersonating a relative in an emergency. Others have even targeted banks, stealing millions of dollars.
With this latest scheme — which comes in an election year no less — the technology has underlined its potential to spread disinformation and target political figures. And that, more than anything, will probably be keeping lawmakers suspicious of AI technology on their toes.
More on generative AI: Coke’s AI-Generated Super Bowl Ad Is Downright Scary