The AI-generated hell of the 2024 election

Manipulated images, edited video, misleading robocalls — none of these things are new to American electoral politics. But with the advent of cheap generative AI, the 2024 presidential election is shaping up to be an unprecedented battleground between voters and their would-be manipulators. The election cycle will test the limits of these new technologies, the resilience of the public’s evolving media literacy, and the capabilities of the regulators who are struggling to keep control of the situation.

You can read all our coverage below.

Highlights

  • Black and yellow collage of Joe Biden and Donald Trump
    Image: The Verge

    Our new Thursday episodes of Decoder are all about deep dives into big topics in the news, and this week, we’re continuing our miniseries on one of the biggest topics of all: generative AI.

    Last week, we took a look at the wave of copyright lawsuits that might eventually grind this whole industry to a halt. Those are basically a coin flip — and the outcomes are off in the distance, as those cases wind their way through the legal system. A bigger problem right now is that AI systems are really good at making just-believable-enough fake images and audio — and with tools like OpenAI’s new Sora, maybe video soon, too.

  • Election officials are freaking out about AI.

    First there was the Joe Biden robocall, where a deepfake of the president’s voice told New Hampshire voters to stay home during the primary. Now election officials worry they, too, will be impersonated during this election cycle.

    “It has the potential to do a lot of damage,” Arizona secretary of state Adrian Fontes, who tested out a deepfake of himself last year, told Politico.

  • Illustration of two smartphones sitting on a yellow background with red tape across them that reads “DANGER”
    Illustration by Amelia Holowaty Krales / The Verge

    The Federal Communications Commission has made it illegal for robocalls to use AI-generated voices. The ruling, issued on Thursday, gives state attorneys general the ability to take action against callers using AI voice cloning tech.

    As outlined in the ruling, AI-generated voices are now considered “an artificial or prerecorded voice” under the Telephone Consumer Protection Act (TCPA). This restricts callers from using AI-generated voices for non-emergency purposes or without prior consent. The TCPA bans a variety of automated call practices, including using an “artificial or prerecorded voice” to deliver messages, but it did not explicitly state whether that covered AI-powered voice cloning. The new ruling clarifies that such recordings do fall under the law’s scope.

  • A picture of Joe Biden with red and blue graphics.
    Image: Laura Normand / The Verge

    Lingo Telecom and Life Corporation are linked to a robocall campaign that used an AI voice clone of President Joe Biden to discourage New Hampshire voters from voting, New Hampshire Attorney General John Formella said during a press conference on Tuesday. Authorities have issued cease-and-desist orders as well as subpoenas to both companies. Both companies are based in Texas and have been investigated for illegal robocalls in the past, the FCC noted in the document.

    In its cease-and-desist order to Lingo Telecom, the FCC accused the company of “originating illegal robocall traffic.” According to the document, the robocalls began on January 21st of this year, two days before the New Hampshire presidential primary. Voters in the state received calls that played an “apparently deepfake prerecorded message” of the president advising potential Democratic voters not to vote in the upcoming primary election. The calls were spoofed to appear to come from the spouse of former New Hampshire Democratic Party official Kathy Sullivan. 

  • “This clear bid to interfere in the New Hampshire primary demands a thorough investigation and a forceful response.”

    Congressman Joseph Morelle (D-NY) wants the Department of Justice to investigate an allegedly AI-generated Joe Biden robocall that provided false information to voters — the latest sign that AI-generated disinformation will be an ongoing election concern. New Hampshire’s own Department of Justice is already investigating.

  • Virginia Elections
    Julia Nikhinson for The Washington Post via Getty Images

    Amid growing concern that AI can make it easier to spread misinformation, Microsoft is offering its services, including a digital watermark identifying AI content, to help crack down on deepfakes and enhance cybersecurity ahead of several worldwide elections.

    In a blog post co-authored by Microsoft president Brad Smith and Teresa Hutson, Microsoft’s corporate vice president for Technology for Fundamental Rights, the company said it will offer several services to protect election integrity, including a new tool that harnesses the Content Credentials watermarking system developed by the Coalition for Content Provenance and Authenticity (C2PA). The goal of the service is to help candidates protect the use of their content and likeness and prevent deceptive information from being shared.
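
    The C2PA spec itself is far more involved, but the core idea behind Content Credentials is to bind signed provenance metadata to a cryptographic hash of the media, so that any later edit breaks verification. Below is a minimal Python sketch of that concept; it swaps in a shared-secret HMAC for the certificate-based signatures the real standard uses, and every name in it is illustrative rather than part of any actual C2PA API.

        import hashlib
        import hmac
        import json

        # Stand-in for a signing certificate; real Content Credentials use X.509 certs.
        SIGNING_KEY = b"demo-signing-key"

        def attach_credentials(media: bytes, claims: dict) -> dict:
            """Build a signed provenance manifest bound to the media's hash."""
            manifest = {
                "content_hash": hashlib.sha256(media).hexdigest(),
                "claims": claims,  # e.g. who produced it, whether AI tools were used
            }
            payload = json.dumps(manifest, sort_keys=True).encode()
            manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
            return manifest

        def verify_credentials(media: bytes, manifest: dict) -> bool:
            """Check the signature, then check the media hasn't been altered."""
            unsigned = {k: v for k, v in manifest.items() if k != "signature"}
            payload = json.dumps(unsigned, sort_keys=True).encode()
            expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expected, manifest["signature"]):
                return False  # manifest tampered with or signed by someone else
            return unsigned["content_hash"] == hashlib.sha256(media).hexdigest()

        photo = b"...campaign photo bytes..."
        manifest = attach_credentials(photo, {"issuer": "Example Campaign", "ai_generated": False})
        assert verify_credentials(photo, manifest)             # intact media verifies
        assert not verify_credentials(photo + b"x", manifest)  # any edit fails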

  • Image of Meta’s wordmark on a red background.
    Illustration: Nick Barclay / The Verge

    Meta announced Wednesday that it would require advertisers to disclose when potentially misleading AI-generated or altered content is featured in political, electoral, or social issue ads.

    The new rule applies to advertisements on Facebook and Instagram that contain “realistic” images, videos, or audio falsely showing someone doing something they never did, or depicting a real event playing out differently than it actually did. Content depicting realistic-looking fake people or events would also need to be disclosed. The policy is expected to go into effect next year.

  • A Facebook logo surrounded by blue dots and white squiggles.
    Illustration by Nick Barclay / The Verge

    The Facebook Oversight Board is reviewing a new case involving a doctored video of President Joe Biden that could reshape Meta’s policies on “manipulated media” ahead of the 2024 election.

    The video in question includes an altered clip of Biden placing an “I Voted” sticker on his granddaughter’s chest and kissing her cheek during last year’s midterm elections. The footage was edited to repeat Biden’s motion of touching the girl’s chest, set to Pharoahe Monch’s “Simon Says” at the moment the rapper says, “Girls, rub on your titties.” It was posted on Facebook with a caption calling Biden “a sick pedophile.”

  • Hands with additional fingers typing on a keyboard.
    Image: Álvaro Bernis / The Verge

    A phony AI-generated attack ad from the Republican National Committee (RNC) offered Congress a glimpse into how the tech could be used in next year’s election cycle. Now, Democrats are readying their response.

    On Tuesday, Rep. Yvette Clarke (D-NY) introduced a new bill to require disclosures of AI-generated content in political ads. Clarke told The Washington Post on Tuesday that her bill was a direct response to the RNC ad launched last week. The video, released soon after President Joe Biden announced his 2024 reelection campaign, depicts a dystopian future in which a reelected Biden reinstates the draft to aid Ukraine’s war effort and China invades Taiwan.

  • TikTok logo
    Illustration: Alex Castro / The Verge

    As the prospect of a US TikTok ban continues to grow, the video app has refreshed its content moderation policies. The rules on what content can be posted and promoted are largely unchanged but include new restrictions on sharing AI deepfakes, which have become increasingly popular on the app in recent months.

    The bulk of these moderation policies (or “Community Guidelines,” in TikTok’s parlance) is unchanged and unsurprising. There’s no graphic violence allowed, no hate speech, and no overtly sexual content, with graduated rules for the latter based on the subject’s age. One newly expanded section, though, covers “synthetic and manipulated media” — aka AI deepfakes.
