As Wired reports, an AI startup called BattlegroundAI promises to use AI to churn out a flood of digital advertisements for progressive political campaigns, claiming on its website that its service can help “develop hundreds of ads in minutes.”
The company claims it’s helping smaller, underfunded, left-wing campaigns gain a competitive advantage against higher-dollar rivals, all with the goal of countering MAGA’s political influence.
But in an ever-enshittified internet that’s already flooded with a ridiculous amount of stuff — including, increasingly, oceans of low-quality AI-generated content — we gotta say: this sucks.
Speaking to Wired, Battleground CEO Maya Hutchinson said the tool is “kind of like having an extra intern on your team.” It’s also designed only for text campaigns; according to a template available on the company’s website, that includes text-based ads for platforms like Instagram, Facebook, X-formerly-Twitter, Google Search, and YouTube, as well as programmatic ads. Hutchinson also claims that the AI isn’t designed to fully automate the process, or mass-post its outputs to ad channels without human intervention.
“You might not have a lot of time, or a huge team,” the CEO told Wired, “but you’re definitely reviewing it.”
But even if there is human review in the process of creating and later publishing Battleground-generated advertising blurbs — which is always a big “if” when it comes to AI content workflows — the tool is explicitly designed to mass-produce political content. It also, crucially, doesn’t include anything in the way of AI watermarks, or any disclosure to netizens that the ads they’re consuming were created synthetically. In fact, in a May blog post, the company bragged that a survey it conducted found that most voters can’t tell the difference anyway.
“A common criticism of content generated by large language models is that it is still too easy to identify, lacks consistency, is devoid of style, and is prone to information overload,” reads the blog, before declaring that, according to Battleground’s research, “voters may not be able to easily spot or detect AI-generated content.”
“AI, like social media, is simply a creation of our own making, not something we should fear, but one we should control to propel us into a more advanced society,” the post argues. “Progressive campaigns now have the chance to do that and win more races this cycle.”
But is flooding the web with loads of unmarked AI-generated text designed to capture netizens’ attention progressive? Or even ethical?
Last year, a survey conducted by the Artificial Intelligence Policy Institute (AIPI) asked a cohort of Americans if they supported “requiring that any political ads disclose and watermark content created by AI.” The response was overwhelming: sixty-nine percent of participants responded that yes, all political ads should disclose the use of AI. So sure, people might not always be able to identify AI-generated content — but that’s the point of watermarking.
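And disclosure wouldn’t be hard to build. As a purely illustrative sketch (this is not Battleground’s actual pipeline; the `add_disclosure` helper and label text are made up for this example), appending a plain-language AI disclosure to generated ad copy before it’s exported to any ad platform takes only a few lines of Python:

```python
# Illustrative only: a trivial disclosure step a campaign tool could run
# before exporting AI-generated ad copy. The function name and label text
# are hypothetical, not part of any real product.

AI_DISCLOSURE = "This ad copy was generated with the assistance of AI."

def add_disclosure(ad_text: str) -> str:
    """Append a plain-language AI disclosure to a generated ad blurb."""
    return f"{ad_text}\n\n{AI_DISCLOSURE}"

if __name__ == "__main__":
    generated_ad = "Lower costs. Better schools. Vote for change this November."
    print(add_disclosure(generated_ad))
```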
People actually do care how the media they consume is created, and for valid reasons, they’re skeptical of AI-generated material. With that in mind, as AI slop continues its steady march through the web, transparency regarding where, how, and why generative AI was used to create a given piece of content is an important means of protecting trust and clarity in the splintered, hazy digital world where so many voters get their news and information.
And in an even bigger way, Battleground feels like it’s completely misunderstanding this political moment. The Democrats’ nominee Vice President Kamala Harris and her running mate Minnesota Governor Tim Walz have captured a rare organic wave of press, public interest, and dare we say vibes that feel unmistakably authentic and human. From the coconut-pilled Brat edits to the touching moment during the Democratic National Convention this week when Walz’s son cried proud tears of joy for his father, there’s a distinct realness to this campaign that’s been able to crack through a packed and imitative digital environment.
Using AI to flood the zone with bland, automated political mad libs, on the other hand, feels like the domain of Harris and Walz’s opponents in the MAGA wing of the Republican party. As Charlie Warzel wrote this week for The Atlantic, the right-wing crowd has seized on AI slop as a distinct aesthetic — a dangerously inauthentic narrative soup into which they just might dissolve. Why would Democrats want to join them?
More on AI and the election: Donald Trump Says He Doesn’t Know Anything about AI-Generated Taylor Swift Images He Posted