Taylor Swift Has Threatened Legal Action Over AI and Fake Nudes Before

This week, outrage has brewed online over AI-generated porn images of Taylor Swift, and as it turns out, some of the alleged forces behind the disgusting images may have faced the artist's legal wrath before.

As reports about the controversy indicate, many of the phony Swift nudes that ended up on social media were also uploaded to a site called Celeb Jihad, which has long been a spot for digitally altered images that purport to show famous women in lascivious situations.

Back in 2011, when these facsimiles were created by simply photoshopping a celeb's face onto nude photos of other women — all done, of course, without the consent of either — Swift's legal team was said to have threatened Celeb Jihad with legal action for posting fake "leaked" topless photos of her.

As TMZ reported at the time, the artist's lawyers sent the grody website a letter demanding it take down a post titled "Taylor Swift Topless Private Pic Leaked?," saying that it contained "false pornographic images and false 'news.'"

Whatever ended up happening between Celeb Jihad and Swift back then seems lost to the ages now, but it wasn’t the last time Swift got battle-ready over the alleged misappropriation of her likeness.

In his 2019 memoir, Microsoft president Brad Smith described a teachable moment involving Swift: three years earlier, the singer's representatives had contacted him, threatening legal action over the company's millennial chatbot, "Tay."

These days, Tay is the stuff of legends, a pre-ChatGPT exercise in AI chaos where, basically, the publicly available chatbot became really racist, really fast. But as Smith seemed to suggest in his memoir, titled “Tools and Weapons,” the takedown request from the superstar may have played a part in the company’s, er, swift kiboshing of the chatbot.

As Smith told it, an attorney representing Swift argued that the “use of the name Tay created a false and misleading association between the popular singer and our chatbot, and that it violated federal and state laws.” Just 18 hours later, the Microsoft president explained, Tay was euthanized.

Curiously enough, Microsoft plays into this latest Swift AI debacle, too.

As 404 Media reported, creeps on the Telegram channel where many of the offending images seem to have been collaboratively brainstormed advised each other to use, among other tools, Microsoft's free text-to-image generator. While that product and the others used by the scum included guardrails barring certain terms from producing lewd imagery of Swift, the users found and shared simple workarounds to make the tools spit out what they wanted.

In a statement to 404, a Microsoft spokesperson said the company is "investigating these reports" and "taking appropriate action to address them," citing its AI user code of conduct, which prohibits "adult or non-consensual intimate content."

It's the expected boilerplate response companies use to shield themselves in these sorts of circumstances. And considering Swift's litigiousness, echoed in rumors shared with the Daily Mail about how "furious" she is over the fake AI nudes, covering their asses seems prudent.

More on AI gone awry: Man Says Sunglass Hut Used Facial Recognition to Falsely Jail Him, Where He Got Sexually Assaulted
