If You Checked In on X, the Social Network Formerly Known as Twitter

Sometime in the last 24 to 48 hours, there was a good chance you came across AI-generated deepfake still images and videos featuring the likeness of Taylor Swift on X. The images depicted her engaged in explicit sexual activity with an assortment of fans of the Kansas City Chiefs, the NFL team of her boyfriend, Travis Kelce. This nonconsensual explicit imagery was resoundingly condemned by Swift's legions of fans: the hashtag #ProtectTaylorSwift trended alongside "Taylor Swift AI" on X earlier today, and the episode prompted headlines in news outlets around the world, even as X struggled to remove and block the content, playing "whack-a-mole" as new accounts re-posted it.

It has also led to renewed calls from U.S. lawmakers to crack down on the fast-moving generative AI marketplace. But big questions remain about how to do so without stifling innovation, or outlawing parody, fan art, and other unauthorized depictions of public figures that have traditionally been protected under the First Amendment to the U.S. Constitution, which guarantees the rights to freedom of expression and speech.

The Dangers of AI Image and Video Generation

It’s still unclear exactly which AI image and video generation tools were used to make the Swift deepfakes; leading services such as Midjourney and OpenAI’s DALL-E 3, for example, prohibit the creation of sexually explicit, or even sexually suggestive, content at both the policy and technical level. According to Newsweek, the X account @Zvbear admitted to creating some of the images and has since set its account to private.

Independent tech news outlet 404 Media traced the images to a group on the messaging app Telegram, and reported that its members used “Microsoft’s AI tools,” specifically Microsoft Designer, which is powered by OpenAI’s DALL-E 3 image model. That model likewise prohibits even innocuous creations featuring Swift or other famous faces.

These AI image generation tools, in our experience (VentureBeat uses these and other AI tools to generate article header imagery and text content), actively flag such user instructions (known as “prompts”), block the creation of imagery containing this content, and warn users that they risk losing their accounts for violating the terms of use.

Addressing the Issue

Still, the popular Stable Diffusion image generation AI model created by the startup Stability AI is open source, and can be used by any individual, group, or company to create a variety of imagery including sexually explicit imagery. In fact, this is exactly what got the image generation service and community Civitai into trouble with journalists at 404 Media, who observed users creating a stream of nonconsensual pornographic and deepfake AI imagery of real people, celebrities, and popular fictional characters.

Civitai has since said it is working to stamp out the creation of this type of imagery, and there has been no indication yet that it is responsible for enabling the Swift deepfakes at issue this week. Additionally, model creator Stability AI’s own implementation of Stable Diffusion on the website Clipdrop also prohibits explicit “pornographic” and violent imagery.

Regardless of all these policy and technical measures designed to prevent the creation of AI deepfake porn and explicit imagery, users have found ways to bypass them or access other services that provide such imagery, leading to the flood of Swift images over the last few days.

Even as AI is readily embraced for consensual creations by increasingly famous names in pop culture, the technology is also clearly being used for increasingly malicious purposes, which may stain its reputation among the public and lawmakers.

AI vendors and those who rely on them may suddenly find themselves in hot water for using the tech at all, even for something innocuous or inoffensive, and need to be prepared to explain how they will prevent or stamp out explicit and offensive content. If and when new regulation does come into effect, it could severely limit AI generation models’ capabilities, and therefore the work products of those who depend on them for less offensive uses.

A report from UK tabloid newspaper The Daily Mail notes that the nonconsensual explicit images of Swift were uploaded to the website Celeb Jihad, and that Swift is reportedly “furious” about their dissemination and considering legal action. Whether that action would target Celeb Jihad for hosting the images, or AI image generator companies such as Microsoft and OpenAI for enabling their creation, is not yet known.

The very spread of these AI-generated images has prompted renewed concern over the use of generative AI creation tools and their ability to create imagery that depicts real people — famous or otherwise — in compromising, embarrassing, and explicit situations. Perhaps then it is not surprising to see calls from lawmakers in the U.S., Swift’s home country, to further regulate the technology.

Proposed Legislation

Tom Kean, Jr., a Republican Congressman from the state of New Jersey who has recently introduced two bills designed to regulate AI — the AI Labeling Act and the Preventing Deepfakes of Intimate Images Act — released a statement to the press and VentureBeat today, urging Congress to take up and pass said legislation.

The first of Kean’s bills would require AI multimedia generator companies to add “a clear and conspicuous notice” to their generated works identifying them as “AI-generated content.” It’s not clear, however, how such a notice would stop the creation or dissemination of explicit AI deepfake porn and images.

The second bill, cosponsored by Kean and his colleague across the political aisle, Joe Morelle, a Democratic Congressman from New York state, would amend the 2022 Violence Against Women Act Reauthorization Act to allow victims of nonconsensual deepfakes to sue their creators, and possibly the software companies behind them, for damages of $150,000, plus legal fees or additional demonstrated damages.

Both bills stop short of banning AI generations of famous faces wholesale, which is probably a smart move, given that such a prohibition would likely be overturned by the lower courts or the U.S. Supreme Court. Unauthorized artworks of public figures have traditionally been viewed by the courts as allowable speech under the U.S. Constitution’s First Amendment, and even prior to AI could be found widely in the form of editorial cartoons, caricatures, editorial illustrations, fan art — even explicit fan art — and other media not signed off on by the subjects depicted.

This is because courts have found public figures and celebrities to have waived their “right to privacy” by capitalizing on their image. However, celebrities have successfully sued those who misappropriated their image for commercial gain under the “right of publicity,” a term coined by federal appeals court judge Jerome N. Frank in a 1953 case, which essentially comes down to celebrities being able to control the commercial usage of their own image. If Swift sues, it would likely be under this latter right. The new bills are unlikely to help her particular case, but would presumably make it easier for future victims to successfully sue those who deepfaked them.

To actually become law, both bills would have to be taken up by the relevant committees and voted through to the full House of Representatives, and an analogous bill would have to be introduced and passed in the U.S. Senate. Finally, the U.S. President would need to sign a reconciled bill uniting the work of both chambers of Congress. So far, both bills have only been introduced and referred to committees.

“It is clear that AI technology is advancing faster than the necessary guardrails.”
– Congressman Tom Kean, Jr.
