Just a week after President Biden’s Executive Order on the use of AI highlighted potential societal harms including disinformation, Adobe responded today to a controversy over an AI-generated stock image of a Gaza explosion that was used by several small blogs and websites without being labeled as AI-generated.
Controversy Over AI-Generated Stock Image
The Australian news outlet Crikey first reported on the image, which appeared among other photorealistic images of Gaza available on Adobe Stock. The report led to significant pushback on X (formerly Twitter), and an Adobe spokesperson responded to the controversy with the following statement:
“Adobe Stock is a marketplace that requires all generative AI content to be labeled as such when submitted for licensing. These specific images were labeled as generative AI when they were both submitted and made available for license in line with these requirements. We believe it’s important for customers to know what Adobe Stock images were created using generative AI tools. Adobe is committed to fighting misinformation, and via the Content Authenticity Initiative, we are working with publishers, camera manufacturers, and other stakeholders to advance the adoption of Content Credentials, including in our own products. Content Credentials allows people to see vital context about how a piece of digital content was captured, created, or edited including whether AI tools were used in the creation or editing of the digital content.”
Adobe Stock, which describes itself as a “service that provides designers and businesses with access to millions of high-quality curated and royalty-free photos, videos, illustrations, vector graphics, 3D assets, and templates for all their creative projects,” is arguably less known for editorial or photojournalism imagery than its competitor Getty Images, which has a separate section on its website for editorial images.
The Use of AI in Adobe Stock
Adobe Stock has had partnerships in the past around editorial assets. However, it no longer offers editorial assets as part of its stock offering. Instead, Adobe touts the storytelling possibilities of its stock imagery, both traditional and AI-generated. On Adobe’s website, a post from August 2022 defines “illustrative editorial” as “conceptual imagery designed to illustrate articles on current events and newsworthy topics. This type of content often features images of real brands and products — like signs on buildings, soda cans, computers, and cars — to convey a story.”
For illustrative editorial, the website says Adobe does not accept:
- Images that feature recognizable people
- Images of restricted events such as conventions and sports games
- Images that feature tight crops of copyrighted or trademarked material, such as stamps, fine art, or other content that may violate privacy rights
- Digitally created or manipulated versions of trademarked logos or other brand content other than social media icons
The Israel-Gaza image controversy isn’t the first time that Adobe Stock has come under fire for issues around generative AI. In June, VentureBeat reported that a vocal group of contributors to Adobe Stock, which includes 300 million images, illustrations, and other content that trained Adobe’s Firefly model, said Adobe trained Firefly on their stock images without express notification or consent. The Adobe Stock creators said Firefly’s popularity is making it far less likely that users will purchase stock images. In addition, they said a flood of gen AI images into Adobe Stock is cannibalizing the platform.
At the time, Adobe responded to those concerns with the following statement: “Adobe Stock respects the rights of third parties and requires all Stock contributors to comply with our terms, including those specific to the use of generative AI tools. You can find those terms here.”