OpenAI is adding new watermarks to DALL-E 3

OpenAI’s image generator DALL-E 3 will add watermarks to image metadata as more companies roll out support for standards from the Coalition for Content Provenance and Authenticity (C2PA).

The company says watermarks from C2PA will appear in images generated on the ChatGPT website and the API for the DALL-E 3 model. Mobile users will get the watermarks by February 12th. They’ll include both an invisible metadata component and a visible CR symbol, which will appear in the top left corner of each image.

People can check the provenance — which AI tool was used to make the content — of any image generated by OpenAI’s platforms through websites like Content Credentials Verify. So far, only still images, not videos or text, can carry the watermark. 

Image: OpenAI

OpenAI says adding the watermark metadata to images represents a “negligible effect on latency and will not affect the quality of the image generation.” It will also increase image sizes slightly for some tasks. 

The C2PA, a group consisting of companies like Adobe and Microsoft, has been pushing the use of the Content Credentials watermark to identify the provenance of content and show if it was made by humans or with AI. Adobe created a Content Credentials symbol, which OpenAI is adding to DALL-E 3 creations. Meta recently announced it will add tags to AI-generated content on its social media platforms. 

Identifying AI-generated content is one of the flagship directives in the Biden administration’s executive order on AI. But watermarking is not a surefire way to stop misinformation. OpenAI points out that C2PA’s metadata can “easily be removed either accidentally or intentionally,” especially since many social media platforms strip metadata from uploaded content. Taking a screenshot also omits it.

“We believe that adopting these methods for establishing provenance and encouraging users to recognize these signals are key to increasing the trustworthiness of digital information,” OpenAI says on its site. 
