
How do you tell whether an image has been generated by AI — or whether it’s a genuine photograph?
As an ordinary person, the proverbial guy next door, at first glance? Barely at all. More honestly: „not at all“, because image and video generators keep getting better.
So let’s simply accept that digital photos and AI-generated images will only be distinguishable with forensic tools (provided nobody has, say, maliciously trained their generator to circumvent those very tools).
However: there is a technology that provides transparency on a voluntary basis. Which brings us neatly to the topic of „We as a company (voluntarily) label our AI images, because our brand stands for transparency and fairness.“
Can you spot the little badge marked „cr“ in the image above? What does that „cr“ symbol on images actually mean? Behind it lies information about whether the image was created or edited by humans, by AI, or by a combination of both. This kind of labelling makes good sense under the EU AI Act, and frankly it’s the fair thing to do for users as well. As a user, there’s something rather reassuring about knowing a photo is guaranteed to be real.
This is cr
„cr“ is a labelling system for visual content. It’s offered by contentcredentials.org, run by the „Content Authenticity Initiative“. Among others, the AI image generation tool Bria uses it, as do OpenAI (ChatGPT) and Adobe Firefly (even when you use „Generate Image“ via Firefly inside Photoshop).
Google (Nano Banana) is not on board.
Adobe participates across all its tools; on the camera manufacturer side, Leica (M11-P) and Nikon (Z9) are in the game.
The technology behind cr
With cr, cryptographically hashed and signed information, essentially metadata, is embedded in the image file (or uploaded to the [Adobe] cloud [including a URL that gets integrated into the file], or supplied as a .c2pa sidecar file).
This information lets you determine whether the image came from an AI (or was edited with AI), whether it’s a genuine photograph with one or more named creators, or both.
In the world of the Content Authenticity Initiative, this metadata is called a „manifest“, stored collectively in a „manifest store“ within the media file. If you’re curious about what these data fields can contain: click here.
The concept is somewhat reminiscent of the familiar EXIF data found in digital photos, only massively beefed up. EXIF data, however, are easily tampered with; Content Credentials are not. Moreover, cr doesn’t replace good old EXIF data; it’s additional data with relatively little overlap. But: Both cr and EXIF data can be deleted. Ouch.
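To make „embedded in the image file“ a little more concrete: in PNGs, the C2PA manifest store travels in a dedicated chunk (chunk type `caBX` per the C2PA specification, an assumption stated here rather than something from this article). A minimal Python sketch that walks a PNG’s chunk list and reports whether such a chunk is present; it does not parse or cryptographically verify the manifest itself:

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_chunk_types(data: bytes) -> list:
    """Return the chunk types of a PNG byte string, in order."""
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    types, pos = [], len(PNG_SIG)
    while pos + 8 <= len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type, payload, 4-byte CRC
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        types.append(ctype.decode("latin-1"))
        pos += 8 + length + 4  # header + payload + CRC
        if ctype == b"IEND":
            break
    return types

def has_content_credentials(data: bytes) -> bool:
    # "caBX" is the chunk type the C2PA spec assigns to the manifest store in PNGs
    return "caBX" in png_chunk_types(data)
```

Actual verification (checking signatures, reading the manifest) is what contentcredentials.org or tools like Adobe’s do; this only answers „is something there at all?“.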
The biggest problems with cr (and the simple solution):
- Support from image-viewing software providers and platforms outside the Adobe ecosystem is patchy at best.
- cr data is frequently stripped during image editing, for instance if you upload an AI image to Canva and export it, or re-save it with an image viewer, or if a web CMS converts an uploaded PNG to WebP. This will undoubtedly improve in future, but legacy software will never catch up.
- The most low-tech method therefore remains: simply Photoshop the creator’s name straight into the image. Case in point: have a look at an Aldi leaflet (at least in Germany) and you’ll find several images with a „Generated by AI“ or „Background generated with AI“ label baked right in. A really clean, low-tech solution!
Verifying images with cr
Verifying an image is straightforward, either via the website or by appending a parameter to the URL, as in the following example. After „source=“ you simply add the URL of the image you want to check.
https://contentcredentials.org/verify?source=https://www.testwebsite.com/test.jpg
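Assembling that verify link programmatically is a one-liner; here is a small Python helper (the image URL is the placeholder from the example above, not a real asset):

```python
from urllib.parse import urlencode

def verify_url(image_url: str) -> str:
    """Build a contentcredentials.org verify link for a publicly reachable image URL."""
    # urlencode percent-encodes the image URL, which is safer than pasting it in raw
    return "https://contentcredentials.org/verify?" + urlencode({"source": image_url})

print(verify_url("https://www.testwebsite.com/test.jpg"))
```

The unencoded form shown above also works in a browser, but percent-encoding avoids surprises when the image URL itself contains a query string.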
Adobe also offers a (Chromium-based) plugin for the Chrome browser: Adobe Content Authenticity
What a cr verification result looks like:

How it works:
- The AI image generator (or camera, or image editing software) inserts
a) a signed code snippet into the image file (or uploads it to a cloud) (or both) (or supplies it as a sidecar). In the special case of video, start and end frames are additionally defined.
b) a clickable button into the image, provided the viewing application supports it, which you can see above (and may have already spotted on LinkedIn posts): that’s the „cr“ badge (the „Content Credentials pin“). It „belongs to“ the Coalition for Content Provenance and Authenticity (C2PA).
- The file format is „relatively“ flexible (and handled erratically): Bria and ChatGPT embed cr into PNG files, while Adobe Lightroom can only integrate it into JPGs. Photoshop handles both PNG and JPG. Adobe Premiere Pro offers cr for audio and video files (including MP3, MP4, MOV, AVI), plus TIFFs.
- Clicking the pin opens the contentcredentials.org website. However, the website displaying the image must support showing the pin (and that’s where things get thin; LinkedIn, where I’ve seen it myself, and Behance already support it; many more sites will probably follow).
- You can also upload the original image directly to contentcredentials.org, and the data will be read out.
- There, further details about the AI image are revealed:
- „This image was generated with an AI tool“
- Application used: Bria AI
- AI tool used: Bria AI
- Action: „Created“ (i.e. not merely edited)
- An editing history is also recorded
- These data fields can also include the creator’s name, which is obviously brilliant for tracking authorship
- If you’re a photographer or digital artist, you can use Lightroom (for instance) to embed relevant information into the image, such as:
- „Producer“, i.e. your name
- Links to social media accounts: e.g. Behance, Instagram, LinkedIn, X
- Which software tools were involved
- Where applicable, the provenance of image components
- Where applicable, editing steps taken
- Where applicable, the camera model
- AI opt-out: „This photo must not be used to train AI models“
- However, credentials are lost when:
- You re-save the file using a different application
- Your content management system converts uploaded images (e.g. PNGs) into, say, WebP, as WordPress does here, for instance
- If you want credentials to be stored „permanently“ in the file, as „Durable Content Credentials“, you’ll additionally need to use digital watermarking (e.g. from providers such as Digimarc, ATSC, or Adobe Trustmark) or fingerprinting (e.g. from Adobe or Digicaps). Click here for more details.
- Credentials are also added for perfectly mundane edits. Example: you use an AI tool to remove the background from an image. The AI provider may then insert a cr snippet into the file. Your image is now flagged as AI-edited (or, even worse, AI-generated). Whether that’s sensible or absurd is open to debate.
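One of the delivery options mentioned earlier is the detached .c2pa sidecar file. A quick way to check whether one sits next to an image, assuming the common convention that the sidecar shares the asset’s base name (an assumption, not something this article guarantees):

```python
from pathlib import Path

def find_sidecar(image_path: str):
    """Return the path of a detached .c2pa manifest next to the asset, or None."""
    # Convention assumed here: photo.jpg -> photo.c2pa in the same directory
    sidecar = Path(image_path).with_suffix(".c2pa")
    return sidecar if sidecar.is_file() else None
```

A sidecar is easy to lose when files are moved or uploaded separately, which is one reason embedding (or durable watermarking) is the more robust route.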
Caveat: none of this is entirely robust yet
Example: I uploaded an image generated with ChatGPT to Bria and edited it there. The ChatGPT image had credentials. During editing – inpainting, i.e. adding new image content – Bria deleted the existing credentials and overwrote them with its own Bria credentials. In other words, the credential implementations across providers are not compatible.
The reverse route – editing a Bria image with ChatGPT – likewise results in the credentials being overwritten.
Image 1: OpenAI’s credentials — looking good (OpenAI even creates two credentials: one for generation and one for editing)

Image 2, after editing with Bria (there’s now an additional picture hanging on the wall at the top of the image). Only the Bria credential remains. ChatGPT’s „authorship“ was deleted.

Verdict: still early days
Traceability and provenance records for images are a splendid idea. In the long run, we may well need to adjust our thinking:
- Certified images can be verified by users (or by software, browsers, etc.) as „genuine“ or „AI-heavy“.
- Images without a certificate could automatically arouse suspicion (or be rejected outright by software or browsers).
In any case, the topic is currently virtually unknown and plays no practical role just yet. We’ll have to watch how things develop. In principle, though, labelling is a jolly good idea.
Which file formats does Content Credentials support?
cr can be embedded into all the common „end-user-facing“ media file formats for images, video, and audio that one encounters in everyday digital life. Venerable formats such as GIF and BMP didn’t make the cut.
- Image formats:
- JPEG (= .JPG)
- PNG
- WebP
- TIFF
- DNG
- HEIC
- HEIF
- SVG
- AVIF (AV1)
- Video formats:
- MP4
- MOV
- AVI
- Audio formats:
- MP3
- WAV
- M4A