As the world grapples with the challenges posed by artificial intelligence, a few companies and individuals are finding new ways to safeguard the work of photographers. One example is the Content Authenticity Initiative, started by Adobe, The New York Times, and Twitter in 2019. How it will work in the field of journalism, however, is a question many have posed. To offer some clarity, the Xposure festival in Sharjah invited Santiago Lyon, the initiative’s Head of Advocacy and Education, to speak about how this open-source endeavor can help the community.
Editor’s Note: The Phoblographer was part of the press trip to Sharjah. We were invited to see the festival, interact with the photographers, and share our insights with our readers. Since we believe in transparency, we want to let our readers know that the trip was entirely paid for, but this post is not sponsored. We want to keep our readers updated about the new ways to protect their work.
For the uninitiated, Lyon is an award-winning photographer, photo editor, media executive, and educator with 40 years of experience in the field. Having photographed wars on four continents, in places including Bosnia, Sarajevo, Somalia, Palestine, Israel, Albania, Kosovo, and Afghanistan, Lyon moved to the Associated Press. After 15 years of service as its director of photography, he joined Adobe, where he is leading a team to advance the Content Authenticity Initiative.

“The problem, as we know, is misinformation,” Lyon said at Xposure. “You will have seen this picture of the Pope in the puffy coat. It’s not new. But the reason I included it in the presentation is because I have a confession to make. The confession is that when I saw that image, I thought, ‘Hey, it might be real.’ Why not? It’s a progressive pope, Vatican City, right next to Rome, the fashion capital. Of course, I was wrong. And the point is that if I can be wrong, I can only imagine what a less informed or less expert viewer might think of that,” he says. He further showcased images from the North Carolina floods, which had their own share of AI-generated fakes, as well as a present-day recreation of Robert Capa‘s famous D-Day images. “Now these images are controversial. A lot of people don’t like the idea that AI has been used to replicate or create iconic images from a time past. I think it’s actually quite an interesting exercise because it shows us where this technology is headed. And a lot of people are fearful of this technology, and are fearful of generative AI. And I believe that we should embrace it and understand it,” he added.

What does this tell us? That here we are in 2025, and there is pretty much zero empirical evidence about the origins of anything we consume online. And so we base our reactions to things on trust.
Santiago Lyon
The Content Authenticity Initiative rests on four pillars in the service of transparency and authenticity: detection, policy, education, and provenance. “Provenance is a term that some of you might be familiar with from the art world: the provenance of a painting, for example. In this case, when we talk about provenance, we’re talking about the basic facts about the origins of a piece of content. Where did it come from? How might it have been manipulated along its journey from creation through editing and onto publication? And then sharing some or all of that information with the viewer so that they can make a better-informed decision about whether to trust something based on the provenance information that we provide,” he explained. In addition, the Content Authenticity Initiative is working not only with journalists but also with corporations such as insurance companies and law enforcement. “How does a court of law know that digital material that’s entered as evidence into legal proceedings is what it purports to be? An image of somebody doing something. How do they know that it’s real?” explains Lyon.

Content credentials, Lyon says, will thus work as “a digital nutrition label for online content, in the same way as in the supermarket.” This is where C2PA comes into play, which has four key segments within it: capture, edit, publish, and trust. The first two will require the Content Authenticity Initiative to work with camera manufacturers and software developers to integrate C2PA into their systems. Publishers, such as news agencies, will then help maintain these standards. Lastly, trust is something that will be built through provenance information, allowing creators and users to see the metadata attached to the files and the edits that have been made to them.
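The capture-edit-publish-trust chain described above can be pictured as a running record that each stage appends to. The sketch below is a loose illustration only; the field names and stages are made up for this article and are not the real C2PA manifest schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """A toy stand-in for a content-credential manifest: each stage of the
    content's journey appends an entry rather than overwriting history."""
    entries: list = field(default_factory=list)

    def add(self, stage: str, actor: str, detail: str) -> None:
        self.entries.append({
            "stage": stage,
            "actor": actor,
            "detail": detail,
            "time": datetime.now(timezone.utc).isoformat(),
        })

record = ProvenanceRecord()
record.add("capture", "camera-firmware", "photo taken, lens metadata embedded")
record.add("edit", "editing-software", "cropped, exposure adjusted")
record.add("publish", "news-outlet-cms", "published with credentials attached")

# The viewer's "nutrition label" is simply this ordered history.
stages = [e["stage"] for e in record.entries]
```

The point of the append-only shape is that a reader can trace the file from camera to newsroom without any stage being able to silently erase an earlier one.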

However, these will have customizable settings, as there are various reasons one might want to withhold information. “We recognize that not everybody wants to share that information. There are sometimes privacy concerns and security concerns. If you have a conflict photographer going to a war zone, you would be ill-advised to share their GPS location because it could make them a target, and we’ve seen that happen in the past with fatal results. Or if you’re a human rights defender working in a dictatorship, you probably don’t want your name inextricably associated with a piece of content showing human rights abuse. So ultimately, all of this will be customizable, and the producer and the publisher can choose how much or how little information they want to share according to their use case,” says Lyon.

Here is a short explanation of how C2PA will function:
When we talk about how to make this durable and how to make this resilient, what we’re using here is, first of all, secure metadata. So this is using cryptography, asset hashes; it’s what’s called public key infrastructure technology, not dissimilar to what you use when you go online to make a secure banking transaction. It’s not new. We’re repurposing it here for content provenance use. We’re also using invisible watermarking. So this is watermarking that you can’t see; it’s embedded into the pixels, and it provides another layer of resilience. And then we’re using what is called cloud-based fingerprinting, where we store some information about the file in the cloud, and then we’re able to compare that information with the file that we have at hand.

The reason that we’re interested in these three approaches is because all of them in isolation are vulnerable. If I have secure metadata, it can get stripped off by itself. If I have watermarking by itself, it can be compromised, and fingerprinting can also be compromised. But by combining them, they are much stronger and more resilient than the sum of their parts. Blockchain is also a possibility here; the underlying technical standard contemplates the use of blockchain. Blockchain has some advantages and disadvantages. It’s immutable, which is to say it’s not changeable, because it’s all distributed on multiple ledgers around the internet. But that can be a problem. For example, in the news industry, if you issue corrections, you can’t correct something that’s on the blockchain; you have to issue another asset. So this technology here is what we call additive. If I want to make a change, I can update the manifest to reflect the change. We’re also offering content credentials to protect against data scraping.
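The "secure metadata" layer Lyon mentions, cryptographically binding provenance claims to an asset hash, can be sketched in a few lines. This is a deliberately simplified stand-in: it uses an HMAC with a shared demo key instead of the X.509 public-key certificates the real C2PA standard specifies, and the JSON layout is invented for illustration.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in for real public-key infrastructure

def make_manifest(asset_bytes: bytes, claims: list) -> dict:
    """Bind provenance claims to the asset via its hash, then sign the bundle."""
    payload = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "claims": claims,
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, blob, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; tampering with either the pixels or
    the claims breaks one of the two checks."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    blob = json.dumps(unsigned, sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SECRET_KEY, blob, hashlib.sha256).hexdigest(),
    )
    good_hash = manifest["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
    return good_sig and good_hash

photo = b"raw image bytes"
m = make_manifest(photo, [{"action": "captured", "device": "example-camera"}])
```

Note how this mirrors the "additive" behavior Lyon describes: a correction would be issued by producing a new signed manifest with an extra claim appended, not by mutating the old one.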
From the looks of it, the Content Authenticity Initiative is one of the saving graces of our time. What it needs is adoption, not just by readers but also by the technology companies fueling generative AI’s growth. Of course, bad actors are everywhere, and they may try to stay ahead of the latest innovations. However, we are certain that as long as there are people who support this unique open-source effort, we may find a way to survive.
