We have written plenty about AI-generated imagery, and today we are going to illustrate why the technology has troubled us since it was announced. On October 30, Spain was hit by an unprecedented flood which, TIME reported, killed over 200 people. The catastrophe struck eastern Spain along the Magro, Turia, and Poyo rivers, leaving ravaged towns in its wake. Fifteen days later, Spain is still reeling from the aftermath, with climate change a major culprit in the disaster. However, while we blame our presidents and their policies, or the climate-crisis naysayers, we also have to battle against AI. Because the photographs shared on social media were quickly called fake.
The lead image is a screenshot from a Twitter feed.
The first person to report this was Charles Arthur on Substack (via The Guardian), who noticed a disturbing trend on Twitter. According to his post, people on social media began calling genuine images fake, AI-generated, and whatnot. This is truly disturbing, considering that the pictures and the video were shared in real time, with news outlets also covering the then-ongoing disaster. So Arthur did a little digging into the photographs and soon, with the help of a bar sign, identified the street that was flooded. He used Apple Maps to view the place and even virtually walked down the street with its street-level view.

The flood in Spain marks the beginning of mistrust and of our predisposition to question reality. When Republican supporters shared AI-generated images of Hurricane Helene, especially ones that portrayed Donald Trump in a positive light, we already knew that real photographs of a tragedy would soon face the same suspicion. In fact, with the help of AI, any photograph, including real historical documents, can be used to plant seeds of mistrust. For instance, people who disbelieve climate change will keep finding ways to spread misinformation by sharing images that prove their point. One example: AI-generated images of smiling Palestinians, absurd given the ongoing circumstances in the region. It's exactly why Google's Gemini, Meta's AI integration, and Apple's launch of Apple Intelligence, all arriving at once, put us in a precarious situation. How do we tell what is real from what is fake?
The best approach is to talk to friends whose political stance and morals align with yours. They may help you vet sources and make sense of the garbage online. For instance, my circle is inclusive, and when we come across photographs with murky details (or a lack of context), we send them to each other and share our findings. This divides the work and lets us reach a conclusion informed by differing perspectives.

At the same time, support independent publications and news photographers with a reputation for sharing the truth and nothing else. Despite our best efforts, even the most objective reports can carry a tinge of subjectivity. But I also know photographers who set their emotions and opinions aside to showcase every side of the truth.
When you come across someone sharing AI-generated images, or falsely claiming that something real is AI-generated, call them out. Instagram, Facebook, and Twitter all let you report an image or a person. While this may be taxing, it is undoubtedly a great way to curb the mess coming our way.
Lastly, stop being indifferent to these challenges. Many people I know simply switch to autopilot when dealing with conflicting photographs. Some observe but lack the spark to continue the research or the debate. This lack of engagement will lead to more problems as Meta floods our feeds with AI-generated content. And with more and more smartphones shipping with built-in AI image generators, ignorance can't remedy this.
The future for photographers in an AI-driven world appears filled with struggles. Our livelihoods are on the line, and corporations will do anything to cut costs. If our ability to separate reality from fiction is eroded, there is very little to look forward to. So we must carry on, no matter how many skies are falling.
