Photographs from Hurricane Helene portrayed the devastating, long-lasting impact of the disaster across multiple states. While this was a moment to empathize with and help those in need, it became a political playground for right-wing representatives like Amy Kremer to swing votes. And no, not by blaming one another, but by using AI-generated images to confuse, mislead, and run smear campaigns.
The images within the article are screenshots from social media platforms.
How Right-Wing Parties Use AI-Generated Images
It wasn’t just Kremer who received flak for her deeply problematic approach, which she defended as “emblematic” of the pain and suffering of the people; anyone who used such imagery to endorse a political leader drew criticism. For instance, images of Donald Trump wading through floodwaters to meet survivors are another misleading visual that will certainly earn him some brownie points.
Other examples include Trump using AI-generated images of Kamala Harris, his contender, to debase her speeches and work, as well as a clip of Taylor Swift allegedly supporting Trump. The latter is a serious issue, as Swift has been quite vocal about her approval of the Democratic party. Similarly, Trump supporters have used AI-generated images of him hugging individuals from the Black community to portray him in a favorable light and gain their support. Since the Black community holds significant sway in six states, such images are used by conservatives to win their favor.
“Political campaigns have trust issues to begin with,” Phillip Walzak, a political consultant in New York, told The New York Times. “No candidate wants to be accused of posting deepfakes in the election or using A.I. in a way that deceives voters.”
How Is It Being Used?
Such instances are seen not only in the United States but across the world. In India, Prime Minister Modi has been depicted in AI-generated images as a figure of valor who can save the Hindu community from Muslims, a minority currently being portrayed as evil by the Bharatiya Janata Party (BJP). Similarly, Prabowo Subianto, the winning candidate in Indonesia’s presidential election, used such tools to appeal to younger audiences.

“It is not simply that AI is ‘deceiving’ voters into believing something that is blatantly false; rather, AI enables content to be produced that is more creative and that can draw upon more innovative cultural references,” said Amogh Dar Sharma, a lecturer at the University of Oxford who studies political communications, to Al Jazeera. “This yields political propaganda which is more entertaining and therefore more shareable, which enables widespread circulation,” he added.
In another interesting report by 404 Media, it’s not only tech companies and parties that benefit; people in developing countries like India are also using such content to earn money. Jason Koebler, who led the investigation, found that AI content is being devised to game Facebook’s algorithm for engagement and advertising revenue. As a result, the flood of AI-generated spam, patriotic images, and the like will only continue to grow.
How to Stop The Spread of AI Images
The US Federal Communications Commission has already banned the use of AI-generated voices in robocalls, which call potential voters and speak to them over the phone. If that has been achieved, it is also high time that candidates face consequences for using deceptive AI-generated images. For starters, such imagery manipulates voters, which is the biggest red flag for a healthy democracy. It is also increasingly difficult to verify what is real. “There are a lot of studies showing that people have a very hard time knowing what is real versus not when it comes to online imagery,” said Emily Vraga, a health communication researcher at the University of Minnesota, to NPR. “This was true even before ChatGPT.”
Furthermore, with generative AI tools now being integrated into phones from Apple and Google, the number of AI-generated images is likely to increase tenfold, creating a tsunami of confusion and problems. Combined with social media algorithms, this is a recipe for disaster. “Algorithms, at least historically, have been driving people into these rabbit holes,” said Kai Larsen, the co-author of the 2021 book Automated Machine Learning for Business, to CU Boulder Today. “If you are willing to believe one piece of misinformation, then the algorithm is now finding out that you like conspiracy theories. So why not feed you more of them?”
While electoral photographs must be checked multiple times before they are shared, a long-term solution would be something like the “Blueprint for an AI Bill of Rights,” drafted in 2022 by the White House’s Office of Science and Technology Policy. The blueprint has not been passed into law, but it is a great start. Similarly, a large share of the onus must fall on politicians, whose careers depend entirely on how they serve the country.
AI-generated images are not only crass and boring but also stereotypical and harmful. The 2016 US elections saw foreign interference without AI; now, it seems any Tom, Dick, and Harry can create controversial images for a few likes. Perhaps it’s about time we put an end to such issues once and for all.
