The theft of artwork by AI scraping tools is one of the most disturbing and irritating issues photographers and artists face today. It’s especially galling when you realize that image- and art-generating AI tools have scoured and scraped the internet for free, without repercussions, and then trained their generators on everything they gathered. When questioned, many simply feigned ignorance and said the data was freely available on the internet. That’s a thoroughly unethical response when you consider that many of these companies charge their users to generate content from models trained on the hard work of artists and photographers. Now, a team from the University of Chicago has developed a way to confuse the AI tools that scrape artwork and photographs. Their product, Nightshade, digitally “poisons” these tools and tricks them into seeing things that aren’t there. In doing so, it stops them from seeing your art for what it actually is, confusing the algorithm to no end.
What Is Nightshade?
Of course, we aren’t referring to those beautiful flowering plants you probably won’t see in your mom’s garden (owing to how poisonous and potentially fatal they are). However, I can see why Ph.D. student Shawn Shan picked such a name while developing a way to combat AI tools that feed off images and photos without a care in the world for copyright. Nightshade is what you might call a data-poisoning tool, built to trick AI models into seeing something that isn’t there. It fights the unauthorized scraping of artists’ work by introducing subtle changes to images. These changes are effectively invisible to the human eye, but they disrupt AI training so that models misinterpret the altered images and fail to exploit the artists’ work as intended.

Overview of prompt-specific poison attacks against generic text-to-image generative models. (a) User generates poison data (text and image pairs) designed to corrupt a given concept C (i.e., a keyword like “dog”), then posts them online; (b) Model trainer scrapes data from online webpages to train its generative model; (c) Given prompts that contain C, poisoned model generates incorrect images.
Screenshot from the whitepaper “Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models”
The Future of Combating AI?
Imagine you thought of training an AI model to generate a painting in the style of Picasso or Da Vinci, hoping to bring out the next Mona Lisa. You’d feed thousands upon thousands of images of their paintings into your model, trying to get it to produce something that wasn’t just an homage to them but might even exceed their technical and aesthetic brilliance in many ways. After all, isn’t that what AI is supposed to be getting at – being better at human things than humans? However, it turns out that your AI painting isn’t even close to what you were expecting. Because even though you fed your model the right data, it interpreted what it saw as something completely different from what it actually was.
Forcing AI models and algorithms to behave in this manner is the crux of what Nightshade does. Shawn’s earlier project, Glaze (which received a Special Mention in Time Magazine’s Best Inventions of 2023 list), worked along similar lines: it protected an artist’s signature style from being replicated by text-to-image AI models by applying a protective layer of sorts over the artwork, altering its visual characteristics as the models perceive them. Nightshade, the follow-up to that project, takes a more active approach towards combating AI scraping.
How Does Nightshade Work?
The whitepaper published on Shawn’s website credits him and five others from the Department of Computer Science at the University of Chicago with developing Nightshade. It mentions how text-to-image models like Midjourney, Dall-E, and Adobe Firefly have taken the world by storm over the last two years. Our thoughts on such models have been no secret, and we’ve often spoken about how potentially harmful they are to the art world. The team behind Nightshade clearly feels the same, and this tool was developed to try and find a solution.
Our work is driven by two key insights. First, while diffusion models are trained on billions of samples, the number of training samples associated with a specific concept or prompt is generally on the order of thousands. This suggests that these models will be vulnerable to prompt-specific poisoning attacks that corrupt a model’s ability to respond to specific targeted prompts. Second, poison samples can be carefully crafted to maximize poison potency to ensure success with very few samples.
– Shawn Shan, Wenxin Ding, Josephine Passananti, Stanley Wu, Haitao Zheng, Ben Y. Zhao
Nightshade works by digitally poisoning images so that AI models see them as something else. As more of these “poisoned” images are fed into an AI model, it unwittingly learns from incorrect data. Remarkably, the altered images appear unchanged to the human eye, making it nearly impossible to detect Nightshade’s interventions even upon close visual inspection. It’s sort of like an invisible poison filter that AI models can’t detect, one that eventually fools them into corrupting their own results.
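To make the idea more concrete, here is a minimal sketch of feature-space poisoning in PyTorch. To be clear, this is not Nightshade’s actual code: the whitepaper optimizes perturbations against the targeted text-to-image model’s own image feature extractor and bounds them with a perceptual metric, whereas this toy swaps in an off-the-shelf torchvision ResNet-18 as a stand-in extractor and uses a crude pixel-level budget. The file names (“dog.jpg”, “cat.jpg”) are placeholders.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

# Stand-in feature extractor (NOT what Nightshade uses; a placeholder for the
# text-to-image model's own image encoder).
extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in extractor.parameters():
    p.requires_grad_(False)

# Hypothetical files: the anchor image carries the concept we want to protect,
# the target image carries the decoy concept.
anchor = TF.to_tensor(Image.open("dog.jpg").convert("RGB").resize((224, 224))).unsqueeze(0)
target = TF.to_tensor(Image.open("cat.jpg").convert("RGB").resize((224, 224))).unsqueeze(0)

with torch.no_grad():
    target_feat = extractor(target)

delta = torch.zeros_like(anchor, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=0.01)
eps = 8 / 255  # crude invisibility budget (the paper uses a perceptual bound)

for step in range(200):
    optimizer.zero_grad()
    poisoned = (anchor + delta).clamp(0, 1)
    # Pull the poisoned image's features toward the decoy concept's features,
    # so a model trained on (poisoned image, "dog") learns the wrong thing.
    loss = F.mse_loss(extractor(poisoned), target_feat)
    loss.backward()
    optimizer.step()
    delta.data.clamp_(-eps, eps)  # keep pixel changes small and near-invisible

TF.to_pil_image((anchor + delta).detach().clamp(0, 1).squeeze(0)).save("dog_poisoned.png")
```

The end result of such an optimization is an image that still looks like a dog to you, but whose features now resemble a cat’s to the stand-in model: the same bait-and-switch Nightshade plays, at far greater sophistication.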
How Many Images Does Nightshade Need To Be Effective?

Example images generated by the clean (unpoisoned) and poisoned SD-XL models with different # of poison data. The attack effect is apparent with 1000 poisoning samples but not at 500 samples.
Screenshot from the whitepaper “Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models”
Countless images out there have already been used to train the AI models mentioned earlier, and it’s safe to say they’re probably quite skilled by now at generating lifelike, visually convincing images from text prompts. So how many poisoned images would Nightshade need to throw a spanner in the works at this stage? Not that many, going by the examples in the whitepaper. As few as 1,000 poisoned images noticeably muddled the outputs of a text-to-image model, as seen in the image above. That makes sense when you recall the paper’s insight that a model typically sees only thousands of training samples for any given concept, so a thousand poisoned ones can form a substantial share of what it learns from. Imagine if artists and photographers consistently applied Nightshade’s protection to their work before posting it online. In a matter of months, if not weeks, we might see AI generators producing unusable results, just like in their early days.
Yes, the resistance will bring down Skynet someday; T2 fans will know exactly what I’m talking about here. Why should such generators be able to profit off your copyrighted work for free?
The Proof Of The Pudding Is In The… Poisoning

Nightshade was available for download at the time of writing, so I decided to try it out and see whether I could tell what it did to my photos. You can set the intensity of the poison applied to your photos, though higher intensities can result in visible alterations. I processed a few photos at varying levels to see how they would turn out.
I wish Nightshade had an audible notification to let you know when your images have been processed.

For this image, Nightshade correctly identified the tag as “boats.” It only adds a tag when you choose a single image for processing; it can’t do this with batches of images. I chose a Low poison setting and a Faster render time (estimated at approximately 30 minutes). About 30 seconds later, I got an “Analyzing images” prompt with an estimated 3-minute processing time. This was followed by “Shading”; I guess that’s the term for what it does to your image.

CPU usage did spike at times on my M1 Mac Mini, but never to the extent that it noticeably slowed down other applications. In a little over 6 minutes, far quicker than the originally estimated 30, Nightshade notified me that the image processing was done. Comparing the file properties of both images, I observed that while Nightshade hadn’t changed the resolution of the file, the size had gone up from 807 KB to 2.1 MB. You can see the two below and compare them. The original is on the left, and the shaded one is on the right.
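If you want to run the same before/after check on your own files, a few lines of Python with Pillow will print each version’s pixel dimensions and size on disk (the file names here are placeholders for your own original and shaded images):

```python
from pathlib import Path
from PIL import Image

# Placeholder file names; point these at your own before/after pair.
pairs = (("original", "boats_original.jpg"), ("shaded", "boats_shaded.jpg"))

for label, path in pairs:
    with Image.open(path) as img:
        width, height = img.size                # pixel resolution
    size_kb = Path(path).stat().st_size / 1024  # size on disk
    print(f"{label}: {width}x{height} px, {size_kb:.0f} KB")
```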
How Did It Fare?


Even with the Low poison setting applied, aside from the file size, you can’t really tell that anything has been visually altered. I honestly can’t spot a single difference between these two photos. Nightshade’s poisoning here is imperceptible.
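My eyes may not be able to spot the changes, but a quick pixel-level subtraction can. This sketch (placeholder file names again, and the two images must share identical dimensions) prints the mean and maximum per-pixel difference and saves a tenfold-amplified difference image, which makes an otherwise invisible perturbation plainly visible:

```python
import numpy as np
from PIL import Image

# Placeholder file names; both images must have the same pixel dimensions.
orig = np.asarray(Image.open("boats_original.jpg").convert("RGB"), dtype=np.int16)
shaded = np.asarray(Image.open("boats_shaded.jpg").convert("RGB"), dtype=np.int16)

diff = np.abs(orig - shaded)  # per-pixel, per-channel absolute difference
print(f"mean diff: {diff.mean():.2f}, max diff: {diff.max()} (out of 255)")

# Exaggerate the difference tenfold so a near-invisible perturbation shows up.
amplified = np.clip(diff * 10, 0, 255).astype(np.uint8)
Image.fromarray(amplified).save("perturbation_x10.png")
```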
The next image had the Default setting applied and a Medium render time; Nightshade tagged this one as “flamingo.” It took about 15 minutes to process, and the file size went up from 606 KB to 1.8 MB.
There are clearly some visual changes in this shaded image, most noticeable in the out-of-focus areas in the center of the photo, to the left of the flamingo’s wing. The splash and ripples also appear slightly different. These are almost the kinds of artifacts you’d see after reducing an image’s quality by resizing it in Photoshop. I wouldn’t put this version of a shaded image on my website or in any portfolio of mine, and at this stage in my career, I probably wouldn’t upload it to social media either. I can’t tell how much of a difference the Low and Default settings make in terms of poisoning an AI model, but it would seem that the Default poison setting needs a render time longer than Medium, at least for the output to match the original image to the human eye.


For the third image, I chose a higher-resolution shot of the Dubai skyline. Applying the Default poison setting, I selected a Slower render time to see if it would reduce the kind of artifacts seen in the flamingo photo.


The sky looks drastically different in the shaded image, almost as if a painterly filter has been applied over it. The rest of the image seems unchanged, but I can’t use this as a photograph anywhere because it no longer looks like a true photo.
More Poison or Less?
For the fourth image, it was back to the Low setting again, but this time on the Slower render time. You can see the original image below, without any processing.

The comparison between the original image (on the left) and the Low poison / Slower render time version is below. There really aren’t any differences that can be made out by eye. However, this may also mean that many such images would be needed to poison AI models, as the level of poisoning at this setting may be quite low.


I ran Nightshade once again on the same original image, this time applying a Default poison / Faster render time. The poison here is clearly visible in the sky (in the image on the right below), around the minaret of the mosque.


So far, all the Default-poisoned images seemed to have visible artifacts only in the sky. So, to give it one more try, I picked an image without any sky in it and gave it a whirl in Nightshade.


From this, I could tell that it wasn’t skies in particular where the poison was most visibly added. Rather, the artifacts tended to show up in areas that were out of focus and lacking sharp detail.
Is Using Nightshade The Right Way to Combat AI?
It’s only natural that such a tool would have its detractors, although I can’t find any articles condemning Nightshade online at the moment. As a photographer myself, I would definitely support such systems. By disrupting AI training processes, Nightshade helps protect intellectual property rights, allowing artists to maintain control over their creations. It empowers artists to counteract the often exploitative practices of AI developers who use their work without consent, and its tactics pressure companies to seek proper licensing and consent before using creative works, promoting the development of more ethical AI systems. However, its capabilities could also be misused. If deployed maliciously, it could poison data in ways that harm legitimate AI research, impacting fields beyond art, such as medicine or autonomous vehicles. Such misuse could slow AI advancement, as restricted access to diverse training data might hinder innovation and its societal benefits.
For now, at least, getting poisoned images out of Nightshade that are truly indistinguishable from the originals is a time-consuming task, one that I hope becomes much quicker in the coming months so that visual creatives can apply it to their work before sharing it online. I personally hope it becomes effective and popular enough to be widely used, and maybe someday even arrives as a plugin for image editing software. In the meantime, it seems to be proving a capable tool for combating AI models, for the benefit of those who don’t want their art and artistic styles fed into them.
All screenshots in this article, including the lead image, have been taken from the whitepaper on the website of Shawn Shan, one of the creators of Nightshade.
