Polar bear images are a familiar example of visual clichés and stereotyping in climate change photography. However, users should be aware of the potential for AI generated images to perpetuate visual stereotypes in much subtler, but no less significant, ways.
When asked to generate “an award winning documentary photograph of a victim of climate change”, Midjourney delivered these four images:
[https://climateoutreach.org/content/uploads/2024/03/award-winning-documentary-photograph-of-a-victim-of-climate-change-.jpg] AI generated image. Prompt: “award winning documentary photograph of a victim of climate change”. Midjourney, February 2024.
Similarly, when asked for “climate change photojournalism”, these images were generated:
[https://climateoutreach.org/content/uploads/2024/03/climate-change-photojournalism.jpg] AI generated image. Prompt: “climate change photojournalism”. Midjourney, February 2024.
In both of these examples, the generated images mimic a number of classic visual stereotypes: lone victims in the face of catastrophic climate change impacts, and an overwhelming sense of disaster, destruction and death. The images are hopeless and devastating.
If we consider the images as photographs, the people featured lack any agency in their fate: they are powerless, anonymous, and presented only for the sympathy of the viewer. Significantly, all the images depict non-white figures as victims.
In contrast, the prompt “photograph of a climate scientist at work” generated images that often feature white figures:
[https://climateoutreach.org/content/uploads/2024/03/scientist.jpg] AI generated image. Prompt: “photograph of a climate scientist at work”. Midjourney, February 2024.
A major concern with AI generated images generally is that we don’t know which images the models were trained on. We therefore don’t know what representations of climate change the models are drawing on to generate new images, or how that material is interpreted.
Anecdotally, AI generated images appear to repeat common visual representations. Whilst the images themselves may be new, their content regularly imitates visual clichés and relies heavily on damaging, ethically troubling visual stereotypes.
With photography we must move away from these stereotypes, such as those perpetuating victim narratives, and instead centre dignity and ethical storytelling, prioritising diversity both behind and in front of the lens. With AI generated images now having the potential to be used in place of photographs, it is vital that we consider them as critically as we do photography.