Press release: Better Images of AI launches a free stock image library of more realistic images of artificial intelligence

  • Non-profit collaboration starts to make and distribute more accurate and inclusive visual representations of AI
  • Follows research showing that current popular images of AI, which rely on themes such as white humanoid robots, glowing brains and blue backgrounds, create barriers to understanding of the technology, trust and diversity
  • Available for technical, science, news and general media and marketing communications

December 14, 2021 08:00 AM Coordinated Universal Time (UTC)

LONDON, UK. Today sees the launch of the Better Images of AI Image Library, which makes available the first commissioned and curated stock images of artificial intelligence (AI), in response to research studies that have substantiated concerns about the negative impacts of the existing available imagery. Better Images of AI is a collaboration between global academics, artists, diversity advocates and non-profit organisations. It aims to help create a more representative and realistic visual language for AI systems, themes, applications and impacts, and is now starting to provide free images, guidance and visual inspiration for those communicating about AI technologies.

At present, the available downloadable images on photo libraries, search engines, and content platforms are dominated by a limited range of images, for example, those based on science-fiction-inspired shiny robots, glowing brains and blue backgrounds. These tropes are often used as inspiration even when new artwork is commissioned by media or tech companies.

The first few images to be released on the library showcase different approaches to visually communicating technologies such as computer vision and natural language processing, and to communicating themes such as the role of ‘click workers’ who annotate data used in machine learning training, and other human input to machine learning.

A photographic rendering of a young black man standing in front of a cloudy blue sky, seen through a refractive glass grid and overlaid with a diagram of a neural network
Image by Alan Warburton / © BBC / Better Images of AI / Quantified Human / Licenced by CC-BY 4.0
Two digitally illustrated green playing cards on a white background, with the letters A and I in capitals and lowercase calligraphy over modified photographs of human mouths in profile.
Alina Constantin / Better Images of AI / Handmade A.I / Licenced by CC-BY 4.0
A banana, a plant and a flask on a monochrome surface, each one surrounded by a thin white frame with letters attached that spell the name of the objects
Max Gruber / Better Images of AI / Banana / Plant / Flask / Licenced by CC-BY 4.0

Better Images of AI is coordinated by We and AI and includes research, development and artistic input from BBC R&D, with academic partner the Leverhulme Centre for the Future of Intelligence. Founding supporters of the initiative include the Ada Lovelace Institute, The Alan Turing Institute, The Institute for Human-Centred AI, Digital Catapult, the International Centre for Ethics in the Sciences and Humanities (IZEW), All Tech is Human, Feminist Internet and the Finnish Center for Artificial Intelligence (FCAI). These organisations will advise on the creation of images, ensuring that social and technical considerations and expertise underpin the development and distribution of compelling new images.

Octavia Reeve, Interim Lead, Ada Lovelace Institute said:

“The images that depict AI play a fundamental role in shaping how we perceive it. Those perceptions shape the ways AI is built, designed, used and adopted. To ensure these technologies work for people and society we must develop more representative, inclusive, diverse and realistic images of AI. The Ada Lovelace Institute is delighted to be a Founding Supporter of the Better Images of AI initiative.”

Dr. Kanta Dihal, Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge said:

“Images of white plastic androids, Terminators, and blue brains have been increasingly widely criticized for misinforming people about what AI is, but until now there has been a huge lack of suitable alternative images. I am incredibly excited to see the Better Images of AI project leading the way in providing these alternatives.”

Dr. Charlotte Webb, Co-founder of Feminist Internet said: 

“The images we use to describe and represent AI shape not only how it is understood in the public imaginary, but also how we build, interact with and subvert it. Better Images is trying to intervene in the picturing of AI so we can expand beyond the biases and lack of imagination embedded in today’s stock imagery.”  

Professor Teemu Roos, Finnish Center for Artificial Intelligence, University of Helsinki said:

“Images are not just decoration – especially in today’s fast-paced media environment, headlines and illustrations count at least as much as the actual story. But while it’s easy to call out bad stock photos, it’s very hard to find good alternatives. I’m extremely happy to see an initiative like Better Images of AI filling a huge gap in the way we can communicate about AI without perpetuating harmful misconceptions and mystification of AI.”

David Ryan Polgar, Founder and Director of All Tech Is Human said:

“Visual representation of artificial intelligence greatly influences our overall conception of how AI is impacting society, along with signalling inclusion of who is, and who should be, involved in the process. Given the ubiquitous nature of AI and its broad impact on most every aspect of our lives, Better Images of AI is a much-needed shift away from the intimidatingly technical and often mystical portrayal of AI that assumes an unwarranted neutrality. AI is made by humans and all humans should feel welcome to participate in the conversation around it.”

Tania Duarte, Co-Founder of We and AI said:

“We have found that misconceptions about AI make it hard for people to be aware of the impact of AI systems in their lives, and the human agency behind them. Myths about sentient robots are fuelled by the pictures they see, which are overhyped, futuristic, colonial, and distract from the real opportunities and issues. That’s why We and AI are so pleased to have coordinated this project which will build greater public engagement with AI, and support more trustworthy AI.”

The Better Images of AI project has so far been funded by volunteers at We and AI and BBC R&D, and now invites sponsors, donations in kind and other support in order to grow the repository and ensure that more images from artists from underrepresented groups, and from the global south can be included. 

Better Images of AI invites interest from organisations who wish to know more about the briefs developed as part of the project and to get involved in working with artists to represent their AI projects. They also wish to make contact with artists and art organisations who are interested in joining the project.


For further information: info (at)

For funding offers: tania.duarte (at)




We and AI are a UK non-profit organisation engaging, connecting and activating communities to make AI work for everybody. Their volunteers develop programmes including the Race and AI Toolkit, and AI Literacy & AI in Society workshops. They support a greater diversity of people to get involved in shaping the impact and opportunities of AI systems.
Website: Email: hello (at)

Better Images of AI’s first Artist: Alan Warburton

A photographic rendering of a young black man standing in front of a cloudy blue sky, seen through a refractive glass grid and overlaid with a diagram of a neural network

In working towards providing better images of AI, BBC R&D are commissioning some artists to create stock pictures for open licence use. Working with artists to find more meaningful and helpful yet visually compelling ways to represent AI has been at the core of the project.

The first artist to complete his commission is London-based Alan Warburton. Alan is a multidisciplinary artist exploring the impact of software on contemporary visual culture. His hybrid practice feeds insight from commercial work in post-production studios into experimental arts practice, where he explores themes including digital labour, gender and representation, often using computer-generated images (CGI). 

His artwork has been exhibited internationally at venues including BALTIC, Somerset House, Ars Electronica, the National Gallery of Victoria, the Carnegie Museum of Art, the Austrian Film Museum, HeK Basel, Photographers Gallery, London Underground, Southbank Centre and Channel 4. Alan is currently doing a practice-based PhD at Birkbeck, London looking at how commercial software influences contemporary visual cultures.

Warburton’s first encounters with AI are likely familiar to us all through the medium of disaster and science fiction films that presented assorted ideas of the technology to broad audiences through the late 1990s and early 2000s. 

As an artist, Warburton says it is over the past few years that technological examples have jumped out for him to help create his work. “In terms of my everyday working life, I suppose that rendering – the process of computing photorealistic images – has always been an incredibly slow and complex process but in the last four or five years various pieces of software that are part of the rendering process have begun to incorporate AI technologies in increasing degrees,” he says. “AI noise reduction or things like rotoscoping are affected as the very mundane labour-intensive activities involved in the work of an animator and visual effects artists or image manipulator have been sped up. 

“AI has also affected me in the way it has affected everyone else through smart phone technology and through the way I interact with services provided by energy companies or banks or insurance people. Those are the areas that are more obscured, obtuse or mysterious because you don’t really see the systems. But with image processing software I have an insight into the reality of how AI is being used.” 

Warburton’s knowledge of software and AI tools has ensured that he is able to critically analyse which tools are beneficial. “I have been quite discriminatory in the way I use AI tools. There’s workflow tools that speed things up as well as image libraries and 3D model libraries. But the latter ones provide politically charged content even though it’s not positioned as such. Presets available in software will give you white skinned caucasian bodies and allow you to photorealistically simulate people but, for example, there’s hair simulation algorithms that default to caucasian hair. There’s this variegated tapestry of AI software tools, libraries, databases that you have to be discriminatory in the use of or be aware of the limitations and bias and voice those criticisms.” 

The artist’s personal use of technology is also careful and thought through. “I don’t have my face online,” he says. “There’s no content of me speaking online, I don’t have photographs online. That’s slightly unusual for someone who works as an artist and has necessary public engagement as part of my job, but I’m very aware that anything I put online can be used as training data –  if it’s public domain (materials available to the public as a whole, especially those not subject to copyright or other legal restrictions) then it’s fair game.

“Whilst my image is unlikely to be used for nefarious ends or contribute directly to a problematic database, there’s a principle that I stick to and I have stuck to for a very long time. There’s some control over my data, my presence and my image that I like to police although I am aware that my data is used in ways that I don’t understand. Keeping control over that data requires labour, you have to go through all of the options in consent forms and carefully select what you are willing to give away and not. Being discriminatory about how your data is used to construct powerful systems of control and AI is a losing game. You have to some extent to accept that your participation with these systems relies on you giving them access to your data.”

When it comes to addressing the issues of AI representation in the wider world, Warburton can see the issues that need to be solved and acknowledges that there is no easy answer. “Over the past five or ten years we have had waves of visual interpretations of our present moment,” he says. “Unfortunately many of those have reached back into retro tropes. So we’ve had vaporwave and post-internet aesthetics and many different Tumblr vibes trying to frame the present visual culture or the technological now but using retro imagery that seemed regressive. 

“We don’t have a visual language for a dematerialised culture. It’s very difficult to represent the culture that comes through the conduit of the smartphone. I think that’s why people have resorted to these analogue metaphors for culture. We may have reached the end of these attempts to describe data or AI culture, we can’t use those old symbols anymore and yet we still don’t have a popular understanding of how to describe them. I don’t know if it’s even possible to build a language that describes the way data works. Resorting to metaphor seems like a good way of solving that problem but this also brings in the issue of abstraction and that’s another problem.”

Alan’s experience and interest in this field of work have led to some insightful and recognisable visualisations of how AI operates and what is involved, which can act as inspiration for other artists with less knowledge of the technology. Future commissions from BBC R&D for the Better Images of AI project will enable other artists to use their different perspectives to help evolve this new visual language for dematerialised culture.

Nel blu dipinto di blu; or the “anaesthetics” of stock images of AI

Most of the criticism concerning stock images of AI focuses on their clichéd and kitschy subjects. But what if a major ethical problem lay not in the subjects but rather in the background? What if a major issue were, for instance, the abundant use of the color blue in the background of these images? This is the thesis we would like to discuss in detail in this post.

Stock images are usually ignored by researchers because they are considered the “wallpaper” of our consumer culture. Yet, they are everywhere. Stock images of emerging technologies such as AI (but also quantum computing, cloud computing, blockchain, etc.) are widely used, for example, in science communication and marketing contexts: conference announcements, book covers, advertisements for university master’s programs, etc. There are at least two reasons for us to take these images seriously.

The first reason is “ethical-political” (Romele, forthcoming). It is interesting to note that even the most careful AI ethicists pay little attention to the way AI is represented and communicated, both in scientific and popular contexts. For instance, a volume of more than 800 pages like the Oxford Handbook of Ethics of AI (Dubber, Pasquale, and Das 2020) does not contain any chapter dedicated to the representation and communication, textual or visual, of AI; however, the volume’s cover image is taken from iStock, a company owned by Getty Images.1 Its subject is a classic androgynous face made of “digital particles” that become a printed circuit board. The most interesting thing about the image, however, is not its subject (or figure, as we say in art history) but its background, which is blue. We take this focus on the background rather than the figure from the French philosopher Georges Didi-Huberman (2005) and, in particular, from his analysis of Fra Angelico’s painting.

Fresco “Annunciation” by Fra Angelico in San Marco, Florence (Public domain, via Wikimedia Commons)

Didi-Huberman devotes some admirable pages to Fra Angelico’s use of white in his fresco of the Annunciation painted in 1440 in the convent of San Marco in Florence. This white, present between the Madonna and the Archangel Gabriel, spreads not only throughout the entire painting but also throughout the cell in which the fresco was painted. Didi-Huberman’s thesis is that this white is not a lack, that is, an absence of color and detail. It is rather the presence of something that, by essence, cannot be given as a pure presence, but only as a “trace” or “symptom”. This thing is none other than the mystery of the Incarnation. Fra Angelico’s whiteness is not to be understood as something that invites absence of thought. It is rather a sign that “gives rise to thought,”2 just as the Annunciation was understood in scholastic philosophy not as a unique and incomprehensible event, but as a flowering of meanings, memories, and prophecies that concern everything from the creation of Adam to the end of time, from the simple form of the letter M (Mary’s initial) to the prodigious construction of the heavenly hierarchies. 

A glimmering square mosaic with dark blue and white colors consisting of thousands of small pictures

The image above collects about 7,500 images resulting from a search for “Artificial Intelligence” on Shutterstock. It is an interesting image because, with its “distant viewing,” it allows the background to emerge over the figure. In particular, the color of the background emerges. Two colors seem to dominate these images: white and blue. Our thesis is that these two colors have a diametrically opposed effect to Fra Angelico’s white. If Fra Angelico’s white is something that “gives rise to thought,” the white and blue in the stock images of AI have the opposite effect.

Consider the history of blue as told by the French historian Michel Pastoureau (2001). He distinguishes between several phases of this history: a first phase, up to the 12th century, in which the color was almost completely absent; an explosion of blue between the 12th and 13th centuries (consider the stained glass windows of many Gothic cathedrals); a moral and noble phase of blue (in which it became the color of the dress of Mary and of the kings of France); and finally, a popularization of blue, starting with Young Werther and Madame Bovary and ending with the Levi’s blue jeans industry and the company IBM, which is referred to as Big Blue. To this day, blue is the statistically preferred color in the world. According to Pastoureau, the success of blue is not the expression of some impulse, as could be the case with red. Instead, one gets the impression that blue is loved because it is peaceful, calming, and anesthetizing. It is no coincidence that blue is the color used by supranational institutions such as the UN, UNESCO, and the European Community, as well as by Facebook and Meta, of course. In Italy, the police force wears blue, which is why policemen are disdainfully called “Smurfs”.

If all this is true, then the problem with stock AI images is that, instead of provoking debate and “disagreement,” they lead the viewer into forms of acceptance and resignation. Rather than putting experts and non-experts on an equal footing and encouraging the latter to influence innovation processes with their opinions, they are “screen images”, following the etymology of the word “screen,” which means “to cover, cut, and separate”.

The notion of “disagreement” or “dissensus” (mésentente in French) is taken from another French philosopher, Jacques Rancière (2004), according to whom disagreement is much more radical than simple “misunderstanding (malentendu)” or “lack of knowledge (méconnaissance)”. These, as the words themselves indicate, are just failures of mutual understanding and knowledge that, if treated in the right way, can be overcome. Interestingly, much of the literature interprets science communication precisely as a way to overcome misunderstanding and lack of knowledge. Instead, we propose an agonistic model of science communication and, in particular, of the use of images in science communication. This means that these images should not calm down, but rather promote the flourishing of an agonistic conflict (i.e., a conflict that acknowledges the validity of the opposing positions but does not seek a definitive and peaceful solution to the conflict itself).3

The ethical-political problem with AI stock images, whether they are used in science communication or popular contexts, is then not the fact that they do not represent the technologies themselves. If anything, the problem is that while they focus on expectations and imaginaries, they do not promote individual or collective imaginative variations, but rather calm and anesthetize them.

This brings us to our second reason for talking about stock images of AI, which is “aesthetic” in nature. The term “aesthetics” should be understood here in an etymological sense. Sure, it is a given that these images, depicting half-flesh, half-circuit brains, variants of Michelangelo’s The Creation of Adam in human-robot version, etc., are aesthetically ugly and kitschy. But here we want to talk about aesthetics as a “theory of perception”, as suggested by the Greek word aisthesis, which means precisely “perception”. In fact, we think there is a big problem with perception today, particularly visual perception, related to AI. In short, we mean that AI is objectively difficult to depict and hence to make visible. This explains, in our opinion, the proliferation of stock images.

We think there are three possible ways to depict AI (which today is mostly synonymous with machine learning): (1) the first is by means of the algorithm, which in turn can be embodied in different forms, such as computer code or a decision tree. However, this is an unsatisfactory solution: first, because it is not understandable to non-experts; second, because representing the algorithm does not mean representing AI, just as representing the brain does not mean representing intelligence. (2) The second way is by means of the technologies in which AI is embedded: drones, autonomous vehicles, humanoid robots, etc. But representing the technology is not, of course, representing AI: nothing actually tells us that the technology is really AI-driven and not just an empty box. (3) Finally, the third way consists of giving up representing the “thing itself” and devoting ourselves instead to expectations, or imaginaries. This is where we would put most of the stock images and other popular representations of AI.4

Now, there is a tendency among researchers to judge (ontologically, ethically, and aesthetically) images of AI (and of technologies in general) according to whether they represent the “thing itself” or not. Hence, there is a tendency to prefer (1) to (2) and (2) to (3). An image is all the more “true,” “good,” and “aesthetically appreciable” the closer (and therefore the more faithful) it is to the thing it is meant to represent. This is what we call “referentialist bias”. But referentialism, precisely because of what we said above, works poorly in the case of AI images, because none of these images can really come close to and be faithful to AI. Our idea is not to condemn all AI images, but rather to save them, precisely by giving up referentialism. If there is an aesthetics (which, of course, is also an ethics and an ontology) of AI images, its goal is not to depict the technology itself, namely AI. If anything, it is to “give rise to thought,” through depiction, about the “conditions of possibility” of AI, i.e., its techno-scientific, social-economic, and linguistic-cultural implications.

Alongside theoretical work such as that discussed above, we also try to conduct empirical research on these images. The image shown earlier is the result of a quali-quantitative analysis we conducted on a large dataset of stock images. In this work, we first used the web crawler Shutterscrape, which allowed us to download massive numbers of images and videos from Shutterstock; we obtained about 7,500 stock images for the search “Artificial Intelligence”. Second, we used PixPlot, a tool developed by Yale’s DH Lab.5 The result is accessible through the link in the footnote.6 The map is navigable: you can select one of the ten clusters created by the algorithm and, for each of them, zoom in and out and choose single images. We also manually labeled the clusters with the following names: (1) background, (2) robots, (3) brains, (4) faces and profiles, (5) labs and cities, (6) line art, (7) Illustrator, (8) people, (9) fragments, and (10) diagrams.
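The pipeline described above boils down to two steps: turn each image into a feature vector, then cluster the vectors. A tool like PixPlot derives its features from a neural network; the minimal sketch below substitutes synthetic images and crude per-channel color means for those features, and a hand-rolled k-means for PixPlot’s clustering, so all of the data and parameters here are illustrative rather than taken from the actual tools.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means: returns one cluster label per row of X.
    Centres are initialised deterministically from rows spread
    across the dataset (illustrative, not production-grade)."""
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centers = X[idx].copy()
    for _ in range(iters):
        # assign each point to its nearest centre
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centre to the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(0)

# Synthetic stand-ins for stock images: 8x8 RGB arrays.
# One group is blue-dominated, the other bright/white overall.
blue_imgs = rng.uniform(0.0, 0.3, size=(50, 8, 8, 3))
blue_imgs[..., 2] += 0.7  # boost the blue channel
white_imgs = rng.uniform(0.7, 1.0, size=(50, 8, 8, 3))
images = np.concatenate([blue_imgs, white_imgs])

# Crude "embedding": the mean value per color channel (3 numbers per
# image), standing in for the neural-network features a real tool uses.
features = images.reshape(len(images), -1, 3).mean(axis=1)

labels = kmeans(features, k=2)
```

Because the backgrounds dominate the per-channel means, the blue-heavy and white-heavy groups separate cleanly into two clusters, which is a small-scale version of why “background” emerges as its own cluster in the full 7,500-image map.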

On a black background thousands of small pixel-like images floating similar to the shape of a world map

Finally, there’s another little project of which we are particularly fond: an Instagram profile which, inspired by existing initiatives such as the NotMyRobot! Twitter profile and blog, monitors the use of AI stock images in science communication and marketing contexts. The project also aims to raise awareness among both stakeholders and the public of the problems related to the depiction of AI (and other emerging technologies) and the use of stock imagery for it.

In conclusion, we would like to advance our thesis, which is that of an “anaesthetics” of AI stock images. The term “anaesthetics” is a combination of “aesthetics” and “anesthetics.” By this, we mean that the effect of AI stock images is precisely one that, instead of promoting access (both perceptual and intellectual) and forms of agonism in the debate about AI, has the opposite consequence of “putting them to sleep,” developing forms of resignation in the general public. Just as Fra Angelico’s white expanded throughout the fresco and, beyond the fresco, into the cell, so it is possible to think that the anaesthetizing effects of blue expand to the subjects, as well as to the entire media and communication environment in which these AI images proliferate.


  1. Also visible at
  2.  The expression is borrowed from Ricoeur (1967)
  3. On the agonistic model, inspired by Chantal Mouffe’s philosophy, in science and technology, see Popa, Blok, and Wesselink (2020)
  4. Needless to say, this is an idealistic distinction, in the sense that these levels are mostly overlapping: algorithm codes are colored, drones fly over green fields and blue skies that suggest hope and a future for humanity, and stock images often refer, albeit vaguely, to existing technologies (touch screens, networks of neurons, etc.)
  6. Another empirical work, which we did with other colleagues (Marta Severo, Paris Nanterre University; Olivier Buisson, Inathèque; and Claude Mussou, Inathèque), consisted of using a tool called Snoop, developed by the French Audiovisual Archive (INA) and the French National Institute for Research in Digital Science and Technology (INRIA), and also based on an AI algorithm. While with PixPlot the choice of the clusters is automatic, with Snoop the classes are decided by the researcher and the class members are found by the algorithm. With Snoop, we were able to fine-tune PixPlot’s classes and create new ones. For instance, we created the class “white robots” and, within this class, the two subclasses of female and infantile robots.


Dubber, M., Pasquale, F., and Das, S. 2020. The Oxford Handbook of Ethics of AI. Oxford: Oxford University Press. 

Pastoureau, M. 2001. Blue: The History of a Color. Princeton: Princeton University Press.

Popa, E.O., Blok, V., and Wesselink, R. 2020. “An Agonistic Approach to Technological Conflict”. Philosophy & Technology.

Rancière, J. 2004. Disagreement: Politics and Philosophy. Minneapolis: Minnesota University Press.

Ricoeur, P. 1967. The Symbolism of Evil. Boston: Beacon Press.

Romele, A. forthcoming. “Images of Artificial Intelligence: A Blind Spot in AI Ethics”. Philosophy & Technology.

Image credits

Title image showing the painting “l’accord bleu (RE 10)”, 1960 by Yves Klein, photo by Jaredzimmerman (WMF), CC BY-SA 3.0, via Wikimedia Commons

About us

Alberto Romele is a research associate at the IZEW, the International Center for Ethics in the Sciences and Humanities at the University of Tübingen, Germany. His research focuses on the interaction between philosophy of technology, digital studies, and hermeneutics. He is the author of Digital Hermeneutics (Routledge, 2020).

Dario Rodighiero is FNSF Fellow at Harvard University and Bibliotheca Hertziana. His research focuses on data visualization at the intersection of cultural analytics, data science, and digital humanities. He is also a lecturer at Pantheon-Sorbonne University, and he recently authored Mapping Affinities (Metis Presses, 2021).