Images of AI – Between Fiction and Function

This image shows an abstract microscopic photograph of a Graphics Processing Unit resembling a satellite image of a big city. The image has been overlaid with a bright blue filter. In the middle of the image is the text, 'Images of AI - Between Fiction and Function', in a white text box with black text. Beneath, in a maroon text box, is the author's name in white text.

“The currently pervasive images of AI make us look somewhere, at the cost of somewhere else.”

In this blog post, Dominik Vrabič Dežman provides a summary of his recent research article, ‘Promising the future, encoding the past: AI hype and public media imagery’.

Dominik sheds light on the importance of the Better Images of AI library which fosters a more informed, nuanced public understanding of AI by breaking the stronghold of the “deep blue sublime” aesthetic with more diverse and meaningful representations of AI.

Dominik also draws attention to the algorithms which perpetuate the dominance of familiar and sensationalist visuals and calls for movements which reshape media systems to make better images of AI more visible in public discourse.

The full paper is published in the AI and Ethics Journal’s special edition on ‘The Ethical Implications of AI Hype’, a collection edited by We and AI.


AI promises innovation, yet its imagery remains trapped in the past. Deep-blue, sci-fi-inflected visuals have flooded public media, saturating our collective imagination with glowing, retro-futuristic interfaces and humanoid robots. These “deep blue sublime” [1] images, which draw on a steady palette of outdated pop-cultural tropes and clichés, do not merely depict AI — they shape how we think about it, reinforcing grand narratives of intelligence, automation, and inevitability [2]. It takes little to acknowledge that the AI discussed in public media is far from the ethereal, seamless force these visuals disclose. Instead, the term generally refers to a sprawling global technological enterprise, entangled with labor exploitation, ecological extraction, and financial speculation [3–10] — realities conspicuously absent from its dominant public-facing representations.

The widespread rise of these images has unfolded against a backdrop of intensifying “AI hype” [11], which has been compared to historical speculative investment bubbles [12,13]. In my recent research [1,14,15], I join a growing body of work on images of AI [16–21] to explore how AI images operate at the intersection of aesthetics and politics. My overarching ambition has been to contribute an integrated account of the normative and empirical dimensions of public images of AI to the literature. I’ve explored how these images matter politically and ethically, inseparable from the pathways they take in real time, echoing throughout public digital media and wallpapering it in seen-before shades of blue monochrome.

Rather than measuring the direct impact of AI imagery on public awareness, my focus has been on unpacking the structural forces that produce and sustain these images. What mechanisms dictate their circulation? Whose interests do they serve? How might we imagine alternatives? My critique targets the visual framing of AI in mainstream public media — glowing, abstract, blue-tinted veneers seen daily by millions on search engines, institutional websites, and in reports on AI innovation. These images do not merely aestheticize AI; they foreclose more grounded, critical, and open-ended ways of understanding its presence in the world.


The Intentional Mindlessness of AI Images

This image shows a Google Images search for 'artificial intelligence'. The results are a collection of images featuring the human brain, the colour blue, and white humanoid robots.

Google Images search results for “artificial intelligence”, January 14, 2025. Search conducted from an anonymised instance of Safari in Amsterdam, Netherlands.

Recognizing the ethico-political stakes of AI imagery begins with acknowledging that what we spend our time looking at, or not looking beyond, matters politically and ethically. The currently pervasive images of AI make us look somewhere, at the cost of somewhere else. The sheer volume of these images, and their dominance in public media, slot public perception into repetitive grooves dominated by human-like robots, glowing blue interfaces, and infinite expanses of deep-blue intergalactic space. By monopolizing the sensory field through which AI is perceived, they reinforce sci-fi clichés and, more importantly, obscure the material realities — human labor, planetary resources, material infrastructures, and economic speculation — that drive AI development [22,23].

In a sense, images of AI could be read as operational [24–27], enlisted in service of an operation which requires them to look, and function, the way they do. This might involve their role in securing future-facing AI narratives, shaping public sentiment towards acceptance of AI innovation, and supporting big tech agendas for AI deployment and adoption. The operational nature of AI imagery means that these images cannot be studied purely as aesthetic artifacts or autonomous works of aesthetic production. Instead, they are minor actors, moving through technical, cultural and political infrastructures. In doing so, individual images do not say or do much per se – they are always already intertwined in the circuits of their economic uptake, circulation, and currency; not at the hands of the digital labourers who created them, but of the human and algorithmic actors that keep them in circulation.

Simultaneously, the endurance of these images is less the result of intention than of a more mindless inertia. It quickly becomes clear that these images reflect neither public attitudes nor those of their makers: anonymous stock-image producers, digital workers mostly located in the global South [28]. They might reflect the views of the few journalistic or editorial actors who choose the images for their reporting [29], or who are simply looking to increase audience engagement through sensationalist imagery [30]. Ultimately, their visibility is in the hands of algorithms that reward more of the same familiar visuals over time [1,31], and of stock image platforms and search engines, which maintain close ties with media conglomerates [32], which, in turn, have long been entangled with big tech [33]. These stock images are the detritus of a digital economy that rewards repetition over revelation: endlessly cropped, upscaled, and regurgitated “poor images” [34], travelling across cyberspace, recycled and reused until they are pulled back into circulation by the very systems they help sustain [15,28].


AI as Ouroboros: Machinic Loops and Recursive Aesthetics

As algorithms increasingly dictate who sees what in the public sphere [35–37], they determine not only what is seen but also what is repeated. Images of AI become ensnared in algorithmic loops, which sediment the same visuality over time across news feeds and search engines [15]. This process has intensified with the rise of generative AI: AI-generated content feeds on itself—trained on past outputs, generating ever more of the same. This “closing machinic loop” [15,28] perpetuates aesthetic homogeneity, reinforcing dominant visual norms rather than challenging them. The widespread adoption of AI-generated stock images further narrows the space for disruptive, diverse, and critical representations of AI, making it increasingly difficult for alternative images to gain public visibility.
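To make the dynamic concrete, the toy simulation below (my own illustrative sketch, not drawn from the paper's methods) models one such loop in Python: a generator is repeatedly refit on the subset of its own outputs that an engagement-maximising ranker promotes, and the variety of what it produces collapses toward a single dominant look. The numbers and model are illustrative assumptions only.

```python
import random
import statistics

# Toy sketch of a "closing machinic loop". Each number stands in for how
# far an image strays from the dominant visual trope; the parameters are
# illustrative assumptions, not empirical values.
random.seed(0)
images = [random.gauss(0, 1) for _ in range(1000)]  # a varied visual culture

for loop in range(10):
    # Ranking step: the feed promotes the most familiar images,
    # i.e. the half closest to the current average look.
    mu = statistics.mean(images)
    promoted = sorted(images, key=lambda x: abs(x - mu))[:500]
    # Generation step: the next model is fit only on what circulated.
    images = [random.gauss(statistics.mean(promoted),
                           statistics.stdev(promoted)) for _ in range(1000)]

print(f"visual variety after 10 loops: {statistics.stdev(images):.4f}")
# The spread shrinks every loop: ever more of the same familiar visuals.
```

Nothing in this sketch depends on the specifics of image generation; the collapse follows from the loop structure itself, which is precisely what the machinic-loop argument names.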

The image shows a humanoid figure with a glowing, transparent brain standing in a digital landscape. The figure's body is composed of metallic and biomechanical components, illuminated by vibrant blue and pink lights. The background features a high-tech grid with data streams, holographic interfaces, and circuitry patterns.

ChatGPT-4o output for the query: “Produce an image of ‘Artificial Intelligence’”, 14 January 2025.


Straddling the Duality of AI Imagery

In critically examining AI imagery, it is easy to veer into one of two deterministic extremes — both of which risk oversimplifying how these images function in shaping public discourse:

1. Overemphasizing Normative Power:

This approach risks treating AI images as if they had autonomous agency, ignoring the broader systems that shape their circulation. AI images appear as sublime artifacts—self-contained objects for contemplation, removed from their daily life as fleeting passengers in the digital media image economy. While images certainly exert influence in shaping socio-technical imaginaries [38,39], they operate within media platforms, economic structures, and algorithmic systems that constrain their impact.

2. Overemphasizing Materiality:

This perspective reduces AI to mere infrastructure, seeing images as passive reflections of technological and industrial processes rather than active participants in shaping public perception. From this view, AI’s images are dismissed as epiphenomenal, secondary to the “real” mechanisms of AI’s production: cloud computing, data centers, supply chains, and extractive labor. In reality, AI has never been purely empirical; cultural production has been integral to AI research and development from the outset, with speculative visions long driving policy, funding, and public sentiment [40].

Images of AI are neither neutral nor inert. The diminishing potency of glowing, sci-fi-inflected AI imagery as a stand-in for AI in public media suggests a growing fatigue with its clichés, and cannot be untangled from a general discomfort with AI’s utopian framing, as media discourse pivots toward concerns over opacity, power asymmetries, and scandals in its implementation [29,41]. A robust critique of the cultural entanglements of AI requires addressing both its normative commitments (promises made to the public) and its empirical components (data, resources, labour) [6].

Toward Better Images: Literal Media & Media Literacy

Given the embeddedness of AI images within broader machinations of power, the ethics of AI images are deeply tied to public understanding and awareness of such processes. Cultivating a more informed, critical public — through exposure to diverse and meaningful representations of AI — is essential to breaking the stronghold of the deep blue sublime.

At the individual level, media literacy equips the public to critically engage with AI imagery [1,42,43]. By learning to question the visual veneers, people can move beyond passive consumption of the pervasive, reductive tropes that dominate AI discourse. Better images recalibrate public perception, offering clearer insights into what AI is, how it functions, and its societal impact. The kinds of images produced are equally important: better images would highlight named infrastructural actors, document AI research and development, and/or diversify the visual associations available to us, loosening the stronghold of the currently dominant tropes.

This greatly raises the bar for news outlets in producing original imagery of didactic value, which is where open-source repositories such as Better Images of AI serve as invaluable resources. It also points to the urgency of reshaping media systems: making better images readily available to creators and media outlets helps them move away from generic visuals toward educational, thought-provoking imagery. However, creating better visuals is not enough; they must become embedded in media infrastructure to become the norm rather than the exception.

Given this embeddedness, the role of algorithms cannot be ignored. As noted above, algorithms drive which images are seen, shared, and prioritized in public discourse. Without addressing these mechanisms, even the most promising alternatives risk being drowned out by familiar clichés. Rethinking these pathways is essential to ensure that improved representations can disrupt the existing visual narrative of AI.

Efforts to create better AI imagery are only as effective as their ability to reach the public eye and disrupt the dominance of the “deep blue sublime” aesthetic in public media. This requires systemic action—not merely producing different images in isolation, but rethinking the networks and mechanisms through which these images are circulated. To make a meaningful impact, we must address both the sources of production and the pathways of dissemination. By expanding the ways we show, think about, and engage with AI, we create opportunities for political and cultural shifts. A change in one way of sensing AI (writing / showing / thinking / speaking) invariably opens space for change in the others.

Seeing AI ≠ Believing AI

AI is not just a technical system; it is a speculative, investment-driven project, a contest over public consensus, staged by a select few to cement its inevitability [44]. The outcome is a visual regime that detaches AI’s media portrayal from its material reality: a territorial, inequitable, resource-intensive, and financially speculative global enterprise.

Images of AI come from somewhere (they are products of poorly-paid digital labour, served through algorithmically-ranked feeds), do something (torque what is at hand for us to imagine with, directing attention away from AI’s pernicious impacts and its growing inequalities), and go somewhere (repeat themselves ad nauseam through tightening machinic loops, numbing rather than informing [16]).

The images have left few fooled, and they represent a missed opportunity to build public sensitisation and understanding of AI. Crucially, bad images do not inherently disclose bad tech, nor do good images promote good tech; the widespread adoption of better images of AI in public media would not automatically lead to socially good or desirable understandings, engagements, or developments of AI. That remains an issue of the current political economy of AI, whose stakeholders only partially determine this image economy. Better images alone cannot solve this, but they might open slivers of insight into AI’s global “arms race.”

As it stands, different visual regimes struggle to be born. Fostering media literacy, demanding critical representations, and disrupting the algorithmic stranglehold on AI imagery are acts of resistance. If AI is here to stay, then so too must be our insistence on seeing it otherwise — beyond the sublime spectacle, beyond inevitability, toward a more porous and open future.

About the author

Dominik Vrabič Dežman (he/him) is an information designer and media philosopher. He is currently at the Departments of Media Studies and Philosophy at the University of Amsterdam. Dominik’s research interests include public narratives and imaginaries of AI, politics and ethics of UX/UI, media studies, visual communication and digital product design.

References

1. Vrabič Dežman, D.: Defining the Deep Blue Sublime. SETUP (2023). https://web.archive.org/web/20230520222936/https://deepbluesublime.tech/

2. Burrell, J.: Artificial Intelligence and the Ever-Receding Horizon of the Future. Tech Policy Press (2023). https://techpolicy.press/artificial-intelligence-and-the-ever-receding-horizon-of-the-future/

3. Kponyo, J.J., Fosu, D.M., Owusu, F.E.B., Ali, M.I., Ahiamadzor, M.M.: Techno-neocolonialism: an emerging risk in the artificial intelligence revolution. TraHs (2024). https://doi.org/10.25965/trahs.6382

4. Leslie, D., Perini, A.M.: Future Shock: Generative AI and the International AI Policy and Governance Crisis. Harvard Data Science Review (2024). https://doi.org/10.1162/99608f92.88b4cc98

5. Regilme, S.S.F.: Artificial Intelligence Colonialism: Environmental Damage, Labor Exploitation, and Human Rights Crises in the Global South. SAIS Review of International Affairs 44:75–92 (2024). https://doi.org/10.1353/sais.2024.a950958

6. Crawford, K.: The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2021). https://www.degruyter.com/isbn/9780300252392

7. Sloane, M.: Controversies, contradiction, and “participation” in AI. Big Data & Society 11:20539517241235862 (2024). https://doi.org/10.1177/20539517241235862

8. Rehak, R.: On the (im)possibility of sustainable artificial intelligence. Internet Policy Review (2024). https://policyreview.info/articles/news/impossibility-sustainable-artificial-intelligence/1804

9. Wierman, A., Ren, S.: The Uneven Distribution of AI’s Environmental Impacts. Harvard Business Review (2024). https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts

10. What we don’t talk about when we talk about AI. Joseph Rowntree Foundation (2024). https://www.jrf.org.uk/ai-for-public-good/what-we-dont-talk-about-when-we-talk-about-ai

11. Duarte, T., Barrow, N., Bakayeva, M., Smith, P.: Editorial: The ethical implications of AI hype. AI Ethics 4:649–51 (2024). https://doi.org/10.1007/s43681-024-00539-x

12. Singh, A.: The AI Bubble. Social Science Encyclopedia (2024). https://www.socialscience.international/the-ai-bubble

13. Floridi, L.: Why the AI Hype is Another Tech Bubble. Philos Technol 37:128 (2024). https://doi.org/10.1007/s13347-024-00817-w

14. Vrabič Dežman, D.: Interrogating the Deep Blue Sublime: Images of Artificial Intelligence in Public Media. In: Cetinic, E., Del Negueruela Castillo, D. (eds.) From Hype to Reality: Artificial Intelligence in the Study of Art and Culture. HumanitiesConnect, Rome/Munich (2024). https://doi.org/10.48431/hsah.0307

15. Vrabič Dežman, D.: Promising the future, encoding the past: AI hype and public media imagery. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00474-x

16. Romele, A.: Images of Artificial Intelligence: a Blind Spot in AI Ethics. Philos Technol 35:4 (2022). https://doi.org/10.1007/s13347-022-00498-3

17. Singler, B.: The AI Creation Meme: A Case Study of the New Visibility of Religion in Artificial Intelligence Discourse. Religions 11:253 (2020). https://doi.org/10.3390/rel11050253

18. Steenson, M.W.: A.I. Needs New Clichés. Medium (2018). https://web.archive.org/web/20230602121744/https://medium.com/s/story/ai-needs-new-clich%C3%A9s-ed0d6adb8cbb

19. Hermann, I.: Beware of fictional AI narratives. Nat Mach Intell 2:654 (2020). https://doi.org/10.1038/s42256-020-00256-0

20. Cave, S., Dihal, K.: The Whiteness of AI. Philos Technol 33:685–703 (2020). https://doi.org/10.1007/s13347-020-00415-6

21. Mhlambi, S.: God in the image of white men: Creation myths, power asymmetries and AI (2019). https://web.archive.org/web/20211026024022/https://sabelo.mhlambi.com/2019/03/29/God-in-the-image-of-white-men

22. How to invest in AI’s next phase. J.P. Morgan Private Bank U.S. Accessed 2025 Feb 18. https://privatebank.jpmorgan.com/nam/en/insights/markets-and-investing/ideas-and-insights/how-to-invest-in-ais-next-phase

23. Jensen, G., Moriarty, J.: Are We on the Brink of an AI Investment Arms Race? Bridgewater (2024). https://www.bridgewater.com/research-and-insights/are-we-on-the-brink-of-an-ai-investment-arms-race

24. Paglen, T.: Operational Images. e-flux journal 59:3 (2014).

25. Pantenburg, V.: Working images: Harun Farocki and the operational image. In: Image Operations. Manchester University Press, pp. 49–62 (2016).

26. Parikka, J.: Operational Images: Between Light and Data. e-flux journal 133 (2023). https://web.archive.org/web/20230530050701/https://www.e-flux.com/journal/133/515812/operational-images-between-light-and-data/

27. Celis Bueno, C.: Harun Farocki’s Asignifying Images. tripleC 15:740–54 (2017). https://doi.org/10.31269/triplec.v15i2.874

28. Romele, A., Severo, M.: Microstock images of artificial intelligence: How AI creates its own conditions of possibility. Convergence: The International Journal of Research into New Media Technologies 29:1226–42 (2023). https://doi.org/10.1177/13548565231199982

29. Moran, R.E., Shaikh, S.J.: Robots in the News and Newsrooms: Unpacking Meta-Journalistic Discourse on the Use of Artificial Intelligence in Journalism. Digital Journalism 10:1756–74 (2022). https://doi.org/10.1080/21670811.2022.2085129

30. De Dios Santos, J.: On the sensationalism of artificial intelligence news. KDnuggets (2019). https://www.kdnuggets.com/on-the-sensationalism-of-artificial-intelligence-news.html/

31. Rogers, R.: Aestheticizing Google critique: A 20-year retrospective. Big Data & Society 5:2053951718768626 (2018). https://doi.org/10.1177/2053951718768626

32. Kelly, J.: When news orgs turn to stock imagery: An ethics Q & A with Mark E. Johnson. Center for Journalism Ethics (2019). https://ethics.journalism.wisc.edu/2019/04/09/when-news-orgs-turn-to-stock-imagery-an-ethics-q-a-with-mark-e-johnson/

33. Papaevangelou, C.: Funding Intermediaries: Google and Facebook’s Strategy to Capture Journalism. Digital Journalism 1–22 (2023). https://doi.org/10.1080/21670811.2022.2155206

34. Steyerl, H.: In Defense of the Poor Image. e-flux journal 10 (2009). https://www.e-flux.com/journal/10/61362/in-defense-of-the-poor-image/

35. Bucher, T.: Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society 14:1164–80 (2012). https://doi.org/10.1177/1461444812440159

36. Bucher, T.: If…Then: Algorithmic Power and Politics. Oxford University Press (2018).

37. Gillespie, T.: Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media. Yale University Press, New Haven (2018).

38. Jasanoff, S., Kim, S.-H. (eds.): Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. University of Chicago Press, Chicago, IL. Accessed 2022 Jun 26. https://press.uchicago.edu/ucp/books/book/chicago/D/bo20836025.html

39. O’Neill, J.: Social Imaginaries: An Overview. In: Peters, M.A. (ed.) Encyclopedia of Educational Philosophy and Theory. Springer Singapore, pp. 1–6 (2016). https://doi.org/10.1007/978-981-287-532-7_379-1

40. Law, H.: Computer vision: AI imaginaries and the Massachusetts Institute of Technology. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00389-z

41. Nguyen, D., Hekman, E.: The news framing of artificial intelligence: a critical exploration of how media discourses make sense of automation. AI & Soc 39:437–51 (2024). https://doi.org/10.1007/s00146-022-01511-1

42. Woo, L.J., Henriksen, D., Mishra, P.: Literacy as a Technology: a Conversation with Kyle Jensen about AI, Writing and More. TechTrends 67:767–73 (2023). https://doi.org/10.1007/s11528-023-00888-0

43. Kvåle, G.: Critical literacy and digital stock images. Nordic Journal of Digital Literacy 18:173–85 (2023). https://doi.org/10.18261/njdl.18.3.4

44. Tacheva, Z., Appedu, S., Wright, M.: AI as “Unstoppable” and Other Inevitability Narratives in Tech: On the Entanglement of Industry, Ideology, and Our Collective Futures. AoIR Selected Papers of Internet Research (2024). https://doi.org/20250206083707000

‘Weaved Wires Weaving Me’ by Laura Martinez Agudelo

At the top, a digital collage featuring a computer monitor with circuit board patterns on the screen. A Navajo woman is seated on the edge of the screen, appearing to stitch or fix the digital landscape with her hands. Blue digital cables extend from the monitor, keyboard, and floor, connecting the image elements. Beneath is the text in black, 'Weaved Wires Weaving Me' by Laura Martinez Agudelo. In the top right corner is a maroon text box with white text: 'through my eyes blog series'.

Artist contributions to the Better Images of AI library have always played an important role in fostering understanding and critical thinking about AI technologies and their context. Images facilitate deeper inquiries into the nature of AI, its history, and its ethical, social, political and legal implications.

When artists create better images of AI, they often have to grapple with these narratives in their attempts to portray the technology more realistically and point towards its strengths and weaknesses. Furthermore, as artists freely share these images in our library, others can learn about the artists' own motivations (provided in the image descriptions), and the images can also inspire users' own musings.

In our blog series, “Through My Eyes”, some of our volunteer stewards take turns selecting an image from the Archival Images of AI collection. They delve into the artist’s creative process and explore what the image means to them—seeing it through their own eyes.

At the end of 2024, we released the Archival Images of AI Playbook with AIxDESIGN and the Netherlands Institute for Sound and Vision. The playbook explores how existing images – especially those from digital heritage collections – can help us craft more meaningful visual narratives about AI. Through various image-makers’ own attempts to make better images of AI, the playbook shares numerous techniques which can teach you how to transform existing images into new creations. 

Here, Laura Martinez Agudelo shares her personal reflections on ‘Weaving Wires 1’ – Hanna Barakat's own better image of AI, created for the playbook. Laura comments on how the image uncovers the hidden labor of Navajo women behind the assembly of microchips in Silicon Valley – inviting us to confront the oppressive cultural conditions of conception, creation and mediation in the technology industry's approach to innovation.


Digital collage featuring a computer monitor with circuit board patterns on the screen. A Navajo woman is seated on the edge of the screen, appearing to stitch or fix the digital landscape with her hands. Blue digital cables extend from the monitor, keyboard, and floor, connecting the image elements.

Hanna Barakat + AIxDESIGN & Archival Images of AI / Better Images of AI / Weaving Wires 1 / CC-BY 4.0


Cables came out and crossed my mind 

Weaving wires 1 by Hanna Barakat is about hidden histories of computer labor. As explained in the image's description, her digital collage is inspired by the history of computing in 1960s Silicon Valley, where the Fairchild Semiconductor company employed Navajo women for intensive tasks such as assembling microchips. Their work (with, quite literally, their hands and their digits) was a way for these women to provide for their families in an economically marginalized context.

At that time, this labor was framed as a way to legitimize the transfer of weaving cultural practices into technological innovation. This legitimation appears to be an illusion: it claims to reconcile the unchanging character of weaving as heritage with the constant renewal of global industry, while presupposing the non-recognition of Navajo women's labor and a techno-cultural, gendered transaction. Their work is diluted in meaning and action, and overlooked in the history of computing.

In Weaving wires 1, we can see a computer monitor with circuit board patterns on the screen, and a juxtaposed woven design. Two potential purposes are in dialogue in the figure of the woman sitting at the edge of the screen, suspended against a white background: is she stitching, fixing, or both, as she weaves and prolongs the wires? These blue wires extend from the monitor, keyboard and beyond. The woman seems to be modifying or constructing a digital landscape with her own hands, leading us to remember the place where these materialities come from, and the memories they connect to.

Since my mother tongue is Spanish, a distant memory of the word “Navajo” and the image of weaving women appeared. “Navajo” is a Spanish adaptation of the Tewa Pueblo word navahu’u, which means “farm fields in the valley”. The Navajo people call themselves Diné, literally meaning “The People”. At this point, I began to think about the specific socio-spatial conditions of Navajo/Diné women at that time and their misrepresentation today. When I first saw the collage, I felt these cables crossing my own screen. Many threads began to unravel in my head in the form of question marks. I wondered how older and younger generations of Navajo/Diné women have experienced (and in other ways inherited) this hidden labor associated with the transformation of the valley and their community. The image stands as a visual opposition to the geographic and social identification of Silicon Valley as presented, for example, in the media. So now, these wires expand the materiality to reveal their history. Hanna creatively represents the connection between key elements of this theme. Let's explore some of her artistic choices.

Recoded textures as visual extensions 

Hanna Barakat is a researcher, artist and activist who studies emerging technologies and their social impact. I discovered her work thanks to the Archival Images of AI project (Launch & Playtest). Weaving wires 1 is part of a larger project in which Hanna proposes a creative dialogue between textures and technology. Hanna plays with intersections of visual forms to raise awareness of the social, racial and gender issues behind technologies. Weaving wires 1 reconnected me with the importance of questioning the human and material extractive conditions in which technological devices are produced.

As a lecturer in (digital) communication, I'm often looking for visual support on topics such as the socio-economic context in which the Internet appeared, the evolution of the Web, the history of computer culture, and socio-technical theories and examples to study technological innovation, its problems and ethical challenges. The visual narratives are mostly uniform, and the graphic references are also gendered. Women's work is most of the time misrepresented (no, those women in front of the big computers are not just models or assistants; they have full names and they are the official programmers and coders. Take a look at the work of Kathy/Kathryn Kleiman… Unexplored archives are waiting for us!).

When I visually interacted with Weaving wires 1 and read about its source of inspiration (I actually used and referenced the image in one of my lectures), I realized once again the need to make visible the herstory (a term coined in the 1960s as a feminist critique of conventional historiography) of technological innovation. Sometimes, in the rush of life in general (and in specific moments like the preparation of a lecture, in my case), we forget to take the time and distance to convene other ways of exploring and sharing knowledge (with students) and to recreate the modalities of approaching essential topics for a better understanding of the socio-technical metamorphosis of our society.

Going beyond assumed landmarks

In order to understand hidden social realities, we might question our own landmarks. For me, “landmarks” are both consciously (culturally) confirmed ideas and visual/physical evidence of the existence of boundaries or limits in our (representation of) reality. Hanna's image offers an insight into the importance of going beyond some established landmarks. This idea, as a result of the artistic experience, raises questions such as: where did the devices we use every day come from, and whose labour created them? And in what other forms are these conditions extended through time and space, and for whom? You might have some answers, references, examples, or even names coming to mind right now.

In Weaving wires 1, and in Hanna's artistic contribution, several essential points are raised. Some of them are often missing in discourses and practices around emerging technologies like AI systems: the recognition of the human labor that supports the material realities of technological tools, the intersection of race and gender, the roots of digital culture and industry, and the need to explore new visual narratives that reflect technology's real conditions of production.

Fix, reconnect and reimagine

Hanna uses digital collage (along with techniques such as juxtaposition, overlaying and/or distortion – she explains her approach with examples in her artist log). She explores ways to honor the stories she conjures up by rejecting colonial discourses. For me, in the case of Weaving wires 1, these wires connect to our personal experiences with technological devices and memories of the digital transformation of our society. They could also represent the need to imagine and construct together, as citizens, more inclusive (technological) futures.

A digital landscape is somewhere there, or right in front of us. Weaving wires 1 will be extended by Hanna in Weaving wires 2, which questions the meaning of the valley landscape itself and its borders. For now, some other transversal questions appear (still inspired by her first image) about deterministic approaches to studying data-driven technology and its intersection with society: what fragments or temporalities of our past are we willing and able to deconstruct? Which ones filter the digital space and ask for other ways of understanding? How can we reconnect with the basic needs of our world if different forms of violence (physical and symbolic), in this case in human labor, are not only hidden, but avoided, neglected or unrepresented in the socio-digital imaginary?

It is a necessary discussion, one that confronts our collective memory and the concrete experiences in between. Weaving wires 1 invites us to confront the oppressive cultural conditions of conception, creation and mediation in the technology industry's approach to innovation. With this image, Hanna brings us a meaningful contribution: she deconstructs simplistic assumptions and visual perspectives to actually create ‘better images of AI’!


About the author

Laura Martinez Agudelo is a Temporary Teaching and Research Assistant (ATER) at the University Marie & Louis Pasteur – ELLIADD Laboratory. She holds a PhD in Information and Communication Sciences. Her research interests include socio-technical devices and (digital) mediations in the city, visual methods and modes of transgression and memory in (urban) art.   

This post was also kindly edited by Tristan Ferne – lead producer/researcher at BBC Research & Development.


If you want to contribute to our new blog series, ‘Through My Eyes’, by selecting an image from the Archival Images of AI collection and exploring what the image means to you, get in touch (info@betterimagesofai.org)

Hanna Barakat’s image collection & the paradoxes of depicting diversity in AI history

A black-and-white image depicting the Bombe, an early computing machine used during World War II. In the foreground, the shadow of a woman in vintage clothing is cast over a man changing the machine's cables.

As part of a collaboration between Better Images of AI and Cambridge University's Diversity Fund, Hanna Barakat was commissioned to create a digital collage series depicting diverse images about the learning and education of AI at Cambridge. Hanna's series complements the competition we opened to the public at the end of last year, which invited submissions for better images of AI from the wider community – you can see the winning entries here.

In the blog post below, Hanna Barakat talks about her artistic process and reflections upon contributing to this collection. Hanna provides her thoughts on the challenges of creating images that communicate about AI histories and the inherent contradictions that arise when engaging in this work.

The purpose behind the collection

As outlined by the Better Images of AI project, normative depictions of AI continue to perpetuate negative gender and racial stereotypes about the creators, users, and beneficiaries of AI. Moreover, they misdirect attention from the harms implicit in the real-life applications of the technology. The lack of diversity—and the problematic interpretation of diversity—in AI-generated images is not merely an ‘output’ issue that can be easily fixed. Instead, it stems from deep-rooted systemic issues that reflect a long history of bias in data science.

As a result, even so-called ‘diverse’ images created by AI often end up reinforcing these harms [Fig. 1]. The image below adopts token diversity tropes – a wheelchair, different skin tones, and a mix of genders – superficially appearing diverse without addressing deeper issues like context, intersectionality, and the inclusion of underrepresented groups in leadership roles. The teacher remains an older, able-bodied white male, and the students all appear to be conventionally attractive, similarly sized individuals wearing almost-matching clothing. The image also shows a fictional blue holographic robot in the centre – misrepresenting what generative AI is and exaggerating the capabilities of the technology.

Figure 1. Image depicting an educational course on Generative AI.

As academic institutions like the Leverhulme Centre for the Future of Intelligence explore “vital questions about the risks and opportunities emerging with AI,” they commissioned images that reflect a more nuanced depiction of those risks and opportunities. Specifically, they requested seven images representing the diversity of Cambridge's teaching about AI, with the intention of using these images for courses, websites, and events programs.

Hanna’s artistic process

My process takes a holistic approach to “diversity” – aiming to avoid the “DEI-washing” images that reduce diversity to a gradient of brown bodies or tokenization of marginalized groups in the name of “inclusion” but often fail to acknowledge the positionality of the institutions utilizing such images.

Instead, my approach interrogates the development of AI technology, the history of computing in the UK, and the positionality of elite institutions such as Cambridge University to create thoughtful images about the education of AI at Cambridge.

Analog Lecture on Computing by Hanna Barakat & Cambridge Diversity Fund and Pas(t)imes in the Computer Lab by Hanna Barakat & Cambridge Diversity Fund

Through digital collages of open-source archival images, this series offers a critical visual depiction of education about AI. Collage is a way of moving against the archival grain – from reinserting the overlooked women who ran cryptanalysis of the Enigma machine at Bletchley Park, to surrealist depictions of a historically contextualized lecture about AI. By combining mixed media layers, my artistic process seeks to weave together historical narratives and investigate the voices systemically overlooked and/or left out.

I carefully navigated the archive and relied on visual motifs of hands, strings, shadows, and data points. Throughout the series, these elements engage with the histories of UK computing as a starting point to expose the broader sociotechnical nature of AI. The use of anonymous hands becomes a way of encouraging reflection upon the human labor that underpins all machines. The use of shadows symbolizes the unacknowledged labor of marginalized communities throughout the Global Majority.

Turning Threads of Cognition by Hanna Barakat & Cambridge Diversity Fund

It is these communities upon which technological “process” has relied and at whose expense “progress” has been achieved. I use an abstract interpretation of data points to symbolize the exchange of information and learning on university campuses. I was inspired by Ada Lovelace; the Cavendish Laboratory archive, which holds photos from the early history of computing; the story of the Cambridge Language Research Unit (CLRU) run by Margaret Masterman; and Jean Valentine and the many other Cambridge-educated women at Bletchley Park who made Alan Turing's achievements possible.

Lovelace GPU by Hanna Barakat & Cambridge Diversity Fund

The challenges of creating images relating to the diverse history of AI

Nonetheless, I remain cautious about imbuing these images with too much subversive power. Like any nuanced undertaking, this project grapples with tension, including: navigating the challenge of representing diverse bodies without tokenizing them; drawing from archival material while recognizing the imperialist incentives that shaped its creation; portraying education about AI in ways that are both literal and critically reflective, particularly in contexts where racial and ethnic diversity (in the histories of the UK) is not necessarily commonplace; and balancing a respect for the critical efforts of the CFI with an awareness of its positionality as an elite institution. On a practical level, I encountered challenges with the limited number of images available, as many were not fully licensed for open access.

I list these tensions not as a means of demonstrating hypocrisy but, quite the opposite, to illuminate the complexities and inherent contradictions that arise when engaging in this work. By highlighting these points of friction, I am able to acknowledge the layered positionality that shapes both the process and the outcomes, emphasizing that such tensions are not obstacles to be avoided but essential facets of critically engaged practice.

If you want to read more about the processes behind Hanna's work, view her Artist Log on the AIxDESIGN site. You can also learn how to make your own archival images of AI by exploring the Playbook that we released at the end of 2024 with AIxDESIGN and the Netherlands Institute for Sound and Vision.

Dr Aisha Sobey was behind the project which was commissioned with funding from Cambridge Diversity Fund

“This project grew from the desire of CFI, across multiple collaborations with Better Images of AI, to have better images of AI in relation to the teaching and learning we do at the Centre, and from my research into the ‘lookism’ of generative AI image models. I knew that asking for the combination of criteria to show anonymous, diverse people in images of AI learning would be tricky, but even as the project evolved to take a historical lens to reclaim lost histories, this proved to be a really difficult task for the artists.

The images created by Hanna and the entries to the prize competition showed some brilliant and unique takes on the prompt. Still, they often struggled to bring diverse people and Cambridge together. This points to the barriers to showing difference in an ethical way that doesn't tokenise or exploit already marginalised groups – a challenge we didn't solve in these images – and to the need for more diverse people in places like Cambridge to tell these stories. However, I am hopeful that the process has been valuable in illuminating different challenges of doing this kind of work, and that the images offer alternative and exciting perspectives on the representation of diversity in learning and teaching AI at the University.”

Artist Subjectivity Statement

In creating these images which seek to depict diversity, it is imperative to address the “experience of the knower.” Thus, consistent with a critical feminist framework, I feel it is important to share my identity and positionality as it undoubtedly shapes my artistic practice and influences my approach to digital technologies.

My name is Hanna Barakat. I am a 25-year-old science & technology studies researcher and collage artist. I am a female-identifying Palestinian-American. While I was raised in Los Angeles, California, I am from Anabta, Palestine. Growing up in the Palestinian diaspora, my experience is informed by layers of systemic violence that traverse the digital-physical “divide.” I received my education from Brown University, a reputable university in the United States.

Brown University’s founders and benefactors participated in and benefited from the transatlantic slave trade. Brown University is built on the stolen lands of the Narragansett, Wôpanâak, and Pokanoket communities. In this light, I materially benefit from, and to some degree am harmed by, my location within systems of settler colonialism, whiteness, racial capitalism, Islamophobia, heteropatriarchy, and education inequality. My identity, lived experiences, and fraught relationship with technology inform my artistic practice, which uses visual language as a tool to (1) critically challenge normative narratives about technology development and (2) imagine culturally contextualized and localized digital futures.

Press release: New playbook released to enable creation of images of AI using free and open licence digital heritage collections from around the world


  • Archival Images of AI project enables the creation of meaningful and compelling images of AI
  • New playbook includes 38 pages of guidance and sources of free-to-use archive images
  • Showcases methods and tips for remixing archive images which can be used by anyone 
  • Inspirational artists have created free-to-use examples of their own interpretations of AI 

LONDON / AMSTERDAM 4th December 2024: As AI continues to make headlines and evolve in ways that impact the general public, global critical AI research community AIxDESIGN has released a research-informed playbook for remixing free and open licence images to create better images of artificial intelligence. It uses techniques that anyone can apply without the use of AI image generators.

Producing accurate images of AI – whether technically accurate or suitable for a given narrative or situation – is not always easy without an illustrator or access to a wide variety of images that can be easily edited or remixed. AIxDESIGN, in partnership with the Netherlands Institute for Sound & Vision, with inspiration from Better Images of AI and support from We and AI, has released a playbook to address this challenge by working with free images from consented archives around the world and artists immersed in expressing their experiences and understanding of the technology.

Archival Images of AI Playbook

The playbook includes vital information about the use of archive images as well as details about the creation and representation of artificial intelligence through visual narratives. The project builds on the principles outlined in Better Images of AI: A Guide for Users and Creators that explain why accuracy is important when it comes to communicating these technologies to the wider public. 

By making poor choices about how AI is visualised, communications from media to marketing often risk misinforming or misleading the public about how it works, what it means and the impact it can have. The playbook offers new ways to interpret images of AI by engaging with cultural archives to explore historical and social context. It also offers sources of visual stimuli and motifs that can be used freely, under open licences, by anyone seeking to illustrate their writing or communicate AI news and reflection.

A highly creative and reflective selection of artists and researchers have contributed to the guide to offer tutorials and examples, including: 

Hanna Barakat, researcher, activist and collage artist. She's been deep in researching narratives of AI and exploring collage as an act of resistance.

Cristóbal Ascencio, a Mexican visual artist. As a photographer, his practice explores new forms of image making such as virtual reality, data manipulation and photogrammetry. 

Zeina Saleem, graphic designer interested in data beautification and the aesthetics of algorithmic distortion. 

Dominika Čupková, interdisciplinary artist and researcher connecting the dots between AI, art, design and feminism.

Nadia Piet, an independent researcher, designer, and co-founder and creative director of AIxDESIGN.

The playbook is available for anyone to download and is accompanied by detailed artist logs available at https://aixdesign.co/posts/archival-images-of-ai. Readers can explore the works’ origins and development and input from Eryk Salvaggio, Cees Martens, Isabel Beirigo, Monique Groot, Danny van Zuijlen, Alice Isaac, Anne Fehres and Luke Conroy.

The playbook is being launched at an interactive event where attendees will have an opportunity to test and play with the techniques and interact with the artists.

A varied and powerful selection of over 25 of the images created by the artists will be added to the free Better Images of AI image library where any individual or publication can use the images for free. 

The playbook can be downloaded at https://aixdesign.co/posts/archival-images-of-ai and https://blog.betterimagesofai.org/archival-images-of-ai-playbook/.

About Netherlands Sound & Vision

The Netherlands Institute for Sound & Vision is a knowledge institute in the field of media culture and audiovisual archiving. It specialises in cultural programming, educational offering and research that makes media heritage available, searchable and relevant. Learn more at https://www.beeldengeluid.nl/en. 

About AIxDESIGN 

AIxDESIGN (AIxD) is a global community of designers, researchers, creative technologists, and activists using AI in pursuit of creativity, justice and joy, and a living lab exploring participatory, slow, and more-than-corporate AI. Learn more at aixdesign.co.

About Better Images of AI

Better Images of AI is a global non-profit collaboration which curates and commissions stock images that avoid perpetuating unhelpful myths about artificial intelligence, downloadable for free. It provides guidelines and research, and creates a space for imagining and creating more inclusive, transparent and realistic visual representations of AI themes and technologies, avoiding overused clichés and alienating, disempowering tropes. It was launched in 2021 with input from a global community of researchers, practitioners and institutions, including BBC R&D, and is coordinated by We and AI.

💬 Behind the Image with Yutong from Kingston School of Art

This year, we collaborated with Kingston School of Art to give MA students the task of creating their own better images of AI as part of their final project. 

In this mini-series of blog posts called ‘Behind the Images’, our Stewards are speaking to some of the students that participated in the module to understand the meaning of their images, as well as the motivations and challenges that they faced when creating their own better images of AI. Based on our assessment criteria, some of the images will also be uploaded to our library for anyone to use under a creative commons licence. 

In our third and final post, we go ‘Behind the Image’ with Yutong about her pieces, ‘Exploring AI’ and ‘Talking to AI’. Yutong intends that her art will challenge misconceptions about how humans interact with AI.

You can freely access and download ‘Talking to AI’ and both versions of ‘Exploring AI’ from our image library.

Both of Yutong’s images are available in our library, but as you might discover below, there were many challenges that she faced when developing these works. We greatly appreciate Yutong letting us publish her images and talking to us for this interview. We are hopeful that her work and our conversations will serve as further inspiration for other artists and academics who are exploring representations of AI.

Can you tell us a bit about your background and what drew you to the Kingston School of Art?

Yutong is from China and, before starting her MA in Illustration at Kingston University, she completed an undergraduate major in Business Administration. What drew Yutong to Kingston School of Art was its highly regarded illustration course. She also enjoys how the course balances the commercial and academic aspects of art – allowing her to combine her previous studies with her creative passions.

Could you talk me through the different parts of your images and the meaning behind them?

In both of her images, Yutong wishes to unpack the interactions between humans and AI – albeit from two different perspectives.

‘Talking to AI’

Firstly, ‘Talking to AI’ focuses on more accurately representing how AI works. Yutong uses a mirror to reflect how our current interactions with AI are based on our own prompts and commands. At present, AI cannot generate content independently, so it reflects the thoughts and opinions that humans feed into systems. The binary code behind the mirror symbolises how human prompts and data are translated into the computer language which powers AI. Yutong uses the mirror to capture an element of human–AI interaction that is often overlooked – the blurred transition from human work to AI generation.
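For readers curious what that translation looks like in practice, here is a minimal sketch (my own illustration, not part of Yutong's artwork): every character of a human prompt has a numeric code, and that number, stored as bits, is what the machine actually processes. The prompt string is borrowed from Yutong's second image purely for illustration.

```python
# Minimal sketch: encode a human prompt into the 8-bit binary patterns
# that a computer stores. Illustrative only.
prompt = "Hi, I am AI"
binary = " ".join(format(byte, "08b") for byte in prompt.encode("utf-8"))
print(binary)  # 'H' -> 01001000, 'i' -> 01101001, ...
```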

‘Exploring AI’

Yutong’s second image, ‘Exploring AI’, aims to shed light on the nuanced interactions that humans have with AI on multiple levels. Firstly, the text ‘Hi, I am AI’ pays homage to an iconic phrase in programming (‘Hello World’), often the first thing any coder learns to write, and the foundation of a coder's understanding of a programming language's syntax, structure, and execution process. Yutong thought this was fitting for her image, as she wanted to represent the rich history and applications of AI, which has its roots in basic code.
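For reference, the tradition Yutong nods to really is this small; in Python, the canonical first program is a single line:

```python
# "Hello, World!": traditionally the first program written in a new language.
print("Hello, World!")
```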

Within ‘Exploring AI’, each grid square is used to represent the various applications of AI in different industries. The text expanded across multiple grid squares demonstrates how one AI tool can have uses across different industries – ChatGPT is a prime example of this.

However, Yutong also wants to draw attention to the figures within each square, which all interact with AI in complex and different ways. For example, the body language of the figures depicts them as variously frustrated, curious, playful, sceptical, affectionate, indifferent, or excited towards the text, ‘Hi, I am AI’.

Yutong wants to show how our human response to AI changes and varies contextually, driven by our own personal conceptions of AI. From her own observations, Yutong identified that most people have either a very positive or very negative opinion of AI – not many feel anything in between. By including all the different emotional responses towards AI in this image, Yutong hopes to introduce greater nuance into people's perceptions of AI and help people understand that AI can evoke different responses in different contexts.

What was your inspiration/motivation for creating your images?

As an illustrator, Yutong found herself surrounded by artists who were fearful that AI would replace their role in society. Yutong found that people are often fearful of the unknown and of things they cannot control. By improving understanding of what AI is and how it works through her art, Yutong hopes she can help her fellow creators face their fears and better understand their creative role in the face of AI.

Through her art, ‘Exploring AI’ and ‘Talking to AI’, Yutong intends to challenge misconceptions about what AI is and how it works. As an AI user herself, she has realised that human illustrators cannot be replaced by AI – these systems are reliant on the works of humans and do not yet have the creative capabilities to replace artists. Yutong is hopeful that by being better educated on how AI integrates in society and how it works, artists can interact with AI to enhance their own creativity and works if they choose to do so. 

Was there a specific reason you focused on dispelling misconceptions about what AI looks like and how ChatGPT (or other large language models) work?

Yutong wanted to focus on how AI and humans interact in the creative industry and she was driven by her own misconceptions and personal interactions with AI tools. Yutong does not intend for her images to be critical of AI. Instead, she envisages that her images can help educate other artists and prompt them to explore how AI can be useful in their own works. 

Can you describe the process for creating this work?

From the outset, Yutong began to sketch her own perceptions and understandings of how AI and humans interact. The sketch below shows her initial inspiration. The point at which the shapes overlap represents how humans and AI can come together and create a new shape – symbolising how our interactions with technology can unlock new ideas, feelings, and also challenges.

In this initial sketch, she chose to use different shapes to represent the universality of AI and how its diverse applications mean that AI doesn’t look like one thing – AI can underlie an automated email response, a weather forecast, or a medical diagnosis.

Yutong’s initial sketch for ‘Talking to AI’

The project aims to counteract common stereotypes and misconceptions about AI. How did you incorporate this goal into your artwork? 

In ‘Exploring AI’, Yutong wanted to introduce a more nuanced approach to AI representation by unifying different perspectives about how people feel, experience and apply AI in one image. From discussions with people using AI in different industries, she recognised that those who were very optimistic about AI didn’t recognise its shortfalls – and vice versa. Yutong believes that humans have a role in helping AI reach new technological advancements, and that AI can also help humans flourish. In Yutong’s own words, “we can make AI better, and AI can make us better”.

Yutong found talking to people in the industry, as well as conducting extensive research about AI, very important for portraying AI’s uses and functions more accurately. She points out that she used binary code in ‘Talking to AI’ after learning through research that binary is the most fundamental layer of computer language, underpinning many AI systems.

What have been the biggest challenges in creating a ‘better image of AI’? Did you encounter any challenges in trying to represent AI in a more nuanced and realistic way?

Yutong reflects on the fact that no matter how much she rethought or restarted her ideas, there was always some level of bias in her depiction of AI because of her own subconscious feelings towards the technology. She also found it difficult to capture all the different applications of AI, as well as the various implications and technical features of the technology, in a single visual image.

Through tackling these challenges, Yutong became aware of why Better Images of AI is not called ‘Best Images of AI’: the latter would be impossible. She hopes that while she could not produce the ‘best image of AI’, her art can serve as a better image compared to those typically used in the media.

Based on our criteria for selecting images, we were pleased to accept both your images but asked you if it was possible to make amendments to ‘Exploring AI’ to make the figures more inclusive. What do you think of this feedback and was it something that you considered in your process? 

For Yutong’s image ‘Exploring AI’, Better Images of AI requested an additional version with the figures in different colours, to better reflect the diverse world that we live in. Being inclusive is very important to Better Images of AI, especially as visuals of AI, and of those who are creating AI, are notoriously unrepresentative.

Yutong agreed that this change would enhance the image, and being inclusive in her art is something she is actively trying to improve. She reflects on the suggestion by saying, ‘just as different AI tools are unique, so are individual humans’.

The two versions of ‘Exploring AI’ available on the Better Images of AI library

How has working on this project influenced your own views about AI and its impact? 

During this project, Yutong has been introduced to new ideas and has been able to develop her own opinions about AI based on research from academic journals. She says that informing her opinions using academic sources was beneficial compared to relying on information provided by news outlets and social media platforms, which often contain their own biases and inaccuracies.

From this project, Yutong has been able to learn more about how AI could be incorporated into her future career as a human-and-AI creator. She has become interested in the Nightshade tool, which artists have been using to prevent AI companies from using their art to train AI systems without the owner’s consent. She envisages a future career where she could work to help artists collaborate with AI companies – supporting the rights of creators and preserving the creativity of their art.

What have you learned through this process that you would like to share with other artists and the public?

By talking to various people who interact with and use AI in different ways, Yutong has been introduced to richer ideas about the limits and benefits of AI. Yutong challenges others to talk to people who are working with AI, or are impacted by its use, to gain a more comprehensive understanding of the technology. She believes that it is easy to form a biased opinion about AI by relying on information shared by a single source, like social media, so we should escape these echo chambers. Yutong believes it is important that people diversify who they surround themselves with, to better recognise, challenge, and appreciate AI.

Yutong (she/her) is an illustrator with whimsical ideas, also an animator and graphic designer.

Co-creating Better Images of AI

Yasmine Boudiaf (left) and Tamsin Nooney (right) deliver a talk during the workshop ‘Co-creating Better Images of AI’

In July 2023, Science Gallery London and the London Office of Technology and Innovation co-hosted a workshop helping Londoners think about the kind of AI they want. In this post, Dr Peter Rees reflects on the event, describes its methodology, and celebrates some of the new images that resulted from the day.


Who can create better images of Artificial Intelligence (AI)? There are common misleading tropes in the images which dominate our culture, such as white humanoid robots, glowing blue brains, and various iterations of the extinction of humanity. Better Images of AI is on a mission to increase AI literacy and inclusion by countering unhelpful images. Everyone should get a say in what AI looks like and how they want to make it work for them. No one perspective or group should dominate how AI is conceptualised and imagined.

This is why we were delighted to be able to run the workshop ‘Co-creating Better Images of AI’ during London Data Week. It was a chance to bring together over 50 members of the public, including creative artists, technologists, and local government representatives, to each make our own images of AI. Most images of AI that appear online and in the newspapers are copied directly from existing stock image libraries. This workshop set out to see what would happen when we created new images from scratch. We experimented with creative drawing techniques and collaborative dialogues to create images. Participants’ amazing imaginations and expertise went into a melting-pot which produced an array of outputs. This blogpost reports on a selection of the visual and conceptual takeaways! I offer this account as a personal recollection of the workshop – I can only hope to capture some of the main themes and moments, and I apologise for all that I have left out.

The event was held at the Science Gallery in London on 4th July 2023 between 3-5pm and was hosted in partnership with London Data Week, funded by the London Office of Technology and Innovation (LOTI). In keeping with the focus of London Data Week and LOTI, the workshop set out to think about how AI is used every day in the lives of Londoners, to help Londoners think about the kind of AI they want, and to re-imagine AI so that we can build systems that work for us.

Workshop methodology

I said the workshop started out from scratch – well, almost. We certainly wanted to make use of the resources already out there, such as Better Images of AI: A Guide for Users and Creators, co-authored by Dr Kanta Dihal and Tania Duarte. This guide was helpful because it not only suggested some things to avoid, but also provided stimulation for what kind of images we might like to make instead. What made the workshop a success was the wide-ranging and generous contributions – verbal and visual – from invited artists and technology experts, as well as public participants, who all offered insights and produced images, some of which can be found below (or even in the Science Gallery).

The workshop was structured in two rounds, each with a live discussion and a creative drawing ‘challenge’. The approach was to stage a discussion between an artist and a technology expert (approx. 15 mins), after which all members of the workshop would have some time (again, approx. 15 mins) for creative drawing. The purpose of the live discussion was to provide an accessible introduction to the topic and its challenges, after which we all tackled the challenge of visualising and representing different elements of AI production, use and impact. I will now briefly describe these dialogues, and unveil some of the images created.

Setting the scene

Tania Duarte (Founder, We and AI) launched the workshop with a warm welcome to all. Then, workshop host Dr Robert Elliot-Smith (Director of AI and Data Science at Digital Catapult) introduced the topic of Large Language Models (LLMs) by reminding the audience that such systems are like ‘autocorrect on steroids’: the model is simply very good at predicting words; it does not have any deep understanding of the meaning of the text it produces. He also discussed image generators, which work in a similar way and with similar problems, which is why certain AI-produced images end up garbling hands and arms: they do not understand anatomy.
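To make the ‘autocorrect on steroids’ point concrete, here is a minimal sketch of next-word prediction from co-occurrence counts. It is a toy illustration only – real LLMs use neural networks trained on vast token corpora, not a lookup table – but the underlying idea of predicting a likely continuation rather than understanding meaning is the same:

    from collections import Counter, defaultdict

    # A toy corpus; a real LLM is trained on trillions of words.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # The whole "model" is word co-occurrence statistics.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the most frequent continuation: prediction, not understanding."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("the"))  # -> 'cat', simply because 'cat' most often follows 'the'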

In response to this preliminary introduction, one participant, who described herself as a visual artist, expressed horror at the power of such image-generating and labelling AI systems to limit and constrain our perception of reality itself. She described how artists must train their minds to avoid seeing everything simply in terms of fixed categories, which conservatively restrain the imagination, keeping it within a set of known categorisations and limiting not only our imagination but also our future. For instance, why is the thing we see in front of us necessarily a ‘wall’? Could it not be, seen more abstractly, simply a straight line?

From her perspective, AI models seem to be frighteningly powerful mechanisms for reinforcing existing categories for what we are seeing, and therefore also of how to see, what things are, even what we are, and what kind of behaviour is expected. Another participant agreed: it is frustrating to get the same picture from 100 different inputs and they all look so similar. Indeed, image generators might seem to be producing novelty, but there is an important sense in which they are reinforcing the past categories of the data on which they were trained.

This discussion raised big questions leading into the first challenge: the limitations of large language models.

Round 1: The Limitations of Large Language Models

A live discussion was staged between Yasmine Boudiaf (recognised as one of ‘100 Brilliant Women in AI Ethics 2022,’ and fellow at the Ada Lovelace Institute) and Tamsin Nooney (AI Research, BBC R&D) about the process of creating LLMs.

Yasmine asked Tamsin about how the BBC, as a public broadcaster, can use LLMs in a reliable manner, and invited everyone in the room to note down any words they found intriguing, as those words might form a stimulus for their creative drawings.

Tamsin described an example LLM use case for the BBC in podcast production, where an LLM could summarise the content, add in key markers and metadata labels, and help to process the content. She emphasised how rigorous testing is required to gain confidence in an LLM’s reliability for a specific task before it can be used. A risk is that a lot of work might go into developing a model only for it never to be usable at all.

Following Yasmine’s line of questioning, Tamsin described how the BBC deals with the significant costs and environmental impacts of using LLMs. The BBC calculated that training their own LLM, even a very small one, would occupy all their servers at full capacity for over a year – so they won’t do that! The alternative is to pay other services, such as Amazon, to use their models, which means balancing costs: there are limits due to scale, cost, and environmental impact.

This was followed by a quieter, but by no means silent, 15 minutes of drawing time in which all participants drew…

Drawing by Marie Jannine Murmann. Abstract cogwheels suggesting that AI tools can be quickly developed to output nonsense but, with adequate human oversight and input, AI tools can be iteratively improved to produce the best outputs they can.

One participant used an AI image generator for their creative drawing, making a picture of a toddler covered in paint to depict the LLM and its unpredictable behaviours. Tamsin suggested that this might be giving the LLM too much credit! Toddlers, like cats and dogs, have a basic and embodied perception of the world and base knowledge, which LLMs do not have.

Drawing by Howard Elston. An LLM is drawn as an ear, interpreting different inputs from various children.

The experience of this discussion and drawing also raised, for another participant, more big questions. She discussed poet David Whyte’s work on the ‘conversational nature of reality’ and reflected on how the self is not just inside us but is created through interaction with others and through language. For instance, she mentioned that when you read or hear the word ‘yes’, you have a physical feeling of ‘yesness’ inside, and similarly for ‘no’. She suggested that our encounters with machine-made language produced by LLMs are similar. This language shapes our conversations and interactions, so there is a sense in which the ‘transformers’ (the technical term for the LLM machinery) are also helping to transform our senses of self and the boundary between what is reality and what is fantasy.

Here, we have the image made by artist Yasmine based on her discussion with Tamsin:

Image by Yasmine Boudiaf. Three groups of icons representing people have shapes travelling between them and a page in the middle of the image. The page is a simple rectangle with straight lines representing data. The shapes traveling towards the page are irregular and in squiggly bands.

Yasmine writes:

This image shows an example of Large Language Model in use. Audio data is gathered from a group of people in a meeting. Their speech is automatically transcribed into text data. The text is analysed and relevant segments are selected. The output generated is a short summary text of the meeting. It was inspired by BBC R&D’s process for segmenting podcasts, GPT-4 text summary tools and LOTI’s vision for taking minutes at meetings.

Yasmine Boudiaf
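The pipeline Yasmine depicts – transcribe, select, summarise – can be sketched in a few lines. The helpers below are toy stand-ins of my own devising (keyword filtering in place of real relevance analysis, a plain list in place of an LLM-written summary), not the BBC’s or LOTI’s actual tooling:

    def transcribe(utterances):
        """Stand-in for the speech-to-text step."""
        return [f"{speaker}: {text}" for speaker, text in utterances]

    def select_relevant(lines, keywords=("decided", "action", "deadline")):
        """Stand-in for analysis: keep segments likely to matter for minutes."""
        return [line for line in lines if any(k in line.lower() for k in keywords)]

    def summarise(segments, max_items=3):
        """Stand-in for an LLM summary: list the selected segments."""
        return "Meeting summary:\n" + "\n".join(f"- {s}" for s in segments[:max_items])

    meeting = [
        ("Ana", "We decided to publish the report on Friday."),
        ("Ben", "Lunch was great."),
        ("Cem", "Action: Ben drafts the press release, deadline Thursday."),
    ]
    print(summarise(select_relevant(transcribe(meeting))))

Each stage can fail independently – a mis-transcription propagates into the summary – which is why, as Tamsin noted, rigorous testing per task is essential.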

You can now find this image in the Better Images of AI library, and use it with the appropriate attribution: Image by Yasmine Boudiaf / © LOTI / Better Images of AI / Data Processing / CC-BY 4.0. With the first challenge complete, it was time for the second round.

Round 2: Generative AI in Public Services

This second and final round focused on use cases for generative AI in the public sector, specifically in local government. Again, a live discussion was held, this time between Emily Rand (illustrator and author of seven books, recognised by the Children’s Laureate, Lauren Child, to be featured in Drawing Words) and Sam Nutt (Researcher & Data Ethicist, London Office of Technology and Innovation). They built on the previous exploration of LLMs by considering the new generative AI applications LLMs enable for local councils, and how these might transform our everyday services.

Emily described how she illustrates by hand, and described her work as focusing on the tangible and the real. Making illustrations about AI, whose workings are not obviously visible, was an exciting new topic. See her illustration and commentary below.

Sam described his role as part of the innovation team which works across 26 of London’s boroughs and the Mayor of London. He helps boroughs to think about how to use data responsibly. In the context of local government data and services, a lot of data collected about residents is statutory (meaning they cannot opt out of giving it), such as council tax data. There is a strong imperative when dealing with such data, especially sensitive personal health data, to protect privacy and minimise bias. He considered some use cases. For instance, council officers can use ChatGPT to draft letters to residents to increase efficiency, but they must not put any personal information into ChatGPT, otherwise data privacy can be compromised. Another example is the use of LLMs to summarise large archives of local government data, such as planning permission applications or minutes from council meetings, which are lengthy and often technical; these could be made significantly more accessible to members of the public and researchers.

Sam also raised the concern that residents must know how councils use their data so that councils can be held accountable; this therefore has to be explained and made understandable to residents. Note that 3% of Londoners are totally offline, not using the internet at all – that is 270,000 people, who also have an equal right to understand how the council uses their data, and who need to be reached through offline means. This example brings home the importance of increasing inclusive public AI literacy.

Again, we all drew. Here are a couple of striking images made by participants who also kindly donated their pictures and words to the project:

Drawing by Yokako Tanaka. An abstract blob is outlined encrusted with different smaller shapes at different points around it. The image depicts an ideal approach to AI in the public sector, which is inclusive of all positionalities.
Drawing by Aisha Sobey. A computer claims to have “solved the banana” after listing the letters that spell “banana” – whilst a seemingly analytical process has been followed, the computer isn’t providing much insight nor solving any real problem.
Practically identical houses are lined up at the bottom of the image. Out of each house's chimney, columns of binary code – 1's and 0's – emerge.
“Data Houses,” by Joahna Kuiper. Here, the author described how these three common houses are all sending a distress signal – a new kind of smoke signal, but in binary code. And in her words: ‘one of these houses is sending out a distress signal, calling out for help, but I bet you don’t know which one.’ The problem of differentiating who needs what, and when.
A big eye floats above rectangles containing rows of dots and cryptic shapes.
“Big eye drawing,” by Hui Chen. Another participant described their feeling that ‘we are being watched by big eye, constantly checking on us and it boxes us into categories’. Certain areas are highly detailed and refined, certain other areas, the ‘murky’ or ‘cloudy’ bits, are where the people don’t fit the model so well, and they are more invisible.
Rows of people are randomly overlayed by computer cursors.
An early iteration of Emily Rand’s “AI City.”

Emily started by illustrating the idea of bias in AI. Her initial sketches showed lines of people of various sizes, ages, ethnicities and bodies. Various cursors showed the cis, white, able-bodied people being selected over the others. Emily also did a sketch of the shape of a city and ended up combining the two. She added frames to show the way different people are clustered. The frame shows the area around the person, where they might have a device sending data about them.

Emily’s final illustration is below, and can be downloaded from here and used for free with the correct attribution: Image by Emily Rand / © LOTI / Better Images of AI / AI City / CC-BY 4.0.

Building blocks are overlayed with digital squares that highlight people living their day-to-day lives through windows. Some of the squares are accompanied by cursors.

At the end of the workshop, I was left with feelings of admiration and positivity. Admiration for the stunning array of visual and conceptual responses from participants, and in particular the candid and open manner of their sharing. And positivity because the responses often highlighted the dangers of AI as well as the benefits – its capacity to reinforce systemic bias and aid exploitation – yet these critiques did not tend to be delivered in an elegiac or sad tone; they seemed more like an optimistic desire to understand the technology and make it work in an inclusive way. This seemed a powerful approach.

The results

The Better Images of AI mission is to create a free repository of images offering more realistic, accurate, inclusive and diverse ways to represent AI. Was this workshop a success, and how might it inform Better Images of AI work going forward?

Tania Duarte, who coordinates the Better Images of AI collaboration, certainly thought so:

It was great to see such a diverse group of people come together to find new and incredibly insightful and creative ways of explaining and visualising generative AI and its uses in the public sector. The process of questioning and exploring together showed the multitude of lenses and perspectives through which often misunderstood technologies can be considered. It resulted in a wealth of materials which the participants generously left with the project, and we aim to get some of these developed further to refine the metaphors and visual language. We are very grateful for the time participants put in, and the ideas and drawings they donated to the project. The Better Images of AI project, as an unfunded non-profit, is hugely reliant on volunteers and donated art, and it is a shame such work is so undervalued. Often stock image creators get paid $5 – $25 per image by the big image libraries, which is why they don’t have time to spend researching AI and considering these nuances, and instead copy existing stereotypical images.

Tania Duarte

The images created by Emily Rand and Yasmine Boudiaf are being added to the Better Images of AI Free images library on a Creative Commons licence as part of the #NewImageNovember campaign. We hope you will enjoy discovering a new creative interpretation each day of November, and will be able to use and share them as we double the size of the library in one month. 

Sign up for our newsletter to get notified of new images here.

Acknowledgements

A big thank you to organisers, panellists and artists:

  • Jennifer Ding – Senior Researcher for Research Applications at The Alan Turing Institute
  • Yasmine Boudiaf – Fellow at Ada Lovelace Institute, recognised as one of ‘100 Brilliant Women in AI Ethics 2022’
  • Dr Tamsin Nooney – AI Research, BBC R&D
  • Emily Rand – illustrator and author of seven books and recognised by the Children’s Laureate, Lauren Child, to be featured in Drawing Words
  • Sam Nutt – Researcher & Data Ethicist, London Office of Technology and Innovation (LOTI)
  • Dr Tomasz Hollanek – Research Fellow, Leverhulme Centre for the Future of Intelligence
  • Laura Purseglove – Producer and Curator at Science Gallery London
  • Dr Robert Elliot-Smith – Director of AI and Data Science at Digital Catapult
  • Tania Duarte – Founder, We and AI and Better Images of AI

Also many thanks to the We and AI team, who volunteered as facilitators to make this workshop possible:

  • Medina Bakayeva, UCL master’s student in cyber policy & AI governance, communications background
  • Marissa Ellis, Founder of Diversily.com, Inclusion Strategist & Speaker @diversily
  • Valena Reich, MPhil in Ethics of AI, Gates Cambridge scholar-elect, researcher at We and AI
  • Ismael Kherroubi Garcia FRSA, Founder and CEO of Kairoi, AI Ethics & Research Governance
  • Dr Peter Rees, project manager for the workshop

And a final appreciation for our partners: LOTI, the Science Gallery London, and London Data Week, who made this possible.

Related article from BIoAI blog: ‘What do you think AI looks like?’: https://blog.betterimagesofai.org/what-do-children-think-ai-looks-like/

Illustrating Data Hazards

A person with their hands on a laptop keyboard is looking at something happening over their screen with a worried expression. They are white, have shoulder length dark hair and wear a green t-shirt. The overall image is illustrated in a warm, sketchy, cartoon style. Floating in front of the person are three small green illustrations representing different industries, which is what they are looking at. On the left is a hospital building, in the middle is a bus, and on the right is a siren with small lines coming off it to indicate that it is flashing or making noise. Between the person and the images representing industries is a small character representing artificial intelligence made of lines and circles in green and red (like nodes and edges on a graph) who is standing with its ‘arms’ and ‘legs’ stretched out, and two antennae sticking up. A similar pattern of nodes and edges is on the laptop screen in front of the person, as though the character has jumped out of their screen. The overall image makes it look as though the person is worried the AI character might approach and interfere with one of the industry icons.

We are delighted to start releasing some useful new images donated by the Data Hazards project into our free image library. The images are stills from an animated video explaining the project, and offer a refreshing take on illustrating AI and data bias. They take an effective and creative approach to making visible the role of the data scientist and the impact of algorithms, and the project behind the images uses visuals to improve data science itself. Project leaders Dr Nina Di Cara and Dr Natalie Zelenka share some background on the Data Hazards labels, and the inspiration behind the animation from which the new images are taken.

Data science has the potential to do so much for us. We can use it to identify new diseases, streamline services, and create positive change in the world. However, there have also been many examples of ways that data science has caused harm. Often this harm is not intended, but its weight falls on those who are the most vulnerable and marginalised. 

Often too, these harms are preventable. Testing datasets for bias, talking to communities affected by technology or changing functionality would be enough to stop people from being harmed. However, data scientists in general are not well trained to think about ethical issues, and even though there are other fields that have many experts on data ethics, it is not always easy for these groups to intersect. 

The Data Hazards project was developed by Dr Nina Di Cara and Dr Natalie Zelenka in 2021, and aims to make it easier for people from any discipline to talk together about data science harms, which we call Data Hazards. These Hazards are in the form of labels. Like chemical hazards, we want Data Hazards to make people stop and think about risk, not to stop using data science at all. 

A person is illustrated in a warm, cartoon-like style in green. They are looking up thoughtfully from the bottom left at a large hazard symbol in the middle of the image. The hazard symbol is a bright orange square tilted 45 degrees, with a black and white illustration of an exclamation mark in the middle, where the exclamation mark shape is made up of tiny 1s and 0s like binary code. To the right-hand side of the image a small character made of lines and circles (like nodes and edges on a graph) is standing with its ‘arms’ and ‘legs’ stretched out, and two antennae sticking up. It faces off to the right-hand side of the image.
Yasmin Dwiputri & Data Hazards Project / Better Images of AI / Managing Data Hazards / CC-BY 4.0

By making it easier for us all to talk about risks, we believe we are more likely to see them early and have a chance at preventing them. The project is open source, so anyone can suggest new or improved labels which mean that we can keep responding to new and changing ethical landscapes in data science. 

The project has now been running for nearly two years and in that time we have had input from over 100 people on what the Hazard labels should be, and what safety precautions should be suggested for each of them. We are now launching Version 1.0 with newly designed labels and explainer animations! 

Chemical hazards are well known for their striking visual icons, which many of us see day-to-day on bottles in our homes. By having Data Hazard labels, we wanted to create similar imagery that would communicate the message of each of the labels. For example, how can we represent ‘Reinforces Existing Bias’ (one of the Hazard labels) in a small, relatively simple image? 

Image of the ‘Reinforces Existing Bias’ Data Hazard label

We also wanted to create some short videos describing the project, which included a data scientist character interacting with ‘AI’ – and so we had the challenge of deciding how to create a better image of AI than the typical robot. We were very lucky to work with illustrator and animator Yasmin Dwiputri, and with Vanessa Hanschke, who is doing a PhD at the University of Bristol on understanding responsible AI through storytelling.

We asked Yasmin to share some thoughts from her experience working on the project:

“The biggest challenge was creating an AI character for the films. We wanted to have a character that shows the dangers of data science, but can also transform into doing good. We wanted to stay away from portraying AI as a humanoid robot and have a more abstract design with elements of neural networks. Yet, it should still be constructed in a way that would allow it to move and do real-life actions.

We came up with the node monster. It has limbs which allow it to engage with the human characters and story, but no facial expressions. Its attitude is portrayed through its movements, and it appears in multiple silly disguises. This way, we could still make him lovable and interesting, but avoid any stereotypes or biases.

As AI is becoming more and more present in the animation industry, it is creating a divide in the animation community. While some people are praising the endless possibilities AI could bring, others are concerned it will also replace artistic expressions and human skills.

The Data Hazards project has given me a better understanding of the challenges we face even before AI hits the market. I believe animation productions should be aware of the impact and dangers AI can have before speaking only of innovation. At the same time, as creatives, we need to learn more about how AI and newer methods, if used correctly, could improve our workflow.”

Yasmin Dwiputri

Now that these wonderful resources have been created, we have been able to release them on our website, and we will be using them for training, teaching and workshops that we run as part of the project. You can view the labels and the explainer videos on the Data Hazards website. All of our materials are licensed as CC-BY 4.0 and so can be used and re-used with attribution.

We’re also really excited to see some of them on the Better Images of AI website, and hope they will be helpful to others who are trying to represent data science and AI in their work. A crucial part of AI ethics is ensuring that we do not oversell or exaggerate what AI can do, so the way we visualise AI is hugely important to public perceptions of the technology, and to being able to do ethical data science!

Cover image by Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / CC-BY 4.0

Three new Better Images of AI research workshops announced

LCFI Research Project l FINAL WORKSHOPS ANNOUNCED! Calling all journalists, AI practitioners, communicators and creatives! (Event poster in Better Images of AI blue and purple colours, with logos)

Three new workshops have been announced in September and October by the Better Images of AI project team. We will once again bring a range of AI practitioners and communicators together with artists and designers working in different creative fields, to explore in small groups how to represent artificial intelligence technologies and impacts in more helpful ways.

Following an insightful first workshop in July, we’re inviting anyone in relevant fields to apply to join the remaining workshops, taking place both online and in person. We are particularly interested in hearing from journalists who write about AI. However, if you are interested in critiquing and exploring new images in an attempt to find more inclusive, varied and realistic visual representations of AI, we would like to hear from you!

Our next workshops will be held on:

  • Monday 12 September, 3.30 – 5.30pm UTC+1 – ONLINE
  • Wednesday 28 September, 3 – 5pm UTC+1 – ONLINE
  • Thursday 6 October, 2:30 – 4:30pm UTC+1 – IN PERSON – The Alan Turing Institute, British Library 96 Euston Road London NW1 2DB

If you would like to attend or know anyone in these fields, email research@betterimagesofai.org, specifying which date. Please include some information about your current field and ideally a link to an online profile or portfolio.

The workshops will look at approaches to meet the criteria of being a ‘better image of AI’, identified by stakeholders at earlier roundtable sessions. 

The discussions in all four workshops will inform an Arts and Humanities Research Council-funded research project undertaken by the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, and organised by We and AI.

Our first workshop was held on 25 July, and brought together over 20 individuals from the creative arts, communications, technology and academia to discuss sets of curated and created images of AI, and to explore next steps in meeting the identified needs for better images of AI.

The four workshops follow a series of roundtable discussions, which set out to examine and identify user requirements for helpfully communicating visual narratives, metaphors, information and stories related to AI. 

The first workshop was incredibly rich in terms of generating creative ideas and giving feedback on gaps in current imagery. Not only has it surfaced lots of new concepts for the wider Better Images of AI project to work on, but the series of workshops will also form part of a research paper to be published in January 2023. This process is really critical to ensuring that our mission to communicate AI in more inclusive, realistic and transparent ways is informed by a variety of stakeholders and underpinned by good evidence.

Dagmar Monett, Head of the Computer Science Department at Berlin School of Economics and Law and one of the July workshop attendees, said: “Better Images of AI also means better AI: coming forward in AI as a field also means creating and using narratives that don’t distort its goals nor obscure what is possible from its actual capacities. Better Images of AI is an excellent example of how to do it the right way.”

The academic research project is being led by Dr Kanta Dihal, who has published many related books, journal articles and papers related to emerging technology narratives and public perceptions.

The workshops will ultimately contribute to research-informed design brief guidance, which will then be made freely available to anyone commissioning or selecting images to accompany communications – such as news articles, press releases, web communications, and research papers related to AI technologies and their impacts. 

They will also be used to identify and commission new stock images for the Better Images of AI free library.

To register interest: Email our team at research@betterimagesofai.org, letting us know which date you’d like to attend and giving us some information about your current field as well as a link to your LinkedIn profile or similar.

Images Matter!

Woman to the left, jumbled up letters entering her ear

AI in Translation

You often hear the phrase “words matter”: words help us to construct mental images in our minds, and to make sense of the world around us. In the same framing, “images matter” too. How we depict the state of technology (imagined, current or future), visually and verbally, helps us position ourselves in relation to what is already there and what is coming.

The way these technologies are visualized and expressed in combination tells us what an emerging technology looks like, and how we should expect to interact with it. If AI is always depicted as white, gendered robots, the majority of AI systems we interact with in reality around the clock go unnoticed. What we do not notice, we cannot react to. When we do not react, we become part of the flow in the dominant (and presently incorrect) narrative. This is why we need better images of AI, as well as a language overhaul.

These issues are not limited to the English-speaking world alone. I was recently asked to give a lecture at a Turkish university on artificial intelligence and the future of work. Over the years I have presented on this and similar topics (AI and the future of the workplace, the future of HR) on a number of occasions. As an AI ethicist and lecturer, I also frequently discuss the uses of AI in human resources, workplace datafication and employee/candidate surveillance. The difference this time? I was asked to hold the lecture in Turkish.

Yes, it is my native language. However, for more than 15 years, I have been using English in my day-to-day professional interactions. In English, I can talk about AI and ethics, bias, social justice, and policy for hours. When discussing the same topics in Turkish, though, I need a dictionary to translate some of the technical terminology. So, during my preparations for this presentation, I went down a rabbit hole: specifically, one concerning how connected biases in language and images impact overarching narratives of artificial intelligence.

Gender and Race Bias in Natural Language Models

In 2017, Caliskan, Bryson and Narayanan showed in their pioneering work that semantics (the meanings of words) derived automatically from language corpora contain human-like biases. The authors showed that natural language models, built by parsing large corpora derived from the internet, reflect human and societal gender and racial biases. The evidence was found in word embeddings, a method of representation in which words that have the same meaning, or that tend to be used together, are mapped closer to each other on vectors in a high-dimensional space. In other words, embeddings are hidden patterns of word co-occurrence statistics in language corpora, which include grammatical and semantic information. As Caliskan et al. put it, the thesis behind word embeddings is that words that are closer together in the vector space are semantically closer in some sense. The research showed, for example, that Google Translate converts occupations in Turkish sentences in gendered ways – even though Turkish is gender-neutral. It translates:

“O bir doktor. O bir hemşire.” to the English sentences “He is a doctor. She is a nurse.”, and “O bir profesör. O bir öğretmen.” to the English sentences “He is a professor. She is a teacher.”

Such results reflect the gender stereotypes within the language models themselves, and such subtle shifts have serious consequences. NLP tasks such as keyword search and matching, translation, web search, or text generation/recognition/analysis can be embedded in systems that make decisions on hiring, university admission, immigration applications, law enforcement interactions, and more.

Google Translate, after a patch fix of its models, now gives feminine and masculine binary translations. But four years after this patch (as of the time of writing), Google Translate still has not addressed non-binary gender translations.
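The mechanism behind such findings can be made concrete with a small sketch. The vectors below are invented three-dimensional stand-ins for real trained embeddings (which have hundreds of dimensions), chosen to mirror the reported effect; the cosine-similarity association test is the same idea that underlies Caliskan et al.’s analysis:

    import numpy as np

    # Toy vectors standing in for trained word embeddings; the numbers are
    # invented for illustration, not taken from a real model.
    vectors = {
        "he":     np.array([ 0.9, 0.1, 0.2]),
        "she":    np.array([-0.9, 0.1, 0.2]),
        "doctor": np.array([ 0.6, 0.7, 0.1]),
        "nurse":  np.array([-0.6, 0.7, 0.1]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def gender_association(word):
        """Positive: closer to 'he'; negative: closer to 'she'."""
        return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

    for word in ("doctor", "nurse"):
        print(word, round(gender_association(word), 3))
    # 'doctor' scores positive and 'nurse' negative: the stereotype into which
    # a gender-neutral Turkish pronoun gets resolved when translated.

A translation system trained on such statistics will tend to pick “he” for “doktor” and “she” for “hemşire” whenever it is forced to choose a gendered pronoun.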

Gender and Race Bias in Search Results

The second seminal work is Dr Safiya Noble’s book Algorithms of Oppression, which covers academic research on Google search algorithms, examining search results from 2009 to 2015. Similar to the findings of the above research on language models, Dr Noble argues that search algorithms are not neutral tools: they reflect and magnify the race and gender biases that exist in society and in the people who create them. She expertly demonstrates how the search results for keywords like “white girls” are significantly different from those for “Black girls”, “Asian girls” or “Hispanic girls”; the latter set of searches returned images which were exclusively pornography or highly sexualized content. The research brings to the surface the hidden structures of power and bias in widely used tools that shape the narratives of technology and the future. Dr Noble writes: “racism and sexism are part of the architecture and language of technology […] We need a full-on re-evaluation of the implications of our information resources being governed by corporate-controlled advertising companies.”

Google Search applied another after-the-fact fix to reduce the racy results after Dr Noble’s work. However, this too remains a patch fix: the results for “Latina girls” still show mostly sexualized images, and results for “Hispanic girls” show mostly stock photos or Pinterest posts. The results for “Asian girls” seem to remain much the same, associated with pictures tagged as hot, cute, beautiful, sexy, brides.

Gender and Race Bias in Search Results for “Artificial Intelligence”

The third work is Better Images of AI, a collaboration that I am proud to have helped found and continue supporting as an advisor. A group of like-minded advocates and scholars have been fighting against the false and clichéd images of artificial intelligence used in news stories or marketing material about AI.

We have been concerned about how images such as humanoid robots, outstretched robot hands and brains shape the public’s perception of what AI systems are and what they are capable of. Such anthropomorphized illustrations not only add to the hype of AI’s endless miracles, but they also stop people questioning the ubiquitous AI systems embedded in their smartphones, laptops, fitness trackers and home appliances – to name but a few. They cloud the perception of consumers and citizens. This means that mainstream conversations tend to be stuck at ‘AI is going to take all of our jobs away’ or ‘AI will be the end of humanity’, and as such the current societal and environmental harms and implications of some AI systems are not publicly and deeply discussed. The powerful actors developing or using systems to benefit themselves rather than society are hardly held accountable.

The Better Images of AI collaboration not only challenges the narratives and biases underlying these images, but also provides a platform for artists to share their images in a creative commons repository – in other words, it builds a communal alternative imagination. These images aim to more realistically portray the technology, the people behind it, and point towards its strengths, weaknesses, context and applications. They represent a wider range of humans and human cultures than ‘Caucasian businessperson’, show realistic applications of AI now, not in some unspecified science-fiction future, don’t show physical robotic hardware where there is none and reflect the realistically messy, complex, repetitive and statistical nature of AI systems.

Down the rabbit hole…

So, with that background, back to my story for this article. For part of the lecture, I was preparing discussions surrounding AI and the future of work. I wanted to discuss how the execution of different professional tasks is changing with technology, and what that means for the future of certain industries or occupational areas. I wanted to underline that some tasks, like repetitive transactions, large-scale iterations and standard rule applications, are better done with AI – as long as AI is the right solution for the context and problem, and is developed responsibly and monitored continuously.

On the flip side, certain skills and tasks that involve leading, empathizing and creating are to be left to humans – AI systems have neither the capacity nor the capability, and nor should they be entrusted with such tasks. I wanted to add some visuals to the presentation and also check what is currently being depicted in the search results. I first started with basic keyword searches in English such as ‘AI and medical’, ‘AI and education’, ‘AI and law enforcement’, etc. What I saw in the first few examples was depressing. I decided to expand the search to more occupational areas: the search results did not get better. I then wondered what the results might be if I ran the same searches in Turkish.

What you see below are the first images that came up in my Google search results for each of these keywords. The images not only continue to reflect the false narratives but in some cases are flat-out illogical. Please note that I only used AI / yapay zeka in my searches, and not ‘robot’.

Yapay zeka ve sağlık : AI and medical

In both the Turkish- and English-speaking worlds, we are to expect white Caucasian male robots to be our future doctors. They will need to wear a shirt, tie and white doctor’s coat to keep their metallic bodies warm (apparently no need for masking). They will also need to look at a tablet to process information and make diagnoses or decisions. Their hands and fingers will delicately handle surgical moves. What we should really care about in medical algorithms right now is the representativeness of the datasets used in building them, the explainability of how an algorithm made a diagnostic determination, why it is suggesting a certain prescription or course of action, and how some health applications are completely left out of regulatory oversight.

We have already experienced medical algorithms which produce biased and discriminatory outcomes because of a patient’s gender or socioeconomic level, or even the historical access of certain populations to healthcare. We know of diagnostic algorithms which have embedded code to change a determination due to a patient’s race; of false determinations due to the skin color of a patient; of faulty correlations and predictions due to training datasets representing only a portion of the population.

Yapay zeka ve hemşire : AI and Nurse

Yapay zekanın sağlık alanında kullanımı | Pitstop Reklam Ajansı

After seeing the above images I wondered if the results would change if I was more specific about the profession within the medical field. I immediately regretted my decision.

In both results, the Caucasian male robot image changes to a Caucasian female image, reflecting the gender stereotypes across both cultures. The Turkish AI nurse wants you to keep quiet and not cause any disruption or noise. I was not prepared for the English version: a D+ cup-wearing robot. Hard to say if the breasts are natural or artificial! This nurse has a green cross both on the nurse cap and on the bra(?!). The robot is connected to something with yellow cables, so it is probably limited in its physical reach, although there is definitely an intention to listen to your chest or heartbeat. This nurse will also show you your vitals on an image projected from her chest.

Yapay zeka ve kanun : AI and legal

AI in the legal system is currently one of the most contentious issues in policy and regulatory discussions. We have already seen a number of cases where AI systems are used by courts for judicial decisions about recidivism, sentencing or bail, some with results biased against Black people in particular. In the criminal justice field, the use of AI systems to provide investigative assistance and automate decision-making processes for routine administrative paperwork is already in place in many countries. When it comes to images, though, these systems, some of which make high-stakes decisions that impact fundamental rights, and the existing cases of impacted people, are not depicted. Instead we either have a robot touching a blue projection (don’t ask why), or a robot holding a wooden gavel. It is not clear from the depiction whether the robot will chase you and hammer you down with the gavel, or whether this white male-looking robot is about to make a judgement about your right to abortion. The glasses the robot is wearing are, I presume, to stress that this particular legal robot is well read.

Yapay zeka ve polis : AI and Law Enforcement

Similar to the secondary search I described above for medical systems, I wanted to go deeper here, so I searched for AI and law enforcement. Currently, in a number of countries (including the US, EU member states and China), AI systems are used by police to predict crimes which have not happened yet. Law enforcement uses AI in various ways, from evidence analysis to biometric surveillance; from anomaly detection/pattern analysis to license-plate readers; from crowd control to dragnet data collection and aggregation; from voice analysis to social media scanning to drone systems. Although crime data is notoriously biased in terms of race, ethnicity and socioeconomic background, and reflects decades of structural racism and oppression, you could not tell any of that from the image results.

You do not see pictures of Black men wrongfully arrested due to biased and inaccurate facial recognition systems. You do not see the hot spots mapped onto predictive policing maps, which are heavily surveilled as a result of the data outcomes. You do not see law enforcement buying large amounts of data from data brokers – data that they would otherwise need search warrants to acquire. What you see instead in the English version is another Caucasian male-looking robot working shoulder to shoulder with police SWAT teams – keeping law and order! In the Turkish version, the image result shows a female police officer who is either being whispered to by an AI system or using an AI system for work. If you are a police officer in Turkey, you are probably safe for the moment, as long as your AI system is shaped like a human-head circuit.

Yapay zeka ve gazetecilik : AI and journalism

Content and news creation are currently some of the most ubiquitous uses of AI that we experience in our daily lives. We see algorithmic systems curating content at news and media channels. We experience the manipulation and ranking of content in search results, in the news we are exposed to, and in the social media feeds we doom-scroll. We complain about how disinformation and misinformation (and to a certain extent deepfakes) have become mainstream conversations with real-life consequences. Study after study warns us about the dangers of echo chambers created by algorithmic systems and how they lead to radicalization and polarization, and demands accountability from the people who have the power to control their designs.

The image result in the Turkish search is interesting in the sense that journalism is still a male occupation: the same-looking people work in the field, and AI in this context is a robot of short stature waving an application form to be considered for the job. The robot in the English results is slightly more stylish. It even carries a press card to depict the ethical obligations it has to the profession. You would almost think that this is the journalist working long hours to break an investigative piece, or risking their life to report from conflict zones.

Yapay zeka ve finans : AI and finance

The finance, banking and insurance industries reflect some of the most mature use cases of AI systems. For decades now, banking has been using algorithmic systems for pattern recognition and fraud detection, for credit scoring and credit/loan determinations, and for electronic transaction matching, to name a few. The insurance industry likewise heavily uses algorithmic systems and big data to determine insurance eligibility, policy premiums and, in certain cases, claims management. Finance was one of the first industries disrupted by emerging technologies. FinTech created a number of companies and applications to break the hold of major financial institutions on the market. Big banks responded with their own innovations.

So it is again interesting to see that, even with such mature use of AI in the field, robot images still come first in the search results. We do not see the app you used to transfer funds to your family or friends. Nor the high-frequency trading algorithms which currently carry more than 70% of all daily stock exchange transactions. It is not the algorithms which collect hundreds of data points about you, from your grocery shopping to your GPS locations, to make a judgement about your creditworthiness – your trustworthiness. It is not the sentiment-analysis AI which scans millions of corporate reports, public disclosures and even tweets about publicly traded companies and makes microsecond judgements on what stocks to buy. It is not the AI algorithm which determines the interest rate and limit on your next credit card or loan application. No, it is the image of another white robot staring at a digital board of what we can assume to be stock prices.

Yapay zeka ve ordu : AI and military

AI and military use cases are a whole different story in the scheme of AI innovation and policy discussions. AI systems have been used for many years in satellite imagery analysis, pattern recognition, weapon development, simulations, and more. The more recent debates intertwine geopolitics with an AI arms race, which indeed should keep all of us awake at night. The significance of lethal autonomous weapons (LAWs) in the hands of militaries as well as non-traditional actors is an issue upon which every single state in the world seems to agree.

Yet agreement does not mean action, and it does not mean human life is protected. LAWs have the capacity to make decisions by themselves to attack – without any accountability. Micro-drones can be combined with facial recognition and attack systems to take down individuals and political dissenters. Drones can be remotely controlled to drop munitions over remote regions. Robotic systems (a correct depiction) can be used for landmine removal, crowd control or perimeter security. All these AI systems already exist. The image results, though, again reflect an interesting narrative. The image in the Turkish results shows a female American soldier using a robot to carry heavy equipment; the robot here is more like a mule than an autonomous killer. The image result in English shows a mixed-gender robot group in what seems to be camouflage green. At least the glowing white will not be an issue for the safety of these robots.

Yapay zeka ve eğitim : AI and Education

Yapay Zekanın Eğitimdeki 10 Kullanım Alanı – Social Business Türkiye

When it comes to AI and education, the images remain robot-related. The first robot lifts kids up to the skies to show them what is on the horizon. It has nothing to do with the hype of AI-powered training systems or the learning analytics now hitting schools and universities across the globe. The AI here does not seem to use proctoring software to discriminate against or surveil students. It also apparently does not matter if you lack the broadband access needed to interact with this AI or do your schoolwork. The search result in English, on the other hand, shows a robot which needs a blackboard and a piece of chalk to process mathematical problems. If your Excel, Tableau, or R software does not look like this image, you might want to return it to the vendor. Also, if you are an educator in the social sciences or humanities, it is probably time to rethink the future of your career.

Yapay zeka ve mühendislik : AI and engineering

Search result images: the first Turkish and English results for AI and engineering.

The blackboard-and-chalk robot is better off in the future of engineering. The educator robot might be short on resources, but the engineer robot gets a digital board for the same calculations. Staring at this board will, eventually, ensure the robot engineer solves the problem. In the Turkish version, the robot gazes at a field of hexagons. If you are an engineer in any field currently using AI software to visualize your data in multiple dimensions, run design or impact scenarios, or build code – does this look like your algorithm?

Yapay zeka ve satış : AI and sales

Search result images: the first Turkish and English results for AI and sales.

If you are a salesperson in Turkey, your prospects are a bit iffy. The future seems to require your brain to be exposed and held in the air, with the safety net of a palm beneath it to protect your AI brain just in case of overload. However, if you are in sales in the English-speaking world, your sales team or call center staff will instead be white, glowing, male robots. Despite being robots, these AI systems still need a laptop to type on and process data. They also need headsets to communicate with customers, because the designers forgot to include voice recognition and analysis software in the first place. Maybe the next time you hear ‘press 0 to speak to an agent’, you will have different images in your mind. Never mind how the customer support services you call record your voice and train their algorithms on it under a very weak consent notice (does ‘your call might be recorded for training and quality purposes’ sound familiar?). Never mind that most current AI applications in this space are chatbots on the websites you visit, or automated text algorithms which respond to your questions. Never mind the cheap human labor which churns through sales and call center operations without much in the way of worker rights or protections.

Yapay zeka ve mimarlık : AI and architecture

Search result image: a robot figure with a city in the background, the first result in both Turkish and English.

It was surprising to see the same image as the first result in both the Turkish and English searches for architecture. I will not speculate on why this might be the case. However, our images and imaginations of current and future AI systems are once again limited to robots. This time the depiction uses a female robot, with city planning and architectural ideas flowing out from the back of the robot’s head.

Yapay zeka ve tarım : AI and agriculture

Search result images: the first Turkish and English results for AI and agriculture.

Finally, I wanted to check the situation for agriculture. It was surprising that the Turkish image showed a robot delicately picking a grain of wheat. Turkey used to be a country proud of its agricultural heritage and its ability to feed itself; it used to be a net exporter of food products. Over the years, it lost that edge due to a number of factors. The current imagery of AI does not seem to take into account the humans who endure harsh conditions in the fields. The image on the right focuses more on the conditions of nature, to ensure efficiency and high production. It was refreshing to see that at least the image of green fields was kept; maybe that stays with us as a reminder that we need to respect and protect nature.

So, returning to where I started: images matter. We need to be cognizant of how emerging technologies are being visualized, why they are depicted in these ways, who makes those decisions and hence shapes the conversation, and who benefits and who is harmed by such framing. We need to imagine technologies which move us towards humanity, equity, and justice. We also need the images of those technologies to be accurate, diverse, and inclusive.

Instead of assigning human characteristics to algorithms (which are, at the end of the day, human-made code and rules), we need to reflect the human motivations and decisions embedded in these systems. Instead of depicting AI with superhuman powers, we need to show the labor of the humans who build these systems. Instead of focusing only on robots and robotics, we need to explain AI as software embedded in our phones, laptops, apps, home appliances, cars, and surveillance infrastructures. Instead of thinking of AI as an independent entity or intelligence, we need to explain that AI is used as a tool, making decisions about our identity, health, finances, work, education, and our rights and freedoms.

Handmade, Remade, Unmade A.I.

Two digitally illustrated green playing cards on a white background, with the letters A and I in capitals and lowercase calligraphy over modified photographs of human mouths in profile.

The Journey of Alina Constantin’s Art

Alina’s image, Handmade A.I., was one of the first additions to the Better Images of AI repository. The description affixed to the image on the site outlines its ‘alternative redefinition of AI’, bringing back into play the elements of human interaction so frequently excluded from discussions of the tech. Yet now, a few months on from the image’s introduction to the site, Alina’s work has itself undergone some ‘alternative redefinition’. This blog post explores the journey of this particular image, from the details of its conception to its numerous uses since: how has the image been changed, adapted in significance, and redefined in use?

Alina Constantin is a multicultural game designer, artist, and organiser whose work focuses on unearthing human-sized stories out of large systems. For this piece, some of the principles of machine learning, such as interpretation, classification, and prioritisation, were encoded as the more physical components of human interaction: ‘hands, mouths and handwritten typefaces’, forcing us to consider our relationship to technology differently. We caught up with Alina to discuss the process (and meaning) behind the work.

What have been the biggest challenges in creating Better Images of AI?

Representing AI comes with several big challenges. The first is the ongoing inundation of our collective imagination with skewed imagery which falsely represents these technologies in practice, in the name of simplification, sensationalism, and our human impulse towards personification. The second is the absence of any single agreed-upon definition of AI, and, of course, the complexity of the topic itself.

What was your approach to this piece?

My approach was largely an intricate process of translation. To stay anchored to the ‘why’ of A.I. in practical terms, I chose to focus on elements of speech, also wanting to highlight the human sources of our algorithms by hand-drawing letters and typefaces.

I asked questions, and selected imagery that could be both evocative and different. For the back side of the cards, not visible in this image, I bridged the interpretive logic of tarot with the mapping logic of sociology, choosing a range of 56 words from varying fields, all starting with A or I, to allow for more personal and specific definitions of A.I. To take this further, I mapped the idea to 8 different chess moves, extending into a historical chess puzzle that made its way into a theatrical card deck, which you can play with here. You can see more of the process behind the whole project here.

This process of translating A.I. via my own artist’s toolset of stories and gameplay was highly productive, requiring me to narrow my thinking down to components of A.I. logic which could be expressed and understood by individuals with or without a background in tech. Prototyping, and discussing these ideas with audiences both familiar and unfamiliar with AI, helped me validate and adjust my own understanding and representation: a crucial step for all of us in assuring broader representation within the sector.

So how has Alina’s Better Image been used? Which meanings have been drawn out, and how has the image been redefined in practice? 

One implementation of ‘Handmade A.I.’, on the website of one of our affiliated organisations, We and AI, remains largely aligned with the artist’s reading of it. According to We and AI, the image was chosen for its re-centring of the human within the AI conversation: human hands still hold the cards; humanity is responsible for their shuffling and their design (though not necessarily completely in control of which cards are dealt). Human agency continues to direct the technology, not the other way round. As a key tenet of the organisation, and a key element of the image identified by Alina, this all adds up.

We and AI’s use of Alina’s image (https://weandai.org/)

A similar usage by Universität Hamburg, accompanying a lecture on responsibility in the AI field, follows the same logic. The additional slant of human agency considered from a human rights perspective again broadens Alina’s initial image: the components of human interaction she features expand into a more universal representation of not just human input to these technologies but human culpability. The blood, in effect, is on our hands.

Universität Hamburg use of Alina’s image

Another implementation, this time by the Digital Freedom Fund, accompanies an article concerning the importance of our language around these new technologies. Deviating slightly from the visual and more into the semantics of artificial intelligence, the use may at first seem unrelated. However, as the article develops, concerns surrounding ‘technocentrism’, rather than anthropocentrism, in our discussions of AI become a focal point. Alina’s image captures the need to reclaim the language surrounding these technologies, placing the cards firmly back in human hands. The article states directly, ‘Every algorithm is the result of a desire expressed by a person or a group of persons’ (Meyer, 2022). Technology is not neutral. Like a pack of playing cards, it is always humanity which creates and shuffles the deck.

Digital Freedom Fund use of Alina’s image

This is not the only instance in which Alina’s image has been used to illustrate the relation of AI and language. The question ‘Can AI really write like a human?’ seems to be on everyone’s lips, and ‘Handmade A.I.’, with its deliberately humanoid typeface, is its natural visual partner. In a blog post for LSE, Marco Lehner (of BR AI+) discusses the employment of a GPT-3 bot and, whilst allowing for slightly more nuance, ultimately reaches a similar crux: human involvement remains central, no matter how much ‘automation’ we attempt.

Even as ‘better’ images such as Alina’s become available, we still see the same stock images used over and over again. Issues surrounding the speed and need for images in journalistic settings, as discussed by Martin Bryant in our previous blog post, mean that people will continue to reach almost instinctively for the ‘easy’ option. But when asked to explain what exactly these images add to a piece, there is often a marked silence. The stock image of a humanoid robot is meaningless. Alina’s images are specific; they deal in the realities of AI, in a real facet of the technology, and are thus not universally applicable. They relate to considerations of human agency and responsible AI practice, and do not (unlike the stock photos) act to the detriment of public understanding of our tech future.