Images of AI – Between Fiction and Function

This image shows an abstract microscopic photograph of a Graphics Processing Unit resembling a satellite image of a big city. The image has been overlaid with a bright blue filter. In the middle of the image is the text 'Images of AI – Between Fiction and Function' in a white text box with black text. Beneath, in a maroon text box, is the author's name in white text.

“The currently pervasive images of AI make us look somewhere, at the cost of somewhere else.”

In this blog post, Dominik Vrabič Dežman provides a summary of his recent research article, ‘Promising the future, encoding the past: AI hype and public media imagery’.

Dominik sheds light on the importance of the Better Images of AI library, which fosters a more informed, nuanced public understanding of AI by breaking the stronghold of the “deep blue sublime” aesthetic with more diverse and meaningful representations of AI.

Dominik also draws attention to the algorithms which perpetuate the dominance of familiar and sensationalist visuals and calls for movements which reshape media systems to make better images of AI more visible in public discourse.

The full paper is published in the AI and Ethics Journal’s special edition on ‘The Ethical Implications of AI Hype’, a collection edited by We and AI.


AI promises innovation, yet its imagery remains trapped in the past. Deep-blue, sci-fi-inflected visuals have flooded public media, saturating our collective imagination with glowing, retro-futuristic interfaces and humanoid robots. These “deep blue sublime” [1] images, which draw on a steady palette of outdated pop-cultural tropes and clichés, do not merely depict AI — they shape how we think about it, reinforcing grand narratives of intelligence, automation, and inevitability [2]. It takes little scrutiny to see that the AI discussed in public media is far from the ethereal, seamless force these visuals suggest. Instead, the term generally refers to a sprawling global technological enterprise, entangled with labor exploitation, ecological extraction, and financial speculation [3–10] — realities conspicuously absent from its dominant public-facing representations.

The widespread rise of these images has unfolded against a backdrop of intensifying “AI hype” [11], which has been compared to historical speculative investment bubbles [12,13]. In my recent research [1,14,15], I join a growing body of research looking into images of AI [16–21] to explore how AI images operate at the intersection of aesthetics and politics. My overarching ambition has been to contribute to the literature an integrated account of the normative and the empirical dimensions of public images of AI. I’ve explored how these images matter politically and ethically, inseparable from the pathways they take in real time, echoing throughout public digital media and wallpapering it in seen-before denominations of blue monochrome.

Rather than measuring the direct impact of AI imagery on public awareness, my focus has been on unpacking the structural forces that produce and sustain these images. What mechanisms dictate their circulation? Whose interests do they serve? How might we imagine alternatives? My critique targets the visual framing of AI in mainstream public media — glowing, abstract, blue-tinted veneers seen daily by millions on search engines, institutional websites, and in reports on AI innovation. These images do not merely aestheticize AI; they foreclose more grounded, critical, and open-ended ways of understanding its presence in the world.


The Intentional Mindlessness of AI Images

This image shows a Google Images search for 'artificial intelligence'. The result is a collection of images featuring the human brain, the colour blue, and white humanoid robots.

Google Images search results for “artificial intelligence”, January 14, 2025. Search conducted from an anonymised instance of Safari in Amsterdam, Netherlands.

Recognizing the ethico-political stakes of AI imagery begins with acknowledging that what we spend our time looking at, or not looking beyond, matters politically and ethically. The currently pervasive images of AI make us look somewhere, at the cost of somewhere else. The sheer volume of these images, and their dominance in public media, slot public perception into repetitive grooves dominated by human-like robots, glowing blue interfaces, and infinite expanses of deep-blue intergalactic space. By monopolizing the sensory field through which AI is perceived, they reinforce sci-fi clichés and, more importantly, obscure the material realities — human labor, planetary resources, material infrastructures, and economic speculation — that drive AI development [22,23].

In a sense, images of AI could be read as operational [24–27], enlisted in service of an operation which requires them to look, and function, the way they do. This might involve their role in securing future-facing AI narratives, shaping public sentiment towards acceptance of AI innovation, and supporting big tech agendas for AI deployment and adoption. The operational nature of AI imagery means that these images cannot be studied purely as aesthetic artifacts or autonomous works of aesthetic production. Instead, these images are minor actors, moving through technical, cultural and political infrastructures. In doing so, individual images do not say or do much per se – they are always already intertwined in the circuits of their economic uptake, circulation, and currency; not at the hands of the digital labourers who created them, but of the human and algorithmic actors that keep them in circulation.

Simultaneously, the endurance of these images is less the result of intention than of a more mindless inertia. It quickly becomes clear that these images reflect neither public attitudes nor those of their makers: anonymous stock-image producers, digital workers mostly located in the global South [28]. They might reflect the views of the few journalistic or editorial actors who choose the images for their reporting [29], or who are simply looking to increase audience engagement through sensationalist imagery [30]. Ultimately, their visibility is in the hands of algorithms that reward more of the same familiar visuals over time [1,31], and of stock-image platforms and search engines that maintain close ties with media conglomerates [32], which, in turn, have long been entangled with big tech [33]. The stock images are the detritus of a digital economy that rewards repetition over revelation: endlessly cropped, upscaled, and regurgitated “poor images” [34], travelling across cyberspace as they are recycled and reused, until they are pulled back into circulation by the very systems they help sustain [15,28].


AI as Ouroboros: Machinic Loops and Recursive Aesthetics

As algorithms increasingly dictate who sees what in the public sphere [35–37], they dictate not only what is seen but also what is repeated. Images of AI become ensnared in algorithmic loops, which sediment the same visuality over time on various news feeds and search engines [15]. This process has intensified with the proliferation of generative AI: as AI-generated content proliferates, it feeds on itself—trained on past outputs, generating ever more of the same. This “closing machinic loop” [15,28] perpetuates aesthetic homogeneity, reinforcing dominant visual norms rather than challenging them. The widespread adoption of AI-generated stock images further narrows the space for disruptive, diverse, and critical representations of AI, making it increasingly difficult for alternative images to surface into public view.

The image shows a humanoid figure with a glowing, transparent brain standing in a digital landscape. The figure's body is composed of metallic and biomechanical components, illuminated by vibrant blue and pink lights. The background features a high-tech grid with data streams, holographic interfaces, and circuitry patterns.

ChatGPT 4o output for query: “Produce an image of ‘Artificial Intelligence’”. 14 January 2025.


Straddling the Duality of AI Imagery

In critically examining AI imagery, it is easy to veer into one of two deterministic extremes — both of which risk oversimplifying how these images function in shaping public discourse:

1. Overemphasizing Normative Power:

This approach risks treating AI images as if they had autonomous agency, ignoring the broader systems that shape their circulation. AI images appear as sublime artifacts—self-contained objects for contemplation, removed from their daily life as fleeting passengers in the digital media image economy. While images certainly exert influence in shaping socio-technical imaginaries [38,39], they operate within media platforms, economic structures, and algorithmic systems that constrain their impact.

2. Overemphasizing Materiality:

This perspective reduces AI to mere infrastructure, seeing images as passive reflections of technological and industrial processes rather than active participants in shaping public perception. From this view, AI’s images are dismissed as epiphenomenal, secondary to the “real” mechanisms of AI’s production: cloud computing, data centers, supply chains, and extractive labor. In reality, AI has never been purely empirical; cultural production has been integral to AI research and development from the outset, with speculative visions long driving policy, funding, and public sentiment [40].

Images of AI are neither neutral nor inert. The diminishing potency of glowing, sci-fi-inflected AI imagery as a stand-in for AI in public media suggests a growing fatigue with its clichés, and cannot be untangled from a general discomfort with AI’s utopian framing, as media discourse pivots toward concerns over opacity, power asymmetries, and scandals in its implementation [29,41]. A robust critique of the cultural entanglements of AI requires addressing both its normative commitments (promises made to the public) and its empirical components (data, resources, labour [6]).

Toward Better Images: Literal Media & Media Literacy

Given the embeddedness of AI images within broader machinations of power, the ethics of AI images are deeply tied to public understanding and awareness of such processes. Cultivating a more informed, critical public — through exposure to diverse and meaningful representations of AI — is essential to breaking the stronghold of the deep blue sublime.

At the individual level, media literacy equips the public to critically engage with AI imagery [1,42,43]. By learning to question the visual veneers, people can move beyond passive consumption of the pervasive, reductive tropes that dominate AI discourse. Better images recalibrate public perception, offering clearer insights into what AI is, how it functions, and its societal impact. The kinds of images produced are equally important. Better images would highlight named infrastructural actors, document AI research and development, and/or diversify the visual associations available to us, loosening the visual stronghold of the currently dominant tropes.

This greatly raises the bar for news outlets to produce original imagery of didactic value, which is where open-source repositories such as Better Images of AI serve as invaluable resources. This points to the urgency of reshaping media systems: making better images readily available to creators and media outlets, helping them move away from generic visuals toward educational, thought-provoking imagery. However, creating better visuals is not enough; they must become embedded in media infrastructure to become the norm rather than the exception.

Given the above, the role of algorithms cannot be ignored. Algorithms drive which images are seen, shared, and prioritized in public discourse. Without addressing these mechanisms, even the most promising alternatives risk being drowned out by familiar clichés. Rethinking these pathways is essential to ensure that improved representations can disrupt the existing visual narrative of AI.

Efforts to create better AI imagery are only as effective as their ability to reach the public eye and disrupt the dominance of the “deep blue sublime” aesthetic in public media. This requires systemic action—not merely producing different images in isolation, but rethinking the networks and mechanisms through which these images are circulated. To make a meaningful impact, we must address both the sources of production and the pathways of dissemination. By expanding the ways we show, think about, and engage with AI, we create opportunities for political and cultural shifts. A change in one way of sensing AI (writing / showing / thinking / speaking) invariably opens space for change in the others.

Seeing AI ≠ Believing AI

AI is not just a technical system; it is a speculative, investment-driven project, a contest over public consensus, staged by a select few to cement its inevitability [44]. The outcome is a visual regime that detaches AI’s media portrayal from its material reality: a territorial, inequitable, resource-intensive, and financially speculative global enterprise.

Images of AI come from somewhere (they are products of poorly-paid digital labour, served through algorithmically-ranked feeds), do something (torque what is at hand for us to imagine with, directing attention away from AI’s pernicious impacts and its growing inequalities), and go somewhere (repeat themselves ad nauseam through tightening machinic loops, numbing rather than informing; [16]).

The images have left few fooled, and they represent a missed opportunity to add to public sensitisation and understanding of AI. Crucially, bad images do not inherently disclose bad tech, nor do good images promote good tech; the widespread adoption of better images of AI in public media would not automatically lead to socially good or desirable understandings, engagements, or developments of AI. That remains an issue of the current political economy of AI, whose stakeholders only partially determine this image economy. Better images alone cannot solve this, but they might open slivers of insight into AI’s global “arms race.”

As it stands, different visual regimes struggle to be born. Fostering media literacy, demanding critical representations, and disrupting the algorithmic stranglehold on AI imagery are acts of resistance. If AI is here to stay, then so too must be our insistence on seeing it otherwise — beyond the sublime spectacle, beyond inevitability, toward a more porous and open future.

About the author

Dominik Vrabič Dežman (he/him) is an information designer and media philosopher. He is currently at the Departments of Media Studies and Philosophy at the University of Amsterdam. Dominik’s research interests include public narratives and imaginaries of AI, politics and ethics of UX/UI, media studies, visual communication and digital product design.

References

1. Vrabič Dežman, D.: Defining the Deep Blue Sublime. SETUP (2023). https://web.archive.org/web/20230520222936/https://deepbluesublime.tech/

2. Burrell, J.: Artificial Intelligence and the Ever-Receding Horizon of the Future. Tech Policy Press (2023 Jun 6). https://techpolicy.press/artificial-intelligence-and-the-ever-receding-horizon-of-the-future/

3. Kponyo, J.J., Fosu, D.M., Owusu, F.E.B., Ali, M.I., Ahiamadzor, M.M.: Techno-neocolonialism: an emerging risk in the artificial intelligence revolution. TraHs (2024). https://doi.org/10.25965/trahs.6382

4. Leslie, D., Perini, A.M.: Future Shock: Generative AI and the International AI Policy and Governance Crisis. Harvard Data Science Review (2024). https://doi.org/10.1162/99608f92.88b4cc98

5. Regilme, S.S.F.: Artificial Intelligence Colonialism: Environmental Damage, Labor Exploitation, and Human Rights Crises in the Global South. SAIS Review of International Affairs. 44:75–92 (2024). https://doi.org/10.1353/sais.2024.a950958

6. Crawford, K.: The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2021). https://www.degruyter.com/isbn/9780300252392

7. Sloane, M.: Controversies, contradiction, and “participation” in AI. Big Data & Society. 11:20539517241235862 (2024). https://doi.org/10.1177/20539517241235862

8. Rehak, R.: On the (im)possibility of sustainable artificial intelligence. Internet Policy Review (2024 Sep 30). https://policyreview.info/articles/news/impossibility-sustainable-artificial-intelligence/1804

9. Wierman, A., Ren, S.: The Uneven Distribution of AI’s Environmental Impacts. Harvard Business Review (2024 Jul 15). https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts

10. What we don’t talk about when we talk about AI. Joseph Rowntree Foundation (2024 Feb 8). https://www.jrf.org.uk/ai-for-public-good/what-we-dont-talk-about-when-we-talk-about-ai

11. Duarte, T., Barrow, N., Bakayeva, M., Smith, P.: Editorial: The ethical implications of AI hype. AI Ethics. 4:649–51 (2024). https://doi.org/10.1007/s43681-024-00539-x

12. Singh, A.: The AI Bubble. Social Science Encyclopedia (2024 May 28). https://www.socialscience.international/the-ai-bubble

13. Floridi, L.: Why the AI Hype is Another Tech Bubble. Philos Technol. 37:128 (2024). https://doi.org/10.1007/s13347-024-00817-w

14. Vrabič Dežman, D.: Interrogating the Deep Blue Sublime: Images of Artificial Intelligence in Public Media. In: Cetinic, E., Del Negueruela Castillo, D. (eds.) From Hype to Reality: Artificial Intelligence in the Study of Art and Culture. HumanitiesConnect, Rome/Munich (2024). https://doi.org/10.48431/hsah.0307

15. Vrabič Dežman, D.: Promising the future, encoding the past: AI hype and public media imagery. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00474-x

16. Romele, A.: Images of Artificial Intelligence: a Blind Spot in AI Ethics. Philos Technol. 35:4 (2022). https://doi.org/10.1007/s13347-022-00498-3

17. Singler, B.: The AI Creation Meme: A Case Study of the New Visibility of Religion in Artificial Intelligence Discourse. Religions. 11:253 (2020). https://doi.org/10.3390/rel11050253

18. Steenson, M.W.: A.I. Needs New Clichés. Medium (2018 Jun 13). https://web.archive.org/web/20230602121744/https://medium.com/s/story/ai-needs-new-clich%C3%A9s-ed0d6adb8cbb

19. Hermann, I.: Beware of fictional AI narratives. Nat Mach Intell. 2:654 (2020). https://doi.org/10.1038/s42256-020-00256-0

20. Cave, S., Dihal, K.: The Whiteness of AI. Philos Technol. 33:685–703 (2020). https://doi.org/10.1007/s13347-020-00415-6

21. Mhlambi, S.: God in the image of white men: Creation myths, power asymmetries and AI. Sabelo Mhlambi (2019 Mar 29). https://web.archive.org/web/20211026024022/https://sabelo.mhlambi.com/2019/03/29/God-in-the-image-of-white-men

22. How to invest in AI’s next phase. J.P. Morgan Private Bank U.S. Accessed 2025 Feb 18. https://privatebank.jpmorgan.com/nam/en/insights/markets-and-investing/ideas-and-insights/how-to-invest-in-ais-next-phase

23. Jensen, G., Moriarty, J.: Are We on the Brink of an AI Investment Arms Race? Bridgewater (2024 May 30). https://www.bridgewater.com/research-and-insights/are-we-on-the-brink-of-an-ai-investment-arms-race

24. Paglen, T.: Operational Images. e-flux journal. 59:3 (2014).

25. Pantenburg, V.: Working images: Harun Farocki and the operational image. In: Image Operations. Manchester University Press, pp. 49–62 (2016).

26. Parikka, J.: Operational Images: Between Light and Data. e-flux journal (2023 Feb). https://web.archive.org/web/20230530050701/https://www.e-flux.com/journal/133/515812/operational-images-between-light-and-data/

27. Celis Bueno, C.: Harun Farocki’s Asignifying Images. tripleC. 15:740–54 (2017). https://doi.org/10.31269/triplec.v15i2.874

28. Romele, A., Severo, M.: Microstock images of artificial intelligence: How AI creates its own conditions of possibility. Convergence: The International Journal of Research into New Media Technologies. 29:1226–42 (2023). https://doi.org/10.1177/13548565231199982

29. Moran, R.E., Shaikh, S.J.: Robots in the News and Newsrooms: Unpacking Meta-Journalistic Discourse on the Use of Artificial Intelligence in Journalism. Digital Journalism. 10:1756–74 (2022). https://doi.org/10.1080/21670811.2022.2085129

30. De Dios Santos, J.: On the sensationalism of artificial intelligence news. KDnuggets (2019). https://www.kdnuggets.com/on-the-sensationalism-of-artificial-intelligence-news.html/

31. Rogers, R.: Aestheticizing Google critique: A 20-year retrospective. Big Data & Society. 5:2053951718768626 (2018). https://doi.org/10.1177/2053951718768626

32. Kelly, J.: When news orgs turn to stock imagery: An ethics Q & A with Mark E. Johnson. Center for Journalism Ethics (2019 Apr 9). https://ethics.journalism.wisc.edu/2019/04/09/when-news-orgs-turn-to-stock-imagery-an-ethics-q-a-with-mark-e-johnson/

33. Papaevangelou, C.: Funding Intermediaries: Google and Facebook’s Strategy to Capture Journalism. Digital Journalism. 1–22 (2023). https://doi.org/10.1080/21670811.2022.2155206

34. Steyerl, H.: In Defense of the Poor Image. e-flux journal (2009). https://www.e-flux.com/journal/10/61362/in-defense-of-the-poor-image/

35. Bucher, T.: Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society. 14:1164–80 (2012). https://doi.org/10.1177/1461444812440159

36. Bucher, T.: If…Then: Algorithmic Power and Politics. Oxford University Press (2018).

37. Gillespie, T.: Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press, New Haven (2018).

38. Jasanoff, S., Kim, S.-H. (eds.): Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. University of Chicago Press, Chicago, IL. Accessed 2022 Jun 26. https://press.uchicago.edu/ucp/books/book/chicago/D/bo20836025.html

39. O’Neill, J.: Social Imaginaries: An Overview. In: Peters, M.A. (ed.) Encyclopedia of Educational Philosophy and Theory. Springer Singapore, pp. 1–6 (2016). https://doi.org/10.1007/978-981-287-532-7_379-1

40. Law, H.: Computer vision: AI imaginaries and the Massachusetts Institute of Technology. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00389-z

41. Nguyen, D., Hekman, E.: The news framing of artificial intelligence: a critical exploration of how media discourses make sense of automation. AI & Soc. 39:437–51 (2024). https://doi.org/10.1007/s00146-022-01511-1

42. Woo, L.J., Henriksen, D., Mishra, P.: Literacy as a Technology: a Conversation with Kyle Jensen about AI, Writing and More. TechTrends. 67:767–73 (2023). https://doi.org/10.1007/s11528-023-00888-0

43. Kvåle, G.: Critical literacy and digital stock images. Nordic Journal of Digital Literacy. 18:173–85 (2023). https://doi.org/10.18261/njdl.18.3.4

44. Tacheva, Z., Appedu, S., Wright, M.: AI as “Unstoppable” and Other Inevitability Narratives in Tech: On the Entanglement of Industry, Ideology, and Our Collective Futures. AoIR Selected Papers of Internet Research (2024). https://doi.org/20250206083707000

What Do I See in ‘Ways of Seeing’ by Zoya Yasmine

At the top, there is a diptych contrasting a whimsical pastel scene with large brown rabbits, a rainbow, and a girl in a red dress on the left, and a grid of numbered superpixels on the right - emphasizing the difference between emotive seeing and analytical interpretation. At the bottom, there is black text which says 'What Do I See in Ways of Seeing' by Zoya Yasmine. In the top right corner, there is text in a maroon text box which says 'through my eyes blog series'.

Artist contributions to the Better Images of AI library have always played an important role in fostering understanding and critical thinking about AI technologies and their contexts. Images facilitate deeper inquiries into the nature of AI, its history, and its ethical, social, political and legal implications.

When artists create better images of AI, they often have to grapple with these narratives in their attempts to more realistically portray the technology and point towards its strengths and weaknesses. Furthermore, as artists freely share these images in our library, others can benefit from learning about the artists' own motivations (which are provided in the image descriptions), and the images can also inspire users' own musings.

In this series of blog posts, some of our volunteer stewards are each taking turns to choose an image from the Archival Images of AI collection and unpack the artist’s processes and explore what that image means to them. 

At the end of 2024, we released the Archival Images of AI Playbook with AIxDESIGN and the Netherlands Institute for Sound and Vision. The playbook explores how existing images – especially those from digital heritage collections – can help us craft more meaningful visual narratives about AI. Through various image-makers’ own attempts to make better images of AI, the playbook shares numerous techniques which can teach you how to transform existing images into new creations. 

Here, Zoya Yasmine unpacks ‘Ways of Seeing’, image-maker Nadia Piet’s own better image of AI, created for the playbook. Zoya comments on how it is a really valuable image for depicting the way that text-to-image generators ‘learn’ to generate their output creations. Zoya considers how this image relates to copyright law (she’s a bit of an intellectual property nerd) and the discussions about whether AI companies should be able to use individuals’ work to train their systems without explicit consent or remuneration.

ALT text: Diptych contrasting a whimsical pastel scene with large brown rabbits, a rainbow, and a girl in a red dress on the left, and a grid of numbered superpixels on the right - emphasizing the difference between emotive seeing and analytical interpretation.

Nadia Piet + AIxDESIGN & Archival Images of AI / Better Images of AI / Ways of Seeing / CC-BY 4.0

‘Ways of Seeing’ by Nadia Piet 

This diptych contrasts human and computational ways of seeing: one riddled with memory and meaning, the other devoid of emotional association and capable of structural analysis. The left pane shows an illustration from Tom Seidmann-Freud’s Book of Hare Stories (1924) which portrays a whimsical, surreal scene that is both playful and uncanny. On the right, the illustration is reduced to a computational rendering, with each of its superpixels (16×16) fragmented and sorted by visual complexity with a compression algorithm. 
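For readers curious about the mechanics, the computational half of the diptych can be approximated in a few lines of code. The sketch below is only an illustration of the general idea, not Piet's actual process: the 16×16 grid and the use of compressed byte size as a proxy for 'visual complexity' are assumptions.

```python
# A minimal sketch of the process described above: split an image into a
# 16x16 grid of tiles and reorder the tiles by a crude "visual complexity"
# measure. Tile size and the zlib-compressed-length proxy are assumptions;
# Piet's actual compression algorithm is not documented here.
import zlib
import numpy as np  # not strictly needed; kept for further experimentation
from PIL import Image

def tile_complexity_sort(path: str, grid: int = 16) -> Image.Image:
    img = Image.open(path).convert("RGB")
    w, h = img.size
    tw, th = w // grid, h // grid
    tiles = []
    for row in range(grid):
        for col in range(grid):
            box = (col * tw, row * th, (col + 1) * tw, (row + 1) * th)
            tile = img.crop(box)
            # Less compressible tiles (more visual detail) score higher.
            score = len(zlib.compress(tile.tobytes()))
            tiles.append((score, tile))
    tiles.sort(key=lambda t: t[0])  # least to most complex
    out = Image.new("RGB", (tw * grid, th * grid))
    for i, (_, tile) in enumerate(tiles):
        out.paste(tile, ((i % grid) * tw, (i // grid) * th))
    return out
```

Run on any illustration, this reorders the picture into a gradient of detail, stripping away composition and meaning while retaining only measurable structure; a small demonstration of the 'analytical' way of seeing the diptych depicts.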


Copyright and training AI systems 

Training AI systems requires substantial amounts of input data – from images, videos, texts and other content. Based on the data from these materials, AI systems can ‘learn’ how to make predictions and provide outputs. However, many of the materials used to train AI systems are protected by copyright owned by other parties, which raises complex questions about ownership and the legality of using such data without permission.

In the UK, Getty Images filed a lawsuit against Stability AI (developers of a text-to-image model called Stable Diffusion) claiming that 7.3 million of its images were unlawfully scraped from its website to train Stability AI’s model. Similarly, Mumsnet has launched a legal complaint against OpenAI, the developer of ChatGPT, accusing the AI company of scraping content from its site (with over 6 billion words shared by community members) without consent.

The UK’s Copyright, Designs and Patents Act 1988 (the Act) provides companies like Getty Images and Mumsnet with copyright protection over their databases and assets. So unless an exception applies, permission (through a licence) is required if other parties wish to reproduce or copy the content. Section 29A of the Act provides an exception which permits copies of any copyright-protected material for the purposes of Text and Data Mining (TDM) without a specific licence. But this lenient provision is for non-commercial purposes only. Although the status of AI systems like Stable Diffusion and ChatGPT has not yet been tested before the courts, they are likely to fall outside the scope of non-commercial purposes.

TDM is the automated technique used to extract and analyse vast amounts of online materials to reveal relationships and patterns in the data. TDM has become an increasingly valuable tool for training lucrative generative AI systems on mass amounts of materials scraped from the Internet. It becomes clear that AI models cannot be developed or built efficiently without input data that has been created by human artists, researchers, writers, photographers, publishers, and creators. However, as much of this work is being used without payment or attribution, big tech companies are essentially ‘freeriding’ on the works of the creative industries, which have invested significant time, effort, and resources into producing such rich works.


How does this image relate to current debates about copyright and AI training? 

When I saw this image, it really prompted me to think about the training process of AI systems and the purpose of the copyright system. ‘Ways of Seeing’ has stimulated my own thoughts about how computational models ‘learn’ and ‘see’ in contrast to human creators.

Text-to-image AI generators (like Stable Diffusion or Midjourney) are repeatedly trained on thousands of images, which allows the models to ‘learn’ to identify patterns, like what common objects and colours look like, and then reproduce these patterns when instructed to create new images. While Piet’s image has been designed to illustrate a ‘compression algorithm’ process, I think it also serves as a useful visual for reflecting how AI processes visual data computationally, reducing it to pixels, patterns, or latent features.

It’s important to note that the images generated by AI models will not necessarily be exact copies of the original images used in the training process; instead, they are statistical approximations of the training data that has informed the model’s overall understanding of how objects are represented.

It’s interesting to think about this in relation to copyright and what this legal framework serves to protect. Copyright stands to protect the creative expression of works – for example, the lighting, exposure, filter, or positioning of an image – but not the ideas themselves. The reason that copyright law focuses on these elements is that they reflect the creator’s own unique thoughts and originality. However, as Piet’s illustration usefully demonstrates, what is significant about the AI training process for copyright law is that TDM is often not used to extract the protected expression of the materials.

To train AI models, it is often the factual elements of the work that are the most valuable (as opposed to the creative aspects). The training process relies on the broad visual features of the images, rather than specific artistic choices. For example, when training text-to-image models, TDM is not often used to extract data about the lighting techniques employed to make an image of a cat particularly appealing. Instead, access to images of cats detailing the features that resemble a cat (fur, whiskers, big eyes, paws) is what’s important. In Piet’s image, the protectable parts of the illustration from the ‘Book of Hare Stories’ would subsist in the artistic style and execution – for example, the way that the hare and other elements are drawn, the placement and interaction of the elements, and the overall design of the image.

The specific challenge for copyright law is that AI companies are unable to capture these ‘unprotectable’ factual elements of materials without making a copy or storing the protected parts (Lemley and Casey, 2020). I think Nadia’s image really highlights the transformation of artwork into fragmented ‘data’ for training systems which challenges our understanding of creativity and originality. 

My thoughts above are not to suggest that AI companies should be able to freely use copyright-protected works as training data for their models without remunerating or seeking permission from copyright owners. Instead, the way that TDM and generative AI ‘re-imagine’ the value of these ‘unprotectable’ elements means that AI companies still freeride on creators’ materials. Therefore, AI companies should be required to explicitly license the copyright-protected materials used to train their systems so creators are provided with proper control over their works (you can read more about my thoughts here).

Also, I do not deny that there are generative AI systems that aim to reproduce a particular artist’s style – see here. In these instances, I think it would be easier to prove copyright infringement, since these are clear reproductions of ‘protected elements’. However, if this is not the purpose of the AI tool, developers try to avoid the outputs replicating the training data too closely, as this can more easily open them up to copyright infringement claims over both the input (as discussed in this piece) and the output image (see here for a discussion).


My favourite part of Nadia Piet’s image

I think my favourite part of the image is the choice of illustration used to represent computational processing. As Nadia writes in her description, Tom Seidmann-Freud’s illustration depicts a “whimsical, surreal scene that is both playful and uncanny”. Tom, an Austrian-Jewish painter and children’s book author and illustrator (and also Sigmund Freud’s niece), led a short life; she died of an overdose of sleeping pills in 1930, at age 37, after the death of her husband a few months prior.

“The Hare and the Well” (left), “Fable of the Hares and the Frogs” (middle), and “Why the Hare Has No Tail” (right) by Tom Seidmann-Freud, via the Public Domain Review

After Tom’s death, the Nazis came to power and attempted to destroy much of the art she had created as part of the purge of Jewish authors. Luckily, Tom’s family and art lovers were able to preserve much of her work. I think Nadia’s choice of this image critiques what might be ‘lost’ when rich, meaningful art is reduced to AI’s structural analysis. 

A second point, although not related exactly to the image, is the very thoughtful title, ‘Ways of Seeing’. ‘Ways of Seeing’ was a 1972 BBC television series and book created by John Berger. In the series, Berger criticised traditional Western cultural aesthetics by raising questions about hidden ideologies in visual images, like the male gaze embedded in the female nude. He also examined what had changed in our ‘ways of seeing’ between the time the art was made and the present day. Side note: I think Berger would have been a huge fan of Better Images of AI.

In a similar vein, Nadia has used Seidmann-Freud’s art as a way to explore new parallels with technology like AI which would not have been thought about at the time the work was created. In addition, Nadia’s work serves as an invitation to see and understand AI differently, and, like Berger’s, her work supports artists around the world.


The value of Nadia’s ‘better image of AI’ for copyright discussions

As Nadia writes in the description, Tom Seidmann-Freud’s illustration was derived from the Public Domain Review, where it is written that “Hares have been known to serve as messengers between the conscious world and the deeper warrens of the mind”. From my perspective, Nadia’s whole image acts as a messenger to convey information about the two differing modes of seeing between humans and AI models. 

We need better images of AI like this, especially for the purposes of copyright law, so we can have more meaningful and informed conversations about the nature of AI and its training processes. All too often in conversations about AI and creativity, the images used depict humanoid robots painting on a canvas or hands snatching works.

‘AI art theft’ illustration by Nicholas Konrad (Left) and Copyright and AI image (Right)

These images create misleading visual metaphors that suggest that AI is directly engaging in creative acts in the same way that humans do. Additionally, visuals showing AI ‘stealing’ works reduce the complex legal and ethical debates around copyright, licensing, and data training to overly simplified, fear-evoking concepts.

Thus, better images of AI, like ‘Ways of Seeing’, can serve a vital role as a messenger to represent the reality of how AI systems are developed. This paves the way for more constructive legal dialogues around intellectual property and AI that protect creators’ rights, while allowing for the development of AI technologies based on consented, legally acquired datasets.


About the author

Zoya Yasmine (she/her) is a current PhD student exploring the intersection between intellectual property, data, and medical AI. She grew up in Wales and in her spare time she enjoys playing tennis, puzzling, and watching TV (mostly Dragon’s Den and Made in Chelsea). Zoya is also a volunteer steward for Better Images of AI and part of many student societies including AI in Medicine, AI Ethics, Ethics in Mathematics & MedTech. 


This post was also kindly edited by Tristan Ferne – lead producer/researcher at BBC Research & Development.


If you want to contribute to our new blog series, ‘Through My Eyes’, by selecting an image from the Archival Images of AI collection and exploring what the image means to you, get in touch (info@betterimagesofai.org)

‘Weaved Wires Weaving Me’ by Laura Martinez Agudelo

At the top, a digital collage featuring a computer monitor with circuit board patterns on the screen. A Navajo woman is seated on the edge of the screen, appearing to stitch or fix the digital landscape with her hands. Blue digital cables extend from the monitor, keyboard, and floor, connecting the image elements. Beneath, there is text in black: 'Weaved Wires Weaving Me' by Laura Martinez Agudelo. In the top right corner, there is a text box in maroon with text in white: 'through my eyes blog series'.

Artist contributions to the Better Images of AI library have always played an important role in fostering understanding and critical thinking about AI technologies and their contexts. Images facilitate deeper inquiries into the nature of AI, its history, and its ethical, social, political and legal implications.

When artists create better images of AI, they often have to grapple with these narratives in their attempts to more realistically portray the technology and point towards its strengths and weaknesses. Furthermore, as artists freely share these images in our library, others can benefit from learning about the artists' own motivations (which are provided in the image descriptions), and the images can also inspire users' own musings.

In our blog series, “Through My Eyes”, some of our volunteer stewards take turns selecting an image from the Archival Images of AI collection. They delve into the artist’s creative process and explore what the image means to them—seeing it through their own eyes.

At the end of 2024, we released the Archival Images of AI Playbook with AIxDESIGN and the Netherlands Institute for Sound and Vision. The playbook explores how existing images – especially those from digital heritage collections – can help us craft more meaningful visual narratives about AI. Through various image-makers’ own attempts to make better images of AI, the playbook shares numerous techniques which can teach you how to transform existing images into new creations. 

Here, Laura Martinez Agudelo shares her personal reflections on ‘Weaving Wires 1’ – Hanna Barakat’s own better image of AI that was created for the playbook. Laura comments on how the image uncovers the hidden labor of Navajo women behind the assembly of microchips in Silicon Valley – inviting us to confront the oppressive cultural conditions of conception, creation and mediation in the technology industry’s approach to innovation.


Digital collage featuring a computer monitor with circuit board patterns on the screen. A Navajo woman is seated on the edge of the screen, appearing to stitch or fix the digital landscape with her hands. Blue digital cables extend from the monitor, keyboard, and floor, connecting the image elements.

Hanna Barakat + AIxDESIGN & Archival Images of AI / Better Images of AI / Weaving Wires 1 / CC-BY 4.0


Cables came out and crossed my mind 

Weaving wires 1 by Hanna Barakat is about hidden histories of computing labor. As explained in the image’s description, her digital collage is inspired by the history of computing in 1960s Silicon Valley, where the Fairchild Semiconductor company employed Navajo women for intensive tasks such as assembling microchips. This work (done with their hands and their digits) was a way for these women to provide for their families in an economically marginalized context.

At the time, this labor was framed as legitimizing the transfer of weaving practices as a cultural contribution to technological innovation. That legitimation proves illusory: it purports to reconcile the supposedly unchanging character of weaving as heritage with the constant renewal of global industry, while presupposing the non-recognition of Navajo women’s labor and a techno-cultural, gendered transaction. Their work is diluted in meaning and action, and overlooked in the history of computing.

In Weaving wires 1, we can see a computer monitor with circuit board patterns on the screen, and a juxtaposed woven design. Two potential purposes are in dialogue in the figure of the woman sitting at the edge of the screen, suspended against a white background: is she stitching, fixing, or both, as she weaves and prolongs the wires? These blue wires extend from the monitor, keyboard and beyond. The woman seems to be modifying or constructing a digital landscape with her own hands, leading us to remember the place where these materialities come from, and the memories they connect to.

Since my mother tongue is Spanish, a distant memory of the word “Navajo” and the image of weaving women appeared. “Navajo” is a Spanish adaptation of the Tewa Pueblo word navahu’u, which means “farm fields in the valley”. The Navajo people call themselves Diné, literally meaning “The People”. At this point, I began to think about the specific socio-spatial conditions of Navajo/Diné women at that time and their misrepresentation today. When I first saw the collage, I felt these cables crossing my own screen. Many threads began to unravel in my head in the form of question marks. I wondered how older and younger generations of Navajo/Diné women have experienced (and in other ways inherited) this hidden labor associated with the transformation of the valley and their community. The image acts as a visual opposition to the geographic and social identification of Silicon Valley as it is presented, for example, in the media. So now, these wires expand the materiality to reveal their history. Hanna creatively represents the connection between key elements of this theme. Let’s explore some of her artistic choices.

Recoded textures as visual extensions 

Hanna Barakat is a researcher, artist and activist who studies emerging technologies and their social impact. I discovered her work thanks to the Archival Images of AI project (Launch & Playtest). Weaving wires 1 is part of a larger project of Hanna’s that proposes a creative dialogue between textures and technology. Hanna plays with intersections of visual forms to raise awareness of the social, racial and gender issues behind technologies. Weaving wires 1 reconnected me with the importance of questioning the human and material extractive conditions in which technological devices are produced.

As a lecturer in (digital) communication, I’m often looking for visual support on topics such as the socio-economic context in which the Internet appeared, the evolution of the Web, the history of computer culture, and socio-technical theories and examples to study technological innovation, its problems and ethical challenges. The visual narratives are mostly uniform, and the graphic references are also gendered. Women’s work is misrepresented most of the time (no, those women in front of the big computers are not just models or assistants; they have full names and they are the official programmers and coders. Take a look at the work of Kathy/Kathryn Kleiman… Unexplored archives are waiting for us!).

When I visually interacted with Weaving wires 1 and read its source of inspiration (I actually used and referenced the image for one of my lectures), I realized once again the need to make visible the herstory (a term coined in the 1960s as a feminist critique of conventional historiography) of technological innovation. Sometimes, in the rush of life in general (and in specific moments like the preparation of a lecture, in my case), we forget to take the time and distance to explore other ways of sharing knowledge (with the students) and to rethink how we approach some essential topics for a better understanding of the socio-technical metamorphosis of our society.

Going beyond assumed landmarks

In order to understand hidden social realities, we might question our own landmarks. For me, “landmarks” could be both consciously (culturally) confirmed ideas and visual/physical evidence of the existence of boundaries or limits in our (representation of) reality. Hanna’s image offers an insight into the importance of going beyond some established landmarks. This idea, as a result of the artistic experience, highlights questions such as: where did the devices we use every day come from, and whose labour created them? And in what other forms are these conditions extended through time and space, and for whom? You might have some answers, references, examples, or even names coming to mind right now.

In Weaving wires 1, and in Hanna’s artistic contribution, several essential points are raised. Some of them are often missing in discourses and practices of emerging technologies like AI systems: the recognition of the human labor that supports the material realities of technological tools, the intersection of race and gender, the roots of digital culture and industry, and the need to explore new visual narratives that reflect technology’s real conditions of production.

Fix, reconnect and reimagine

Hanna uses digital collage (but also techniques such as juxtaposition, overlaying and/or distortion – she explains her approach with examples in her artist log). She explores ways to honor the stories she conjures up by rejecting colonial discourses. For me, in the case of Weaving wires 1, these wires connect to our personal experiences with technological devices and memories of the digital transformation of our society. They could also represent the need to imagine and construct together, as citizens, more inclusive (technological) futures.

A digital landscape is somewhere there, or right in front of us. Weaving wires 1 will be extended by Hanna in Weaving wires 2 to question the meaning of the valley landscape itself and its borders. For now, some other transversal questions appear (still inspired by her first image) about deterministic approaches to studying data-driven technology and its intersection with society: what fragments or temporalities of our past are we willing and able to deconstruct? Which ones filter the digital space and ask for other ways of understanding? How can we reconnect with the basic needs of our world if different forms of violence (physical and symbolic), in this case in human labor, are not only hidden, but avoided, neglected or unrepresented in the socio-digital imaginary?

It is such a necessary discussion, one that confronts our collective memory and the concrete experiences in between. Weaving wires 1 invites us to confront the oppressive cultural conditions of conception, creation and mediation in the technology industry’s approach to innovation. With this image, Hanna makes a meaningful contribution. She deconstructs simplistic assumptions and visual perspectives to actually create ‘better images of AI’!


About the author

Laura Martinez Agudelo is a Temporary Teaching and Research Assistant (ATER) at the University Marie & Louis Pasteur – ELLIADD Laboratory. She holds a PhD in Information and Communication Sciences. Her research interests include socio-technical devices and (digital) mediations in the city, visual methods and modes of transgression and memory in (urban) art.   

This post was also kindly edited by Tristan Ferne – lead producer/researcher at BBC Research & Development.


If you want to contribute to our new blog series, ‘Through My Eyes’, by selecting an image from the Archival Images of AI collection and exploring what the image means to you, get in touch (info@betterimagesofai.org)

Hanna Barakat’s image collection & the paradoxes of depicting diversity in AI history

A black-and-white image depicting the Bombe, an early electromechanical codebreaking machine used during World War II. In the foreground, the shadow of a woman in vintage clothing is cast on a man changing the machine's cables.

As part of a collaboration between Better Images of AI and Cambridge University’s Diversity Fund, Hanna Barakat was commissioned to create a digital collage series depicting diverse images about the learning and education of AI at Cambridge. Hanna’s series of images complements our competition, opened to the public at the end of last year, which invited submissions for better images of AI from the wider community – you can see the winning entries here.

In the blog post below, Hanna Barakat talks about her artistic process and reflections upon contributing to this collection. Hanna provides her thoughts on the challenges of creating images that communicate about AI histories and the inherent contradictions that arise when engaging in this work.

The purpose behind the collection

As outlined by the Better Images of AI project, normative depictions of AI continue to perpetuate negative gender and racial stereotypes about the creators, users, and beneficiaries of AI. Moreover, they misdirect attention from the harms implicit in the real-life applications of the technology. The lack of diversity—and the problematic interpretation of diversity—in AI-generated images is not merely an ‘output’ issue that can be easily fixed. Instead, it stems from deep-rooted systemic issues that reflect a long history of bias in data science.

As a result, even so-called ‘diverse’ images created by AI often end up reinforcing these harms [Fig.1]. The image below adopts token diversity tropes like a wheelchair, different skin tones, and a mix of genders, superficially appearing diverse without addressing deeper issues like context, intersectionality, and the inclusion of underrepresented groups in leadership roles. The teacher remains an older, able-bodied white man, and the students all appear to be conventionally attractive, similarly sized individuals wearing almost-matching clothing. The image also shows a fictional blue holographic robot in the centre, misrepresenting what generative AI is and exaggerating the capabilities of the technology.

Figure 1. Image depicting an educational course on Generative AI.

As an academic institution exploring “vital questions about the risks and opportunities emerging with AI,” the Leverhulme Centre for the Future of Intelligence commissioned images that reflect a more nuanced depiction of those risks and opportunities. Specifically, they requested seven images representing the diversity in Cambridge’s teaching about AI, with the intention of using these images for courses, websites, and events programs.

Hanna’s artistic process

My process takes a holistic approach to “diversity” – aiming to avoid the “DEI-washing” images that reduce diversity to a gradient of brown bodies or the tokenization of marginalized groups in the name of “inclusion”, while often failing to acknowledge the positionality of the institutions utilizing such images.

Instead, my approach interrogates the development of AI technology, its history of computing in the UK, and the positionality of elite institutions such as Cambridge University to create thoughtful images about the education of AI at Cambridge.

Analog Lecture on Computing by Hanna Barakat & Cambridge Diversity Fund and Pas(t)imes in the Computer Lab by Hanna Barakat & Cambridge Diversity Fund

Through digital collages of open-source archival images, this series offers a critical visual depiction of education about AI. Collage is a way of moving against the archival grain – from reinserting the overlooked women who ran cryptanalysis of the Enigma machine at Bletchley Park, to surrealist depictions of a historically contextualized lecture about AI. By combining mixed-media layers, my artistic process seeks to weave together historical narratives and investigate the voices systemically overlooked and/or left out.

I carefully navigated the archive and relied on visual motifs of hands, strings, shadows, and data points. Throughout the series, these elements engage with the histories of UK computing as a starting point to expose the broader sociotechnical nature of AI. The use of anonymous hands becomes a way of encouraging reflection upon the human labor that underpins all machines. The use of shadows symbolizes the unacknowledged labor of marginalized communities throughout the Global Majority.

Turning Threads of Cognition by Hanna Barakat & Cambridge Diversity Fund

It is these communities on which the technological “process” has relied, and at whose expense “progress” has been achieved. I use an abstract interpretation of data points to symbolize the exchange of information and learning on university campuses. I was inspired by Ada Lovelace; the Cavendish Laboratory archive, which holds photos of the early histories of computing; the stories of the Cambridge Language Research Unit (CLRU) run by Margaret Masterman; and Jean Valentine and the many other Cambridge-educated women at Bletchley Park who made Alan Turing’s achievements possible.

Lovelace GPU by Hanna Barakat & Cambridge Diversity Fund

The challenges of creating images relating to the diverse history of AI

Nonetheless, I remain cautious about imbuing these images with too much subversive power. Like any nuanced undertaking, this project grapples with tensions: navigating the challenge of representing diverse bodies without tokenizing them; drawing from archival material while recognizing the imperialist incentives that shaped its creation; portraying education about AI in ways that are both literal and critically reflective, particularly in contexts where racial and ethnic diversity (in the histories of the UK) is not necessarily commonplace; and balancing a respect for the critical efforts of the CFI with an awareness of its positionality as an elite institution. On a practical level, I encountered challenges in accessing the limited number of images available, as many were not fully licensed for open access.

I list these tensions not as a means of demonstrating hypocrisy but, quite the opposite, to illuminate the complexities and inherent contradictions that arise when engaging in this work. By highlighting these points of friction, I am able to acknowledge the layered positionality that shapes both the process and the outcomes, emphasizing that such tensions are not obstacles to be avoided but essential facets of critically engaged practice.

If you want to read more about the processes behind Hanna’s work, view her Artist Log on the AIxDESIGN site. You can also learn how to make your own archival images of AI by exploring the Playbook that we released at the end of 2024 with AIxDESIGN and the Netherlands Institute for Sound and Vision.

Dr Aisha Sobey was behind the project, which was commissioned with funding from Cambridge Diversity Fund:

“This project grew from the desire of CFI, and multiple collaborations with Better Images of AI, to have better images of AI in relation to the teaching and learning we do at the Centre, and from my research into the ‘lookism’ of generative AI image models. I knew that asking for the combination of criteria to show anonymous, diverse people in images of AI learning would be tricky, but even as the project evolved to take a historical lens to reclaim lost histories, this proved to be a really difficult task for the artists.

The images created by Hanna and the entries to the prize competition showed some brilliant and unique takes on the prompt. Still, they often struggled to bring diverse people and Cambridge together. It points to the barriers to showing difference in an ethical way that doesn’t tokenise or exploit already marginalised groups – a challenge we didn’t solve in these images – and to the need for more diverse people in places like Cambridge to tell these stories. However, I am hopeful that the process has been valuable to illuminate different challenges of doing this kind of work and, further, that the images offer alternative and exciting perspectives to the representation of diversity in learning and teaching AI at the University.”

Artist Subjectivity Statement

In creating these images which seek to depict diversity, it is imperative to address the “experience of the knower.” Thus, consistent with a critical feminist framework, I feel it is important to share my identity and positionality as it undoubtedly shapes my artistic practice and influences my approach to digital technologies.

My name is Hanna Barakat. I am a 25-year-old science & technology studies researcher and collage artist.  I am a female-identifying Palestinian-American. While I was raised in Los Angeles, California, I am from Anabta, Palestine. Growing up in the Palestinian diaspora, my experience is informed by layers of systemic violence that traverse the digital-physical “divide.” I received my education from Brown University, a reputable university in the United States.

Brown University’s founders and benefactors participated in and benefited from the transatlantic slave trade. Brown University is built on the stolen lands of the Narragansett, Wôpanâak, and Pokanoket communities. In this light, I materially benefit from, and to some degree am harmed by, my location within systems of settler colonialism, whiteness, racial capitalism, Islamophobia, heteropatriarchy, and education inequality. My identity, lived experiences, and fraught relationship with technology inform my approach to artistic practice – which uses visual language as a tool to (1) critically challenge normative narratives about technology development and (2) imagine culturally contextualized and localized digital futures.

Winners of public competition with Cambridge Diversity Fund announced

An image with the text ‘Winners Announced!’ at the top in maroon. Below it in slightly lighter purple text it states: ‘Reihaneh Golpayegani for Women and AI’ and ‘Janet Turra for Ground Up and Spat Out’. Their two images are positioned on the image at a slant, each in opposite directions. At the bottom, there is a maroon banner with the text ‘University Diversity Fund’ in white, the CFI logo in white, and the Better Images of AI logo.

At the end of 2024, we launched a public competition with Cambridge Diversity Fund calling for images that reclaimed and recentred the history of diversity in AI education at the University of Cambridge.

We were so grateful to receive such a diverse range of submissions that provided rich interpretations of the brief and focused on really interesting elements of AI history.

Dr Aisha Sobey set and judged the challenge, which was enabled by funding from Cambridge Diversity Fund. Entries were judged on meeting the brief, the forms of representation reflected in the image, appropriateness, relevance, uniqueness, and visual appeal.

We are delighted to announce the winners and their winning entries:

First Place Prize

Awarded to Reihaneh Golpayegani for ‘Women and AI’

The left side incorporates a digital interface, showing code snippets, search queries, and comments referencing Woolf’s ideas, including discussions about Shakespeare’s fictional sister, Judith. The overlay of coding elements highlights modern interpretations of Woolf’s work through the lens of data and AI.

The center depicts a dimly lit, minimalist room with a window, desk, and wooden floors and cupboards. The right side features a collage of Cambridge landmarks, historical photographs of women, and a black and white figure in Edwardian attire. There is a map of Cambridge in the background, which is overlayed with images of old fountain pens and ink, books, and a handwritten letter.

This image is inspired by Virginia Woolf’s A Room of One’s Own. According to this essay, which is based on her lectures at Newnham College and Girton College, Cambridge University, two things are essential for a woman to write fiction: money and a room of her own. This image adds a new layer to this concept by bringing it into the AI era.

Just as Woolf explored the meaning of “women and fiction”, defining “women and AI” is quite complex. It could refer to algorithms’ responses to inquiries involving women, the influence of trending comments on machine stereotypes, or the share of women in big tech. The list can go on to involve many different experiences that women have with AI as developers, users, investors, and beyond. With all its complexity, Woolf’s ideas offer us insight: allocating financial resources and providing safe spaces – in reality and online – is necessary for women to have positive interactions with AI and to be well-represented in this field.

Download ‘Women and AI’ from the Better Images of AI library here

About the artist:

Reihaneh Golpayegani is a law graduate and digital art enthusiast. Reihaneh is interested in exploring the intersection of law, art, and technology by creating expressive artworks and pursuing her master’s studies in this area.

Commendation Prize

Awarded to Janet Turra for ‘Ground Up and Spat Out’

The outputs of Large Language Models do often seem uncanny, leading people to compare the abilities of these systems to thinking, dreaming or hallucinating. This image is intended as a tongue-in-cheek dig, suggesting that AI is, at its core, just a simple information ‘meat grinder,’ feeding off the words, ideas and images on the internet, chopping them up and spitting them back out. The collage also makes the point that when we train these models on our biased, inequitable world, the responses we get cannot possibly differ from the biased and inequitable world that made them.

Download ‘Ground up and Spat Out’ from the Better Images of AI library here.

About the artist:

Janet Turra is a photographer, ceramicist and mixed media artist based in East Cork, Ireland. Her fine arts career spans over 25 years, a career which has taken many turns in rhythm with the changing phases of her life. Continually challenging the concept of perception, however, her art has taken on many themes including self, identity, motherhood and more recently our perception of AI and how it relates to the female body. 

Background to the competition

Cambridge and LCFI researchers have played key roles in identifying how current stock images of AI can perpetuate negative gender and racial stereotypes about the creators, users, and beneficiaries of AI.

The winning entries will be used for outward-facing posting on social media, University of Cambridge websites, internal communications on student sites and Virtual Learning Environments. They will also be made available for wider Cambridge programs to use for their teaching and events materials. They are also both available in the Better Images of AI library here and here for anyone to freely download and use under a Creative Commons License.

“This project grew from the desire of CFI and multiple collaborations with Better Images of AI to have better images of AI in relation to the teaching and learning we do at the Centre, and from my research into the ‘lookism’ of generative AI image models. I am hopeful that the process has been valuable to illuminate different challenges of doing this kind of work and further that the images offer alternative and exciting perspectives to the representation of diversity in learning and teaching AI at the University.” – Aisha Sobey, University of Cambridge (Postdoctoral Researcher)

An additional collection of images from Hanna

As part of this project, collage artist and scholar, Hanna Barakat, was commissioned to design a collection of images which draw upon her work researching AI narratives and marginalised communities to uncover and reclaim diverse histories. You can find the collection in the Better Images of AI library and we’ll also be releasing an additional blog post which focuses on Hanna’s collection as well as the challenges/reflections on this competition brief.

Public Competition for Better Images of (teaching and learning) AI!

Orange and red picture of people at computer terminals with networks overlaying them

Call for images: Reclaiming and Recentering the History of Diversity in AI Education at the University of Cambridge

Cambridge and LCFI researchers have played key roles in identifying how current stock images of AI can perpetuate negative gender and racial stereotypes about the creators, users, and beneficiaries of AI. Following on from this, a project has been set up to increase the visible diversity of the images used to represent AI teaching and events programs in Cambridge.

The first phase of the project was to commission the exciting collage artist and emerging technologies scholar Hanna Barakat to provide a set of images, drawing on her work researching AI narratives to uncover and reclaim diverse histories.

We’re now delighted to open up the challenge and invite public submissions of ‘stock quality’ images by the 30th of December 2024 (11:59PM UTC). The competition is open not only to the University of Cambridge (UK) community but also to anyone who wishes to contribute to improving narratives about how teaching and learning in AI-related fields can be conceptualised.

The recent release of the new Archival Images of AI Playbook means that even those with no artistic or design background can have a go, while existing designers and art students can bring their own ideas and contribute to making more inclusive and less exclusionary images.

In addition to our thanks for adding to the visual discourse, the University of Cambridge has made available a couple of prizes:

First Prize: £250

Commendation Prize: £100

Entries will be judged by representatives of Better Images of AI, LCFI and the University of Cambridge.

Further Information

The Leverhulme Centre for the Future of Intelligence and the University Diversity Fund want to increase the diversity of the images that are used to represent AI-related teaching and event programmes in the University of Cambridge.

The entries will be judged on the following criteria:

  • How the images reflect the brief: ‘reclaiming and recentering the history of diversity in AI education in the University of Cambridge’
  • The inclusion of creative or surprising elements in the image
  • The appropriateness of the image to be used for teaching and events
  • The forms of representation included in the image
  • Aesthetic quality

Visual Guidelines

Please read the Guide to making Better Images of AI to see what tropes to avoid and what might make a good representation related to AI.

Image uses

These include images used for outward-facing posting on social media, University of Cambridge websites, internal communications on student sites and Virtual Learning Environments. They will also be made available for wider Cambridge programs to use for their teaching and events materials. Agreed images will also be added to the Better Images of AI website under a Creative Commons licence with artist attribution, available for wider public download.

Licences

You can use any techniques and source materials that work for your vision. However, all materials need to have the correct licence for use and you need to have full ownership of the end product, so we recommend using images from the Creative Commons Portal with a ‘free to be used and remixed’ licence.

Privacy

Please also ensure that any people featured in images are anonymised.

Techniques / style

Any techniques and approaches are welcome as long as they result in high-quality digital images. This can include digital art, photography, collage and illustration; we also invite artists to try different image techniques from the Archival Images of AI Playbook. We do have specifications around the use of AI image generators – see below.

AI generated Art

Although inclusion in the Better Images of AI library is not essential for the winning entry, the library will only accept submissions which use Adobe Firefly (which uses consented images, compensates artists and labels outputs as AI-generated), with licensed or original images as visual prompts.

Format

Entries must be a .png file submitted to info@betterimagesofai.org. The winning entries will be made available for open access use under a Creative Commons non-profit licence through the University of Cambridge, and ideally also in the Better Images of AI library. Entrants may also be contacted to include their image in the open-access collection with an honourable mention.

Key dates

Competition opens: 9th of December 2024 (9:00AM UTC)

Competition closes: 30th December 2024 (11:59PM UTC)

Winners announced: January 2025


Further Information

Please contact info@betterimagesofai.org.

📚Book Review: Screening Big Data: Films That Shape Our Algorithmic Literacy  

Drawing on films and documentaries about big data, machine learning and AI, and on sociological and critical theories of AI, ‘Screening Big Data: Films That Shape Our Algorithmic Literacy’ by Gerald Sim discusses the role of popular media in the formation of algorithmic literacy.

In this blog post, Jenn Chubb explains how Sim’s book provides a rich and vital analysis of the socio-political dimensions of stories and visuals about AI which challenge audiences to think more carefully about the motivations and interests behind tech-driven media.

Having spent the past few years researching stories about AI and forgotten or overlooked aspects of AI literacy, I was delighted to read Gerald Sim’s book ‘Screening Big Data: Films That Shape Our Algorithmic Literacy’. I have seen all of the films he discusses and have only begun to scratch the surface of the messages they propagate. In this book, Sim goes further and demonstrates in a sophisticated way how films and documentaries about AI are directing the public response. He calls on the reader to identify the influence of these stories, guiding them to decode the motivations and interests behind tech-driven stories.

For those of us concerned about the imagery associated with AI, there is so much to learn from this book. In addition to framing the book in terms of ‘cinema’s reliance on visuality’, Sim writes that “research in science communication has long held that visual literacy is crucial.” In fact, the images we see on screen are an important part of how the public form opinions about technology because they are often ideologically framed and carefully curated. Understanding this requires a critical “visual literacy” and countervisuality that media scholars, drawing on film theory, use to challenge the seemingly transparent portrayals of technology in the media.

The book cover of ‘Screening Big Data: Films That Shape Our Algorithmic Literacy’ by Gerald Sim

I will start by saying that Sim has a politically sharp lens, and skillfully makes connections between culture, technology, and politics, challenging the reader to question whose interests popular culture serves. The films given close readings – ‘Minority Report’, ‘Moneyball’, ‘The Social Dilemma’ and ‘Coded Bias’ – make for great case studies. If I am being picky, the latter documentaries carry even greater responsibility not to adopt problematic framings, which Sim acknowledges. Yet according to Sim there is a commonality: they are reflective of a network of media and technology institutions which are exerting political power in favour of their own interests.

Screening Big Data begins with a pointed example. It’s not an example from science fiction (in fact, Sim is clear that his focus is on narrow AI and algorithms, not the stuff of superintelligence or AGI). Instead, the book begins with the example of polling data, used to exemplify that algorithms have a political and human backbone. This is a useful device because, in the same way, films don’t just entertain; they propagate powerful ideologies. For Sim, stories aren’t neutral; they’re influential, and they drive public attitudes, spark policy discussions, and even sway governmental perspectives. Films, documentaries, and media coverage become, as Sim frames it, cultural drivers of ideology. It is particularly refreshing to me that Screening Big Data avoids Hollywood’s usual focus on superintelligent AI tropes, such as those seen in ‘Ex Machina’ or ‘Blade Runner.’ As he rightly points out, there has been great work on this by scholars working on the Global AI Narratives project and more. Instead, Sim examines narrow forms of AI and machine learning – the systems currently impacting society, from facial recognition to predictive policing. These technologies, what Cathy O’Neil called “Weapons of Math Destruction”, shape real, everyday lives in ways that go largely unexamined.

Algorithmic and social imaginaries 

From this position, Sim borrows from Science and Technology Studies (STS) literature and the Frankfurt School of critical theory to consider the effects of film on public perception. With respect to the former, Sim argues that films contribute to what sociologists call ‘algorithmic imaginaries’ or, more simply put, the ways in which one might conceptualise and understand the potential and risks of algorithms. He draws on the works of scholars like Sheila Jasanoff and Taina Bucher to explore how these cultural narratives reinforce ideas about AI’s role in society. Sim’s account of the imaginary is rich, a lesson in algorithmic literacy itself.

However, Sim also notes the narrow focus of these portrayals, which often sidelines the broader societal impacts in favour of dramatic dystopian futures or reductive narratives. As Sim implies, the stories or scenes which reinforce polarisation are often the ones that get ‘stuck’ in cultural time and space. ‘Minority Report’ is a prime example of this, often praised for its predictive depictions of technology – spatial computing, biometric scanners, and gesture-based interfaces – much of which has endured and entered the real world today. Such depictions stick in the public consciousness, framing technology in polarising terms – either as a dystopian threat or as an empowering tool.

Technomedia industrial complex 

I mentioned in my introduction that Sim’s argument is framed in such a way that suggests these films are reflective of a network of media and technology institutions exerting political power in favour of their own interests. He explores the ‘technomedia industrial complex’, a web of media and tech institutions in which companies like Google, Netflix, and Facebook wield significant power. This is really at the heart of his book, and I am convinced by his articulation of the films as vehicles which ‘peddle industry ideology’ and technological fantasy (however prescient), and of the documentaries as simplifying complex issues, especially concerning science communication.

One such ideology has a long history. Films like ‘Moneyball’, for example, depict data as a ‘moral and virtuous truth’ and present data scientists as objective crusaders while downplaying the biases and ethical questions surrounding AI. This framing resonates with me – Sim has chosen examples which exemplify the assumption that data scientists are objective number crunchers, which, he states, provides ‘cover from scrutiny’. Apparently, it’s the qualitative aspect that needs fixing (isn’t that always the case?) – the human is qualitative, the machine is quantitative – and the human fails. Social science is in question, hard science is not. As a qualitative type who relies on ‘anecdotes and ad hoc thinking’ (..!..), I am convinced that these narratives reinforce the longstanding ‘two cultures’ debate. So too, they cause us to reflect on how we imagine the role of the scientist… (Clue: are they really all data geeks who can’t possibly grasp ethics…?)

One might think that including two documentaries would make for an interesting counterpoint to the stuff of fiction. However, while both raise technological literacy about pervasive technologies, they also arise from the techlash. Sim explains that the documentary ‘The Social Dilemma’ dramatises the dangers of algorithms but risks framing technology as something beyond human control, an idea that allows tech companies to shirk responsibility. I could not be in more agreement concerning the framing of this documentary, which opens with an apology for opening Pandora’s box from the very people who opened it, and which plays into the trope that humans have no control whatsoever. Meanwhile, ‘Coded Bias’, which rightly tackles facial recognition biases, can be criticised for simplifying complex geopolitical issues by focusing on surveillance concerns in countries like China without the same scrutiny of Western practices. The extent to which this is entirely supportive of our algorithmic literacy is then rightly questioned.

What difference does it make?

“The narrow truth about whether traditional film genres have been superseded may seem insignificant, but their continued relevance provides good reason to be wary of techno-determinist braggadocio and of how easy it is to be caught in the slipstream of techno-optimist celebration and techno-libertarian currents.” 

I think it’s important to note that Sim is not critical of the films for their artistic merit. He is simply calling for reflection on how they impact us. If I had to criticise, Sim’s book could focus more on the social and subtle emotional cues in the films – for instance, the dominance of white men narrating, or the manipulative musical scores directing our emotions towards the binary positions he warns us about. I might also look for further contrast in the cases used – for instance, seeking counterpoints in the framings of AI across commercial versus ‘art-house’ films, where intention will be very different. Perhaps that is where we might find strong examples to guide a more considered, responsible and nuanced approach to storytelling.

But that really is another story. His provocation extends beyond simply understanding AI. While critical, it is optimistic and stands for civic and public mobilisation – a shift toward the kind of resistance described by scholars Dan McQuillan and Kate Crawford against the disingenuous mantras of tech companies. In fact, I take comfort in the ways that Sim frames his view of the collective response to AI.

“If you would indulge in some optimism of a different sort, I might venture that what the integrity of genres reveals, is that we are more resilient to capitalist atomization than we realise.” 

These films direct our individual and collective response to technology, subtly encouraging acceptance or resistance. Responding requires education about algorithmic technology and an avoidance of its reification. We do well to scrutinise the technopolitics of storytelling and to critically engage with the media we consume, revealing the political and economic interests at work behind the scenes. This book is a crucial read for anyone interested in the hype of AI, and should be indispensable to anyone researching or teaching the socio-political and cultural aspects of AI in Higher Education.

Dr Jenn Chubb is a Lecturer in the Department of Sociology at the University of York, UK. Jenn’s research explores the societal and ethical implications of science and technology with a particular focus on the public perception of AI across the domains of science policy, education, health and the creative industries.

Sim, G. (2024). Screening Big Data: Films that Shape Our Algorithmic Literacy. Taylor & Francis.

You can hear more about the book on the New Books Network podcast.

Beneath the Surface: Adrien’s Artistic Perspective on Generative AI

The image features the title "Beneath the Surface: Adrien's Artistic Perspective on Generative AI." The background consists of colourful, pixelated static, creating a visual texture reminiscent of digital noise. In the centre of the image, there's a teal rectangular overlay containing the title in bold, white text.

May 28, 2024 – A conversation with Adrien Limousin, a photographer and visual artist, sheds light on the nuanced intersections between AI, art, and ethics. Adrien’s work delves into the opaque processes of AI, striving to demystify the unseen mechanisms and biases that shape our representations.


A vibrant, abstract image created by converting Street View screenshots from TIFF to JPEG, showing a pixelated, distorted classical building with columns. The sky features glitch-like, multicolored waves, blending greens, purples, pinks, and blues.

ADRIEN LIMOUSIN – Alterations (2023)

Adrien previously studied advertising and is now studying photography at the National Superior School of Photography (ENSP) in Arles. He is particularly drawn to the language of visual art, especially that emerging from new technologies.

A cluster of coloured pixels made from random Gaussian noise taking up the whole canvas, representing a not-yet-denoised AI-generated image; digital pointillism

Fig 1. Adrien Limousin / Better Images of AI / Non-image / CC-BY 4.0

Non-image

Adrien was drawn to the ‘Better Images of AI’ project after recognising the need for more nuanced and accurate representations of AI, particularly in journalism. In our conversation, I asked Adrien about his approach to creating the image he submitted to Better Images of AI (Fig 1.).


> INTERVIEWER: Can you tell me about your thinking and process behind the image you submitted?

> ADRIEN: I thought about how AI-generated images are created. The process involves taking an image from a dataset, which is progressively reduced to random noise. This noise is then “denoised” to generate a new image based on a given prompt. I wanted to try to find a breach or the other side of the opaqueness of these models. We only ever see the final result—the finished image—and the initial image. The intermediate steps, where the image is transitioning from data to noise and back, are hidden from us.

> ADRIEN: My goal with “Non-image” was to explore and reveal this hidden in-between state. I wanted to uncover what lies between the initial and final stages, which is typically obscured. I found that extracting the true noisy image from the process is quite challenging. Therefore, I created a square of random noise to visually represent this intermediate stage. It’s no longer an image and it’s also not an image yet.


Adrien’s square of random noise captures this “in-between” state, where the image is both “everything and nothing” – representing aspects of AI’s inner workings. This visual metaphor underscores the importance of making these hidden processes visible, to demystify and foster a more accurate understanding of what AI is, how it operates, and its real capabilities. The process Adrien describes also reflects the complex and collective human data that underpins AI systems. The image doesn’t originate from a single source but is a collage of countless lives and data points, both digital and physical, emphasising the multifaceted nature of AI and its deep entanglement with human experience.
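What Adrien walks through is, in essence, the “diffusion” process behind many current image generators. As a rough sketch of the forward, image-to-noise half of that process – using conventional DDPM-style schedule values and standing in for no particular model – the idea can be written in a few lines of NumPy:

```python
import numpy as np

def make_noiser(image, num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Forward process of a diffusion model: returns a function giving
    the image after t steps of progressive Gaussian noising."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    alphas_cumprod = np.cumprod(1.0 - betas)

    def noisy_at(t):
        # Closed form: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
        eps = np.random.randn(*image.shape)
        return (np.sqrt(alphas_cumprod[t]) * image
                + np.sqrt(1.0 - alphas_cumprod[t]) * eps)

    return noisy_at

# A flat grey stand-in for a dataset image, with values in [-1, 1].
x0 = np.zeros((64, 64, 3))
noisy_at = make_noiser(x0)

x_mid = noisy_at(500)  # part signal, part noise: the hidden in-between state
x_end = noisy_at(999)  # (almost) pure Gaussian noise, like the "Non-image"
```

Sampling the middle of the schedule yields exactly the state Adrien describes – no longer an image, not yet an image – while generation runs this process in reverse, denoising step by step.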

A laptopogram based on a neutral background and populated by scattered squared portraits, all monochromatic, grouped according to similarity. The groupings vary in size, ranging from single faces to overlapping collections of up to twelve. The facial expressions of all the individuals featured are neutral, represented through a mixture of ages and genders.

Philipp Schmitt & AT&T Laboratories Cambridge / Better Images of AI / Data flock (faces) / CC-BY 4.0

“The medium is the message”

(Marshall McLuhan, 1964).

When I asked Adrien about the artists who have inspired him, he highlighted how Marshall McLuhan’s seminal concept, “the medium is the message,” profoundly resonated with him.

This concept is crucial for understanding how AI is represented in the media. McLuhan argued that the medium itself – whether a book, television, or image – shapes our perceptions and influences society more than the actual content it delivers. McLuhan’s work, particularly Understanding Media (1964), explores how technology reshapes human interaction and societal structures. He warned that media technologies, especially in the electronic age, fundamentally alter our perceptions and social patterns. Applied to AI, this means that the way AI is visually represented can either clarify or obscure its true nature. Misleading images don’t just distort public understanding; they also shape how society engages with and responds to AI, emphasising the importance of choosing visuals that accurately reflect the technology’s reality and impact.

 “Stereotypes inside the machine”

(Adrien).

Adrien’s work explores the complex issue of stereotypes embedded within AI datasets, emphasizing how AI often perpetuates and even amplifies these biases through discriminatory images, texts, and videos.


> ADRIEN: Speaking of stereotypes inside the machine, I tried to question that in one of the projects I started two years ago, and I discovered that it’s a bit more complicated than what it first seems. AI is making discriminatory images or text or videos, yes. But once you see that, you start to question the nature of the images in the dataset, and then suddenly the responsibility shifts and you start to question why these images were chosen or why they were labelled that way in the dataset in the first place?

> ADRIEN: Because it’s a new medium, we have the opportunity to do things the right way. We aren’t doomed to repeat the same mistakes over and over. But instead we have created something even more – or at least equally – discriminatory.

> ADRIEN: And even though there are adjustments made (through Reinforcement Learning from Human Feedback), they are just kind of… small patches. The issue needs to be tackled at the core.

Image shows a white male in a suit facing away from the camera on a grey background. Text on the left side of the image reads “intelligent person.”

Adrien Limousin –  Human·s 2 (2022 – Ongoing)

As Adrien points out, minor adjustments or “sticking plasters” won’t suffice when addressing biases deeply rooted in our cultural and historical contexts. As an example, Google recently attempted to reduce racial bias in its Gemini image-generation algorithms. This effort was aimed at addressing long-standing issues of racial bias in AI-generated images, where people of certain racial backgrounds were either misrepresented or underrepresented. However, despite these well-intentioned efforts, the changes inadvertently introduced new biases. For instance, while trying to balance representation, the algorithms began overemphasizing certain demographics in contexts where they were historically underrepresented, leading to skewed and culturally inappropriate portrayals. This outcome highlights the complexity of addressing bias in AI. It’s not enough to simply optimize in the opposite direction or apply blanket fixes; such approaches can create new problems while attempting to solve old ones. What this example underscores is the necessity for AI systems to be developed and situated within culture, history, and place.


> INTERVIEWER: Are these ethical considerations on your mind when you are using AI in your work?

> ADRIEN: Using Generative AI makes me feel complicit in these issues. So I think the way I approach it is more like trying to point out these lacks, through its results or by unravelling its inner workings.

“It’s the artist’s role to question”

(Adrien)


> INTERVIEWER: Do you feel like artists have an important role in creating the new and more accurate representations  of AI?

> ADRIEN:  I think that’s one of the role of the artist. To question.

> INTERVIEWER: If you can kind of imagine like what, what kind of representations we might see, or you might want to have in the future like instead of when you Google AI and it’s blue heads and you know, robots, etc.

> ADRIEN: That’s a really good question and I don’t think I have the answer, but as I thought about that, understanding the inner workings of these systems can help us make better representations. For instance, the concepts and ideas of remixing existing representations—something that we are familiar with, that’s one solution I guess to better represent Generative AI.


Image displays an error message from the Windows 95 operating system. The text reads ‘The belief in photographic images.exe has stopped working’.

ADRIEN LIMOUSIN System errors – (2024 – ongoing)

We discussed the challenges involved in encouraging the media to use images that accurately reflect AI.


> ADRIEN: I guess if they used stereotyped images it’s because most people have associated AI with some kind of materialised humanoid as the embodiment of AI and that’s obviously misleading, but it also takes time and effort to change mindsets, especially with such an abstract and complex technology, and that is I think one of the role of the media to do a better job at conveying an accurate vision of AI, while keeping a critical approach.


Another major factor is knowledge: journalists and reporters need to recognise the biases and inaccuracies in current AI representations to make informed choices. This awareness comes from education and resources like the Better Images of AI project, which aims to make this information more accessible to a wider audience. Additionally, there’s a need to develop new visual associations for AI. Media rely on attention-grabbing images that are immediately recognisable; we need new visual metaphors and associations that more accurately represent AI.

One Reality


> INTERVIEWER: So kind of a big question, but what do you feel is the most pressing ethical issue right now in relation to AI that you’ve been thinking about?

> ADRIEN: Besides the obvious discriminatory part of the dataset and outputs, I think one of the overlooked issues is the interface of these models. If we take ChatGPT for instance, the way there is a search bar and you put text in it expecting an answer, just like a web browser’s search bar is very misleading. It feels familiar, but it absolutely does not work in the same way. To take any output as an answer or as truth, while it is just giving the most probable next words is deceiving and I think that’s something we need to talk a bit more about.


One major problem with AI is its tendency to offer simplified answers to multifaceted questions, which can obscure complex perspectives and realities. This becomes especially relevant as AI systems are increasingly used in information retrieval and decision-making. For example, Google’s AI summarising search feature has been criticised for frequently presenting incorrect information. Additionally, AI’s tendency to reinforce existing biases and create filter bubbles poses a significant risk. Algorithms often prioritise content that aligns with users’ pre-existing views, exacerbating polarisation (Pariser, 2011). This is compounded when AI systems limit exposure to a variety of perspectives, potentially widening societal divides.
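Adrien’s point about “the most probable next words” can be made concrete. Under the hood, a language model assigns a probability to every candidate next token and samples from that distribution; it retrieves no “answer”. A toy illustration follows – the vocabulary and numbers are invented for this sketch, not taken from any real model:

```python
import numpy as np

# Invented logits a model might assign to candidate next tokens
# after a prompt like "The capital of France is".
vocab = ["Paris", "London", "Rome", "banana"]
logits = np.array([4.0, 2.5, 2.0, -3.0])

# Softmax turns raw scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# The system samples a likely continuation rather than stating a fact.
rng = np.random.default_rng()
next_token = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

A plausible-sounding token is usually, but not reliably, a true one – which is why the search-bar framing Adrien criticises can be so deceptive.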

Metasynthography

(Adrien)

Adrien takes inspiration from the idea of metaphotography, which involves using photography to reflect on and critique the medium itself. In metaphotography, artists use the photographic process to comment on and challenge the conventions and practices of photography.

Building on this concept, Adrien has coined the term “meta-synthography” to describe his approach to digital art.


> ADRIEN: The term meta-synthography is one of the terms I have chosen to describe Digital arts in general. So it’s not properly established, that’s just me doing my collaging.

> INTERVIEWER: That’s great. You’re gonna coin a new word in this blog 😉


I asked Adrien which artists inspire him. He discussed the influence of Robert Ryman, a renowned painter celebrated for his minimalist approach that focuses on the process of painting itself. Ryman’s work often features layers of paint on canvas, emphasising the act of painting and making the medium and its processes central themes in his art.


> ADRIEN: I recently visited an exhibition of Robert Ryman, which kind of does the same with painting – he paints about painting on painting, with painting.

> INTERVIEWER:  Love that.

> ADRIEN: I thought that’s very interesting and I very much enjoy this kind of work, it talks about the medium…It’s  a bit conceptual, but it raises question about the medium… about the way we use it, about the way we consume it.

Image displays a large advertising board showing a blank white image; the background is a clear grey sky

Adrien Limousin – Lorem Ipsum (2024 – ongoing)

As we navigate the evolving landscape of AI, the intersection of art and technology provides a crucial perspective on the impact and implications of these systems. By championing accurate representations and confronting inherent biases, Adrien’s work highlights the essential role artists play in shaping a more nuanced and informed dialogue about AI. It is important not only to highlight AI’s inner workings but also to recognise that imagery has the power to shape reality and our understanding of these technologies. Everyone has a role in creating AI that works for society, countering the hype and capitalist-driven narratives advanced by tech companies. Representations from communities, along with the voices of individuals and artists, are vital for sharing knowledge, making AI more accessible, and bringing attention to the experiences and perspectives often rendered invisible by AI systems and media narratives.


Adrien Limousin (interviewee) is a 25-year-old French (post)photographer exploring the other side of images, currently studying at the National Superior School of Photography in Arles.

Cherry Benson (interviewer) is a Student Steward for Better Images of AI. She holds a degree in psychology from London Metropolitan University and is currently pursuing a Master’s in AI Ethics and Society at the University of Cambridge, where her research centres on social AI. Her work on the intersection of AI and border control has been featured in the Cambridge Journal of Artificial Intelligence as a critical case study of how racial capitalism is deeply intertwined with the development and deployment of AI.

💬 Behind the Image with Yutong from Kingston School of Art

This year, we collaborated with Kingston School of Art to give MA students the task of creating their own better images of AI as part of their final project. 

In this mini-series of blog posts called ‘Behind the Images’, our Stewards are speaking to some of the students who participated in the module to understand the meaning of their images, as well as the motivations and challenges that they faced when creating their own better images of AI. Based on our assessment criteria, some of the images will also be uploaded to our library for anyone to use under a Creative Commons licence.

In our third and final post, we go ‘Behind the Image’ with Yutong about her pieces, ‘Exploring AI’ and ‘Talking to AI’. Yutong intends that her art will challenge misconceptions about how humans interact with AI.

You can freely access and download ‘Talking to AI’ and both versions of ‘Exploring AI’ from our image library.

Both of Yutong’s images are available in our library, but as you might discover below, there were many challenges that she faced when developing these works. We greatly appreciate Yutong letting us publish her images and talking to us for this interview. We are hopeful that her work and our conversations will serve as further inspiration for other artists and academics who are exploring representations of AI.

Can you tell us a bit about your background and what drew you to the Kingston School of Art?

Yutong is from China and, before starting the MA in Illustration at Kingston University, completed an undergraduate major in Business Administration. What drew Yutong to Kingston School of Art was its highly regarded illustration course. On another note, she enjoys how the illustration course at Kingston balances both the commercial and academic aspects of art – allowing Yutong to combine her previous studies with her creative passions.

Could you talk me through the different parts of your images and the meaning behind them?

In both of her images, Yutong wishes to unpack the interactions between humans and AI – albeit from two different perspectives.

‘Talking to AI’

Firstly, ‘Talking to AI’ focuses on more accurately representing how AI works. Yutong uses a mirror to reflect how our current interactions with AI are based on our own prompts and commands. At present, AI cannot generate content independently, so it reflects the thoughts and opinions that humans feed into systems. The binary code behind the mirror symbolises how human prompts and data are translated into the computer language which powers AI. Yutong has used a mirror to capture an element of human–AI interaction that is often overlooked – the blurred transition from human work to AI generation.

‘Exploring AI’

Yutong’s second image, ‘Exploring AI’, aims to shed light on the nuanced interactions that humans have with AI on multiple levels. Firstly, the text ‘Hi, I am AI’ pays homage to an iconic phrase in programming (‘Hello World’), which is often the first thing any coder learns to write and which forms the foundation of a coder’s understanding of a programming language’s syntax, structure, and execution. Yutong thought this was fitting for her image as she wanted to represent the rich history and applications of AI, which has its roots in basic code.
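For readers who have never met it, the program Yutong’s text plays on really is this small:

```python
# The canonical first program, echoed in Yutong's "Hi, I am AI".
print("Hello, World!")
```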

Within ‘Exploring AI’, each grid square is used to represent the various applications of AI in different industries. The expanded text across multiple grid squares demonstrates how one AI tool can have uses across different industries – ChatGPT is a prime example of this.

However, Yutong also wants to draw attention to the figures within each square, which all interact with AI in complex and different ways. For example, the body language of the figures depicts them as variously frustrated, curious, playful, sceptical, affectionate, indifferent, or excited towards the text, ‘Hi, I am AI’.

Yutong wants to show how our human response to AI changes and varies contextually, driven by our own personal conceptions of AI. From her own observations, Yutong identified that most people have either a very positive or very negative opinion of AI – not many feel anything in between. By including all the different emotional responses towards AI in this image, Yutong hopes to introduce greater nuance into people’s perceptions of AI and help people understand that AI can evoke different responses in different contexts.

What was your inspiration/motivation for creating your images?

As an illustrator, Yutong found herself surrounded by artists who were fearful that AI would replace their role in society. Yutong found that people are often fearful of the unknown and of things they cannot control. Therefore, by improving understanding of what AI is and how it works through her art, Yutong hopes to help her fellow creators face their fears and better understand their creative role in the face of AI.

Through her art, ‘Exploring AI’ and ‘Talking to AI’, Yutong intends to challenge misconceptions about what AI is and how it works. As an AI user herself, she has realised that human illustrators cannot be replaced by AI – these systems are reliant on the works of humans and do not yet have the creative capabilities to replace artists. Yutong is hopeful that by being better educated on how AI integrates in society and how it works, artists can interact with AI to enhance their own creativity and works if they choose to do so. 

Was there a specific reason you focused on dispelling misconceptions about what AI looks like and how Chat-GPT (or other large language models) work? 

Yutong wanted to focus on how AI and humans interact in the creative industry and she was driven by her own misconceptions and personal interactions with AI tools. Yutong does not intend for her images to be critical of AI. Instead, she envisages that her images can help educate other artists and prompt them to explore how AI can be useful in their own works. 

Can you describe the process for creating this work?

From the outset, Yutong began to sketch her own perceptions and understandings of how AI and humans interact. The sketch below shows her initial inspiration. The point at which each shape overlaps represents how humans and AI can come together and create a new shape – this symbolises how our interactions with technology can unlock new ideas, feelings, and also challenges.

In this initial sketch, she chose to use different shapes to represent the universality of AI and how its diverse applications mean that AI doesn’t look like one thing – AI can underlie an automated email response, a weather forecast, or a medical diagnosis.

Yutong’s initial sketch for ‘Talking to AI’

The project aims to counteract common stereotypes and misconceptions about AI. How did you incorporate this goal into your artwork? 

In ‘Exploring AI’, Yutong wanted to introduce a more nuanced approach to AI representation by unifying different perspectives about how people feel, experience and apply AI in one image. From discussions with people utilising AI in different industries, she recognised that those who were very optimistic about AI didn’t recognise its shortfalls – and vice versa. Yutong believes that humans have a role in helping AI reach new technological advancements, and AI can also help humans flourish. In Yutong’s own words, “we can make AI better, and AI can make us better”.

Yutong found talking to people in the industry, as well as conducting extensive research about AI, very important to ensure that she could more accurately portray AI’s uses and functions. She points to the fact that she used binary code in ‘Talking to AI’ after researching that this is the most fundamental layer of computer language underpinning many AI systems.
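That symbolism maps onto something real: every prompt a user types is ultimately stored and processed as bits. A minimal sketch of that translation (UTF-8 is assumed as the encoding, as on most modern systems):

```python
# Each character of a prompt becomes a byte, and each byte eight bits.
prompt = "Hi, I am AI"
bits = " ".join(format(byte, "08b") for byte in prompt.encode("utf-8"))
print(bits)  # 01001000 01101001 00101100 ...
```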

What have been the biggest challenges in creating a ‘better image of AI’? Did you encounter any challenges in trying to represent AI in a more nuanced and realistic way?

Yutong reflects on the fact that no matter how much she rethought or restarted her ideas, there was always some level of bias in her depiction of AI because of her own subconscious feelings towards the technology. She also found it difficult to capture all the different applications of AI, as well as the various implications and technical features of the technology in a single visual image. 

Through tackling these challenges, Yutong became aware of why Better Images of AI is not called ‘Best Images of AI’ – the latter would be impossible. She hopes that, while she could not produce the ‘best image of AI’, her art can serve as a better image compared to those typically used in the media.

Based on our criteria for selecting images, we were pleased to accept both your images but asked you if it was possible to make amendments to ‘Exploring AI’ to make the figures more inclusive. What do you think of this feedback and was it something that you considered in your process? 

In the case of Yutong’s image, ‘Exploring AI’, Better Images of AI asked whether an additional version could be made with the figures in different colours to better reflect the diverse world that we live in. Being inclusive is very important to Better Images of AI, especially as visuals of AI, and of those who are creating AI, are notoriously unrepresentative.

Yutong agreed that this development would enhance the image, and being inclusive in her art is something she is actively trying to improve. She reflects on this suggestion by saying, ‘just as different AI tools are unique, so are individual humans’.

The two versions of ‘Exploring AI’ available on the Better Images of AI library

How has working on this project influenced your own views about AI and its impact? 

During this project, Yutong has been introduced to new ideas and been able to develop her own opinions about AI based on research from academic journals. She says that informing her opinions using sources from academia was beneficial compared to relying on information provided by news outlets and social media platforms which often contain their own biases and inaccuracies.

From this project, Yutong has been able to learn more about how AI could be incorporated into her future career as a human-and-AI creator. She has become interested in the Nightshade tool that artists have been using to prevent AI companies from using their art to train AI systems without the owner’s consent. She envisages a future career where she could be working to help artists collaborate with AI companies – supporting the rights of creators and preserving the creativity of their art.

What have you learned through this process that you would like to share with other artists and the public?

By chatting to various people interacting with and using AI in different ways, Yutong has been introduced to richer ideas about the limits and benefits of AI. Yutong challenges others to talk to people who are working with AI or are impacted by its use to gain a more comprehensive understanding of the technology. She believes that it’s easy to gain a biased opinion about AI by relying on information shared by a single source, like social media, so we should escape from these echo chambers. It is so important, she believes, that people diversify who they surround themselves with to better recognise, challenge, and appreciate AI.

Yutong (she/her) is an illustrator with whimsical ideas, also an animator and graphic designer.

🪄 Behind the Image with Minyue from Kingston School of Art

The image shows a colourful illustration of a story-like scene, with two half star characters performing various tasks. The stars, along with a wizard, are interacting with drawings, magnifying glasses, and magic-like elements. Below that, there is a scene with a fantasy landscape, including a castle and dragon. To the right of the image, text reads: 'Behind the Image with Minyue' and below that, a tagline reads: 'Let AI Become Your Magic Wand' which is the name of Minyue's image submission. The background of the image is light blue.

This year, we collaborated with Kingston School of Art to give MA students the task of creating their own better images of AI as part of their final project.

In this mini-series of blog posts called ‘Behind the Images’, our Stewards are speaking to some of the students that participated in the module to understand the meaning of their images, as well as the motivations and challenges that they faced when creating their own better images of AI.

In our second post, we go ‘Behind the Image’ with Minyue about her piece, ‘Let AI Become Your Magic Wand’. Minyue wants to draw attention to the overlooked human input in AI generated art and challenges those who believe AI will replace artists.

‘Let AI Become Your Magic Wand’ is not available in our library as it did not match all the criteria due to challenges which we explore below. However, we greatly appreciate Minyue letting us publish her images and talking to us. We are hopeful that her work and our conversation will serve as further inspiration for other artists and academics who are exploring representations of AI.

Can you tell us a bit about your background, and what drew you to the Better Images of AI project at Kingston School of Art? 

Minyue is from China and previously studied a foundation course in the UK before starting the Masters in Illustration at Kingston University. Before starting the Masters, Minyue had limited knowledge of AI and she only saw discussions about it on social media – especially from artists fearful that AI tools were capable of copying their own work without their consent. At the same time, Minyue also saw many fellow creators developing impressive works using AI generator tools – whether in the ideation phase or to create the final artwork. 

Confused about her own perception of AI, Minyue was drawn to the Better Images of AI project to learn more about the relationship between humans and AI in the creative process. 

Could you talk us through the different parts of your image and the meaning behind it? 

Minyue’s Final Image, ‘Let AI Become Your Magic Wand’

Minyue’s piece is focussed on two halves of a star. One half is called the ‘evaluation half star’ which represents AI’s image recognition capabilities (the technical term is the ‘Discriminator’). For Minyue, recognition capabilities refer to AI’s ability to interpret and understand input data. For image generator tools, AI systems are trained on vast amounts of imagery so that they can identify key features and elements of a picture. This could involve recognising objects, styles, colours or other visual aspects. Therefore, in generating an image of a chick (as shown in Minyue’s image), the evaluation half star is focussed on interpreting what distinctive features the training data classifies as a true representation of a chick – like perhaps the yellow colour and the shape of a beak.  

The other half is called the ‘creation half star’ which portrays the image construction capabilities of AI tools (the technical term is the ‘Generator’). The Generator enables AI to create new, coherent images based on the evaluation half star’s understanding of input data. 

Therefore, together, Minyue’s image shows how the two half stars make a full star – capable of generating AI art based on user prompts and trained on vast image datasets. You’ll see that in the bottom part of Minyue’s image, in the computer tab, she indicates that the full star (consisting of the creation and evaluation half stars) makes up a magic wand when combined with a pencil. The pencil symbolises the human labour behind the training of both the evaluation and creation half stars.
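Minyue’s two half stars correspond to the two networks of a Generative Adversarial Network (GAN), in which a Generator and a Discriminator are trained against each other. As a hedged, toy illustration of that adversarial loop – a one-dimensional sketch of the setup she references, not how production image generators are built – the pairing looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the creation half star must learn to imitate: a 1-D Gaussian.
def real_samples(n):
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

g_w, g_b = 1.0, 0.0  # Generator ("creation half star"): x = g_w * z + g_b
d_a, d_c = 0.1, 0.0  # Discriminator ("evaluation half star"): sigmoid(d_a*x + d_c)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, batch = 0.01, 64
for step in range(5000):
    # Discriminator update: score real samples high, generated ones low.
    x_real = real_samples(batch)
    z = rng.normal(size=(batch, 1))
    x_fake = g_w * z + g_b
    p_real = sigmoid(d_a * x_real + d_c)
    p_fake = sigmoid(d_a * x_fake + d_c)
    d_a += lr * np.mean((1 - p_real) * x_real - p_fake * x_fake)
    d_c += lr * np.mean((1 - p_real) - p_fake)

    # Generator update: adjust its outputs to fool the discriminator.
    z = rng.normal(size=(batch, 1))
    x_fake = g_w * z + g_b
    p_fake = sigmoid(d_a * x_fake + d_c)
    grad_x = (1 - p_fake) * d_a  # gradient of log D(x) with respect to x
    g_w += lr * np.mean(grad_x * z)
    g_b += lr * np.mean(grad_x)

print(f"generated distribution: mean ~ {g_b:.2f}, spread ~ {abs(g_w):.2f}")
```

Today’s text-to-image tools are mostly diffusion models rather than GANs, but the Generator/Discriminator pairing Minyue depicts is exactly this adversarial setup – and the human choices behind it (data, objectives, learning rates) are visible on every line, which is her wider point.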

Without being guided by humans, Minyue believes that these two half stars would not exist. It is humans that have created the input data, it is humans that prompt AI tools to create certain images, and it is humans that train the AI systems to be able to create these images in different ways. Therefore, her piece highlights the crucial human element of AI art which is often overlooked. 

Lastly, Minyue also hopes to emphasise that the combination of these AI tools with humans offers a new avenue for realising human creativity. That is why she has chosen to use a wizard and magic wand to depict how AI and humans, when working together, can be magical. 

Better Images of AI aims to counteract common stereotypes and misconceptions about AI. How did you incorporate this goal into your artwork?

Minyue emphasised that the main misconception she wanted to tackle is the idea of AI as an autonomous creator rather than a tool requiring rational human use. When looking at her work, Minyue wanted people to contemplate, “who is controlling the magic?”, prompting us to think more carefully about the role of humans in AI art.

What was your motivation/inspiration for creating ‘Let AI Become Your Magic Wand’?

Firstly, as an illustration student, Minyue was particularly interested in the role of AI in the creative industry. The metaphor of the magic wand comes from her observation of artists who skilfully use new technologies to create their work, which made her feel as if she were watching a magical performance.

Secondly, Minyue wanted to raise awareness of the fact that using AI image generators still requires human skill, creativity and imagination. A wizard can only perform magic if they are trained to use the wand. In the same way, AI can assist artists to create, but artists must learn how to use this technology to develop innovative, appealing, and meaningful works of art.

Minyue’s early sketch shows how she wanted to distinguish between the human (wizard) and AI (in the magic wand)

Finally, she hopes to dispel the idea that AI art will limit creativity or the work of human artists – instead, if creators choose to work with AI, it could also enhance their capabilities and usher in a new genre of art. 

Based on the Better Images of AI criteria for selecting images, we had to make the difficult decision not to upload this image to our library. We made this choice after closer scrutiny of the magic wand metaphor, which could be misconstrued as promoting the idea that AI is magic (a rhetoric commonly pursued by technology companies).

What do you think of this feedback, and was this idea something that you ever considered in your process? 

Minyue understood the concerns and appreciated the feedback from Better Images of AI, which made her reconsider how her work could be misleading in some respects, and the challenges of relying on metaphors to communicate difficult ideas. Her intention was for the magic wand metaphor to prompt individuals to think more deeply about who is in control of AI art, and about how AI can advance the creative industry if used safely and ethically. However, she is aware that, given the technology industry’s widespread use of magical symbols to represent AI (for example, the logo for Zoom’s AI Smart Assistant or Google’s AI chatbot Gemini), her image could unintentionally be perceived to suggest that AI alone is magical.

Was there a specific reason that you focussed on dispelling misconceptions about the human element of AI art, especially in relation to image generation? 

Minyue strongly believes that the creative power of AI comes from human inspiration and human creativity. She hopes her work will help convey that AI art is rooted in human creativity and labour – a fact often overlooked in media narratives about AI replacing artists, which leads to misunderstandings.

A lot of the inspiration for this view has come from Minyue’s reflections on how past technologies have been integrated into the creative industry. For example, painters were originally fearful of the widespread adoption of photography, since it offered a faster and cheaper means of reproducing and disseminating images. But over time, Minyue believes, photography developed its own unique styles and languages, with photographers moving away from imitating traditional art to explore distinctly photographic expressions. AI may similarly evolve into a tool for the production of a new art form.

Can you describe your process for creating ‘Let AI Become Your Magic Wand’?

Minyue detailed the very long process that led her to the final creation. She recalled that having the Better Images of AI Guide was helpful, but that she still struggled because her initial understanding of AI was limited.

Therefore, Minyue took time to carefully research the more technical aspects of AI image generation so she could more accurately represent how AI image generators work and their relationship with human creators. Below you can see how she researched the technical elements of AI image generation as well as its use in different contexts. 

Minyue’s research about technical aspects of AI image generators and their applications

Minyue’s initial sketches also show how she was interested in portraying the relationship between humans and technology.

One of Minyue’s initial sketches when exploring ideas for the Better Images of AI project

Minyue aims to create more engaging and approachable AI images to help non-experts understand AI technology and reduce public fear of new technologies. This was also one of her reasons for choosing to participate in the Better Images of AI project.

What have been your biggest challenges in creating a better image of AI? Did you encounter any challenges in trying to represent AI in a more nuanced and realistic way?

Minyue faced difficulties when challenging the views on AI previously presented to her by the media. In contrast to many of the other images in the Better Images of AI library, Minyue also wanted to promote a more optimistic narrative about AI – that AI can be beneficial to humans and enhance our own creative outputs.

Another challenge that Minyue faced was distinguishing between AI and computers or robots. One of her initial sketches shows how, in the early stages of the project, she overlooked AI’s numerous applications beyond computer programs.

Another of Minyue’s sketches, showing the challenges she faced in illustrating AI

What have you learned through this process that you would like to share with other artists or the public? 

Minyue says that while artists are often driven by their passions when creating their works, it is important to consider how art might cause misunderstandings if creators are not guided by in-depth research and careful expression. Minyue hopes that other artists will focus on this in order to promote a more realistic and accurate understanding of AI.


Minyue Hu (she/her) is about to graduate from Kingston University with a Master’s degree in Illustration. In the coming year, she will be staying in the UK to continue her work as an artist and to actively create new pieces. Minyue’s inspiration often centres on human experience and emotion, with the aim of combining personal stories with social contexts to prompt viewers to reflect on their own experiences. Her final project, Daughters of the Universe, is set to be released soon, and she hopes you will look out for it.

👤 Behind the Image with Ying-Chieh from Kingston School of Art

This year, we collaborated with Kingston School of Art to give MA students the task of creating their own better images of AI as part of their final project. 

In this mini-series of blog posts called ‘Behind the Images’, our Stewards are speaking to some of the students who participated in the module to understand the meaning of their images, as well as the motivations and challenges they faced when creating their own better images of AI. Based on our assessment criteria, some of the images will also be uploaded to our library for anyone to use under a Creative Commons licence.

In our first post, we go ‘Behind the Images’ with Ying-Chieh Lee about her images, ‘Can Your Data Be Seen’ and ‘Who is Creating the Kawaii Girl?’. Ying-Chieh hopes that her art will raise awareness of how biases in AI emerge from homogenous datasets and unrepresentative groups of developers, whose AI systems can marginalise members of society, such as women.

You can freely access and download ‘Who is Creating the Kawaii Girl’ from our image library by clicking here.

‘Can Your Data Be Seen’ is not available in our library as it did not match all the criteria due to challenges which we explore below. However, we greatly appreciate Ying-Chieh letting us publish her images and talking to us. We are hopeful that her work and our conversation will serve as further inspiration for other artists and academics who are exploring representations of AI.

Can you tell us a bit about your background, and what drew you to the MA at Kingston University?

Ying-Chieh originally comes from Taiwan and has been creating art since she was about 10 years old. For her undergraduate degree, Ying-Chieh studied sculpture, and she then worked for a year. Whilst working, she really missed drawing, so she decided to take up freelance illustration – but she wanted to develop her art skills further, which led her to Kingston School of Art.

Could you talk me through the different parts of your images and the meaning behind them?

‘Can Your Data Be Seen?’

‘Can Your Data Be Seen?’ shows figures representing different subjects in datasets, but the cast light illustrates how only certain groups are captured in the training of AI models. Furthermore, the uniformity and factory-like depiction of the figures criticises how AI datasets often reduce the rich, lived experiences of humans to data points which fail to capture the nuances and diversity of human individuals.

Ying-Chieh hopes that the image highlights the homogeneity of AI datasets and also draws attention to the invisibility of certain individuals who are not represented in training data. Those who are excluded from AI datasets are usually from marginalised communities, who are frequently surveilled, quantified and exploited in the AI pipeline, but are excluded from the benefits of AI systems due to the domination of privileged groups in datasets. 

‘Who’s Creating the Kawaii Girl’

In ‘Who’s Creating the Kawaii Girl’, Ying-Chieh shows a young female character in a school uniform which represents the Japanese artistic and cultural ‘Kawaii’ style. The Kawaii aesthetic symbolises childlike innocence, cuteness, and the quality of being lovable. Kawaii culture began to rise in Japan in the 1970s through anime, manga and merchandise collections – one of the most recognisable is the Hello Kitty brand. The ‘Kawaii’ aesthetic is often characterised by pastel colours, rounded shapes, and features which evoke vulnerability, like big eyes and small mouths. 

In the image, Ying-Chieh has placed the Kawaii Girl in the palm of an anonymous, sinister figure – this suggests the Girl’s vulnerability and the figure’s power over her. The faint web-like pattern on the figures and the background symbolises the unseen influence that AI has on how media is created and distributed, often reinforcing stereotypes or facilitating exploitation. The image criticises the overwhelmingly male-dominated AI industry, which frequently uses technology and content generation tools to reinforce ideologies of women as controlled and subservient to men. For example, there has been a rise in nonconsensual deepfake pornography created by AI tools, and regressive stereotypes about gender roles are being reinforced by information provided by large language models, like ChatGPT. Ying-Chieh hopes that ‘Who’s Creating the Kawaii Girl’ will challenge people to think about how AI can be misused and its potential to perpetuate harmful gender stereotypes that sexualise women.

What was the inspiration/motivation for creating your images, ‘Can Your Data Be Seen’ and ‘Who’s Creating the Kawaii Girl?’?

At the outset, Ying-Chieh wasn’t very familiar with AI or the negative uses and implications of the technology. To explore how it was being used, she looked on Facebook and found a group being used to share large numbers of offensive, AI-generated images of women. On investigating the group further, she realised that it was not small; it had a large number of active users, most of whom were men. This was Ying-Chieh’s initial inspiration for the image, ‘Who’s Creating the Kawaii Girl?’.

However, this Facebook group also prompted Ying-Chieh to think more deeply about how the users were able to generate these sexualised images of women and girls so easily. Many of the images represented a very stereotypical model of attractiveness, which led her to suspect that the underlying datasets of these AI models were highly unrepresentative, reinforcing stereotypical standards of beauty and attractiveness.

Was there a specific reason you focussed on issues like data bias and gender oppression related to AI?

Gender equality has always been something that Ying-Chieh is passionate about, but she had never considered how the issue related to AI. She came to realise that AI is not so different from other industries that oppress women, because it is fundamentally produced by humans and fed by data that humans have created. The problems with AI being used to harm women are therefore not isolated to the technology, but rooted in systemic social injustices that have long mistreated and misrepresented women and other marginalised groups.

Ying-Chieh’s sketch of the AI ‘bias loop’

In her research stages, Ying-Chieh explored the ‘bias loop’: AI models are trained on data selected by humans, or derived from historical data, and so produce biased images. At the same time, the images created by AI serve as new training data, further embedding our historical biases into future AI tools. The concept of the ‘bias loop’ resonated with Ying-Chieh’s interest in gender equality and made her concerned about uses and developments of AI which privilege some groups at the expense of others, especially where the loop repeats itself and causes inescapable cycles of injustice.
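As a rough illustration of the dynamic behind Ying-Chieh’s ‘bias loop’ – a toy simulation of our own, not a model of any real system – the Python sketch below assumes a ‘model’ that slightly over-produces whichever group dominates its training data. When its outputs are recycled as training data, the minority group’s share steadily shrinks. The groups, proportions and amplification parameter are all invented for the example.

```python
import random

def train_and_generate(training_data, n_outputs, amplification=0.1):
    """Toy 'model': mostly reproduces its training distribution, but
    over-produces the majority group (a stand-in for how generative
    models can amplify whatever dominates their data)."""
    majority = max(set(training_data), key=training_data.count)
    return [majority if random.random() < amplification
            else random.choice(training_data)
            for _ in range(n_outputs)]

# Invented starting dataset: 70% group A, 30% group B.
data = ["A"] * 700 + ["B"] * 300

for generation in range(5):
    outputs = train_and_generate(data, len(data))
    data = data + outputs  # AI outputs are recycled as future training data
    share_b = data.count("B") / len(data)
    print(f"generation {generation}: group B share = {share_b:.1%}")
```

Running this, group B’s share drifts downwards with every generation – a simple numerical picture of how the loop Ying-Chieh sketched compounds an initial skew rather than correcting it.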

Can you describe the process for creating this work?

Ying-Chieh started by developing some initial sketches and engaging in discussions with Jane, the programme coordinator, about her work. As you can see below, ‘Who’s Creating the Kawaii Girl’ has evolved significantly from its initial sketch, while ‘Can Your Data Be Seen?’ has remained quite similar to Ying-Chieh’s original design.

The initial sketches of ‘Can Your Data Be Seen?’ (left) and ‘Who’s Creating the Kawaii Girl?’ (right)

Ying-Chieh also engaged in some activities during classes which helped her to learn more about AI and its ethical implications. One of these games, ‘You Say, I Draw’, involved one student describing an image and the other drawing it, relying purely on their partner’s description without knowing what they were drawing.

This game highlighted the role that data providers and prompters play in the development of AI and challenged Ying-Chieh to think more carefully about how data was being used to train content generation tools. During the game, she realised that the personality, background, and experiences of the prompter really influenced what the resulting image looked like. In the same way, the type of data and the developers creating AI tools can really influence the final outputs and results of a system. 

An image of the results from the ‘You Say, I Draw’ activity

Better Images of AI aims to counteract common stereotypes and misconceptions about AI. How did you incorporate this goal into your artwork? 

Ying-Chieh’s aim was to explore and address biases present in AI models in order to contribute to the Better Images of AI mission so that the future development of AI can be more diverse and inclusive. She hopes that her illustrations will make it easier for the public to understand issues about biases in AI which are often inaccessible or shielded from wider comprehension.

Her images draw attention to how AI’s training data is biased and how AI is being used to reinforce gender stereotypes about women. From this, Ying-Chieh hopes that further action can be taken to improve data collection and processing methods, as well as to introduce stronger laws and rules limiting image generation where it exploits or harms individuals.

What have been the biggest challenges of creating a ‘better image of AI’? Did you encounter any challenges in trying to represent AI in a more nuanced and realistic way? 

Ying-Chieh spoke about her challenges in trying to strike the right balance between designing images that could be widely used and recognised by audiences as related to AI, while not falling into common tropes that misrepresent AI (like robots, descending code, or the colour blue). She also found it difficult not to make the images so metaphorical that they might be misinterpreted by audiences.

Based on our criteria for selecting images, we were pleased to accept ‘Who’s Creating the Kawaii Girl?’, but we made the difficult decision not to upload ‘Can Your Data Be Seen’ because it did not communicate and conceptualise AI clearly enough. What do you think of this feedback, and was it something that you considered in the process?


Ying-Chieh shared that, throughout the design process, she had been conscious that her images might not be easily recognisable as communicating ideas about AI. She made some efforts to counteract this: for example, in ‘Can Your Data Be Seen’ she made the figures identical to represent data points, and the lighter-coloured lines on the figures’ faces and bodies represent the technical elements behind AI image recognition technology.

How has working on this project influenced your own views on AI and its impact? 

Before starting this project, Ying-Chieh said that her opinion of AI had been quite positive. She was largely influenced by things she had seen and read in the news about how AI was going to benefit society. However, through her research on Facebook, she became increasingly aware that this is not the whole picture. There are many dangerous uses of AI already lurking in the shadows of our daily lives.

 What have you learned through this process that you would like to share with other artists or the public?

The biggest takeaway from this project for Ying-Chieh is how camera angles, zooming, or object positioning can strongly influence the message that an image conveys. For example, in the initial sketches of ‘Can Your Data Be Seen’, Ying-Chieh explored how she could best capture the relationship of power through different depths of perspective.  

Various early sketches of ‘Can Your Data Be Seen’ from different depths of perspective

Furthermore, when exploring how to reflect the oppressive nature of AI, Ying-Chieh enlarged the shadow’s presence in the frame of ‘Who’s Creating the Kawaii Girl’. The enlarged shadow reinforces the strong power that elite groups hold over the creation of content about marginalised groups – power which is often hidden from wider knowledge.

Ying-Chieh’s exploration of how the photographer’s angle can reflect different positions of power and vulnerability

Ying-Chieh Lee (she/her) is a visual creator, illustrator, and comic artist from Taiwan. Her work often focuses on women-related themes and realistic, dark-style comics.


Better Images of AI’s Partnership with Kingston School of Art

An image with a light blue background that reads, 'Let's Collab!' at the top, the word 'Collab' underlined in burgundy. Below that, it says 'Better Images of AI x Kingston School of Art' with 'Kingston School of Art' in teal. Below the text is an illustration of two hands high-fiving, with black sleeves and white hands. Around the hands are burgundy stars.

This year, we were pleased to partner with Kingston School of Art to run an elective for their MA Illustration, Animation, and Graphic Design students to create their own ‘better images of AI’. Following this collaboration, some of the students’ images have been published in our library for anyone to use freely. Their images focus on communicating different ideas about the current state of AI – from the connection between the technology and gender oppression to breaking down the interactions between humans and AI chatbots.

In this blog post, we speak to Jane Cheadle, course leader for the MA Animation course at Kingston School of Art, about partnering with Better Images of AI for the elective. The MA is a new course focussed on critical and research-led animation design processes.

If you’re interested in running a similar module/elective or incorporating Better Images of AI’s work into your university course, we would love to hear from you – please contact info@betterimagesofai.org.

How did the collaboration with Better Images of AI come about?

AI is having an impact on various industries, and the creative domain is no exception. Jane explains how she and the staff in the department were asked to work towards developing a strategy addressing the use of AI in the design school. At the same time, Jane was in contact with Alan Warburton – a creator who works with various technologies, including computer-generated imagery, AI, virtual reality, and augmented reality. Alan introduced Jane to Better Images of AI, and she became interested in the work we are doing and how it linked to the design school’s future strategy for the use of AI.

Therefore, instead of solely creating rules about the use of AI in the school, Jane thought that working with the students to explore the challenges, limits, and benefits of the technology would be more meaningful as it would provide better learning opportunities for the students (as well as herself!) about this topic. 

Where does the elective fit within the school’s curriculum?

Kingston University’s Town House Strategy aims to prepare graduates for advances in technology which will alter our future society and workplaces. The strategy aims to equip students with enhanced entrepreneurial, digital, and creative problem-solving skills so they can better advance their careers and professional practice. As part of this strategy, Kingston University encourages collaboration and partnership with businesses and external bodies to help advance students’ knowledge and awareness of the different aspects of the working world.

As part of this, the Kingston School of Art runs a cross-disciplinary design module open to students from three different MA courses (Graphic Design, Illustration, and Animation). In this module, students are asked to think about the role of the designer now, and what it might look like in the future. The goal is to prompt students to situate their creative practice within the contemporary paradigms of precarity and uncertainty, providing space for students to understand and address issues such as climate literacy, design education, and the future of work. There are multiple electives within this module and each works with a partner external to the university.

Better Images of AI were fortunate enough to be approached by Jane to be the external partner for their elective. The elective was run by Jane as well as researcher and artist Maybelle Peters. Jane explains that the module had a dual aim: firstly, to allow students to develop better images of AI which could be published in our library; and secondly, to educate students about AI and its impact on society. For Jane, it was important that this exploration of AI was applied to the students’ own practice and positionality, so they could understand how AI is influencing the creative industry as well as political and power structures more broadly.

How did the elective run?

Jane shares that there was a real divide amongst the students in their familiarity with AI and its wider context. Some students had been dabbling with AI tools and wanted to develop a position on their creative and ethical use. Meanwhile, others were not using AI at all and expressed being somewhat wary of it, alongside a real sense of amorphous fear around automated image generation and other capabilities that impact the markets for their creative work.

Better Images of AI worked with the Kingston School of Art to provide a brief for the elective, and students also used our Guide to help them understand the problems with current stock imagery that is used to illustrate AI so they could avoid these common tropes in their own work.

Following this, the students worked in special interest groups to research different aspects of AI. Each group then used this research to develop practical workshops to run with the wider class. This enabled the students to develop their own better images of AI based on what they had learnt from leading and participating in workshops and research tasks. Better Images of AI also visited Kingston School of Art to provide guidance and feedback to the students in the development stages of their images.

Some of the images that were submitted as part of the elective can be seen below. Each image shows a thoughtful approach, and they are varied in nature – some are super low-fi and others are hilarious – but all the students drew upon their own design, drawing, and making skills to develop their unique images.

Why did you think it was important to partner with Better Images of AI for this elective?

As designers and image makers, we agreed that there is a responsibility to accurately and responsibly represent aspects of the world, such as AI. It was important to allow students to work with real constraints and build towards a future that they want to live in. While the brief provided to the students was to create images that accurately represent what AI looks like right now, much of the student workshops focussed on what kind of AI they wanted to see, what safeguards need to be put in place, and what power relations we might need to change in order to get there.

Jane Cheadle (she/they) is an animator, researcher and educator. Jane is currently a senior lecturer and the MA Animation course leader in the design school at Kingston School of Art. Jane’s practice and research are both cross-disciplinary and experimental, with a focus on drawing, collaboration and expanded animation.


We are super thankful to Jane and Maybelle, as well as the Kingston School of Art, for incorporating Better Images of AI into their elective. We are so appreciative of all the students who participated in the module and shared their work with us. Jane is excited to hopefully run the elective again, and we are looking forward to more work together with the students and staff at Kingston School of Art.

This blog post is the first in a series about Better Images of AI’s collaboration with the Kingston School of Art. In a series of mini interview blog posts, we speak to three students who participated in the elective and designed their own better images of AI. Some of the students’ images even feature in our library – you can view them here.

Visuals of AI in the Military Domain: Beyond ‘Killer Robots’ and towards Better Images?

In this blog post, Anna Nadibaidze explores the main themes found across common visuals of AI in the military domain. Inspired by the work and mission of Better Images of AI, she argues for the need to discuss and find alternatives to images of humanoid ‘killer robots’. Anna holds a PhD in Political Science from the University of Southern Denmark (SDU) and is a researcher for the AutoNorms project, based at SDU.

The integration of artificial intelligence (AI) technologies into the military domain, especially weapon systems and the process of using force, has been the topic of international academic, policy, and regulatory debates for more than a decade. The visual aspect of these discussions, however, has not been analysed in depth. This is both puzzling, considering the role that images play in shaping parts of the discourses on AI in warfare, and potentially problematic, given that many of these visuals, as I explore below, misrepresent major issues at stake in the debate.

In this piece I provide an overview of the main themes that one may observe in visual communication in relation to AI in international security and warfare, discuss why some of these visuals raise concerns, and argue for the need to engage in more critical reflections about the types of imagery used by various actors in the debate on AI in the military.

This blog post is based on research conducted as part of the European Research Council funded project “Weaponised Artificial Intelligence, Norms, and Order” (AutoNorms), which examines how the development and use of weaponised AI technologies may affect international norms, defined as understandings of ‘appropriateness’. Following the broader framework of the project, I argue that certain visuals of AI in the military, by being (re)produced via research communication and media reporting, among others, have potential to shape (mis)perceptions of the issue.

Why reflecting upon images of AI in the military matters

As with the field of AI ethics more broadly, critical reflections on visual communication in relation to AI appear to be minimal in global discussions about autonomous weapon systems (AWS)—systems that can select and engage targets without human intervention—which have been ongoing for more than a decade. The same can be said for debates about responsible AI in the military domain, which have become more prominent in recent years (see, for instance, the initiative of the Responsible AI in the Military Domain Summit held first in 2023, with another edition due in 2024).

Yet, examining visuals deserves a place in the debate on responsible AI in the military domain. It matters because, as argued by Camila Leporace on this blog, images have a role in constructing certain perceptions, especially “in the midst of the technological hype”. As pointed out by Maggie Mustaklem from the Oxford Internet Institute, certain tropes in visual communication and reporting about AI create a disconnect between technological developments in the area and how people, in particular the broader public, understand what the technologies are about. This is partly why the AutoNorms project blog refrains from using the widely spread visual language of AI in the military context and uses images from the Better Images of AI library as much as possible.

Main themes and issues in visualizing military applications of AI

Many of the visuals featured in research communication, media reporting, and publications about AI in the military domain speak to the tropes and clichés in images of AI more broadly, as identified by the Better Images of AI guide.

One major theme is anthropomorphism, as we often see pictures of white or metallic humanoid robots that appear holding weapons, pressing nuclear buttons, or marching in troops like soldiers with angry or aggressive expressions, as if they could express emotions or be ‘conscious’ (see examples here and here).

In some variations, humanoids evoke associations with science fiction, especially the Terminator franchise. The Terminator is often referenced in debates about AWS, which feature in a substantial part of the research on AI in international relations, security, and military ethics. AWS are often called ‘killer robots’, both in academic publications and media platforms, which seems to encourage the use of images of humanoid ‘killer robots’ with red eyes, often originating from stock image databases (see examples here, here, and here). Some outlets do, however, note in captions that “killer robots do not look like this” (see here and here).

Actors such as campaigners might employ visuals, especially references from pop culture and sci-fi, to get people more engaged and as tools to “support education, engagement and advocacy”. For instance, Stop Killer Robots, a campaign for an international ban on AWS, often uses a robot mascot called David Wreckham to send their message that “not all robots are going to be as friendly as he is”.

Sci-fi also acts as a point of reference for policymakers, as evidenced, for example, by US official discourses and documents on AWS. As an illustration, some of these common tropes were visually present at the conference “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation” which brought together diplomats, civil society, academia, and other actors to discuss the potential international regulation of AWS in April 2024 in Vienna.

Half-human half-robot projected on the wall and a cut-out of a metallic robot greeting participants at the entrance of the Vienna AWS conference. Photos by Anna Nadibaidze.

The colour blue also often features in visual communication about AI in warfare, together with abstract depictions of running code, algorithms, or computing technologies. This is particularly distinguishable in stock images used for blogs, conferences, or academic book cover designs. As Romele and Rodighiero write on this blog, blue might be used because it is calming, soothing, and also associated with peace, encouraging some accepting reaction from viewers, and in this way promoting certain imaginaries about AI technologies.

Examples of covers for recently published academic books on the topic of AI in international security and warfare.

There are further distinct themes in visuals used alongside publications about AI in warfare and AWS. A common trope features human soldiers in an abstract space, often with a blue (and therefore calming) background or running code, wearing a virtual reality headset and presumably looking at data (see examples here and here). One such visual was used for promotional material of the aforementioned REAIM Summit, organised by the Dutch Government in 2023.

Screenshot of the REAIM Summit 2023 website homepage (www.reaim2023.org). The image is credited to the US Naval Information Warfare Center Pacific, public domain.

Finally, many images feature military platforms such as uncrewed aerial vehicles (UAVs or drones) flying alone or in swarms, robotic ground vehicles, or quadruped animal-shaped robots, depicted either alone or together with human soldiers. Many of them are prototypes or models of existing systems tested and used by the United States military, such as the MQ-9 Reaper (which is not classified as an AWS). Most often, these images are taken from the visual repository of the US Department of Defense, given that photos released by the US government are in the public domain and therefore free to use (see examples here, here, and here). Many visuals also display generic imagery from the military, for instance soldiers looking at computer screens, sitting in a control room, or engaging in other activities (see examples here, here, and here).

Example of an image often used to accompany online publications about AWS. Source: Cpl Rhita Daniel, US Marine Corps, public domain.

However, there are several issues associated with some of the common visuals explored above. As AI researcher and advocate for an AWS ban Stuart Russell points out, references to the Terminator or sci-fi are inappropriate for the debate on AI in the military because they suggest that this is a matter for the future, whereas the development and use of these technologies is already happening.

Sci-fi references and humanoids might also give the impression that AI in the military is about replacing humans with ‘conscious’ machines that will eventually fight ‘robot wars’. This is misleading because the debate surrounding the integration of AI into the military is mostly not about robots replacing humans. Armed forces around the world plan to use AI for a variety of purposes, especially as part of humans interacting with machines, often called ‘teaming’. The debate and actors participating in it should therefore focus on the various legal, ethical, and security challenges that might arise as part of these human-machine interactions, such as a distributed form of agency.

Further, images of ‘killer robots’ often invoke a narrative of ‘uprising’, common in many works of popular culture, in which humans lose control of AI, as well as determinist views in which humans have little influence over how technology impacts society. Such visual tropes overshadow (human) actors’ decisions to develop or use AI in certain ways, as well as the political and social contexts surrounding those decisions. Portraying weaponised AI as robots turning against their creators problematically presents this as an inevitable development, instead of highlighting the choices made by developers and users of these technologies.

Finally, many of the visuals tend to focus on the combat aspect of integrating AI in the military, especially on weaponry, rather than more ‘mundane’ applications, for instance in logistics or administration. Sensationalist imagery featuring shiny robots with guns, or soldiers depicted on a theoretical battlefield with a blue background, risks distracting from actual technological developments in security and warfare, such as the integration of AI into data analysis or military decision-support systems.

Towards better images?

It should be noted that many outlets have moved on from using ‘killer robot’ imagery and sci-fi clichés when publishing about AI in warfare, and more realistic depictions are increasingly being used. For instance, a recent symposium on military AI published by the platform Opinio Juris features articles illustrated with generic photos of soldiers, drones, or fighter jets.

Images of military personnel looking at data on computer screens are arguably less problematic because they convey a more realistic representation of the integration of AI into the military domain. But this still means often relying on the same sources: stock imagery and public domain collections such as the US government’s. It also means that AI technologies are typically depicted in a military training or experimental setting, rather than a context where they could actually be used, such as a real conflict, and not hidden behind a generic blue background.

There are some understandable challenges, such as researchers not getting a say in the images used for their books or articles, or the reliance on free, public domain images, which is common in online journalism. However, as evidenced by the use of sci-fi tropes at major international conferences, a reflection on what counts as ‘responsible’ and ‘appropriate’ visuals for the debate on AI in the military and AWS is lacking.

Images of robot commanders, the Terminator, or soldiers with blue flashy tablets miss the point that AI in the military is about changing dynamics of human-machine interaction, which involve various ethical, legal, and security implications for agency in warfare. As with images of AI more broadly, there is a need to expand the themes in visuals of AI in security and warfare, and therefore also the types of sources used. Better images of AI would include humans who are behind AI systems and humans that might be potentially affected by them—both soldiers and civilians (e.g. some images and photos depict destroyed civilian buildings, see here, here, or here). Ultimately, imagery about AI in the military should “reflect the realistically messy, complex, repetitive and statistical nature of AI systems” as well as the messy and complex reality of military conflict and the security sphere more broadly.

The author thanks Ingvild Bode, Qiaochu Zhang and Eleanor Taylor (one of our Student Stewards) for their feedback on earlier drafts of this blog. 

Better Images of AI’s Student Stewards

Better Images of AI is delighted to be working with Cambridge University’s AI Ethics Society to create a community of Student Stewards. The Student Stewards are working to empower people to use more representative images of AI and celebrate those who lead by example. The Stewards have also formed a valuable community to help Better Images of AI connect with its artists and develop its image library. 

What is Cambridge University’s AI Ethics Society? 

The Cambridge University AI Ethics Society (CUAES) is a group of students from the University of Cambridge who share a passion for advancing the ethical discourse surrounding AI. Each year, the society chooses a campaign to support and, through events and workshops, introduces its members to the issues that the chosen organisation is trying to solve. In 2023, CUAES supported Stop Killer Robots. This year, the Society chose to support Better Images of AI.

The Society’s Reasons for Supporting Better Images of AI 

The CUAES committee really resonated with Better Images of AI’s mission. The impact that visual media can have on public discourse about AI has been overlooked – especially in academia, where the focus is on the written word. Nevertheless, stock images of humanoid robots, white men in suits, and the human brain all embed certain values and preconceptions about what AI is and who makes it. CUAES believes that Better Images of AI can help cultivate more thoughtful and constructive discussions about AI.

Members of the CUAES are privileged enough to be fairly well-informed about the nuances of AI and its ethical implications. Nevertheless, the Society has recognised that even its own logo of a robot incorporates reductive imagery that misrepresents the complexities and current state of AI. Recognising this oversight in its own decisions, CUAES saw that further work needed to be done.

CUAES is eager to share the importance of Better Images of AI with industry actors, but also with members of the public, whose perceptions will likely be shaped the most by these sensationalist images. CUAES hopes that by creating a community of Student Stewards, it can disseminate Better Images of AI’s message widely and work together to revise the Society’s logo to better reflect its values.

The Birth of the Student Steward Initiative

Better Images of AI visited the CUAES earlier this year to introduce members to its work and encourage students to think more critically about how AI is represented. During the workshop, participants were given the tough task of designing their own images of AI – we saw everything from illustrations depicting how generative AI models are trained to the duality of AI symbolised by the yin and yang. The students who attended the workshop were fascinated by Better Images of AI’s mission and wanted to use their skills and time to help – this was the start of the Student Steward community.

A few weeks after this workshop, individuals were invited to a virtual induction to become Student Stewards, so they could introduce more nuanced understandings of AI to the wider public. Whilst this initiative was born out of CUAES, students (and others) from all around the globe are invited to join the group to shape a more informed and balanced public perception of AI.

The Role of the Student Stewards

The Student Stewards are on the frontline of spreading Better Images of AI’s mission to journalists, researchers, communications professionals, designers, and the wider public. Here are some of the roles that they champion: 

  1. The Guidance Role: if our Student Stewards see images of AI that are misleading, unrepresentative or harmful, they will attempt to contact the authors and make them aware of the Better Images of AI Library and Guide. The Stewards hope that they can help to raise awareness of the problems associated with the images used and guide authors towards alternative options that avoid reinforcing dangerous AI tropes.
  2. The Gratitude Role: we realise that it is equally important to recognise instances where authors have used images from the Better Images of AI library. Images from the library have been spotted in international media, adopted by academic institutions and utilised by independent writers. Every decision to opt for more inclusive and representative images of AI plays a crucial role in raising awareness of the nuances of AI. Therefore, our Stewards want to thank authors for being sensitive to these issues and encourage the continued use of the library.
  3. Connecting with artists: the stories and motivations behind each of the images in our library are often interesting and thought-provoking. Our Student Stewards will be taking the time to connect with artists who contribute images to our library. By learning more about how artists have been inspired to create their works, we can better appreciate the diverse perspectives and narratives that these images provide to wider society.
  4. Helping with image collections: Better Images of AI carefully selects the images that are published in its library. Each image is scrutinised against the different requirements to ensure that it avoids reinforcing harmful stereotypes and embodies the principles of honesty, humanity, necessity and specificity. Our Student Stewards will be assisting with many of the tasks involved from submission to publication, including liaising with artists, data labelling, evaluating initial submissions, and writing image descriptions.
  5. Sharing their views: each of our Student Stewards comes with different interests related to AI and its associated representations, narratives, benefits and challenges. We are eager for our students to share their insights on our blog to introduce others to new debates and ideas in these domains.

As Better Images of AI is a non-profit organisation, our community of Stewards operates on a voluntary basis, but this allows for flexibility around your other commitments. Stewards are free to take on additional tasks based on their own availability and interests, and there are no minimum time requirements for the role – we are just grateful for your enthusiasm and willingness to help!

If you are interested in becoming a Student Steward at Better Images of AI, please get in touch. You do not need to be affiliated with the University of Cambridge or be a student to join the group.