Images of AI – Between Fiction and Function

This image shows an abstract microscopic photograph of a Graphics Processing Unit resembling a satellite image of a big city. The image has been overlaid with a bright blue filter. In the middle of the image is the text, 'Images of AI - Between Fiction and Function' in a white text box with black text. Beneath, in a maroon text box, is the author's name in white text.

“The currently pervasive images of AI make us look somewhere, at the cost of somewhere else.”

In this blog post, Dominik Vrabič Dežman provides a summary of his recent research article, ‘Promising the future, encoding the past: AI hype and public media imagery’.

Dominik sheds light on the importance of the Better Images of AI library which fosters a more informed, nuanced public understanding of AI by breaking the stronghold of the “deep blue sublime” aesthetic with more diverse and meaningful representations of AI.

Dominik also draws attention to the algorithms which perpetuate the dominance of familiar and sensationalist visuals and calls for movements which reshape media systems to make better images of AI more visible in public discourse.

The full paper is published in the AI and Ethics journal’s special edition on ‘The Ethical Implications of AI Hype’, a collection edited by We and AI.


AI promises innovation, yet its imagery remains trapped in the past. Deep-blue, sci-fi-inflected visuals have flooded public media, saturating our collective imagination with glowing, retro-futuristic interfaces and humanoid robots. These “deep blue sublime” [1] images, which draw on a steady palette of outdated pop-cultural tropes and clichés, do not merely depict AI — they shape how we think about it, reinforcing grand narratives of intelligence, automation, and inevitability [2]. It takes little to see that the AI discussed in public media is far from the ethereal, seamless force these visuals suggest. Instead, the term generally refers to a sprawling global technological enterprise, entangled with labor exploitation, ecological extraction, and financial speculation [3–10] — realities conspicuously absent from its dominant public-facing representations.

The widespread rise of these images is set against intensifying “AI hype” [11], which has been compared to historical speculative investment bubbles [12,13]. In my recent research [1,14,15], I join a growing body of work looking into images of AI [16–21] to explore how AI images operate at the intersection of aesthetics and politics. My overarching ambition has been to contribute to the literature an integrated account of the normative and the empirical dimensions of public images of AI. I’ve explored how these images matter politically and ethically, inseparable from the pathways they take in real time, echoing throughout public digital media and wallpapering it in seen-before denominations of blue monochrome.

Rather than measuring the direct impact of AI imagery on public awareness, my focus has been on unpacking the structural forces that produce and sustain these images. What mechanisms dictate their circulation? Whose interests do they serve? How might we imagine alternatives? My critique targets the visual framing of AI in mainstream public media — glowing, abstract, blue-tinted veneers seen daily by millions on search engines, institutional websites, and in reports on AI innovation. These images do not merely aestheticize AI; they foreclose more grounded, critical, and open-ended ways of understanding its presence in the world.


The Intentional Mindlessness of AI Images

This image shows a Google Images search for 'artificial intelligence'. The results are a collection of images featuring human brains, the colour blue, and white humanoid robots.

Google Images search results for “artificial intelligence”. January 14, 2025. Search conducted from an anonymised instance of Safari in Amsterdam, Netherlands.

Recognizing the ethico-political stakes of AI imagery begins with acknowledging that what we spend our time looking at, or not looking beyond, matters politically and ethically. The currently pervasive images of AI make us look somewhere, at the cost of somewhere else. The sheer volume of these images, and their dominance in public media, slot public perception into repetitive grooves dominated by human-like robots, glowing blue interfaces, and infinite expanses of deep-blue intergalactic space. By monopolizing the sensory field through which AI is perceived, they reinforce sci-fi clichés and, more importantly, obscure the material realities — human labor, planetary resources, material infrastructures, and economic speculation — that drive AI development [22,23].

In a sense, images of AI could be read as operational [24–27], enlisted in service of an operation which requires them to look, and function, the way they do. This might involve their role in securing future-facing AI narratives, shaping public sentiment towards acceptance of AI innovation, and supporting big tech agendas for AI deployment and adoption. The operational nature of AI imagery means that these images cannot be studied purely as aesthetic artifacts, or autonomous works of aesthetic production. Instead, these images are minor actors, moving through technical, cultural, and political infrastructures. In doing so, individual images do not say or do much per se – they are always already intertwined in the circuits of their economic uptake, circulation, and currency; not at the hands of the digital labourers who created them, but of the human and algorithmic actors that keep them in circulation.

Simultaneously, the endurance of these images is less the result of intention than of a more mindless inertia. It quickly becomes clear that these images reflect neither public attitudes nor those of their makers: anonymous stock-image producers, digital workers mostly located in the global South [28]. They might reflect the views of the few journalistic or editorial actors who choose the images for their reporting [29], or who are simply looking to increase audience engagement through sensationalist imagery [30]. Ultimately, their visibility is in the hands of algorithms that reward more of the same familiar visuals over time [1,31], and of stock-image platforms and search engines, which maintain close ties with media conglomerates [32] that have, in turn, long been entangled with big tech [33]. These stock images are the detritus of a digital economy that rewards repetition over revelation: “poor images” [34] travelling across cyberspace, endlessly cropped, upscaled, recycled, and reused, until they are pulled back into circulation by the very systems they help sustain [15,28].


AI as Ouroboros: Machinic Loops and Recursive Aesthetics

As algorithms increasingly dictate who sees what in the public sphere [35–37], they dictate not only what is seen but also what is repeated. Images of AI become ensnared in algorithmic loops, which sediment the same visuality over time on various news feeds and search engines [15]. This process has intensified with the proliferation of generative AI: as AI-generated content proliferates, it feeds on itself—trained on past outputs, generating ever more of the same. This “closing machinic loop” [15,28] perpetuates aesthetic homogeneity, reinforcing dominant visual norms rather than challenging them. The widespread adoption of AI-generated stock images further narrows the space for disruptive, diverse, and critical representations of AI, making it increasingly difficult for alternative images to surface in public visibility.

The image shows a humanoid figure with a glowing, transparent brain standing in a digital landscape. The figure's body is composed of metallic and biomechanical components, illuminated by vibrant blue and pink lights. The background features a high-tech grid with data streams, holographic interfaces, and circuitry patterns.

ChatGPT 4o output for query: “Produce an image of ‘Artificial Intelligence’”. 14 January 2025.


Straddling the Duality of AI Imagery

In critically examining AI imagery, it is easy to veer into one of two deterministic extremes — both of which risk oversimplifying how these images function in shaping public discourse:

1. Overemphasizing Normative Power:

This approach risks treating AI images as if they have autonomous agency, ignoring the broader systems that shape their circulation. AI images appear as sublime artifacts—self-contained objects for contemplation, removed from their daily life as fleeting passengers in the digital media image economy. While images certainly exert influence in shaping socio-technical imaginaries [38,39], they operate within media platforms, economic structures, and algorithmic systems that constrain their impact.

2. Overemphasizing Materiality:

This perspective reduces AI to mere infrastructure, treating images as passive reflections of technological and industrial processes rather than active participants in shaping public perception. From this view, AI’s images are dismissed as epiphenomenal, secondary to the “real” mechanisms of AI’s production: cloud computing, data centers, supply chains, and extractive labor. In reality, AI has never been purely empirical; cultural production has been integral to AI research and development from the outset, with speculative visions long driving policy, funding, and public sentiment [40].

Images of AI are neither neutral nor inert. The diminishing potency of glowing, sci-fi-inflected AI imagery as a stand-in for AI in public media suggests a growing fatigue with its clichés, and cannot be untangled from a general discomfort with AI’s utopian framing, as media discourse pivots toward concerns over opacity, power asymmetries, and scandals in its implementation [29,41]. A robust critique of the cultural entanglements of AI requires addressing both its normative commitments (promises made to the public) and its empirical components (data, resources, labour; [6]).

Toward Better Images: Literal Media & Media Literacy

Given the embeddedness of AI images within broader machinations of power, the ethics of AI images are deeply tied to public understanding and awareness of such processes. Cultivating a more informed, critical public — through exposure to diverse and meaningful representations of AI — is essential to breaking the stronghold of the deep blue sublime.

At the individual level, media literacy equips the public to critically engage with AI imagery [1,42,43]. By learning to question the visual veneers, people can move beyond passive consumption of the pervasive, reductive tropes that dominate AI discourse. Better images recalibrate public perception, offering clearer insights into what AI is, how it functions, and its societal impact. The kinds of images produced are equally important. Better images would highlight named infrastructural actors, document AI research and development, and/or diversify the visual associations available to us, loosening the visual stronghold of the currently dominant tropes.

This greatly raises the bar for news outlets in producing original imagery of didactic value, which is where open-source repositories such as Better Images of AI serve as invaluable resources. This crucially bleeds into the urgency of reshaping media systems, making better images readily available to creators and media outlets, helping them move away from generic visuals toward educational, thought-provoking imagery. However, creating better visuals is not enough; they must become embedded in media infrastructure to become the norm rather than the exception.

Given the above, the role of algorithms cannot be ignored: they drive what images are seen, shared, and prioritized in public discourse. Without addressing these mechanisms, even the most promising alternatives risk being drowned out by the familiar clichés. Rethinking these pathways is essential if improved representations are to disrupt the existing visual narrative of AI.

Efforts to create better AI imagery are only as effective as their ability to reach the public eye and disrupt the dominance of the “deep blue sublime” aesthetic in public media. This requires systemic action—not merely producing different images in isolation, but rethinking the networks and mechanisms through which these images are circulated. To make a meaningful impact, we must address both the sources of production and the pathways of dissemination. By expanding the ways we show, think about, and engage with AI, we create opportunities for political and cultural shifts. A change in one way of sensing AI (writing / showing / thinking / speaking) invariably opens space for a change in others.

Seeing AI ≠ Believing AI

AI is not just a technical system; it is a speculative, investment-driven project, a contest over public consensus, staged by a select few to cement its inevitability [44]. The outcome is a visual regime that detaches AI’s media portrayal from its material reality: a territorial, inequitable, resource-intensive, and financially speculative global enterprise.

Images of AI come from somewhere (they are products of poorly-paid digital labour, served through algorithmically-ranked feeds), do something (torque what is at-hand for us to imagine with, directing attention away from AI’s pernicious impacts and its growing inequalities), and go somewhere (repeat themselves ad nauseam through tightening machinic loops, numbing rather than informing; [16]).

The images have left few fooled, and represent a missed opportunity to deepen public sensitisation and understanding regarding AI. Crucially, bad images do not inherently disclose bad tech, nor do good images promote good tech; the widespread adoption of better images of AI in public media would not automatically lead to socially good or desirable understandings, engagements, or developments of AI. That remains the issue of the current political economy of AI, whose stakeholders only partially determine this image economy. Better images alone cannot solve this, but they might open slivers of insight into AI’s global “arms race.”

As it stands, different visual regimes struggle to be born. Fostering media literacy, demanding critical representations, and disrupting the algorithmic stranglehold on AI imagery are acts of resistance. If AI is here to stay, then so too must be our insistence on seeing it otherwise — beyond the sublime spectacle, beyond inevitability, toward a more porous and open future.

About the author

Dominik Vrabič Dežman (he/him) is an information designer and media philosopher. He is currently at the Departments of Media Studies and Philosophy at the University of Amsterdam. Dominik’s research interests include public narratives and imaginaries of AI, politics and ethics of UX/UI, media studies, visual communication and digital product design.

References

1. Vrabič Dežman, D.: Defining the Deep Blue Sublime. SETUP (2023). https://web.archive.org/web/20230520222936/https://deepbluesublime.tech/

2. Burrell, J.: Artificial Intelligence and the Ever-Receding Horizon of the Future. Tech Policy Press (2023). https://techpolicy.press/artificial-intelligence-and-the-ever-receding-horizon-of-the-future/

3. Kponyo, J.J., Fosu, D.M., Owusu, F.E.B., Ali, M.I., Ahiamadzor, M.M.: Techno-neocolonialism: an emerging risk in the artificial intelligence revolution. TraHs (2024). https://doi.org/10.25965/trahs.6382

4. Leslie, D., Perini, A.M.: Future Shock: Generative AI and the International AI Policy and Governance Crisis. Harvard Data Science Review (2024). https://doi.org/10.1162/99608f92.88b4cc98

5. Regilme, S.S.F.: Artificial Intelligence Colonialism: Environmental Damage, Labor Exploitation, and Human Rights Crises in the Global South. SAIS Review of International Affairs. 44:75–92 (2024). https://doi.org/10.1353/sais.2024.a950958

6. Crawford, K.: The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2021). https://www.degruyter.com/isbn/9780300252392

7. Sloane, M.: Controversies, contradiction, and “participation” in AI. Big Data & Society. 11:20539517241235862 (2024). https://doi.org/10.1177/20539517241235862

8. Rehak, R.: On the (im)possibility of sustainable artificial intelligence. Internet Policy Review (2024). https://policyreview.info/articles/news/impossibility-sustainable-artificial-intelligence/1804

9. Wierman, A., Ren, S.: The Uneven Distribution of AI’s Environmental Impacts. Harvard Business Review (2024). https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts

10. What we don’t talk about when we talk about AI. Joseph Rowntree Foundation (2024). https://www.jrf.org.uk/ai-for-public-good/what-we-dont-talk-about-when-we-talk-about-ai

11. Duarte, T., Barrow, N., Bakayeva, M., Smith, P.: Editorial: The ethical implications of AI hype. AI Ethics. 4:649–51 (2024). https://doi.org/10.1007/s43681-024-00539-x

12. Singh, A.: The AI Bubble. Social Science Encyclopedia (2024). https://www.socialscience.international/the-ai-bubble

13. Floridi, L.: Why the AI Hype is Another Tech Bubble. Philos Technol. 37:128 (2024). https://doi.org/10.1007/s13347-024-00817-w

14. Vrabič Dežman, D.: Interrogating the Deep Blue Sublime: Images of Artificial Intelligence in Public Media. In: Cetinic, E., Del Negueruela Castillo, D. (eds.) From Hype to Reality: Artificial Intelligence in the Study of Art and Culture. HumanitiesConnect, Rome/Munich (2024). https://doi.org/10.48431/hsah.0307

15. Vrabič Dežman, D.: Promising the future, encoding the past: AI hype and public media imagery. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00474-x

16. Romele, A.: Images of Artificial Intelligence: a Blind Spot in AI Ethics. Philos Technol. 35:4 (2022). https://doi.org/10.1007/s13347-022-00498-3

17. Singler, B.: The AI Creation Meme: A Case Study of the New Visibility of Religion in Artificial Intelligence Discourse. Religions. 11:253 (2020). https://doi.org/10.3390/rel11050253

18. Steenson, M.W.: A.I. Needs New Clichés. Medium (2018). https://web.archive.org/web/20230602121744/https://medium.com/s/story/ai-needs-new-clich%C3%A9s-ed0d6adb8cbb

19. Hermann, I.: Beware of fictional AI narratives. Nat Mach Intell. 2:654 (2020). https://doi.org/10.1038/s42256-020-00256-0

20. Cave, S., Dihal, K.: The Whiteness of AI. Philos Technol. 33:685–703 (2020). https://doi.org/10.1007/s13347-020-00415-6

21. Mhlambi, S.: God in the image of white men: Creation myths, power asymmetries and AI (2019). https://web.archive.org/web/20211026024022/https://sabelo.mhlambi.com/2019/03/29/God-in-the-image-of-white-men

22. How to invest in AI’s next phase. J.P. Morgan Private Bank U.S. Accessed 2025 Feb 18. https://privatebank.jpmorgan.com/nam/en/insights/markets-and-investing/ideas-and-insights/how-to-invest-in-ais-next-phase

23. Jensen, G., Moriarty, J.: Are We on the Brink of an AI Investment Arms Race? Bridgewater (2024). https://www.bridgewater.com/research-and-insights/are-we-on-the-brink-of-an-ai-investment-arms-race

24. Paglen, T.: Operational Images. e-flux journal. 59:3 (2014).

25. Pantenburg, V.: Working images: Harun Farocki and the operational image. In: Image Operations. Manchester University Press, pp. 49–62 (2016).

26. Parikka, J.: Operational Images: Between Light and Data. e-flux journal (2023). https://web.archive.org/web/20230530050701/https://www.e-flux.com/journal/133/515812/operational-images-between-light-and-data/

27. Celis Bueno, C.: Harun Farocki’s Asignifying Images. tripleC. 15:740–54 (2017). https://doi.org/10.31269/triplec.v15i2.874

28. Romele, A., Severo, M.: Microstock images of artificial intelligence: How AI creates its own conditions of possibility. Convergence: The International Journal of Research into New Media Technologies. 29:1226–42 (2023). https://doi.org/10.1177/13548565231199982

29. Moran, R.E., Shaikh, S.J.: Robots in the News and Newsrooms: Unpacking Meta-Journalistic Discourse on the Use of Artificial Intelligence in Journalism. Digital Journalism. 10:1756–74 (2022). https://doi.org/10.1080/21670811.2022.2085129

30. De Dios Santos, J.: On the sensationalism of artificial intelligence news. KDnuggets (2019). https://www.kdnuggets.com/on-the-sensationalism-of-artificial-intelligence-news.html/

31. Rogers, R.: Aestheticizing Google critique: A 20-year retrospective. Big Data & Society. 5:205395171876862 (2018). https://doi.org/10.1177/2053951718768626

32. Kelly, J.: When news orgs turn to stock imagery: An ethics Q & A with Mark E. Johnson. Center for Journalism Ethics (2019). https://ethics.journalism.wisc.edu/2019/04/09/when-news-orgs-turn-to-stock-imagery-an-ethics-q-a-with-mark-e-johnson/

33. Papaevangelou, C.: Funding Intermediaries: Google and Facebook’s Strategy to Capture Journalism. Digital Journalism. 1–22 (2023). https://doi.org/10.1080/21670811.2022.2155206

34. Steyerl, H.: In Defense of the Poor Image. e-flux journal (2009). https://www.e-flux.com/journal/10/61362/in-defense-of-the-poor-image/

35. Bucher, T.: Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society. 14:1164–80 (2012). https://doi.org/10.1177/1461444812440159

36. Bucher, T.: If…Then: Algorithmic Power and Politics. Oxford University Press (2018).

37. Gillespie, T.: Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media. Yale University Press, New Haven (2018).

38. Jasanoff, S., Kim, S.-H. (eds.): Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. University of Chicago Press, Chicago, IL. Accessed 2022 Jun 26. https://press.uchicago.edu/ucp/books/book/chicago/D/bo20836025.html

39. O’Neill, J.: Social Imaginaries: An Overview. In: Peters, M.A. (ed.) Encyclopedia of Educational Philosophy and Theory. Springer Singapore, pp. 1–6 (2016). https://doi.org/10.1007/978-981-287-532-7_379-1

40. Law, H.: Computer vision: AI imaginaries and the Massachusetts Institute of Technology. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00389-z

41. Nguyen, D., Hekman, E.: The news framing of artificial intelligence: a critical exploration of how media discourses make sense of automation. AI & Soc. 39:437–51 (2024). https://doi.org/10.1007/s00146-022-01511-1

42. Woo, L.J., Henriksen, D., Mishra, P.: Literacy as a Technology: a Conversation with Kyle Jensen about AI, Writing and More. TechTrends. 67:767–73 (2023). https://doi.org/10.1007/s11528-023-00888-0

43. Kvåle, G.: Critical literacy and digital stock images. Nordic Journal of Digital Literacy. 18:173–85 (2023). https://doi.org/10.18261/njdl.18.3.4

44. Tacheva, Z., Appedu, S., Wright, M.: AI as “Unstoppable” and Other Inevitability Narratives in Tech: On the Entanglement of Industry, Ideology, and Our Collective Futures. AoIR Selected Papers of Internet Research (2024). https://doi.org/20250206083707000

What Do I See in ‘Ways of Seeing’ by Zoya Yasmine

At the top there is a Diptych contrasting a whimsical pastel scene with large brown rabbits, a rainbow, and a girl in a red dress on the left, and a grid of numbered superpixels on the right - emphasizing the difference between emotive seeing and analytical interpretation. At the bottom, there is black text which says 'what do i see in ways of seeing' by Zoya Yasmine. In the top right corner, there is text in a maroon text box which says 'through my eyes blog series'.

Artist contributions to the Better Images of AI library have always served a really important role in relation to fostering understanding and critical thinking about AI technologies and their context. Images facilitate deeper inquiries into the nature of AI, its history, and ethical, social, political and legal implications.

When artists create better images of AI, they often have to grapple with these narratives in their attempts to portray the technology more realistically and point towards its strengths and weaknesses. Furthermore, as artists freely share these images in our library, others can benefit from learning about each artist's own motivations (which are provided in the image descriptions), and the images can also inspire users' own musings.

In this series of blog posts, some of our volunteer stewards are each taking turns to choose an image from the Archival Images of AI collection and unpack the artist’s processes and explore what that image means to them. 

At the end of 2024, we released the Archival Images of AI Playbook with AIxDESIGN and the Netherlands Institute for Sound and Vision. The playbook explores how existing images – especially those from digital heritage collections – can help us craft more meaningful visual narratives about AI. Through various image-makers’ own attempts to make better images of AI, the playbook shares numerous techniques which can teach you how to transform existing images into new creations. 

Here, Zoya Yasmine unpacks ‘Ways of Seeing’, image-maker Nadia Piet’s own better image of AI, created for the playbook. Zoya comments on how it is a really valuable image for depicting the way that text-to-image generators ‘learn’ how to generate their output creations. Zoya considers how this image relates to copyright law (she’s a bit of an intellectual property nerd) and the discussions about whether AI companies should be able to use individuals’ work to train their systems without explicit consent or remuneration.

ALT text: Diptych contrasting a whimsical pastel scene with large brown rabbits, a rainbow, and a girl in a red dress on the left, and a grid of numbered superpixels on the right - emphasizing the difference between emotive seeing and analytical interpretation.

Nadia Piet + AIxDESIGN & Archival Images of AI / Better Images of AI / Ways of Seeing / CC-BY 4.0

‘Ways of Seeing’ by Nadia Piet 

This diptych contrasts human and computational ways of seeing: one riddled with memory and meaning, the other devoid of emotional association and capable of structural analysis. The left pane shows an illustration from Tom Seidmann-Freud’s Book of Hare Stories (1924) which portrays a whimsical, surreal scene that is both playful and uncanny. On the right, the illustration is reduced to a computational rendering, with each of its superpixels (16×16) fragmented and sorted by visual complexity with a compression algorithm. 
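The right-hand panel's process — fragmenting an image into a grid of superpixels and sorting them by visual complexity with a compression algorithm — can be sketched in code. The function below is an illustrative reconstruction, not Piet's actual pipeline: it splits a grayscale image array into tiles and ranks them by their zlib-compressed size, a common proxy for visual complexity (detailed, noisy regions compress poorly; flat regions compress well).

```python
import zlib

import numpy as np


def rank_tiles_by_complexity(img, grid=16):
    """Split a grayscale image into grid x grid tiles and rank them by the
    size of their zlib-compressed bytes (a proxy for visual complexity)."""
    h, w = img.shape
    th, tw = h // grid, w // grid
    tiles = []
    for r in range(grid):
        for c in range(grid):
            tile = img[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            compressed_size = len(zlib.compress(tile.tobytes()))
            tiles.append(((r, c), compressed_size))
    # Least complex (most compressible) tiles first
    return sorted(tiles, key=lambda t: t[1])


# Demo: a flat region compresses far better than a noisy one
rng = np.random.default_rng(0)
img = np.zeros((64, 64), dtype=np.uint8)
img[32:, 32:] = rng.integers(0, 256, (32, 32), dtype=np.uint8)  # noisy quadrant
ranked = rank_tiles_by_complexity(img, grid=2)
print(ranked[0][0], ranked[-1][0])  # flattest tile first, noisiest last
```

Sorting an image's own fragments this way strips away all pictorial meaning while preserving measurable structure, which is precisely the contrast the diptych stages.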


Copyright and training AI systems 

Training AI systems requires substantial amounts of input data – images, videos, texts and other content. From these materials, AI systems can ‘learn’ how to make predictions and provide outputs. However, many of the materials used to train AI systems are protected by copyright owned by other parties, which raises complex questions about ownership and the legality of using such data without permission. 

In the UK, Getty Images filed a lawsuit against Stability AI (developers of a text-to-image model called Stable Diffusion) claiming that 7.3 million of its images were unlawfully scraped from its website to train Stability AI’s model. Similarly, Mumsnet has launched a legal complaint against OpenAI, the developer of ChatGPT, accusing the AI company of scraping content from its site (with over 6 billion words shared by community members) without consent. 

The UK’s Copyright, Designs and Patents Act 1988 (the Act) provides companies like Getty Images and Mumsnet with copyright protection over their databases and assets. So unless an exception applies, permission (through a license) is required if other parties wish to reproduce or copy the content. Section 29A of the Act provides an exception which permits copies of any copyright-protected material for the purposes of Text and Data Mining (TDM) without a specific license. But this lenient provision is for non-commercial purposes only. Although the status of AI systems like Stable Diffusion and ChatGPT has not yet been tested before the courts, they are likely to fall outside the scope of non-commercial purposes.

TDM is the automated technique used to extract and analyse vast amounts of online material to reveal relationships and patterns in the data. TDM has become an increasingly valuable tool for training lucrative generative AI systems on material scraped from the Internet at scale. It is clear that AI models cannot be developed or built efficiently without input data created by human artists, researchers, writers, photographers, publishers, and creators. However, as much of this work is being used without payment or attribution, big tech companies are essentially ‘freeriding’ on the works of a creative industry that has invested significant time, effort, and resources into producing them. 


How does this image relate to current debates about copyright and AI training? 

When I saw this image, it really prompted me to think about the training process of AI systems and the purpose of the copyright system. ‘Ways of Seeing’ has stimulated my own thoughts about how computational models ‘learn’ and ‘see’ in contrast to human creators.

Text-to-image AI generators (like Stable Diffusion or Midjourney) are repeatedly trained on thousands of images, allowing the models to ‘learn’ to identify patterns, like what common objects and colours look like, and then reproduce these patterns when instructed to create new images. While Piet’s image has been designed to illustrate a ‘compression algorithm’ process, I think it also serves as a useful visual to reflect how AI processes visual data computationally, reducing it to pixels, patterns, or latent features. 

It’s important to note that often the images generated by AI models will not necessarily be exact copies of the original images used in the training process – but instead, they serve as statistical approximations of training data which have informed the model’s overall understanding of how objects are represented. 
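The idea of "statistical approximations" can be made concrete with a toy analogy. The sketch below is deliberately not how a diffusion model works; it stands in for the general principle that a generative model fits a distribution to its training data and then samples from that fit, rather than storing and replaying individual examples. The two-dimensional "dataset" and Gaussian fit are illustrative assumptions.

```python
import numpy as np

# Toy "training data": 1000 two-dimensional feature vectors
rng = np.random.default_rng(42)
training_data = rng.normal(loc=[5.0, 2.0], scale=0.5, size=(1000, 2))

# "Training": estimate the distribution's parameters from the data
mean = training_data.mean(axis=0)
cov = np.cov(training_data, rowvar=False)

# "Generation": sample a brand-new point from the learned distribution
new_sample = rng.multivariate_normal(mean, cov)

# The sample reflects the statistics of the dataset as a whole,
# but (almost surely) matches no single training example exactly
is_copy = (training_data == new_sample).all(axis=1).any()
print(is_copy)
```

In the same spirit, an image generator's output is shaped by the aggregate statistics of millions of training images, which is why it can resemble the training corpus without reproducing any one work verbatim.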

It’s interesting to think about this in relation to copyright and what this legal framework serves to protect. Copyright stands to protect the creative expression of works – for example, the lighting, exposure, filter, or positioning of an image – but not the ideas themselves. The reason that copyright law focuses on these elements is because they reflect the creator’s own unique thoughts and originality. However, as Piet’s illustration can usefully demonstrate, what is significant about the AI training process for copyright law is that TDM is often not used to extract the protected expression of the materials.

To train AI models, it is often the factual elements of a work that are most valuable (as opposed to the creative aspects). The training process relies on the broad visual features of the images rather than specific artistic choices. For example, when training text-to-image models, TDM is not typically used to extract data about the lighting techniques employed to make an image of a cat particularly appealing. Instead, what matters is access to images detailing the features that resemble a cat: fur, whiskers, big eyes, paws. In Piet’s image, the protectable parts of the illustration from the ‘Book of Hare Stories’ would subsist in the artistic style and execution – for example, the way that the hare and other elements are drawn, the placement and interaction of the elements, and the overall design of the image. 

The specific challenge for copyright law is that AI companies are unable to capture these ‘unprotectable’ factual elements without making a copy or storing the protected parts (Lemley and Casey, 2020). I think Nadia’s image really highlights how artwork is transformed into fragmented ‘data’ for training systems, which challenges our understanding of creativity and originality. 

My thoughts above are not to suggest that AI companies should be able to freely use copyright-protected works as training data for their models without remunerating or seeking permission from copyright owners. Instead, the way that TDM and generative AI ‘re-imagine’ the value of these ‘unprotectable’ elements means that AI companies still free-ride on creators’ materials. Therefore, AI companies should be required to explicitly license the copyright-protected materials used to train their systems, so that creators have proper control over their works (you can read more about my thoughts here).

Also, I do not deny that there are generative AI systems that aim to reproduce a particular artist’s style – see here. In these instances, I think it would be easier to prove copyright infringement, since the outputs are clear reproductions of ‘protected elements’. Where this is not the purpose of the AI tool, however, developers try to avoid outputs that replicate training data too closely, as this can more easily expose them to copyright infringement claims over both the input (as discussed in this piece) and the output image (see here for a discussion). 


My favourite part of Nadia Piet’s image

I think my favourite part of the image is the choice of illustration used to represent computational processing. As Nadia writes in her description, Tom Seidmann-Freud’s illustration depicts a “whimsical, surreal scene that is both playful and uncanny”. Tom, an Austrian-Jewish painter and children’s book author and illustrator (and Sigmund Freud’s niece), led a short life: she died of an overdose of sleeping pills in 1930, at age 37, a few months after the death of her husband. 

“The Hare and the Well” (Left), “Fable of the Hares and the Frogs” (Middle), “Why the Hare Has No Tail” (Right) by Tom Seidmann-Freud, from the Public Domain Review

After Tom’s death, the Nazis came to power and attempted to destroy much of the art she had created as part of the purge of Jewish authors. Luckily, Tom’s family and art lovers were able to preserve much of her work. I think Nadia’s choice of this image critiques what might be ‘lost’ when rich, meaningful art is reduced to AI’s structural analysis. 

A second point, although not related exactly to the image, is the very thoughtful title, ‘Ways of Seeing’. ‘Ways of Seeing’ was a 1972 BBC television series and book created by John Berger. In the series, Berger criticised traditional Western cultural aesthetics by raising questions about the hidden ideologies in visual images, like the male gaze embedded in the female nude. He also examined what had changed in our ‘ways of seeing’ between the time the art was made and the present day. Side note: I think Berger would have been a huge fan of Better Images of AI. 

In a similar vein, Nadia has used Seidmann-Freud’s art to explore new parallels with technologies like AI that would not have been thought about at the time the work was created. In addition, Nadia’s work serves as an invitation to see and understand AI differently, and, like Berger, her work supports artists around the world.


The value of Nadia’s ‘better image of AI’ for copyright discussions

As Nadia writes in the description, Tom Seidmann-Freud’s illustration was derived from the Public Domain Review, where it is written that “Hares have been known to serve as messengers between the conscious world and the deeper warrens of the mind”. From my perspective, Nadia’s whole image acts as a messenger to convey information about the two differing modes of seeing between humans and AI models. 

We need better images of AI like this – especially for the purposes of copyright law, so we can have more meaningful and informed conversations about the nature of AI and its training processes. All too often, in conversations about AI and creativity, the images used depict humanoid robots painting on a canvas or hands snatching works.

‘AI art theft’ illustration by Nicholas Konrad (Left) and Copyright and AI image (Right)

These images create misleading visual metaphors that suggest that AI is directly engaging in creative acts in the same way that humans do. Additionally, visuals showing AI ‘stealing’ works reduce the complex legal and ethical debates around copyright, licensing, and data training to overly simplified, fear-evoking concepts.

Thus, better images of AI, like ‘Ways of Seeing’, can serve a vital role as a messenger representing the reality of how AI systems are developed. This paves the way for more constructive legal dialogues around intellectual property and AI that protect creators’ rights, while allowing for the development of AI technologies based on consented, legally acquired datasets.


About the author

Zoya Yasmine (she/her) is a current PhD student exploring the intersection between intellectual property, data, and medical AI. She grew up in Wales and in her spare time she enjoys playing tennis, puzzling, and watching TV (mostly Dragon’s Den and Made in Chelsea). Zoya is also a volunteer steward for Better Images of AI and part of many student societies including AI in Medicine, AI Ethics, Ethics in Mathematics & MedTech. 


This post was also kindly edited by Tristan Ferne – lead producer/researcher at BBC Research & Development.


If you want to contribute to our new blog series, ‘Through My Eyes’, by selecting an image from the Archival Images of AI collection and exploring what the image means to you, get in touch (info@betterimagesofai.org)

Explore other posts in the ‘Through My Eyes’ Series

‘Weaved Wires Weaving Me’ by Laura Martinez Agudelo

At the top, a digital collage featuring a computer monitor with circuit board patterns on the screen. A Navajo woman is seated on the edge of the screen, appearing to stitch or fix the digital landscape with her hands. Blue digital cables extend from the monitor, keyboard, and floor, connecting the image elements. Beneath, in black text: 'Weaved Wires Weaving Me' by Laura Martinez Agudelo. In the top right corner, there is a maroon text box with white text: 'through my eyes blog series'

Artist contributions to the Better Images of AI library have always served an important role to foster understanding and critical thinking about AI technologies and their context. Images facilitate deeper inquiries into the nature of AI, its history, and ethical, social, political and legal implications.

When artists create better images of AI, they often have to grapple with these narratives in their attempts to more realistically portray the technology and point towards its strengths and weaknesses. Furthermore, as artists freely share these images in our library, others can benefit from learning about the artists’ own motivations (provided in the image descriptions), and the images can also inspire users’ own musings.

In our blog series, “Through My Eyes”, some of our volunteer stewards take turns selecting an image from the Archival Images of AI collection. They delve into the artist’s creative process and explore what the image means to them—seeing it through their own eyes.

At the end of 2024, we released the Archival Images of AI Playbook with AIxDESIGN and the Netherlands Institute for Sound and Vision. The playbook explores how existing images – especially those from digital heritage collections – can help us craft more meaningful visual narratives about AI. Through various image-makers’ own attempts to make better images of AI, the playbook shares numerous techniques which can teach you how to transform existing images into new creations. 

Here, Laura Martinez Agudelo shares her personal reflections on ‘Weaving Wires 1’ – Hanna Barakat’s own better image of AI, created for the playbook. Laura comments on how the image uncovers the hidden labor of the Navajo women who assembled microchips in Silicon Valley – inviting us to confront the oppressive cultural conditions of conception, creation, and mediation in the technology industry’s approach to innovation.


Digital collage featuring a computer monitor with circuit board patterns on the screen. A Navajo woman is seated on the edge of the screen, appearing to stitch or fix the digital landscape with their hands. Blue digital cables extend from the monitor, keyboard, and floor, connecting the image elements.

Hanna Barakat + AIxDESIGN & Archival Images of AI / Better Images of AI / Weaving Wires 1 / CC-BY 4.0


Cables came out and crossed my mind 

Weaving wires 1 by Hanna Barakat is about hidden histories of computing labor. As explained in the image’s description, her digital collage is inspired by the history of computing in 1960s Silicon Valley, where the Fairchild Semiconductor company employed Navajo women for intensive tasks such as assembling microchips. This work (actually with their hands and their digits) was a way for these women to provide for their families in an economically marginalized context.

At the time, this labor was framed as a legitimate transfer of cultural weaving practices into technological innovation. This legitimation appears to be an illusion: it converges the supposedly unchanging character of weaving as heritage with the constant renewal of global industry, while presupposing both the non-recognition of Navajo women’s labor and a techno-cultural, gendered transaction. Their work is diluted in meaning and action, and overlooked in the history of computing.

In Weaving wires 1, we see a computer monitor with circuit board patterns on the screen and a juxtaposed woven design. Two potential purposes are in dialogue in the woman sitting at the edge of the screen, suspended against a white background: is she stitching, fixing, or both, as she weaves and prolongs the wires? These blue wires extend from the monitor, the keyboard, and beyond. The woman seems to be modifying or constructing a digital landscape with her own hands, reminding us of the places these materialities come from and the memories they connect to.

Since my mother tongue is Spanish, a distant memory of the word “Navajo” and the image of weaving women appeared. “Navajo” is a Spanish adaptation of the Tewa Pueblo word navahu’u, which means “farm fields in the valley”. The Navajo people call themselves Diné, literally meaning “The People”. At this point, I began to think about the specific socio-spatial conditions of Navajo/Diné women at that time and their misrepresentation today. When I first saw the collage, I felt these cables crossing my own screen. Many threads began to unravel in my head in the form of question marks. I wondered how older and younger generations of Navajo/Diné women have experienced (and in other ways inherited) this hidden labor associated with the transformation of the valley and their community. The image works as a visual opposition to the geographic and social identification of Silicon Valley as presented, for example, in the media. Now these wires expand the materiality to reveal its history. Hanna creatively represents the connections between the key elements of this theme. Let’s explore some of her artistic choices.

Recoded textures as visual extensions 

Hanna Barakat is a researcher, artist, and activist who studies emerging technologies and their social impact. I discovered her work thanks to the Archival Images of AI project (Launch & Playtest). Weaving wires 1 is part of a larger project in which Hanna proposes a creative dialogue between textures and technology. She plays with intersections of visual forms to raise awareness of the social, racial, and gender issues behind technologies. Weaving wires 1 reconnected me with the importance of questioning the extractive human and material conditions in which technological devices are produced.

As a lecturer in (digital) communication, I am often looking for visual support on topics such as the socio-economic context in which the Internet appeared, the evolution of the Web, the history of computer culture, and socio-technical theories and examples for studying technological innovation, its problems, and its ethical challenges. The visual narratives are mostly uniform, and the graphic references are also gendered. Women’s work is usually misrepresented (no, those women in front of the big computers are not just models or assistants; they have full names, and they are the official programmers and coders. Take a look at the work of Kathy/Kathryn Kleiman… unexplored archives are waiting for us!).

When I visually interacted with Weaving wires 1 and read its source of inspiration (I actually used and referenced the image in one of my lectures), I realized once again the need to make visible the herstory (a term coined in the 1960s as a feminist critique of conventional historiography) of technological innovation. Sometimes, in the rush of life in general (and of specific moments, like preparing a lecture, in my case), we forget to take the time and distance to convene other ways of exploring and sharing knowledge with students, and to recreate how we approach topics essential to a better understanding of the socio-technical metamorphosis of our society.

Going beyond assumed landmarks

In order to understand hidden social realities, we might question our own landmarks. For me, “landmarks” can be both consciously (culturally) confirmed ideas and visual or physical evidence of boundaries or limits in our (representation of) reality. Hanna’s image offers an insight into the importance of going beyond some established landmarks. This idea, as a result of the artistic experience, raises questions such as: where did the devices we use every day come from, and whose labour created them? In what other forms are these conditions extended through time and space, and for whom? You might have some answers, references, examples, or even names coming to mind right now. 

In Weaving wires 1, and in Hanna’s artistic contribution more broadly, several essential points are raised. Some of them are often missing in the discourses and practices of emerging technologies like AI systems: the recognition of the human labor that supports the material realities of technological tools, the intersection of race and gender, the roots of digital culture and industry, and the need to explore new visual narratives that reflect technology’s real conditions of production.

Fix, reconnect and reimagine

Hanna uses digital collage, along with techniques such as juxtaposition, overlaying, and distortion – she explains her approach with examples in her artist log. She explores ways to honor the stories she conjures up by rejecting colonial discourses. For me, in the case of Weaving wires 1, these wires connect to our personal experiences with technological devices and to memories of the digital transformation of our society. They could also represent the need to imagine and construct together, as citizens, more inclusive (technological) futures.

A digital landscape is somewhere out there, or right in front of us. Weaving wires 1 will be extended by Hanna in Weaving wires 2, which questions the meaning of the valley landscape itself and its borders. For now, some other transversal questions appear (still inspired by her first image) about deterministic approaches to studying data-driven technology and its intersection with society: what fragments or temporalities of our past are we willing and able to deconstruct? Which ones filter the digital space and call for other ways of understanding? How can we reconnect with the basic needs of our world if different forms of violence (physical and symbolic) – in this case, in human labor – are not only hidden, but avoided, neglected, or unrepresented in the socio-digital imaginary?

It is a necessary discussion, facing our collective memory and the concrete experiences in between. Weaving wires 1 invites us to confront the oppressive cultural conditions of conception, creation, and mediation in the technology industry’s approach to innovation. With this image, Hanna brings us a meaningful contribution: she deconstructs simplistic assumptions and visual perspectives to actually create ‘better images of AI’!


About the author

Laura Martinez Agudelo is a Temporary Teaching and Research Assistant (ATER) at the University Marie & Louis Pasteur – ELLIADD Laboratory. She holds a PhD in Information and Communication Sciences. Her research interests include socio-technical devices and (digital) mediations in the city, visual methods and modes of transgression and memory in (urban) art.   

This post was also kindly edited by Tristan Ferne – lead producer/researcher at BBC Research & Development.

