Images of AI – Between Fiction and Function

This image shows an abstract microscopic photograph of a Graphics Processing Unit resembling a satellite image of a big city. The image has been overlaid with a bright blue filter. In the middle of the image is the text, 'Images of AI - Between Fiction and Function' in a white text box with black text. Beneath, in a maroon text box is the author's name in white text.

“The currently pervasive images of AI make us look somewhere, at the cost of somewhere else.”

In this blog post, Dominik Vrabič Dežman provides a summary of his recent research article, ‘Promising the future, encoding the past: AI hype and public media imagery’.

Dominik sheds light on the importance of the Better Images of AI library, which fosters a more informed, nuanced public understanding of AI by breaking the stronghold of the “deep blue sublime” aesthetic with more diverse and meaningful representations of AI.

Dominik also draws attention to the algorithms which perpetuate the dominance of familiar and sensationalist visuals and calls for movements which reshape media systems to make better images of AI more visible in public discourse.

The full paper is published in the AI and Ethics journal’s special edition on ‘The Ethical Implications of AI Hype’, a collection edited by We and AI.


AI promises innovation, yet its imagery remains trapped in the past. Deep-blue, sci-fi-inflected visuals have flooded public media, saturating our collective imagination with glowing, retro-futuristic interfaces and humanoid robots. These “deep blue sublime” [1] images, which draw on a steady palette of outdated pop-cultural tropes and clichés, do not merely depict AI — they shape how we think about it, reinforcing grand narratives of intelligence, automation, and inevitability [2]. It takes little to acknowledge that the AI discussed in public media is far from the ethereal, seamless force these visuals suggest. Instead, the term generally refers to a sprawling global technological enterprise, entangled with labor exploitation, ecological extraction, and financial speculation [3–10] — realities conspicuously absent from its dominant public-facing representations.

The widespread rise of these images has unfolded against intensifying “AI hype” [11], which has been compared to historical speculative investment bubbles [12,13]. In my recent research [1,14,15], I join a growing body of work on images of AI [16–21] to explore how these images operate at the intersection of aesthetics and politics. My overarching ambition has been to contribute to the literature an integrated account of the normative and empirical dimensions of public images of AI. I’ve explored how these images matter politically and ethically, inseparable from the pathways they take in real time, echoing throughout public digital media and wallpapering it in seen-before denominations of blue monochrome.

Rather than measuring the direct impact of AI imagery on public awareness, my focus has been on unpacking the structural forces that produce and sustain these images. What mechanisms dictate their circulation? Whose interests do they serve? How might we imagine alternatives? My critique targets the visual framing of AI in mainstream public media — glowing, abstract, blue-tinted veneers seen daily by millions on search engines, institutional websites, and in reports on AI innovation. These images do not merely aestheticize AI; they foreclose more grounded, critical, and open-ended ways of understanding its presence in the world.


The Intentional Mindlessness of AI Images

This image shows a Google Images search for 'artificial intelligence'. The result is a collection of images featuring human brains, the colour blue, and white humanoid robots.

Google Images search results for “artificial intelligence”. January 14, 2025. Search conducted from an anonymised instance of Safari in Amsterdam, Netherlands.

Recognizing the ethico-political stakes of AI imagery begins with acknowledging that what we spend our time looking at, or not looking beyond, matters politically and ethically. The currently pervasive images of AI make us look somewhere, at the cost of somewhere else. The sheer volume of these images and their dominance in public media slot public perception into repetitive grooves dominated by human-like robots, glowing blue interfaces, and infinite expanses of deep-blue intergalactic space. By monopolizing the sensory field through which AI is perceived, these images reinforce sci-fi clichés and, more importantly, obscure the material realities — human labor, planetary resources, material infrastructures, and economic speculation — that drive AI development [22,23].

In a sense, images of AI could be read as operational [24–27], enlisted in service of an operation which requires them to look, and function, the way they do. This might involve their role in securing future-facing AI narratives, shaping public sentiment towards acceptance of AI innovation, and supporting big tech agendas for AI deployment and adoption. The operational nature of AI imagery means that these images cannot be studied purely as aesthetic artifacts or autonomous works of aesthetic production. Instead, these images are minor actors, moving through technical, cultural, and political infrastructures. In doing so, individual images do not say or do much per se – they are always already intertwined in the circuits of their economic uptake, circulation, and currency; not at the hands of the digital labourers who created them, but of the human and algorithmic actors that keep them in circulation.

Simultaneously, the endurance of these images is less the result of intention than of a more mindless inertia. It quickly becomes clear that these images reflect neither public attitudes nor those of their makers: anonymous stock-image producers, digital workers mostly located in the global South [28]. They might reflect the views of the few journalistic or editorial actors who choose the images for their reporting [29], or who are simply looking to increase audience engagement through sensationalist imagery [30]. Ultimately, their visibility is in the hands of algorithms that reward more of the same familiar visuals over time [1,31], and of stock-image platforms and search engines, which maintain close ties with media conglomerates [32], which, in turn, have long been entangled with big tech [33]. The stock images are the detritus of a digital economy that rewards repetition over revelation: endlessly recycled, upscaled, and cropped “poor images” [34], travelling across cyberspace until they are pulled back into circulation by the very systems they help sustain [15,28].


AI as Ouroboros: Machinic Loops and Recursive Aesthetics

As algorithms increasingly dictate who sees what in the public sphere [35–37], they dictate not only what is seen but also what is repeated. Images of AI become ensnared in algorithmic loops, which sediment the same visuality over time on various news feeds and search engines [15]. This process has intensified with the rise of generative AI: as AI-generated content proliferates, it feeds on itself—trained on past outputs, generating ever more of the same. This “closing machinic loop” [15,28] perpetuates aesthetic homogeneity, reinforcing dominant visual norms rather than challenging them. The widespread adoption of AI-generated stock images further narrows the space for disruptive, diverse, and critical representations of AI, making it increasingly difficult for alternative images to surface in public visibility.

The image shows a humanoid figure with a glowing, transparent brain standing in a digital landscape. The figure's body is composed of metallic and biomechanical components, illuminated by vibrant blue and pink lights. The background features a high-tech grid with data streams, holographic interfaces, and circuitry patterns.

ChatGPT 4o output for query: “Produce an image of ‘Artificial Intelligence’”. 14 January 2025.


Straddling the Duality of AI Imagery

In critically examining AI imagery, it is easy to veer into one of two deterministic extremes — both of which risk oversimplifying how these images function in shaping public discourse:

1. Overemphasizing Normative Power:

This approach risks treating AI images as if they have autonomous agency, ignoring the broader systems that shape their circulation. AI images appear as sublime artifacts—self-contained objects for contemplation, removed from their daily life as fleeting passengers in the digital media image economy. While images certainly exert influence in shaping socio-technical imaginaries [38,39], they operate within media platforms, economic structures, and algorithmic systems that constrain their impact.

2. Overemphasizing Materiality:

This perspective reduces AI to mere infrastructure, treating images as passive reflections of technological and industrial processes rather than active participants in shaping public perception. From this view, AI’s images are dismissed as epiphenomenal, secondary to the “real” mechanisms of AI’s production: cloud computing, data centers, supply chains, and extractive labor. In reality, AI has never been purely empirical; cultural production has been integral to AI research and development from the outset, with speculative visions long driving policy, funding, and public sentiment [40].

Images of AI are neither neutral nor inert. The diminishing potency of glowing, sci-fi-inflected AI imagery as a stand-in for AI in public media suggests a growing fatigue with its clichés, and cannot be untangled from a general discomfort with AI’s utopian framing, as media discourse pivots toward concerns over opacity, power asymmetries, and scandals in its implementation [29,41]. A robust critique of the cultural entanglements of AI requires addressing both its normative commitments (promises made to the public) and its empirical components (data, resources, labour) [6].

Toward Better Images: Literal Media & Media Literacy

Given the embeddedness of AI images within broader machinations of power, the ethics of AI images are deeply tied to public understanding and awareness of such processes. Cultivating a more informed, critical public — through exposure to diverse and meaningful representations of AI — is essential to breaking the stronghold of the deep blue sublime.

At the individual level, media literacy equips the public to critically engage with AI imagery [1,42,43]. By learning to question the visual veneers, people can move beyond passive consumption of the pervasive, reductive tropes that dominate AI discourse. Better images recalibrate public perception, offering clearer insights into what AI is, how it functions, and its societal impact. The kind of images produced is equally important. Better images would highlight named infrastructural actors, document AI research and development, and/or diversify the visual associations available to us, loosening the visual stronghold of the currently dominant tropes.

This greatly raises the bar for news outlets in producing original imagery of didactic value, which is where open-source repositories such as Better Images of AI serve as invaluable resources. This points to the urgency of reshaping media systems: making better images readily available to creators and media outlets helps them move away from generic visuals toward educational, thought-provoking imagery. However, creating better visuals is not enough; they must become embedded in media infrastructure to become the norm rather than the exception.

Given the above, the role of algorithms cannot be ignored. Algorithms drive which images are seen, shared, and prioritized in public discourse. Without addressing these mechanisms, even the most promising alternatives risk being drowned out by the familiar clichés. Rethinking these pathways is essential to ensure that improved representations can disrupt the existing visual narrative of AI.

Efforts to create better AI imagery are only as effective as their ability to reach the public eye and disrupt the dominance of the “deep blue sublime” aesthetic in public media. This requires systemic action—not merely producing different images in isolation, but rethinking the networks and mechanisms through which these images are circulated. To make a meaningful impact, we must address both the sources of production and the pathways of dissemination. By expanding the ways we show, think about, and engage with AI, we create opportunities for political and cultural shifts. A change in one way of sensing AI (writing / showing / thinking / speaking) invariably loosens gaps for a change in others.

Seeing AI ≠ Believing AI

AI is not just a technical system; it is a speculative, investment-driven project, a contest over public consensus, staged by a select few to cement its inevitability [44]. The outcome is a visual regime that detaches AI’s media portrayal from its material reality: a territorial, inequitable, resource-intensive, and financially speculative global enterprise.

Images of AI come from somewhere (they are products of poorly-paid digital labour, served through algorithmically-ranked feeds), do something (torque what is at-hand for us to imagine with, directing attention away from AI’s pernicious impacts and its growing inequalities), and go somewhere (repeat themselves ad nauseam through tightening machinic loops, numbing rather than informing; [16]).

The images have left few fooled, and represent a missed opportunity for public sensitisation and understanding of AI. Crucially, bad images do not inherently disclose bad tech, nor do good images promote good tech; the widespread adoption of better images of AI in public media would not automatically lead to socially good or desirable understandings, engagements, or developments of AI. That remains an issue of the current political economy of AI, whose stakeholders only partially determine this image economy. Better images alone cannot solve this, but they might open slivers of insight into AI’s global “arms race.”

As it stands, different visual regimes struggle to be born. Fostering media literacy, demanding critical representations, and disrupting the algorithmic stranglehold on AI imagery are acts of resistance. If AI is here to stay, then so too must be our insistence on seeing it otherwise — beyond the sublime spectacle, beyond inevitability, toward a more porous and open future.

About the author

Dominik Vrabič Dežman (he/him) is an information designer and media philosopher. He is currently at the Departments of Media Studies and Philosophy at the University of Amsterdam. Dominik’s research interests include public narratives and imaginaries of AI, politics and ethics of UX/UI, media studies, visual communication and digital product design.

References

1. Vrabič Dežman, D.: Defining the Deep Blue Sublime. SETUP (2023). https://web.archive.org/web/20230520222936/https://deepbluesublime.tech/

2. Burrell, J.: Artificial Intelligence and the Ever-Receding Horizon of the Future. Tech Policy Press (2023). https://techpolicy.press/artificial-intelligence-and-the-ever-receding-horizon-of-the-future/

3. Kponyo, J.J., Fosu, D.M., Owusu, F.E.B., Ali, M.I., Ahiamadzor, M.M.: Techno-neocolonialism: an emerging risk in the artificial intelligence revolution. TraHs (2024). https://doi.org/10.25965/trahs.6382

4. Leslie, D., Perini, A.M.: Future Shock: Generative AI and the International AI Policy and Governance Crisis. Harvard Data Science Review (2024). https://doi.org/10.1162/99608f92.88b4cc98

5. Regilme, S.S.F.: Artificial Intelligence Colonialism: Environmental Damage, Labor Exploitation, and Human Rights Crises in the Global South. SAIS Review of International Affairs. 44:75–92 (2024). https://doi.org/10.1353/sais.2024.a950958

6. Crawford, K.: The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press (2021). https://www.degruyter.com/isbn/9780300252392

7. Sloane, M.: Controversies, contradiction, and “participation” in AI. Big Data & Society. 11 (2024). https://doi.org/10.1177/20539517241235862

8. Rehak, R.: On the (im)possibility of sustainable artificial intelligence. Internet Policy Review (2024). https://policyreview.info/articles/news/impossibility-sustainable-artificial-intelligence/1804

9. Wierman, A., Ren, S.: The Uneven Distribution of AI’s Environmental Impacts. Harvard Business Review (2024). https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts

10. What we don’t talk about when we talk about AI. Joseph Rowntree Foundation (2024). https://www.jrf.org.uk/ai-for-public-good/what-we-dont-talk-about-when-we-talk-about-ai

11. Duarte, T., Barrow, N., Bakayeva, M., Smith, P.: Editorial: The ethical implications of AI hype. AI Ethics. 4:649–51 (2024). https://doi.org/10.1007/s43681-024-00539-x

12. Singh, A.: The AI Bubble. Social Science Encyclopedia (2024). https://www.socialscience.international/the-ai-bubble

13. Floridi, L.: Why the AI Hype is Another Tech Bubble. Philos Technol. 37:128 (2024). https://doi.org/10.1007/s13347-024-00817-w

14. Vrabič Dežman, D.: Interrogating the Deep Blue Sublime: Images of Artificial Intelligence in Public Media. In: Cetinic, E., Del Negueruela Castillo, D. (eds.) From Hype to Reality: Artificial Intelligence in the Study of Art and Culture. HumanitiesConnect, Rome/Munich (2024). https://doi.org/10.48431/hsah.0307

15. Vrabič Dežman, D.: Promising the future, encoding the past: AI hype and public media imagery. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00474-x

16. Romele, A.: Images of Artificial Intelligence: a Blind Spot in AI Ethics. Philos Technol. 35:4 (2022). https://doi.org/10.1007/s13347-022-00498-3

17. Singler, B.: The AI Creation Meme: A Case Study of the New Visibility of Religion in Artificial Intelligence Discourse. Religions. 11:253 (2020). https://doi.org/10.3390/rel11050253

18. Steenson, M.W.: A.I. Needs New Clichés. Medium (2018). https://web.archive.org/web/20230602121744/https://medium.com/s/story/ai-needs-new-clich%C3%A9s-ed0d6adb8cbb

19. Hermann, I.: Beware of fictional AI narratives. Nat Mach Intell. 2:654 (2020). https://doi.org/10.1038/s42256-020-00256-0

20. Cave, S., Dihal, K.: The Whiteness of AI. Philos Technol. 33:685–703 (2020). https://doi.org/10.1007/s13347-020-00415-6

21. Mhlambi, S.: God in the image of white men: Creation myths, power asymmetries and AI. Sabelo Mhlambi (2019). https://web.archive.org/web/20211026024022/https://sabelo.mhlambi.com/2019/03/29/God-in-the-image-of-white-men

22. How to invest in AI’s next phase. J.P. Morgan Private Bank U.S. Accessed 2025 Feb 18. https://privatebank.jpmorgan.com/nam/en/insights/markets-and-investing/ideas-and-insights/how-to-invest-in-ais-next-phase

23. Jensen, G., Moriarty, J.: Are We on the Brink of an AI Investment Arms Race? Bridgewater (2024). https://www.bridgewater.com/research-and-insights/are-we-on-the-brink-of-an-ai-investment-arms-race

24. Paglen, T.: Operational Images. e-flux journal. 59:3 (2014).

25. Pantenburg, V.: Working images: Harun Farocki and the operational image. In: Image Operations. Manchester University Press, pp. 49–62 (2016).

26. Parikka, J.: Operational Images: Between Light and Data. e-flux journal (2023). https://web.archive.org/web/20230530050701/https://www.e-flux.com/journal/133/515812/operational-images-between-light-and-data/

27. Celis Bueno, C.: Harun Farocki’s Asignifying Images. tripleC. 15:740–54 (2017). https://doi.org/10.31269/triplec.v15i2.874

28. Romele, A., Severo, M.: Microstock images of artificial intelligence: How AI creates its own conditions of possibility. Convergence: The International Journal of Research into New Media Technologies. 29:1226–42 (2023). https://doi.org/10.1177/13548565231199982

29. Moran, R.E., Shaikh, S.J.: Robots in the News and Newsrooms: Unpacking Meta-Journalistic Discourse on the Use of Artificial Intelligence in Journalism. Digital Journalism. 10:1756–74 (2022). https://doi.org/10.1080/21670811.2022.2085129

30. De Dios Santos, J.: On the sensationalism of artificial intelligence news. KDnuggets (2019). https://www.kdnuggets.com/on-the-sensationalism-of-artificial-intelligence-news.html/

31. Rogers, R.: Aestheticizing Google critique: A 20-year retrospective. Big Data & Society. 5 (2018). https://doi.org/10.1177/2053951718768626

32. Kelly, J.: When news orgs turn to stock imagery: An ethics Q & A with Mark E. Johnson. Center for Journalism Ethics (2019). https://ethics.journalism.wisc.edu/2019/04/09/when-news-orgs-turn-to-stock-imagery-an-ethics-q-a-with-mark-e-johnson/

33. Papaevangelou, C.: Funding Intermediaries: Google and Facebook’s Strategy to Capture Journalism. Digital Journalism. 1–22 (2023). https://doi.org/10.1080/21670811.2022.2155206

34. Steyerl, H.: In Defense of the Poor Image. e-flux journal (2009). https://www.e-flux.com/journal/10/61362/in-defense-of-the-poor-image/

35. Bucher, T.: Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society. 14:1164–80 (2012). https://doi.org/10.1177/1461444812440159

36. Bucher, T.: If…Then: Algorithmic Power and Politics. Oxford University Press (2018).

37. Gillespie, T.: Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press, New Haven (2018).

38. Jasanoff, S., Kim, S.-H. (eds.): Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. University of Chicago Press, Chicago, IL (2015). https://press.uchicago.edu/ucp/books/book/chicago/D/bo20836025.html

39. O’Neill, J.: Social Imaginaries: An Overview. In: Peters, M.A. (ed.) Encyclopedia of Educational Philosophy and Theory. Springer Singapore, pp. 1–6 (2016). https://doi.org/10.1007/978-981-287-532-7_379-1

40. Law, H.: Computer vision: AI imaginaries and the Massachusetts Institute of Technology. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00389-z

41. Nguyen, D., Hekman, E.: The news framing of artificial intelligence: a critical exploration of how media discourses make sense of automation. AI & Soc. 39:437–51 (2024). https://doi.org/10.1007/s00146-022-01511-1

42. Woo, L.J., Henriksen, D., Mishra, P.: Literacy as a Technology: a Conversation with Kyle Jensen about AI, Writing and More. TechTrends. 67:767–73 (2023). https://doi.org/10.1007/s11528-023-00888-0

43. Kvåle, G.: Critical literacy and digital stock images. Nordic Journal of Digital Literacy. 18:173–85 (2023). https://doi.org/10.18261/njdl.18.3.4

44. Tacheva, Z., Appedu, S., Wright, M.: AI as “Unstoppable” and Other Inevitability Narratives in Tech: On the Entanglement of Industry, Ideology, and Our Collective Futures. AoIR Selected Papers of Internet Research (2024). https://doi.org/20250206083707000

Nel blu dipinto di blu; or the “anaesthetics” of stock images of AI

Most of the criticism concerning stock images of AI focuses on their clichéd and kitschy subjects. But what if a major ethical problem lay not in the subjects but in the background? What if a major issue were, for instance, the abundant use of the color blue in the background of these images? This is the thesis we would like to discuss in detail in this post.

Stock images are usually ignored by researchers because they are considered the “wallpaper” of our consumer culture. Yet, they are everywhere. Stock images of emerging technologies such as AI (but also quantum computing, cloud computing, blockchain, etc.) are widely used, for example, in science communication and marketing contexts: conference announcements, book covers, advertisements for university master’s programmes, etc. There are at least two reasons for us to take these images seriously.

The first reason is “ethical-political” (Romele, forthcoming). It is interesting to note that even the most careful AI ethicists pay little attention to the way AI is represented and communicated, both in scientific and popular contexts. For instance, a volume of more than 800 pages like the Oxford Handbook of Ethics of AI (Dubber, Pasquale, and Das 2020) does not contain any chapter dedicated to the representation and communication, textual or visual, of AI; however, the volume’s cover image is taken from iStock, a company owned by Getty Images. Its subject is a classic androgynous face made of “digital particles” that become a printed circuit board. The most interesting thing about the image, however, is not its subject (or figure, as we say in art history) but its background, which is blue. I take this focus on the background rather than the figure from the French philosopher Georges Didi-Huberman (2005) and, in particular, from his analysis of Fra Angelico’s painting.

Fresco “Annunciation” by Fra Angelico in San Marco, Florence (Public domain, via Wikimedia Commons)

Didi-Huberman devotes some admirable pages to Fra Angelico’s use of white in his fresco of the Annunciation painted in 1440 in the convent of San Marco in Florence. This white, present between the Madonna and the Archangel Gabriel, spreads not only throughout the entire painting but also throughout the cell in which the fresco was painted. Didi-Huberman’s thesis is that this white is not a lack, that is, an absence of color and detail. It is rather the presence of something that, by essence, cannot be given as a pure presence, but only as a “trace” or “symptom”. This thing is none other than the mystery of the Incarnation. Fra Angelico’s whiteness is not to be understood as something that invites absence of thought. It is rather a sign that “gives rise to thought,” just as the Annunciation was understood in scholastic philosophy not as a unique and incomprehensible event, but as a flowering of meanings, memories, and prophecies that concern everything from the creation of Adam to the end of time, from the simple form of the letter M (Mary’s initial) to the prodigious construction of the heavenly hierarchies.

A glimmering square mosaic with dark blue and white colors consisting of thousands of small pictures

The image above collects about 7,500 images resulting from a search for “Artificial Intelligence” in Shutterstock. It is an interesting image because, with its “distant viewing,” it allows the background, rather than the figure, to come to the fore. In particular, the color of the background emerges. Two colors seem to dominate these images: white and blue. Our thesis is that these two colors have a diametrically opposed effect to Fra Angelico’s white. If Fra Angelico’s white is something that “gives rise to thought,” the white and blue in the stock images of AI have the opposite effect.

Consider the history of blue as told by French historian Michel Pastoureau (2001). He distinguishes between several phases of this history: a first phase, up to the 12th century, in which the color was almost completely absent; an explosion of blue between the 12th and 13th centuries (consider the stained glass windows of many Gothic cathedrals); a moral and noble phase of blue (in which it became the color of the dress of Mary and the kings of France); and finally, a popularization of blue, starting with Young Werther and Madame Bovary and ending with the Levi’s blue jeans industry and the company IBM, known as Big Blue. To this day, blue is the statistically preferred color in the world. According to Pastoureau, the success of blue is not the expression of some impulse, as could be the case with red. Instead, one gets the impression that blue is loved because it is peaceful, calming, and anesthetizing. It is no coincidence that blue is the color used by supranational institutions such as the UN, UNESCO, and the European Community, as well as Facebook and Meta, of course. In Italy, the police force is blue, which is why policemen are disdainfully called “Smurfs”.

If all this is true, then the problem with stock AI images is that, instead of provoking debate and “disagreement,” they lead the viewer into forms of acceptance and resignation. Rather than equating experts and non-experts, encouraging the latter to influence innovation processes with their opinions, they are “screen images”—following the etymology of the word “screen,” which means “to cover, cut, and separate”. The notion of “disagreement” or “dissensus” (mésentente in French) is taken from another French philosopher, Jacques Rancière (2004), according to whom disagreement is much more radical than simple “misunderstanding (malentendu)” or “lack of knowledge (méconnaissance)”. These, as the words themselves indicate, are just failures of mutual understanding and knowledge that, if treated in the right way, can be overcome. Interestingly, much of the literature interprets science communication precisely as a way to overcome misunderstanding and lack of knowledge. Instead, we propose an agonistic model of science communication and, in particular, of the use of images in science communication. This means that these images should not calm down, but rather promote the flourishing of an agonistic conflict (i.e., a conflict that acknowledges the validity of the opposing positions but does not want to find a definitive and peaceful solution to the conflict itself). The ethical-political problem with AI stock images, whether they are used in science communication contexts or popular contexts, is then not the fact that they do not represent the technologies themselves. If anything, the problem is that while they focus on expectations and imaginaries, they do not promote individual or collective imaginative variations, but rather calm and anesthetize them.

This brings me to my second reason for talking about stock images of AI, which is “aesthetic” in nature. The term “aesthetics” should be understood here in an etymological sense. Sure, it is a given that these images, depicting half-flesh, half-circuit brains, variants of Michelangelo’s The Creation of Adam in human-robot version, etc., are aesthetically ugly and kitschy. But here I want to talk about aesthetics as a “theory of perception”—as suggested by the Greek word aisthesis, which means precisely “perception”. In fact, we think there is a big problem with perception today, particularly visual perception, related to AI. In short, I mean that AI is objectively difficult to depict and hence make visible. This explains, in our opinion, the proliferation of stock images.

We think there are three possible ways to depict AI (which is mostly synonymous with machine learning) today: (1) the first is by means of the algorithm, which in turn can be embedded in different forms, such as computer code or a decision tree. However, this is an unsatisfactory solution. First, because it is not understandable to non-experts. Second, because representing the algorithm does not mean representing AI: it would be like saying that representing the brain means representing intelligence; (2) the second way is by means of the technologies in which AI is embedded: drones, autonomous vehicles, humanoid robots, etc. But representing the technology is not, of course, representing AI: nothing actually tells us that this technology is really AI-driven and not just an empty box; (3) finally, the third way consists of giving up representing the “thing itself” and devoting ourselves instead to expectations, or imaginaries. This is where we would put most of the stock images and other popular representations of AI.

Now, there is a tendency among researchers to judge (ontologically, ethically, and aesthetically) images of AI (and of technologies in general) according to whether they represent the “thing itself” or not. Hence, there is a tendency to prefer (1) to (2) and (2) to (3). An image is all the more “true,” “good,” and “aesthetically appreciable” the closer it is (and therefore the more faithful it is) to the thing it is meant to represent. This is what we call the “referentialist bias”. But referentialism, precisely for the reasons given above, works poorly in the case of AI images, because none of these images can really come close to, or be faithful to, AI. Our idea is not to condemn all AI images, but rather to save them, precisely by giving up referentialism. If there is an aesthetics (which, of course, is also an ethics and an ontology) of AI images, its goal is not to depict the technology itself, namely AI. If anything, it is to “give rise to thought,” through depiction, about the “conditions of possibility” of AI, i.e., its techno-scientific, social-economic, and linguistic-cultural implications.

Alongside the theoretical work discussed above, we also conduct empirical research on these images. We showed earlier an image that is the result of a quali-quantitative analysis we conducted on a large dataset of stock images. In this work, we first used the web crawler Shutterscrape, which allowed us to download massive numbers of images and videos from Shutterstock; we obtained about 7,500 stock images for the search “Artificial Intelligence”. Second, we used PixPlot, a tool developed by Yale’s DH Lab.5 The result is accessible through the link in the footnote.6 The map is navigable: you can select one of the ten clusters created by the algorithm and, for each of them, zoom in and out and choose single images. We also manually labeled the clusters with the following names: (1) background, (2) robots, (3) brains, (4) faces and profiles, (5) labs and cities, (6) line art, (7) Illustrator, (8) people, (9) fragments, and (10) diagrams.

On a black background, thousands of small pixel-like images float in a pattern resembling the shape of a world map
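The cluster-then-label-manually workflow described above can be sketched in miniature. This is an illustrative sketch only, not the authors’ actual pipeline: PixPlot embeds each image with a neural network and projects the embeddings into two dimensions, whereas here hypothetical toy 2-D “embedding” vectors and a naive k-means stand in for that machinery, and the cluster labels are assigned by hand, mirroring the manual labeling step.

```python
import random

def dist2(a, b):
    # Squared Euclidean distance between two equal-length vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    # Component-wise mean of a non-empty list of vectors.
    return tuple(sum(xs) / len(pts) for xs in zip(*pts))

def kmeans(points, k, iters=20, seed=0):
    # Naive k-means: pick k random starting centroids, then repeatedly
    # assign each point to its nearest centroid and recompute centroids.
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[i].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# Two toy groups of image "embeddings" (e.g. brain-themed vs robot-themed
# stock images would land in different regions of the embedding space).
points = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25),
          (0.9, 0.8), (0.85, 0.9), (0.95, 0.85)]
clusters = kmeans(points, k=2)

# The algorithm only groups the images; the researcher then inspects
# each cluster and assigns a label by hand, e.g.:
labels = {0: "brains", 1: "robots"}
```

The division of labor is the point: the clustering is automatic, but the interpretive act of naming the clusters (“brains”, “robots”, “line art”, etc.) remains a human, qualitative judgment.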

Finally, there is another little project of which we are particularly fond: the Instagram profile ugly.ai.7 Inspired by existing initiatives such as the NotMyRobot!8 Twitter profile and blog, ugly.ai monitors the use of AI stock images in science communication and marketing contexts. The project also aims to raise awareness among both stakeholders and the public of the problems related to depicting AI (and other emerging technologies) with stock imagery.

In conclusion, we would like to advance our thesis: that of an “anaesthetics” of AI stock images. The term “anaesthetics” is a combination of “aesthetics” and “anesthetics.” By this, we mean that the effect of AI stock images, instead of promoting access (both perceptual and intellectual) and forms of agonism in the debate about AI, is precisely the opposite one of “putting viewers to sleep,” fostering forms of resignation in the general public. Just as Fra Angelico’s white expanded throughout the fresco and, beyond the fresco, into the cell, so it is possible to think that the anaesthetizing effects of blue extend to the viewing subjects, as well as to the entire media and communication environment in which these AI images proliferate.

Footnotes

  1. https://www.instagram.com/p/CPH_Iwmr216/. Also visible at https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780190067397.001.0001/oxfordhb-9780190067397.
  2. The expression is borrowed from Ricoeur (1967).
  3. On the agonistic model, inspired by Chantal Mouffe’s philosophy, in science and technology, see Popa, Blok, and Wesselink (2020).
  4. Needless to say, this is an idealistic distinction, in the sense that these levels mostly overlap: algorithm code is colored, drones fly over green fields and blue skies that suggest hope and a future for humanity, and stock images often refer, albeit vaguely, to existing technologies (touch screens, networks of neurons, etc.).
  5.  https://github.com/YaleDHLab/pix-plot
  6. https://rodighiero.github.io/AI-Imaginary/# Another empirical work, carried out with other colleagues (Marta Severo, Paris Nanterre University; Olivier Buisson, Inathèque; and Claude Mussou, Inathèque), consisted of using a tool called Snoop, developed by the French Audiovisual Archive (INA) and the French National Institute for Research in Digital Science and Technology (INRIA), and also based on an AI algorithm. While with PixPlot the choice of the clusters is automatic, with Snoop the classes are decided by the researcher and the class members are found by the algorithm. With Snoop, we were able to fine-tune PixPlot’s classes and create new ones. For instance, we created the class “white robots” and, within this class, the two subclasses of female and infantile robots.
  7. https://www.instagram.com/ugly.ai/
  8. https://notmyrobot.home.blog/

References

Dubber, M., Pasquale, F., and Das, S. 2020. The Oxford Handbook of Ethics of AI. Oxford: Oxford University Press. 

Pastoureau, M. 2001. Blue: The History of a Color. Princeton: Princeton University Press.

Popa, E.O., Blok, V., and Wesselink, R. 2020. “An Agonistic Approach to Technological Conflict”. Philosophy & Technology.

Rancière, J. 2004. Disagreement: Politics and Philosophy. Minneapolis: University of Minnesota Press.

Ricoeur, P. 1967. The Symbolism of Evil. Boston: Beacon Press.

Romele, A. forthcoming. “Images of Artificial Intelligence: A Blind Spot in AI Ethics”. Philosophy & Technology.

Image credits

Title image showing the painting “l’accord bleu (RE 10)”, 1960 by Yves Klein, photo by Jaredzimmerman (WMF), CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

About us

Alberto Romele is research associate at the IZEW, the International Center for Ethics in the Sciences and Humanities at the University of Tübingen, Germany. His research focuses on the interaction between philosophy of technology, digital studies, and hermeneutics. He is the author of Digital Hermeneutics (Routledge, 2020).

Dario Rodighiero is SNSF Fellow at Harvard University and Bibliotheca Hertziana. His research focuses on data visualization at the intersection of cultural analytics, data science, and digital humanities. He is also a lecturer at Pantheon-Sorbonne University, and he recently authored Mapping Affinities (Metis Presses, 2021).

The AI Creation Meme

A robot hand and a human hand reaching out with their fingertips towards each other

This blog post is based on Singler, B (2020) “The AI Creation Meme: A Case Study of the New Visibility of Religion in Artificial Intelligence Discourse” in Religions 2020, 11(5), 253; https://doi.org/10.3390/rel11050253


Few images are as recognisable or as frequently memed as Michelangelo’s Creazione di Adamo (Creation of Adam), a moment from his larger artwork that arches over the Sistine Chapel in Vatican City. Two hands, fingers nearly touching, fingertip to fingertip, a heartbeat apart in the moment of divine creation. We have all seen it reproduced with fidelity to the original or remixed with other familiar pop-culture forms. We can find examples online of god squirting hand sanitiser into Adam’s hand for a Covid-era message. Or a Simpsons cartoon version with Homer as god, reaching out towards a golden remote control. Or George Lucas reaching out to Darth Vader. This creation moment is also reworked into other mediums: the image has been remade with paperclips, satsuma sections, or embroidered as a patch for jeans. Some people have tattooed the two hands nearly touching on their skin, bringing it into their bodies. The diversity of uses and re-uses of the Creation of Adam speaks to its enduring cultural impact.

The creation of Adam by Michelangelo
Photography of Michelangelo’s fresco painting “The creation of Adam” which forms part of the Sistine Chapel’s ceiling

My particular interest in the meme-ing of the Creation of Adam stems from its ‘AI Creation’ form, which I have studied by collecting a corpus of 79 indicative examples found online (Singler 2020a). As with some of the above examples, the focus is often narrowed to just the hands and forearms of the subjects. The representation of AI in my corpus came in two primary forms: an embodied robotic hand or a more ethereal, or abstract, ‘digital’ hand. The robotic hands were either jointed white metal and plastic hands or fluid metallic hands without joints, reminiscent of the liquid, shapeshifting T-1000 model from Terminator 2: Judgment Day (1991). In examples with digital hands, they were formed either of points of light or of vector lines. The human hands in the AI Creation Meme also had characteristics in common: almost all were male and Caucasian in skin tone. Some might argue that this replicates how Michelangelo and his contemporaries envisaged Adam and the Abrahamic god. But if we can re-imagine these figures in Simpsons yellow or satsuma orange, then there are intentional choices being made here about race, representation, and privilege.

The colour blue was also significant in my sample. Grieser’s work (2017) on the popularity of Blue Brains in neuroscience imagery, which applies an “aesthetics of religion” approach, was relevant to this aspect of the AI Creation Meme. She argues that such colour choices and their associations – for instance, blue with “seriousness and trustworthiness”, the celestial and heavenly, and its opposition to dark and muted colours and themes – “target the level of affective attitudes rather than content and arguments” (Grieser 2017, p. 260). Background imagery also targeted affective attitudes: cosmic backgrounds of galaxies and star systems, cityscapes with skyscrapers, walls of binary text, abstract shapes in patterns such as hexagons, keyboards, symbols representing the fields that employ AI, and more abstract shapes in the same blue colour palette. The more abstract examples were used in more philosophical spaces, while the more business-orientated meme remixes were found more often on business, policy, and technology-focused websites, suggesting active choice in aligning the specific AI Creation meme with the location in which it was used. These were frequently spaces commonly thought of as ‘secular’ – technology and business publications, business consultancy firms, blog posts about fintech, bitcoin, eCommerce, or the future of work. What then of the distinction between the religious and the secular?

That the original Creation of Adam is a religious image is without question, although it is, of course, specific to a particular view of a monotheistic god. As a part of the larger work in the Sistine Chapel, it was intended to “introduce us to the world of revelation”, according to Pope John Paul II (1994). But such images are not merely broadcasting a message; meaning-making is an interactive event where the “spectator’s well of previous experiences” interplays with the object itself (Helmers 2004, p. 65). When approaching an AI Creation Meme, we bring our own experiences and assumptions, including the cultural memory of the original form of the image and its message of monotheistic creation. This is obviously culturally specific, and we might think about what a religious AI Creation Meme from a non-monotheistic faith would look like, as well as who is being excluded in this imaginary of the creation of AI. But this particular artwork has had impact across the world. Even in the most remixed form, we know broadly who is meant to be the Creator and who the Created, and that this moment is intended to be the very act of Creation.

Some of the AI Creation Memes even give greater emphasis to this moment, with the addition of a ‘spark of life’ between the human hand and the AI hand. The cultural narrative of the ‘spark of life’ likely begins with the scientific works of Luigi Galvani (1737–1798). He experimented with animating dead frogs’ legs with electricity and likely inspired Mary Shelley’s Frankenstein. In the 19th century, the ‘spark of life’ then became a part of the account of the emergence of all life on earth from the ‘primordial soup’ of “ammonia and phosphoric salts, light, heat, electricity etc.” (Darwin 1871). Grieser also noted such sparks in her work on ‘Blue Brain’ imagery in neuroscience, arguing that such motifs can be seen as perpetuating the aesthetic forms of a “religious history of electricity”, which involves visualising conceptions of communication with the divine (Grieser 2017, p. 253).

Finding such aesthetics, informed by ideology, in what are commonly thought of as ‘secular’ spaces, problematises the distinction between the secular and the religious. In the face of solid evidence against a totalising secularisation and in favour of religious continuity and even flourishing, some interpretations of secularisation have instead focused on how religions have lost control over their religious symbols, rites, narratives, tropes and words. So, we find figures in AI discourse such as Ray Kurzweil being proclaimed ‘a Prophet’, or people online describing themselves as being “Blessed by the Algorithm” when having a particularly good day as a gig economy worker, a content producer, or simply in general (Singler 2020). These are the religious metaphors we also live by, to paraphrase Lakoff and Johnson (1980).

The virality of humour and memetic culture is also at play in the AI Creation Meme. I’ve mentioned some of the examples where the original Creation of Adam is remixed with other pop culture elements, leading to absurdity (the satsuma creation meme is a new favourite of mine!). The AI Creation Meme is perhaps more ‘serious’ than these, but we might see the same kind of context-based humour being expressed through the incongruity of replacing Adam with an AI. Humour, though, can lend legitimation through a snowballing effect, as something that is initially flippant or humorous can become an object that is pointed to in more serious discourse. I’ve previously made this argument in relation to New Religious Movements that emerge from jokes or parodies of religion (Singler 2014), but it is also applicable to religious imagery used in unexpected places that gets a conversation started or informs the aesthetics of an idea, such as AI.

The AI Creation Meme also inspires thoughts of what is being created. The original Creation of Adam is about the origin of humanity. In the AI Creation Meme, we might be induced to think about the origins of post-humanity. And just as the original Creation of Adam leads us to think on fundamental existential questions, the AI Creation Meme partakes of posthumanism’s “repositioning of the human vis-à-vis various non-humans, such as animals, machines, gods, and demons” (Sikora 2010, p. 114), and it leads us into questions such as ‘Where will the machines come from?’, ‘What will be our relationship with them?’, and, apocalyptically again, ‘What will be at the end?’. Subsequent calls for our post-human ‘Mind Children’ to spread outwards from the earth might be critiqued as the “seminal fantasies of [male] technology enthusiasts” (Boss 2020, p. 39), especially as, as we have noted, the AI Creation Meme tends to show ‘the Creator’ as a white male.

However, there are opportunities in critiquing these tendencies and tropes; as with the post-human narrative, we can be alert to what Graham describes as the “contingencies of the boundaries by which we separate the human from the non-human, the technological from the biological, artificial from natural” (2013, p. 1). Elsewhere I have remarked on the liminality of AI itself and how we might draw on the work of anthropologists such as Victor Turner and Mary Douglas, as well as the philosopher Julia Kristeva, to understand how AI is conceived of, sometimes apocalyptically, as a ‘Mind out of Place’ (Singler 2019) as people attempt to understand it in relation to themselves. Paying attention to where and how we force such liminal beings and ideas into specific shapes and what those shapes are can illuminate our preconceptions and biases.

Likewise, the common distinction between the secular and the religious is problematised by the creative remixing of the familiar and the new in the AI Creation Meme. For some, a boundary between these two ‘domains’ is a moral necessity; some see religion as a pernicious irrationality that should be secularised out of society for the sake of reducing harm. There can be a narrative of collaboration in AI discourse, a view that the aims of AI (the development and improvement of intelligence) and the aims of atheism (the end of irrationalities like religion) are sympathetic and build cumulatively upon each other. So, for some, illustrating AI with religious imagery can be anathema. Whether or not we agree with that stance, we can use the AI Creation Meme as an example to question the role of such images in how the public comes to trust or distrust AI. For some, AI as a god or as the ‘child’ of humankind is a frightening idea. For others, it is reassuring and utopian. In either case, this kind of imagery might obscure the reality of current AI’s very un-god-like flaws, the humans currently involved in making and implementing AI, and what biases these humans have that might lead to very real harms.


Bibliography

Boss, Jacob 2020. “For the Rest of Time They Heard the Drum.” In Theology and Westworld. Edited by Juli Gittinger and Shayna Sheinfeld. Lanham, MD: Rowman & Littlefield.

Darwin, Charles 1871. “Letter to Joseph Hooker.” in The Life and Letters of Charles Darwin, Including an Autobiographical Chapter. London, UK: John Murray, vol. 3, p. 18.

Graham, Elaine 2013. “Manifestations of The Post-Secular Emerging Within Discourses Of Posthumanism.” Unpublished Conference Presentation Given at the ‘Imagining the Posthuman’ Conference at Karlsruhe Institute of Technology, July 7–8. Available online: http://hdl.handle.net/10034/297162 (accessed 3 April 2020).

Grieser, Alexandra 2017. “Blue Brains: Aesthetic Ideologies and the Formation of Knowledge Between Religion and Science.” In Aesthetics of Religion: A Connective Concept. Edited by A. Grieser and J. Johnston. Berlin and Boston: De Gruyter.

Helmers, Marguerite 2004. “Framing the Fine Arts Through Rhetoric”. In Defining Visual Rhetoric. Edited by Charles Hill and Marguerite Helmers. Mahwah: Lawrence Erlbaum, pp. 63–86.

Lakoff, George, and Johnson, Mark 1980. Metaphors We Live By. Chicago: University of Chicago Press.

Pope John Paul II. 1994. “Entriamo Oggi”, homily preached in the mass to celebrate the unveiling of the restorations of Michelangelo’s frescoes in the Sistine Chapel, 8 April 1994, available at http://www.vatican.va/content/john-paul-ii/en/homilies/1994/documents/hf_jpii_hom_19940408_restauri-sistina.html (accessed on 19 May 2020).

Sikora, Tomasz 2010. “Performing the (Non) Human: A Tentatively Posthuman Reading of Dionne Brand’s Short Story ‘Blossom’”. Available online: https://depot.ceon.pl/handle/123456789/2190 (accessed 30 March 2020).

Singler, Beth 2020. “‘Blessed by the Algorithm’: Theistic Conceptions of Artificial Intelligence in Online Discourse”. AI & Society. doi:10.1007/s00146-020-00968-2.

Singler, Beth 2019. “Existential Hope and Existential Despair in AI Apocalypticism and Transhumanism” in Zygon: Journal of Religion and Science 54: 156–76.

Singler, Beth 2014. “‘SEE MOM IT IS REAL’: The UK Census, Jediism and Social Media”. Journal of Religion in Europe 7(2): 150–168. https://doi.org/10.1163/18748929-00702005