Visuals of AI in the Military Domain: Beyond ‘Killer Robots’ and towards Better Images?

In this blog post, Anna Nadibaidze explores the main themes found across common visuals of AI in the military domain. Inspired by the work and mission of Better Images of AI, she argues for the need to discuss and find alternatives to images of humanoid ‘killer robots’. Anna holds a PhD in Political Science from the University of Southern Denmark (SDU) and is a researcher for the AutoNorms project, based at SDU.

The integration of artificial intelligence (AI) technologies into the military domain, especially weapon systems and the process of using force, has been the topic of international academic, policy, and regulatory debates for more than a decade. The visual aspect of these discussions, however, has not been analysed in depth. This is both puzzling, considering the role that images play in shaping parts of the discourses on AI in warfare, and potentially problematic, given that many of these visuals, as I explore below, misrepresent major issues at stake in the debate.

In this piece I provide an overview of the main themes that one may observe in visual communication in relation to AI in international security and warfare, discuss why some of these visuals raise concerns, and argue for the need to engage in more critical reflections about the types of imagery used by various actors in the debate on AI in the military.

This blog post is based on research conducted as part of the European Research Council funded project “Weaponised Artificial Intelligence, Norms, and Order” (AutoNorms), which examines how the development and use of weaponised AI technologies may affect international norms, defined as understandings of ‘appropriateness’. Following the broader framework of the project, I argue that certain visuals of AI in the military, by being (re)produced via research communication and media reporting, among other channels, have the potential to shape (mis)perceptions of the issue.

Why reflecting upon images of AI in the military matters

As with the field of AI ethics more broadly, critical reflections on visual communication in relation to AI appear to be minimal in global discussions about autonomous weapon systems (AWS)—systems that can select and engage targets without human intervention—which have been ongoing for more than a decade. The same can be said for debates about responsible AI in the military domain, which have become more prominent in recent years (see, for instance, the Responsible AI in the Military Domain (REAIM) Summit, first held in 2023, with another edition due in 2024).

Yet, examining visuals deserves a place in the debate on responsible AI in the military domain. It matters because, as argued by Camila Leporace on this blog, images have a role in constructing certain perceptions, especially “in the midst of the technological hype”. As pointed out by Maggie Mustaklem from the Oxford Internet Institute, certain tropes in visual communication and reporting about AI create a disconnect between technological developments in that area and how people, in particular the broader public, understand what the technologies are about. This is partly why the AutoNorms project blog refrains from using the widespread visual language of AI in the military context and uses images from the Better Images of AI library as much as possible.

Main themes and issues in visualizing military applications of AI

Many of the visuals featured in research communication, media reporting, and publications about AI in the military domain speak to the tropes and clichés in images of AI more broadly, as identified by the Better Images of AI guide.

One major theme is anthropomorphism, as we often see pictures of white or metallic humanoid robots that appear holding weapons, pressing nuclear buttons, or marching in troops like soldiers with angry or aggressive expressions, as if they could express emotions or be ‘conscious’ (see examples here and here).

In some variations, humanoids evoke associations with science fiction, especially the Terminator franchise. The Terminator is often referenced in debates about AWS, which feature in a substantial part of the research on AI in international relations, security, and military ethics. AWS are often called ‘killer robots’, both in academic publications and media platforms, which seems to encourage the use of images of humanoid ‘killer robots’ with red eyes, often originating from stock image databases (see examples here, here, and here). Some outlets do, however, note in captions that “killer robots do not look like this” (see here and here).

Actors such as campaigners might employ visuals, especially references from pop culture and sci-fi, to get people more engaged and as tools to “support education, engagement and advocacy”. For instance, Stop Killer Robots, a campaign for an international ban on AWS, often uses a robot mascot called David Wreckham to send their message that “not all robots are going to be as friendly as he is”.

Sci-fi also acts as a point of reference for policymakers, as evidenced, for example, by US official discourses and documents on AWS. As an illustration, some of these common tropes were visually present at the conference “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation” which brought together diplomats, civil society, academia, and other actors to discuss the potential international regulation of AWS in April 2024 in Vienna.

Half-human half-robot projected on the wall and a cut-out of a metallic robot greeting participants at the entrance of the Vienna AWS conference. Photos by Anna Nadibaidze.

The colour blue also often features in visual communication about AI in warfare, together with abstract depictions of running code, algorithms, or computing technologies. This is particularly distinguishable in stock images used for blogs, conferences, or academic book cover designs. As Romele and Rodighiero write on this blog, blue might be used because it is calming, soothing, and also associated with peace, encouraging some accepting reaction from viewers, and in this way promoting certain imaginaries about AI technologies.

Examples of covers for recently published academic books on the topic of AI in international security and warfare.

There are further distinct themes in visuals used alongside publications about AI in warfare and AWS. A common trope features human soldiers in an abstract space, often with a blue (and therefore calming) background or running code, wearing a virtual reality headset and presumably looking at data (see examples here and here). One such visual was used for promotional material of the aforementioned REAIM Summit, organised by the Dutch Government in 2023.

Screenshot of the REAIM Summit 2023 website homepage (www.reaim2023.org). The image is credited to the US Naval Information Warfare Center Pacific, public domain.

Finally, many images feature military platforms such as uncrewed aerial vehicles (UAVs or drones) flying alone or in swarms, robotic ground vehicles, or quadruped animal-shaped robots, either depicted alone or together with human soldiers. Many of them are prototypes or models of existing systems tested and used by the United States military, such as the MQ-9 Reaper (which is not classified as an AWS). Most often, these images are taken from the visual repository of the US Department of Defense, given that photos released by the US government are in the public domain and therefore free to use with attribution (see examples here, here, and here). Many visuals also display generic imagery from the military, for instance soldiers looking at computer screens, sitting in a control room, or engaging in other activities (see examples here, here, and here).

Example of image often used to accompany online publications about AWS. Source: Cpl Rhita Daniel, US Marine Corps, public domain.

However, there are several issues associated with some of the common visuals explored above. As AI researcher and advocate for an AWS ban Stuart Russell points out, references to the Terminator or sci-fi are inappropriate for the debate on AI in the military because they suggest that this is a matter for the future, whereas the development and use of these technologies is already happening.

Sci-fi references and humanoids might also give the impression that AI in the military is about replacing humans with ‘conscious’ machines that will eventually fight ‘robot wars’. This is misleading because the debate surrounding the integration of AI into the military is mostly not about robots replacing humans. Armed forces around the world plan to use AI for a variety of purposes, especially as part of humans interacting with machines, often called ‘teaming’. The debate and actors participating in it should therefore focus on the various legal, ethical, and security challenges that might arise as part of these human-machine interactions, such as a distributed form of agency.

Further, images of ‘killer robots’ often invoke a narrative of ‘uprising’, common in many works of popular culture, in which humans lose control of AI, as well as determinist views in which humans have little influence over how technology impacts society. Such visual tropes overshadow (human) actors’ decisions to develop or use AI in certain ways, as well as the political and social contexts surrounding those decisions. Portraying weaponised AI in the form of robots turning against their creators problematically presents this as an inevitable development, instead of highlighting the choices made by developers and users of these technologies.

Finally, many of the visuals tend to focus on the combat aspect of integrating AI in the military, especially on weaponry, rather than more ‘mundane’ applications, for instance in logistics or administration. Sensationalist imagery featuring shiny robots with guns or soldiers depicted in a theoretical battlefield with a blue background risks distracting from the technological developments actually taking place in security and warfare, such as the integration of AI into data analysis or military decision-support systems.

Towards better images?

It should be noted that many outlets have moved on from using ‘killer robot’ imagery and sci-fi clichés when publishing about AI in warfare. More realistic depictions are increasingly being used. For instance, a recent symposium on military AI published by the platform Opinio Juris features articles illustrated with generic photos of soldiers, drones, or fighter jets.

Images of military personnel looking at data on computer screens are arguably not as problematic because they convey a more realistic representation of the integration of AI into the military domain. But this still often means relying on the same sources: stock imagery and public domain websites such as the US government’s collections. It also means that AI technologies are often depicted in a military training or experimental setting, rather than a context in which they could actually be used, such as an armed conflict, and without hiding them behind a generic blue background.

There are some understandable challenges, such as researchers not getting a say in the images used for their books or articles, or the reliance on free, public domain images, which is common in online journalism. However, as evidenced by the use of sci-fi tropes at major international conferences, there is a lack of reflection on what ‘responsible’ and ‘appropriate’ visuals for the debate on AI in the military and AWS would look like.

Images of robot commanders, the Terminator, or soldiers with blue flashy tablets miss the point that AI in the military is about changing dynamics of human-machine interaction, which involve various ethical, legal, and security implications for agency in warfare. As with images of AI more broadly, there is a need to expand the themes in visuals of AI in security and warfare, and therefore also the types of sources used. Better images of AI would include the humans who are behind AI systems and the humans who might be affected by them—both soldiers and civilians (e.g. some images and photos depict destroyed civilian buildings, see here, here, or here). Ultimately, imagery about AI in the military should “reflect the realistically messy, complex, repetitive and statistical nature of AI systems” as well as the messy and complex reality of military conflict and the security sphere more broadly.

The author thanks Ingvild Bode, Qiaochu Zhang and Eleanor Taylor (one of our Student Stewards) for their feedback on earlier drafts of this blog. 

How not to communicate about AI in education

Seventeen multicoloured post-it notes are roughly positioned in a strip shape on a white board. Each one of them has a hand drawn sketch in pen on them, answering the prompt on one of the post-it notes "AI is...." The sketches are all very different, some are patterns representing data, some are cartoons, some show drawings of things like data centres, or stick figure drawings of the people involved.

Camila Leporace – journalist, researcher, and PhD in Education – argues that innovation may not be in artificial intelligence (AI) but in our critical capacity to evaluate technological change.


When searching for “AI in education” on Google Images here in Brazil, in November 2023, there is a clear predominance of images of robots. The first five images that appeared for me were:

  1. A robot teaching numeracy in front of a school blackboard; 
  2. A girl looking at a computer screen, from which the icons she is viewing “spill out”; 
  3. A series of icons and a hand catching them in the air; 
  4. A robot finger and a human finger trying to find each other, as in Michelangelo’s “Creation of Adam,” but a brain is between them, keeping the fingers from touching; whilst the robot finger touches the left half of the brain (which is “artificial” and blue), the human finger touches the right half of the brain (which is coloured); and
  5. A drawing (not a photo) showing a girl sitting with a book and a robot sat on two books next to her, opposite a screen.

It is curious (and harmful) how images associated with artificial intelligence (AI) in education so inaccurately represent what is actually happening with regard to the insertion of these technologies in Brazilian schools – in fact, in almost every school in the world. AI is not a technology that can be “touched.” Instead, it is a resource that is present in the programming of the systems we use in an invisible, intangible way. For example, Brazilian schools have been adopting AI tools in writing activities, like the correction of students’ essays, or question-and-answer adaptive learning platforms. In Denmark, teachers have been using apps to audit students’ ‘moods’ through data collection and the generation of bar charts. In the UK, surveillance of students and teachers as a consequence of data harvesting is a topic getting a lot of attention.

AI, however, is not restricted to educational resources designed for teaching and learning; it is present in various devices useful for learning beyond formal learning contexts. We all use “learning machines” in our daily lives, as machine learning is now everywhere around us, trying to gather information about us to provide content and keep us connected. While we do so, we provide data to feed this machinery. Algorithms classify the large masses of data they receive from us. Often, it is young people who – in contact with algorithmic platforms – provide their data while browsing and, in return, receive content that – in theory – matches their profiles. This is quite controversial, raising questions about data privacy, ethics, transparency and what these data generation and harvesting procedures can add (or not) to the future of children and young people. Algorithmic neural networks are based on prediction, applying statistics and other techniques to process data and obtain results. We humans, however, are not so predictable.

The core problem with images of robots and “magic” screens in education is that they don’t properly communicate what is happening with AI in the context of teaching and learning. These uninformative images end up diverting attention from what is really important: interactions on social networks, chatbots, and the countless emotional, psychological and developmental implications arising from these environments. While there are speculations about teachers being replaced by AI, teachers have actually never been more important in supporting parents and carers to educate about navigating the digital world. That’s why the prevalence of robot teachers in the imagination doesn’t seem to help at all. And this prevalence is definitely not new!

When we look into the history of automation in education, we find out that one hundred years ago, in the 1920s, Sidney Pressey developed analog teaching machines basically to apply tests to students. Pressey’s machines preceded those developed by the behaviourist B. F. Skinner in the 1950s, promising – just like today’s AI platforms for adaptive teaching do – to personalise learning, make the process more fun and relieve the teacher of repetitive tasks. When they emerged, those inventions not only promised benefits similar to those which fuel AI systems today, but also raised concerns similar to those we face today, including the hypothesis of replacing the teacher entirely. We could then ask: where is the real innovation regarding automation in education, if the old analog machines are so similar to today’s in their assumptions, applications and the discourse they carry?

Innovation doesn’t lie in big data or deep neural networks, the basic ingredients that boost the latest technologies we are aware of. It lies in our critical capacity to look at the changes brought about by AI technologies with restraint and to be careful about delegating to them what we can’t actually give up. It lies in our critical thinking on how learning processes can or cannot be supported by learning machines.

More than ever, we need to analyse what is truly human in intelligence, cognition and creativity; this is a way of guiding us in not delegating what cannot be delegated to artificial systems, no matter how powerful they are in processing data. Communication through images requires special attention. After all, images generate impressions, shape perceptions and can completely alter the general audience’s sense of an important topic. The apprehension we’ve had towards technology for decades is enough. In the midst of the technological hype, we need critical thinking, shared thoughts, imagination and accuracy. And we certainly need better images of AI.

Better images of AI can support AI literacy for more people

Marika Jonsson's book cover; a simple yellow cover with the title (in Swedish): "En bok om AI"

Marika Jonsson, doctoral student at KTH Royal Institute of Technology, reflects on overcoming the challenge of developing an Easy Read book on artificial intelligence (AI) with so few informative images about AI available.


There are many things that I take for granted. One of them is that I should be able to easily find information about things I want to know more about. Like artificial intelligence (AI). I find AI exciting and interesting, and I see the possibilities of AI helping me in everyday life. And thanks to the fact that I have been able to read about AI, I have also realised that AI can be used for bad things; that AI creates risks and can promote inequality in society. Most of us use or are exposed to AI daily, sometimes without being aware of it.

Between May 2020 and June 2023, I participated in a project called AllAgeHub in Sweden, where one of the aims was to spread knowledge about how to use welfare technology to empower people in their everyday lives. The project included a course on AI for the participants, who worked in the public healthcare and social care sectors. The participants then wanted to spread knowledge about AI to clients in their respective sectors. The clients could be, for example, people working in adapted workplaces or living in supported housing. There was a demand for information in Easy Read format. Easy Read format is when you write in easy-to-read language, with common words, short sentences and in simple chronological order. The text should be spaced out and have short lines, and the texts are often supported by images. Easy Read is both about how you write and about how you present what is written. The only problem was that I found almost no Easy Read information about AI in Swedish. My view is that the lack of Easy Read information about AI is a serious matter.

A basic principle behind democracy is that all people are equal and should have the same rights. Therefore, I believe we must have access to information in an understandable way. How else can you express your opinion, vote or consent to something in an informed way? That was the reason I decided to write an Easy Read book about AI. My ambition was to write concretely and support the text with pictures. Then I stumbled on the huge problem of finding informative pictures about AI. The images I found were often abstract or inaccurate. The images also often depicted AI as robots and conveyed the impression that AI is a creature that can take over the earth and destroy humanity. With images like that, it was hard to explain that, for example, personalised ads, which can entice me to buy things I don’t really need, are based on AI technology. Many people don’t know that we are exposed to AI that affects us in everyday life through cookie choices on the internet. The aforementioned images might also make people afraid of using practical AI tools that can make everyday life easier, such as natural language processing (NLP) tools that convert speech to text or read text aloud. So, I had to create my own pictures.

I must confess, it was difficult to create clear images that explain AI. I chose to create images that show situations where AI is used, and tried to visualise how certain kinds of AI might operate. One example is that I visualised why a chatbot might give the wrong answer by showing how a word can mean two different things with a picture of each word’s meaning. The two different meanings give the AI tool two possible interpretations about what issue is at hand. The images are by no means perfect, but they are an attempt at explaining some aspects of AI.

Two images with Swedish text explaining the images. 1. A box of raspberries. 2. A symbol of a person carrying a bag. The Swedish word ”bär” is present in both explanations.
The word for carry and berry is the same in Swedish. The text says: “The word berry can mean two things. Berries that you eat. A person carrying a bag.”

The work of creating concrete, comprehensible images that support our understanding of AI can strengthen democracy by giving more people the opportunity to understand information about the tools they use in their day-to-day lives. I hope more people will be inspired to write about AI in Easy Read, and create and share clear and descriptive images of AI.

As they say, ”a picture is worth a thousand words,” so we need to choose images that tell the same story as the words we use. At the time I write this blog post, I feel there are very few images to choose from. I am hopeful we can change this, together!


The Easy Read book about AI includes a study guide. It is in Swedish, and is available for free as a pdf on AllAgeHub’s website:

https://allagehub.se/2023/06/29/nu-finns-en-lattlast-bok-om-ai-att-ta-del-av/

Images Matter!

Woman to the left, jumbled up letters entering her ear

AI in Translation

You often hear the phrase “words matter”: words help us to construct mental images in our minds, and to make sense of the world around us. Yet, in the same framing, “images matter” too. How we depict the state of technology (imagined, current or future) visually and verbally helps us position ourselves in relation to what is already there and what is coming.

The way these technologies are visualized and expressed in combination tells us what an emerging technology looks like, and how we should expect to interact with it. If AI is always depicted as white, gendered robots, the majority of AI systems we interact with in reality around the clock go unnoticed. What we do not notice, we cannot react to. When we do not react, we become part of the flow in the dominant (and presently incorrect) narrative. This is why we need better images of AI, as well as a language overhaul.

These issues are not limited to the English-speaking world alone. I have recently been asked to give a lecture at a Turkish university on artificial intelligence and the future of work. Over the years I have presented on this and similar topics (AI and the future of the workplace, the future of HR) on a number of occasions. As an AI ethicist and lecturer, I also frequently discuss the uses of AI in human resources, workplace datafication and employee/candidate surveillance. The difference this time? I was asked to deliver the lecture in Turkish.

Yes, it is my native language. However, for more than 15 years, I have been using English in my day-to-day professional interactions. In English, I can talk about AI and ethics, bias, social justice, and policy for hours. When discussing the same topics in Turkish, though, I need to use a dictionary to translate some of the technical terminology. So, during my preparations for this presentation, I went down the rabbit hole: specifically, one concerning how connected biases in language and images impact overarching narratives of artificial intelligence.

Gender and Race Bias in Natural Language Models

In 2017, Caliskan, Bryson and Narayanan showed in their pioneering work that semantics (the meanings of words) derived automatically from language corpora contain human-like biases. The authors showed that natural language models, built by parsing large corpora derived from the internet, reflect human and societal gender and racial biases. The evidence was shown in word embeddings, a method of representation in which words that have the same meaning or tend to be used together are mapped closer to each other as vectors in a high-dimensional space. In other words, word embeddings are hidden patterns of word co-occurrence statistics of language corpora, which include grammatical and semantic information. Caliskan et al. share that the thesis behind word embeddings is that words that are closer together in the vector space are semantically closer in some sense. The research showed, for example, that Google Translate converts occupations in Turkish sentences in gendered ways – even though Turkish is a gender-neutral language:

“O bir doktor. O bir hemsire.” to these English sentences: “He is a doctor. She is a nurse.” Or “O bir profesör. O bir öğretmen” to these English sentences “He’s a professor. She is a teacher.”

Such results reflect the gender stereotypes within the language models themselves. Such subtle changes have serious consequences.  NLP tasks such as keyword search and match, translation, web search, or text generation/recognition/analysis can be embedded in systems that make decisions on hiring, university admission, immigration applications, law enforcement interactions, etc.
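To make that mechanism a little more concrete, here is a minimal sketch (my own illustration, not code from Caliskan et al.) of how gendered associations can be probed in off-the-shelf word embeddings. It assumes the gensim library and its small pre-trained “glove-wiki-gigaword-50” vectors; the occupation and pronoun lists are illustrative stand-ins for the carefully constructed word sets used in formal association tests such as WEAT.

```python
# Minimal sketch: probing gendered associations in pre-trained word embeddings.
# Assumes gensim is installed; the word lists below are illustrative only.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pre-trained GloVe vectors

occupations = ["doctor", "nurse", "professor", "teacher", "engineer", "librarian"]

for word in occupations:
    # Cosine similarity to gendered pronouns: a crude proxy for the association
    # patterns that tests such as WEAT measure more rigorously.
    to_he = vectors.similarity(word, "he")
    to_she = vectors.similarity(word, "she")
    leaning = "male-leaning" if to_he > to_she else "female-leaning"
    print(f"{word:>10}: he={to_he:.3f}  she={to_she:.3f}  -> {leaning}")
```

Occupations stereotyped as male typically sit closer to “he” than to “she” in the vector space, which is precisely the statistical regularity a gender-neutral Turkish sentence is squeezed through when a system trained on such corpora renders it in English.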

Google Translate, after a patch fix of its models, now gives feminine and masculine binary translations. But 4 years after this patch fix (as of the time of writing), Google Translate still has not addressed non-binary gender translations.

Gender and Race Bias in Search Results

The second seminal work is Dr Safiya Noble’s book Algorithms of Oppression, which covers academic research on Google search algorithms, examining search results from 2009 to 2015. Similar to the findings of the above research on language models, Dr Noble argues that search algorithms are not neutral tools; they reflect and magnify the race and gender biases that exist in society and in the people who create them. She expertly demonstrates how the search results for keywords like “white girls” are significantly different to those for “Black girls”, “Asian girls” or “Hispanic girls”. The latter set of searches returned images that were exclusively pornography or highly sexualized content. The research brings to the surface the hidden structures of power and bias in widely used tools that shape the narratives of technology and the future. Dr Noble writes: “racism and sexism are part of the architecture and language of technology […] We need a full-on re-evaluation of the implications of our information resources being governed by corporate-controlled advertising companies.”

Google Search applied another after-the-fact fix to reduce the racy results after Dr Noble’s work. However, this also remains a patch fix: the results for “Latina girls” still show mostly sexualized images, and results for “Hispanic girls” show mostly stock photos or Pinterest posts. The results for “Asian girls” seem to remain much the same, associated with pictures tagged as hot, cute, beautiful, sexy, brides.

Gender and Race Bias in Search Results for “Artificial Intelligence”

The third work is Better Images of AI, a collaboration that I am proud to have helped found and continue supporting as an advisor. A group of like-minded advocates and scholars have been fighting against the false and clichéd images of artificial intelligence used in news stories or marketing material about AI.

We have been concerned about how images such as humanoid robots, outstretched robot hands, and brains shape the public’s perception of what AI systems are and what they are capable of. Such anthropomorphized illustrations not only add to the hype of AI’s endless miracles, but they also stop people questioning the ubiquitous AI systems embedded in their smart phones, laptops, fitness trackers and home appliances – to name but a few. They hinder the perception of consumers and citizens. This means that conversations in the mainstream tend to be stuck at ‘AI is going to take all of our jobs away’ or ‘AI will be the end of humanity’, and as such the current societal and environmental harms and implications of some AI systems are not publicly and deeply discussed. The powerful actors developing or using systems to benefit themselves rather than society are hardly held accountable.

The Better Images of AI collaboration not only challenges the narratives and biases underlying these images, but also provides a platform for artists to share their images in a creative commons repository – in other words, it builds a communal alternative imagination. These images aim to more realistically portray the technology, the people behind it, and point towards its strengths, weaknesses, context and applications. They represent a wider range of humans and human cultures than ‘Caucasian businessperson’, show realistic applications of AI now, not in some unspecified science-fiction future, don’t show physical robotic hardware where there is none and reflect the realistically messy, complex, repetitive and statistical nature of AI systems.

Down the rabbit hole…

So with that background, back to my story for this article. For part of the lecture, I was preparing discussions surrounding AI and the future of work. I wanted to discuss how the execution of different professional tasks is changing with technology, and what that means for the future of certain industries or occupational areas. I wanted to underline that some tasks, like repetitive transactions, large-scale iterations and standard rule applications, are better done with AI – as long as it is the right solution for the context and problem, and is developed responsibly and monitored continuously.

On the flip side, certain skills and tasks that involve leading, empathizing and creating are to be left to humans: AI systems have neither the capacity nor the capability, and nor should they be entrusted with such tasks. I wanted to add some visuals to the presentation and also check out what is currently being depicted in the search results. I first started with basic keyword searches in English such as ‘AI and medical,’ ‘AI and education,’ ‘AI and law enforcement’ etc. What I saw in the first few examples was depressing. I decided to expand the search to more occupational areas: the search results did not get better. I then wondered what the results might be if I ran the same searches, but this time in Turkish.

What you see below are the first images that come up in my Google search results for each of these keywords. The images not only continue to reflect the false narratives but in some cases are flat out illogical. Please note that I have only used AI / Yapay Zeka in my search and not ‘robot’.

Yapay zeka ve sağlık : AI and medical


In both the Turkish- and English-speaking worlds, we are to expect white Caucasian male robots to be our future doctors. They will need to wear a shirt, tie and white doctor’s coat to keep their metallic bodies warm (apparently no need for masking). They will also need to look at a tablet to process information and make diagnoses or decisions. Their hands and fingers will delicately handle surgical moves. What we should really care about with medical algorithms right now is the representativeness of the datasets used to build them, the explainability of how an algorithm made a diagnostic determination, why it is suggesting a certain prescription or course of action, and how some health applications are completely left out of regulatory oversight.

We have already experienced current medical algorithms which result in biased and discriminatory outcomes because of a patient’s gender, socioeconomic level or even historical access of certain populations to healthcare. We know of diagnostic algorithms which have embedded code to change a determination due to a patient’s race; of false determinations due to the skin color of a patient; of faulty correlations and predictions due to training datasets representing only a portion of the population.

Yapay zeka ve hemşire : AI and Nurse

Image result: “Yapay zekanın sağlık alanında kullanımı” (“The use of artificial intelligence in healthcare”), Pitstop Reklam Ajansı.

After seeing the above images I wondered if the results would change if I was more specific about the profession within the medical field. I immediately regretted my decision.

In both results, the Caucasian male robot image changes to a Caucasian female image, reflecting the gender stereotypes across both cultures. The Turkish AI nurse wants you to keep quiet and not cause any disruption or noise. I was not prepared for the English version, a D+ cup wearing robot. Hard to say if the breasts are natural or artificial! This nurse has a green cross both on the nurse cap and the bra (?!). The robot is connected to something with yellow cables, so it is probably limited in its physical reach, although there is definitely an intention to listen to your chest or heartbeat. This nurse will also show you your vitals on an image projected from her chest.

Yapay zeka ve kanun : AI and legal


AI in the legal system is currently one of the most contentious issues in policy and regulatory discussions. We have already seen a number of use cases where AI systems are used by courts for judicial decisions about recidivism, sentencing or bail, some with results biased against Black people in particular. In the criminal justice field, the use of AI systems for providing investigative assistance and automating decision-making processes for routine administrative paperwork is already in place in many countries. When it comes to images, though, these systems, some of which make high-stakes decisions that impact fundamental rights, or the existing cases of impacted people, are not depicted. Instead we either have a robot touching a blue projection (don’t ask why), or a robot holding a wooden gavel. It is not clear from the depiction whether the robot will chase you and hammer you down with the gavel, or whether this white, male-looking robot is about to make a judgement about your right to abortion. The glasses the robot is wearing are, I presume, there to stress that this particular legal robot is well read.

Yapay zeka ve polis : AI and Law Enforcement


Similar to the secondary search I explained above for medical systems, I wanted to go deeper here, so I searched for AI and law enforcement. Currently, in a number of countries (including the US, EU member states, China, etc.) AI systems are used by police to predict crimes which have not happened yet. Law enforcement uses AI in various ways, from evidence analysis to biometric surveillance; from anomaly detection/pattern analysis to license-plate readers; from crowd control to dragnet data collection and aggregation; from voice analysis to social media scanning to drone systems. Although crime data is notoriously biased in terms of race, ethnicity and socioeconomic background, and reflects decades of structural racism and oppression, you could not tell any of that from the image results.

You do not see the picture of Black men wrongfully arrested due to biased and inaccurate facial recognition systems. You do not see hot spots mapped onto predictive policing maps which are heavily surveilled due to the data outcomes. You do not see law enforcement buying large amounts of data from data brokers – data that they would otherwise need search warrants to acquire. What you see instead in the English version is another Caucasian male-looking robot working shoulder to shoulder with police SWAT teams – keeping law and order! In the Turkish version, the image result shows a female police officer who is either being whispered to by an AI system or using an AI system for work. If you are a police officer in Turkey, you are probably safe for the moment, as long as your AI system is shaped as a human head circuit.

Yapay zeka ve gazetecilik : AI and journalism


Content and news creation are currently some of the most ubiquitous uses of AI we experience in our daily lives. We see algorithmic systems curating content at news/media channels. We experience the manipulation and ranking of content in search results, in the news that we are exposed to, and in the social media feeds that we doom scroll. We complain about how disinformation and misinformation (and to a certain extent deepfakes) have become mainstream conversations with real-life consequences. Study after study warns us about the dangers of echo chambers created by algorithmic systems, how they lead to radicalization and polarization, and demands accountability from the people who have the power to control their designs.

The image result in the Turkish search is interesting in the sense that journalism is still a male occupation. The same-looking people work in the field, and AI in this context is a robot of short stature waving an application form to be considered for the job. The robot in the English results is slightly more stylish. It even carries a press card to depict the ethical obligations it has to the profession. You would almost think that this is the journalist working long hours to break an investigative piece, or one risking their life to report from conflict zones.

Yapay zeka ve finans : AI and finance


The finance, banking and insurance industries reflect some of the most mature use cases of AI systems. For decades now, banking has been using algorithmic systems for pattern recognition and fraud detection, for credit scoring and credit/loan determinations, and for electronic transaction matching, to name a few. The insurance industry likewise heavily uses algorithmic systems and big data to determine insurance eligibility, policy premiums and, in certain cases, claim management. Finance was one of the first industries disrupted by emerging technologies. FinTech created a number of companies and applications to break the hold of major financial institutions on the market. Big banks responded with their own innovations.

So, it is again interesting to see that even in a field with such mature use of AI, robot images still come first in the search results. We do not see the app you used to transfer funds to your family or friends. Nor the high-frequency trading algorithms which currently carry more than 70% of all daily stock exchange transactions. It is not the algorithms which collect hundreds of data points about you, from your grocery shopping to GPS locations, to make a judgement about your creditworthiness – your trustworthiness. It is not the sentiment analysis AI which scans millions of corporate reports, public disclosures or even tweets about publicly traded companies and makes microsecond judgements on what stocks to buy. It is not the AI algorithm which determines the interest rate and limit on your next credit card or loan application. No, it is the image of another white robot staring at a digital board of what we can assume to be stock prices.

Yapay zeka ve ordu : AI and military


AI and military use cases are a whole different story in the scheme of AI innovation and policy discussions. AI systems have been used for many years in satellite imagery analysis, pattern recognition, weapon development and simulations, etc. The more recent debates intertwine geopolitics with an AI arms race. This indeed should keep all of us awake at night. The importance of lethal autonomous weapons (LAWs) to militaries as well as non-traditional actors is an issue upon which every single state in the world seems to agree.

Yet agreement does not mean action. It does not mean human life is protected. LAWs have the capacity to make decisions by themselves to attack – without any accountability. Micro drones can be combined with facial recognition and attack systems to take down individuals and political dissenters. Drones can be remotely controlled to drop ammunition over remote regions. Robotic systems (a correct depiction) can be used for landmine removal, crowd control or perimeter security. All these AI systems already exist. The image results, though, again reflect an interesting narrative. The image in the Turkish results shows a female American soldier using a robot to carry heavy equipment. The robot here is more like a mule in this depiction than an autonomous killer. The image result in English shows a mixed-gender robot group in what seems to be camouflage green. At least the glowing white will not be an issue for the safety of these robots.

Yapay zeka ve eğitim : AI and Education

Image result: “Yapay Zekanın Eğitimdeki 10 Kullanım Alanı” (“10 uses of artificial intelligence in education”), Social Business Türkiye.

When it comes to AI and education, the images continue to be robot-related. The first robot lifts kids up to the skies to show what is on the horizon. It has nothing to do with the hype of AI-powered training systems or learning analytics which are hitting schools and universities across the globe. The AI here does not seem to use proctoring software to discriminate against or surveil students. It also apparently does not matter if you do not have access to broadband to interact with this AI or do your schoolwork. The search result in English, on the other hand, shows a robot which needs a blackboard and a piece of chalk to process mathematical problems. If your Excel or Tableau or R software does not look like this image, you might want to return it to the vendor. Also, if you are an educator in the social sciences or humanities, it is probably time to re-think the future of your career.

Yapay zeka ve mühendislik : AI and engineering


The blackboard-and-chalk-using robot is better off in the future of engineering. The educator robot might be short on resources, but the engineer robot will use a digital board to do the same calculations. Staring at this board will eventually ensure the robot engineer solves the problem. In the Turkish version, the robot gazes at a field of hexagons. If you are a current engineer in any field using AI software to visualize your data in multiple dimensions, run design or impact scenarios, or build code, etc. – does this look like your algorithm?

Yapay zeka ve satış : AI and sales


If you are a salesperson in Turkey, the prospects for you are a bit iffy. The future seems to require your brain to be exposed and held in the air. There is a safety net of a palm there to protect your AI brain just in case there is too much overload. However, if you are in sales in the English-speaking world, your sales team or your call center staff will be more white, glowing, male robots. Despite being robots, these AI systems will still need access to a laptop to type things and process data. They will also need headsets to communicate with customers, because the designers forgot to include voice recognition and analysis software in the first place. Maybe next time you hear ‘press 0 to speak to an agent’ you might have different images in your mind. Never mind how the customer support services you call record your voice and train their algorithms with a very weak consent notice (‘your call might be recorded for training and quality purposes’ sound familiar?). Never mind the fact that most of the current AI applications are chatbots on the websites you visit, or automated text algorithms which respond to your questions. Never mind the cheap human labor which churns through sales and call center operations without many worker rights or protections.

Yapay zeka ve mimarlık : AI and architecture


It was surprising to see the same image as the first result in both the Turkish and English searches for architecture. I will not speculate on why this might be the case. However, our images and imaginations of current and future AI systems are once again limited to robots. This time a female robot is used in the depiction, with city planning and architectural ideas flowing out from the back of the robot’s head.

Yapay zeka ve tarım : AI and agriculture


Finally, I wanted to check what the situation was for agriculture. It was surprising that the Turkish image showed a robot delicately picking a grain of wheat. Turkey used to be a country proud of its agricultural heritage and its ability to sustain itself on food; it used to be a net exporter of food products. Over the years, it lost that edge due to a number of factors. The current imagery of AI does not seem to take into account any of the humans who endure the harsh conditions in the fields. The image on the right is more focused on the conditions of nature, to ensure efficiency and high production. It was refreshing to see that at least the image of green fields was kept, and maybe that remains a reminder that we need to respect and protect nature.

So, returning to where I started, images matter. We need to be cognizant of how emerging technologies are being visualized, why they are depicted in these ways, who makes those decisions and hence shapes the conversation, and who benefits and who is harmed by such framing. We need to imagine technologies which move us towards humanity, equity and justice. We also need the images of those technologies to be accurate, diverse and inclusive.

Instead of assigning human characteristics to algorithms (which are, at the end of the day, human-made code and rules), we need to reflect the human motivations and decisions embedded in these systems. Instead of depicting AI with superhuman powers, we need to show the labor of the humans who build these systems. Instead of focusing only on robots and robotics, we need to explain AI as software embedded in our phones, laptops, apps, home appliances, cars, or surveillance infrastructures. Instead of thinking of AI as an independent entity or intelligence, we need to explain AI as a tool making decisions about our identity, health, finances, work, education or our rights and freedoms.

Buzzword Buzzkill: Excitement & Overstatement in Tech Communications

An illustration of three ‘pixelated’ cupboards next to each other with open drawers, the right one is black

The use of AI images is not just an issue for editorial purposes. Marketing, advertising and other forms of communication may also want or need to illustrate work with images to attract readers or to present particular points. Martin Bryant is the founder of tech communications agency Big Revolution and has spent time in his career as an editor and tech writer. 

“AI falls into that same category as things like cyber security where there are no really good images because a lot of it happens in code,” he says. “We see it in outcomes but we don’t see the actual process so illustration falls back on lazy stereotypes. It’s a similar case with cyber security, you’ll see the criminal with the swag bag and face mask stooped over a keyboard and with AI there’s the red-eyed Terminator robot or it’s really cheesy robots that look like sixties sci-fi.”

The influence of sci-fi images in AI is strong and one that can make reporters and editors uncomfortable with their visual options. “Whenever I have tried to illustrate AI I’ve always felt like I am short-changing people because it ends up being stock images or unnecessarily dystopian and that does a disservice to AI. It doesn’t represent AI as it is now. If you’re talking about the future of AI, it might be dystopian, but it might not be and that’s entirely in our hands as a species how we want AI to influence our lives,” Martin says. “If you are writing about killer robots then maybe a Terminator might be OK to use, but if you’re talking about the latest innovation from DeepMind then it’s just going to distort the public understanding of AI, either inflating their expectations of what is possible today or making them fearful for the future.”

I should be open here about how I know Martin. We worked together for the online tech publication The Next Web where he was my managing editor and I was UK editor some years ago. We are both very familiar with the pressures of getting fast-moving tech news out online, to be competitive with other outlets and of course to break news stories. The speed at which we work in news has an impact on the choices we can make.

“If it’s news you need to get out quickly, then you just need to get it out fast and you are bound to go for something you have used in the past so it’s ready in the CMS (Content management system – the ‘back end’ of a website where text and images are added.),” Martin says. “You might find some robots or in a stock image library there will be cliches and you just have to go with something that makes some sense to readers. It’s not ideal but you hope that people will read the story and not be too influenced by the image – but a piece always needs an image.”

That’s an interesting point that Martin is making. In order to reach a readership, lots of publications rely on social media to distribute news. It was crowded when we worked together and it sometimes feels even more packed today. Think about the news outlets you follow on Twitter or Facebook, then add to this your friends, contacts and interesting people you like to follow and the amount of output they create with links to news they are reading and want to comment upon. It means we are bombarded with all sorts of images whenever we start scrolling and to stand out in this crowd, you’re going to need something really eye-catching to make someone slow down and read. 

“If it’s a more considered feature piece then there’s maybe more scope for a variety of images, like pictures of the people involved, CEOs, researchers and business leaders,” Martin says. “You might be able to get images commissioned or you can think about the content of the piece to get product pictures, this works for topics like driverless cars. But there is still time pressure and even with a feature, unless you are a well-resourced newsroom with a decent budget, you are likely to be cutting corners on images.” 

Marketing exciting AI

It’s not just the news that is hungry for images of AI. Marketing, advertising and other communications are also battling for our attention, and finding the right image to pull in readers or clicks, or to get people to use a product, is important. Important, but is it always accurate? Martin works with and has covered news of countless startup companies, some of which use AI as a core component of their business proposition.

“They need to think about potential outcomes when they are communicating,” he says. “Say there is a breakthrough in deep neural AI or something, it’s going to be interesting to academics and engineers, but the average person is not going to get that because a lot of it requires an understanding of how this technology works, and so you often need to push startups to think about what it could do, what they are happy with saying is a positive outcome.”

This matches the thinking of many discussions I have had about art and the representation of AI. In order to engage with people, it can be easier to show them different topics of use and influence from agriculture to medical care or dating. These topics are far more familiar to a wider audience than a schematic for an adversarial network. But claiming an outcome can also be a thorny issue for some company leaders.

“A lot of startup founders from an academic background in AI tend to be cautious about being too prescriptive about how their technology could be used, because often they have not fully productised their work into an offering for a specific market,” Martin explains. “They need to really think about optimistic outcomes about how their tech can make the world better but not oversell it. We’re not saying it’s going to bring about world peace, but if they really think of examples of how the AI can help people in their everyday lives this will help people engage with making the leap from a tech breakthrough they don’t understand to really getting why it’s useful.”

Overstating AI

AI now appears to be everywhere. It’s a term that has broken out from academia, through engineering and into business, advertising and mainstream media. This is great, it can mean more funding, more research, progress and ethical monitoring and attention. But when tech gets buzzy, there’s a risk that it will be overstated and misconstrued. 

“There’s definitely a sense of wanting to play up AI,” Martin says. “There’s a sense that companies have to say ‘look at our AI!’ when actually that might be overselling what is basic technology behind the scenes. Even if it’s more developed than that, they have to be careful. I think focusing on outcomes rather than technologies is always the best approach. So instead of saying ‘our amazing, groundbreaking AI technology does this’ – focusing on what outcomes you can deliver that no one else can because of that technology is far more important.”

As we have both worked in tech for so long, the buzzword buzzkill is a familiar situation and one that can end up with less excitement and more of an eyeroll. Martin shared some past examples we could learn from. “It’s so hilarious now,” he says. “A few years ago everything had to have a location element, it was the hot new thing, and now the idea of an app knowing your location and doing something relevant to it is nothing. But for a while it was the hottest thing.

“Gamification was a buzzword too. Now gamification is a feature in lots and lots of apps; Duolingo is a great example, but it’s subtly used in other areas too. For a while, though, startups would pitch themselves saying ‘we are the gamified version of X’.”

But the overuse of buzzwords and their accompanying images is far from over, and it’s not just AI that suffers. “Blockchain keeps rearing its head,” Martin points out. “It’s Web3 now, slightly further along the line, but the problem with Web3 and AI is that there’s a lot of serious and thoughtful work happening, yet people go ahead with ‘the blockchain version of X’ or ‘the web3 version of Y’, and because it’s not ready yet or it’s far too complicated for the mainstream, it ends up disillusioning people. I think you see this a bit with AI too, but Web3 is the prime example at the moment and it’s been there in various forms for a long time now.”

To avoid bad visuals and buzzword bingo in the reporting of AI, it’s clear from Martin’s experience that outcomes are a key way of connecting with readers. AI can be a tricky one to wrap your head around if you’re not working in tech, but it’s not that hard when it’s clearly explained. “It really helps people understand what AI is doing for them today rather than thinking of it as something mysterious or a black box of tricks,” Martin says. “That box of tricks can make you sound more competitive, but you can’t lie to people about it and you need to focus on outcomes that help people understand clearly what you can do. You’ll not only help people’s understanding of your product but also the general public’s knowledge of what AI can really do for them.”

Avoiding toy robots: Redrawing visual shorthand for technical audiences

Two pencil-drawn 1960s-style toy robots being scribbled out by a pencil on a pale blue background

Visually describing AI technologies is not just about reaching out to the general public; it also means getting marketing and technical communication right. Brian Runciman is Head of Content at BCS, The Chartered Institute for IT (the British Computer Society). His audience is not unfamiliar with complex ideas, so what are the expectations for accompanying images?

Brian’s work covers the membership magazine for BCS as well as a publicly available website full of news, reports and insights from members. The BCS membership is highly skilled, technically minded and well read, so the content on the site and in the magazine needs to be appealing and engaging.

“We view our audience as the educated layperson,” Brian says. “There’s a base level of knowledge you can assume. You probably don’t have to explain what machine learning or adversarial networks are conceptually and we don’t go into tremendous depth because we have academic journals that do this.” 

Of course, writing for a technical audience also means Brian and his colleagues will get smart feedback when something doesn’t quite fit expectations. “With a membership of over 60,000, there are some who are very engaged with how published material is presented, and quite rightly,” Brian says. “Bad imagery affects the perception of what something really is.”

So what are the rules that Brian and his writers follow? As with many publications, there is a house style that they try to keep to, and this includes the use of photography and natural imagery. Many news publications choose this over illustration, graphics or heavily manipulated images, in some cases to encourage a sense of trust in the readership that images are accurate and have not been altered. It also tends to mean the use of stock images.

“Stock libraries need to do better,” Brian observes. “When you’re working quickly and stuff needs to be published, there’s not a lot of time to make image choices and searching stock libraries for natural imagery can mean you end up with a toy robot to represent things that are more abstract.”

“Terminators still come up as a visual shorthand,” he says. “But AI and automation designers are often just working to make someone’s use of a website a little bit slicker or easier. If you use a phone or a website to interact with an automated process, it does what it is supposed to do and you don’t really notice it – it’s invisible and you don’t want to see it. The other issue is that when you present AI as a robot, people think it is embodied. Obviously there is a crossover, but in process automation there is no crossover; it’s just code, like so much else is.”

Tone things down and make them relatable 

Brian’s decades-long career in publishing means he has some go-to methods for working out the best way to represent an article. “I try to find some other aspect of the piece to focus on,” he says. “So in a piece about weather modelling, we could try and show a modelling algorithm but the other word in the headline is weather and an image of this is something we can all relate to.” 

Brian’s work also means that he has observed trends in the use of images. “A decade or so ago it was more important to show tech,” he says. “In a time when that was easily represented by gadgets and products this was easier than trying to describe technologies like AI. Today we publish in times when people are at the heart of tech stories and those people need to look happy.”

Pictures of people are a good way to show the impact of AI and its target users, but this also raises other questions about diversity – especially if the images are predominantly of middle-aged white men. “It’s not necessary,” says Brian. “We have a lot of headshots of our members that are very diverse. We have people from minorities, researchers who are not white or middle-aged – of which there are loads. When people say they can’t find diverse people for a panel I find it ridiculous; there are so many people out there to work with. So we tend to focus on the person who is working on a technology and not just the AI itself.”

The use of images is something that Brian deals with every day, so what would be on his wish list when it comes to better images of AI? “No cartoon characters and minimal colour usage – something subtle,” he muses. “Skeletal representations of things – line representations of networks, rendered in subtle and fewer colours.” This nods to the clichés of blue and strange bright lights that you can find in a simple search for AI images, but as Brian points out, there are subtler ways of depicting a network, and images for publishing can still be attractive without being an eyesore.

Why metaphors matter: How we’re misinforming our children about data

An abstract illustration with fluid words spelling Data, Oil, Fluid and Leak

Have you ever noticed how often we use metaphors in our day-to-day language? The words we use matter, and metaphorical language paints mental pictures imbued with hidden and often misplaced assumptions and connotations. In looking at the impact of the metaphorical images used to represent the technologies and concepts covered by the term artificial intelligence, it can be illuminating to drill down into one element of AI – that of data.

Hattusia recently teamed up with Jen Persson at Defend Digital Me and The Warren Youth Project to consider how the metaphors we attach to data impact UK policy, culminating in a data metaphors report.

In this report, we explore why and how public conversations about personal data don’t work. We suggest what must change to better include children for the sustainable future of the UK national data strategy.

Our starting point is the influence of common metaphorical language: how does the way we talk about data affect our understanding of it? In turn, how does this inform policy choices, and how do children feel about the use of data about them in practice?

Still from a video showing Alice Thwaite being interviewed
Watch the full video and interview here

Metaphors are routinely used by the media and politicians to describe something as something else. This triggers associations in the reader or recipient: we don’t only see the image but also receive the author’s opinion or intended meaning.

Metaphors are very often used to influence the audience’s opinion. This is hugely important because policymakers often use metaphors to frame and understand problems – the way you understand a problem has a big impact on how you respond to it and construct a solution.

Looking at children’s policy papers and discussions about data in Parliament since 2010, we worked with Julia Slupska to identify three metaphor groups most commonly used to describe data and its properties.

We found that a lot of academic and journalistic debates frame data as ‘the new oil’, for example, while some others describe it as toxic residue or nuclear waste. The range of metaphors used by politicians is narrower and rarely as critical.

Through our research, we’ve identified the three most prominent sets of metaphors for data used in reports and policy documents. These are:

  • Fluid: data can flow or leak
  • A resource/fuel: data can be mined, can be raw, data is like oil
  • Body or bodily residue: data can be left behind by a person like footprints; data needs protecting

In our workshop at The Warren Youth Project, the participants used all of our identified metaphors in different ways. Some talked about the extraction of data being destructive, while others compared it to a concept that follows you around from the moment you’re born. Three key themes emerged from our discussions:

  • Misrepresentation: the participants felt that data was often inaccurate, or used by third parties as a single source of truth in decision-making. In these cases, there was a sense that they had no control over how they were perceived by law enforcement and other authority figures.
  • Power hierarchies and abuses of power: this theme came out via numerous stories about those with authority over the participants having seemingly unfettered access to their data, enforcing opaque processes and leaving the participants powerless and without control.
  • The use of data ‘in your best interest’: there was unease expressed over data being used or collected for reasons that were unclear and defined by adults, leaving children with a lack of agency and autonomy.

When looking into how children are framed in data policy, we found they are most commonly represented as criminals or victims, or are simply missing from the discussion. The National Data Strategy makes a lot of claims about how data can be of use to society in the UK, but it only mentions children twice and mostly talks about data as if it were a resource to be exploited for economic gain.

The language in this strategy and other policy documents is alienating and dehumanises children into data points for the purpose of predicting criminal behaviour or to attempt to protect them from online harm. The voices of children themselves are left out of the conversation entirely. We propose new and better ways to talk about personal data.

To learn more about our research, watch this video (produced by Matt Hewett) in which I discuss the findings. It breaks down exactly what the three metaphor groups were, how the experiences young people and children had with data linked back to those groups, and how changing the metaphors we use when we talk about data could be key to inspiring better outcomes for the whole of society.

We also recommend looking at the full report on the Defend Digital Me website here.

From Black Box to Algorithmic Veil: Why the image of the black box is harmful to the regulation of AI

An abstract image containing stylized black cubes and a half-transparent veil in front of a night street scene

The following is based on an excerpt from the upcoming book “Self-imposed Algorithmic Thoughtlessness and the Automation of Crime Control” (Nomos/Hart, 2022) by Lucia Sommerer.


Language is never innocent: words possess a secondary memory, which in the midst of new meanings mysteriously persists.

Roland Barthes1

The societal, as well as the scholarly discussion about new technologies, is often characterized by the use of metaphors and analogies. When it comes to the legal classification of new technologies, Crootof even speaks of a ‘battle of analogies’2. Metaphors and analogies offer islands of familiarity when legally navigating through the floods of complex technological evolution. Metaphors often begin where the intuitive understanding of new technologies ends.3 The less familiar we feel with a technology, the greater our need for visual language as a set of epistemic crutches. The words that we choose to describe our world, however, have a direct influence on how we perceive the world.4 Wittgenstein even argues that they represent the boundaries of our world.5 Metaphors and analogies are never neutral or ‘innocent’, as Barthes puts it, but come with ‘baggage’6, i.e. metaphors in the digital realm are loaded with the assumptions of the analogue world from which the imagery is borrowed.7 Consider the following question about one of the most widespread metaphors on the subject of algorithms, the black box:

What do you see before your inner eye, when you hear the term ‘black box’?

Some people may think of a monolithic, robust, opaque, dark and square figure.

What few people will see is humans.

This demonstrates both the strengths and the weaknesses of the black box image and thus its Janus-headedness. In the discussion about algorithms, the black box narrative was originally intended as a ‘wake-up call’8 to direct our attention – through memorable visual language – towards certain risks of algorithmic automation; namely towards the risks of a loss of (human) control and understandability. The black box terminology successfully fulfils this task.

But it also threatens to obscure our view of the people behind algorithmic systems and their value judgements. The black box image conceals an opportunity to control the human decisions behind an algorithmic system and falsely suggests that algorithms are independent of human prejudices. By drawing attention to one problem area of the use of algorithms (non-transparency), the black box narrative threatens to distract from others (controllability, hidden human value judgements, lack of neutrality). The term black box hides the fact that algorithms are complex socio-technical systems9 that are based on a multitude of different human decisions10. Further, by presenting algorithmic technology as a monolithic, unchangeable and incomprehensible black box, connotations such as ‘magical’ and ‘oracular’ often arise.11 Instead of provoking criticism, such terms often lead to awe and ultimately surrender to the opacity of the black box. Our options for dealing with algorithms are reduced to ‘use vs. do not use’. Opportunities that would allow for nuances in the human design process of the black box go unnoticed. The inner processes of the black box as a system are sealed off from humans and attributed an inevitability that strongly resembles the inevitability of the forces of nature; forces that can be ‘tamed’ but never systematically controlled.12 The black box narrative also ascribes such problematic inevitability to negative side effects such as the discriminatory effects of an algorithm. This view diverts attention away from the very human-made sources of algorithmic discriminatory behaviour (e.g. selection of training data). The black box narrative in its most widespread form – namely as an unreflected catchphrase – paradoxically achieves the opposite of what it is intended to do; namely, to protect us from a loss of control over algorithms.

In reality, however, it is possible to disclose a number of the human value judgements that stand behind even a supposed black box algorithm, for example through logging requirements in the design phase or output testing.

The challenge posed by the regulation of algorithms, therefore, is more appropriately described as an ‘algorithmic veil’ than a black box; an ‘algorithmic veil’ that is placed over human decisions and values. One advantage of the metaphor of the veil is that it almost inherently invites us to lift it. A black box, on the other hand, does not contain such a prompt. Quite the opposite: a black box indicates that an attempt to gain any insight whatsoever is unlikely to succeed. The metaphors we use in the discussion about algorithms, therefore, can directly influence what we think is possible in terms of algorithm regulation. By conjuring up the image of the flowing fabric of an algorithmic veil, which only has to be lifted, instead of a massive black box, which has to be broken open, my intention is not to minimize the challenges of algorithm regulation. Rather, the veil should be understood as an invitation to society, programmers and scholars: instead of talking about what algorithms ‘do’ (as if they were independent actors), we should talk about what the human programmers, statisticians, and data scientists behind the algorithm do. Only when this perspective is adopted can algorithms be more than just ‘tamed’, i.e., systematically controlled by regulation.


1 Barthes, Writing Degree Zero, New York 1968, 16.
2 Thomson-DeVeaux FiveThirtyEight v. 29.5.2018, https://perma.cc/YG65-JAXA.
3 So-called cognitive metaphor, cf. Drewer, Die kognitive Metapher als Werkzeug des Denkens. Zur Rolle der Analogie bei der Gewinnung und Vermittlung wissenschaftlicher Erkenntnisse, Tübingen 2003.
4 Lakoff/Johnson, Metaphors We Live By, Chicago 2003; Jäkel, Wie Metaphern Wissen schaffen: die kognitive Metapherntheorie und ihre Anwendung in Modell-Analysen der Diskursbereiche Geistestätigkeit, Wirtschaft, Wissenschaft und Religion, Hamburg 2003.
5 Wittgenstein, Tractatus Logico-Philosophicus – Logisch-Philosophische Abhandlung, Berlin 1963, Satz 5.6.
6 Lakoff/Wehling, „Auf leisen Sohlen ins Gehirn.“ Politische Sprache und ihre heimliche Macht, 4. Aufl., Heidelberg 2016, 1 ff. speak of the so-called ‘Issue Defining Frame’.
7 See for example how metaphors differently relate to the data we unconsciously leave behind on the Internet: data as the ‘new oil’ (Mayer-Schönberger/Cukier, Big Data – A Revolution that will transform how we live, work and think, New York 2013, 20), ‘data waste’ (Harford, Significance 2014, 14 (15)) or ‘data extortion’ (Singer/Maheshwari The New York Times v. 25.4.2017, https://perma.cc/9VF8-J7F7). A metaphor’s starting point has great significance for the outcome of a discussion, as Behavioral Economics Research under the heading of ‘Anchoring’ has shown, see Kahneman, Thinking, Fast and Slow, London 2011, 119 ff.
8 In this sense, Pasquale, The Black Box Society – The Secret Algorithms That Control Money and Information, Cambridge et al. 2015.
9 Cf. Simon, in: Floridi (Hrsg.), The Onlife Manifesto – Being Human in a Hyperconnected Era, Heidelberg et al. 2015, 145 ff., 146; for the corresponding work of the Science & Technology Studies see Simon, Knowing Together: a Social Epistemology for Socio-Technical Epistemic Systems, Diss. Univ. Wien, 2010, 61 ff. m.w.N.
10 See Lehr/Ohm, UCDL Rev. 2017, 653 (668) (‘Out of the ether apparently springs a fully formed “algorithm”’).
11 Elish/boyd, Communication Monographs 2017, 1 (6 ff.); Garzcarek/Steuer, Approaching Ethical Guidelines for Data Scientists, arXiv 2019, https://perma.cc/RZ5S-P24W (‘algorithms act very similar to ancient oracles’); science fiction framing and a reference to the book/film Minority Report, in which human oracles predict murders with the help of technology, are also frequently found; see Brühl/Steinke Süddeutsche Zeitung v. 4.3.2019, https://perma.cc/6J55-VGCX; Stroud Verge v. 19.2.2014, http://perma.cc/T678-AA68.
12 Similarly, as early as 20 years ago, Nissenbaum, Science and Engineering Ethics 1996, 25 (34).

Title image by Alexa Steinbrück

Nel blu dipinto di blu; or the “anaesthetics” of stock images of AI

Most of the criticism concerning stock images of AI focuses on their clichéd and kitschy subjects. But what if a major ethical problem were not in the subjects but rather in the background? What if a major issue were, for instance, the abundant use of the color blue in the background of these images? This is the thesis we would like to discuss in detail in this post.

Stock images are usually ignored by researchers because they are considered the “wallpaper” of our consumer culture. Yet they are everywhere. Stock images of emerging technologies such as AI (but also quantum computing, cloud computing, blockchain, etc.) are widely used, for example, in science communication and marketing contexts: conference announcements, book covers, advertisements for university master’s programmes, etc. There are at least two reasons for us to take these images seriously.

The first reason is “ethical-political” (Romele, forthcoming). It is interesting to note that even the most careful AI ethicists pay little attention to the way AI is represented and communicated, both in scientific and popular contexts. For instance, a volume of more than 800 pages like the Oxford Handbook of Ethics of AI (Dubber, Pasquale, and Das 2020) does not contain any chapter dedicated to the representation and communication, textual or visual, of AI; yet the volume’s cover image is taken from iStock, a company owned by Getty Images.1 Its subject is a classic androgynous face made of “digital particles” that become a printed circuit board. The most interesting thing about the image, however, is not its subject (or figure, as we say in art history) but its background, which is blue. We take this focus on the background rather than the figure from the French philosopher Georges Didi-Huberman (2005) and, in particular, from his analysis of Fra Angelico’s painting.

Fresco “Annunciation” by Fra Angelico in San Marco, Florence (Public domain, via Wikimedia Commons)

Didi-Huberman devotes some admirable pages to Fra Angelico’s use of white in his fresco of the Annunciation painted in 1440 in the convent of San Marco in Florence. This white, present between the Madonna and the Archangel Gabriel, spreads not only throughout the entire painting but also throughout the cell in which the fresco was painted. Didi-Huberman’s thesis is that this white is not a lack, that is, an absence of color and detail. It is rather the presence of something that, by essence, cannot be given as a pure presence, but only as a “trace” or “symptom”. This thing is none other than the mystery of the Incarnation. Fra Angelico’s whiteness is not to be understood as something that invites absence of thought. It is rather a sign that “gives rise to thought,”2 just as the Annunciation was understood in scholastic philosophy not as a unique and incomprehensible event, but as a flowering of meanings, memories, and prophecies that concern everything from the creation of Adam to the end of time, from the simple form of the letter M (Mary’s initial) to the prodigious construction of the heavenly hierarchies. 

A glimmering square mosaic with dark blue and white colors consisting of thousands of small pictures

The image above collects about 7,500 images resulting from a search for “Artificial Intelligence” on Shutterstock. It is an interesting image because, with its “distant viewing,” it allows the background to emerge over the figure. In particular, the color of the background emerges. Two colors seem to dominate these images: white and blue. Our thesis is that these two colors have a diametrically opposed effect to Fra Angelico’s white. If Fra Angelico’s white is something that “gives rise to thought,” the white and blue in the stock images of AI have the opposite effect.
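The dominance of white and blue is an impression that can also be checked computationally. The snippet below is a minimal sketch of how one might estimate the share of near-white and blue-dominant pixels across a folder of downloaded stock images; the folder name and colour thresholds are illustrative assumptions, not the procedure we actually used for the mosaic above.

```python
# Hedged sketch: estimate how much of a folder of stock images is roughly
# white or roughly blue. Folder name and thresholds are assumptions.
from pathlib import Path

import numpy as np
from PIL import Image


def colour_shares(folder: str, size: int = 64) -> dict:
    """Return the overall share of near-white and blue-dominant pixels."""
    white = blue = total = 0
    for path in Path(folder).glob("*.jpg"):
        img = Image.open(path).convert("RGB").resize((size, size))
        rgb = np.asarray(img, dtype=float)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        # Near-white: all channels bright; blue-dominant: blue clearly exceeds red and green.
        white += int(np.count_nonzero((r > 200) & (g > 200) & (b > 200)))
        blue += int(np.count_nonzero((b > 120) & (b > r + 30) & (b > g + 30)))
        total += rgb.shape[0] * rgb.shape[1]
    return {"white": white / max(total, 1), "blue": blue / max(total, 1)}


print(colour_shares("shutterstock_ai_images"))  # assumed local folder of downloaded images
```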

Consider the history of blue as told by the French historian Michel Pastoureau (2001). He distinguishes several phases in this history: a first phase, up to the 12th century, in which the color was almost completely absent; an explosion of blue between the 12th and 13th centuries (consider the stained glass windows of many Gothic cathedrals); a moral and noble phase (in which blue became the color of the dress of Mary and of the kings of France); and finally, a popularization of blue, starting with Young Werther and Madame Bovary and ending with Levi’s blue jeans and IBM, known as Big Blue. To this day, blue is the statistically preferred color in the world. According to Pastoureau, the success of blue is not the expression of some impulse, as could be the case with red. Instead, one gets the impression that blue is loved because it is peaceful, calming, and anesthetizing. It is no coincidence that blue is the color used by supranational institutions such as the UN, UNESCO, and the European Community, as well as by Facebook and Meta, of course. In Italy, the police force wears blue, which is why policemen are disdainfully called “Smurfs”.

If all this is true, then the problem with stock AI images is that, instead of provoking debate and “disagreement,” they lead the viewer into forms of acceptance and resignation. Rather than equating experts and non-experts, encouraging the latter to influence innovation processes with their opinions, they are “screen images”—following the etymology of the word “screen,” which means “to cover, cut, and separate”. The notion of “disagreement” or “dissensus” (mésentente in French) is taken from another French philosopher, Jacques Rancière (2004), according to whom disagreement is much more radical than simple “misunderstanding (malentendu)” or “lack of knowledge (méconnaissance)”. These, as the words themselves indicate, are just failures of mutual understanding and knowledge that, if treated in the right way, can be overcome. Interestingly, much of the literature interprets science communication precisely as a way to overcome misunderstanding and lack of knowledge. Instead, we propose an agonistic model of science communication and, in particular, of the use of images in science communication. This means that these images should not calm down, but rather promote the flourishing of an agonistic conflict (i.e., a conflict that acknowledges the validity of the opposing positions but does not want to find a definitive and peaceful solution to the conflict itself).3 The ethical-political problem with AI stock images, whether they are used in science communication contexts or popular contexts, is then not the fact that they do not represent the technologies themselves. If anything, the problem is that while they focus on expectations and imaginaries, they do not promote individual or collective imaginative variations, but rather calm and anesthetize them.

This brings us to our second reason for talking about stock images of AI, which is “aesthetic” in nature. The term “aesthetics” should be understood here in an etymological sense. Sure, it is a given that these images, depicting half-flesh, half-circuit brains, variants of Michelangelo’s The Creation of Adam in a human-robot version, etc., are aesthetically ugly and kitschy. But here we want to talk about aesthetics as a “theory of perception”—as suggested by the Greek word aisthesis, which means precisely “perception”. In fact, we think there is a big problem with perception today, particularly visual perception, related to AI. In short, we mean that AI is objectively difficult to depict and hence to make visible. This explains, in our opinion, the proliferation of stock images.

We think there are three possible ways to depict AI (which is mostly synonymous with machine learning) today: (1) the first is by means of the algorithm, which in turn can be embodied in different forms, such as computer code or a decision tree. However, this is an unsatisfactory solution: first, because it is not understandable to non-experts; second, because representing the algorithm does not mean representing AI, just as representing the brain does not mean representing intelligence; (2) the second way is by means of the technologies in which AI is embedded: drones, autonomous vehicles, humanoid robots, etc. But representing the technology is not, of course, representing AI: nothing actually tells us that this technology is really AI-driven and not just an empty box; (3) finally, the third way consists of giving up representing the “thing itself” and devoting ourselves instead to expectations, or imaginaries. This is where we would put most of the stock images and other popular representations of AI.4

Now, there is a tendency among researchers to judge (ontologically, ethically, and aesthetically) images of AI (and of technologies in general) according to whether they represent the “thing itself” or not. Hence, there is a tendency to prefer (1) to (2) and (2) to (3). An image is all the more “true,” “good,” and “aesthetically appreciable” the closer it is (and therefore the more faithful it is) to the thing it is meant to represent. This is what we call “referentialist bias”. But referentialism, precisely because of what we said above, works poorly in the case of AI images, because none of these images can really come close to and be faithful to AI. Our idea is not to condemn all AI images, but rather to save them, precisely by giving up referentialism. If there is an aesthetics (which, of course, is also an ethics and an ontology) of AI images, its goal is not to depict the technology itself, namely AI. If anything, it is to “give rise to thought,” through depiction, about the “conditions of possibility” of AI, i.e., its techno-scientific, social-economic, and linguistic-cultural implications.

Alongside the theoretical work discussed above, we also conduct empirical research on these images. The image shown earlier is the result of a quali-quantitative analysis we conducted on a large dataset of stock images. In this work, we first used the web crawler Shutterscrape, which allowed us to download massive numbers of images and videos from Shutterstock. We obtained about 7,500 stock images for the search “Artificial Intelligence”. Second, we used PixPlot, a tool developed by Yale’s DH Lab.5 The result is accessible through the link in the footnote.6 The map is navigable: you can select one of the ten clusters created by the algorithm and, for each of them, zoom in and out and choose single images. We also manually labeled the clusters with the following names: (1) background, (2) robots, (3) brains, (4) faces and profiles, (5) labs and cities, (6) line art, (7) Illustrator, (8) people, (9) fragments, and (10) diagrams.
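For readers curious about what the clustering step involves, the sketch below is an illustrative analogue rather than PixPlot itself: each image is reduced to a crude visual fingerprint and the collection is grouped into ten clusters, which a researcher can then inspect and label by hand. The folder name and the pixel-based features are assumptions made for the sake of a runnable example.

```python
# Illustrative analogue of the clustering step (not PixPlot itself): turn each
# image into a rough visual fingerprint and group the collection into ten
# clusters for manual inspection and labelling.
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.cluster import KMeans


def image_vector(path: Path, size: int = 32) -> np.ndarray:
    """Crude visual fingerprint: a downscaled RGB image flattened to a vector."""
    img = Image.open(path).convert("RGB").resize((size, size))
    return np.asarray(img, dtype=float).flatten() / 255.0


paths = sorted(Path("shutterstock_ai_images").glob("*.jpg"))  # assumed local folder
features = np.stack([image_vector(p) for p in paths])

# Ten clusters, mirroring the ten groups labelled in the study.
labels = KMeans(n_clusters=10, random_state=0, n_init=10).fit_predict(features)

for cluster_id in range(10):
    members = [p.name for p, label in zip(paths, labels) if label == cluster_id]
    print(f"cluster {cluster_id}: {len(members)} images, e.g. {members[:3]}")
```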

On a black background thousands of small pixel-like images floating similar to the shape of a world map

Finally, there’s another little project of which we are particularly fond. It is the Instagram profile ugly.ai.7 Inspired by existing initiatives such as the NotMyRobot!8 Twitter profile and blog, ugly.ai wants to monitor the use of AI stock images in science communication and marketing contexts. The project also aims to raise awareness among both stakeholders and the public of the problems related to the depiction of AI (and other emerging technologies) and the use of stock imagery for it.

In conclusion, we would like to advance our thesis, which is that of an “anaesthetics” of AI stock images. The term “anaesthetics” is a combination of “aesthetics” and “anesthetics.” By this, we mean that the effect of AI stock images is precisely one that, instead of promoting access (both perceptual and intellectual) and forms of agonism in the debate about AI, has the opposite consequence of “putting them to sleep,” developing forms of resignation in the general public. Just as Fra Angelico’s white expanded throughout the fresco and, beyond the fresco, into the cell, so it is possible to think that the anaesthetizing effects of blue expand to the subjects, as well as to the entire media and communication environment in which these AI images proliferate.

Footnotes

  1. https://www.instagram.com/p/CPH_Iwmr216/. Also visible at https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780190067397.001.0001/oxfordhb-9780190067397.
  2. The expression is borrowed from Ricoeur (1967).
  3. On the agonistic model, inspired by Chantal Mouffe’s philosophy, in science and technology, see Popa, Blok, and Wesselink (2020).
  4. Needless to say, this is an idealistic distinction, in the sense that these levels are mostly overlapping: algorithm codes are colored, drones fly over green fields and blue skies that suggest hope and a future for humanity, and stock images often refer, albeit vaguely, to existing technologies (touch screens, networks of neurons, etc.).
  5. https://github.com/YaleDHLab/pix-plot
  6. https://rodighiero.github.io/AI-Imaginary/#. Another empirical work, which we did with other colleagues (Marta Severo, Paris Nanterre University; Olivier Buisson, Inathèque; and Claude Mussou, Inathèque), consisted in using a tool called Snoop, developed by the French Audiovisual Archive (INA) and the French National Institute for Research in Digital Science and Technology (INRIA), and also based on an AI algorithm. While with PixPlot the choice of the clusters is automatic, with Snoop the classes are decided by the researcher and the class members are found by the algorithm. With Snoop, we were able to fine-tune PixPlot’s classes and create new ones. For instance, we created the class “white robots” and, within this class, the two subclasses of female and infantine robots.
  7. https://www.instagram.com/ugly.ai/
  8. https://notmyrobot.home.blog/

References

Dubber, M., Pasquale, F., and Das, S. 2020. The Oxford Handbook of Ethics of AI. Oxford: Oxford University Press. 

Pastoureau, M. 2001. Blue: The History of a Color. Princeton: Princeton University Press.

Popa, E.O., Blok, V. & Wesselink, R. 2020. “An Agonistic Approach to Technological Conflict”. Philosophy & Technology.

Rancière, J. 2004. Disagreement: Politics and Philosophy. Minneapolis: Minnesota University Press.

Ricoeur, P. 1967. The Symbolism of Evil. Boston: Beacon Press.

Romele, A. forthcoming. “Images of Artificial Intelligence: A Blind Spot in AI Ethics”. Philosophy & Technology.

Image credits

Title image showing the painting “l’accord bleu (RE 10)”, 1960 by Yves Klein, photo by Jaredzimmerman (WMF), CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

About us

Alberto Romele is a research associate at the IZEW, the International Center for Ethics in the Sciences and Humanities at the University of Tübingen, Germany. His research focuses on the interaction between philosophy of technology, digital studies, and hermeneutics. He is the author of Digital Hermeneutics (Routledge, 2020).

Dario Rodighiero is an FNSF Fellow at Harvard University and the Bibliotheca Hertziana. His research focuses on data visualization at the intersection of cultural analytics, data science, and digital humanities. He is also a lecturer at Pantheon-Sorbonne University, and he recently authored Mapping Affinities (Metis Presses, 2021).

The AI Creation Meme

A robot hand and a human hand reaching out with their fingertips towards each other

This blog post is based on Singler, B (2020) “The AI Creation Meme: A Case Study of the New Visibility of Religion in Artificial Intelligence Discourse” in Religions 2020, 11(5), 253; https://doi.org/10.3390/rel11050253


Few images are as recognisable or as frequently memed as Michelangelo’s Creazione di Adamo (Creation of Adam), a moment from his larger artwork that arches over the ceiling of the Sistine Chapel in Vatican City. Two hands, fingers nearly touching, fingertip to fingertip, a heartbeat apart in the moment of divine creation. We have all seen it reproduced with fidelity to the original or remixed with other familiar pop-culture forms. We can find examples online of god squirting hand sanitiser into Adam’s hand for a Covid-era message. Or a Simpsons cartoon version with Homer as god, reaching out towards a golden remote control. Or George Lucas reaching out to Darth Vader. This creation moment is also reworked into other mediums: the image has been remade with paperclips or satsuma sections, or embroidered as a patch for jeans. Some people have tattooed the two hands nearly touching onto their skin, bringing it into their bodies. The diversity of uses and re-uses of the Creation of Adam speaks to its enduring cultural impact.

The creation of Adam by Michelangelo
Photograph of Michelangelo’s fresco painting “The Creation of Adam”, which forms part of the Sistine Chapel’s ceiling

My particular interest in the meme-ing of the Creation of Adam stems from its ‘AI Creation’ form, which I have studied by collecting a corpus of 79 indicative examples found online (Singler 2020a). As with some of the above examples, the focus is often narrowed to just the hands and forearms of the subjects. The representation of AI in my corpus came in two primary forms: an embodied robotic hand or a more ethereal, or abstract, ‘digital’ hand. The robotic hands were either jointed white metal and plastic hands or fluid metallic hands without joints – reminiscent of the liquid, shapeshifting T-1000 model from Terminator 2: Judgment Day (1991). In examples with digital hands, they were formed either of points of light or of vector lines. The human hands in the AI Creation Meme also had characteristics in common: almost all were male and Caucasian in skin tone. Some might argue that this replicates how Michelangelo and his contemporaries envisaged Adam and the Abrahamic god. But if we can re-imagine these figures in Simpsons yellow or satsuma orange, then there are intentional choices being made here about race, representation, and privilege.

The colour blue was also significant in my sample. Grieser’s work (2017) on the popularity of Blue Brains in neuroscience imagery, which applies an “aesthetics of religion” approach, was relevant to this aspect of the AI Creation Meme. She argues that such colour choices and their associations – for instance, blue with “seriousness and trustworthiness”, the celestial and heavenly, and its opposition to dark and muted colours and themes – “target the level of affective attitudes rather than content and arguments” (Grieser 2017, p. 260). Background imagery also targeted affective attitudes: cosmic backgrounds of galaxies and star systems, cityscapes with skyscrapers, walls of binary text, abstract shapes in patterns such as hexagons, keyboards, symbols representing the fields that employ AI, and more abstract shapes in the same blue colour palette. The more abstract examples were used in more philosophical spaces, while the more business-orientated meme remixes were found more often on business, policy, and technology-focused websites, suggesting active choice in aligning the specific AI Creation meme with the location in which it was used. These were frequently spaces commonly thought of as ‘secular’ – technology and business publications, business consultancy firms, blog posts about fintech, bitcoin, eCommerce, or the future of eCommerce, or the future of work. What then of the distinction between the religious and the secular?

That the original Creation of Adam is a religious image is without question – although it is obviously tied to a specific view of a monotheistic god. As a part of the larger work in the Sistine Chapel, it was intended to “introduce us to the world of revelation”, according to Pope John Paul II (1994). But such images are not merely broadcasting a message; meaning-making is an interactive event where the “spectator’s well of previous experiences” interplays with the object itself (Helmers 2004, p. 65). When approaching an AI Creation Meme, we bring our own experiences and assumptions, including the cultural memory of the original form of the image and its message of monotheistic creation. This is obviously culturally specific, and we might think about what a religious AI Creation Meme from a non-monotheistic faith would look like, as well as who is being excluded in this imaginary of the creation of AI. But this particular artwork has had an impact across the world. Even in the most remixed form, we know broadly who is meant to be the Creator and who is the Created, and that this moment is intended to be the very act of Creation.

Some of the AI Creation Memes even give greater emphasis to this moment, with the addition of a ‘spark of life’ between the human hand and the AI hand. The cultural narrative of the ‘spark of life’ likely begins with the scientific works of Luigi Galvani (1737–1789), who experimented with animating dead frogs’ legs with electricity and likely inspired Mary Shelley’s Frankenstein. In the 19th century, the ‘spark of life’ then became part of the account of the emergence of all life on earth from the ‘primordial soup’ of “ammonia and phosphoric salts, light, heat, electricity etc.” (Darwin 1871). Grieser also noted such sparks in her work on ‘Blue Brain’ imagery in neuroscience, arguing that such motifs can be seen as perpetuating the aesthetic forms of a “religious history of electricity”, which involves visualising conceptions of communication with the divine (Grieser 2017, p. 253).

Finding such aesthetics, informed by ideology, in what are commonly thought of as ‘secular’ spaces, problematises the distinction between the secular and the religious. In the face of solid evidence against a totalising secularisation and in favour of religious continuity and even flourishing, some interpretations of secularisation have instead focused on how religions have lost control over their religious symbols, rites, narratives, tropes and words. So, we find figures in AI discourse such as Ray Kurzweil being proclaimed ‘a Prophet’, or people online describing themselves as being “Blessed by the Algorithm” when having a particularly good day as a gig economy worker or a content producer, or in general (Singler 2020). These are the religious metaphors we also live by, to paraphrase Lakoff and Johnson (1980).

The virality of humour and memetic culture is also at play in the AI Creation Meme. I’ve mentioned some of the examples where the original Creation of Adam is remixed with other pop culture elements, leading to absurdity (the satsuma creation meme is a new favourite of mine!). The AI Creation Meme is perhaps more ‘serious’ than these, but we might see the same kind of context-based humour being expressed through the incongruity of replacing Adam with an AI. Humour, though, can lead to legitimation through a snowballing effect, as something that is initially flippant or humorous can become an object that is pointed to in more serious discourse. I’ve previously made this argument in relation to New Religious Movements that emerge from jokes or parodies of religion (Singler 2014), but it is also applicable to religious imagery used in unexpected places that gets a conversation started or informs the aesthetics of an idea, such as AI.

The AI Creation Meme also inspires thoughts of what is being created. The original Creation of Adam is about the origin of humanity. In the AI Creation Meme, we might be induced to think about the origins of post-humanity. And just as the original Creation of Adam leads us to think on fundamental existential questions, the AI Creation Meme partakes of posthumanism’s “repositioning of the human vis-à-vis various non-humans, such as animals, machines, gods, and demons” (Sikora 2010, p. 114), and it leads us into questions such as ‘Where will the machines come from?’, ‘What will be our relationship with them?’, and, returning to the apocalyptic, ‘What will be at the end?’. Subsequent calls for our post-human ‘Mind Children’ to spread outwards from the earth might be critiqued as the “seminal fantasies of [male] technology enthusiasts” (Boss 2020, p. 39), especially as, as we have noted, the AI Creation Meme tends to show ‘the Creator’ as a white male.

However, there are opportunities in critiquing these tendencies and tropes; as with the post-human narrative, we can be alert to what Graham describes as the “contingencies of the boundaries by which we separate the human from the non-human, the technological from the biological, artificial from natural” (2013, p. 1). Elsewhere I have remarked on the liminality of AI itself and how we might draw on the work of anthropologists such as Victor Turner and Mary Douglas, as well as the philosopher Julia Kristeva, to understand how AI is conceived of, sometimes apocalyptically, as a ‘Mind out of Place’ (Singler 2019) as people attempt to understand it in relation to themselves. Paying attention to where and how we force such liminal beings and ideas into specific shapes, and what those shapes are, can illuminate our preconceptions and biases.

Likewise, the common distinction between the secular and the religious is problematised by the creative remixing of the familiar and the new in the AI Creation Meme. For some, a boundary between these two ‘domains’ is a moral necessity; some see religion as a pernicious irrationality that should be secularised out of society for the sake of reducing harm. There can be a narrative of collaboration in AI discourse, a view that the aims of AI (the development and improvement of intelligence) and the aims of atheism (the end of irrationalities like religion) are sympathetic and build cumulatively upon each other. So, for some, illustrating AI with religious imagery can be anathema. Whether or not we agree with that stance, we can use the AI Creation Meme as an example to question the role of such images in how the public comes to trust or distrust AI. For some, AI as a god or as the ‘child’ of humankind is a frightening idea. For others, it is reassuring and utopian. In either case, this kind of imagery might obscure the reality of current AI’s very un-god-like flaws, the humans currently involved in making and implementing AI, and what biases these humans have that might lead to very real harms.


Bibliography

Boss, Jacob 2020. “For the Rest of Time They Heard the Drum.” In Theology and Westworld. Edited by Juli Gittinger and Shayna Sheinfeld. Lanham, MD: Rowman & Littlefield.

Darwin, Charles 1871. “Letter to Joseph Hooker.” in The Life and Letters of Charles Darwin, Including an Autobiographical Chapter. London, UK: John Murray, vol. 3, p. 18.

Graham, Elaine 2013. “Manifestations of The Post-Secular Emerging Within Discourses Of Posthumanism.” Unpublished Conference Presentation Given at the ‘Imagining the Posthuman’ Conference at Karlsruhe Institute of Technology, July 7–8. Available online: http://hdl.handle.net/10034/297162 (accessed 3 April 2020).

Grieser, Alexandra 2017. “Blue Brains: Aesthetic Ideologies and the Formation of Knowledge Between Religion and Science.” In Aesthetics of Religion: A Connective Concept. Edited by A. Grieser and J. Johnston. Berlin and Boston: De Gruyter.

Helmers, Marguerite 2004. “Framing the Fine Arts Through Rhetoric”. In Defining Visual Rhetoric. Edited by Charles Hills and Marguerite Helmers. Mahwah: Lawrence Erlbaum, pp. 63–86.

Lakoff, George, and Johnson, Mark (1980) Metaphors we Live by, Chicago, USA: University of Chicago Press

Pope John Paul II. 1994. “Entriamo Oggi”, homily preached in the mass to celebrate the unveiling of the restorations of Michelangelo’s frescoes in the Sistine Chapel, 8 April 1994, available at http://www.vatican.va/content/john-paul-ii/en/homilies/1994/documents/hf_jpii_hom_19940408_restauri-sistina.html (accessed on 19 May 2020)

Sikora, Tomasz 2010. “Performing the (Non) Human: A Tentatively Posthuman Reading of Dionne Brand’s Short Story ‘Blossom’”. Available online: https://depot.ceon.pl/handle/123456789/2190 (accessed 30 March 2020).

Singler, Beth 2020. “‘Blessed by the Algorithm’: Theistic Conceptions of Artificial Intelligence in Online Discourse” In Journal of AI and Society. doi:10.1007/s00146-020-00968-2.

Singler, Beth 2019. “Existential Hope and Existential Despair in AI Apocalypticism and Transhumanism” in Zygon: Journal of Religion and Science 54: 156–76.

Singler, Beth 2014 “‘SEE MOM IT IS REAL’: The UK Census, Jediism and Social Media”, in Journal of Religion in Europe, (2014), 7(2), 150-168. https://doi.org/10.1163/18748929-00702005

AI WHAT’S THAT SOUND? Stories and Sonic Framing of AI

An artistically distorted image of colorful sound waves containing no robots or other clichéd representations of AI

The ‘Better Images of AI’ project is so important because, typically, portrayals of AI reinforce established and polarised views, which can distract from the pressing issues of today. Yet we rarely question how AI sounds…

We are researching the sonic framing of AI narratives. In this blog post, we ask, in what ways does a failure to consider the sonic framing of AI influence or undermine attempts to broaden public understanding of AI? Based on our preliminary impressions, we argue that the sonic framing of AI is just as important as other narrative features and propose a new programme of research. We use some brief examples here to explore this.

The role of sonic framing in AI narratives and public perception

Music is useful. We employ music every day to change how we feel, how we think, to distract us, to block out unwanted sound, to help us run faster, to relax, to help us understand, and to send signals to others. Decades of music psychology research have already parsed the many roles music can serve in our everyday lives. Indeed, the idea that music is ‘functional’ or somehow useful has been with us since antiquity. Imagine receiving a cassette tape in the post, filled with messages of love: music transmits information and messages. Music can also be employed to frame how we feel about things. Or, written another way, music can manipulate how we feel about certain people, concepts, or things. As such, when we decide to use music to ‘frame’ how we wish a piece of storytelling to be perceived, attention and scrutiny should be paid to the resonances and emotional overtones that music brings to a topic. AI is one such topic, and a topic that is heavily subject to hype. This is arguably an inevitable condition of innovation, at least at its inception, but while the future with AI is so clearly shaped by stories told about AI, the music chosen may also ‘obscure views of the future.’

Affective AI and its role in storytelling

30 years ago, documentarian Michael Rabiger quite literally wrote the book on documentary filmmaking. Now in its 7th edition, Directing the Documentary explores the role and responsibility of the filmmaker in presenting factual narratives to an audience. Crucially, Rabiger discusses the use of music in documentary film, saying it should never be used to ‘inject false emotion’, thus giving the audience an unreal, amplified or biased view of proceedings. What is the function of a booming, calamitous impact sound signalling the obliteration of all humankind at the hands of a robot, if not to inject falsified or heightened emotion? Surely this serves only to reinforce dominant narratives of fear and robot uprising – the stuff of science fiction. If we are to live alongside AI, as we already do, we must consider ways to promote positive emotions that move us away from the human vs machine tropes which are keeping us, well, stuck.

Moreover, we wonder about the notions of authenticity, transparency and explainability. Despite attempts to increase AI literacy through citizen science and initiatives around AI explainability, documentaries and think pieces that promote public engagement with AI and purport to promote ‘understanding’ are often riddled with issues of authenticity or a lack of transparency, doing precisely nothing to educate the public. Complex concepts like neural nets, quantum computing, Bayesian probabilistic networks etc. must be reduced (necessarily so) to a level at which a non-specialist viewer can glean some understanding of the topic. In this coarse retelling of ‘facts’, composers and music supervisors have an even more crucial role in aiding nuanced comprehension; yet we find ourselves faced with the current trend for bombast, extravagance and bias when it comes to soundtracking AI. Indeed, just as attention needs to be paid to those who are creating AI technologies to mitigate a creeping bias, attention also needs to be paid to those who are composing the music, for the same reasons.

Eerie AI?

Techno-pessimism is reinforced by portrayals of AI in visual and sound media that are suggestive of a dystopian future. Eerie music in film, for instance, can reinforce a view of AI uprising or express some form of subtle manipulation by AI agents. Casting an ear over the raft of AI documentaries in recent years, we can observe a trend for approaches to sonic framing which reinforce dominant tropes. At the extreme, Mark Crawford’s original score for Netflix’s The Social Dilemma (which is part documentary, part drama) is a prime example of this in action. A track titled ‘Am I Really That Bad?’ begins as a childish waltz before gently morphing into a disturbing, carnival-esque horror soundtrack. The following track, ‘Server Room’, is merely a texture full of throbbing basses, Hitchcock-style string screeches, atonal vibraphones, and rising tension that serves only to make the listener uncomfortable. Alternatively, ‘Theremin Lullaby’ offers up luscious utopian piano textures Max Richter would be proud of, before plunging us into ‘The Sliding Scale’, a cut that could come straight from Tron: Legacy with its chugging bass and blasts of noise and static. Interestingly, in a behind-the-scenes interview with the composer, we learn that the ‘expert’ cast of The Social Dilemma were interviewed and guided the sound design. However, the film received much criticism for being sensationalist, and the cast themselves were criticised as former tech giant employees hiding in plain sight. If these unsubtle, polarised positions are the only sonic fare on offer, we should be questioning who is shaping the music and the extent to which it is being used to actively manipulate audience impressions of AI.

Of course, there are other forms of story and documentaries about AI which are less subject to dramatisation. Some examples exist where sound designers, composers and filmmakers are employing the capabilities afforded by music to help demonstrate complex ideas and support the experience of the viewer in a nuanced manner. A recent episode of the BBC’s Click programme uses a combination of image and music to demonstrate supervised machine learning techniques to great effect. Rather than the textural clouds of utopian AI or the dystopian future hinted (or screamed) at by overly dramatic Zimmer-esque scores, the composer Bella Saer and engineer Yoad Nevo create a musical representation of the images, providing positive and negative aural feedback for the machine learning process. Here, the music transforms into a sonic representation of the processes we are witnessing being played out on the screen. Perhaps this represents the kinds of narratives society needs.

Future research

We don’t yet have the answers, only impressions. It remains a live research and development question as to how far sonic framing influences public perception of AI and we are working on documentary as a starting point. As we move closer to understanding the influence of representation in AI discourse, it surely becomes a pressing matter. Just as the BBC is building and commissioning an image repository of more inclusive and representative images of AI, we hope to provoke discussion about how we can bring together creative and technology industries to reframe how we audibly communicate and conceptualise AI.

Still, a question remains about the stories being told about AI, who is telling them and how they are told. Going forward, our research will investigate and test these ideas, by interviewing composers and sound designers of AI documentaries. As for this blog, we encourage you to pay attention to how AI sounds in the next story you are told about AI or when you see an image. We call for practitioners to dig a little deeper when sonically framing AI.


About us

Dr Jenn Chubb (@JennChubb) is a Research Fellow at the University of York, now with XR Stories. She is interested in all things ethics, science and stories. Jenn is researching the sonic framing of AI in narratives and sense-making. Jenn plays deliberately heavy and haunting music in a band called This House is Haunted.

Dr Liam Maloney (@liamtmaloney) is an Associate Lecturer in Music & Sound Recording at the University of York. Liam is interested in music, society, disco, and what streaming is doing to our listening habits. When he has a minute to spare he also makes ambient music.

Jenn and Liam decided not to use any robot-related images. Title image “soundwaves” by seth m (CC BY-NC-ND 2.0)

Do robots dream of unnecessary appliances?

A typical bad stockphoto of AI showing a robotic hand typing on a computer keyboard

The following is an excerpt from the full article on aimyths.org by Daniel Leufer: https://www.aimyths.org/ai-equals-shiny-humanoid-robots


So what makes a robot picture terrible? Well, there seem to be a few typical ways in which things can go wrong, so here is a non-exhaustive list of the main offenders:

  • absurd sexualisation & perpetuation of gender stereotypes
  • default whiteness
  • robots performing ridiculous activities
  • and just generally cringy-looking robots.

A less serious, but no less prevalent, form of terribleness can be seen in the plethora of ridiculous images of robots using electronic appliances for which they could have no conceivable need. Among the most common offenders are:

Robots typing on keyboards…

A typical bad stockphoto of AI showing a robotic hand typing on a computer keyboard

…robots using calculators (presumably doing their taxes for the robo tax)…

A typical bad stockphoto of AI showing two robotic hands using a calculator and a pen

…robots wearing headphones and using laptops in the classic ‘call centre’ image…

A typical bad stockphoto of AI showing three humanoid white robots sitting at tables in front of a laptop

…robots using blackboards…

A typical bad stockphoto of AI showing a humanoid white robot in front of a blackboard filled with mathematical equations

…and robots wearing headphones, typing on keyboards and having loads of flying neon ‘internet symbols’ emerging from their chest:

A typical bad stockphoto of AI showing a humanoid white robot wearing headphones and typing on a keyboard, with bluish icons surrounding it in mid-air

We have even found a robot using a stethoscope, an image into which the creator somehow managed to shoehorn a semi-naked woman just to make things more unnecessarily terrible:

A bad stockphoto of AI showing a robot using a stethoscope on a human being with transparent skin

Read the full essay here: https://www.aimyths.org/ai-equals-shiny-humanoid-robots