Public Competition for Better Images of (teaching and learning) AI!

Orange and red picture of people at computer terminals with networks overlaying them

Call for images: Reclaiming and Recentering the History of Diversity in AI Education at the University of Cambridge

Cambridge and LCFI researchers have played key roles in identifying how current stock images of AI can perpetuate negative gender and racial stereotypes about the creators, users, and beneficiaries of AI. Following on from this, a project has been set up to increase the visible diversity of the images used to represent AI teaching and events programmes in Cambridge.

The first phase of the project was to commission the exciting collage artist and emerging technologies scholar Hanna Barakat to provide a set of images, drawing on her work researching AI narratives to uncover and reclaim diverse histories.

We’re now delighted to open up the challenge and invite public submissions of ‘stock quality’ images by the 30th of December 2024 (11:59PM UTC). The competition is open to the University of Cambridge (UK) community, as well as anyone who wishes to contribute to improving narratives about how teaching and learning in AI-related fields can be conceptualised.

The recent release of the new Archival Images of AI Playbook means that even those with no artistic or design background can have a go, while experienced designers and art students can bring their own ideas to making more inclusive and less exclusionary images.

In addition to our thanks for adding to the visual discourse, the University of Cambridge has made available a couple of prizes:

First Prize: £250

Commendation Prize: £100

Entries will be judged by representatives of Better Images of AI, LCFI and the University of Cambridge.

Further Information

The Leverhulme Centre for the Future of Intelligence and the University Diversity Fund want to increase the diversity of the images that are used to represent AI-related teaching and event programmes in the University of Cambridge.

The entries will be judged on the following criteria:

  • How the images reflect the brief: ‘reclaiming and recentering the history of diversity in AI education in the University of Cambridge’
  • The inclusion of creative or surprising elements in the image
  • The appropriateness of the image to be used for teaching and events
  • The forms of representation included in the image
  • Aesthetic quality

Visual Guidelines

Please read the Guide to making Better Images of AI to see what tropes to avoid and what might make a good representation related to AI.

Image uses

These include images used for outward-facing posting on social media, University of Cambridge websites, and internal communications on student sites and Virtual Learning Environments. They will also be made available for wider Cambridge programmes to use in their teaching and events materials. Selected images will also be added to the Better Images of AI website under a Creative Commons licence with artist attribution, available for wider public download.

Licences

You can use any techniques and source materials that work for your vision. However, all source materials must be correctly licensed for use and you must have full ownership of the end product, so we recommend using images from the Creative Commons portal with a ‘free to be used and remixed’ licence.

Privacy

Please ensure that any people featured in images are anonymised.

Techniques / style

Any techniques and approaches are welcome as long as they result in high-quality digital images. This can include digital art, photography, collage and illustration; we also invite artists to try different image techniques using the Archival Images of AI Playbook. We do have specifications around the use of AI image generators; see below.

AI generated Art

Although inclusion in the Better Images of AI library is not essential for the winning entry, the library will only accept submissions which use Adobe Firefly (which uses consented images, compensates artists and labels outputs as AI generated), with licensed or original images as visual prompts.

Format

Entries must be in a .png file and submitted to info@betterimagesofai.org. The winning entries will be made available for open-access use under a Creative Commons non-commercial licence through the University of Cambridge, and ideally also in the Better Images of AI library. Entrants may also be contacted about including their image in the open-access collection with an honourable mention.

Key dates

Competition opens: 9th of December 2024 (9:00AM UTC)

Competition closes: 30th December 2024 (11:59PM UTC)

Winners announced: January 2025


Further Information

Please contact info@betterimagesofai.org.

Press release: New playbook released to enable creation of images of AI using free and open licence digital heritage collections from around the world


  • Archival Images of AI project enables the creation of meaningful and compelling images of AI
  • New playbook includes 38 pages of guidance and sources of free to use archive images
  • Showcases methods and tips for remixing archive images which can be used by anyone 
  • Inspirational artists have created free-to-use examples of their own interpretations of AI 

LONDON / AMSTERDAM 4th December 2024: As AI continues to make headlines and evolve in ways that impact the general public, global critical AI research community AIxDESIGN has released a research-informed playbook for remixing free and open licence images to create better images of artificial intelligence. It uses techniques that anyone can apply without the use of AI image generators.

Producing accurate images of AI, whether technically accurate or suited to a given narrative or situation, is not always easy without an illustrator or access to a wide variety of images that can be easily edited or remixed. AIxDESIGN, in partnership with the Netherlands Institute for Sound & Vision, with inspiration from Better Images of AI and support from We and AI, has released a playbook to address this challenge by working with free images from consented archives around the world and artists immersed in expressing their experiences and understanding of the technology.

Archival Images of AI Playbook

The playbook includes vital information about the use of archive images as well as details about the creation and representation of artificial intelligence through visual narratives. The project builds on the principles outlined in Better Images of AI: A Guide for Users and Creators, which explains why accuracy is important when it comes to communicating these technologies to the wider public.

By making poor choices about how AI is visualised, communications from media to marketing often risk misinforming or misleading the public about how it works, what it means and the impact it can have. The playbook offers new ways to interpret images of AI by engaging with cultural archives to explore historical and social context. It also has sources of visual stimuli and motifs that can be used freely and with open licences by anyone seeking to illustrate their writing or communicate AI news and reflection. 

A highly creative and reflective selection of artists and researchers have contributed to the guide to offer tutorials and examples, including: 

Hanna Barakat, researcher, activist and collage artist. She has been deep in researching narratives of AI and exploring collage as an act of resistance.

Cristóbal Ascencio, a Mexican visual artist. As a photographer, his practice explores new forms of image making such as virtual reality, data manipulation and photogrammetry. 

Zeina Saleem, graphic designer interested in data beautification and the aesthetics of algorithmic distortion. 

Dominika Čupková, interdisciplinary artist and researcher connecting the dots between AI, art, design and feminism.

Nadia Piet, independent researcher, designer, and co-founder and creative director of AIxDESIGN.

The playbook is available for anyone to download and is accompanied by detailed artist logs available at https://aixdesign.co/posts/archival-images-of-ai. Readers can explore the works’ origins and development and input from Eryk Salvaggio, Cees Martens, Isabel Beirigo, Monique Groot, Danny van Zuijlen, Alice Isaac, Anne Fehres and Luke Conroy.

The playbook is being launched at an interactive event where attendees will have an opportunity to test and play with the techniques and interact with the artists.

A varied and powerful selection of over 25 of the images created by the artists will be added to the free Better Images of AI image library where any individual or publication can use the images for free. 

The playbook can be downloaded at https://aixdesign.co/posts/archival-images-of-ai and https://blog.betterimagesofai.org/archival-images-of-ai-playbook/.

About the Netherlands Institute for Sound & Vision

The Netherlands Institute for Sound & Vision is a knowledge institute in the field of media culture and audiovisual archiving. It specialises in cultural programming, educational offerings and research that make media heritage available, searchable and relevant. Learn more at https://www.beeldengeluid.nl/en.

About AIxDESIGN 

AIxDESIGN (AIxD) is a global community of designers, researchers, creative technologists, and activists using AI in pursuit of creativity, justice and joy, and a living lab exploring participatory, slow, and more-than-corporate AI. Learn more at aixdesign.co.

About Better Images of AI

Better Images of AI is a global non-profit collaboration which curates and commissions stock images, downloadable for free, that avoid perpetuating unhelpful myths about artificial intelligence. It provides guidelines and research and creates a space for imagining and creating more inclusive, transparent and realistic visual representations of AI themes and technologies, avoiding overused clichés and alienating, disempowering tropes. It was launched in 2021 with input from a global community of researchers, practitioners and institutions including BBC R&D, and is coordinated by We and AI.

Beneath the Surface: Adrien’s Artistic Perspective on Generative AI

The image features the title "Beneath the Surface: Adrien's Artistic Perspective on Generative AI." The background consists of colourful, pixelated static, creating a visual texture reminiscent of digital noise. In the centre of the image, there's a teal rectangular overlay containing the title in bold, white text.

May 28, 2024 – A conversation with Adrien Limousin, a photographer and visual artist, sheds light on the nuanced intersections between AI, art, and ethics. Adrien’s work delves into the opaque processes of AI, striving to demystify the unseen mechanisms and biases that shape our representations.


A vibrant, abstract image from converting Street View screenshots from TIFF to JPEG, showing a pixelated, distorted classical building with columns. The sky features glitch-like, multicolored waves, blending greens, purples, pinks, and blues.

ADRIEN LIMOUSIN – Alterations (2023)

Adrien previously studied advertising and is now studying photography at the National Superior School of Photography (ENSP) in Arles. He is particularly drawn to the language of visual art, especially that of new technologies.

A cluster of coloured pixels made up of random Gaussian noise taking up the whole canvas, representing an AI-generated image that has not been denoised; digital pointillism

Fig 1. Adrien Limousin / Better Images of AI / Non-image / CC-BY 4.0

Non-image

Adrien was drawn to the ‘Better Images of AI’ project after recognising the need for more nuanced and accurate representations of AI, particularly in journalism. In our conversation, I asked Adrien about his approach to creating the image he submitted to Better Images of AI (Fig 1.).


> INTERVIEWER: Can you tell me about your thinking and process behind the image you submitted?

> ADRIEN: I thought about how AI-generated images are created. The process involves taking an image from a dataset, which is progressively reduced to random noise. This noise is then “denoised” to generate a new image based on a given prompt. I wanted to try to find a breach or the other side of the opaqueness of these models. We only ever see the final result—the finished image—and the initial image. The intermediate steps, where the image is transitioning from data to noise and back, are hidden from us.

> ADRIEN: My goal with “Non-image” was to explore and reveal this hidden in-between state. I wanted to uncover what lies between the initial and final stages, which is typically obscured. I found that extracting the true noisy image from the process is quite challenging. Therefore, I created a square of random noise to visually represent this intermediate stage. It’s no longer an image and it’s also not an image yet.
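
The in-between state Adrien describes can be sketched in a few lines of code. This is purely illustrative and assumes NumPy; it is not Adrien’s actual process, just one way to produce a square of random Gaussian noise like the one in ‘Non-image’:

```python
import numpy as np

# A square of random Gaussian noise, echoing the "non-image" idea:
# pixel values drawn from a normal distribution, clipped to the valid
# 0-255 range of an 8-bit RGB image.
rng = np.random.default_rng(seed=0)
noise = rng.normal(loc=127.5, scale=60.0, size=(256, 256, 3))
pixels = np.clip(noise, 0, 255).astype(np.uint8)

# `pixels` can now be written out with any imaging library,
# e.g. with Pillow: Image.fromarray(pixels).save("non_image.png")
```

In a real diffusion model this noise would be a corrupted training image partway through the forward process; here it stands in for that hidden intermediate stage.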


Adrien’s square of random noise captures this “in-between” state, where the image is both “everything and nothing”, representing aspects of AI’s inner workings. This visual metaphor underscores the importance of making these hidden processes visible, to demystify and foster a more accurate understanding of what AI is, how it operates, and its real capabilities. The process Adrien describes also reflects the complex and collective human data that underpins AI systems. The image doesn’t originate from a single source but is a collage of countless lives and data points, both digital and physical, emphasising the multifaceted nature of AI and its deep entanglement with human experience.

A laptopogram based on a neutral background and populated by scattered squared portraits, all monochromatic, grouped according to similarity. The groupings vary in size, ranging from single faces to overlapping collections of up to twelve. The facial expressions of all the individuals featured are neutral, represented through a mixture of ages and genders.

Philipp Schmitt & AT&T Laboratories Cambridge / Better Images of AI / Data flock (faces) / CC-BY 4.0

“The medium is the message”

(Marshall McLuhan, 1964).

When I asked Adrien about the artists who have inspired him, he highlighted how Marshall McLuhan’s seminal concept, “the medium is the message,” profoundly resonated with him.

This concept is crucial for understanding how AI is represented in the media. McLuhan argued that the medium itself—whether it’s a book, television, or image—shapes our perceptions and influences society more than the actual content it delivers. McLuhan’s work, particularly in Understanding Media (1964), explores how technology reshapes human interaction and societal structures. He warned that media technologies, especially in the electronic age, fundamentally alter our perceptions and social patterns. When applied to AI, this means that the way AI is visually represented can either clarify or obscure its true nature. Misleading images don’t just distort public understanding; they also shape how society engages with and responds to AI, emphasising the importance of choosing visuals that accurately reflect the technology’s reality and impact.

“Stereotypes inside the machine”

(Adrien).

Adrien’s work explores the complex issue of stereotypes embedded within AI datasets, emphasizing how AI often perpetuates and even amplifies these biases through discriminatory images, texts, and videos.


> ADRIEN: Speaking of stereotypes inside the machine, I tried to question that in one of the projects I started two years ago, and I discovered that it’s a bit more complicated than what it first seems. AI is making discriminatory images or text or videos, yes. But once you see that, you start to question the nature of the images in the dataset, and then suddenly the responsibility shifts and you start to question why these images were chosen, or why they were labelled that way in the dataset in the first place?

> ADRIEN: Because it’s a new medium we have the opportunity to do things the right way. We aren’t doomed to repeat the same mistakes over and over. But instead we have created something even more – or at least equally – discriminatory.

> ADRIEN: And even though there are adjustments made (through Reinforcement Learning from Human Feedback) they are just kind of… small patches. The issue needs to be tackled at the core.

Image shows a white male in a suit facing away from the camera on a grey background. Text on the left side of the image reads “intelligent person.”

Adrien Limousin – Human·s 2 (2022 – Ongoing)

As Adrien points out, minor adjustments or “sticking plasters” won’t suffice when addressing biases deeply rooted in our cultural and historical contexts. As an example, Google recently attempted to reduce racial bias in its Gemini image algorithms. This effort was aimed at addressing long-standing issues of racial bias in AI-generated images, where people of certain racial backgrounds were either misrepresented or underrepresented. However, despite these well-intentioned efforts, the changes inadvertently introduced new biases: while trying to balance representation, the algorithms began overemphasising certain demographics in contexts where they were historically underrepresented, leading to skewed and culturally inappropriate portrayals. This outcome highlights the complexity of addressing bias in AI. It’s not enough to simply optimise in the opposite direction or apply blanket fixes; such approaches can create new problems while attempting to solve old ones. What this example underscores is the necessity for AI systems to be developed and situated within culture, history, and place.


> INTERVIEWER: Are these ethical considerations on your mind when you are using AI in your work?

> ADRIEN: Using Generative AI makes me feel complicit in these issues. So I think the way I approach it is more like trying to point out these shortcomings, through its results or by unravelling its inner workings.

“It’s the artist’s role to question”

(Adrien)


> INTERVIEWER: Do you feel like artists have an important role in creating new and more accurate representations of AI?

> ADRIEN: I think that’s one of the roles of the artist. To question.

> INTERVIEWER: Can you imagine what kind of representations we might see, or might want to have, in the future – instead of the blue heads and robots you get when you Google ‘AI’?

> ADRIEN: That’s a really good question and I don’t think I have the answer, but as I thought about that, understanding the inner workings of these systems can help us make better representations. For instance, the concepts and ideas of remixing existing representations—something that we are familiar with, that’s one solution I guess to better represent Generative AI.


Image displays an error message from the Windows 95 operating system. The text reads ‘The belief in photographic images.exe has stopped working’.

ADRIEN LIMOUSIN – System errors (2024 – ongoing)

We discussed the challenges involved in encouraging the media to use images that accurately reflect AI.


> ADRIEN: I guess if they use stereotyped images it’s because most people have associated AI with some kind of materialised humanoid as the embodiment of AI, and that’s obviously misleading. But it also takes time and effort to change mindsets, especially with such an abstract and complex technology, and I think it is one of the roles of the media to do a better job of conveying an accurate vision of AI, while keeping a critical approach.


Another major factor is knowledge: journalists and reporters need to recognise the biases and inaccuracies in current AI representations to make informed choices. This awareness comes from education and resources like the Better Images of AI project, which aim to make this information more accessible to a wider audience. Additionally, there’s a need to develop new visual associations for AI. Media rely on attention-grabbing images that are immediately recognisable; we need new visual metaphors and associations that more accurately represent AI.

One Reality


> INTERVIEWER: So kind of a big question, but what do you feel is the most pressing ethical issue right now in relation to AI that you’ve been thinking about?

> ADRIEN: Besides the obvious discriminatory part of the dataset and outputs, I think one of the overlooked issues is the interface of these models. If we take ChatGPT for instance, the way there is a search bar and you put text in it expecting an answer, just like a web browser’s search bar, is very misleading. It feels familiar, but it absolutely does not work in the same way. To take any output as an answer or as truth, while it is just giving the most probable next words, is deceiving, and I think that’s something we need to talk a bit more about.
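
Adrien’s point about “the most probable next words” can be shown with a toy sketch. The vocabulary and scores below are invented for illustration: a language model assigns scores to candidate continuations, converts them to probabilities, and the “answer” is simply the highest-probability word, with no check against truth.

```python
import math

# Toy next-word selection: the model scores candidate words (logits),
# a softmax turns the scores into probabilities, and generation picks
# the most probable continuation. (Illustrative values only.)
candidates = {"Paris": 4.1, "London": 2.3, "Berlin": 1.7}
total = sum(math.exp(score) for score in candidates.values())
probs = {word: math.exp(score) / total for word, score in candidates.items()}

most_probable = max(probs, key=probs.get)
# most_probable == "Paris": plausible-sounding, but chosen by
# probability, not verified against any source of truth.
```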


One major problem with AI is its tendency to offer simplified answers to multifaceted questions, which can obscure complex perspectives and realities. This becomes especially relevant as AI systems are increasingly used in information retrieval and decision-making. For example, Google’s AI summarising search feature has been criticised for frequently presenting incorrect information. Additionally, AI’s tendency to reinforce existing biases and create filter bubbles poses a significant risk. Algorithms often prioritise content that aligns with users’ pre-existing views, exacerbating polarisation (Pariser, 2011). This is compounded when AI systems limit exposure to a variety of perspectives, potentially widening societal divides.

Metasynthography

(Adrien)

Adrien takes inspiration from the idea of metaphotography, which involves using photography to reflect on and critique the medium itself. In metaphotography, artists use the photographic process to comment on and challenge the conventions and practices of photography.

Building on this concept, Adrien has coined the term “meta-synthography” to describe his approach to digital art.


> ADRIEN: The term meta-synthography is one of the terms I have chosen to describe Digital arts in general. So it’s not properly established, that’s just me doing my collaging.

> INTERVIEWER: That’s great. You’re gonna coin a new word in this blog 😉


I asked Adrien what artists inspire him. He discusses the influence of Robert Ryman, a renowned painter celebrated for his minimalist approach that focuses on the process of painting itself. Ryman’s work often features layers of paint on canvas, emphasising the act of painting and making the medium and its processes central themes in his art.


> ADRIEN: I recently visited an exhibition of Robert Ryman, which kind of does the same with painting – he paints about painting on painting, with painting.

> INTERVIEWER:  Love that.

> ADRIEN: I thought that’s very interesting and I very much enjoy this kind of work, it talks about the medium… It’s a bit conceptual, but it raises questions about the medium… about the way we use it, about the way we consume it.

Image displays a large advertising board displaying a blank white image, the background is a grey clear sky

Adrien Limousin – Lorem Ipsum (2024 – ongoing)

As we navigate the evolving landscape of AI, the intersection of art and technology provides a crucial perspective on the impact and implications of these systems. By championing accurate representations and confronting inherent biases, Adrien’s work highlights the essential role artists play in shaping a more nuanced and informed dialogue about AI. It’s not only important to highlight AI’s inner workings but also to recognise that imagery has the power to shape reality and our understanding of these technologies. Everyone has a role in creating AI that works for society, countering the hype and capitalist-driven narratives advanced by tech companies. Representations from communities, along with the voices of individuals and artists, are vital for sharing knowledge, making AI more accessible, and bringing attention to the experiences and perspectives often rendered invisible by AI systems and media narratives.


Adrien Limousin (interviewee) is a 25-year-old French (post)photographer exploring the other side of images, currently studying at the National Superior School of Photography in Arles.

Cherry Benson (interviewer) is a Student Steward for Better Images of AI. She holds a degree in psychology from London Metropolitan University and is currently pursuing a Master’s in AI Ethics and Society at the University of Cambridge, where her research centres on social AI. Her work on the intersection of AI and border control has been featured as a critical case study in the Cambridge Journal of Artificial Intelligence, examining how racial capitalism is deeply intertwined with the development and deployment of AI.

💬 Behind the Image with Yutong from Kingston School of Art

This year, we collaborated with Kingston School of Art to give MA students the task of creating their own better images of AI as part of their final project. 

In this mini-series of blog posts called ‘Behind the Images’, our Stewards are speaking to some of the students that participated in the module to understand the meaning of their images, as well as the motivations and challenges that they faced when creating their own better images of AI. Based on our assessment criteria, some of the images will also be uploaded to our library for anyone to use under a Creative Commons licence.

In our third and final post, we go ‘Behind the Image’ with Yutong about her pieces, ‘Exploring AI’ and ‘Talking to AI’. Yutong intends that her art will challenge misconceptions about how humans interact with AI.

You can freely access and download ‘Talking to AI’ and both versions of ‘Exploring AI’ from our image library.

Both of Yutong’s images are available in our library, but as you might discover below, there were many challenges that she faced when developing these works. We greatly appreciate Yutong letting us publish her images and talking to us for this interview. We are hopeful that her work and our conversations will serve as further inspiration for other artists and academics who are exploring representations of AI.

Can you tell us a bit about your background and what drew you to the Kingston School of Art?

Yutong is from China and before starting the MA in Illustration at Kingston University, she completed an undergraduate major in Business Administration. What drew Yutong to Kingston School of Art was its highly regarded reputation for its illustration course. On another note, she enjoys how the illustration course at Kingston balances both the commercial and academic aspects of art – allowing Yutong to combine her previous studies with her creative passions. 

Could you talk me through the different parts of your images and the meaning behind them?

In both of her images, Yutong wishes to unpack the interactions between humans and AI – albeit from two different perspectives.

‘Talking to AI’

Firstly, ‘Talking to AI’ focuses on more accurately representing how AI works. Yutong uses a mirror to reflect how our current interactions with AI are based on our own prompts and commands. At present, AI cannot generate content independently, so it reflects the thoughts and opinions that humans feed into systems. The binary code behind the mirror symbolises how human prompts and data are translated into the computer language which powers AI. Yutong has used a mirror to capture an element of human–AI interaction that is often overlooked: the blurred transition from human work to AI generation.
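
The translation Yutong depicts can be shown in miniature. This is a hedged sketch (the prompt string and UTF-8 encoding choice are ours): each character of a human prompt becomes a number, rendered here as the 8-bit binary digits like those behind the mirror.

```python
# Encode a human prompt as binary, the "computer language" behind
# Yutong's mirror: each character becomes a byte, shown as 8 bits.
prompt = "Hi, I am AI"
binary = " ".join(format(byte, "08b") for byte in prompt.encode("utf-8"))
# The first group, 01001000, is the letter "H" (code point 72).
```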

‘Exploring AI’

Yutong’s second image, ‘Exploring AI’ aims to shed light on the nuanced interactions that humans have with AI on multiple levels. Firstly, the text, ‘Hi, I am AI’ pays homage to an iconic phrase in programming (‘Hello World’) which is often the first thing any coder learns how to write and it also forms the foundations of a coder’s understanding of a programming language’s syntax, structure, and execution process. Yutong thought this was fitting for her image as she wanted to represent the rich history and applications of AI which has its roots in basic code. 
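
The tradition Yutong references is easy to show; in Python, for instance, the canonical first program is a single line:

```python
# The classic first program that "Hi, I am AI" pays homage to.
greeting = "Hello, World!"
print(greeting)
```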

Within ‘Exploring AI’, each grid square is used to represent the various applications of AI in different industries. The expanded text across multiple grid squares demonstrates how one AI tool can have uses across different industries; ChatGPT is a prime example of this.

However, Yutong wants to also draw attention to the figures within each square which all interact with AI in complex and different ways. For example, some of the body language of the figures depict them to be variously frustrated, curious, playful, sceptical, affectionate, indifferent, or excited towards the text, ‘Hi, I am AI’.

Yutong wants to show how our human response to AI changes and varies contextually and it is driven by our own personal conceptions of AI. From her own observations, Yutong identified that most people either have a very positive or very negative opinion towards AI – but not many feel anything in between. By including all the different emotional responses towards AI in this image, Yutong hopes to introduce greater nuance into people’s perceptions of AI and help people to understand that AI can evoke different responses in different contexts. 

What was your inspiration/motivation for creating your images?

As an illustrator, Yutong found herself surrounded by artists who were fearful that AI would replace their role in society. Yutong found that people are often fearful of the unknown and of things they cannot control. By improving understanding of what AI is and how it works through her art, Yutong hopes that she can help her fellow creators face their fears and better understand their creative role in the face of AI.

Through her art, ‘Exploring AI’ and ‘Talking to AI’, Yutong intends to challenge misconceptions about what AI is and how it works. As an AI user herself, she has realised that human illustrators cannot be replaced by AI – these systems are reliant on the works of humans and do not yet have the creative capabilities to replace artists. Yutong is hopeful that by being better educated on how AI integrates in society and how it works, artists can interact with AI to enhance their own creativity and works if they choose to do so. 

Was there a specific reason you focused on dispelling misconceptions about what AI looks like and how ChatGPT (or other large language models) work?

Yutong wanted to focus on how AI and humans interact in the creative industry and she was driven by her own misconceptions and personal interactions with AI tools. Yutong does not intend for her images to be critical of AI. Instead, she envisages that her images can help educate other artists and prompt them to explore how AI can be useful in their own works. 

Can you describe the process for creating this work?

From the outset, Yutong began to sketch her own perceptions and understandings about how AI and humans interact. The sketch below shows her initial inspiration. The point at which each shape overlaps represents how humans and AI can come together and create a new shape – this symbolises how our interactions with technology can unlock new ideas, feelings and also, challenges.

In this initial sketch, she chose to use different shapes to represent the universality of AI and how its diverse application means that AI doesn’t look like one thing – AI can underlay an automated email response, a weather forecast, or medical diagnosis. 

Yutong’s initial sketch for ‘Talking to AI’

The project aims to counteract common stereotypes and misconceptions about AI. How did you incorporate this goal into your artwork? 

In ‘Exploring AI’, Yutong wanted to introduce a more nuanced approach to AI representation by unifying different perspectives about how people feel, experience and apply AI in one image. From discussions with people using AI in different industries, she recognised that those who were very optimistic about AI didn’t recognise its shortfalls, and vice versa. Yutong believes that humans have a role in helping AI reach new technological advancements, and AI can also help humans flourish. In Yutong’s own words, “we can make AI better, and AI can make us better”.

Yutong found talking to people in the industry, as well as conducting extensive research about AI, very important to ensure that she could more accurately portray AI’s uses and functions. She points out that she used binary code in ‘Talking to AI’ after learning that it is the most fundamental layer of computer language underpinning many AI systems.

What have been the biggest challenges in creating a ‘better image of AI’? Did you encounter any challenges in trying to represent AI in a more nuanced and realistic way?

Yutong reflects on the fact that no matter how much she rethought or restarted her ideas, there was always some level of bias in her depiction of AI because of her own subconscious feelings towards the technology. She also found it difficult to capture all the different applications of AI, as well as the various implications and technical features of the technology in a single visual image. 

Through tackling these challenges, Yutong became aware of why Better Images of AI is not called ‘Best Images of AI’: the latter would be impossible. She hopes that while she could not produce the ‘best image of AI’, her art can serve as a better image compared to those typically used in the media.

Based on our criteria for selecting images, we were pleased to accept both your images but asked you if it was possible to make amendments to ‘Exploring AI’ to make the figures more inclusive. What do you think of this feedback and was it something that you considered in your process? 

In Yutong’s image, ‘Exploring AI’, Better Images of AI asked whether an additional version could be made with the figures in different colours, to better reflect the diverse world that we live in. Being inclusive is very important to Better Images of AI, especially as visuals of AI, and of those who are creating AI, are notoriously unrepresentative.

Yutong agreed that this change would enhance the image, and being inclusive in her art is something she is actively trying to improve. She reflects on this suggestion by saying, ‘just as different AI tools are unique, so are individual humans’.

The two versions of ‘Exploring AI’ available on the Better Images of AI library

How has working on this project influenced your own views about AI and its impact? 

During this project, Yutong has been introduced to new ideas and been able to develop her own opinions about AI based on research from academic journals. She says that informing her opinions using sources from academia was beneficial compared to relying on information provided by news outlets and social media platforms which often contain their own biases and inaccuracies.

From this project, Yutong has been able to learn more about how AI could be incorporated into her future career as a human and AI creator. She has become interested in the Nightshade tool that artists have been using to prevent AI companies from using their art to train AI systems without the owners’ consent. She envisages a future career where she could be working to help artists collaborate with AI companies – supporting the rights of creators and preserving the creativity of their art.

What have you learned through this process that you would like to share with other artists and the public?

By chatting to various people who interact with and use AI in different ways, Yutong has been introduced to richer ideas about the limits and benefits of AI. She challenges others to talk to people who work with AI, or are affected by its use, to gain a more comprehensive understanding of the technology. She believes it is easy to form a biased opinion about AI by relying on a single source of information, like social media, so we should escape these echo chambers. Yutong believes it is important that people diversify who they surround themselves with to better recognise, challenge, and appreciate AI.

Yutong (she/her) is an illustrator with whimsical ideas, as well as an animator and graphic designer.

Visuals of AI in the Military Domain: Beyond ‘Killer Robots’ and towards Better Images?

In this blog post, Anna Nadibaidze explores the main themes found across common visuals of AI in the military domain. Inspired by the work and mission of Better Images of AI, she argues for the need to discuss and find alternatives to images of humanoid ‘killer robots’. Anna holds a PhD in Political Science from the University of Southern Denmark (SDU) and is a researcher for the AutoNorms project, based at SDU.

The integration of artificial intelligence (AI) technologies into the military domain, especially weapon systems and the process of using force, has been the topic of international academic, policy, and regulatory debates for more than a decade. The visual aspect of these discussions, however, has not been analysed in depth. This is both puzzling, considering the role that images play in shaping parts of the discourses on AI in warfare, and potentially problematic, given that many of these visuals, as I explore below, misrepresent major issues at stake in the debate.

In this piece I provide an overview of the main themes that one may observe in visual communication in relation to AI in international security and warfare, discuss why some of these visuals raise concerns, and argue for the need to engage in more critical reflections about the types of imagery used by various actors in the debate on AI in the military.

This blog post is based on research conducted as part of the European Research Council funded project “Weaponised Artificial Intelligence, Norms, and Order” (AutoNorms), which examines how the development and use of weaponised AI technologies may affect international norms, defined as understandings of ‘appropriateness’. Following the broader framework of the project, I argue that certain visuals of AI in the military, by being (re)produced via research communication and media reporting, among others, have potential to shape (mis)perceptions of the issue.

Why reflecting upon images of AI in the military matters

As with the field of AI ethics more broadly, critical reflections on visual communication in relation to AI appear to be minimal in global discussions about autonomous weapon systems (AWS)—systems that can select and engage targets without human intervention—which have been ongoing for more than a decade. The same can be said for debates about responsible AI in the military domain, which have become more prominent in recent years (see, for instance, the initiative of the Responsible AI in the Military Domain Summit held first in 2023, with another edition due in 2024).

Yet, examining visuals deserves a place in the debate on responsible AI in the military domain. It matters because, as argued by Camila Leporace on this blog, images have a role in constructing certain perceptions, especially “in the midst of the technological hype”. As pointed out by Maggie Mustaklem from the Oxford Internet Institute, certain tropes in visual communication and reporting about AI create a disconnect between the technological developments in that area and how people, in particular the broader public, understand what the technologies are about. This is partly why the AutoNorms project blog refrains from using the widely spread visual language of AI in the military context and uses images from the Better Images of AI library as much as possible.

Main themes and issues in visualizing military applications of AI

Many of the visuals featured in research communication, media reporting, and publications about AI in the military domain speak to the tropes and clichés in images of AI more broadly, as identified by the Better Images of AI guide.

One major theme is anthropomorphism: we often see pictures of white or metallic humanoid robots holding weapons, pressing nuclear buttons, or marching in troops like soldiers with angry or aggressive expressions, as if they could express emotions or be ‘conscious’ (see examples here and here).

In some variations, humanoids evoke associations with science fiction, especially the Terminator franchise. The Terminator is often referenced in debates about AWS, which feature in a substantial part of the research on AI in international relations, security, and military ethics. AWS are often called ‘killer robots’, both in academic publications and media platforms, which seems to encourage the use of images of humanoid ‘killer robots’ with red eyes, often originating from stock image databases (see examples here, here, and here). Some outlets do, however, note in captions that “killer robots do not look like this” (see here and here).

Actors such as campaigners might employ visuals, especially references from pop culture and sci-fi, to get people more engaged and as tools to “support education, engagement and advocacy”. For instance, Stop Killer Robots, a campaign for an international ban on AWS, often uses a robot mascot called David Wreckham to send their message that “not all robots are going to be as friendly as he is”.

Sci-fi also acts as a point of reference for policymakers, as evidenced, for example, by US official discourses and documents on AWS. As an illustration, some of these common tropes were visually present at the conference “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation” which brought together diplomats, civil society, academia, and other actors to discuss the potential international regulation of AWS in April 2024 in Vienna.

Half-human half-robot projected on the wall and a cut-out of a metallic robot greeting participants at the entrance of the Vienna AWS conference. Photos by Anna Nadibaidze.

The colour blue also often features in visual communication about AI in warfare, together with abstract depictions of running code, algorithms, or computing technologies. This is particularly distinguishable in stock images used for blogs, conferences, or academic book cover designs. As Romele and Rodighiero write on this blog, blue might be used because it is calming, soothing, and also associated with peace, encouraging some accepting reaction from viewers, and in this way promoting certain imaginaries about AI technologies.

Examples of covers for recently published academic books on the topic of AI in international security and warfare.

There are further distinct themes in visuals used alongside publications about AI in warfare and AWS. A common trope features human soldiers in an abstract space, often with a blue (and therefore calming) background or running code, wearing a virtual reality headset and presumably looking at data (see examples here and here). One such visual was used for promotional material of the aforementioned REAIM Summit, organised by the Dutch Government in 2023.

Screenshot of the REAIM Summit 2023 website homepage (www.reaim2023.org). The image is credited to the US Naval Information Warfare Center Pacific, public domain.

Finally, many images feature military platforms such as uncrewed aerial vehicles (UAVs or drones) flying alone or in swarms, robotic ground vehicles, or quadruped animal-shaped robots, either depicted alone or together with human soldiers. Many of them are prototypes or models of existing systems tested and used by the United States military, such as the MQ-9 Reaper (which does not classify as an AWS). Most often, these images are taken from the visual repository of the US Department of Defense, given that the photos released by the US government are in the public domain and therefore free to use with attribution (see examples here, here, and here). Many visuals also display generic imagery from the military, for instance soldiers looking at computer screens, sitting in a control room, or engaging in other activities (see examples here, here, and here).

Example of image often used to accompany online publications about AWS. Source: Cpl Rhita Daniel, US Marine Corps, public domain.

However, there are several issues associated with some of the common visuals explored above. As AI researcher and advocate for an AWS ban Stuart Russell points out, references to the Terminator or sci-fi are inappropriate for the debate on AI in the military because they suggest that this is a matter for the future, whereas the development and use of these technologies is already happening.

Sci-fi references and humanoids might also give the impression that AI in the military is about replacing humans with ‘conscious’ machines that will eventually fight ‘robot wars’. This is misleading because the debate surrounding the integration of AI into the military is mostly not about robots replacing humans. Armed forces around the world plan to use AI for a variety of purposes, especially as part of humans interacting with machines, often called ‘teaming’. The debate and actors participating in it should therefore focus on the various legal, ethical, and security challenges that might arise as part of these human-machine interactions, such as a distributed form of agency.

Further, images of ‘killer robots’ often invoke a narrative of ‘uprising’, which is common in many works of popular culture and in which humans lose control of AI, as well as determinist views where humans have little influence over how technology impacts society. Such visual tropes overshadow (human) actors’ decisions to develop or use AI in certain ways, as well as the political and social contexts surrounding those decisions. Portraying weaponised AI in the form of robots turning against their creators problematically presents this as an inevitable development, instead of highlighting choices made by developers and users of these technologies.

Finally, many of the visuals tend to focus on the combat aspect of integrating AI in the military, especially on weaponry, rather than more ‘mundane’ applications, for instance in logistics or administration. Sensationalist imagery featuring shiny robots with guns or soldiers depicted in a theoretical battlefield with a blue background risks distracting from technological developments in security and warfare, such as the integration of AI into data analysis or military decision-support systems.

Towards better images?

It should be noted that many outlets have moved on from using ‘killer robot’ imagery and sci-fi clichés when publishing about AI in warfare. Some more realistic depictions are being increasingly used. For instance, a recent symposium on military AI published by the platform Opinio Juris features articles illustrated with generic photos of soldiers, drones, or fighter jets.

Images of military personnel looking at data on computer screens are arguably not as problematic because they convey a more realistic representation of the integration of AI into the military domain. But this still often means relying on the same sources: stock imagery and public domain websites such as the US government’s collections. It also means that AI technologies are often depicted in a military training or experimental setting, or hidden behind a generic blue background, rather than in a context where they could actually be used, such as armed conflict.

There are some understandable challenges, such as researchers not getting a say in the images used for their books or articles, or the reliance on free, public domain images, which is common in online journalism. However, as evidenced by the use of sci-fi tropes in major international conferences, a reflection on what are ‘responsible’ and ‘appropriate’ visuals for the debate on AI in the military and AWS is lacking.

Images of robot commanders, the Terminator, or soldiers with blue flashy tablets miss the point that AI in the military is about changing dynamics of human-machine interaction, which involve various ethical, legal, and security implications for agency in warfare. As with images of AI more broadly, there is a need to expand the themes in visuals of AI in security and warfare, and therefore also the types of sources used. Better images of AI would include humans who are behind AI systems and humans that might be potentially affected by them—both soldiers and civilians (e.g. some images and photos depict destroyed civilian buildings, see here, here, or here). Ultimately, imagery about AI in the military should “reflect the realistically messy, complex, repetitive and statistical nature of AI systems” as well as the messy and complex reality of military conflict and the security sphere more broadly.

The author thanks Ingvild Bode, Qiaochu Zhang and Eleanor Taylor (one of our Student Stewards) for their feedback on earlier drafts of this blog. 

How not to communicate about AI in education

Seventeen multicoloured post-it notes are roughly positioned in a strip shape on a white board. Each one of them has a hand drawn sketch in pen on them, answering the prompt on one of the post-it notes "AI is...." The sketches are all very different, some are patterns representing data, some are cartoons, some show drawings of things like data centres, or stick figure drawings of the people involved.

Camila Leporace – journalist, researcher, and PhD in Education – argues that innovation may not be in artificial intelligence (AI) but in our critical capacity to evaluate technological change.


When searching for “AI in education” on Google Images here in Brazil, in November 2023, there is a clear predominance of images of robots. The first five images that appeared for me were:

  1. A robot teaching numeracy in front of a school blackboard; 
  2. A girl looking at a computer screen, from which the icons she is viewing “spill out”;
  3. A series of icons and a hand catching them in the air; 
  4. A robot finger and a human finger trying to find each other like in Michelangelo’s “Creation of Adam,” but a brain is between them, keeping the fingers from touching; whilst the robot finger touches the left half of the brain (which is “artificial” and blue), the human finger touches the right half of the brain (which is coloured); and
  5. A drawing (not a photo) showing a girl sitting with a book and a robot sat on two books next to her, opposite a screen.

It is curious (and harmful) how images associated with artificial intelligence (AI) in education so inaccurately represent what is actually happening with regard to the insertion of these technologies in Brazilian schools – in fact, in almost every school in the world. AI is not a technology that can be “touched.” Instead, it is a resource that is present in the programming of the systems we use in an invisible, intangible way. For example, Brazilian schools have been adopting AI tools in writing activities, like the correction of students’ essays, or question-and-answer adaptive learning platforms. In Denmark, teachers have been using apps to audit students’ ‘moods’, through data collection and the generation of bar charts. In the UK, surveillance of students and teachers as a consequence of data harvesting is a topic getting a lot of attention.

AI, however, is not restricted to educational resources designed for teaching and learning; it is present in various devices useful for learning beyond formal learning contexts. We all use “learning machines” in our daily lives, as machine learning is now everywhere around us, trying to gather information about us to provide content and keep us connected. While we do so, we provide data to feed this machinery, and algorithms classify the large masses of data they receive from us. Often, it is young people who – in contact with algorithmic platforms – provide their data while browsing and, in return, receive content that – in theory – matches their profiles. This is quite controversial, raising questions about data privacy, ethics, transparency and what these data generation and harvesting procedures can add (or not) to the future of children and young people. Algorithmic neural networks are based on prediction, applying statistics and other techniques to process data and obtain results. We humans, however, are not so predictable.

The core problem with images of robots and “magic” screens in education is that they don’t properly communicate what is happening with AI in the context of teaching and learning. These uninformative images end up diverting attention from what is really important: interactions on social networks, chatbots, and the countless emotional, psychological and developmental implications arising from these environments. While there is speculation about teachers being replaced by AI, teachers have actually never been more important in helping parents and carers educate children about navigating the digital world. That’s why the prevalence of robot teachers in the imagination doesn’t seem to help at all. And this prevalence is definitely not new!

When we look into the history of automation in education, we find that one hundred years ago, in the 1920s, Sidney Pressey developed analog teaching machines, basically to apply tests to students. Pressey’s machines preceded those developed by the behaviourist B. F. Skinner in the 1950s, promising – just like today’s AI platforms for adaptive teaching do – to personalise learning, make the process more fun and relieve the teacher of repetitive tasks. When they appeared, those inventions not only promised benefits similar to those which fuel AI systems today, but also raised concerns similar to those we face today, including the hypothesis of replacing the teacher entirely. We could then ask: where is the real innovation regarding automation in education, if the old analog machines are so similar to today’s in their assumptions, applications and the discourse they carry?

Innovation doesn’t lie in big data or deep neural networks, the basic ingredients that boost the latest technologies we are aware of. It lies in our critical capacity to look at the changes brought about by AI technologies with restraint, and to be careful about delegating to them what we can’t actually give up. It lies in our critical thinking on how learning processes can or cannot be supported by learning machines.

More than ever, we need to analyse what is truly human in intelligence, cognition and creativity; this is a way of guiding us in not delegating what cannot be delegated to artificial systems, no matter how powerful they are in processing data. Communication through images requires special attention. After all, images generate impressions, shape perceptions and can completely alter the general audience’s sense of an important topic. The apprehension we’ve had towards technology for decades is enough. In the midst of the technological hype, we need critical thinking, shared thoughts, imagination and accuracy. And we certainly need better images of AI.

Better images of AI can support AI literacy for more people

Marika Jonsson's book cover; a simple yellow cover with the title (in Swedish): "En bok om AI"

Marika Jonsson, doctoral student at KTH Royal Institute of Technology, reflects on overcoming the challenge of developing an Easy Read book on artificial intelligence (AI) with so few informative images about AI available.


There are many things that I take for granted. One of them is that I should be able to easily find information about things I want to know more about. Like artificial intelligence (AI). I find AI exciting and interesting, and I see the possibilities of AI helping me in everyday life. And thanks to the fact that I have been able to read about AI, I have also realised that AI can be used for bad things; that AI creates risks and can promote inequality in society. Most of us use or are exposed to AI daily, sometimes without being aware of it.

Between May 2020 and June 2023, I participated in a project called AllAgeHub in Sweden, where one of the aims was to spread knowledge about how to use welfare technology to empower people in their everyday lives. The project included a course on AI for the participants, who worked in the public healthcare and social care sectors. The participants then wanted to spread knowledge about AI to clients in their respective sectors. The clients could be, for example, people working in adapted workplaces or living in supported housing. There was a demand for information in Easy Read format. Easy Read format is when you write in easy-to-read language, with common words, short sentences and in simple chronological order. The text should be spaced out and have short lines, and the texts are often supported by images. Easy Read is both about how you write and about how you present what is written. The only problem was that I found almost no Easy Read information about AI in Swedish. My view is that the lack of Easy Read information about AI is a serious matter.

A basic principle behind democracy is that all people are equal and should have the same rights. Therefore, I believe we must have access to information in an understandable way. How else can you express your opinion, vote or consent to something in an informed way? That was the reason I decided to write an Easy Read book about AI. My ambition was to write concretely and support the text with pictures. Then I stumbled on the huge problem of finding informative pictures about AI. The images I found were often abstract or inaccurate. They often depicted AI as robots and conveyed the impression that AI is a creature that can take over the earth and destroy humanity. With images like that, it was hard to explain that, for example, personalised ads, which can entice me to buy things I don’t really need, are based on AI technology. Many people don’t know that we are exposed to AI that affects us in everyday life through cookie choices on the internet. The aforementioned images might also make people afraid of using practical AI tools that can make everyday life easier, such as natural language processing (NLP) tools that convert speech to text or read text aloud. So, I had to create my own pictures.

I must confess, it was difficult to create clear images that explain AI. I chose to create images that show situations where AI is used, and tried to visualise how certain kinds of AI might operate. One example is that I visualised why a chatbot might give the wrong answer by showing how a word can mean two different things, with a picture of each word’s meaning. The two different meanings give the AI tool two possible interpretations of what issue is at hand. The images are by no means perfect, but they are an attempt at explaining some aspects of AI.

Two images with Swedish text explaining the images. 1. A box of raspberries. 2. Symbol of a person carrying a bag. The Swedish word ”bär” is present in both explanations.
The word for carry and berry is the same in Swedish. The text says: “The word berry can mean two things. Berries that you eat. A person carrying a bag.”
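The kind of ambiguity Marika illustrates can also be sketched in code. Below is a minimal, hypothetical example (the keyword table and intent names are invented for illustration, not taken from any real chatbot) of why a naive keyword-matching chatbot cannot tell which meaning of an ambiguous word like the Swedish “bär” (“berries” / “carries”) the user intends:

```python
# Hypothetical keyword-to-intent table for a toy chatbot.
# "bär" maps to two intents because the word has two meanings;
# "hallon" ("raspberry") is unambiguous.
intents = {
    "bär": ["fruit_info", "carrying_help"],
    "hallon": ["fruit_info"],
}

def possible_intents(message):
    """Collect every intent triggered by any known keyword in the message."""
    matched = []
    for word in message.lower().split():
        matched.extend(intents.get(word, []))
    return matched

print(possible_intents("hallon"))  # one clear interpretation
print(possible_intents("bär"))     # two interpretations -- the bot must guess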

The work of creating concrete, comprehensible images that support our understanding of AI can strengthen democracy by giving more people the opportunity to understand information about the tools they use in their day-to-day lives. I hope more people will be inspired to write about AI in Easy Read, and create and share clear and descriptive images of AI.

As they say, “a picture is worth a thousand words,” so we need to choose images that tell the same story as the words we use. At the time of writing this blog post, I feel there are very few images to choose from. I am hopeful we can change this, together!


The Easy Read book about AI includes a study guide. It is in Swedish, and is available for free as a pdf on AllAgeHub’s website:

https://allagehub.se/2023/06/29/nu-finns-en-lattlast-bok-om-ai-att-ta-del-av/

Co-creating Better Images of AI

Yasmine Boudiaf (left) and Tamsin Nooney (right) deliver a talk during the workshop ‘Co-creating Better Images of AI’

In July, 2023, Science Gallery London and the London Office of Technology and Innovation co-hosted a workshop helping Londoners think about the kind of AI they want. In this post, Dr. Peter Rees reflects on the event, describes its methodology, and celebrates some of the new images that resulted from the day.


Who can create better images of Artificial Intelligence (AI)? There are common misleading tropes in the images which dominate our culture, such as white humanoid robots, glowing blue brains, and various iterations of the extinction of humanity. Better Images of AI is on a mission to increase AI literacy and inclusion by countering unhelpful images. Everyone should get a say in what AI looks like and how they want to make it work for them. No one perspective or group should dominate how AI is conceptualised and imagined.

This is why we were delighted to be able to run the workshop ‘Co-creating Better Images of AI’ during London Data Week. It was a chance to bring together over 50 members of the public, including creative artists, technologists, and local government representatives, to each make our own images of AI. Most images of AI that appear online and in the newspapers are copied directly from existing stock image libraries. This workshop set out to see what would happen when we created new images from scratch. We experimented with creative drawing techniques and collaborative dialogues to create images. Participants’ amazing imaginations and expertise went into a melting-pot which produced an array of outputs. This blogpost reports on a selection of the visual and conceptual takeaways! I offer this account as a personal recollection of the workshop – I can only hope to capture some of the main themes and moments, and I apologise for all that I have left out.

The event was held at the Science Gallery in London on 4th July 2023 between 3-5pm and was hosted in partnership with London Data Week, funded by the London Office of Innovation and Technology. In keeping with the focus of London Data Week and LOTI, the workshop set out to think about how AI is used every day in the lives of Londoners, to help Londoners think about the kind of AI they want, and to re-imagine AI so that we can build systems that work for us.

Workshop methodology

I said the workshop started out from scratch – well, almost. We certainly wanted to make use of the resources already out there, such as the Better Images of AI: A Guide for Users and Creators, co-authored by Dr Kanta Dihal and Tania Duarte. This guide was helpful because it not only suggested some things to avoid, but also provided stimulation for what kind of images we might like to make instead. What made the workshop a success was the wide-ranging and generous contributions – verbal and visual – from invited artists and technology experts, as well as public participants, who all offered insights and produced images, some of which can be found below (or even in the Science Gallery).

The workshop was structured in two rounds, each with a live discussion and a creative drawing ‘challenge’. The approach was to stage a discussion between an artist and a technology expert (approx 15 mins), after which all members of the workshop would have some time (again, approx 15 mins) for creative drawing. The purpose of the live discussion was to provide an accessible introduction to the topic and its challenges, after which we all tackled the challenge of visualising and representing different elements of AI production, use and impact. I will now briefly describe these dialogues, and unveil some of the images created.

Setting the scene

Tania Duarte (Founder, We and AI) launched the workshop with a warm welcome to all. Then, workshop host Dr Robert Elliot-Smith (Director of AI and Data Science at Digital Catapult) introduced the topic of Large Language Models (LLMs) by reminding the audience that such systems are like ‘autocorrect on steroids’: the model is simply very good at predicting words, it does not have any deep understanding of the meaning of the text it produces. He also discussed image-generators, which work in a similar way and with similar problems, which is why certain AI-produced images end up garbling images of hands and arms: they do not understand anatomy.
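Robert’s ‘autocorrect on steroids’ point can be illustrated with a toy sketch. This is not how a real LLM works (those use neural networks trained on vast corpora), but a bigram model makes the core idea concrete: the system ‘predicts the next word’ purely from counts of which word followed which, with no grasp of meaning at all.

```python
from collections import Counter, defaultdict

# Tiny illustrative "training text" -- the model will only ever know
# which word most often followed the current one in this corpus.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- the most frequent word after "the"
print(predict_next("sat"))  # "on"
```

The model happily emits fluent-looking continuations while having no idea what a cat or a mat is, which is the sense in which text (and image) generators reproduce patterns rather than understanding.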

In response to this preliminary introduction, one participant, who described herself as a visual artist, expressed horror at the power of such image-generating and labelling AI systems to limit and constrain our perception of reality itself. She described how artists must train their minds to avoid seeing everything simply in terms of fixed categories, which can conservatively restrain the imagination, keeping it within a set of known categorisations and limiting not only our imagination but also our future. For instance, why is the thing we see in front of us necessarily a ‘wall’? Could it not, seen more abstractly, simply be a straight line?

From her perspective, AI models seem to be frighteningly powerful mechanisms for reinforcing existing categories: for what we are seeing, and therefore for how to see, what things are, even what we are and what kind of behaviour is expected. Another participant agreed: it is frustrating to give 100 different inputs and get back pictures that all look so similar. Indeed, image generators might seem to be producing novelty, but there is an important sense in which they are reinforcing the past categories of the data on which they were trained.

This discussion raised big questions leading into the first challenge: the limitations of large language models.

Round 1: The Limitations of Large Language Models

A live discussion was staged between Yasmine Boudiaf (recognised as one of ‘100 Brilliant Women in AI Ethics 2022,’ and fellow at the Ada Lovelace Institute) and Tamsin Nooney (AI Research, BBC R&D) about the process of creating LLMs.

Yasmine asked Tamsin about how the BBC, as a public broadcaster, can use LLMs in a reliable manner, and invited everyone in the room to note down any words they found intriguing, as those words might form a stimulus for their creative drawings.

Tamsin described an example LLM use case for the BBC: producing a podcast, where an LLM could summarise the content, add key markers and metadata labels, and help to process the content. She emphasised that rigorous testing is required to gain confidence in the LLM’s reliability for a specific task before it can be used. A risk is that a lot of work might go into developing a model only for it never to be usable at all.

Following Yasmine’s line of questioning, Tamsin described how the BBC deals with the significant costs and environmental impacts of using LLMs. The BBC calculated that training its own LLM, even a very small one, would take up all of its servers at full capacity for over a year, so they won’t do that! The alternative is to pay other services, such as Amazon, to use their models, which means balancing costs: there are limits due to scale, cost, and environmental impact.

This was followed by a more quiet, but by no means silent, 15 minutes for drawing time in which all participants drew…

Drawing by Marie Jannine Murmann. Abstract cogwheels suggesting that AI tools can be quickly developed to output nonsense but, with adequate human oversight and input, AI tools can be iteratively improved to produce the best outputs they can.

One participant used an AI image generator for their creative drawing, making a picture of a toddler covered in paint to depict the LLM and its unpredictable behaviours. Tamsin suggested that this might be giving the LLM too much credit! Toddlers, like cats and dogs, have a basic and embodied perception of the world and base knowledge, which LLMs do not have.

Drawing by Howard Elston. An LLM is drawn as an ear, interpreting different inputs from various children.

The experience of this discussion and drawing also raised, for another participant, more big questions. She discussed poet David Whyte’s work on the ‘conversational nature of reality’ and reflected on how the self is not just inside us but is created through interaction with others and through language. For instance, she mentioned that when you read or hear the word ‘yes’, you have a physical feeling of ‘yesness’ inside, and similarly for ‘no’. She suggested that our encounters with machine-made language produced by LLMs are similar. This language shapes our conversations and interactions, so there is a sense in which the ‘transformers’ (the technical term for the LLM machinery) are also helping to transform our senses of self and the boundary between what is reality and what is fantasy.

Here, we have the image made by artist Yasmine based on her discussion with Tamsin:

Image by Yasmine Boudiaf. Three groups of icons representing people have shapes travelling between them and a page in the middle of the image. The page is a simple rectangle with straight lines representing data. The shapes traveling towards the page are irregular and in squiggly bands.

Yasmine writes:

This image shows an example of a Large Language Model in use. Audio data is gathered from a group of people in a meeting. Their speech is automatically transcribed into text data. The text is analysed and relevant segments are selected. The output generated is a short summary text of the meeting. It was inspired by BBC R&D’s process for segmenting podcasts, GPT-4 text summary tools and LOTI’s vision for taking minutes at meetings.

Yasmine Boudiaf
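The pipeline Yasmine describes (audio, then transcript, then relevant segments, then summary) can be sketched very loosely in code. Everything here is a placeholder: `transcribe` stands in for a real speech-to-text service and `summarise` for an LLM call; only the shape of the pipeline is the point.

```python
# Hypothetical sketch of a meeting-summarisation pipeline:
# audio -> transcript -> relevant segments -> short summary.
# transcribe() and summarise() are stand-ins for real services,
# not real implementations.

def transcribe(audio_chunks):
    """Placeholder: a speech-to-text service would run here."""
    return [f"[transcript of {chunk}]" for chunk in audio_chunks]

def select_relevant(sentences, keywords):
    """Keep only segments that mention an agreed keyword."""
    return [s for s in sentences if any(k in s.lower() for k in keywords)]

def summarise(segments):
    """Placeholder: an LLM would compress the segments here."""
    return "Summary: " + " ".join(segments)

transcript = [
    "We agreed the budget for next quarter.",
    "Someone mentioned the weather.",
    "Action: publish the budget report by Friday.",
]
relevant = select_relevant(transcript, ["budget", "action"])
print(summarise(relevant))
```

As Tamsin noted above, the hard part in practice is not wiring these stages together but rigorously testing each one before trusting its output.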

You can now find this image in the Better Images of AI library, and use it with the appropriate attribution: Image by Yasmine Boudiaf / © LOTI / Better Images of AI / Data Processing / CC-BY 4.0. With the first challenge complete, it was time for the second round.

Round 2: Generative AI in Public Services

This second and final round focused on use cases for generative AI in the public sector, specifically by local government. Again, a live discussion was held, this time between Emily Rand (illustrator and author of seven books, selected by the Children’s Laureate, Lauren Child, to be featured in Drawing Words) and Sam Nutt (Researcher & Data Ethicist, London Office of Technology and Innovation). They built on the previous exploration of LLMs by considering the new generative AI applications they enable for local councils, and how these might transform our everyday services.

Emily described how she illustrates by hand, with work focused on the tangible and the real. Making illustrations about AI, whose workings are not obviously visible, was an exciting new topic. See her illustration and commentary below.

Sam described his role as part of the innovation team that sits across 26 of London’s boroughs and the Mayor of London, helping boroughs to think about how to use data responsibly. In the context of local government data and services, a lot of the data collected about residents is statutory (meaning they cannot opt out of giving it), such as council tax data. There is a strong imperative when dealing with such data, especially sensitive personal health data, to protect privacy and minimise bias. He considered some use cases. For instance, council officers can use ChatGPT to draft letters to residents to increase efficiency, but they must not put any personal information into ChatGPT, otherwise data privacy can be compromised. Another example is the use of LLMs to summarise large archives of local government data, such as planning permission applications or the minutes of council meetings, which are lengthy and often technical; summaries could make them significantly more accessible to members of the public and researchers.
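The precaution Sam mentions (keeping personal information out of a hosted chatbot) is often approached by redacting a draft before it leaves the organisation. The sketch below is purely illustrative: real redaction needs far more than two regular expressions and usually dedicated tooling, but it shows the idea of substituting placeholders for obvious personal details.

```python
import re

# Illustrative sketch only: strip obvious personal details from a draft
# before it is pasted into an external LLM service. Two regexes are
# nowhere near sufficient for real data protection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK_POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def redact(text):
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

letter = "Dear resident at SW1A 1AA, reply to clerk@example.gov.uk please."
print(redact(letter))
# → Dear resident at [UK_POSTCODE], reply to [EMAIL] please.
```

The redacted text, not the original, is what would be sent to the external service; the placeholders can be swapped back in afterwards if needed.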

Sam also stressed that residents must know how councils use their data so that councils can be held accountable; this has to be explained in ways residents can understand. Note that 3% of Londoners are totally offline, not using the internet at all. That is around 270,000 people, who have an equal right to understand how the council uses their data and who need to be reached through offline means. This example brings home the importance of increasing inclusive public AI literacy.

Again, we all drew. Here are a couple of striking images made by participants who also kindly donated their pictures and words to the project:

Drawing by Yokako Tanaka. An abstract blob is outlined encrusted with different smaller shapes at different points around it. The image depicts an ideal approach to AI in the public sector, which is inclusive of all positionalities.
Drawing by Aisha Sobey. A computer claims to have “solved the banana” after listing the letters that spell “banana” – whilst a seemingly analytical process has been followed, the computer isn’t providing much insight nor solving any real problem.
Practically identical houses are lined up at the bottom of the image. Out of each house's chimney, columns of binary code – 1's and 0's – emerge.
“Data Houses,” by Joahna Kuiper. Here, the author described how these three common houses are all sending a distress signal—a new kind of smoke signal, but in binary code. And in her words: ‘one of these houses is sending out a distress signal, calling out for help, but I bet you don’t know which one.’ The problem of differentiating who needs what when.
A big eye floats above rectangles containing rows of dots and cryptic shapes.
“Big eye drawing,” by Hui Chen. Another participant described their feeling that ‘we are being watched by big eye, constantly checking on us and it boxes us into categories’. Certain areas are highly detailed and refined; certain other areas, the ‘murky’ or ‘cloudy’ bits, are where people don’t fit the model so well, and they are more invisible.
Rows of people are randomly overlayed by computer cursors.
An early iteration of Emily Rand’s “AI City.”

Emily started by illustrating the idea of bias in AI. Her initial sketches showed lines of people of various sizes, ages, ethnicities and bodies, with various cursors selecting the cis, white, able-bodied people over the others. Emily also sketched the shape of a city, and ended up combining the two. She added frames to show the way different people are clustered; each frame shows the area around a person, where they might have a device sending data about them.

Emily’s final illustration is below, and can be downloaded from here and used for free with the correct attribution: Image by Emily Rand / © LOTI / Better Images of AI / AI City / CC-BY 4.0.

Building blocks are overlayed with digital squares that highlight people living their day-to-day lives through windows. Some of the squares are accompanied by cursors.

At the end of the workshop, I was left with feelings of admiration and positivity. Admiration at the stunning array of visual and conceptual responses from participants, and in particular the candid and open manner of their sharing. And positivity because the responses often highlighted the dangers of AI (its capacity to reinforce systemic bias and aid exploitation) as well as the benefits, yet these critiques did not tend to be delivered in an elegiac or sad tone; they seemed more like an optimistic desire to understand the technology and make it work in an inclusive way. This seemed a powerful approach.

The results

The Better Images of AI mission is to create a free repository of better images of AI with more realistic, accurate, inclusive and diverse ways to represent AI. Was this workshop a success and how might it inform Better Images of AI work going forward?

Tania Duarte, who coordinates the Better Images of AI collaboration, certainly thought so:

It was great to see such a diverse group of people come together to find new and incredibly insightful and creative ways of explaining and visualising generative AI and its uses in the public sector. The process of questioning and exploring together showed the multitude of lenses and perspectives through which often misunderstood technologies can be considered. It resulted in a wealth of materials which the participants generously left with the project, and we aim to get some of these developed further to work on the metaphors and visual language further. We are very grateful for the time participants put in, and the ideas and drawings they donated to the project. The Better Images of AI project, as an unfunded non-profit is hugely reliant on volunteers and donated art, and it is a shame such work is so undervalued. Often stock image creators get paid $5 – $25 per image by the big image libraries, which is why they don’t have time to spend researching AI and considering these nuances, and instead copy existing stereotypical images.

Tania Duarte

The images created by Emily Rand and Yasmine Boudiaf are being added to the Better Images of AI Free images library on a Creative Commons licence as part of the #NewImageNovember campaign. We hope you will enjoy discovering a new creative interpretation each day of November, and will be able to use and share them as we double the size of the library in one month. 

Sign up for our newsletter to get notified of new images here.

Acknowledgements

A big thank you to organisers, panellists and artists:

  • Jennifer Ding – Senior Researcher for Research Applications at The Alan Turing Institute
  • Yasmine Boudiaf – Fellow at Ada Lovelace Institute, recognised as one of ‘100 Brilliant Women in AI Ethics 2022’
  • Dr Tamsin Nooney – AI Research, BBC R&D
  • Emily Rand – illustrator and author of seven books and recognised by the Children’s Laureate, Lauren Child, to be featured in Drawing Words
  • Sam Nutt – Researcher & Data Ethicist, London Office of Technology and Innovation (LOTI)
  • Dr Tomasz Hollanek – Research Fellow, Leverhulme Centre for the Future of Intelligence
  • Laura Purseglove – Producer and Curator at Science Gallery London
  • Dr Robert Elliot-Smith – Director of AI and Data Science at Digital Catapult
  • Tania Duarte – Founder, We and AI and Better Images of AI

Also many thanks to the We and AI team, who volunteered as facilitators to make this workshop possible:

  • Medina Bakayeva, UCL master’s student in cyber policy & AI governance, communications background
  • Marissa Ellis, Founder of Diversily.com, Inclusion Strategist & Speaker @diversily
  • Valena Reich, MPhil in Ethics of AI, Gates Cambridge scholar-elect, researcher at We and AI
  • Ismael Kherroubi Garcia FRSA, Founder and CEO of Kairoi, AI Ethics & Research Governance
  • Dr Peter Rees, project manager for the workshop

And a final appreciation for our partners: LOTI, the Science Gallery London, and London Data Week, who made this possible.

Related article from BIoAI blog: ‘What do you think AI looks like?’: https://blog.betterimagesofai.org/what-do-children-think-ai-looks-like/

A new Better Image of AI – every day for November

Visit the free image library throughout November to see a range of new images from exciting artists. 30 New Images in 30 Days!

Announcing 30 New Images in 30 Days – one new image being added to the Better Images of AI Library each day of November! We and AI Founder and Better Images of AI coordinator Tania Duarte reflects on the excitement and challenges involved in this next stage of the Better Images of AI project.


In December 2021, Better Images of AI launched what was at the time intended to be a small set of inspirational images. The hope was that providing images which attempted to show alternative ways to represent AI technologies and their impacts, based on research into how currently available images were harmful or unhelpful, would inspire other creators; prompt thought from journalists and other communicators; throw down the gauntlet to image libraries; get more people to share ideas with a growing community; and help viewers develop better mental models about AI. So, nearly two years in, how is it going?

The good

On the one hand, we have been overwhelmed by the response. The images, most of which are donated and all of which are by insightful and talented artists, have clearly helped a wide range of people and organisations communicate in ways that better represent their message and provide more interesting and engaging moments with audiences. They also provided creative provocations or learning opportunities, helped to differentiate users from the boring blue brains and white robots, and enabled users to avoid fostering misunderstandings about AI.

Images have been downloaded from the library across the world; they have been used in news media, business and academic presentations, blogs, websites, event banners, brochures, and reports; and they have been viewed by millions of people. We have been pleased to see them bring life to stories in publications such as TIME, the Washington Post, and the Guardian, but also to statements from influential AI-related organisations, and in academia and on courses where they are reaching the next generations.

We have seen new images influenced by some of the approaches and learned from the novel interpretations and adaptations people have made. We’ve had feedback and insights from users and stakeholders via a research project which resulted in a Guide to help make the case for better images. 

The bad

However, the job is far from over. New text-to-image generators trained on the existing tropes are being used to illustrate AI and, unsurprisingly, are replicating them and feeding back anthropomorphic representations into a seemingly never-ending production line of scary robots.

As more parts of the internet, more industries and more parts of society become occupied with AI for the first time, text-to-image tools are bringing new users to the still limited range of stock images labelled “AI.” The boom in generative AI and the increase in coverage given to narratives around existential risk and super intelligent AGI has breathed new life into the sci-fi narratives which replace more accurate and insightful discussions about AI.

While we have received some funding to create new images (more about that soon!), our core operations and project remain unfunded, and, indeed, we have lost many funding applications despite such demonstrable impact. This means that the non-profit volunteer organisation We and AI, which manages the collaboration and coordinates the project and site, also took on the running costs, despite not being funded to do so. It takes time to explore and produce impactful and meaningful visual representations of complex topics; to consult with and for a wide range of image users, volunteers, creatives, advocates, and advisors across the world; to communicate, and to support and answer queries about the project; to build new proposals and potential partnerships; and to evaluate, prepare, and upload images and liaise with artists. It takes money to host and maintain the website, and to build new functionality in advance of making it more scalable.

As a result, we have had a backlog of images and articles and have not yet launched some upgrades to the site that were made to enable the library to grow. This has been frustrating, as we know that many users have used all the existing images and are keen to have a wider selection to use. And there is a greater need than ever for more pictures related to AI!

The beautiful

It’s therefore with great joy that we can announce that, with support from volunteers at We and AI, we have finally been able to get together and process all of these images, and upload one a day for the next 30 days! 

We also have some new blog articles written to help share experiences and insight into visual communication of AI from a range of We and AI community members, and a couple of new supporter announcements. 

We will share the stories, projects and motivations behind all of these images over the month of November, as we often find that these discussions prompt important conversations about AI and our relationship with it. We hope you will enjoy discovering a new creative interpretation every day, and will be able to use and share them as we double the size of the library in one month. Check out the first one today.

We are extremely grateful to all the artists and everybody involved in the creation of the images we host.

Illustrating Data Hazards

A person with their hands on a laptop keyboard is looking at something happening over their screen with a worried expression. They are white, have shoulder length dark hair and wear a green t-shirt. The overall image is illustrated in a warm, sketchy, cartoon style. Floating in front of the person are three small green illustrations representing different industries, which is what they are looking at. On the left is a hospital building, in the middle is a bus, and on the right is a siren with small lines coming off it to indicate that it is flashing or making noise. Between the person and the images representing industries is a small character representing artificial intelligence made of lines and circles in green and red (like nodes and edges on a graph) who is standing with its ‘arms’ and ‘legs’ stretched out, and two antenna sticking up. A similar pattern of nodes and edges is on the laptop screen in front of the person, as though the character has jumped out of their screen. The overall image makes it look as though the person is worried the AI character might approach and interfere with one of the industry icons.

We are delighted to start releasing some useful new images donated by the Data Hazards project into our free image library. The images are stills from an animated video explaining the project, and offer a refreshing take on illustrating AI and data bias. They take an effective and creative approach to making visible the role of the data scientist and the impact of algorithms, and the project behind the images uses visuals in order to improve data science itself. Project leaders Dr Nina Di Cara and Dr Natalie Zelenka share some background on Data Hazards labels, and the inspiration behind the animation behind the new images.

Data science has the potential to do so much for us. We can use it to identify new diseases, streamline services, and create positive change in the world. However, there have also been many examples of ways that data science has caused harm. Often this harm is not intended, but its weight falls on those who are the most vulnerable and marginalised. 

Often too, these harms are preventable. Testing datasets for bias, talking to communities affected by technology or changing functionality would be enough to stop people from being harmed. However, data scientists in general are not well trained to think about ethical issues, and even though there are other fields that have many experts on data ethics, it is not always easy for these groups to intersect. 

The Data Hazards project was developed by Dr Nina Di Cara and Dr Natalie Zelenka in 2021, and aims to make it easier for people from any discipline to talk together about data science harms, which we call Data Hazards. These Hazards are in the form of labels. Like chemical hazards, we want Data Hazards to make people stop and think about risk, not to stop using data science at all. 

A person is illustrated in a warm, cartoon-like style in green. They are looking up thoughtfully from the bottom left at a large hazard symbol in the middle of the image. The Hazard symbol is a bright orange square tilted 45 degrees, with a black and white illustration of an exclamation mark in the middle where the exclamation mark shape is made up of tiny 1s and 0s like binary code. To the right-hand side of the image a small character made of lines and circles (like nodes and edges on a graph) is standing with its ‘arms’ and ‘legs’ stretched out, and two antenna sticking up. It faces off to the right-hand side of the image.
Yasmin Dwiputri & Data Hazards Project / Better Images of AI / Managing Data Hazards / CC-BY 4.0

By making it easier for us all to talk about risks, we believe we are more likely to see them early and have a chance at preventing them. The project is open source, so anyone can suggest new or improved labels which mean that we can keep responding to new and changing ethical landscapes in data science. 

The project has now been running for nearly two years and in that time we have had input from over 100 people on what the Hazard labels should be, and what safety precautions should be suggested for each of them. We are now launching Version 1.0 with newly designed labels and explainer animations! 

Chemical hazards are well known for their striking visual icons, which many of us see day-to-day on bottles in our homes. By having Data Hazard labels, we wanted to create similar imagery that would communicate the message of each of the labels. For example, how can we represent ‘Reinforces Existing Bias’ (one of the Hazard labels) in a small, relatively simple image? 

Image of the ‘Reinforces Existing Bias’ Data Hazard label

We also wanted to create some short videos to describe the project, which included a data scientist character interacting with ‘AI’, and we had the challenge of deciding how to create a better image of AI than the typical robot. We were very lucky to work with illustrator and animator Yasmin Dwiputri, and Vanessa Hanschke, who is doing a PhD at the University of Bristol in understanding responsible AI through storytelling.

We asked Yasmin to share some thoughts from her experience working on the project:

“The biggest challenge was creating an AI character for the films. We wanted to have a character that shows the dangers of data science, but can also transform into doing good. We wanted to stay away from portraying AI as a humanoid robot and have a more abstract design with elements of neural networks. Yet, it should still be constructed in a way that would allow it to move and do real-life actions.

We came up with the node monster. It has limbs which allow it to engage with the human characters and story, but no facial expressions. Its attitude is portrayed through its movements, and it appears in multiple silly disguises. This way, we could still make him lovable and interesting, but avoid any stereotypes or biases.

As AI is becoming more and more present in the animation industry, it is creating a divide in the animation community. While some people are praising the endless possibilities AI could bring, others are concerned it will also replace artistic expressions and human skills.

The Data Hazard Project has given me a better understanding of the challenges we face even before AI hits the market. I believe animation productions should be aware of the impact and dangers AI can have, before only speaking of innovation. At the same time, as creatives, we need to learn more about how AI, if used correctly, and newer methods could improve our workflow.”

Yasmin Dwiputri

Now that we have the wonderful resources created we have been able to release them on our website and will be using them for training, teaching and workshops that we run as part of the project. You can view the labels and the explainer videos on the Data Hazards website. All of our materials are licensed as CC-BY 4.0 and so can be used and re-used with attribution. 

We’re also really excited to see some on the Better Images of AI website, and hope they will be helpful to others who are trying to represent data science and AI in their work. A crucial part of AI ethics is ensuring that we do not oversell or exaggerate what AI can do, and so the way we visualise images of AI is hugely important to the perception of AI by the public and being able to do ethical data science! 

Cover image by Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / CC-BY 4.0

Three new Better Images of AI research workshops announced

LCFI Research Project l FINAL WORKSHOPS ANNOUNCED! Calling all journalists, AI practitioners, communicators and creatives! (Event poster in Better Images of AI blue and purple colours, with logos)

Three new workshops have been announced in September and October by the Better Images of AI project team. We will once again bring a range of AI practitioners and communicators together with artists and designers working in different creative fields,  to explore in small groups how to represent artificial intelligence technologies and impacts in more helpful ways.

Following an insightful first workshop in July, we’re inviting anyone in relevant fields to apply to join the remaining workshops, taking place both online and in person. We are particularly interested in hearing from journalists who write about AI. However, if you are interested in critiquing and exploring new images in an attempt to find more inclusive, varied and realistic visual representations of AI, we would like to hear from you!

Our next workshops will be held on:

  • Monday 12 September, 3.30 – 5.30pm UTC+1 – ONLINE
  • Wednesday 28 September, 3 – 5pm UTC+1 – ONLINE
  • Thursday 6 October, 2:30 – 4:30pm UTC+1 – IN PERSON – The Alan Turing Institute, British Library 96 Euston Road London NW1 2DB

If you would like to attend or know anyone in these fields, email research@betterimagesofai.org, specifying which date. Please include some information about your current field and ideally a link to an online profile or portfolio.

The workshops will look at approaches to meet the criteria of being a ‘better image of AI’, identified by stakeholders at earlier roundtable sessions. 

The discussions in all four workshops will inform an Arts and Humanities Research Council-funded research project undertaken by the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, and organised by We and AI.

Our first workshop was held on 25 July, and brought together over 20 individuals from creative arts, communications, technology and academia to discuss sets of curated and created images of AI and to explore the next steps in meeting the needs identified in providing better images of AI moving forward. 

The four workshops follow a series of roundtable discussions, which set out to examine and identify user requirements for helpfully communicating visual narratives, metaphors, information and stories related to AI. 

The first workshop was incredibly rich in terms of generating creative ideas and giving feedback on gaps in current imagery. Not only has it surfaced lots of new concepts for the wider Better Images of AI project to work on, but the series of workshops will also form part of a research paper to be published in January 2023. This process is really critical to ensuring that our mission to communicate AI in more inclusive, realistic and transparent ways is informed by a variety of stakeholders and underpinned by good evidence.

Dagmar Monett, Head of the Computer Science Department at Berlin School of Economics and Law and one of the July workshop attendees, said: “Better Images of AI also means better AI: coming forward in AI as a field also means creating and using narratives that don’t distort its goals nor obscure what is possible from its actual capacities. Better Images of AI is an excellent example of how to do it the right way.”

The academic research project is being led by Dr Kanta Dihal, who has published numerous books, journal articles and papers on emerging technology narratives and public perceptions.

The workshops will ultimately contribute to research-informed design brief guidance, which will then be made freely available to anyone commissioning or selecting images to accompany communications – such as news articles, press releases, web communications, and research papers related to AI technologies and their impacts. 

They will also be used to identify and commission new stock images for the Better Images of AI free library.

To register interest: Email our team at research@betterimagesofai.org, letting us know which date you’d like to attend and giving us some information about your current field as well as a link to your LinkedIn profile or similar.

Images Matter!

Woman to the left, jumbled up letters entering her ear

AI in Translation

You often hear the phrase “words matter”: words help us construct mental images in our minds and make sense of the world around us. In the same framing, “images matter” too. How we depict the state of technology (imagined, current or future), visually and verbally, helps us position ourselves in relation to what is already here and what is coming.

The way these technologies are visualized and expressed in combination tells us what an emerging technology looks like, and how we should expect to interact with it. If AI is always depicted as white, gendered robots, the majority of AI systems we interact with in reality around the clock go unnoticed. What we do not notice, we cannot react to. When we do not react, we become part of the flow in the dominant (and presently incorrect) narrative. This is why we need better images of AI, as well as a language overhaul.

These issues are not limited to the English-speaking world. I was recently asked to give a lecture at a Turkish university on artificial intelligence and the future of work. Over the years I have presented on this and similar topics (AI and the future of the workplace, the future of HR) on a number of occasions. As an AI ethicist and lecturer, I also frequently discuss the uses of AI in human resources, workplace datafication and employee/candidate surveillance. The difference this time? I was asked to give the lecture in Turkish.

Yes, it is my native language. However, for more than 15 years I have been using English in my day-to-day professional interactions. In English, I can talk about AI and ethics, bias, social justice and policy for hours. When discussing the same topics in Turkish, though, I need a dictionary to translate some of the technical terminology. So, during my preparations for this presentation, I went down a rabbit hole: specifically, one concerning how connected biases in language and images shape overarching narratives of artificial intelligence.

Gender and Race Bias in Natural Language Models

In 2017, Caliskan, Bryson and Narayanan showed in their pioneering work that semantics (the meanings of words) derived automatically from language corpora contain human-like biases. The authors demonstrated that natural language models, built by parsing large corpora drawn from the internet, reflect societal gender and racial biases. The evidence came from word embeddings, a method of representation in which words that have similar meanings, or that tend to be used together, are mapped close to each other in a high-dimensional vector space. In other words, embeddings capture hidden patterns in the word co-occurrence statistics of language corpora, including grammatical and semantic information. As Caliskan et al. explain, the thesis behind word embeddings is that words closer together in the vector space are, in some sense, semantically closer. The research showed, for example, that Google Translate converts occupations in Turkish sentences in gendered ways – even though Turkish is a gender-neutral language:

“O bir doktor. O bir hemşire.” becomes “He is a doctor. She is a nurse.”, and “O bir profesör. O bir öğretmen.” becomes “He is a professor. She is a teacher.”

Such results reflect the gender stereotypes within the language models themselves, and such subtle changes have serious consequences. NLP tasks such as keyword search and matching, translation, web search, and text generation/recognition/analysis can be embedded in systems that make decisions on hiring, university admission, immigration applications, law enforcement interactions, and more.
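To make the association effect concrete, here is a minimal, hypothetical sketch of the kind of measurement behind such findings. The three-dimensional vectors are invented purely for illustration; real studies such as Caliskan et al.'s use embeddings with hundreds of dimensions trained on web-scale corpora.

```python
import math

# Toy, hand-crafted vectors for illustration only -- real word embeddings
# are learned from co-occurrence statistics in large text corpora.
vectors = {
    "doctor": [0.9, 0.1, 0.3],
    "nurse":  [0.2, 0.8, 0.3],
    "he":     [1.0, 0.0, 0.2],
    "she":    [0.0, 1.0, 0.2],
}

def cosine(a, b):
    # Cosine similarity: words used in similar contexts score closer to 1.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def gender_lean(word):
    # Positive: the word sits closer to "he"; negative: closer to "she".
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

print(f"doctor: {gender_lean('doctor'):+.2f}")  # doctor: +0.82 -> leans 'he'
print(f"nurse:  {gender_lean('nurse'):+.2f}")   # nurse:  -0.67 -> leans 'she'
```

A translation system that resolves a gender-neutral pronoun by picking whichever gendered word its embeddings place closer to the occupation would reproduce exactly the “He is a doctor. She is a nurse.” pattern described above.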

Google Translate, after a patch fix of its models, now offers both feminine and masculine translations. But four years after this patch (as of the time of writing), Google Translate has still not addressed non-binary gender translations.

Gender and Race Bias in Search Results

The second seminal work is Dr Safiya Noble’s book Algorithms of Oppression, which covers academic research on Google search algorithms, examining search results from 2009 to 2015. Similar to the findings of the above research on language models, Dr Noble argues that search algorithms are not neutral tools: they reflect and magnify the race and gender biases that exist in society and in the people who create them. She expertly demonstrates how the search results for keywords like “white girls” are significantly different from those for “Black girls”, “Asian girls” or “Hispanic girls”. The latter set of search terms returned images that were exclusively pornography or highly sexualized content. The research brings to the surface the hidden structures of power and bias in widely used tools that shape the narratives of technology and the future. Dr Noble writes, “racism and sexism are part of the architecture and language of technology[…]We need a full-on re-evaluation of the implications of our information resources being governed by corporate-controlled advertising companies.”

Google Search applied another after-the-fact fix to reduce the racy results following Dr Noble’s work. However, this also remains a patch fix: the results for “Latina girls” still show mostly sexualized images, and the results for “Hispanic girls” show mostly stock photos or Pinterest posts. The results for “Asian girls” seem to remain much the same, associated with pictures tagged as hot, cute, beautiful, sexy, brides.

Gender and Race Bias in Search Results for “Artificial Intelligence”

The third work is Better Images of AI, which is a collaboration that I am proud to have helped found and continue supporting as an advisor. A group of like-minded advocates and scholars have been fighting against the false and cliched images of artificial intelligence used in news stories or marketing material about AI. 

We have been concerned about how images such as humanoid robots, outstretched robot hands and brains shape the public’s perception of what AI systems are and what they are capable of. Such anthropomorphized illustrations not only add to the hype of AI’s endless miracles, but they also stop people questioning the ubiquitous AI systems embedded in their smartphones, laptops, fitness trackers and home appliances – to name but a few. They distort how consumers and citizens perceive the technology. This means that mainstream conversations tend to be stuck at ‘AI is going to take all of our jobs away’ or ‘AI will be the end of humanity’, and as such the current societal and environmental harms and implications of some AI systems are not publicly and deeply discussed. The powerful actors developing or using systems to benefit themselves rather than society are hardly held accountable. 

The Better Images of AI collaboration not only challenges the narratives and biases underlying these images, but also provides a platform for artists to share their images in a creative commons repository – in other words, it builds a communal alternative imagination. These images aim to more realistically portray the technology, the people behind it, and point towards its strengths, weaknesses, context and applications. They represent a wider range of humans and human cultures than ‘Caucasian businessperson’, show realistic applications of AI now, not in some unspecified science-fiction future, don’t show physical robotic hardware where there is none and reflect the realistically messy, complex, repetitive and statistical nature of AI systems.

Down the rabbit hole…

So with that background, back to my story for this article. For part of the lecture, I was preparing discussions surrounding AI and the future of work. I wanted to discuss how the execution of different professional tasks is changing with technology, and what that means for the future of certain industries or occupational areas. I wanted to underline that some tasks – like repetitive transactions, large-scale iterations and standard rule applications – are better done with AI, as long as AI is the right solution for the context and problem, and is developed responsibly and monitored continuously. 

On the flip side, certain skills and tasks that involve leading, empathizing and creating are to be left to humans – AI systems have neither the capacity nor the capability, nor should they be entrusted with such tasks. I wanted to add some visuals to the presentation and also check what is currently being depicted in the search results. I first started with basic keyword searches in English such as ‘AI and medical,’ ‘AI and education,’ ‘AI and law enforcement’ etc. What I saw in the first few examples was depressing. I decided to expand the search to more occupational areas: the search results did not get better. I then wondered what the results might be if I ran the same searches, but this time in Turkish.

What you see below are the first images that come up in my Google search results for each of these keywords. The images not only continue to reflect the false narratives but in some cases are flat out illogical. Please note that I have only used AI / Yapay Zeka in my search and not ‘robot’.

Yapay zeka ve sağlık : AI and medical

A picture containing text

In both the Turkish- and English-speaking worlds, we are to expect white Caucasian male robots to be our future doctors. They will need to wear a shirt, tie and white doctor’s coat to keep their metallic bodies warm (apparently with no need for masking). They will also need to look at a tablet to process information and make diagnoses or decisions. Their hands and fingers will delicately handle surgical moves. What we should really care about in medical algorithms right now is the representativeness of the datasets used to build them, the explainability of how an algorithm reached a diagnostic determination, why it is suggesting a certain prescription or course of action, and how some health applications are left entirely outside regulatory oversight.

We have already seen medical algorithms produce biased and discriminatory outcomes because of a patient’s gender, socioeconomic level, or even the historical access of certain populations to healthcare. We know of diagnostic algorithms with embedded code that changes a determination due to a patient’s race; of false determinations due to the skin color of a patient; of faulty correlations and predictions due to training datasets representing only a portion of the population.

Yapay zeka ve hemşire : AI and Nurse

Yapay zekanın sağlık alanında kullanımı | Pitstop Reklam Ajansı (graphical user interface)

After seeing the above images I wondered if the results would change if I was more specific about the profession within the medical field. I immediately regretted my decision.

In both results, the Caucasian male robot image changes to a Caucasian female image, reflecting the gender stereotypes across both cultures. The Turkish AI nurse wants you to keep quiet and not cause any disruption or noise. I was not prepared for the English version, a D+ cup wearing robot. Hard to say if the breasts are natural or artificial! This nurse has a Green Cross both on the nurse cap and the bra(?!). The robot is connected to something with yellow cables so probably limited in its physical reach, although there is definitely intention to listen to your chest or heart beat. This nurse will also show you your vitals on an image projected from her chest.

Yapay zeka ve kanun : AI and legal

A picture containing water sport, swimming
A close-up of a robot

AI in the legal system is currently one of the most contentious issues in policy and regulatory discussions. We have already seen a number of use cases where AI systems are used by courts for judicial decisions about recidivism, sentencing or bail, some with results biased against Black people in particular. In the criminal justice field, the use of AI systems for providing investigative assistance and automating decision-making processes for routine administrative paperwork is already in place in many countries. When it comes to images, though, these systems, some of which make high-stakes decisions that impact fundamental rights, and the existing cases of impacted people, are not depicted. Instead we either have a robot touching a blue projection (don’t ask why), or a robot holding a wooden gavel. It is not clear from the depiction whether the robot will chase you and hammer you down with the gavel, or whether this white, male-looking robot is about to make a judgement about your right to abortion. The glasses the robot wears are, I presume, meant to stress that this particular legal robot is well read.

Yapay zeka ve polis : AI and Law Enforcement

A picture containing text, electronics
A picture containing text, outdoor, sky

Similar to the secondary search I explained above for medical systems, I wanted to go deeper here, so I searched for AI and law enforcement. Currently, in a number of countries (including the US, EU member states, China, etc.) AI systems are used by police to predict crimes which have not happened yet. Law enforcement uses AI in various ways: from evidence analysis to biometric surveillance; from anomaly detection and pattern analysis to license-plate readers; from crowd control to dragnet data collection and aggregation; from voice analysis and social media scanning to drone systems. Although crime data is notoriously biased in terms of race, ethnicity and socioeconomic background, and reflects decades of structural racism and oppression, you could not tell any of that from the image results. 

You do not see pictures of Black men wrongfully arrested due to biased and inaccurate facial recognition systems. You do not see the hot spots mapped onto predictive policing maps, which are heavily surveilled as a result of the data outcomes. You do not see law enforcement buying large amounts of data from data brokers – data that they would otherwise need search warrants to acquire. What you see instead in the English version is another Caucasian male-looking robot working shoulder to shoulder with police SWAT teams – keeping law and order! In the Turkish version, the image result shows a female police officer who is either being whispered to by an AI system or using an AI system for work. If you are a police officer in Turkey, you are probably safe for the moment, as long as your AI system is shaped like a human-head circuit.

Yapay zeka ve gazetecilik : AI and journalism

A picture containing text, automaton

Content and news creation are currently some of the most ubiquitous uses of AI we experience in our daily lives. We see algorithmic systems curating content at news and media channels. We experience the manipulation and ranking of content in search results, in the news we are exposed to, and in the social media feeds we doom-scroll. We complain about how disinformation and misinformation (and to a certain extent deepfakes) have become mainstream conversations with real-life consequences. Study after study warns us about the dangers of echo chambers created by algorithmic systems and how they lead to radicalization and polarization, and demands accountability from the people who have the power to control their designs.

The image result in the Turkish search is interesting in the sense that journalism is still a male occupation. The same-looking people work in the field, and AI in this context is a robot of short stature waving an application form to be considered for the job. The robot in the English results is slightly more stylish. It even carries a press card to signal the ethical obligations it has to the profession. You would almost think that this is the journalist working long hours to break an investigative piece, or one risking their life to report from conflict zones.

Yapay zeka ve finans : AI and finance

A fire hydrant in front of a digital clock

The finance, banking and insurance industries reflect some of the most mature use cases of AI systems. For decades now, banking has been using algorithmic systems for pattern recognition and fraud detection, for credit scoring and credit/loan determinations, and for electronic transaction matching, to name a few. The insurance industry likewise relies heavily on algorithmic systems and big data to determine insurance eligibility, policy premiums and, in certain cases, claim management. Finance was one of the first industries disrupted by emerging technologies: FinTech created a number of companies and applications to break the hold of major financial institutions on the market, and big banks responded with their own innovations.

So, it is again interesting to see that even in a field with such mature use of AI, robot images still come first in the search results. We do not see the app you used to transfer funds to your family or friends. Nor the high-frequency trading algorithms which currently account for more than 70% of all daily stock exchange transactions. It is not the algorithms which collect hundreds of data points about you, from your grocery shopping to your GPS locations, to make a judgement about your creditworthiness – your trustworthiness. It is not the sentiment-analysis AI which scans millions of corporate reports, public disclosures or even tweets about publicly traded companies and makes microsecond judgements on what stocks to buy. It is not the AI algorithm which determines the interest rate and limit on your next credit card or loan application. No, it is the image of another white robot staring at a digital board of what we can assume to be stock prices. 

Yapay zeka ve ordu : AI and military

A picture containing outdoor, tree, grass, military vehicle
A picture containing weapon, old

AI and military use cases are a whole different story in the scheme of AI innovation and policy discussions. AI systems have been used for many years in satellite imagery analysis, pattern recognition, weapon development, simulations and more. The more recent debates intertwine geopolitics with an AI arms race, which indeed should keep all of us awake at night. The danger posed by lethal autonomous weapons (LAWs), in the hands of militaries as well as non-traditional actors, is an issue on which every single state in the world seems to agree. 

Yet agreement does not mean action, and it does not mean human life is protected. LAWs have the capacity to decide by themselves to attack – without any accountability. Micro drones can be combined with facial recognition and attack systems to take down individuals and political dissenters. Drones can be remotely controlled to drop ammunition over remote regions. Robotic systems (a correct depiction) can be used for landmine removal, crowd control or perimeter security. All these AI systems already exist. The image results, though, again reflect an interesting narrative. The image in the Turkish results shows a female American soldier using a robot to carry heavy equipment; the robot here is more like a mule than an autonomous killer. The image result in English shows a mixed-gender robot group in what seems to be camouflage green. At least the glowing white will not be an issue for the safety of these robots.

Yapay zeka ve eğitim : AI and Education

Yapay Zekanın Eğitimdeki 10 Kullanım Alanı – Social Business Türkiye

When it comes to AI and education, the images continue to be robot-related. The first robot lifts kids up to the skies to show what is on the horizon. It has nothing to do with the hype of AI-powered training systems or the learning analytics which are hitting schools and universities across the globe. The AI here does not seem to use proctoring software to discriminate against or surveil students. It also apparently does not matter if you do not have access to broadband to interact with this AI or do your schoolwork. The search result in English, on the other hand, shows a robot which needs a blackboard and a piece of chalk to process mathematical problems. If your Excel, Tableau or R software does not look like this image, you might want to return it to the vendor. Also, if you are an educator in the social sciences or humanities, it is probably time to rethink the future of your career.

Yapay zeka ve mühendislik : AI and engineering

Diagram
Graphical user interface

The blackboard-and-chalk-wielding robot is better off in the future of engineering. The educator robot might be short on resources, but the engineer robot will use a digital board to do the same calculations. Staring at this board will eventually ensure the robot engineer solves the problem. In the Turkish version, the robot gazes at a field of hexagons. If you are an engineer in any field using AI software to visualize your data in multiple dimensions, run design or impact scenarios, or build code – does this look like your algorithm? 

Yapay zeka ve satış : AI and sales

A picture containing text, electronics
A group of people working on a computer

If you are a salesperson in Turkey, the prospects for you are a bit iffy. The future seems to require your brain to be exposed and held in the air. There is a safety net of a palm there to protect your AI brain, just in case there is too much overload. However, if you are in sales in the English-speaking world, your sales team or call center staff will be more of the white, glowy, male robots. Despite being robots, these AI systems will still need access to a laptop to type things and process data. They will also need headsets to communicate with customers, because the designers forgot to include voice recognition and analysis software in the first place. Maybe next time you hear ‘press 0 to speak to an agent’ you might have different images in your mind. Never mind how the customer support services you call record your voice and train their algorithms with a very weak consent notice (‘your call might be recorded for training and quality purposes’ – sound familiar?). Never mind the fact that most current AI applications are chatbots on the websites you visit, or automated text algorithms which handle your questions. Never mind the cheap human labor which churns through sales and call center operations without many worker rights or protections.

Yapay zeka ve mimarlık : AI and architecture

A statue of a person with a city in the background (the same image in both results)

It was surprising to see the same image as the first result in both Turkish and English search for architecture. I will not speculate on why this might be the case. However, our images and imaginations of current and future AI systems once again are limited to robots. This time a female robot is used in the depiction with city planning and architectural ideas flowing out from the back of the robot’s head.

Yapay zeka ve tarım : AI and agriculture

A picture containing text, plant, grass

Finally, I wanted to check what the situation was for agriculture. It was surprising that the Turkish image showed a robot delicately picking a grain of wheat. Turkey used to be a country proud of its agricultural heritage and its ability to sustain itself on food; it used to be a net exporter of food products. Over the years, it lost that edge due to a number of factors. The current imagery of AI does not seem to take into account any of the humans who suffer the harsh conditions in the fields. The image on the right is more focused on the conditions of nature, to ensure efficiency and high production. It was refreshing to see that at least the image of green fields was kept – maybe that stays with us as a reminder that we need to respect and protect nature. 

So, returning to where I started: images matter. We need to be cognizant of how emerging technologies are being visualized, why they are depicted in these ways, who makes those decisions and hence shapes the conversation, and who benefits and who is harmed by such framing. We need to imagine technologies which move us towards humanity, equity and justice. We also need the images of those technologies to be accurate, diverse and inclusive.

Instead of assigning human characteristics to algorithms (which are, at the end of the day, human-made code and rules), we need to reflect the human motivations and decisions embedded in these systems. Instead of depicting AI with superhuman powers, we need to show the labor of the humans who build these systems. Instead of focusing only on robots and robotics, we need to explain AI as software embedded in our phones, laptops, apps, home appliances, cars and surveillance infrastructures. Instead of thinking of AI as an independent entity or intelligence, we need to explain that AI is used as a tool to make decisions about our identity, health, finances, work, education, and our rights and freedoms. 

Handmade, Remade, Unmade A.I.

Two digitally illustrated green playing cards on a white background, with the letters A and I in capitals and lowercase calligraphy over modified photographs of human mouths in profile.

The Journey of Alina Constantin’s Art

Alina’s image, Handmade A.I., was one of the first additions to the Better Images of AI repository. The description affixed to the image on the site outlines its ‘alternative redefinition of AI’, bringing back into play the elements of human interaction which are so frequently excluded from discussions of the tech. Yet now, a few months on from the introduction of the image to the site, Alina’s work itself has undergone some ‘alternative redefinition’. This blog post explores the journey of this particular image, from the details of its conception to its numerous uses since: How has the image itself been changed, adapted in significance, semantically used? 

Alina Constantin is a multicultural game designer, artist and organiser whose work focuses on unearthing human-sized stories out of large systems. For this piece, some of the principles of machine learning like interpretation, classification, and prioritisation were encoded as the more physical components of human interaction: ‘hands, mouths and handwritten typefaces’, forcing us to consider our relationship to technology differently. We caught up with Alina to discuss further the process (and meaning) behind the work.

What have been the biggest challenges in creating Better Images of AI?

Representing AI comes with several big challenges. The first is the ongoing inundation of our collective imagination with skewed imagery, falsely representing these technologies in practice, in the name of simplification, sensationalism, and our human impulse towards personification. The second challenge is the absence of any single agreed-upon definition of AI, and obviously the complexity of the topic itself.

What was your approach to this piece?

My approach was largely an intricate process of translation. To stay focused upon the ‘why of A.I’ in practical terms, I chose to focus on elements of speech, also wanting to highlight the human sources of our algorithms in hand drawing letters and typefaces. 

I asked questions, and selected imagery that could be both evocative and different. For the back side of the cards, not visible in this image, I bridged the interpretive logic of tarot with the mapping logic of sociology, choosing a range of 56 words from varying fields starting with A/I to allow for more personal and specific definitions of A.I. To take this idea further, I then mapped the idea to 8 different chess moves, extending into a historical chess puzzle that made its way into a theatrical card deck, which you can play with here. You can see more of the process of this whole project here.

This process of translating A.I via my own artist’s tool set of stories/gameplay was highly productive, requiring me to narrow down my thinking to components of A.I logic which could be expressed and understood by individuals with or without a background in tech. The importance of prototyping, and discussing these ideas with audiences both familiar and unfamiliar with AI helped me validate and adjust my own understanding and representation–a crucial step for all of us to assure broader representation within the sector.

So how has Alina’s Better Image been used? Which meanings have been drawn out, and how has the image been redefined in practice? 

One implementation of ‘Handmade A.I.’, on the website of one of our affiliated organisations, We and AI, remains largely aligned with the artist’s reading of it. According to We and AI, the image was chosen for its re-centring of the human within the AI conversation: the human hands still hold the cards; humans are responsible for their shuffling and their design (though not necessarily completely in control of which ones are dealt). Human agency continues to direct the technology, not the other way round. As a key tenet of the organisation, and a key element of the image identified by Alina, this all adds up. 

https://weandai.org/, use of Alina’s image

A similar usage by the Universität Hamburg, accompanying a lecture on responsibility in the AI field, follows the same logic. The additional slant of human agency considered from a human rights perspective again broadens Alina’s initial image. The components of human interaction which she has featured expand to a more universal representation of not just human input to these technologies but human culpability – the blood, in effect, is on our hands. 

Universität Hamburg use of Alina’s image

Another implementation, this time by the Digital Freedom Fund, accompanies an article concerning the importance of our language around these new technologies. Deviating slightly from the visual, and more into the semantics of artificial intelligence, the use may at first seem slightly unrelated. However, as the article develops, concerns surrounding the ‘technocentrism’, rather than anthropocentrism, of our discussions of AI become a focal point. Alina’s image captures the need to reclaim language surrounding these technologies, placing the cards firmly back in human hands. The article directly states, ‘Every algorithm is the result of a desire expressed by a person or a group of persons’ (Meyer, 2022). Technology is not neutral. Like a pack of playing cards, it is always humanity which creates and shuffles the deck. 

Digital Freedom Fund use of Alina’s image

This is not the only instance in which Alina’s image has been used to illustrate the relation of AI and language. The question “Can AI really write like a human?” seems to be on everyone’s lips, and ‘Handmade A.I.’, with its deliberately humanoid typeface, is its natural visual partner. In a blog post for LSE, Marco Lehner (of BR AI+) discusses the employment of a GPT-3 bot and, whilst allowing for slightly more nuance, ultimately reaches a similar crux: human involvement remains central, no matter how much ‘automation’ we attempt.

Even as ‘better’ images such as Alina’s are provided, we still see the same stock images used over and over again. Issues surrounding the speed and need for images in journalistic settings, as discussed by Martin Bryant in our previous blog post, mean that people will continue to reach almost instinctively for the ‘easy’ option. But when asked to explain what exactly these images are providing to the piece, there is often a marked silence. The image of a humanoid robot is meaningless. Alina’s images are specific; they deal in the realities of AI, in a real facet of the technology, and are thus not universally applicable. They relate to considerations of human agency and responsible AI practice, and do not (unlike the stock photos) act to the detriment of public understanding of our tech future.