Beneath the Surface: Adrien’s Artistic Perspective on Generative AI

The image features the title "Beneath the Surface: Adrien's Artistic Perspective on Generative AI." The background consists of colourful, pixelated static, creating a visual texture reminiscent of digital noise. In the centre of the image, there's a teal rectangular overlay containing the title in bold, white text.

May 28, 2024 – A conversation with Adrien Limousin, a photographer and visual artist, sheds light on the nuanced intersections between AI, art, and ethics. Adrien’s work delves into the opaque processes of AI, striving to demystify the unseen mechanisms and biases that shape our representations.


A vibrant, abstract image created by converting Street View screenshots from TIFF to JPEG, showing a pixelated, distorted classical building with columns. The sky features glitch-like, multicoloured waves, blending greens, purples, pinks, and blues.

ADRIEN LIMOUSIN – Alterations (2023)

Adrien previously studied advertising and is now studying photography at the National Superior School of Photography (ENSP) in Arles. He is particularly drawn to the language of visual art, especially that emerging from new technologies.

A cluster of coloured pixels made up of random Gaussian noise filling the whole canvas, representing an AI-generated image that has not been denoised; digital pointillism

Fig 1. Adrien Limousin / Better Images of AI / Non-image / CC-BY 4.0

Non-image

Adrien was drawn to the ‘Better Images of AI’ project after recognising the need for more nuanced and accurate representations of AI, particularly in journalism. In our conversation, I asked Adrien about his approach to creating the image he submitted to Better Images of AI (Fig 1.).


> INTERVIEWER: Can you tell me about your thinking and process behind the image you submitted?

> ADRIEN: I thought about how AI-generated images are created. The process involves taking an image from a dataset, which is progressively reduced to random noise. This noise is then “denoised” to generate a new image based on a given prompt. I wanted to try to find a breach or the other side of the opaqueness of these models. We only ever see the final result—the finished image—and the initial image. The intermediate steps, where the image is transitioning from data to noise and back, are hidden from us.

> ADRIEN: My goal with “Non-image” was to explore and reveal this hidden in-between state. I wanted to uncover what lies between the initial and final stages, which is typically obscured. I found that extracting the true noisy image from the process is quite challenging. Therefore, I created a square of random noise to visually represent this intermediate stage. It’s no longer an image and it’s also not an image yet.


Adrien’s square of random noise captures this “in-between” state, where the image is both “everything and nothing”, representing aspects of AI’s inner workings. This visual metaphor underscores the importance of making these hidden processes visible, to demystify and foster a more accurate understanding of what AI is, how it operates, and its real capabilities. The process Adrien discusses here also reflects the complex and collective human data that underpins AI systems. The image doesn’t originate from a single source but is a collage of countless lives and data points, both digital and physical, emphasising the multifaceted nature of AI and its deep entanglement with human experience.
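For readers curious about the mechanics, the image-to-noise process Adrien describes can be sketched in a few lines. This is a loose, simplified illustration of a diffusion model’s forward (noising) step; the linear schedule and the function names are ours, chosen for clarity, and are not drawn from any particular model.

```python
import numpy as np

def noising_step(image, t, num_steps=1000):
    """Blend an image with Gaussian noise at step t of num_steps.

    At t=0 the original image survives intact; at t=num_steps it has
    dissolved into (almost) pure noise. The hidden states in between
    are the territory 'Non-image' points to.
    """
    alpha = 1.0 - t / num_steps                  # share of the image that remains
    noise = np.random.normal(0.0, 1.0, image.shape)
    return np.sqrt(alpha) * image + np.sqrt(1.0 - alpha) * noise

# The 'Non-image' itself: a square of pure Gaussian noise --
# no longer an image, and not yet an image.
non_image = np.random.normal(0.0, 1.0, (256, 256, 3))
```

Generation then runs this process in reverse, denoising step by step toward a new image; it is those intermediate states, in both directions, that normally stay hidden.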

A laptopogram based on a neutral background and populated by scattered squared portraits, all monochromatic, grouped according to similarity. The groupings vary in size, ranging from single faces to overlapping collections of up to twelve. The facial expressions of all the individuals featured are neutral, represented through a mixture of ages and genders.

Philipp Schmitt & AT&T Laboratories Cambridge / Better Images of AI / Data flock (faces) / CC-BY 4.0

“The medium is the message”

(McLuhan, Marshall, 1964).

When I asked Adrien about the artists who have inspired him, he highlighted how Marshall McLuhan’s seminal concept, “the medium is the message,” profoundly resonated with him.

This concept is crucial for understanding how AI is represented in the media. McLuhan argued that the medium itself—whether it’s a book, television, or image—shapes our perceptions and influences society more than the actual content it delivers. McLuhan’s work, particularly in Understanding Media (1964), explores how technology reshapes human interaction and societal structures. He warned that media technologies, especially in the electronic age, fundamentally alter our perceptions and social patterns. When applied to AI, this means that the way AI is visually represented can either clarify or obscure its true nature. Misleading images don’t just distort public understanding; they also shape how society engages with and responds to AI, emphasising the importance of choosing visuals that accurately reflect the technology’s reality and impact.

 “Stereotypes inside the machine”

(Adrien).

Adrien’s work explores the complex issue of stereotypes embedded within AI datasets, emphasising how AI often perpetuates and even amplifies these biases through discriminatory images, texts, and videos.


> ADRIEN: Speaking of stereotypes inside the machine, I tried to question that in one of the projects I started two years ago and I discovered that it’s a bit more complicated than what it first seems. AI is making discriminatory images or text or videos, yes. But once you see that you start to question the nature of the image in the dataset and then suddenly the responsibility shifts and now you start to question why these images were chosen or why these images were labelled that way in the dataset in the first place?

> ADRIEN: Because it’s a new medium we have the opportunity to do things the right way. We aren’t doomed to repeat the same mistakes over and over. But instead we have created something even more – or at least equally discriminatory.

> ADRIEN: And even though there are adjustments made (through Reinforcement Learning from Human Feedback) they are just kind of… small patches. The issue needs to be tackled at the core.

Image shows a white male in a suit facing away from the camera on a grey background. Text on the left side of the image reads “intelligent person.”

Adrien Limousin – Human·s 2 (2022 – ongoing)

As Adrien points out, minor adjustments or “sticking plasters” won’t suffice when addressing biases deeply rooted in our cultural and historical contexts. As an example, Google recently attempted to reduce racial bias in its Gemini image-generation algorithms. This effort was aimed at addressing long-standing issues of racial bias in AI-generated images, where people of certain racial backgrounds were either misrepresented or underrepresented. However, despite these well-intentioned efforts, the changes inadvertently introduced new biases. For instance, while trying to balance representation, the algorithms began overemphasising certain demographics in contexts where they were historically underrepresented, leading to skewed and culturally inappropriate portrayals. This outcome highlights the complexity of addressing bias in AI. It’s not enough to simply optimise in the opposite direction or apply blanket fixes; such approaches can create new problems while attempting to solve old ones. What this example underscores is the necessity for AI systems to be developed and situated within culture, history, and place.


> INTERVIEWER: Are these ethical considerations on your mind when you are using AI in your work?

> ADRIEN: Using Generative AI makes me feel complicit in these issues. So I think the way I approach it is more like trying to point out these lacks, through its results or by unravelling its inner workings.

“It’s the artist’s role to question”

(Adrien)


> INTERVIEWER: Do you feel like artists have an important role in creating new and more accurate representations of AI?

> ADRIEN: I think that’s one of the roles of the artist. To question.

> INTERVIEWER: Can you imagine what kinds of representations we might see, or might want to have, in the future, instead of the blue heads and robots that come up when you Google AI?

> ADRIEN: That’s a really good question and I don’t think I have the answer, but as I thought about that, understanding the inner workings of these systems can help us make better representations. For instance, the concepts and ideas of remixing existing representations—something that we are familiar with, that’s one solution I guess to better represent Generative AI.


Image displays an error message from the Windows 95 operating system. The text reads ‘The belief in photographic images.exe has stopped working’.

Adrien Limousin – System errors (2024 – ongoing)

We discussed the challenges involved in encouraging the media to use images that accurately reflect AI.


> ADRIEN: I guess if they used stereotyped images it’s because most people have associated AI with some kind of materialised humanoid as the embodiment of AI, and that’s obviously misleading. But it also takes time and effort to change mindsets, especially with such an abstract and complex technology. I think it is one of the roles of the media to do a better job at conveying an accurate vision of AI, while keeping a critical approach.


Another major factor is knowledge: journalists and reporters need to recognise the biases and inaccuracies in current AI representations to make informed choices. This awareness comes from education and resources like the Better Images of AI project, which aims to make this information more accessible to a wider audience. Additionally, there is a need to develop new visual associations for AI. Media rely on attention-grabbing images that are immediately recognisable, so we need new visual metaphors and associations that more accurately represent AI.

One Reality


> INTERVIEWER: So kind of a big question, but what do you feel is the most pressing ethical issue right now in relation to AI that you’ve been thinking about?

> ADRIEN: Besides the obvious discriminatory part of the dataset and outputs, I think one of the overlooked issues is the interface of these models. If we take ChatGPT, for instance, the way there is a search bar and you put text in it expecting an answer, just like a web browser’s search bar, is very misleading. It feels familiar, but it absolutely does not work in the same way. To take any output as an answer or as truth, while it is just giving the most probable next words, is deceiving, and I think that’s something we need to talk a bit more about.


One major problem with AI is its tendency to offer simplified answers to multifaceted questions, which can obscure complex perspectives and realities. This becomes especially relevant as AI systems are increasingly used in information retrieval and decision-making. For example, Google’s AI summarising search feature has been criticised for frequently presenting incorrect information. Additionally, AI’s tendency to reinforce existing biases and create filter bubbles poses a significant risk. Algorithms often prioritise content that aligns with users’ pre-existing views, exacerbating polarisation (Pariser, 2011). This is compounded when AI systems limit exposure to a variety of perspectives, potentially widening societal divides.
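Adrien’s point that a chatbot is just giving the most probable next words can be made concrete with a toy model. The sketch below is a bigram counter over a made-up corpus, nothing like a real large language model in scale, but the same in spirit: it continues text by frequency, not by truth.

```python
from collections import Counter, defaultdict

# A tiny, made-up corpus (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_probable_next(word):
    """Return the most frequent continuation -- a statistic, not an answer."""
    return follows[word].most_common(1)[0][0]

print(most_probable_next("the"))  # 'cat': the commonest follower, true or not
```

The familiar search-bar interface invites us to read such continuations as answers, which is exactly the deception Adrien describes.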

Metasynthography

(Adrien)

Adrien takes inspiration from the idea of metaphotography, which involves using photography to reflect on and critique the medium itself. In metaphotography, artists use the photographic process to comment on and challenge the conventions and practices of photography.

Building on this concept, Adrien has coined the term “metasynthography” to describe his approach to digital art.


> ADRIEN: The term metasynthography is one of the terms I have chosen to describe digital arts in general. So it’s not properly established, that’s just me doing my collaging.

> INTERVIEWER: That’s great. You’re gonna coin a new word in this blog 😉


I asked Adrien which artists inspire him. He discussed the influence of Robert Ryman, a renowned painter celebrated for his minimalist approach, which focuses on the process of painting itself. Ryman’s work often features layers of paint on canvas, emphasising the act of painting and making the medium and its processes central themes in his art.


> ADRIEN: I recently visited an exhibition of Robert Ryman, which kind of does the same with painting – he paints about painting on painting, with painting.

> INTERVIEWER:  Love that.

> ADRIEN: I thought that’s very interesting and I very much enjoy this kind of work, it talks about the medium… It’s a bit conceptual, but it raises questions about the medium… about the way we use it, about the way we consume it.

Image displays a large advertising board displaying a blank white image, the background is a grey clear sky

Adrien Limousin – Lorem Ipsum (2024 – ongoing)

As we navigate the evolving landscape of AI, the intersection of art and technology provides a crucial perspective on the impact and implications of these systems. By championing accurate representations and confronting inherent biases, Adrien’s work highlights the essential role artists play in shaping a more nuanced and informed dialogue about AI. It’s not only important to highlight AI’s inner workings but also to recognise that imagery has the power to shape reality and our understanding of these technologies. Everyone has a role in creating AI that works for society, countering the hype and capitalist-driven narratives advanced by tech companies. Representations from communities, along with the voices of individuals and artists, are vital for sharing knowledge, making AI more accessible, and bringing attention to the experiences and perspectives often rendered invisible by AI systems and media narratives.


Adrien Limousin (interviewee) is a 25-year-old French (post)photographer exploring the other side of images, currently studying at the National Superior School of Photography in Arles.

Cherry Benson (interviewer) is a Student Steward for Better Images of AI. She holds a degree in psychology from London Metropolitan University and is currently pursuing a Master’s in AI Ethics and Society at the University of Cambridge, where her research centres on social AI. Her work on the intersection of AI and border control has been featured as a critical case study in the Cambridge Journal of Artificial Intelligence, examining how racial capitalism is deeply intertwined with the development and deployment of AI.

💬 Behind the Image with Yutong from Kingston School of Art

This year, we collaborated with Kingston School of Art to give MA students the task of creating their own better images of AI as part of their final project. 

In this mini-series of blog posts called ‘Behind the Images’, our Stewards are speaking to some of the students that participated in the module to understand the meaning of their images, as well as the motivations and challenges that they faced when creating their own better images of AI. Based on our assessment criteria, some of the images will also be uploaded to our library for anyone to use under a Creative Commons licence.

In our third and final post, we go ‘Behind the Image’ with Yutong about her pieces, ‘Exploring AI’ and ‘Talking to AI’. Yutong intends that her art will challenge misconceptions about how humans interact with AI.

You can freely access and download ‘Talking to AI’ and both versions of ‘Exploring AI’ from our image library.

Both of Yutong’s images are available in our library, but as you might discover below, there were many challenges that she faced when developing these works. We greatly appreciate Yutong letting us publish her images and talking to us for this interview. We are hopeful that her work and our conversations will serve as further inspiration for other artists and academics who are exploring representations of AI.

Can you tell us a bit about your background and what drew you to the Kingston School of Art?

Yutong is from China and, before starting the MA in Illustration at Kingston University, she completed an undergraduate major in Business Administration. What drew Yutong to Kingston School of Art was the highly regarded reputation of its illustration course. She also enjoys how the illustration course at Kingston balances both the commercial and academic aspects of art, allowing Yutong to combine her previous studies with her creative passions.

Could you talk me through the different parts of your images and the meaning behind them?

In both of her images, Yutong wishes to unpack the interactions between humans and AI – albeit from two different perspectives.

‘Talking to AI’

Firstly, ‘Talking to AI’ focuses on more accurately representing how AI works. Yutong uses a mirror to reflect how our current interactions with AI are based on our own prompts and commands. At present, AI cannot generate content independently, so it reflects the thoughts and opinions that humans feed into systems. The binary code behind the mirror symbolises how human prompts and data are translated into the computer language which powers AI. Yutong has used a mirror to capture an element of human and AI interaction that is often overlooked: the blurred transition from human work to AI generation.
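The translation step that the binary code in the image alludes to is real and easy to show: text prompts are encoded as bytes, and bytes are just patterns of bits. A minimal sketch (the helper name is ours):

```python
def to_binary(text):
    """Show the bit pattern behind a prompt: UTF-8 bytes rendered as 0s and 1s."""
    return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

print(to_binary("Hi"))  # 01001000 01101001
```

Everything an AI system receives from us passes through an encoding like this before any computation happens.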

‘Exploring AI’

Yutong’s second image, ‘Exploring AI’, aims to shed light on the nuanced interactions that humans have with AI on multiple levels. Firstly, the text ‘Hi, I am AI’ pays homage to an iconic phrase in programming (‘Hello World’), which is often the first thing any coder learns to write; it also forms the foundation of a coder’s understanding of a programming language’s syntax, structure, and execution process. Yutong thought this was fitting for her image as she wanted to represent the rich history and applications of AI, which has its roots in basic code.

Within ‘Exploring AI’, each grid square is used to represent the various applications of AI in different industries. The expanded text across multiple grid squares demonstrates how one AI tool can have uses across different industries; ChatGPT is a prime example of this.

However, Yutong also wants to draw attention to the figures within each square, which all interact with AI in complex and different ways. For example, the body language of the figures depicts them as variously frustrated, curious, playful, sceptical, affectionate, indifferent, or excited towards the text, ‘Hi, I am AI’.

Yutong wants to show how our human response to AI changes and varies contextually, driven by our own personal conceptions of AI. From her own observations, Yutong identified that most people either have a very positive or very negative opinion of AI, but not many feel anything in between. By including all the different emotional responses towards AI in this image, Yutong hopes to introduce greater nuance into people’s perceptions of AI and help people understand that AI can evoke different responses in different contexts.

What was your inspiration/motivation for creating your images?

As an illustrator, Yutong found herself surrounded by artists who were fearful that AI would replace their role in society. Yutong found that people are often fearful of the unknown and of things they cannot control. Therefore, by improving understanding of what AI is and how it works through her art, Yutong hopes to help her fellow creators face their fears and better understand their creative role in the face of AI.

Through her art, ‘Exploring AI’ and ‘Talking to AI’, Yutong intends to challenge misconceptions about what AI is and how it works. As an AI user herself, she has realised that human illustrators cannot be replaced by AI – these systems are reliant on the works of humans and do not yet have the creative capabilities to replace artists. Yutong is hopeful that by being better educated on how AI integrates in society and how it works, artists can interact with AI to enhance their own creativity and works if they choose to do so. 

Was there a specific reason you focused on dispelling misconceptions about what AI looks like and how ChatGPT (or other large language models) work?

Yutong wanted to focus on how AI and humans interact in the creative industry and she was driven by her own misconceptions and personal interactions with AI tools. Yutong does not intend for her images to be critical of AI. Instead, she envisages that her images can help educate other artists and prompt them to explore how AI can be useful in their own works. 

Can you describe the process for creating this work?

From the outset, Yutong began to sketch her own perceptions and understandings about how AI and humans interact. The sketch below shows her initial inspiration. The point at which each shape overlaps represents how humans and AI can come together and create a new shape – this symbolises how our interactions with technology can unlock new ideas, feelings and also, challenges.

In this initial sketch, she chose to use different shapes to represent the universality of AI and how its diverse application means that AI doesn’t look like one thing – AI can underlay an automated email response, a weather forecast, or medical diagnosis. 

Yutong’s initial sketch for ‘Talking to AI’

The project aims to counteract common stereotypes and misconceptions about AI. How did you incorporate this goal into your artwork? 

In ‘Exploring AI’, Yutong wanted to introduce a more nuanced approach to AI representation by unifying different perspectives about how people feel, experience and apply AI in one image. From discussions with people utilising AI in different industries, she recognised that those who were very optimistic about AI didn’t recognise its shortfalls, and vice versa. Yutong believes that humans have a role in helping AI reach new technological advancements, and AI can also help humans flourish. In Yutong’s own words, “we can make AI better, and AI can make us better”.

Yutong found talking to people in the industry, as well as conducting extensive research about AI, very important to ensure that she could more accurately portray AI’s uses and functions. She points to the fact that she used binary code in ‘Talking to AI’ after researching how this most fundamental layer of computer language underpins many AI systems.

What have been the biggest challenges in creating a ‘better image of AI’? Did you encounter any challenges in trying to represent AI in a more nuanced and realistic way?

Yutong reflects on the fact that no matter how much she rethought or restarted her ideas, there was always some level of bias in her depiction of AI because of her own subconscious feelings towards the technology. She also found it difficult to capture all the different applications of AI, as well as the various implications and technical features of the technology in a single visual image. 

Through tackling these challenges, Yutong became aware of why Better Images of AI is not called ‘Best Images of AI’: the latter would be impossible. She hopes that while she could not produce the ‘best image of AI’, her art can serve as a better image compared to those typically used in the media.

Based on our criteria for selecting images, we were pleased to accept both your images but asked you if it was possible to make amendments to ‘Exploring AI’ to make the figures more inclusive. What do you think of this feedback and was it something that you considered in your process? 

In relation to Yutong’s image, ‘Exploring AI’, Better Images of AI asked whether an additional version could be made with the figures in different colours, to better reflect the diverse world that we live in. Being inclusive is very important to Better Images of AI, especially as visuals of AI, and of those who are creating AI, are notoriously unrepresentative.

Yutong agreed that this change would enhance the image, and being inclusive in her art is something she is actively trying to improve. She reflects on this suggestion by saying, ‘just as different AI tools are unique, so are individual humans’.

The two versions of ‘Exploring AI’ available on the Better Images of AI library

How has working on this project influenced your own views about AI and its impact? 

During this project, Yutong has been introduced to new ideas and has developed her own opinions about AI based on research from academic journals. She says that informing her opinions using sources from academia was beneficial compared to relying on information provided by news outlets and social media platforms, which often contain their own biases and inaccuracies.

From this project, Yutong has been able to learn more about how AI could be incorporated into her future career as a human and AI creator. She has become interested in the Nightshade tool that artists have been using to prevent AI companies from using their art to train AI systems without the owner’s consent. She envisages a future career where she could help artists collaborate with AI companies, supporting the rights of creators and preserving the creativity of their art.

What have you learned through this process that you would like to share with other artists and the public?

By chatting to various people interacting with and using AI in different ways, Yutong has been introduced to richer ideas about the limits and benefits of AI. Yutong challenges others to talk to people who are working with AI, or are impacted by its use, to gain a more comprehensive understanding of the technology. She believes that it’s easy to form a biased opinion about AI by relying on information shared by a single source, like social media, so we should escape these echo chambers. Yutong believes it is important that people diversify who they surround themselves with to better recognise, challenge, and appreciate AI.

Yutong (she/her) is an illustrator with whimsical ideas, also an animator and graphic designer.

🪄 Behind the Image with Minyue from Kingston School of Art

The image shows a colourful illustration of a story-like scene, with two half star characters performing various tasks. The stars, along with a wizard, are interacting with drawings, magnifying glasses, and magic-like elements. Below that, there is a scene with a fantasy landscape, including a castle and dragon. To the right of the image, text reads: 'Behind the Image with Minyue' and below that, a tagline reads: 'Let AI Become Your Magic Wand' which is the name of Minyue's image submission. The background of the image is light blue.

This year, we collaborated with Kingston School of Art to give MA students the task of creating their own better images of AI as part of their final project.

In this mini-series of blog posts called ‘Behind the Images’, our Stewards are speaking to some of the students that participated in the module to understand the meaning of their images, as well as the motivations and challenges that they faced when creating their own better images of AI.

In our second post, we go ‘Behind the Image’ with Minyue about her piece, ‘Let AI Become Your Magic Wand’. Minyue wants to draw attention to the overlooked human input in AI generated art and challenges those who believe AI will replace artists.

‘Let AI Become Your Magic Wand’ is not available in our library as it did not match all the criteria due to challenges which we explore below. However, we greatly appreciate Minyue letting us publish her images and talking to us. We are hopeful that her work and our conversation will serve as further inspiration for other artists and academics who are exploring representations of AI.

Can you tell us a bit about your background, and what drew you to the Better Images of AI project at Kingston School of Art? 

Minyue is from China and previously studied a foundation course in the UK before starting the Master’s in Illustration at Kingston University. Before starting the Master’s, Minyue had limited knowledge of AI; she had only seen discussions about it on social media, especially from artists fearful that AI tools were capable of copying their work without their consent. At the same time, Minyue also saw many fellow creators developing impressive works using AI generator tools, whether in the ideation phase or to create the final artwork.

Confused about her own perception of AI, Minyue was drawn to the Better Images of AI project to learn more about the relationship between humans and AI in the creative process. 

Could you talk us through the different parts of your image and the meaning behind it? 

Minyue’s Final Image, ‘Let AI Become Your Magic Wand’

Minyue’s piece is focussed on two halves of a star. One half is called the ‘evaluation half star’ which represents AI’s image recognition capabilities (the technical term is the ‘Discriminator’). For Minyue, recognition capabilities refer to AI’s ability to interpret and understand input data. For image generator tools, AI systems are trained on vast amounts of imagery so that they can identify key features and elements of a picture. This could involve recognising objects, styles, colours or other visual aspects. Therefore, in generating an image of a chick (as shown in Minyue’s image), the evaluation half star is focussed on interpreting what distinctive features the training data classifies as a true representation of a chick – like perhaps the yellow colour and the shape of a beak.  

The other half is called the ‘creation half star’ which portrays the image construction capabilities of AI tools (the technical term is the ‘Generator’). The Generator enables AI to create new, coherent images based on the evaluation half star’s understanding of input data. 

Therefore, together, Minyue’s image shows how the half stars make a full star, capable of generating AI art based on user prompts and trained on vast image datasets. You’ll see that in the bottom part of Minyue’s image, in the computer tab, she indicates that the full star (consisting of the creation and evaluation half stars) makes up a magic wand when combined with a pencil. The pencil symbolises the human labour behind the training of both the evaluation and creation half stars.
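Minyue’s two half stars correspond to the two networks of a Generative Adversarial Network: the Generator creates, the Discriminator evaluates. The sketch below uses toy stand-ins (a sigmoid “critic” and a tanh “creator”) rather than trained neural networks; the function names are ours, chosen to mirror her metaphor.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluation_half_star(image):
    """Discriminator stand-in: scores how 'real' an image looks, from 0 to 1."""
    return 1.0 / (1.0 + np.exp(-image.mean()))

def creation_half_star(noise):
    """Generator stand-in: turns random noise into a candidate image."""
    return np.tanh(noise)

# One adversarial round: the creation half star makes an image,
# the evaluation half star judges it. In real training, each half's
# feedback pushes the other to improve.
noise = rng.normal(size=(8, 8))
candidate = creation_half_star(noise)
realism = evaluation_half_star(candidate)
```

In an actual GAN both halves are learned from data, which is the human labour Minyue’s pencil stands for: people assemble the datasets, write the prompts, and steer the training.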

Without being guided by humans, Minyue believes that these two half stars would not exist. It is humans that have created the input data, it is humans that prompt AI tools to create certain images, and it is humans that train the AI systems to be able to create these images in different ways. Therefore, her piece highlights the crucial human element of AI art which is often overlooked. 

Lastly, Minyue also hopes to emphasise that the combination of these AI tools with humans offers a new avenue for realising human creativity. That is why she has chosen to use a wizard and magic wand to depict how AI and humans, when working together, can be magical. 

Better Images of AI aims to counteract common stereotypes and misconceptions about AI. How did you incorporate this goal into your artwork?

Minyue emphasised that the main misconception that she wanted to focus on is that AI is a tool requiring rational human use, rather than an autonomous creator. When looking at her work, Minyue wanted people to contemplate, “who is controlling the magic?”, and prompt us to think more carefully about the role of humans in AI art. 

What was your motivation/inspiration for creating ‘Let AI Become Your Magic Wand’?

Firstly, as an illustration student, Minyue was particularly interested in the role of AI in the creative industry. The metaphor of the magic wand comes from her observation of artists who skilfully use new technologies to create their work, which made her feel as if she were watching a magical performance.

Secondly, Minyue wanted to raise awareness to the fact that using AI image generators still requires human skill, creativity and imagination. A wizard can only perform magic if they are trained to use the wand. In the same way, AI can assist artists to create, but artists must learn how to use this technology to develop innovative, appealing, and meaningful works of art. 

Minyue’s early sketch shows how she wanted to distinguish between the human (wizard) and AI (in the magic wand)

Finally, she hopes to dispel the idea that AI art will limit creativity or the work of human artists – instead, if creators choose to work with AI, it could also enhance their capabilities and usher in a new genre of art. 

Based on Better Images of AI’s criteria for selecting images, we had to make the difficult decision not to upload this image to our library. We made this choice based on closer scrutiny of the magic wand metaphor, which could be misconceived as promoting the idea that AI is magic (a rhetoric commonly pursued by technology companies). 

What do you think of this feedback, and was this idea something that you ever considered in your process? 

Minyue understood the concerns and appreciated the feedback provided by Better Images of AI, which made her reconsider how her work could be misleading in some respects, and the challenges of relying on metaphors to communicate difficult ideas. Her intention was that the magic wand metaphor would prompt individuals to think more deeply about who is in control of AI art, and also about how AI could advance the creative industry if used safely and ethically. However, she is aware that, coupled with the technology industry’s widespread use of magical symbols to represent AI (for example, the logo for Zoom’s AI Smart Assistant or Google’s AI chatbot Gemini), her image could unintentionally be perceived to suggest that AI alone is magical.

Was there a specific reason that you focussed on dispelling misconceptions about the human element of AI art, especially in relation to image generation? 

Minyue strongly believes that the creative power of AI comes from human inspiration and human creativity. She hopes her work will convey that AI art is rooted in human creativity and labour, a fact often overlooked in media narratives about AI replacing artists, which leads to misunderstandings.

A lot of the inspiration for Minyue holding this view has come from her reflections on how past technologies have integrated into the creative industry. For example, painters were originally fearful of the widespread adoption of photography, since it offered a faster and cheaper means of reproducing and disseminating images. But over time, Minyue believes, photography developed its own unique styles and languages, with photographers moving away from imitating traditional art to explore distinctly photographic expressions. Minyue believes that AI may likewise evolve into a tool for producing a new art form. 

Can you describe your process for creating ‘Let AI Become Your Magic Wand’?

Minyue detailed the very long process that led her to the final creation. She recalled how having the Better Images of AI Guide was helpful, but she still struggled because her initial understanding of AI was quite limited.

Therefore, Minyue took time to carefully research the more technical aspects of AI image generation so she could more accurately represent how AI image generators work and their relationship with human creators. Below you can see how she researched the technical elements of AI image generation as well as its use in different contexts. 

Minyue’s research about technical aspects of AI image generators and their applications

Minyue’s initial sketches also show how she was interested in portraying the relationship between humans and technology.

One of Minyue’s initial sketches when exploring ideas for the Better Images of AI project

Minyue aims to create more engaging and approachable AI images to help non-experts understand AI technology and reduce public fear of new technologies. This was also one of her reasons for choosing to participate in the Better Images of AI project.

What have been your biggest challenges in creating a better image of AI? Did you encounter any challenges in trying to represent AI in a more nuanced and realistic way?

Minyue faced difficulties in challenging the views on AI that had been presented to her by the media. In contrast to many of the other images in the Better Images of AI library, Minyue also wanted to promote a more optimistic narrative about AI – that AI can benefit humans and enhance our own creative outputs. 

Another challenge that Minyue faced was distinguishing between AI and computers or robots. One of her initial sketches shows how, in the early stages of this project, she overlooked the fact that AI has numerous applications beyond computer programs.

Another of Minyue’s sketches, which shows her challenges in working out how to illustrate AI

What have you learned through this process that you would like to share with other artists or the public? 

Minyue says that while artists are often driven by their passions when creating their works, it is important to consider how art might cause misunderstandings if creators are not guided by in-depth research and detailed expression. Minyue’s hope is that other artists will focus on this in order to promote a more realistic and accurate understanding of AI. 


Minyue Hu (she/her) is about to graduate from Kingston University with a Master’s degree in Illustration. In the coming year, she will be staying in the UK to continue her work as an artist and actively create new pieces. Minyue’s inspiration often centres on human experience and emotion, with the aim of combining personal stories with social contexts to prompt viewers to reflect on their own experiences. Her final project, Daughters of the Universe, is set to be released soon, and she looks forward to sharing it with you. 

👤 Behind the Image with Ying-Chieh from Kingston School of Art

This year, we collaborated with Kingston School of Art to give MA students the task of creating their own better images of AI as part of their final project. 

In this mini-series of blog posts called ‘Behind the Images’, our Stewards are speaking to some of the students that participated in the module to understand the meaning of their images, as well as the motivations and challenges that they faced when creating their own better images of AI. Based on our assessment criteria, some of the images will also be uploaded to our library for anyone to use under a creative commons licence. 

In our first post, we go ‘Behind the Images’ with Ying-Chieh Lee about her images, ‘Can Your Data Be Seen’ and ‘Who is Creating the Kawaii Girl?’. Ying-Chieh hopes that her art will raise awareness of how biases in AI emerge from homogenous datasets and unrepresentative groups of developers, which can lead to AI that marginalises members of society, such as women. 

You can freely access and download ‘Who is Creating the Kawaii Girl’ from our image library by clicking here.

‘Can Your Data Be Seen’ is not available in our library as it did not match all the criteria due to challenges which we explore below. However, we greatly appreciate Ying-Chieh letting us publish her images and talking to us. We are hopeful that her work and our conversation will serve as further inspiration for other artists and academics who are exploring representations of AI.

Can you tell us a bit about your background, and what drew you to the MA at Kingston University?

Ying-Chieh originally comes from Taiwan and has been creating art since she was about 10 years old. During her undergraduate degree, Ying-Chieh studied sculpture, and she then worked for a year. Whilst working, she really missed drawing, so she started freelance illustration, but she wanted to develop her art skills further, which led her to Kingston School of Art. 

Could you talk me through the different parts of your images and the meaning behind them?

‘Can Your Data Be Seen?’

‘Can Your Data Be Seen?’ shows figures representing different subjects in datasets, but the cast light illustrates how only certain groups are captured in the training of AI models. Furthermore, the uniformity and factory-like depiction of the figures criticises how AI datasets often quantify the rich, lived experiences of humans into data points which do not capture the nuances and diversity of many human individuals. 

Ying-Chieh hopes that the image highlights the homogeneity of AI datasets and also draws attention to the invisibility of certain individuals who are not represented in training data. Those who are excluded from AI datasets are usually from marginalised communities, who are frequently surveilled, quantified and exploited in the AI pipeline, but are excluded from the benefits of AI systems due to the domination of privileged groups in datasets. 

‘Who’s Creating the Kawaii Girl’

In ‘Who’s Creating the Kawaii Girl’, Ying-Chieh shows a young female character in a school uniform which represents the Japanese artistic and cultural ‘Kawaii’ style. The Kawaii aesthetic symbolises childlike innocence, cuteness, and the quality of being lovable. Kawaii culture began to rise in Japan in the 1970s through anime, manga and merchandise collections – one of the most recognisable is the Hello Kitty brand. The ‘Kawaii’ aesthetic is often characterised by pastel colours, rounded shapes, and features which evoke vulnerability, like big eyes and small mouths. 

In the image, Ying-Chieh has placed the Kawaii Girl in the palm of an anonymous, sinister figure, suggesting a sense of vulnerability and of power held over the Girl. The faint web-like pattern on the figures and the background symbolises the unseen influence that AI has on how media is created and distributed, which often reinforces stereotypes or facilitates exploitation. The image criticises the overwhelmingly male-dominated AI industry, which frequently uses technology and content generation tools to reinforce ideologies about women being controlled by and subservient to men. For example, there has been a rise in nonconsensual deepfake pornography created by AI tools, and regressive gender-role stereotypes are being reinforced by information provided by large language models, like ChatGPT. Ying-Chieh hopes that ‘Who’s Creating the Kawaii Girl’ will challenge people to think about how AI can be misused and its potential to perpetuate harmful gender stereotypes that sexualise women. 

What was the inspiration/motivation for creating your images, ‘Can Your Data Be Seen’ and ‘Who’s Creating the Kawaii Girl?’? 

At the outset, Ying-Chieh wasn’t very familiar with AI or the negative uses and implications of the technology. To explore how it was being used, she looked on Facebook and found a group being used to share lots of offensive images of women generated by AI. When she looked into the group further, she realised that it was not small; it had a large number of active users, most of whom were men. This was Ying-Chieh’s initial inspiration for the image, ‘Who’s Creating the Kawaii Girl?’. 

However, this Facebook group also prompted Ying-Chieh to think more deeply about how the users were able to generate these sexualised images of women and girls so easily. A lot of the images represented a very stereotypical model of attractiveness, which led her to think about how the underlying datasets of these AI models were most probably very unrepresentative, reinforcing stereotypical standards of beauty and attractiveness. 

Was there a specific reason you focussed on issues like data bias and gender oppression related to AI?

Gender equality has always been something that Ying-Chieh has been passionate about, but she had never considered how the issue related to AI. She came to realise that AI’s relationship to this issue wasn’t so different from that of other industries which oppress women, because AI is fundamentally produced by humans and fed by data that humans have created. Therefore, the problems with AI being used to harm women are not isolated to the technology, but rooted in systemic social injustices that have long mistreated and misrepresented women and other marginalised groups.

Ying-Chieh’s sketch of the AI ‘bias loop’

In her research stages, Ying-Chieh explored the ‘bias loop’, which represents how AI models trained on data selected by humans, or derived from historical data, will create biased images. At the same time, the images created by AI serve as new training data, further embedding our historical biases into future AI tools. The concept of the ‘bias loop’ resonated with Ying-Chieh’s interest in gender equality and made her concerned about uses and developments of AI which privilege some groups at the expense of others, especially where this repeats itself and causes inescapable cycles of injustice. 
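The feedback dynamic of the bias loop can be made concrete with a toy simulation (a purely hypothetical sketch for illustration, not part of Ying-Chieh’s work or a real training pipeline): a “model” trained on a slightly skewed dataset over-represents the majority group in its outputs, those outputs are fed back into the training data, and the skew compounds over generations.

```python
import random

def train_and_generate(dataset, n_outputs, seed=None):
    """Toy 'model': samples new images (labels) in proportion to the
    training data, slightly over-representing the majority group to
    mimic mode-seeking behaviour in generative models."""
    rng = random.Random(seed)
    share_a = dataset.count("A") / len(dataset)
    # Amplification (an assumption of this sketch): whichever group
    # already dominates the training data gets a boost in the outputs.
    p_a = min(1.0, share_a * 1.15) if share_a >= 0.5 else share_a * 0.85
    return ["A" if rng.random() < p_a else "B" for _ in range(n_outputs)]

# Start with a mildly skewed dataset: 60% group A, 40% group B.
dataset = ["A"] * 60 + ["B"] * 40
for generation in range(5):
    outputs = train_and_generate(dataset, 100, seed=generation)
    dataset += outputs  # generated images re-enter the training pool
    print(f"after generation {generation}: "
          f"group A share = {dataset.count('A') / len(dataset):.2f}")
```

Run repeatedly and the majority group’s share drifts upwards: the “inescapable cycle” the bias loop describes. Real generative models amplify bias through far subtler mechanisms, but the feedback structure is the same.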

Can you describe the process for creating this work?

Ying-Chieh started by developing some initial sketches and engaging in discussions with Jane, the programme coordinator, about her work. As you can see below, ‘Who’s Creating the Kawaii Girl’ has evolved significantly from its initial sketch, while ‘Can Your Data Be Seen?’ has remained quite similar to Ying-Chieh’s original design. 

The initial sketches of ‘Can Your Data Be Seen?’ (left) and ‘Who’s Creating the Kawaii Girl?’ (right)

Ying-Chieh also engaged in some activities during classes which helped her to learn more about AI and its ethical implications. One of these games, ‘You Say, I Draw’, involved one student describing an image and the other student drawing it based purely on their partner’s description, without knowing what they were drawing.

This game highlighted the role that data providers and prompters play in the development of AI and challenged Ying-Chieh to think more carefully about how data was being used to train content generation tools. During the game, she realised that the personality, background, and experiences of the prompter really influenced what the resulting image looked like. In the same way, the type of data and the developers creating AI tools can really influence the final outputs and results of a system. 

An image of the results from the ‘You Say, I Draw’ activity

Better Images of AI aims to counteract common stereotypes and misconceptions about AI. How did you incorporate this goal into your artwork? 

Ying-Chieh’s aim was to explore and address biases present in AI models in order to contribute to the Better Images of AI mission so that the future development of AI can be more diverse and inclusive. She hopes that her illustrations will make it easier for the public to understand issues about biases in AI which are often inaccessible or shielded from wider comprehension.

Her images draw attention to how AI’s training data is biased and how AI is being used to reinforce gender stereotypes about women. From this, Ying-Chieh hopes that further action can be taken to improve data collection and processing methods, as well as to introduce more laws and rules limiting image generation where it exploits or harms individuals. 

What have been the biggest challenges of creating a ‘better image of AI’? Did you encounter any challenges in trying to represent AI in a more nuanced and realistic way? 

Ying-Chieh spoke about her challenges in trying to strike the right balance between designing images that could be widely used and recognised by audiences as related to AI but also not falling into any common tropes that misrepresented AI (like robots, descending code, the colour blue). She also found it difficult to not make images too metaphorical to the extent that they may be misinterpreted by audiences.

Based on our criteria for selecting images, we were pleased to accept ‘Who’s Creating the Kawaii Girl?’, but had to make the difficult decision not to upload ‘Can Your Data Be Seen’ because it did not communicate and conceptualise AI clearly enough. What do you think of this feedback, and was it something that you considered in the process? 


Ying-Chieh shared that, throughout the design process, she had been conscious that her images might not be easily recognisable as communicating ideas about AI. She made some efforts to counteract this: for example, in ‘Can Your Data Be Seen’ she made the figures all identical to represent data points, and the lighter coloured lines on the faces and bodies of the figures represent the technical elements behind AI image recognition technology.

How has working on this project influenced your own views on AI and its impact? 

Before starting this project, Ying-Chieh said that her opinion towards AI had been quite positive. She was largely influenced by things that she had seen and read in the news about how AI was going to benefit society. However, from her research on Facebook, she has become increasingly aware that this is not entirely true. There are many dangerous ways that AI can be used which are already lurking in the shadows of our daily lives.

What have you learned through this process that you would like to share with other artists or the public?

The biggest takeaway from this project for Ying-Chieh is how camera angles, zooming, or object positioning can strongly influence the message that an image conveys. For example, in the initial sketches of ‘Can Your Data Be Seen’, Ying-Chieh explored how she could best capture the relationship of power through different depths of perspective.  

Various early sketches of ‘Can Your Data Be Seen’ from different depths of perspective

Furthermore, when exploring ideas about how to reflect the oppressive nature of AI, Ying-Chieh enlarged the shadow’s presence in the frame for ‘Who’s Creating the Kawaii Girl’. By doing this, the shadow reinforces the strong power that elite groups hold over the creation of content about marginalised groups, a power which is often hidden from wider view. 

Ying-Chieh’s exploration of how the photographer’s angle can reflect different positions of power and vulnerability

Ying-Chieh Lee (she/her) is a visual creator, illustrator, and comic artist from Taiwan. Her work often focuses on women-related themes and realistic, dark-style comics.


Better Images of AI’s Partnership with Kingston School of Art

An image with a light blue background that reads, 'Let's Collab!' at the top, the word 'Collab' underlined in burgundy. Below that, it says 'Better Images of AI x Kingston School of Art' with 'Kingston School of Art' in teal. Below the text is an illustration of two hands high-fiving, with black sleeves and white hands. Around the hands are burgundy stars.

This year, we were pleased to partner with Kingston School of Art to run an elective for their MA Illustration, Animation, and Graphic Design students to create their own ‘better images of AI’. Following this collaboration, some of the students’ images have been published in our library for anyone to use freely. Their images focus on communicating different ideas about the current state of AI – from the connection between the technology and gender oppression to breaking down the interactions between humans and AI chatbots.

In this blog post, we speak to Jane Cheadle who is the course leader for the MA Animation course at Kingston School of Art about partnering with Better Images of AI for the elective. The MA is a new course and it is focussed on critical and research-led animation design processes.

If you’re interested in running a similar module/elective or incorporating Better Images of AI’s work into your university course, we would love to hear from you – please contact info@betterimagesofai.org.

How did the collaboration with Better Images of AI come about?

AI is having an impact on various industries, and the creative domain is no exception. Jane explains how she and the staff in the department were asked to work towards developing a strategy addressing the use of AI in the design school. At the same time, Jane was also in contact with Alan Warburton – a creator who works with various technologies, including computer generated imagery, AI, virtual reality, and augmented reality, to develop art. Alan introduced Jane to Better Images of AI, and she became interested in the work that we are doing and how it linked to their future strategy for the use of AI in the design school.

Therefore, instead of solely creating rules about the use of AI in the school, Jane thought that working with the students to explore the challenges, limits, and benefits of the technology would be more meaningful as it would provide better learning opportunities for the students (as well as herself!) about this topic. 

Where does the elective fit within the school’s curriculum?

Kingston University’s Town House Strategy aims to prepare graduates for advances in technology which will alter our future society and workplaces. The strategy aims to equip students with enhanced entrepreneurial, digital, and creative problem-solving skills so they can better advance their careers and professional practice. As part of this strategy, Kingston University encourages collaboration and partnership with businesses and external bodies to help advance students’ knowledge and awareness of the different aspects of the working world.

As part of this, the Kingston School of Art runs a cross-disciplinary design module open to students from three different MA courses (Graphic Design, Illustration, and Animation). In this module, students are asked to think about the role of the designer now, and what it might look like in the future. The goal is to prompt students to situate their creative practice within the contemporary paradigms of precarity and uncertainty, providing space for students to understand and address issues such as climate literacy, design education, and the future of work. There are multiple electives within this module and each works with a partner external to the university.

Better Images of AI were fortunate enough to be approached by Jane to be the external partner for the elective, which was run by Jane as well as researcher and artist Maybelle Peters. Jane explains that the module had a dual aim: firstly, to allow students to develop better images of AI which could be published to our library; and secondly, to educate students about AI and its impact on society. For Jane, it was important that when exploring AI, this was applied to the students’ own practice and positionality, so they could understand how AI is influencing the creative industry as well as political and power structures more broadly.

How did the elective run?

Jane shares that there was a real divide amongst the students in their familiarity with AI and its wider context. Some students had been dabbling with AI tools and wanted to develop a position on their creative and ethical use. Meanwhile, others were not using AI at all and expressed being somewhat wary of it, alongside a real sense of amorphous fear around automated image generation and other capabilities that impact the markets for their creative work.

Better Images of AI worked with the Kingston School of Art to provide a brief for the elective, and students also used our Guide to help them understand the problems with current stock imagery that is used to illustrate AI so they could avoid these common tropes in their own work.

Following this, the students worked in special interest groups to research different aspects of AI. Each group then used this research to develop practical workshops to run with the wider class. This enabled the students to develop their own better images of AI based on what they had learnt from leading and participating in workshops and research tasks. Better Images of AI also visited Kingston School of Art to provide guidance and feedback to the students in the development stages of their images.

Some of the images that were submitted as part of the elective can be seen below. Each image shows a thoughtful approach, and they are varied in nature – some are super low-fi and others are hilarious – but all the students drew upon their own design/drawing/making skills to develop their unique images. 

Why did you think it was important to partner with Better Images of AI for this elective?

As designers and image makers, we agreed that there is a responsibility to accurately and responsibly represent aspects of the world, such as AI. It was important to allow students to work with real constraints and build towards a future that they want to live in. While the brief provided to the students was to create images that accurately represent what AI looks like right now, much of the student workshops focussed on what kind of AI they wanted to see, what safeguards need to be put in place, and what power relations we might need to change in order to get there.

Jane Cheadle (she/they) is an animator, researcher and educator. Jane is currently senior lecturer and MA Animation course leader in the design school at Kingston School of Art. Jane’s practice and research are both cross-disciplinary and experimental, with a focus on drawing, collaboration and expanded animation.


We are super thankful to Jane and Maybelle, as well as the Kingston School of Art, for incorporating Better Images of AI into their elective. We are so appreciative of all the students who participated in the module and shared their work with us. Jane is excited to hopefully run the elective again, and we are looking forward to more work together with the students and staff at Kingston School of Art.

This blog post is the first in a series of posts about Better Images of AI’s collaboration with the Kingston School of Art. In a series of mini interview blog posts, we speak to three students who participated in the elective and designed their own better images of AI. Some of the students’ images even feature in our library – you can view them here.

Visuals of AI in the Military Domain: Beyond ‘Killer Robots’ and towards Better Images?

In this blog post, Anna Nadibaidze explores the main themes found across common visuals of AI in the military domain. Inspired by the work and mission of Better Images of AI, she argues for the need to discuss and find alternatives to images of humanoid ‘killer robots’. Anna holds a PhD in Political Science from the University of Southern Denmark (SDU) and is a researcher for the AutoNorms project, based at SDU.

The integration of artificial intelligence (AI) technologies into the military domain, especially weapon systems and the process of using force, has been the topic of international academic, policy, and regulatory debates for more than a decade. The visual aspect of these discussions, however, has not been analysed in depth. This is both puzzling, considering the role that images play in shaping parts of the discourses on AI in warfare, and potentially problematic, given that many of these visuals, as I explore below, misrepresent major issues at stake in the debate.

In this piece I provide an overview of the main themes that one may observe in visual communication in relation to AI in international security and warfare, discuss why some of these visuals raise concerns, and argue for the need to engage in more critical reflections about the types of imagery used by various actors in the debate on AI in the military.

This blog post is based on research conducted as part of the European Research Council funded project “Weaponised Artificial Intelligence, Norms, and Order” (AutoNorms), which examines how the development and use of weaponised AI technologies may affect international norms, defined as understandings of ‘appropriateness’. Following the broader framework of the project, I argue that certain visuals of AI in the military, by being (re)produced via research communication and media reporting, among others, have potential to shape (mis)perceptions of the issue.

Why reflecting upon images of AI in the military matters

As with the field of AI ethics more broadly, critical reflections on visual communication in relation to AI appear to be minimal in global discussions about autonomous weapon systems (AWS)—systems that can select and engage targets without human intervention—which have been ongoing for more than a decade. The same can be said for debates about responsible AI in the military domain, which have become more prominent in recent years (see, for instance, the initiative of the Responsible AI in the Military Domain Summit held first in 2023, with another edition due in 2024).

Yet, examining visuals deserves a place in the debate on responsible AI in the military domain. It matters because, as argued by Camila Leporace on this blog, images have a role in constructing certain perceptions, especially “in the midst of the technological hype”. As pointed out by Maggie Mustaklem from the Oxford Internet Institute, certain tropes in visual communication and reporting about AI disconnect the technological developments in that area and how people, in particular the broader public, understand what the technologies are about. This is partly why the AutoNorms project blog refrains from using the widely spread visual language of AI in the military context and uses images from the Better Images of AI library as much as possible.

Main themes and issues in visualizing military applications of AI

Many of the visuals featured in research communication, media reporting, and publications about AI in the military domain speak to the tropes and clichés in images of AI more broadly, as identified by the Better Images of AI guide.

One major theme is anthropomorphism, as we often see pictures of white or metallic humanoid robots that appear holding weapons, pressing nuclear buttons, or marching in troops like soldiers with angry or aggressive expressions, as if they could express emotions or be ‘conscious’ (see examples here and here).

In some variations, humanoids evoke associations with science fiction, especially the Terminator franchise. The Terminator is often referenced in debates about AWS, which feature in a substantial part of the research on AI in international relations, security, and military ethics. AWS are often called ‘killer robots’, both in academic publications and media platforms, which seems to encourage the use of images of humanoid ‘killer robots’ with red eyes, often originating from stock image databases (see examples here, here, and here). Some outlets do, however, note in captions that “killer robots do not look like this” (see here and here).

Actors such as campaigners might employ visuals, especially references from pop culture and sci-fi, to get people more engaged and as tools to “support education, engagement and advocacy”. For instance, Stop Killer Robots, a campaign for an international ban on AWS, often uses a robot mascot called David Wreckham to send their message that “not all robots are going to be as friendly as he is”.

Sci-fi also acts as a point of reference for policymakers, as evidenced, for example, by US official discourses and documents on AWS. As an illustration, some of these common tropes were visually present at the conference “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation” which brought together diplomats, civil society, academia, and other actors to discuss the potential international regulation of AWS in April 2024 in Vienna.

Half-human half-robot projected on the wall and a cut-out of a metallic robot greeting participants at the entrance of the Vienna AWS conference. Photos by Anna Nadibaidze.

The colour blue also often features in visual communication about AI in warfare, together with abstract depictions of running code, algorithms, or computing technologies. This is particularly distinguishable in stock images used for blogs, conferences, or academic book cover designs. As Romele and Rodighiero write on this blog, blue might be used because it is calming, soothing, and also associated with peace, encouraging some accepting reaction from viewers, and in this way promoting certain imaginaries about AI technologies.

Examples of covers for recently published academic books on the topic of AI in international security and warfare.

There are further distinct themes in visuals used alongside publications about AI in warfare and AWS. A common trope features human soldiers in an abstract space, often with a blue (and therefore calming) background or running code, wearing a virtual reality headset and presumably looking at data (see examples here and here). One such visual was used for promotional material of the aforementioned REAIM Summit, organised by the Dutch Government in 2023.

Screenshot of the REAIM Summit 2023 website homepage (www.reaim2023.org). The image is credited to the US Naval Information Warfare Center Pacific, public domain.

Finally, many images feature military platforms such as uncrewed aerial vehicles (UAVs or drones) flying alone or in swarms, robotic ground vehicles, or quadruped animal-shaped robots, either depicted alone or together with human soldiers. Many of them are prototypes or models of existing systems tested and used by the United States military, such as the MQ-9 Reaper (which does not classify as an AWS). Most often, these images are taken from the visual repository of the US Department of Defense, given that the photos released by the US government are in the public domain and therefore free to use with attribution (see examples here, here, and here). Many visuals also display generic imagery from the military, for instance soldiers looking at computer screens, sitting in a control room, or engaging in other activities (see examples here, here, and here).

Example of an image often used to accompany online publications about AWS. Source: Cpl Rhita Daniel, US Marine Corps, public domain.

However, there are several issues associated with some of the common visuals explored above. As AI researcher and advocate for an AWS ban Stuart Russell points out, references to the Terminator or sci-fi are inappropriate for the debate on AI in the military because they suggest that this is a matter for the future, whereas the development and use of these technologies is already happening.

Sci-fi references and humanoids might also give the impression that AI in the military is about replacing humans with ‘conscious’ machines that will eventually fight ‘robot wars’. This is misleading because the debate surrounding the integration of AI into the military is mostly not about robots replacing humans. Armed forces around the world plan to use AI for a variety of purposes, especially as part of humans interacting with machines, often called ‘teaming’. The debate and actors participating in it should therefore focus on the various legal, ethical, and security challenges that might arise as part of these human-machine interactions, such as a distributed form of agency.

Further, images of ‘killer robots’ often invoke a narrative of ‘uprising’, common in many works of popular culture, in which humans lose control of AI, as well as determinist views in which humans have little influence over how technology impacts society. Such visual tropes overshadow (human) actors’ decisions to develop or use AI in certain ways, as well as the political and social contexts surrounding those decisions. Portraying weaponised AI in the form of robots turning against their creators problematically presents this as an inevitable development, instead of highlighting the choices made by developers and users of these technologies.

Finally, many of the visuals tend to focus on the combat aspect of integrating AI in the military, especially on weaponry, rather than more ‘mundane’ applications, for instance in logistics or administration. Sensationalist imagery featuring shiny robots with guns or soldiers depicted in a theoretical battlefield with a blue background risks distracting from technological developments in security and warfare, such as the integration of AI into data analysis or military decision-support systems.

Towards better images?

It should be noted that many outlets have moved on from using ‘killer robot’ imagery and sci-fi clichés when publishing about AI in warfare. Some more realistic depictions are being increasingly used. For instance, a recent symposium on military AI published by the platform Opinio Juris features articles illustrated with generic photos of soldiers, drones, or fighter jets.

Images of military personnel looking at data on computer screens are arguably not as problematic because they convey a more realistic representation of the integration of AI into the military domain. But this still often means relying on the same sources: stock imagery and public domain websites such as the US government’s collections. It also means that AI technologies are often depicted in military training or experimental settings, rather than in the contexts where they could potentially be used, such as an actual conflict, or hidden behind a generic blue background.

There are some understandable challenges, such as researchers not getting a say in the images used for their books or articles, or the reliance on free, public domain images, which is common in online journalism. However, as evidenced by the use of sci-fi tropes at major international conferences, reflection on what ‘responsible’ and ‘appropriate’ visuals for the debate on AI in the military and AWS would look like is lacking.

Images of robot commanders, the Terminator, or soldiers with blue flashy tablets miss the point that AI in the military is about changing dynamics of human-machine interaction, which involve various ethical, legal, and security implications for agency in warfare. As with images of AI more broadly, there is a need to expand the themes in visuals of AI in security and warfare, and therefore also the types of sources used. Better images of AI would include the humans who are behind AI systems and the humans who might be affected by them—both soldiers and civilians (e.g. some images and photos depict destroyed civilian buildings, see here, here, or here). Ultimately, imagery about AI in the military should “reflect the realistically messy, complex, repetitive and statistical nature of AI systems” as well as the messy and complex reality of military conflict and the security sphere more broadly.

The author thanks Ingvild Bode, Qiaochu Zhang and Eleanor Taylor (one of our Student Stewards) for their feedback on earlier drafts of this blog. 

Better Images of AI’s Student Stewards

Better Images of AI is delighted to be working with Cambridge University’s AI Ethics Society to create a community of Student Stewards. The Student Stewards are working to empower people to use more representative images of AI and celebrate those who lead by example. The Stewards have also formed a valuable community to help Better Images of AI connect with its artists and develop its image library. 

What is Cambridge University’s AI Ethics Society? 

The Cambridge University AI Ethics Society (CUAES) is a group of students from the University of Cambridge who share a passion for advancing the ethical discourse surrounding AI. Each year, the society chooses a campaign to support and, through events and workshops, introduces its members to the issues that organisation is trying to solve. In 2023, CUAES supported Stop Killer Robots. This year, the Society chose to support Better Images of AI. 

The Society’s Reasons for Supporting Better Images of AI 

The CUAES committee really resonated with Better Images of AI’s mission. The impact that visual media can have on public discourse about AI has been overlooked – especially in academia, where there is a focus on the written word. Nevertheless, stock images of humanoid robots, white men in suits and the human brain all embed certain values and preconceptions about what AI is and who makes it. CUAES believes that Better Images of AI can help cultivate more thoughtful and constructive discussions about AI. 

Members of the CUAES are privileged enough to be fairly well-informed about the nuances of AI and its ethical implications. Nevertheless, the Society has recognised that even its own logo of a robot incorporates reductive imagery that misrepresents the complexities and current state of AI. These oversights in its own decisions showed CUAES that further work needed to be done.

CUAES is eager to share the importance of Better Images of AI not only with industry actors but also with members of the public, whose perceptions will likely be shaped the most by these sensationalist images. CUAES hopes that by creating a community of Student Stewards, it can disseminate Better Images of AI’s message widely and work together to revise its logo to better reflect the Society’s values. 

The Birth of the Student Steward Initiative

Better Images of AI visited the CUAES earlier this year to introduce members to its work and encourage students to think more critically about how AI is represented. During the workshop, participants were given the tough task of designing their own images of AI – we saw everything from illustrations depicting how generative AI models are trained to the duality of AI symbolised by the yin and yang. The students who attended the workshop were fascinated by Better Images of AI’s mission and wanted to use their skills and time to help – this was the start of the Student Steward community. 

A few weeks after this workshop, individuals were invited to a virtual induction to become Student Stewards so they could introduce more nuanced understandings of AI to the wider public. Whilst this initiative was born out of CUAES, students (and others) from all around the globe are invited to join the group to shape a more informed and balanced public perception of AI.

The Role of the Student Stewards

The Student Stewards are on the frontline of spreading Better Images of AI’s mission to journalists, researchers, communications professionals, designers, and the wider public. Here are some of the roles that they champion: 

  1. The Guidance Role: if our Student Stewards see images of AI that are misleading, unrepresentative or harmful, they will attempt to contact the authors and make them aware of the Better Images of AI Library and Guide. The Stewards hope that they can help to raise awareness of the problems associated with the images used and guide authors towards alternative options that avoid reinforcing dangerous AI tropes. 
  2. The Gratitude Role: we realise that it is equally important to recognise instances where authors have used images from the Better Images of AI library. Images from the library have been spotted in international media, adopted by academic institutions and utilised by independent writers. Every decision to opt for more inclusive and representative images of AI plays a crucial role in raising awareness of the nuances of AI. Therefore, our Stewards want to thank authors for being sensitive to these issues and encourage the continued use of the library. 
  3. Connecting with artists: the stories and motivations behind each of the images in our library are often so interesting and thought-provoking. Our Student Stewards will be taking the time to connect with artists who contribute images to our library. By learning more about how artists have been inspired to create their works, we can better appreciate the diverse perspectives and narratives that these images provide to wider society. 
  4. Helping with image collections: Better Images of AI carefully selects the images that are chosen to be published in its library. Each image is scrutinised against different requirements to ensure that it avoids reinforcing harmful stereotypes and embodies the principles of honesty, humanity, necessity and specificity. Our Student Stewards will be assisting with many of the tasks involved from submission to publication, including liaising with artists, data labelling, evaluating initial submissions, and writing image descriptions. 
  5. Sharing their views: each of our Student Stewards comes with different interests related to AI and its associated representations, narratives, benefits and challenges. We are eager for our students to share their insights on our blog to introduce others to new debates and ideas in these domains.

As Better Images of AI is a non-profit organisation, our community of Stewards operates on a voluntary basis, but this allows for flexibility around your other commitments. Stewards are free to take on additional tasks based on their own availability and interests, and there are no minimum time requirements for undertaking this role – we are just grateful for your enthusiasm and willingness to help! 

If you are interested in becoming a Student Steward at Better Images of AI, please get in touch. You do not need to be affiliated with the University of Cambridge or be a student to join the group.