Earlier this year, we were invited to Cambridge (UK) for an exhibition of some of the visuals from the Better Images of AI library. It was followed by a panel event, “White Robots, Blue Brains, and Other Myths: AI, Reimagined”. The event was organised by Hannah Claus (PhD student at the University of Cambridge) together with the Early Careers Community of the Centre for Human Inspired AI (CHIA) and hosted by Robinson College, Cambridge, on June 6th.
In the blog post below, we explore how the event’s exhibition and panel opened up discussions about reimagining AI and the role that artists have taken in this space to challenge visual tropes of AI and make space for alternative, more diverse representations.
What does AI mean to you?
“It’s whatever I want it to mean at any given moment” – Participant
The central theme of both the exhibition and the CHIA panel event was encouraging participants to reflect on what AI means to them personally. This required stepping outside the dominant narratives promoted by technology companies and instead engaging in honest reflection about how we each encounter AI day to day and how it shapes our lives, relationships, work, and environments. Participants were asked to draw or write their own responses to the prompt: “What does AI mean to you?”. The variety of answers (despite the relative homogeneity of a group of Cambridge-based researchers and creatives) revealed just how multifaceted AI is, and how differently it impacts individuals. The mix of people from both the tech space and the arts scene, in particular, created an environment where AI and its portrayal in our current Eurocentric society were questioned on multiple levels.

Some responses highlighted AI’s practical benefits, such as “not needing to learn python syntax”, “a tool that makes life easier”, or “the potential to revolutionise the way we currently do physics research.” Others focused on its costs, depicting the human labour embedded in training datasets or its environmental toll. One response stood out in particular: “It’s whatever I want it to mean at any given moment.” This striking statement underpinned much of what the evening was about: advocating for more genuine choices about whether and how AI is used, how it is developed, and who it is developed for.


Participant responses to the question: ‘What does AI mean to you?’
The post-it notes underscored how AI means something different to everyone, depending on their context, work, and lived experience. Yet this diversity of perspectives is rarely reflected in our visuals of AI. When AI is only imagined as an abstract, superhuman, existential threat, opportunities to question its social, environmental, legal, and political dimensions are closed off. But when AI is imagined through many personal, critical, playful, and speculative lenses, space opens up to contest dominant narratives and democratise the conversation about how AI is impacting society.
“Much of the public still visualizes AI through a handful of increasingly clichéd and misleading images: white robots, glowing blue brains, swirling networks of light. They suggest AI is a distant, humanoid intelligence, when in fact it’s embedded in the messy, invisible systems we use every day— algorithmically driven engagement, capitalist systems of surveillance, language models, creative platforms.” – Alex Mentzel
Exhibiting better images of AI
“Images of AI come from somewhere, do something, and go somewhere” – Dominik Vrabič Dežman
The exhibition featured 10 images from the library, created by human artists from all around the world; each image communicated a variety of themes about AI. Often, the images from the library are viewed only digitally, in blog posts, on LinkedIn, or in news articles. Being able to bring some of the visuals into a physical exhibition, however, opened up opportunities for in-person dialogue about the works, the role of artists in the field of AI, and the ideas about AI that they prompt us to think about.


Images from the library on exhibition at Robinson College, University of Cambridge
Seeing how other people connected an image of AI to themes of labour, surveillance, or creativity often revealed the multiplicity of meanings that a single artwork can hold. Exchanges about these different perceptions not only introduced greater depth to the understanding of the artist’s work, but also created space for collective reflection about how AI is imagined, represented, and contested.
Importantly, this also shows that images of AI are never neutral. As Dominik Vrabič Dežman states in his paper on AI visuals and hype, “images of AI come from somewhere, do something, and go somewhere”. In the paper, he criticises the “deep blue sublime” aesthetic of dominant AI imagery, which reinforces harmful narratives about AI’s autonomy, automation, and inevitability. It is interesting to think about this quotation with respect to the exhibition and the Better Images of AI library. Talking about the images together in the same physical place reinforced how images of AI are shaped by their creators’ choices and contexts: their culture, institutions, politics, identity, and artistic style.

While the images in the Better Images of AI library can be as political as the common tropes, they make space for a diversity of interpretations and centre stories about AI which are actively suppressed or sidelined in dominant visuals. Reflecting back on Dominik’s words, then: the images in the library do come from somewhere: human artists from all around the world. They do something: disrupt dominant narratives by surfacing neglected perspectives and reframing what counts as meaningful or relevant in discussions about AI. And they go somewhere: not just into blog posts and news articles, but into longer-lasting thinking and reflection on what AI really means to us.
What is AI made of? By Shady Sharify
The exhibition was such a success that the artworks were also exhibited at the annual conference of the Centre for Human-Inspired AI on the 16th of June 2025. The conference brought together international researchers, industry professionals, students, and creatives to discuss how AI intersects with fields spanning from climate change to healthcare. During the conference, attendees had the opportunity to vote for the “Best Artwork” from the selection of images exhibited on the day, and the audience chose “What is AI Made Of?” by Shady Sharify. The artwork resonated strongly with attendees because it centred the hidden materials and labour of AI rather than depicting AI as an abstract, disembodied robot. As a result, the piece invited participants to think critically about the infrastructures and human contributions that are so often erased in mainstream visuals of AI.

The role that art plays in reimagining AI: panel event
The panel event focused on how art can be used to deconstruct myths about AI. The panel was chaired by Hannah Claus, accompanied by Tania Duarte, who manages the Better Images of AI collaboration. They were joined by Chanelle Mwale and Alex Mentzel.

Chanelle Mwale is a singer, songwriter, and poet, and the founder of the Ubuntu Network. Chanelle shared their experiences as an artist amid the current AI hype and commented on how artists are responding to, and reflecting on, the use of AI in the industry.
Alex Mentzel is a PhD student working at the intersection of AI and art, creating a bridge between the two worlds; he has combined AI with immersive theatre in his works. During the panel, Alex talked about how putting AI into a live, physical space changes people’s reactions to the technology compared with encountering it on a screen.
In Alex’s own project, Faust Shop, he has taken AI off the laptop and into a live, shared space where audiences co-produce the system’s behavior. Embodied, participatory encounters recalibrate trust: the ‘magic’ of AI fades a bit and what emerges is curiosity, skepticism, and agency. People don’t just react to a polished output, they witness and question the technological pipeline that produced it.
What do “better images of AI” mean to Alex and Chanelle?
Asked what “better images of AI” meant to him, Alex responded: “When we only show AI in narrow, anthropomorphic ways, we strip away the context: the human labor, data pipelines, biases, and infrastructures that make it function. And context matters. AI isn’t experienced the same way in Berlin, Tripoli, or Bangalore. Visual culture should reflect local histories, labor conditions, and uses, rather than exporting a single Western, sci-fi imaginary. If our images don’t account for these differences, they risk erasing the very people most impacted by the technology.
We also lose sight of the fact that AI doesn’t think or create like we do—it arrives at results through entirely different logics. It’s like mistaking JL Borges’ Pierre Menard for Cervantes: the outputs might look the same, but the meaning is totally different because the process is different (I draw here on William Morgan’s excellent article). Better images should show process and context, not just outputs. The public won’t trust what it can’t see. Hito Steyerl writes about the web as a form of ambient and pervasive infrastructure, no longer constrained to the screens but out in the world. That is where AI lives now, too.”
Chanelle also responded by saying that “better images of AI” means putting the human labour at the centre: “I think that the depiction of AI in society is quite deceptive actually, on one hand it’s marketed to the average person through images of robots and generic laptops. On the other it’s seen as this horrible thing with the potential to eradicate the need for human connectivity and thought.
I think that the fact of the matter is that the average person doesn’t know a lot about AI because they don’t have the time to learn about AI, the images that we see that show us robots, computers almost takes away the acknowledgement of the human labour that goes into making it possible.”

Do we need to redefine art in light of AI? Alex and Chanelle gave their thoughts.
The panelists were also asked what they thought about art and whether it can be reconciled with AI. Alex responded:
“Do we have to redefine art? I don’t think so. We need to re-center process, intention, and accountability. With AI, creative decisions move upstream—dataset curation, model selection, constraints, and staging. The art is not only the image or performance; it’s how we frame the system, disclose its workings, and invite audiences to negotiate meaning inside it. That framing is a human responsibility.
Is generative AI ‘just another tool,’ like photography once was? The camera transformed art, but it didn’t infer a scene from a high-dimensional statistical model trained on the world’s images. Generative AI is both a tool and an infrastructure: it creates, and it also absorbs, normalizes, and redistributes cultural patterns at scale. That dual role demands new norms around attribution, consent, artist compensation, and transparency. If we want AI to serve society responsibly, we should show not just what it is, but how it works, who made it, and who is left out. Art is uniquely positioned to hold that complexity.
We are making images of AI at a moment when three tempos of history are collapsing into one another: the geohistorical time of the Earth, characterized by slow and almost imperceptible processes; the longue durée (“long term”), which encompasses stable structures of governance, culture, and socio-economic systems; and l’histoire événementielle (“history of events”), marked by rapid changes and innovations. As Hartmut Böhme notes after Fernand Braudel, the concurrence of these three temporalities has always been present, but what is new is that today’s technosystems now reach down into geohistorical time. That has consequences for culture: if the foundations of life fail, the artificial natures—our infrastructures, our digital worlds, our art—fail with them.
So the task is not to pose art against nature, but to practice technology within “Third Nature”: a hybrid ecology where accumulated knowledge itself becomes a force, cognizant of the subsequent turn since Steyerl’s observation that the web has already left the screen and exploded into the world. In my own work, such as with Faust Shop, bringing AI into an embodied space makes that hybridity legible: audiences see and feel this system and recognize themselves inside it.
Let our images match these stakes. If representation is to help culture endure in Third Nature, our depictions of AI must critically engage with the computational differences of emerging technologies and model a world that we can depend on, that can still be lived in by all.”
As an artist, Chanelle commented: “In regards to AI and art, I look at it through a musician’s lens and see it as something that in some ways can encourage creativity because of the countless things it can do, but it can also instill a laziness into the core principles of how that art is made. Ultimately the definition of what is art and what makes art art, whether that be music, poetry, fine art or photography, will have to be adapted to fit into a world with AI.”
A huge thank you to Hannah Claus for organising the event alongside the Early Careers Community of the Centre for Human Inspired AI (CHIA), and to Robinson College, Cambridge, for hosting the exhibition. We are also grateful to the panelists, Alex and Chanelle, for their thoughtful contributions, and to everyone who came along and engaged with the events.