Reimagining AI in Cambridge with CHIA  

Exhibited images from the library line the two walls of a corridor with people walking through and exploring the images.

Earlier this year, we were invited to Cambridge (UK) for an exhibition of some of the visuals from the Better Images of AI library. It was followed by a panel event, “White Robots, Blue Brains, and Other Myths: AI, Reimagined”. The event was organised by Hannah Claus (PhD student at the University of Cambridge) together with the Early Careers Community of the Centre for Human-Inspired AI (CHIA) and hosted by Robinson College, Cambridge on June 6th.

In the blog post below, we explore how the event’s exhibition and panel opened up discussions about reimagining AI and the role that artists have taken in this space to challenge visual tropes of AI and make space for alternative, more diverse representations. 

What does AI mean to you? 

“It’s whatever I want it to mean at any given moment” – Participant

The central theme of both the exhibition and the CHIA panel event was to encourage participants to reflect on what AI means to them personally. This required stepping outside the dominant narratives promoted by technology companies and instead engaging in honest reflection about how we each encounter AI in daily life and how it shapes our lives, relationships, work, and environments. Participants were asked to draw or write their own responses to the prompt: “What does AI mean to you?”. The variety of answers (despite the relative homogeneity of a group of Cambridge-based researchers and creatives) revealed just how multifaceted AI is, and how differently it impacts individuals. The mix of people from both the tech space and the arts scene, in particular, created an environment where AI and its portrayal in our current Eurocentric society was questioned on multiple levels.

A wall with coloured post-it notes scattered across it, showing participants’ written and visual responses to the prompt “what does AI mean to you?”.
Participant responses to the question: ‘What does AI mean to you?’

Some responses highlighted AI’s practical benefits, such as “not needing to learn python syntax”, “a tool that makes life easier”, or “the potential to revolutionise the way we currently do physics research.” Others focused on its costs, depicting the human labour embedded in training datasets or its environmental toll. One response stood out in particular: “It’s whatever I want it to mean at any given moment.” This impactful statement underpinned much of what the evening’s event was about: advocating for more genuine choices about how and if AI is being used, how it is being developed, and who it is being developed for. 


The post-it notes underscored how AI means something different to everyone, depending on their experiences and circumstances. Yet this diversity of perspectives is rarely reflected in our visuals of AI. When AI is only imagined as an abstract, superhuman, existential threat, opportunities to question its social, environmental, legal, and political dimensions are closed off. But when AI is imagined through many personal, critical, playful, speculative lenses, space opens up to contest dominant narratives and democratise the conversation about how AI is impacting society. 

“Much of the public still visualizes AI through a handful of increasingly clichéd and misleading images: white robots, glowing blue brains, swirling networks of light. They suggest AI is a distant, humanoid intelligence, when in fact it’s embedded in the messy, invisible systems we use every day— algorithmic driven engagement, capitalist systems of surveillance, language models, creative platforms.” – Alex Mentzel

Exhibiting better images of AI

“Images of AI come from somewhere, do something, and go somewhere” – Dominik Vrabič Dežman

The exhibition featured 10 images from the library, created by human artists from all around the world, each communicating a variety of themes about AI. The images from the library are usually viewed only digitally, in blog posts, on LinkedIn, or in news articles. Bringing some of the visuals into a physical exhibition, however, opened up opportunities for in-person dialogue about the works, the role of artists in the field of AI, and the ideas about AI that they prompt us to think about. 

Images from the library on exhibition at Robinson College, University of Cambridge

Seeing how other people connected an image of AI to themes of labour, surveillance, or creativity often revealed the multiplicity of meanings that a single artwork can hold. Exchanges about these different perceptions not only introduced greater depth to the understanding of the artist’s work, but also created space for collective reflection about how AI is imagined, represented, and contested. 

Importantly, this also shows that images of AI are never neutral, as stated in Dominik Vrabič Dežman’s paper on AI visuals and hype: “images of AI come from somewhere, do something, and go somewhere”. In his paper, Dominik Vrabič Dežman criticises the “deep blue sublime” aesthetic of dominant AI imagery, which reinforces harmful narratives about AI’s autonomy, automation, and inevitability. It’s interesting to think about this quotation with respect to the exhibition and Better Images of AI’s library. Talking about the images together in the same physical place reinforced how images of AI are shaped by their creators’ choices and contexts: their culture, institutions, politics, identity, and artistic style. 

Students and artists gathered around the exhibition talking and socialising.
Individuals gathered around the exhibition talking

While the images in the Better Images of AI library can be as political as the common tropes, they make space for a diversity of interpretations and centre stories about AI which are actively suppressed or sidelined in dominant visuals. Reflecting back on Dominik’s words: the images in the library do come from somewhere: human artists from all around the world. They do something: they disrupt dominant narratives by surfacing neglected perspectives and reframe what counts as meaningful or relevant in discussions about AI. And they go somewhere: not just into blog posts and news articles, but towards longer-lasting thinking and reflection on what AI really means to us. 

What is AI Made Of? by Shady Sharify

The exhibition was such a success that the artworks were also exhibited at the annual conference of the Centre for Human-Inspired AI on the 16th of June 2025. This conference brought together international researchers, industry professionals, students, and creatives to discuss how AI intersects with fields spanning from climate change to healthcare. During the conference, attendees had the opportunity to vote for the “Best Artwork” from the selection of images exhibited on the day, and “What is AI Made Of?” by Shady Sharify won the audience vote. The artwork resonated strongly with attendees because of the way it centered the hidden materials and labour of AI, rather than depicting AI as an abstract, disembodied robot. As a result, the piece invited participants to think critically about the infrastructures and human contributions that are so often erased in mainstream visuals of AI.

Two individuals looking at ‘What is AI Made Of?’ by Shady Sharify

The role that art plays in reimagining AI: panel event

The panel event focussed on how art can be used to deconstruct myths about AI. The panel was chaired by Hannah Claus, accompanied by Tania Duarte, who manages the Better Images of AI collaboration. They were both joined by Chanelle Mwale and Alex Mentzel.

Alex, Tania, Chanelle and Hannah on the panel. Yutong Liu's image "Talking to AI 2.0" is projected in the background.
From left to right: Alex, Tania, Chanelle and Hannah on the panel. Yutong Liu’s image “Talking to AI 2.0” is projected in the background

Chanelle Mwale is a singer, songwriter and poet, and the founder of the Ubuntu Network. Chanelle shared their experiences as an artist in the current AI hype and commented on how artists are responding and reflecting on the use of AI in the industry. 

Alex Mentzel is a PhD student who works at the intersection of AI and art, creating a bridge between both worlds, and has combined AI with immersive theatre in his works. During the panel, Alex talked about how putting AI into a live, physical space changes people’s reactions to the technology compared with encountering it on a screen.

In Alex’s own project, Faust Shop, he has taken AI off the laptop and into a live, shared space where audiences co-produce the system’s behavior. Embodied, participatory encounters recalibrate trust: the ‘magic’ of AI fades a bit and what emerges is curiosity, skepticism, and agency. People don’t just react to a polished output, they witness and question the technological pipeline that produced it.

What do “better images of AI” mean to Alex and Chanelle? 

When asked what “better images of AI” meant to him, Alex responded: “When we only show AI in narrow, anthropomorphic ways, we strip away the context: the human labor, data pipelines, biases, and infrastructures that make it function. And context matters. AI isn’t experienced the same way in Berlin, Tripoli, or Bangalore. Visual culture should reflect local histories, labor conditions, and uses, rather than exporting a single Western, sci-fi imaginary. If our images don’t account for these differences, they risk erasing the very people most impacted by the technology.

We also lose sight of the fact that AI doesn’t think or create like we do—it arrives at results through entirely different logics. It’s like mistaking JL Borges’ Pierre Menard for Cervantes: the outputs might look the same, but the meaning is totally different because the process is different (I draw here on William Morgan’s excellent article). Better images should show process and context, not just outputs. The public won’t trust what it can’t see. Hito Steyerl writes about the web as a form of ambient and pervasive infrastructure, no longer constrained to the screens but out in the world. That is where AI lives now, too.” 

Chanelle also responded by saying that “better images of AI” means putting the human labour at the centre: “I think that the depiction of AI in society is quite deceptive actually, on one hand it’s marketed to the average person through images of robots and generic laptops. On the other it’s seen as this horrible thing with the potential to eradicate the need for human connectivity and thought. 

I think that the fact of the matter is that the average person doesn’t know a lot about AI because they don’t have the time to learn about AI, the images that we see that show us robots, computers almost takes away the acknowledgement of the human labour that goes into making it possible.” 

Chanelle, Tania, Hannah, and Alex stood smiling in front of a projected image of Yutong Liu's image "AI is Everywhere".
From left to right: Chanelle, Tania, Hannah, and Alex stood smiling in front of a projected image of Yutong Liu’s image “AI is Everywhere”

Do we need to redefine art in light of AI? Alex and Chanelle gave their thoughts.

The panelists were also asked what they thought about art and whether it can be reconciled with AI. Alex responded:

“Do we have to redefine art? I don’t think so. We need to re-center process, intention, and accountability. With AI, creative decisions move upstream—dataset curation, model selection, constraints, and staging. The art is not only the image or performance; it’s how we frame the system, disclose its workings, and invite audiences to negotiate meaning inside it. That framing is a human responsibility.

Is generative AI ‘just another tool,’ like photography once was? The camera transformed art, but it didn’t infer a scene from a high-dimensional statistical model trained on the world’s images. Generative AI is both a tool and an infrastructure: it creates, and it also absorbs, normalizes, and redistributes cultural patterns at scale. That dual role demands new norms around attribution, consent, artist compensation, and transparency. If we want AI to serve society responsibly, we should show not just what it is, but how it works, who made it, and who is left out. Art is uniquely positioned to hold that complexity.

We are making images of AI at a moment when three tempos of history are collapsing into one another: the geohistorical time of the Earth, characterized by slow and almost imperceptible processes; the longue durée (“long term”), which encompasses stable structures of governance, culture, and socio-economic systems; and l’histoire événementielle (“history of events”), marked by rapid changes and innovations. As Hartmut Böhme notes after Fernand Braudel, the concurrence of these three temporalities has always been present, but what is new is that today’s technosystems now reach down into geohistorical time. That has consequences for culture: if the foundations of life fail, the artificial natures—our infrastructures, our digital worlds, our art—fail with them.

So the task is not to pose art against nature, but to practice technology within “Third Nature”: a hybrid ecology where accumulated knowledge itself becomes a force, cognizant of the subsequent turn since Steyerl’s observation that the web has already left the screen and exploded into the world. In my own work, such as with Faust Shop, bringing AI into an embodied space makes that hybridity legible: audiences see and feel this system and recognize themselves inside it.

Let our images match these stakes. If representation is to help culture endure in Third Nature, our depictions of AI must critically engage with the computational differences of emerging technologies and model a world that we can depend on, that can still be lived in by all.” 

As an artist, Chanelle commented: “In regards to AI and art, I look at it through a musician’s lens and see it as something that in some ways can encourage creativity because of the countless things it can do, it can also instill a laziness into the core principles of how that art is made. Ultimately the definition of what is art and what makes art art, whether that be music, poetry, fine art or photography will have to be adapted to fit into a world with AI/an AI context.” 

A huge thank you to Hannah Claus for organising the event alongside the Early Careers Community of the Centre for Human-Inspired AI (CHIA), and to Robinson College (Cambridge) for hosting the exhibition. We are also grateful to the panelists Alex and Chanelle for their thoughtful comments, and to everyone who came along and engaged with the events. 

🌳 ‘Behind the Forest, There are Trees’: Nicole in conversation with Laura

On the left is Nicole's image, which shows a single tree silhouette composed of a mosaic of small, colourful images of various types of trees in a collage. The images are projected onto a cutout shape of a tree and overlap in vibrant layers. The tree sits against a dark forest background with a white drop shadow which creates distance between the tree and the forest behind it. The base of the tree features bright green grass wrapped around the trunk. On the right, the text reads: 'Behind the Forest, there are Trees', 'Nicole Crozier' in conversation with Laura. In a blue text box, it reads 'Behind the Image Series'.

In this blog post, Laura Martinez Agudelo (one of our amazing volunteer stewards) interviews Nicole Crozier, the artist behind the image ‘Seeing the Forest for the Trees’ which was submitted as part of The Bigger Picture collection. The post explores how the image criticises, but also reflects on, the development of generative AI and what these new technologies mean for artists and the art industry. Nicole hopes the image can challenge the AI hype and misconceptions about how AI-generated art is created. 

You can freely access and download ‘Seeing the Forest for the Trees’ from our image library here. 

From roots to branches

Nicole is a visual artist originally from Ottawa and she currently lives in Montreal, Quebec. She is studying for a Master’s degree in Fine Arts (Painting and Drawing) at Concordia University, and she is working hard for her thesis defence in September. Prior to moving to Montreal, she lived in Toronto for seven years, where she developed her practice and worked as an arts manager, primarily in the dance world. She decided to enroll in her current programme in order to dedicate more time to her artwork. 

Art has been a part of Nicole’s life since childhood: “I first started painting when I was in grade 9. It was for me a means of expression and also… an escape from bullying at school”. While at high school, she wanted to be a journalist, but her art teacher convinced her to go to art school, which is how she ended up on this path. Nicole completed her undergraduate degree in Visual Arts at the University of Ottawa, graduating in 2013, during which time she primarily explored two artistic approaches: painting and photography. Since then, she has focused on both, “moving back and forth between them”. 

She knows that her technical skills lie mainly in painting, but she admits that she is “a slow painter and it can be frustrating sometimes”. At the same time, she finds that it is also a great quality: “… just slowing down, taking time and engaging in a dialogue with the painting”. With photography, she feels the opposite because the process provides a quicker response between her and the subjects.

Besides, she is interested in playing with contrasting ways of reception of her artwork: “creating images that fall between two effects, for example seduction and repulsion… When you see an image, you may at first be attracted to something within the picture, and then repelled; not quite sure what you are looking at… I like working in between spaces, between two poles”. Let’s see how this approach converges with the idea of creating better images of AI. 

The Bigger Picture Workshop

Nicole doesn’t usually explore AI content in her artwork. She came across it in The Bigger Picture workshop that she attended through Better Images of AI, which is how she heard about the motivations behind the project to create more realistic images of AI: “I was intrigued by the prompt, given my general interest in archetypes and in understanding the world through photography. We live in a hyper image saturated society and think about ourselves so much in relation to photographs”. 

She argues that, in using photography, we “view our daily lives through the camera lens, synthesizing our selves, environments, and social conditions into iconographic ways of seeing the world around us”. This idea crosses over with her first approach to AI image generators: “as an artist, that’s the main way I interact with AI: through text to image generative AI programs and trying to understand how they work and are affecting the arts industry”.

Collage and visual correlations

Although her work has changed a lot over the years, Nicole has always had an interest in collage as part of her working methodology. For her, collage is itself a way of creating. She loves to build physical collages with paper and then photograph them. This is evident in her illustration Seeing the Forest for the Trees.

In her art practice, Nicole often starts with “a 3-dimensional collage or maquette that I light and then photograph. I like working with cut paper and finding craft supplies that have a textural quality that tips the viewer off that what they are looking at is handmade, that draws them in. I create the illusion of space and then the camera flattens it: a multitude of images I’m combining compressed into one photograph. Which is also a similar process of synthesis used by generative AI in response to text prompts, so I think there is also a visual correlation between the two”.

This is one of the reasons why Nicole became interested in the process of AI image generators as “‘sophisticated’ collage makers”, the material conditions of production and how many visual inputs produce one output. She also compares this process to the use of collage by artists in art history, such as the surrealists and how they used chance to access parts of their subconscious when making art. 

“With AI (text-to-image or image-to-image models), you can reuse the same prompt and receive a different image each time, but the ‘emotional’ and ‘creative’ part of the process is removed. These kinds of AI images show no signs of the subconscious role in creating: they are dead images, most often based on data poached from other artists without consent…”.

The machine would only be able to reproduce the data used to train it. For Nicole, the process of imagining and exploring visual representations is an essential part of creating images. What, then, was the idea behind Seeing the Forest for the Trees? 

The source of inspiration

A single tree silhouette is composed of a mosaic of an array of small, colourful images of various types of trees in a collage. The images are projected onto a cutout shape of a tree and overlap in vibrant layers. The tree sits against a dark forest background with a white drop shadow which creates distance between the tree and the forest behind it. The base of the tree features a bright green grass wrapped around the trunk.

Nicole Crozier & The Bigger Picture / Better Images of AI / CCBY-4.0

When Nicole was conceiving the image, she was thinking about how to illustrate her understanding of the way AI generators ‘create’ images. The tree-forest relation was a good subject to work with because it is a universal metaphor for the individual versus the group, one image versus a composite image. 

“Large language models and generative image models compose images by training on hundreds of thousands of data and metadata items. The process of generating the final image is invisible to us, and the final image could not exist without the multitude of images that were statistically translated and put together. I don’t know if I was fully successful, but my contribution was an attempt to express this idea visually.”

For Nicole, the choice of the tree-forest metaphor is also related to how we position ourselves socially as individuals: “recognising ourselves as individual trees within the forest”. 

The social component of AI systems is also intrinsic: “it is hard not to think about AI through the lens of how we operate in society, because we always approach things from a human perspective, and the tree-forest metaphor is one that maybe we can all easily understand”. This is why Nicole wanted to create a tree made of other trees: “It’s a forest inside a tree! And that’s where the idea first came from”. 

She has also been working with handmade maquettes for a long time and she wanted to use this method and materiality to create the image including the human process of making it. Afterwards, she came up with the popular phrase that became the title of the image — a perfect match to reinforce the meaning! 

Now let’s take a closer look at the image to see how it was put together.

The visual and material composition

First, Nicole cut out the image of a tree from a white piece of paper. Then, she created a small scene with it in a box and projected the image of the collage of trees onto the scene, using a Photoshop mask. 

An image of Nicole in her creative environment. She is sitting down next to a desk, surrounded by art equipment and tools.

Picture of Nicole in her creative environment

For the visual composition reflected on the large tree, she selected some images, free to use and without copyright restrictions, from Unsplash. She chose these images based on formal considerations: “I was trying to find images of trees with a lot of empty space around them so that you could see just one tree. The idea was to find archetypal images of trees”. 

Finally, she photographed the whole composition and did some extra manipulation in Photoshop: “it was a technical choice to achieve a well-balanced image”. 

The trees were chosen for their metaphorical relationship with nature and technology as well: “There are so many ways in which we can connect the idea and the visual representation of a tree with environmental and technological concerns, within the dynamic of ecosystems, understanding the branches of a tree as a network – it’s almost cliché, but I think it works really well for this topic”.

Questioning AI hype, sustainability, and the inevitability narrative

Although Nicole doesn’t think she will make any more artwork about AI imagery specifically, she is currently considering the philosophical aspects of this subject, as well as the ethical issues of AI systems in our society: 

“I have deep concerns about AI technology in general, its impact on society and whether it will be mainly beneficial or malevolent, and particularly in relation to climate change. This ethical question extends to myself and why and how I make art too. Honestly, I try to avoid using AI as much as possible…”.

She also mentions the importance of the Better Images of AI project: “I think Better Images of AI is trying to move beyond the black and white binary of imaging AI as either benevolent or malevolent. In a hyper-visual world, they are trying to provide more nuanced images to promote better visual literacy around how AI systems actually work and how they are being implemented in our society”. 

Nicole knows that there is a lot of propaganda and hype surrounding AI, encouraging not only a blindly positive attitude towards it, but also the idea that it is inevitable: “the dominant discourse is that AI is here whether you like it or not. The general discourse seems to be ‘if you don’t embrace or adopt AI, you’ll be left behind in the job market’, and I think that scares some people”. 

“This is obviously a turning point from many different angles: technological, cultural and environmental… it touches everything. Also, many AI systems are like black boxes. We don’t fully understand the nature of the inputs and processes that generate the outputs”, let alone all the human labour behind them. 

Nicole thinks that the discourse of inevitability is irresponsible. It neglects real risks and harms, as well as our individual and collective capacity for agency: “Corporations creating these AI systems should have a regulated responsibility to ensure they are only used in ethical and beneficial ways. Though given our current neo-liberal economic climate, in which some corporations have more power and wealth than some nation states, I’m not very optimistic about the likelihood of our ability to employ these tools in ethical ways. 

From my understanding, it’s just an intensive version of a colonial regime, like: ‘let’s gather as much information as possible about all aspects of human experience, with limited compensation – if any – to those whose data was harvested, and see what we can extract from it for profit for a limited few’.” 

Nicole believes this approach is environmentally expensive and extractive, given that “the huge amount of energy needed to maintain these systems is something that most people think of as ‘invisible’, but it has real and concrete effects”. Having mentioned these issues, Nicole shared some other thoughts from the perspective of the art field.

What kind of art do we want?

During our interview, some quotes were proposed for discussion. These quotes came from the book The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, by Emily M. Bender and Alex Hanna (2025), and specifically from a subsection of Chapter 5 about ‘AI and Art-Making’ (p. 103-112). This book was also discussed at the latest We and AI Book Club monthly meeting. As a visual artist, Nicole loved the idea of sharing her thoughts about it. 

One of the quotes suggested to Nicole was: “There are, to date, no synthetic media machines in any medium that are based only on data collected in a way that respects existing artists”, referring to AI systems whose training data includes your own art. 

Nicole said: “My understanding of what they consider to be a media machine are systems that show no respect for copyright and, while I agree with the general statement, I think there is a grey area here. I think creating your own dataset (based on your own past work or work you’ve received consent for or paid for) could be an ethical use of this technology.” 

AI is reshaping creative practice and Nicole knows artists who are “exploring those systems as a way of creating art, by creating their own dataset with their own work”, but she believes that “the general processes AI image generators are based around are inherently the opposite of what art is supposed to do”. 

About AI ‘Art’, Bender and Hanna mention in their book an idea expressed by Dr. Johnathan Flowers in an interview for Episode 4: Is AI Art Actually ‘Art’? (Mystery AI Hype Theater 3000, podcast audio, October 26, 2022): “the purpose of art is to signal a particular kind of intention and to convey a particular type of experience, and this is precisely what AI art lacks”. 

Nicole agrees with this idea and proposes that it is also useful to ask:

“Is this a type of art that we want to be creating in the first place? Is it culturally productive or regressive? Not in the sense of capitalist productivity, but… Is it actually helping or inspiring anybody? Even more importantly, is this a mirror that we can hold up to see ourselves reflected in?”. 

For Nicole, “the medium is the message” (quoting McLuhan). “Maybe AI art is art, maybe it isn’t, but… Is it really what we need? The focus on whether AI art is art is a smokescreen. What is art? Sometimes, there isn’t even a word for it; it depends on the context, the creative practices and the culture”. 

Two other quotes suggested were: “Why should artists who spent years perfecting their skill be left to starve as a few technical experts who stole their work get rich off of it?” and “AI art generators are already being deployed in ways that disrupt the economic systems through which people become and sustain careers as working artists”. 

Nicole thinks that people working in what is sometimes called ‘traditional artwork’, such as creating art objects for galleries, seem to be less concerned about ‘AI art’ because they feel somehow ‘untouchable’: “collectors always want paintings and physical objects. However, artists in creative industries such as illustration, design and animation are feeling the economic effects of AI much more acutely, and I have a lot of sympathy for them and their jobs. Many artistic fields are being affected…”. 

She believes that there should be more critical regulation to protect those artists, their copyright and the cultural value of their work: “AI is further degrading the general public’s respect for these art forms. People often say ‘oh, my kid can do that!’ and now, with AI image generators, it’s the same idea, ‘oh, I can do that using text-to-image models’. At least for now, I think, we can still tell the difference between something made by the artistic motivation, intention and work of a person (including imagination, experience and artistic skills), and something made by using only an AI image generator. But I think the width of this uncanny valley will continue to shrink in the years to come…”. 

Despite the opacity and mutability of many AI technologies, and all the open questions about latent space, the statistical processes behind image generation, and the patterns these models rely on, Nicole concludes by emphasising the importance of ongoing learning and reflection on the applications of AI image generators, not only in art, but in all professional fields. She encourages us to think critically, without being deterministic, and to question “what we should accept or refuse”.

Huge thanks to Nicole for her contribution, and for sharing her insights about her artwork and the challenges of creating in the context of AI image generators today.  

About the artist

Image of Nicole creating art, surrounded by creative tools and equipment such as paintbrushes.

Nicole Crozier is a visual artist and arts manager based in Tiohtià:ke (Montreal-QC), with ties to Tkaronto (Toronto), born and bred in Adàwe (Ottawa-ON). Nicole holds a Bachelor of Fine Arts (University of Ottawa) and a graduate certificate in Arts Management (Centennial College).

About the author

Laura Martinez Agudelo is a Temporary Teaching and Research Assistant (ATER) at the University Marie & Louis Pasteur – ELLIADD Laboratory. She holds a PhD in Information and Communication Sciences. Her research interests include socio-technical devices and (digital) mediations in the city, visual methods and modes of transgression and memory in (urban) art.   

Headshot of Laura


Windows, Cursors, and the Invisible Layer of the AI City by Berk Alkoç

Building blocks are overlaid with digital squares that highlight people living their day-to-day lives through windows. Some of the squares are accompanied by cursors. Below is the article title, "Windows, Cursors, and the Invisible Layer of the City by Berk Alkoç"

Artist contributions to the Better Images of AI library have always played an important role in fostering understanding and critical thinking about AI technologies and their context. Images facilitate deeper inquiries into the nature of AI, its history, and its ethical, social, political and legal implications.  

In this series of blog posts called ‘Through My Eyes’, some of our volunteer stewards are each taking turns to choose an image from the library, unpack the artist’s processes, and explore what that image means to them. In this blog post, Berk Alkoç explores Emily Rand’s image, AI City, and what it reveals about algorithmic bias in increasingly digitalised cities, where extractive data harvesting enables tech companies to exclude, surveil, and target individuals. 

In 1950, Gordon Childe wrote in his essay “The Urban Revolution” that “the concept of ‘city’ is notoriously hard to define”. He had a point. Cities have always been strange hybrids: part geography, part invention, part collective experiment. They aren’t just collections of buildings where people happen to congregate. They’re networks of relationships, new ways of living compressed into dense spaces. In the past, cities grew from mud and stone. Today, the materials have changed. Now, the city is also constructed from data, code, and thousands of invisible algorithms quietly processing in the background while we simply try to eat dinner, walk the dog or scroll through our phones.

This tension defines AI City, a piece created by illustrator Emily Rand in collaboration with The London Office of Technology and Innovation (LOTI). The work emerged from a workshop at Science Gallery London during London Data Week 2023, beginning with a public conversation with Sam Nutt about how AI shapes urban life and how bias infiltrates these systems. Rather than creating a conventional infographic or dystopian “big brother” poster, Rand chose to depict this through an ostensibly ordinary city block. Yet nothing remains ordinary once you begin to look closely.

And if the city operates as a GPU, what exactly is it processing? Us. – Berk Alkoç

At first glance, the image presents familiar urban elements: a dense patchwork of apartments, brick facades, diverse windows revealing different lives. Someone makes a phone call. Another person slumps at a table. A figure stands with arms crossed, lost in thought. This is recognizable urban life. But then come the boxes: neon rectangles hovering around certain windows, as if the cityscape has transformed into a website interface. And the cursors: black arrows pointing directly at people, suggesting the skyline has become a giant computer screen where an invisible hand prepares to click on someone.

Building blocks are overlaid with digital squares that highlight people living their day-to-day lives through windows. Some of the squares are accompanied by cursors.
Emily Rand & LOTI / Better Images of AI / CC BY 4.0

The selection isn’t random. Some people receive boxes, others don’t. Some windows attract cursors, others remain unmarked. Suddenly the city appears less like an urban landscape and more like the interior of a graphics card, with building rows resembling processor arrays, the entire block functioning as an enormous GPU. And if the city operates as a GPU, what exactly is it processing? Us.

Here lies the work’s incisive critique. The boxes and cursors represent unseen systems that shape contemporary urban life—systems that monitor our movements, organize us into patterns, and determine who receives attention and who doesn’t. The piece prompts essential questions: In a data-driven city, who becomes visible? Who gains priority? And who gets overlooked? This is algorithmic bias in practice, but not the dramatic science fiction version. It’s quiet, mundane, cumulative and harmful. It manifests as improved waste collection in one neighborhood and neglected potholes in another. It appears as one window receiving a digital frame while another fades into invisibility. The effect is as subtle as the small cursors in Rand’s image, but equally consequential.

Consider walking through the city with an invisible video game interface overlaying everything, highlighting random people while you’re simply trying to catch your bus. The city becomes simultaneously street and dashboard. You exist in public space while being processed by hidden systems. You’re surrounded by millions of people, yet filtered, sorted, and perceived through rules you never consented to—together, yet apart.

The invisible selection process that AI City visualizes has become increasingly visible through striking examples of algorithmic bias. For instance, Cambridge researcher Christoffer Koch Andersen’s work on “trans impossibility” examines how digital systems built around rigid binary gender categories systematically exclude trans people from essential services. Andersen points to Hungary’s plan to use facial recognition at pride parades to identify and fine attendees—suddenly those floating cursors aren’t just overlooking queer people; they’re actively targeting them. The same technology now renders people invisible in one context while placing giant neon boxes around anyone who dares exist publicly in another.

“You don’t see us until you’re aiming” 

Christoffer Koch Andersen and Mila Edensor on structural invisibility and trans impossibility 

Andersen demonstrates how these classification systems derive from colonial-era categorization practices—the supposedly “neutral” algorithms now embedded in everything from healthcare reminders to banking access actually perpetuate centuries-old biases. It’s the digital equivalent of those floating cursors deciding who gets selected and who gets ignored, except the consequences extend beyond metaphor to your actual bank account, your health outcomes, and your safety at a public gathering.

This extraction extends beyond labor to data itself. As Kate Crawford notes in Atlas of AI (2021), tech companies operate under a “collect-it-all mentality” where engineers aim to build “a mirror of the real world,” requiring that “anything that you see in the real world needs to be in our databases.” Smart cities become the ideal sites of this comprehensive data harvesting: faces captured on streets train facial recognition systems, social media feeds build predictive language models, and personal photos train machine vision algorithms. Just imagine how every small action, like a phone call, a walk, or a scroll, becomes data that helps the city “learn,” and how that learning influences the city’s treatment of you. All of this is normalized as necessary rather than questioned as invasive. The urban environment transforms into a resource to be mined, with public spaces exploited for training data that powers the very systems creating discriminatory outcomes in areas such as employment, housing, and policing.

If Childe were writing today, he would need to revise his definition. Cities are no longer built solely from stone and steel. We no longer define them by agricultural “surplus”—now we speak of data surplus. Cities consist of streaming data, prediction models, and machine learning algorithms. They are simultaneously human and non-human. And if the city functions as a computer, then someone, somewhere, controls the mouse. The question becomes: who decides where to click? 

About the author

Berk Alkoç (he/him) is a designer–researcher based in Germany exploring the intersections of technology, cities, and everyday life through a critical (and unapologetically queer) lens. At ZeMKI, University of Bremen, he designs for Molo, a civic media platform. At the Institute for Technology Assessment and Systems Analysis (ITAS) at the Karlsruhe Institute of Technology (KIT), he researches nature conservation through a relational values lens and how digital tools shape environmental governance. Outside of work, he’s likely outdoors or immersed in something visual, whether behind a camera, sketching, or experimenting with graphic design.

A headshot in black and white of Berk.

If you want to contribute to our new blog series, ‘Through My Eyes’, by selecting an image from the Better Images of AI Library and exploring what the image means to you, get in touch (info@betterimagesofai.org).

Explore other posts in the ‘Through My Eyes’ series