Beneath the Surface: Adrien’s Artistic Perspective on Generative AI

The image features the title "Beneath the Surface: Adrien's Artistic Perspective on Generative AI." The background consists of colourful, pixelated static, creating a visual texture reminiscent of digital noise. In the centre of the image, there's a teal rectangular overlay containing the title in bold, white text.

May 28, 2024 – A conversation with Adrien Limousin, a photographer and visual artist, sheds light on the nuanced intersections between AI, art, and ethics. Adrien’s work delves into the opaque processes of AI, striving to demystify the unseen mechanisms and biases that shape our representations.


A vibrant, abstract image created by converting Street View screenshots from TIFF to JPEG, showing a pixelated, distorted classical building with columns. The sky features glitch-like, multicoloured waves blending greens, purples, pinks, and blues.

ADRIEN LIMOUSIN – Alterations (2023)

Adrien previously studied advertising and is now studying photography at the National Superior School of Photography (ENSP) in Arles. He is particularly drawn to the language of visual art, especially as it emerges from new technologies.

A cluster of coloured pixels made up of random Gaussian noise filling the whole canvas, representing an AI-generated image that has not yet been denoised; digital pointillism

Fig 1. Adrien Limousin / Better Images of AI / Non-image / CC-BY 4.0

Non-image

Adrien was drawn to the ‘Better Images of AI’ project after recognising the need for more nuanced and accurate representations of AI, particularly in journalism. In our conversation, I asked Adrien about his approach to creating the image he submitted to Better Images of AI (Fig 1).


> INTERVIEWER: Can you tell me about your thinking and process behind the image you submitted?

> ADRIEN: I thought about how AI-generated images are created. The process involves taking an image from a dataset, which is progressively reduced to random noise. This noise is then “denoised” to generate a new image based on a given prompt. I wanted to try to find a breach or the other side of the opaqueness of these models. We only ever see the final result—the finished image—and the initial image. The intermediate steps, where the image is transitioning from data to noise and back, are hidden from us.

> ADRIEN: My goal with “Non-image” was to explore and reveal this hidden in-between state. I wanted to uncover what lies between the initial and final stages, which is typically obscured. I found that extracting the true noisy image from the process is quite challenging. Therefore, I created a square of random noise to visually represent this intermediate stage. It’s no longer an image and it’s also not an image yet.
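The image-to-noise process Adrien describes can be sketched in a few lines of Python. This is a toy illustration of the standard diffusion blending step, not any particular model’s implementation; the cosine noise schedule and the 64×64 grey canvas are assumptions made for the example.

```python
import numpy as np

def forward_diffuse(x0, t, T=1000, rng=None):
    """Blend an image x0 (values in [0, 1]) toward pure Gaussian noise.

    Uses the diffusion mixing x_t = sqrt(a) * x0 + sqrt(1 - a) * eps,
    where a shrinks from 1 (the untouched image) to ~0 (pure noise)
    as the timestep t approaches T.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    alpha_bar = np.cos((t / T) * np.pi / 2) ** 2  # cosine schedule (an assumption)
    eps = rng.standard_normal(x0.shape)           # the injected Gaussian noise
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps

# A flat grey "image": at t=0 it is untouched; at t=T it is pure noise.
# Every step in between is the hidden state that "Non-image" points to.
img = np.full((64, 64, 3), 0.5)
mid = forward_diffuse(img, t=500)   # the in-between: no longer an image, not yet noise
end = forward_diffuse(img, t=1000)  # statistically indistinguishable from random noise
```

Generation then runs this in reverse, starting from a square of noise much like Fig 1 and progressively denoising it toward an image matching the prompt; the intermediate states are exactly what the model never shows us.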


Adrien’s square of random noise captures this “in-between” state, where the image is both “everything and nothing”, representing aspects of AI’s inner workings. This visual metaphor underscores the importance of making these hidden processes visible, to demystify AI and foster a more accurate understanding of what it is, how it operates, and its real capabilities. The process Adrien describes also reflects the complex and collective human data that underpins AI systems: the generated image doesn’t originate from a single source but is a collage of countless lives and data points, both digital and physical, emphasising the multifaceted nature of AI and its deep entanglement with human experience.

A laptopogram based on a neutral background and populated by scattered squared portraits, all monochromatic, grouped according to similarity. The groupings vary in size, ranging from single faces to overlapping collections of up to twelve. The facial expressions of all the individuals featured are neutral, represented through a mixture of ages and genders.

Philipp Schmitt & AT&T Laboratories Cambridge / Better Images of AI / Data flock (faces) / CC-BY 4.0

“The medium is the message”

(McLuhan, Marshall, 1964).

When I asked Adrien about the artists who have inspired him, he highlighted how Marshall McLuhan’s seminal concept, “the medium is the message,” profoundly resonated with him.

This concept is crucial for understanding how AI is represented in the media. McLuhan argued that the medium itself – whether a book, television, or image – shapes our perceptions and influences society more than the content it delivers. His work, particularly Understanding Media (1964), explores how technology reshapes human interaction and societal structures, warning that media technologies, especially in the electronic age, fundamentally alter our perceptions and social patterns. Applied to AI, this means that the way AI is visually represented can either clarify or obscure its true nature. Misleading images don’t just distort public understanding; they also shape how society engages with and responds to AI, which is why it matters to choose visuals that accurately reflect the technology’s reality and impact.

“Stereotypes inside the machine”

(Adrien).

Adrien’s work explores the complex issue of stereotypes embedded within AI datasets, emphasising how AI often perpetuates and even amplifies these biases through discriminatory images, texts, and videos.


> ADRIEN: Speaking of stereotypes inside the machine, I tried to question that in one of the projects I started two years ago, and I discovered that it’s a bit more complicated than it first seems. AI is making discriminatory images or text or videos, yes. But once you see that, you start to question the nature of the images in the dataset, and then suddenly the responsibility shifts: now you start to question why these images were chosen, or why they were labelled that way in the dataset in the first place.

> ADRIEN: Because it’s a new medium we have the opportunity to do things the right way. We aren’t doomed to repeat the same mistakes over and over. But instead we have created something even more – or at least equally – discriminatory.

> ADRIEN: And even though there are adjustments made (through Reinforcement Learning from Human Feedback) they are just kind of… small patches. The issue needs to be tackled at the core.

Image shows a white male in a suit facing away from the camera on a grey background. Text on the left side of the image reads “intelligent person.”

Adrien Limousin – Human·s 2 (2022 – Ongoing)

As Adrien points out, minor adjustments or “sticking plasters” won’t suffice when addressing biases deeply rooted in our cultural and historical contexts. For example, Google recently attempted to reduce racial bias in its Gemini image-generation algorithms. The effort aimed to address long-standing issues of racial bias in AI-generated images, where people of certain racial backgrounds were misrepresented or underrepresented. Despite these well-intentioned efforts, the changes inadvertently introduced new biases: in trying to balance representation, the algorithms began overemphasising certain demographics in contexts where they were historically underrepresented, producing skewed and culturally inappropriate portrayals. This outcome highlights the complexity of addressing bias in AI. It is not enough to optimise in the opposite direction or apply blanket fixes; such approaches can create new problems while attempting to solve old ones. What this example underscores is the necessity for AI systems to be developed and situated within culture, history, and place.


> INTERVIEWER: Are these ethical considerations on your mind when you are using AI in your work?

> ADRIEN: Using Generative AI makes me feel complicit in these issues. So I think the way I approach it is more like trying to point out these gaps, through its results or by unravelling its inner workings.

“It’s the artist’s role to question”

(Adrien)


> INTERVIEWER: Do you feel that artists have an important role in creating new and more accurate representations of AI?

> ADRIEN: I think that’s one of the roles of the artist. To question.

> INTERVIEWER: Can you imagine what kinds of representations we might see, or might want to have, in the future – instead of the blue heads and robots you get when you Google AI?

> ADRIEN: That’s a really good question and I don’t think I have the answer, but as I thought about it, understanding the inner workings of these systems can help us make better representations. For instance, remixing existing representations – something we are already familiar with – is one solution, I guess, to better represent Generative AI.


Image displays an error message from the Windows 95 operating system. The text reads ‘The belief in photographic images.exe has stopped working’.

ADRIEN LIMOUSIN – System errors (2024 – ongoing)

We discussed the challenges involved in encouraging the media to use images that accurately reflect AI.


> ADRIEN: I guess if they use stereotyped images it’s because most people have come to associate AI with some kind of materialised humanoid, as the embodiment of AI, and that’s obviously misleading. But it also takes time and effort to change mindsets, especially with such an abstract and complex technology. I think it’s one of the roles of the media to do a better job of conveying an accurate vision of AI, while keeping a critical approach.


Another major factor is knowledge: journalists and reporters need to recognise the biases and inaccuracies in current AI representations in order to make informed choices. This awareness comes from education and from resources like the Better Images of AI project, which aims to make this information accessible to a wider audience. There is also a need to develop new visual associations for AI: since media rely on attention-grabbing images that are immediately recognisable, we need new visual metaphors and associations that more accurately represent AI.

One Reality


> INTERVIEWER: So kind of a big question, but what do you feel is the most pressing ethical issue right now in relation to AI that you’ve been thinking about?

> ADRIEN: Besides the obvious discriminatory aspects of the datasets and outputs, I think one of the overlooked issues is the interface of these models. If we take ChatGPT, for instance, the way there is a search bar and you put text in it expecting an answer, just like a web browser’s search bar, is very misleading. It feels familiar, but it absolutely does not work in the same way. To take any output as an answer, or as truth, when it is just giving the most probable next words, is deceptive, and I think that’s something we need to talk a bit more about.
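Adrien’s point about “the most probable next words” can be made concrete with a toy sketch. The tiny corpus and the bigram counting below are illustrative assumptions, nothing like a production language model, but the underlying idea is the same: the system ranks continuations by frequency, with no notion of truth.

```python
from collections import Counter, defaultdict

# A toy next-word model "trained" on a tiny corpus: it knows nothing about
# the world, only which word most often followed the previous one.
corpus = ("the sky is blue . the sky is vast . "
          "the sea is blue . the sea is deep .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_probable_next(word):
    """Return the likeliest continuation of `word` and its probability."""
    counts = follows[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

word, p = most_probable_next("is")
# "is" was followed by blue (2x), vast (1x), deep (1x) -> ("blue", 0.5)
```

The model outputs “blue” not because it is true, but because it was the most frequent continuation in its data; the search-bar-like interface invites us to read that statistical guess as an answer.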


One major problem with AI is its tendency to offer simplified answers to multifaceted questions, which can obscure complex perspectives and realities. This becomes especially relevant as AI systems are increasingly used for information retrieval and decision-making. For example, Google’s AI-generated search summaries have been criticised for frequently presenting incorrect information. AI’s tendency to reinforce existing biases and create filter bubbles poses a further risk: algorithms often prioritise content that aligns with users’ pre-existing views, exacerbating polarisation (Pariser, 2011). This is compounded when AI systems limit exposure to a variety of perspectives, potentially widening societal divides.

Meta-synthography

(Adrien)

Adrien takes inspiration from metaphotography, in which artists use the photographic process itself to reflect on, comment on, and challenge the conventions and practices of the medium.

Building on this concept, Adrien has coined the term “meta-synthography” to describe his approach to digital art.


> ADRIEN: The term meta-synthography is one of the terms I chose to describe digital art in general. So it’s not properly established – that’s just me doing my collaging.

> INTERVIEWER: That’s great. You’re gonna coin a new word in this blog 😉


I asked Adrien which artists inspire him. He discussed the influence of Robert Ryman, a renowned painter celebrated for his minimalist approach and his focus on the process of painting itself. Ryman’s work often features layers of paint on canvas, emphasising the act of painting and making the medium and its processes central themes in his art.


> ADRIEN: I recently visited an exhibition of Robert Ryman, which kind of does the same with painting – he paints about painting on painting, with painting.

> INTERVIEWER:  Love that.

> ADRIEN: I thought that was very interesting and I very much enjoy this kind of work. It talks about the medium… It’s a bit conceptual, but it raises questions about the medium – about the way we use it, about the way we consume it.

Image displays a large advertising board showing a blank white image against a clear grey sky

Adrien Limousin – Lorem Ipsum (2024 – ongoing)

As we navigate the evolving landscape of AI, the intersection of art and technology provides a crucial perspective on the impact and implications of these systems. By championing accurate representations and confronting inherent biases, Adrien’s work highlights the essential role artists play in shaping a more nuanced and informed dialogue about AI. It’s not only important to highlight AI’s inner workings but also to recognise that imagery has the power to shape reality and our understanding of these technologies. Everyone has a role in creating AI that works for society, countering the hype and capitalist-driven narratives advanced by tech companies. Representations from communities, along with the voices of individuals and artists, are vital for sharing knowledge, making AI more accessible, and bringing attention to the experiences and perspectives often rendered invisible by AI systems and media narratives.


Adrien Limousin (interviewee) is a 25-year-old French (post-)photographer exploring the other side of images, currently studying at the National Superior School of Photography in Arles.

Cherry Benson (interviewer) is a Student Steward for Better Images of AI. She holds a degree in psychology from London Metropolitan University and is currently pursuing a Master’s in AI Ethics and Society at the University of Cambridge where her research centers on social AI. Her work on the intersection of AI and border control has been featured as a critical case study in the Cambridge Journal of Artificial Intelligence for how racial capitalism is deeply intertwined with the development and deployment of AI.