Open Competition to Create More Realistic Stock Images of ‘Digital Transformation at Work’

ANNOUNCEMENT

The ESRC Digital Futures at Work Research Centre (Digit) and Better Images of AI (BIoAI) are delighted to announce a competition to reimagine the visual communication of how work is changing in the digital age. The Digit Centre has undertaken a significant five-year research programme culminating in insights about real-world digital transformations currently impacting people’s daily lives.

ARTISTS’ BRIEF

To visually capture the digital transformation of work and the digital dialogues around these changes. We will offer eight prizes for new visual images across four themes from the report:

  1. Digital Adoption
  2. Digital Inclusion
  3. Changing employment contracts and working conditions 
  4. Digital Dialogues

We invite the creative community to illustrate the realities of these themes. By reflecting on the research findings, artists can help visually communicate and build better understandings about how digital technologies are shaping changes in the workplace.

Digit is teaming up with Better Images of AI (BIoAI), a non-profit collaboration and free image library, to bring visuals created for the competition to the attention of a global audience of media, education and content creators.

BIoAI will make selected entries free to use on Creative Commons licences to better frame and illustrate the wider debates seen in public articles and content about the future of work in the digital age. 


UPDATE 18th May 2025: The deadline is being extended to allow entries to be submitted by 23:59 on 18th May 2025 in ANY time zone.

  • Competition Opens: Wednesday 9th April 2025
  • Competition Closes: Sunday 18th May 2025 (23:00 UTC)
  • Winners Notified: Friday 20th June 2025
  • Public Announcement: w/c 7th July 2025

Prizes

– Grand prize: £600
– Winners: 4 x £400 prizes
– Runners-up: 4 x £300 prizes

Please note that prizes paid in currencies other than pounds sterling (GBP) will be subject to reductions due to foreign exchange fees.

Eligibility

– Open to everyone aged 18 or over at the time of submission or interaction with the project
– Students, amateurs, and professionals from across all creative fields are welcomed
– Entries from around the world are welcome, but entrants must ensure that they can receive payment and will be responsible for any legal and practical requirements related to receiving payments in their country.
– Entries from collaborations or commercial organisations are eligible
– Prizes are for original artwork created for exclusive use of the competition. Submissions should be the artist’s own original work and should not infringe upon any third party’s intellectual property. If submissions include elements such as images, graphics and other materials created by others (e.g., in a collage or mixed-media work) the artist must have the appropriate rights or licenses to use them.
– Multiple entries are welcome

Formats

Static digital files of images created by (but not limited to) the following methods are encouraged:
– Digital art, 3D rendering
– Photography
– Collage, remixing
– Illustration
– High quality photographs of sculpture, craft, or 3D artworks
– Printmaking
– Painting, drawing

Please note that images created using AI image generators are subject to certain conditions which are detailed in the next section.

Use of AI and AI image generators

AI-generated artworks will only be eligible if:

– Original artwork by the submitting artist is used as the visual prompt and style

– The image generator used to create the image:

  • Has only consented art used in its training
  • Compensates artists with work in its training set
  • Marks all images as generated

(Currently, Public Diffusion has been recommended, but alternatives which meet these criteria can be discussed.)

– The way the image generator has been used in the process is described within the image documentation provided.

– Processes which use AI but are not text-to-image AI image generators (such as those found in digital editing platforms) are admissible – for example, background removal, expand, and filters.

Judging criteria

Entries will be scored based on meeting a combination of criteria including:
– Visual and aesthetic impact
– Meeting the brief and reflecting the research
– Originality and creativity
– Avoidance of unhelpful tropes
– Communicating the themes from the research findings

Judging panel

Entries will be shortlisted and then judged by a panel of experts from different fields, including the creative industries, communications, technology and the world of work. Judges’ decisions will be final.

Use of images

– A condition of accepting a prize as a winning entry is that the image copyright will be transferred to the University of Sussex.
– The image may also be made available in the Better Images of AI image library on a Creative Commons 4.0 licence. This allows for commercial adaptation and use, but requires every use to be credited to the artist and project.
– Images which do not win, or whose artists do not accept prizes, remain the property of the artist, but artists may be offered the opportunity to have them included in the Better Images of AI library and shared on the Digit website.

Inclusivity

– Entries are encouraged from individuals from all groups, communities and backgrounds.
– Entrants are encouraged to contact the team if they have accessibility requirements and require information or submission in a different format.

Publicity

– Winning entries will be publicised, and selected winners may be offered further publicity opportunities.
– Entries that are unsuccessful but are deemed to exemplify the aims of the competition may also be given the opportunity to be featured in publicity.

Prize Payment

– We aim to pay winners by 30 June 2025. Winners will need to submit an invoice or receipt to receive payment.
– Winners from outside the UK will be subject to foreign currency conversion fees of roughly 3%.
– We and AI LTD will administer all prize payments on behalf of the Digit Centre at the University of Sussex.

Data Sharing

Entrants will be asked for:
– Name and email for correspondence only
– (Optional) Your name and website (if applicable) may be published with your competition entry on social media, in the Better Images of AI library, and in other news formats.


  • Submission is by filling out this Form, which will need to be received by Sunday 18th May 2025 (23:00 UTC).
  • If you log in to Google you can start and return to the form at any time; otherwise, we suggest you have all information ready before you fill it in. In either case it will need to be submitted by the deadline.
  • If you have any accessibility requirements please contact info@betterimagesofai.org  so alternative arrangements can be made. 
  • You can submit as many entries as you like, as long as you provide all the relevant information for each of the images.
  • Files will need to be submitted as .PNG files at 2560×1440 pixels. As image files can be very large, we ask for links to the files.
  • Alongside your image, you will also be asked to provide a short description and answer some brief questions relating to the development process, the transfer of intellectual property, and your background (optional).
  • You will be able to submit up to 5 images through the form; if you wish to enter more images, please get in touch so we can help facilitate this with you.
  • If you change your mind about entering, get in touch and we will remove your entry.

Please make sure you have read the following sections with details of the brief thoroughly before starting or submitting your entry.


The aim of the competition is to contribute to wider public understanding of the ways in which AI-enabled technologies are changing work. Creating images which will increase such understanding requires a thoughtful consideration of the landscape of digital transformation and AI at work, which is the focus of Digit’s research.

About Digit:

Digit stands for the Digital Futures at Work Research Centre, and was established in 2020 with investment from the UKRI Economic and Social Research Council (ESRC), a UK government-funded research and innovation body.

Researchers based at a consortium of universities, led by the University of Sussex Business School and Leeds University Business School, together with the universities of Cambridge, Aberdeen, Manchester and Monash and the Institute for the Future of Work, have been examining the ways that digital technologies are reshaping work. They have been looking at the impact on employers, workers and their representatives, job seekers and governments. Their aim is to inform current debates about the future of work and develop a compelling, empirical basis for effective policy-making.

Digit is the organisation funding the competition and setting the brief. You can find out more about Digit here.

About Digital Dialogues:

One of the key outputs of Digit’s research is the ‘Digital Dialogues’ report, summarising findings from the Centre’s research on digital transformation at work. The findings identify the key challenges now facing governments, businesses, trade unions, civil society organisations, and workers – and how they are shaping the future of work.

These are the findings that the competition seeks to illustrate visually. You can find out more about the report here.

About Better Images of AI:

Better Images of AI is a non-profit project and website which includes:

  • a free image library
  • articles about visual representations of AI
  • research and guidance on how to communicate about AI and technology in more realistic, transparent and inclusive ways.

BIoAI is an open and global collaboration between several individuals and non-profit organisations and institutes. They are united by a shared aim to enable better conversations and understanding of AI by replacing misleading but dominant science fiction imagery of AI with more useful (and less exclusionary and biased) images.

Partners include BBC R&D, LCFI, Digital Catapult, the Scottish AI Alliance, AI Sweden and the Finnish Centre for Artificial Intelligence, as well as individual activists, artists and academics. Better Images of AI is coordinated and maintained by We and AI, a UK non-profit focused on critical AI literacy.

The project has been highly influential, with images used by hundreds of content producers communicating about AI, including global news media and content creators, and viewed at least 2,000,000 times.

Useful links:

Better Images of AI have written this creative brief based on their research and experience in this area. They are always keen to maintain longer-term relationships with artists interested in this area.

Stock images are photography or other images licensed for use via various library sites. They are used for a number of different purposes, such as accompanying news media or feature articles, accompanying online communications such as newsletters or websites, illustrating reports or research, and decorating event spaces, slide presentations and books. They are found based on keywords, and stock image libraries buy images and copyright mainly from professional commercial artists, choosing images they know will be commercially successful. Often this means signifying the keyword in question in a recognisable and popular way.

For complex and evolving subjects, references to recognisable images can be influenced by historic or existing popular cultural media representations. Commercially successful images are often striking and compelling, providing a provocative visual shorthand to communicate the idea. However, too often, images of AI have become self-referential, endlessly copied and reworked clichés, without being connected to the lived realities of the topic. The resulting dominance of common aesthetics and iconography is oversimplistic and misinforms public perceptions and understanding of the topics.

The future of work and digital transformation are areas in which most images found in stock image libraries rarely reflect the topics they are being used to illustrate. Images about AI often contain robots or brains, but the subject could be much more meaningfully represented by the themes from Digit’s research, which looks at how people and organisations are using or being impacted by AI and other technologies at work.

Stock images are used by journalists, writers, editors, content creators, thought leaders, artists, commentators, academics, educators, public, private, and third sector communications and marketing departments.

As such, creating new, fresher images of digital work (and resulting challenges) which are informed by the realities which the research explores will:

  • Help society at large to increase understanding of the way work is changing
  • Facilitate more meaningful and informed discussions about how we respond
  • Encourage users and viewers to think critically about the challenges uncovered
  • Create a set of images which fill gaps in representing topics related to AI

In contrast with much existing imagery which reinforces hype and speculation about the diffusion of AI through the economy, the competition aims to produce a range of images that better visualise the ways in which AI and digital technologies are transforming work in practice and their impacts on different groups of workers, employers, trade unions and communities.

Our research identifies some of the real problems and opportunities for ordinary working people and those looking for work that can be understood in the here and now.  However, the imagery available to illustrate how work might be changing is limited. It is often focused on the technology in sanitised high tech warehouses, white collar business environments, or on young ‘digital nomads’ in beautiful locations. We urgently need a wider range of images that can help society to visualise real world transformations that are already underway—and how this impacts on people in their working and daily lives.

Google image searches for ‘Future of Work’ often result in white, science fiction robots, suggesting that workers will be replaced by robots.

“Digital transformation” searches show a lot of “Minority Report” style images, with white suited people virtually orchestrating holographic elements by pressing icons, not showing any of the real world people, processes, or real technologies involved.

The challenge is to come up with new ways of portraying digital transformations by focusing instead on the realities, examples, places, characteristics and impacts.

You can find out more about these realities in the following section: The Four Themes.


To help focus ideas and narrow down the amount of research needed to enter, we are proposing four themes related to the research. You can enter images for one or multiple themes. In each one we are asking you either to consider your own perspective or to think about how you might visually explore the following questions:

Finding: Digital adoption is still patchy and investment in digital skills training is low

  • Only just over a third of employers had invested in new digital technologies and non-adopters were hesitant about investing in the near future. 
  • Some small firms in particular are at risk of falling behind. 
  • While employers were finding it difficult to recruit workers with the necessary skills, there was limited investment in training.
  • In manufacturing and finance organisations we found that AI is being used for specific (often repetitive) tasks but has not, as yet, resulted in job losses.
  • However, AI use in some creative and digital small businesses suggests that young people may find it harder to gain the level of digital skills required for entry level jobs.

Feel free to come up with your own way of visualising the challenge and recommendation area. Or you might wish to consider the following:

  • How we are adopting technologies either in specific situations or collectively
  • Whether they are helpful, or control and restrain us 
  • Whether they are helping us think creatively or helping us all think the same way
  • Which capabilities or practices they might be improving, and which they might be reducing or degrading
  • The factors that influence adoption rates

Finding: Digital exclusion is creating and exacerbating new forms of inequality

  • ‘Digital by default’ welfare policies are a barrier to work for jobseekers with low levels of digital literacy. These barriers include data poverty, a reliance on smart phones for complex tasks rather than computers, dependence on shared devices, and reliance on intermediaries to get online.
  • Digital technologies can also help to build inclusionary ways of working that particularly benefit women, disabled people and ethnic minorities. However, people from these groups can also experience downsides of digitalisation. 

Feel free to come up with your own way of visualising the challenge and recommendation area. Or you might wish to consider the following:

  • Examples of what it means for digital at work to be for the benefit of all, and who the all are
  • How you can include people in the change
  • Who has currently been left behind – what fields they work in, how it happens, and what their exclusion looks like

Finding: Technology adoption is facilitating experimentation with how, when and where people work, as firms adopt new business models and/or working time arrangements.

  • Some platform companies have established ‘privatised’ forms of social and employment protections (like sick pay), but these provide less protection for the self-employed compared to standard ‘worker’ or ‘employee’ contracts
  • Impacts can vary in different sectors: most musicians do not earn much from streaming; travel content creators experience precarious income streams; quick commerce companies are moving from direct employment to self-employed contractors. 
  • Positive experiments include agile working in the NHS and a four-day working week that can enable organisations and individuals to benefit.

Feel free to come up with your own way of visualising the challenge and recommendation area. Or you might wish to consider the following:

  • What kind of contracts are now becoming common, and what does this mean for workers?
  • What digital jobs might be good jobs? (Consider for example, digital nomads, content creators who have the freedom  to travel and flexibility, and others who have found new ways of working)
  • Or whether we are becoming digital slaves – tied to automated schedules and surveillance and managed by algorithm
  • Whether we can have both types of jobs at once, and what the gap looks like
  • What these changes mean for relationships between workers and employers

Finding: Supporting more extensive, society-wide, inclusive ‘digital dialogues’ will be key to improving productivity, wellbeing and inclusion

  • Questions about how to accelerate responsible adoption of technology in public and private sectors should go hand in hand with questions about how to harness technology to improve people’s everyday working lives. 
  • Our research shows that giving workers a voice can help to realise and share the benefits of technology at work.

Feel free to come up with your own way of visualising the challenge and recommendation area. Or you might wish to consider the following:

  • What it means to talk about technology, and who is included
  • Whether everyone who should be involved is included
  • If not, who is decision making or consultation limited to?
  • What it might look like to have meaningful or equitable conversations about digital transformation and the future of work

For all themes, you may wish to ask yourself one or more of the following questions to help with representing intangible or disembodied concepts:

  • What industry am I interested in? 
  • Who is involved in the technology? How can we represent them authentically?
  • Who else is involved? How? And where?
  • Are there processes or technologies which can be represented?
  • How can you accurately reflect the properties of data and technology – for example the statistical vs emotional nature of AI?
  • How can you be realistic about the capabilities and performance of technologies?
  • Are you more interested in communicating physical elements, or concepts?
  • What is usually visible and invisible to people?
  • What mood do you want to convey? Are you optimistic/ positive or pessimistic/ critical?
  • Who do you think would benefit from seeing this picture and why?
  • Do you want to focus on a particular detail, or wider social or technical systems?
  • What concerns or reassures you? What excites or depresses you?
  • How might you convey nuance, ambiguity or tensions?
  • What are the wider people, social, environmental or economic implications?

We welcome a range of approaches to visualising the themes and challenges, drawing on your own practice and ideas. 

A (non-exhaustive) list of approaches we welcome:

  • Remixing or collage from existing materials (see the Archival Images of AI collection)
  • Realism – showing a scene 
  • New metaphors – conveying concepts through more familiar references
  • Showing the output of any digital technologies used in the creation of the image
  • Storytelling 
  • Iconography – creating new visual shortcuts or language to signify aspects you want to address
  • Focus on portraying very specific use cases, examples, industries, technologies

In all cases, we are looking for images which:

  1. Convey current digital work or transformation as it is now, not in the future
  2. Are visually compelling and high quality – they could realistically be in a commercial image library
  3. Show the AI or technology in the picture somehow so people looking can see it is about digital / tech/ AI 
  4. Are original work, accompanied by a roughly 75 to 250 word description of what is in the picture, how it relates to the (named) theme and what technique you have used
  5. (Optional) reflect the significance of people (for example, in designing, governing, contributing to, or being impacted by digital work) 

Based on research leading to the Guide to Better Images of AI, which should be read before you start, here is a list of things to avoid in your entry:

  • Human brains
  • Science fiction (usually white) robots
  • Anthropomorphism
  • ‘Creation of Adam’ touching hands
  • Unnecessary use of the colour blue to signify AI
  • Science fiction references or speculative future/fantasy
  • Descending code
  • Unnecessary white people in suits
  • Unnecessary holography
  • Magical or monolithic representations of AI

An online briefing session was held on Thursday 17th April at 12pm UTC+1 and is now available for you to watch below.

Please contact info@betterimagesofai.org if you have any questions not answered above.

What Do I See in ‘Ways of Seeing’ by Zoya Yasmine


Artist contributions to the Better Images of AI library have always played an important role in fostering understanding and critical thinking about AI technologies and their context. Images facilitate deeper inquiries into the nature of AI, its history, and its ethical, social, political and legal implications.

When artists create better images of AI, they often have to grapple with these narratives in their attempts to portray the technology more realistically and point towards its strengths and weaknesses. Furthermore, as artists freely share these images in our library, others can benefit from learning about the artists’ own motivations (which are provided in the descriptions), and the images can also inspire users’ own musings.

In this series of blog posts, some of our volunteer stewards are each taking turns to choose an image from the Archival Images of AI collection and unpack the artist’s processes and explore what that image means to them. 

At the end of 2024, we released the Archival Images of AI Playbook with AIxDESIGN and the Netherlands Institute for Sound and Vision. The playbook explores how existing images – especially those from digital heritage collections – can help us craft more meaningful visual narratives about AI. Through various image-makers’ own attempts to make better images of AI, the playbook shares numerous techniques which can teach you how to transform existing images into new creations. 

Here, Zoya Yasmine unpacks ‘Ways of Seeing’, image-maker Nadia Piet’s own better image of AI, which was created for the playbook. Zoya comments on how it is a really valuable image for depicting the way that text-to-image generators ‘learn’ how to generate their outputs. Zoya considers how this image relates to copyright law (she’s a bit of an intellectual property nerd) and the discussions about whether AI companies should be able to use individuals’ work to train their systems without explicit consent or remuneration.

ALT text: Diptych contrasting a whimsical pastel scene with large brown rabbits, a rainbow, and a girl in a red dress on the left, and a grid of numbered superpixels on the right - emphasizing the difference between emotive seeing and analytical interpretation.

Nadia Piet + AIxDESIGN & Archival Images of AI / Better Images of AI / Ways of Seeing / CC-BY 4.0

‘Ways of Seeing’ by Nadia Piet 

This diptych contrasts human and computational ways of seeing: one riddled with memory and meaning, the other devoid of emotional association but capable of structural analysis. The left pane shows an illustration from Tom Seidmann-Freud’s Book of Hare Stories (1924), which portrays a whimsical, surreal scene that is both playful and uncanny. On the right, the illustration is reduced to a computational rendering, with each of its superpixels (16×16) fragmented and sorted by visual complexity with a compression algorithm.
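For the curious, the process behind the right-hand pane can be approximated in a few lines of code. The sketch below is my own rough reconstruction, not Nadia Piet’s actual method: it splits a grayscale image into 16×16 blocks and orders them by a simple complexity proxy (zlib-compressed size), so that flatter, more uniform blocks sort before detailed ones.

```python
# Rough sketch of superpixel fragmentation and complexity sorting.
# Assumption: 'visual complexity' is proxied by zlib-compressed size,
# since detailed blocks compress less well than uniform ones.
import zlib

def split_into_blocks(pixels, block=16):
    """Split a 2D list of grayscale values (0-255) into block x block tiles."""
    h, w = len(pixels), len(pixels[0])
    blocks = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            blocks.append(bytes(
                pixels[y + dy][x + dx]
                for dy in range(block) for dx in range(block)
            ))
    return blocks

def sort_by_complexity(blocks):
    # Blocks that compress poorly contain more visual detail,
    # so ascending compressed size goes from plain to busy.
    return sorted(blocks, key=lambda b: len(zlib.compress(b)))
```

Reassembling the sorted blocks into a grid would give something like the gradient of detail seen in the rendering.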


Copyright and training AI systems 

Training AI systems requires substantial amounts of input data – images, videos, texts and other content. Based on the data from these materials, AI systems can ‘learn’ how to make predictions and provide outputs. However, many of the materials used to train AI systems are protected by copyright owned by other parties, which raises complex questions about ownership and the legality of using such data without permission.

In the UK, Getty Images filed a lawsuit against Stability AI (developer of a text-to-image model called Stable Diffusion) claiming that 7.3 million of its images were unlawfully scraped from its website to train Stability AI’s model. Similarly, Mumsnet has launched a legal complaint against OpenAI, the developer of ChatGPT, accusing the AI company of scraping content from its site (with over 6 billion words shared by community members) without consent.

The UK’s Copyright, Designs and Patents Act 1988 (the Act) provides companies like Getty Images and Mumsnet with copyright protection over their databases and assets. So unless an exception applies, permission (through a licence) is required if other parties wish to reproduce or copy the content. Section 29A of the Act provides an exception which permits copies of any copyright-protected material for the purposes of Text and Data Mining (TDM) without a specific licence. But this lenient provision is for non-commercial purposes only. Although the status of AI systems like Stable Diffusion and ChatGPT has not yet been tested before the courts, they are likely to fall outside the scope of non-commercial purposes.

TDM is an automated technique used to extract and analyse vast amounts of online material to reveal relationships and patterns in the data. TDM has become an increasingly valuable tool for training lucrative generative AI systems on mass amounts of material scraped from the Internet. It is clear that AI models cannot be developed or built efficiently without input data created by human artists, researchers, writers, photographers, publishers, and creators. However, as much of their work is being used without payment or attribution, big tech companies are essentially ‘freeriding’ on the works of the creative industries, which have invested significant time, effort, and resources into producing such rich works.


How does this image relate to current debates about copyright and AI training? 

When I saw this image, it really prompted me to think about the training process of AI systems and the purpose of the copyright system. ‘Ways of Seeing’ has stimulated my own thoughts about how computational models ‘learn’ and ‘see’ in contrast to human creators.

Text-to-image AI generators (like Stable Diffusion or Midjourney) are repeatedly trained on thousands of images, which allows the models to ‘learn’ to identify patterns, like what common objects and colours look like, and then reproduce these patterns when instructed to create new images. While Piet’s image has been designed to illustrate a ‘compression algorithm’ process, I think it also serves as a useful visual to reflect how AI processes visual data computationally, reducing it to pixels, patterns, or latent features.

It’s important to note that the images generated by AI models will not necessarily be exact copies of the original images used in the training process – instead, they are statistical approximations of the training data that has informed the model’s overall understanding of how objects are represented.

It’s interesting to think about this in relation to copyright and what this legal framework serves to protect. Copyright stands to protect the creative expression of works – for example, the lighting, exposure, filter, or positioning of an image – but not the ideas themselves. The reason that copyright law focuses on these elements is that they reflect the creator’s own unique thoughts and originality. However, as Piet’s illustration usefully demonstrates, what is significant about the AI training process for copyright law is that TDM is often not used to extract the protected expression of the materials.

To train AI models, it is often the factual elements of the work that are the most valuable (as opposed to the creative aspects). The training process relies on the broad visual features of the images, rather than specific artistic choices. For example, when training text-to-image models, TDM is not typically used to extract data about the lighting techniques employed to make an image of a cat particularly appealing. Instead, access to images of cats which detail the features that resemble a cat (fur, whiskers, big eyes, paws) is what’s important. In Piet’s image, the protectable parts of the illustration from the ‘Book of Hare Stories’ would subsist in the artistic style and execution – for example, the way that the hare and other elements are drawn, the placement and interaction of the elements, and the overall design of the image.

The specific challenge for copyright law is that AI companies are unable to capture these ‘unprotectable’ factual elements of materials without making a copy or storing the protected parts (Lemley and Casey, 2020). I think Nadia’s image really highlights the transformation of artwork into fragmented ‘data’ for training systems which challenges our understanding of creativity and originality. 

My thoughts above are not to suggest that AI companies should be able to freely use copyright-protected works as training data for their models without remunerating or seeking permission from copyright owners. Instead, the way that TDM and generative AI ‘re-imagine’ the value of these ‘unprotectable’ elements means that AI companies still freeride on creators’ materials. Therefore, AI companies should be required to explicitly license copyright-protected materials used to train their systems so creators are provided with proper control over their works (you can read more about my thoughts here).

Also, I do not deny that there are generative AI systems that aim to reproduce a particular artist’s style – see here. In these instances, I think it would be easier to prove copyright infringement, since these outputs are a clear reproduction of ‘protected elements’. However, where this is not the purpose of the AI tool, developers try to avoid outputs that replicate the training data too closely, as this can more easily open them up to copyright infringement claims over both the input (as discussed in this piece) and the output image (see here for a discussion).


My favourite part of Nadia Piet’s image

I think my favourite part of the image is the choice of illustration used to represent computational processing. As Nadia writes in her description, Tom Seidmann-Freud’s illustration depicts a “whimsical, surreal scene that is both playful and uncanny”. Tom, an Austrian-Jewish painter and children’s book author and illustrator (and also Sigmund Freud’s niece), led a short life: she died of an overdose of sleeping pills in 1930, at age 37, a few months after the death of her husband.

“The Hare and the Well” (Left), “Fable of the Hares and the Frogs” (Middle), “Why the Hare Has No Tail” (Right) by Tom Seidmann-Freud, via the Public Domain Review

After Tom’s death, the Nazis came to power and attempted to destroy much of the art she had created as part of the purge of Jewish authors. Luckily, Tom’s family and art lovers were able to preserve much of her work. I think Nadia’s choice of this image critiques what might be ‘lost’ when rich, meaningful art is reduced to AI’s structural analysis. 

A second point, although not related exactly to the image, is the very thoughtful title, ‘Ways of Seeing’. ‘Ways of Seeing’ was a 1972 BBC television series and book created by John Berger. In the series, Berger criticised traditional Western cultural aesthetics by raising questions about hidden ideologies in visual images, such as the male gaze embedded in the female nude. He also examined what had changed in our ‘ways of seeing’ between the time the art was made and the present day. Side note: I think Berger would have been a huge fan of Better Images of AI.

In a similar vein, Nadia has used Seidmann-Freud’s art to explore new parallels with technology like AI which would not have been thought about at the time the work was created. In addition, Nadia’s work serves as an invitation to see and understand AI differently, and, like Berger, her work supports artists around the world.


The value of Nadia’s ‘better image of AI’ for copyright discussions

As Nadia writes in the description, Tom Seidmann-Freud’s illustration was derived from the Public Domain Review, where it is written that “Hares have been known to serve as messengers between the conscious world and the deeper warrens of the mind”. From my perspective, Nadia’s whole image acts as a messenger to convey information about the two differing modes of seeing between humans and AI models. 

We need better images of AI like this, especially for the purposes of copyright law, so that we can have more meaningful and informed conversations about the nature of AI and its training processes. All too often, in conversations about AI and creativity, the images used depict humanoid robots painting on a canvas or hands snatching works.

‘AI art theft’ illustration by Nicholas Konrad (Left) and Copyright and AI image (Right)

These images create misleading visual metaphors that suggest that AI is directly engaging in creative acts in the same way that humans do. Additionally, visuals showing AI ‘stealing’ works reduce the complex legal and ethical debates around copyright, licensing, and data training to overly simplified, fear-evoking concepts.

Thus, better images of AI, like ‘Ways of Seeing’, can serve a vital role as a messenger to represent the reality of how AI systems are developed. This paves the way for more constructive legal dialogues around intellectual property and AI that protect creators’ rights, while allowing for the development of AI technologies based on consented, legally acquired datasets.


About the author

Zoya Yasmine (she/her) is a current PhD student exploring the intersection between intellectual property, data, and medical AI. She grew up in Wales and in her spare time she enjoys playing tennis, puzzling, and watching TV (mostly Dragon’s Den and Made in Chelsea). Zoya is also a volunteer steward for Better Images of AI and part of many student societies including AI in Medicine, AI Ethics, Ethics in Mathematics & MedTech. 


This post was also kindly edited by Tristan Ferne – lead producer/researcher at BBC Research & Development.


If you want to contribute to our new blog series, ‘Through My Eyes’, by selecting an image from the Archival Images of AI collection and exploring what the image means to you, get in touch (info@betterimagesofai.org)

Explore other posts in the ‘Through My Eyes’ Series

‘Weaved Wires Weaving Me’ by Laura Martinez Agudelo

At the top, a digital collage featuring a computer monitor with circuit board patterns on the screen. A Navajo woman is seated on the edge of the screen, appearing to stitch or fix the digital landscape with her hands. Blue digital cables extend from the monitor, keyboard, and floor, connecting the image elements. Beneath, there is the text in black: ‘Weaved Wires Weaving Me’ by Laura Martinez Agudelo. In the top right corner, there is a text box in maroon with the text in white: ‘through my eyes blog series’

Artist contributions to the Better Images of AI library have always played an important role in fostering understanding and critical thinking about AI technologies and their context. Images facilitate deeper inquiries into the nature of AI, its history, and its ethical, social, political and legal implications.

When artists create better images of AI, they often have to grapple with these narratives in their attempts to more realistically portray the technology and point towards its strengths and weaknesses. Furthermore, as artists freely share these images in our library, others can benefit from learning about the artist’s own motivations (which are provided in the image descriptions), and the images can also inspire users’ own musings.

In our blog series, “Through My Eyes”, some of our volunteer stewards take turns selecting an image from the Archival Images of AI collection. They delve into the artist’s creative process and explore what the image means to them—seeing it through their own eyes.

At the end of 2024, we released the Archival Images of AI Playbook with AIxDESIGN and the Netherlands Institute for Sound and Vision. The playbook explores how existing images – especially those from digital heritage collections – can help us craft more meaningful visual narratives about AI. Through various image-makers’ own attempts to make better images of AI, the playbook shares numerous techniques which can teach you how to transform existing images into new creations. 

Here, Laura Martinez Agudelo shares her personal reflections on ‘Weaving Wires 1’ – Hanna Barakat’s own better image of AI that was created for the playbook. Laura comments on how the image uncovers the hidden Navajo women’s labor behind the assembly of microchips in Silicon Valley – inviting us to confront the oppressive cultural conditions of conception, creation and mediation of the technology industry’s approach to innovation.


Digital collage featuring a computer monitor with circuit board patterns on the screen. A Navajo woman is seated on the edge of the screen, appearing to stitch or fix the digital landscape with their hands. Blue digital cables extend from the monitor, keyboard, and floor, connecting the image elements.

Hanna Barakat + AIxDESIGN & Archival Images of AI / Better Images of AI / Weaving Wires 1 / CC-BY 4.0


Cables came out and crossed my mind 

Weaving wires 1 by Hanna Barakat is about hidden histories of computer labor. As explained in the image’s description, her digital collage is inspired by the history of computing in 1960s Silicon Valley, where the Fairchild Semiconductor company employed Navajo women for intensive tasks such as assembling microchips. Their work (literally with their hands and their digits) was a way for these women to provide for their families in an economically marginalized context.

At that time, this labor was framed as legitimizing the transfer of weaving cultural practices into technological innovation. This legitimation appears to be an illusion: it converges the unchanging character of weaving as heritage with the constant renewal of global industry, but it also presupposes the non-recognition of Navajo women’s labor and a techno-cultural, gendered transaction. Their work is diluted in meaning and action, and overlooked in the history of computing.

In Weaving wires 1, we can see a computer monitor with circuit board patterns on the screen, and a juxtaposed woven design. Then, two potential purposes enter into dialogue with the woman sitting at the edge of the screen, suspended in a white background: is the woman stitching, fixing, or even both, as she weaves and prolongs the wires? These blue wires extend from the monitor, keyboard and beyond. The woman seems to be modifying or constructing a digital landscape with her own hands, leading us to remember the place where these materialities come from, and the memories they connect to.

Since my mother tongue is Spanish, a distant memory of the word “Navajo” and the image of weaving women appeared. “Navajo” is a Spanish adaptation of the Tewa Pueblo word navahu’u, which means “farm fields in the valley”. The Navajo people call themselves Diné, literally meaning “The People”. At this point, I began to think about the specific socio-spatial conditions of Navajo/Diné women at that time and their misrepresentation today. When I first saw the collage, I felt these cables crossing my own screen. Many threads began to unravel in my head in the form of question marks. I wondered how older and younger generations of Navajo/Diné women have experienced (and in other ways inherited) this hidden labor associated with the transformation of the valley and their community. This image acts as a visual opposition to the geographic and social identification of Silicon Valley as presented, for example, in the media. So now, these wires expand the materiality to reveal their history. Hanna creatively represents the connection between key elements of this theme. Let’s explore some of her artistic choices.

Recoded textures as visual extensions 

Hanna Barakat is a researcher, artist and activist who studies emerging technologies and their social impact. I discovered her work thanks to the Archival Images of AI project (Launch & Playtest). Weaving wires 1 is part of a larger project from Hanna where a creative dialogue between textures and technology is proposed. Hanna plays with intersections of visual forms to raise awareness of the social, racial and gender issues behind technologies. Weaving wires 1 reconnected me with the importance of questioning the human and material extractive conditions in which technological devices are produced.

As a lecturer in (digital) communication, I’m often looking for visual support on topics such as the socio-economic context in which the Internet appeared, the evolution of the Web, the history of computer culture, and socio-technical theories and examples for studying technological innovation, its problems and ethical challenges. The visual narratives are mostly uniform, and the graphic references are also gendered. Women’s work is most of the time misrepresented (no, those women in front of the big computers are not just models or assistants; they have full names and they are the official programmers and coders. Take a look at the work of Kathy/Kathryn Kleiman… Unexplored archives are waiting for us!).

When I visually interacted with Weaving wires 1 and read about its source of inspiration (I actually used and referenced the image in one of my lectures), I realized once again the need to make visible the herstory (a term coined in the 1960s as a feminist critique of conventional historiography) of technological innovation. Sometimes, in the rush of life in general (and in specific moments, like the preparation of a lecture in my case), we forget to take some time and distance to convene other ways of exploring and sharing knowledge (with the students) and to recreate the modalities of approaching some essential topics for a better understanding of the socio-technical metamorphosis of our society.

Going beyond assumed landmarks

In order to understand hidden social realities, we might question our own landmarks. For me, “landmarks” could be both consciously (culturally) confirmed ideas and visual/physical evidence of the existence of boundaries or limits in our (representation of) reality. Hanna’s image proposes an insight into the importance of going beyond some established landmarks. This idea, as a result of the artistic experience, highlights questions such as: where did the devices we use every day come from, and whose labour created them? And in what other forms are these conditions extended through time and space, and for whom? You might have some answers, references, examples, or even names coming to mind right now.

In Weaving wires 1, and in Hanna’s artistic contribution, several essential points are raised. Some of them are often missing in discourses and practices of emerging technologies like AI systems: the recognition of the human labor that supports the material realities of technological tools, the intersection of race and gender, the roots of digital culture and industry, and the need to explore new visual narratives that reflect technology’s real conditions of production.

Fix, reconnect and reimagine

Hanna uses digital collage (but also techniques such as juxtaposition, overlaying and/or distortion – she explains her approach with examples in her artist log). She explores ways to honor the stories she conjures up by rejecting colonial discourses. For me, in the case of Weaving wires 1, these wires connect to our personal experiences with technological devices and memories of the digital transformation of our society. They could also represent the need to imagine and construct together, as citizens, more inclusive (technological) futures.

A digital landscape is somewhere there, or right in front of us. Weaving wires 1 will be extended by Hanna in Weaving wires 2 to question the meaning of the valley landscape itself and its borders. For now, some other transversal questions appear (still inspired by her first image) about deterministic approaches to studying data-driven technology and its intersection with society: what fragments or temporalities of our past are we willing and able to deconstruct? Which ones filter the digital space and ask for other ways of understanding? How can we reconnect with the basic needs of our world if different forms of violence (physical and symbolic), in this case in human labor, are not only hidden, but avoided, neglected or unrepresented in the socio-digital imaginary?

It is such a necessary discussion to face our collective memory and the concrete experiences in between. Weaving wires 1 invites us to confront the oppressive cultural conditions of conception, creation and mediation of the technology industry’s approach to innovation. With this image, Hanna brings us a meaningful contribution. She deconstructs simplistic assumptions and visual perspectives to actually create ‘better images of AI’!


About the author

Laura Martinez Agudelo is a Temporary Teaching and Research Assistant (ATER) at the University Marie & Louis Pasteur – ELLIADD Laboratory. She holds a PhD in Information and Communication Sciences. Her research interests include socio-technical devices and (digital) mediations in the city, visual methods and modes of transgression and memory in (urban) art.   

This post was also kindly edited by Tristan Ferne – lead producer/researcher at BBC Research & Development.


If you want to contribute to our new blog series, ‘Through My Eyes’, by selecting an image from the Archival Images of AI collection and exploring what the image means to you, get in touch (info@betterimagesofai.org)

Explore other posts in the ‘Through My Eyes’ Series

Hanna Barakat’s image collection & the paradoxes of depicting diversity in AI history

A black-and-white image depicting the early computer, Bombe Machine, during World War II. In the foreground, the shadow of a woman in vintage clothing is cast on a man changing the machine's cable.

As part of a collaboration between Better Images of AI and Cambridge University’s Diversity Fund, Hanna Barakat was commissioned to create a digital collage series to depict diverse images about the learning and education of AI at Cambridge. Hanna’s series of images complement our competition that we opened up to the public at the end of last year which invited submissions for better images of AI from the wider community –  you can see the winning entries here.

In the blog post below, Hanna Barakat talks about her artistic process and reflections upon contributing to this collection. Hanna provides her thoughts on the challenges of creating images that communicate about AI histories and the inherent contradictions that arise when engaging in this work.

The purpose behind the collection

As outlined by the Better Images of AI project, normative depictions of AI continue to perpetuate negative gender and racial stereotypes about the creators, users, and beneficiaries of AI. Moreover, they misdirect attention from the harms implicit in the real-life applications of the technology. The lack of diversity—and the problematic interpretation of diversity—in AI-generated images is not merely an ‘output’ issue that can be easily fixed. Instead, it stems from deep-rooted systemic issues that reflect a long history of bias in data science.

As a result, even so-called ‘diverse’ images created by AI often end up reinforcing these harms [Fig.1]. The image below has adopted token diversity tropes like a wheelchair, different skin tones and a mix of genders – superficially appearing diverse without addressing deeper issues like context, intersectionality, and the inclusion of underrepresented groups in leadership roles. The teacher remains an older, able-bodied white male, and the students all appear to be conventionally attractive, similarly sized individuals wearing almost-matching clothing. The image also shows a fictional blue holographic image of a robot in the centre – misrepresenting what generative AI is and exaggerating the capabilities of the technology.

Figure 1. Image depicting an educational course on Generative AI.

As academic institutions like the Leverhulme Centre for the Future of Intelligence are exploring “vital questions about the risks and opportunities emerging with AI,” they commissioned images that reflect a more nuanced depiction of the risks and opportunities. Specifically, they requested seven images that represent the diversity in Cambridge’s teaching about AI, with the intention to use these images for courses, websites, and events programs.

Hanna’s artistic process

My process takes a holistic approach to “diversity” – aiming to avoid the “DEI-washing” images that reduce diversity to a gradient of brown bodies or tokenization of marginalized groups in the name of “inclusion” but often fail to acknowledge the positionality of the institutions utilizing such images.

Instead, my approach interrogates the development of AI technology, its history of computing in the UK, and the positionality of elite institutions such as Cambridge University to create thoughtful images about the education of AI at Cambridge.

Analog Lecture on Computing by Hanna Barakat & Cambridge Diversity Fund and Pas(t)imes in the Computer Lab by Hanna Barakat & Cambridge Diversity Fund

Through digital collages of open-source archival images, this series offers a critical visual depiction of education about AI. Collage is a way of moving against the archival grain – reinserting, for example, the overlooked women who ran cryptanalysis of the Enigma machine at Bletchley Park, or offering surrealist depictions of a historically contextualized lecture about AI. By combining mixed-media layers, my artistic process seeks to weave together historical narratives and investigate the voices systemically overlooked and/or left out.

I carefully navigated the archive and relied on visual motifs of hands, strings, shadows, and data points. Throughout the series, these elements engage with the histories of UK computing as a starting point to expose the broader sociotechnical nature of AI. The use of anonymous hands becomes a way of encouraging reflection upon the human labor that underpins all machines. The use of shadows symbolizes the unacknowledged labor of marginalized communities throughout the Global Majority.

Turning Threads of Cognition by Hanna Barakat & Cambridge Diversity Fund

It is these communities upon which technological “progress” has relied, and at whose expense “progress” has been achieved. I use an abstract interpretation of data points to symbolize the exchange of information and learning on university campuses. I was inspired by Ada Lovelace; the Cavendish Laboratory archive (physics laboratories), which holds photos from the early history of computing; the stories of the Cambridge Language Research Unit (CLRU) run by Margaret Masterman; and Jean Valentine and the many other Cambridge-educated women at Bletchley Park who made Alan Turing’s achievements possible.

Lovelace GPU by Hanna Barakat & Cambridge Diversity Fund

The challenges of creating images relating to the diverse history of AI

Nonetheless, I remain cautious about imbuing these images with too much subversive power. Like any nuanced undertaking, this project grapples with tension, including navigating the challenge of representing diverse bodies without tokenizing them; drawing from archival material while recognizing the imperialist incentives that shape their creation; portraying education about AI in ways that are both literal and critically reflective, particularly in contexts where racial and ethnic diversity (in the histories of UK) are not necessarily commonplace; and balancing a respect for the critical efforts of the CFI with an awareness of its positionality as an elite institution. On a practical level, I encountered challenges in accessing the limited number of images available, as many were not fully licensed for open access.

I list these tensions not as a means of demonstrating hypocrisy but, quite the opposite, to illuminate the complexities and inherent contradictions that arise when engaging in this work. By highlighting these points of friction, I am able to acknowledge the layered positionality that shapes both the process and the outcomes, emphasizing that such tensions are not obstacles to be avoided but rather essential facets of critically engaged practice.

If you want to read more about the processes behind Hanna’s work, view her Artist Log on the AIxDESIGN site. You can also learn how to make your own archival images of AI by exploring our Playbook, released at the end of 2024 with AIxDESIGN and the Netherlands Institute for Sound and Vision.

Dr Aisha Sobey was behind the project which was commissioned with funding from Cambridge Diversity Fund

This project grew from the desire of CFI and multiple collaborations with Better Images of AI to have better images of AI in relation to the teaching and learning we do at the Centre, and from my research into the ‘lookism’ of generative AI image models. I knew that asking for the combination of criteria to show anonymous, diverse people in images of AI learning would be tricky, but even as the project evolved to take a historical lens to reclaim lost histories, this proved to be a really difficult task for the artists.

The images created by Hanna and the entries to the prize competition showed some brilliant and unique takes on the prompt. Still, they often struggled to bring diverse people and Cambridge together. This points to the barriers to showing difference in an ethical way that doesn’t tokenise or exploit already marginalised groups (a challenge we didn’t solve in these images), and to the need for more diverse people in places like Cambridge to make these stories. However, I am hopeful that the process has been valuable to illuminate different challenges of doing this kind of work, and further that the images offer alternative and exciting perspectives on the representation of diversity in learning and teaching AI at the University.”

Artist Subjectivity Statement

In creating these images which seek to depict diversity, it is imperative to address the “experience of the knower.” Thus, consistent with a critical feminist framework, I feel it is important to share my identity and positionality as it undoubtedly shapes my artistic practice and influences my approach to digital technologies.

My name is Hanna Barakat. I am a 25-year-old science & technology studies researcher and collage artist.  I am a female-identifying Palestinian-American. While I was raised in Los Angeles, California, I am from Anabta, Palestine. Growing up in the Palestinian diaspora, my experience is informed by layers of systemic violence that traverse the digital-physical “divide.” I received my education from Brown University, a reputable university in the United States.

Brown University’s founders and benefactors participated in and benefited from the transatlantic slave trade. Brown University is built on the stolen lands of the Narragansett, Wôpanâak, and Pokanoket communities. In this light, I materially benefit from, and to some degree am harmed by, my location within systems of settler colonialism, whiteness, racial capitalism, Islamophobia, heteropatriarchy, and education inequality. My identity, lived experiences, and fraught relationship with technology inform my approach to artist practice–which uses visual language as a tool to (1) critically challenge normative narratives about technology development and (2) imagine cultural contextualized and localized digital futures. 

Winners of public competition with Cambridge Diversity Fund announced

An image with the text ‘Winners Announced!’ at the top in maroon. Below it in slightly lighter purple text it states: ‘Reihaneh Golpayegani for Women and AI’ and ‘Janet Turra for Ground Up and Spat Out’. Their two images are positioned on the image at a slant, each in opposite directions. At the bottom, there is a maroon banner with the text ‘University Diversity Fund’ in white, the CFI logo in white, and the Better Images of AI logo.

At the end of 2024, we launched a public competition with Cambridge Diversity Fund calling for images that reclaimed and recentred the history of diversity in AI education at the University of Cambridge.

We were so grateful to receive such a diverse range of submissions that provided rich interpretations of the brief and focused on really interesting elements of AI history.

Dr Aisha Sobey set and judged the challenge, which was enabled by funding from Cambridge Diversity Fund. Entries were judged on meeting the brief, the forms of representation reflected in the image, appropriateness, relevance, uniqueness, and visual appeal.

We are delighted to announce the winners and their winning entries:

First Place Prize

Awarded to Reihaneh Golpayegani for ‘Women and AI’

The left side incorporates a digital interface, showing code snippets, search queries, and comments referencing Woolf’s ideas, including discussions about Shakespeare’s fictional sister, Judith. The overlay of coding elements highlights modern interpretations of Woolf’s work through the lens of data and AI.

The center depicts a dimly lit, minimalist room with a window, desk, and wooden floors and cupboards. The right side features a collage of Cambridge landmarks, historical photographs of women, and a black and white figure in Edwardian attire. There is a map of Cambridge in the background, which is overlaid with images of old fountain pens and ink, books, and a handwritten letter.

This image is inspired by Virginia Woolf’s A Room of One’s Own. According to this essay, which is based on her lectures at Newnham College and Girton College, Cambridge University, two things are essential for a woman to write fiction: money and a room of her own. This image adds a new layer to this concept by bringing it into the AI era.

Just as Woolf explored the meaning of “women and fiction”, defining “women and AI” is quite complex. It could refer to algorithms’ responses to inquiries involving women, the influence of trending comments on machine stereotypes, or the share of women in big tech. The list can go on and involve many different experiences of women with AI as developers, users, investors, and beyond. With all its complexity, Woolf’s ideas offer us insight: Allocating financial resources and providing safe spaces-in reality and online- is necessary for women to have positive interactions with AI and to be well-represented in this field.

Download ‘Women and AI’ from the Better Images of AI library here

About the artist:

Reihaneh Golpayegani is a law graduate and digital art enthusiast. Reihaneh is interested in exploring the intersection of law, art, and technology by creating expressive artworks and pursuing her master’s studies in this area.

Commendation Prize

Awarded to Janet Turra for ‘Ground Up and Spat Out’

The outputs of Large Language Models do often seem uncanny, leading people to compare the abilities of these systems to thinking, dreaming or hallucinating. This image is intended as a tongue-in-cheek dig, suggesting that AI is, at its core, just a simple information ‘meat grinder’, feeding off the words, ideas and images on the internet, chopping them up and spitting them back out. The collage also makes the point that when we train these models on our biased, inequitable world, the responses we get cannot possibly differ from the biased and inequitable world that made them.

Download ‘Ground up and Spat Out’ from the Better Images of AI library here.

About the artist:

Janet Turra is a photographer, ceramicist and mixed media artist based in East Cork, Ireland. Her fine arts career spans over 25 years, taking many turns in rhythm with the changing phases of her life. Continually challenging the concept of perception, her art has taken on many themes including self, identity, motherhood and, more recently, our perception of AI and how it relates to the female body.

Background to the competition

Cambridge and LCFI researchers have played key roles in identifying how current stock images of AI can perpetuate negative gender and racial stereotypes about the creators, users, and beneficiaries of AI.

The winning entries will be used for outward-facing posting on social media, University of Cambridge websites, internal communications on student sites and Virtual Learning Environments. They will also be made available for wider Cambridge programs to use for their teaching and events materials. They are also both available in the Better Images of AI library here and here for anyone to freely download and use under a Creative Commons License.

“This project grew from the desire of CFI and multiple collaborations with Better Images of AI to have better images of AI in relation to the teaching and learning we do at the Centre, and from my research into the ‘lookism’ of generative AI image models. I am hopeful that the process has been valuable to illuminate different challenges of doing this kind of work and further that the images offer alternative and exciting perspectives to the representation of diversity in learning and teaching AI at the University.” – Aisha Sobey, University of Cambridge (Postdoctoral Researcher)

An additional collection of images from Hanna

As part of this project, collage artist and scholar, Hanna Barakat, was commissioned to design a collection of images which draw upon her work researching AI narratives and marginalised communities to uncover and reclaim diverse histories. You can find the collection in the Better Images of AI library and we’ll also be releasing an additional blog post which focuses on Hanna’s collection as well as the challenges/reflections on this competition brief.

Beneath the Surface: Adrien’s Artistic Perspective on Generative AI

The image features the title "Beneath the Surface: Adrien's Artistic Perspective on Generative AI." The background consists of colourful, pixelated static, creating a visual texture reminiscent of digital noise. In the centre of the image, there's a teal rectangular overlay containing the title in bold, white text.

May 28, 2024 – A conversation with Adrien Limousin, a photographer and visual artist, sheds light on the nuanced intersections between AI, art, and ethics. Adrien’s work delves into the opaque processes of AI, striving to demystify the unseen mechanisms and biases that shape our representations.


A vibrant, abstract image from converting Street View screenshots from TIFF to JPEG, showing a pixelated, distorted classical building with columns. The sky features glitch-like, multicolored waves, blending greens, purples, pinks, and blues.

ADRIEN LIMOUSIN – Alterations (2023)

Adrien previously studied advertising and is now studying photography at the National Superior School of Photography (ENSP) in Arles. He is particularly drawn to the language of visual art, especially that of new technologies.

A cluster of coloured pixels made up from random gaussian noise taking up the whole canvas, representing a not denoised AI generated image; digital pointillism

Fig 1. Adrien Limousin / Better Images of AI / Non-image / CC-BY 4.0

Non-image

Adrien was drawn to the ‘Better Images of AI’ project after recognising the need for more nuanced and accurate representations of AI, particularly in journalism. In our conversation, I asked Adrien about his approach to creating the image he submitted to Better Images of AI (Fig 1.).


> INTERVIEWER: Can you tell me about your thinking and process behind the image you submitted?

> ADRIEN: I thought about how AI-generated images are created. The process involves taking an image from a dataset, which is progressively reduced to random noise. This noise is then “denoised” to generate a new image based on a given prompt. I wanted to try to find a breach or the other side of the opaqueness of these models. We only ever see the final result—the finished image—and the initial image. The intermediate steps, where the image is transitioning from data to noise and back, are hidden from us.

> ADRIEN: My goal with “Non-image” was to explore and reveal this hidden in-between state. I wanted to uncover what lies between the initial and final stages, which is typically obscured. I found that extracting the true noisy image from the process is quite challenging. Therefore, I created a square of random noise to visually represent this intermediate stage. It’s no longer an image and it’s also not an image yet.


Adrien’s square of random noise captures this “in-between” state, where the image is both “everything and nothing”—representing aspects of AI’s inner workings. This visual metaphor underscores the importance of making these hidden processes visible, to demystify and foster a more accurate understanding of what AI is, how it operates, and its real capabilities. Seeing the process Adrien discusses here also reflects the complex and collective human data that underpins AI systems. The image doesn’t originate from a single source but is a collage of countless lives and data points, both digital and physical, emphasising the multifaceted nature of AI and its deep entanglement with human experience.
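The forward process Adrien describes, where an image is progressively reduced to random noise, can be sketched in a few lines. This is a minimal illustration with an invented toy noise schedule, not the schedule of any particular diffusion model:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))   # stand-in for a training image, values in [0, 1]

def noise_step(x, beta, rng):
    """One forward step: blend the current image with fresh Gaussian noise."""
    return np.sqrt(1 - beta) * x + np.sqrt(beta) * rng.normal(size=x.shape)

x = image
for _ in range(1000):   # after many steps, x is statistically pure noise
    x = noise_step(x, 0.02, rng)

# Almost no trace of the original image survives in x -- the "non-image" state
print(np.corrcoef(image.ravel(), x.ravel())[0, 1])
```

Generative models are trained to run this process in reverse, which is why the intermediate noisy states Adrien wanted to expose are normally hidden from users.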

A laptopogram based on a neutral background and populated by scattered squared portraits, all monochromatic, grouped according to similarity. The groupings vary in size, ranging from single faces to overlapping collections of up to twelve. The facial expressions of all the individuals featured are neutral, represented through a mixture of ages and genders.

Philipp Schmitt & AT&T Laboratories Cambridge / Better Images of AI / Data flock (faces) / CC-BY 4.0

“The medium is the message”

(McLuhan, Marshall, 1964).

When I asked Adrien about the artists who have inspired him, he highlighted how Marshall McLuhan’s seminal concept, “the medium is the message,” profoundly resonated with him.

This concept is crucial for understanding how AI is represented in the media. McLuhan argued that the medium itself—whether it’s a book, television, or image—shapes our perceptions and influences society more than the actual content it delivers. McLuhan’s work, particularly in Understanding Media (1964), explores how technology reshapes human interaction and societal structures. He warned that media technologies, especially in the electronic age, fundamentally alter our perceptions and social patterns. When applied to AI, this means that the way AI is visually represented can either clarify or obscure its true nature. Misleading images don’t just distort public understanding; they also shape how society engages with and responds to AI, emphasising the importance of choosing visuals that accurately reflect the technology’s reality and impact.

 “Stereotypes inside the machine”

(Adrien).

Adrien’s work explores the complex issue of stereotypes embedded within AI datasets, emphasising how AI often perpetuates and even amplifies these biases through discriminatory images, texts, and videos.


> ADRIEN: Speaking of stereotypes inside the machine, I tried to question that in one of the projects I started two years ago, and I discovered that it’s a bit more complicated than it first seems. AI is making discriminatory images or text or videos, yes. But once you see that, you start to question the nature of the images in the dataset, and then suddenly the responsibility shifts: now you start to question why these images were chosen, or why they were labelled that way in the dataset in the first place?

> ADRIEN:  Because it’s a new medium we have the opportunity to do things the right way. We aren’t doomed to repeat the same mistakes over and over. But instead we have created something even more – or at least equally discriminatory.

> ADRIEN: And even though there are adjustments made (through Reinforcement Learning from Human Feedback) they are just kind of… small patches. The issue needs to be tackled at the core.

Image shows a white male in a suit facing away from the camera on a grey background. Text on the left side of the image reads “intelligent person.”

Adrien Limousin –  Human·s 2 (2022 – Ongoing)

As Adrien points out, minor adjustments or “sticking plasters” won’t suffice when addressing biases deeply rooted in our cultural and historical contexts. As an example, Google recently attempted to reduce racial bias in its Gemini image algorithms. This effort aimed to address long-standing issues of racial bias in AI-generated images, where people of certain racial backgrounds were either misrepresented or underrepresented. However, despite these well-intentioned efforts, the changes inadvertently introduced new biases. For instance, while trying to balance representation, the algorithms began overemphasising certain demographics in contexts where they were historically underrepresented, leading to skewed and culturally inappropriate portrayals. This outcome highlights the complexity of addressing bias in AI. It’s not enough to simply optimise in the opposite direction or apply blanket fixes; such approaches can create new problems while attempting to solve old ones. What this example underscores is the necessity for AI systems to be developed and situated within culture, history, and place.


> INTERVIEWER: Are these ethical considerations on your mind when you are using AI in your work?

> ADRIEN: Using Generative AI makes me feel complicit in these issues. So I think the way I approach it is more like trying to point out these lacks, through its results or by unravelling its inner workings.

“It’s the artist’s role to question”

(Adrien)


> INTERVIEWER: Do you feel like artists have an important role in creating the new and more accurate representations  of AI?

> ADRIEN:  I think that’s one of the role of the artist. To question.

> INTERVIEWER: Can you imagine what kind of representations we might see, or might want to have, in the future – instead of the blue heads and robots you get when you Google ‘AI’?

> ADRIEN: That’s a really good question and I don’t think I have the answer, but as I thought about that, understanding the inner workings of these systems can help us make better representations. For instance, the concepts and ideas of remixing existing representations—something that we are familiar with, that’s one solution I guess to better represent Generative AI.


Image displays an error message from the Windows 95 operating system. The text reads ‘The belief in photographic images.exe has stopped working’.

ADRIEN LIMOUSIN System errors – (2024 – ongoing)

We discussed the challenges involved in encouraging the media to use images that accurately reflect AI.


> ADRIEN: I guess if they used stereotyped images it’s because most people have associated AI with some kind of materialised humanoid as the embodiment of AI and that’s obviously misleading, but it also takes time and effort to change mindsets, especially with such an abstract and complex technology, and that is I think one of the role of the media to do a better job at conveying an accurate vision of AI, while keeping a critical approach.


Another major factor is knowledge: journalists and reporters need to recognise the biases and inaccuracies in current AI representations to make informed choices. This awareness comes from education and resources like the Better Images of AI project, which aim to make this information more accessible to a wider audience. Additionally, there’s a need to develop new visual associations for AI: media rely on attention-grabbing images that are immediately recognisable, so we need new visual metaphors and associations that more accurately represent AI.

One Reality


> INTERVIEWER: So kind of a big question, but what do you feel is the most pressing ethical issue right now in relation to AI that you’ve been thinking about?

> ADRIEN: Besides the obvious discriminatory part of the dataset and outputs, I think one of the overlooked issues is the interface of these models. If we take ChatGPT for instance, the way there is a search bar and you put text in it expecting an answer, just like a web browser’s search bar is very misleading. It feels familiar, but it absolutely does not work in the same way. To take any output as an answer or as truth, while it is just giving the most probable next words is deceiving and I think that’s something we need to talk a bit more about.
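Adrien’s point, that a chat interface returns the most probable continuation rather than a verified answer, can be made concrete with a toy bigram model. This is a deliberately crude stand-in for a real language model; the corpus and function name are invented for illustration:

```python
from collections import Counter, defaultdict

corpus = "the sky is blue . the sea is blue . the sun is bright .".split()

# Count which word follows which: a tiny bigram "language model"
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def most_probable_next(word):
    """Return the likeliest continuation -- plausible, never fact-checked."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("is"))  # 'blue': 2 of its 3 continuations, true or not
```

The model picks whatever followed most often in its training data; nothing in the mechanism checks whether the continuation is true, which is exactly the deception Adrien describes in the familiar search-bar interface.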


One major problem with AI is its tendency to offer simplified answers to multifaceted questions, which can obscure complex perspectives and realities. This becomes especially relevant as AI systems are increasingly used in information retrieval and decision-making. For example, Google’s AI summarising search feature has been criticised for frequently presenting incorrect information. Additionally, AI’s tendency to reinforce existing biases and create filter bubbles poses a significant risk. Algorithms often prioritise content that aligns with users’ pre-existing views, exacerbating polarisation (Pariser, 2011). This is compounded when AI systems limit exposure to a variety of perspectives, potentially widening societal divides.

Metasynthography

(Adrien)

Adrien takes inspiration from the idea of metaphotography, which involves using photography to reflect on and critique the medium itself. In metaphotography, artists use the photographic process to comment on and challenge the conventions and practices of photography.

Building on this concept, Adrien has coined the term “meta-synthography” to describe his approach to digital art.


> ADRIEN: The term meta-synthography is one of the terms I have chosen to describe Digital arts in general. So it’s not properly established, that’s just me doing my collaging.

> INTERVIEWER: That’s great. You’re gonna coin a new word in this blog 😉


I asked Adrien what artists inspire him. He discusses the influence of Robert Ryman, a renowned painter celebrated for his minimalist approach that focuses on the process of painting itself. Ryman’s work often features layers of paint on canvas, emphasising the act of painting and making the medium and its processes central themes in his art.


> ADRIEN: I recently visited an exhibition of Robert Ryman, which kind of does the same with painting – he paints about painting on painting, with painting.

> INTERVIEWER:  Love that.

> ADRIEN: I thought that’s very interesting and I very much enjoy this kind of work, it talks about the medium…It’s  a bit conceptual, but it raises question about the medium… about the way we use it, about the way we consume it.

Image displays a large advertising board displaying a blank white image, the background is a grey clear sky

Adrien Limousin – Lorem Ipsum (2024 – ongoing)

As we navigate the evolving landscape of AI, the intersection of art and technology provides a crucial perspective on the impact and implications of these systems. By championing accurate representations and confronting inherent biases, Adrien’s work highlights the essential role  artists play in shaping a more nuanced and informed dialogue about AI. It’s not only important to highlight AI’s inner workings but also to recognise that imagery has the power to shape reality and our understanding of these technologies. Everyone has a role in creating AI that works for society, countering the hype and capitalist-driven narratives advanced by tech companies. Representations from communities, along with the voices of individuals and artists, are vital for sharing knowledge, making AI more accessible, and bringing attention to the experiences and perspectives often rendered invisible by AI systems and media narratives.


Adrien Limousin (interviewee) is a 25-year-old French (post)photographer exploring the other side of images, currently studying at the National Superior School of Photography in Arles.

Cherry Benson (interviewer) is a Student Steward for Better Images of AI. She holds a degree in psychology from London Metropolitan University and is currently pursuing a Master’s in AI Ethics and Society at the University of Cambridge where her research centers on social AI. Her work on the intersection of AI and border control has been featured as a critical case study in the Cambridge Journal of Artificial Intelligence for how racial capitalism is deeply intertwined with the development and deployment of AI.

💬 Behind the Image with Yutong from Kingston School of Art

This year, we collaborated with Kingston School of Art to give MA students the task of creating their own better images of AI as part of their final project. 

In this mini-series of blog posts called ‘Behind the Images’, our Stewards are speaking to some of the students that participated in the module to understand the meaning of their images, as well as the motivations and challenges that they faced when creating their own better images of AI. Based on our assessment criteria, some of the images will also be uploaded to our library for anyone to use under a Creative Commons licence.

In our third and final post, we go ‘Behind the Image’ with Yutong about her pieces, ‘Exploring AI’ and ‘Talking to AI’. Yutong intends that her art will challenge misconceptions about how humans interact with AI.

You can freely access and download ‘Talking to AI’ and both versions of ‘Exploring AI’ from our image library.

Both of Yutong’s images are available in our library, but as you might discover below, there were many challenges that she faced when developing these works. We greatly appreciate Yutong letting us publish her images and talking to us for this interview. We are hopeful that her work and our conversations will serve as further inspiration for other artists and academics who are exploring representations of AI.

Can you tell us a bit about your background and what drew you to the Kingston School of Art?

Yutong is from China and, before starting the MA in Illustration at Kingston University, she completed an undergraduate major in Business Administration. What drew Yutong to Kingston School of Art was the highly regarded reputation of its illustration course. She also enjoys how the course balances the commercial and academic aspects of art, allowing her to combine her previous studies with her creative passions.

Could you talk me through the different parts of your images and the meaning behind them?

In both of her images, Yutong wishes to unpack the interactions between humans and AI – albeit from two different perspectives.

‘Talking to AI’

Firstly, ‘Talking to AI’ focuses on more accurately representing how AI works. Yutong uses a mirror to reflect how our current interactions with AI are based on our own prompts and commands. At present, AI cannot generate content independently, so it reflects the thoughts and opinions that humans feed into these systems. The binary code behind the mirror symbolises how human prompts and data are translated into the computer language which powers AI. Yutong has used a mirror to capture an element of human–AI interaction that is often overlooked: the blurred transition from human work to AI generation.
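The translation Yutong’s binary code alludes to, human text becoming bits, can be shown directly. This is an illustrative encoding (UTF-8 bytes rendered as bits), not how any particular AI system stores prompts internally:

```python
def to_binary(text: str) -> str:
    """Encode text as UTF-8 bytes, then render each byte as 8 bits."""
    return " ".join(format(byte, "08b") for byte in text.encode("utf-8"))

print(to_binary("Hi"))  # 01001000 01101001
```

Every prompt a user types passes through some such encoding before a machine can operate on it, which is the layer Yutong places behind the mirror.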

‘Exploring AI’

Yutong’s second image, ‘Exploring AI’, aims to shed light on the nuanced interactions that humans have with AI on multiple levels. Firstly, the text ‘Hi, I am AI’ pays homage to an iconic phrase in programming, ‘Hello World’, which is often the first thing any coder learns to write and forms the foundation of their understanding of a programming language’s syntax, structure, and execution process. Yutong thought this was fitting for her image, as she wanted to represent the rich history and applications of AI, which has its roots in basic code.

Within ‘Exploring AI’, each grid square is used to represent the various applications of AI in different industries. The expanded text across multiple grid squares demonstrates how one AI tool can have uses across different industries – ChatGPT is a prime example of this.

However, Yutong also wants to draw attention to the figures within each square, which all interact with AI in complex and different ways. For example, the body language of the figures depicts them as variously frustrated, curious, playful, sceptical, affectionate, indifferent, or excited towards the text, ‘Hi, I am AI’.

Yutong wants to show how our human response to AI changes contextually and is driven by our own personal conceptions of AI. From her own observations, Yutong identified that most people have either a very positive or a very negative opinion of AI – not many feel anything in between. By including all the different emotional responses towards AI in this image, Yutong hopes to introduce greater nuance into people’s perceptions of AI and help people understand that AI can evoke different responses in different contexts.

What was your inspiration/motivation for creating your images?

As an illustrator, Yutong found herself surrounded by artists who feared that AI would replace their role in society. Yutong found that people are often fearful of the unknown and of things they cannot control. By improving understanding of what AI is and how it works through her art, Yutong hopes she can help her fellow creators face their fears and better understand their creative role in the face of AI.

Through her art, ‘Exploring AI’ and ‘Talking to AI’, Yutong intends to challenge misconceptions about what AI is and how it works. As an AI user herself, she has realised that human illustrators cannot be replaced by AI – these systems are reliant on the works of humans and do not yet have the creative capabilities to replace artists. Yutong is hopeful that by being better educated on how AI integrates into society and how it works, artists can interact with AI to enhance their own creativity and works if they choose to do so.

Was there a specific reason you focused on dispelling misconceptions about what AI looks like and how ChatGPT (or other large language models) work?

Yutong wanted to focus on how AI and humans interact in the creative industry and she was driven by her own misconceptions and personal interactions with AI tools. Yutong does not intend for her images to be critical of AI. Instead, she envisages that her images can help educate other artists and prompt them to explore how AI can be useful in their own works. 

Can you describe the process for creating this work?

From the outset, Yutong began to sketch her own perceptions and understandings about how AI and humans interact. The sketch below shows her initial inspiration. The point at which each shape overlaps represents how humans and AI can come together and create a new shape – this symbolises how our interactions with technology can unlock new ideas, feelings and also, challenges.

In this initial sketch, she chose to use different shapes to represent the universality of AI and how its diverse applications mean that AI doesn’t look like one thing – AI can underlie an automated email response, a weather forecast, or a medical diagnosis.

Yutong’s initial sketch for ‘Talking to AI’

The project aims to counteract common stereotypes and misconceptions about AI. How did you incorporate this goal into your artwork? 

In ‘Exploring AI’, Yutong wanted to introduce a more nuanced approach to AI representation by unifying different perspectives about how people feel, experience and apply AI in one image. From discussions with people utilising AI in different industries, she recognised that those who were very optimistic about AI didn’t recognise its shortfalls – and vice versa. Yutong believes that humans have a role in helping AI reach new technological advancements, and that AI can also help humans flourish. In Yutong’s own words, “we can make AI better, and AI can make us better”.

Yutong found talking to people in the industry as well as conducting extensive research about AI very important to ensure that she could more accurately portray AI’s uses and functions. She points to the fact that she used binary code in ‘Talking to AI’ after researching that this is the most fundamental aspect of computer language which underpins many AI systems. 

What have been the biggest challenges in creating a ‘better image of AI’? Did you encounter any challenges in trying to represent AI in a more nuanced and realistic way?

Yutong reflects on the fact that no matter how much she rethought or restarted her ideas, there was always some level of bias in her depiction of AI because of her own subconscious feelings towards the technology. She also found it difficult to capture all the different applications of AI, as well as the various implications and technical features of the technology in a single visual image. 

Through tackling these challenges, Yutong became aware of why Better Images of AI is not called ‘Best Images of AI’ – the latter would be impossible. She hopes that, while she could not produce the ‘best image of AI’, her art can serve as a better image compared to those typically used in the media.

Based on our criteria for selecting images, we were pleased to accept both your images but asked you if it was possible to make amendments to ‘Exploring AI’ to make the figures more inclusive. What do you think of this feedback and was it something that you considered in your process? 

In response to Yutong’s image, ‘Exploring AI’, Better Images of AI asked whether an additional version could be made with the figures in different colours, to better reflect the diverse world that we live in. Being inclusive is very important to Better Images of AI, especially as visuals of AI, and of those who are creating AI, are notoriously unrepresentative.

Yutong agreed that this change would enhance the image, and being inclusive in her art is something she is actively trying to improve. She reflects on this suggestion by saying, ‘just as different AI tools are unique, so are individual humans’.

The two versions of ‘Exploring AI’ available on the Better Images of AI library

How has working on this project influenced your own views about AI and its impact? 

During this project, Yutong has been introduced to new ideas and been able to develop her own opinions about AI based on research from academic journals. She says that informing her opinions using sources from academia was beneficial compared to relying on information provided by news outlets and social media platforms which often contain their own biases and inaccuracies.

From this project, Yutong has been able to learn more about how AI could be incorporated into her future career as a human and AI creator. She has become interested in the Nightshade tool that artists have been using to prevent AI companies from using their art to train AI systems without the owner’s consent. She envisages a future career where she could be working to help artists collaborate with AI companies – supporting the rights of creators and preserving the creativity of their art.

What have you learned through this process that you would like to share with other artists and the public?

By chatting to various people interacting with and using AI in different ways, Yutong has been introduced to richer ideas about the limits and benefits of AI. Yutong challenges others to talk to people who are working with AI, or are impacted by its use, to gain a more comprehensive understanding of the technology. She believes that it’s easy to form a biased opinion about AI by relying on information from a single source, like social media, so we should escape these echo chambers. Yutong believes it is important that people diversify who they surround themselves with to better recognise, challenge, and appreciate AI.

Yutong (she/her) is an illustrator with whimsical ideas, also an animator and graphic designer.

🪄 Behind the Image with Minyue from Kingston School of Art

The image shows a colourful illustration of a story-like scene, with two half star characters performing various tasks. The stars, along with a wizard, are interacting with drawings, magnifying glasses, and magic-like elements. Below that, there is a scene with a fantasy landscape, including a castle and dragon. To the right of the image, text reads: 'Behind the Image with Minyue' and below that, a tagline reads: 'Let AI Become Your Magic Wand' which is the name of Minyue's image submission. The background of the image is light blue.

This year, we collaborated with Kingston School of Art to give MA students the task of creating their own better images of AI as part of their final project.

In this mini-series of blog posts called ‘Behind the Images’, our Stewards are speaking to some of the students that participated in the module to understand the meaning of their images, as well as the motivations and challenges that they faced when creating their own better images of AI.

In our second post, we go ‘Behind the Image’ with Minyue about her piece, ‘Let AI Become Your Magic Wand’. Minyue wants to draw attention to the overlooked human input in AI generated art and challenges those who believe AI will replace artists.

‘Let AI Become Your Magic Wand’ is not available in our library as it did not match all the criteria due to challenges which we explore below. However, we greatly appreciate Minyue letting us publish her images and talking to us. We are hopeful that her work and our conversation will serve as further inspiration for other artists and academics who are exploring representations of AI.

Can you tell us a bit about your background, and what drew you to the Better Images of AI project at Kingston School of Art? 

Minyue is from China and previously studied a foundation course in the UK before starting the Masters in Illustration at Kingston University. Before starting the Masters, Minyue had limited knowledge of AI; she had only seen discussions about it on social media – especially from artists fearful that AI tools were capable of copying their work without consent. At the same time, Minyue also saw many fellow creators developing impressive works using AI generator tools – whether in the ideation phase or to create the final artwork.

Confused about her own perception of AI, Minyue was drawn to the Better Images of AI project to learn more about the relationship between humans and AI in the creative process. 

Could you talk us through the different parts of your image and the meaning behind it? 

Minyue’s Final Image, ‘Let AI Become Your Magic Wand’

Minyue’s piece is focussed on two halves of a star. One half is called the ‘evaluation half star’ which represents AI’s image recognition capabilities (the technical term is the ‘Discriminator’). For Minyue, recognition capabilities refer to AI’s ability to interpret and understand input data. For image generator tools, AI systems are trained on vast amounts of imagery so that they can identify key features and elements of a picture. This could involve recognising objects, styles, colours or other visual aspects. Therefore, in generating an image of a chick (as shown in Minyue’s image), the evaluation half star is focussed on interpreting what distinctive features the training data classifies as a true representation of a chick – like perhaps the yellow colour and the shape of a beak.  

The other half is called the ‘creation half star’ which portrays the image construction capabilities of AI tools (the technical term is the ‘Generator’). The Generator enables AI to create new, coherent images based on the evaluation half star’s understanding of input data. 

Therefore, together, Minyue’s image shows how the half stars make a full star – capable of generating AI art based on user prompts and trained on vast image datasets. In the bottom part of Minyue’s image, in the computer tab, she indicates that the full star (consisting of the creation and evaluation half stars) makes up a magic wand when combined with a pencil. The pencil symbolises the human labour behind the training of both the evaluation and creation half stars.
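The two half stars correspond to the generator and discriminator of a GAN. The adversarial loop between them can be sketched on toy one-dimensional data; everything here (the target distribution, the single-parameter generator, the logistic discriminator) is invented purely for illustration, and real image models are vastly larger:

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(4.0, 1.0, 1000)   # "training data": samples clustered near 4

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

mu = 0.0          # generator parameter: G(z) = mu + z
w, b = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + b), a "realness" score
lr = 0.05

for _ in range(2000):
    z = rng.normal(size=1000)
    fake = mu + z                    # the "creation half star" generates samples

    # The "evaluation half star" learns to score real samples high, fakes low
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # The generator shifts mu so that its fakes score as "real"
    d_fake = sigmoid(w * fake + b)
    mu += lr * np.mean((1 - d_fake) * w)

# After training, generated samples cluster near the mean of the training data
print(mu)
```

Neither half does anything alone: the discriminator only learns from human-made data, and the generator only improves by being judged, which is the interdependence Minyue’s two half stars depict.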

Without being guided by humans, Minyue believes that these two half stars would not exist. It is humans that have created the input data, it is humans that prompt AI tools to create certain images, and it is humans that train the AI systems to be able to create these images in different ways. Therefore, her piece highlights the crucial human element of AI art which is often overlooked. 

Lastly, Minyue also hopes to emphasise that the combination of these AI tools with humans offers a new avenue for realising human creativity. That is why she has chosen to use a wizard and magic wand to depict how AI and humans, when working together, can be magical. 

Better Images of AI aims to counteract common stereotypes and misconceptions about AI. How did you incorporate this goal into your artwork?

Minyue emphasised that the main misconception that she wanted to focus on is that AI is a tool requiring rational human use, rather than an autonomous creator. When looking at her work, Minyue wanted people to contemplate, “who is controlling the magic?”, and prompt us to think more carefully about the role of humans in AI art. 

What was your motivation/inspiration for creating ‘Let AI Become Your Magic Wand’?

Firstly, as an illustration student, Minyue was particularly interested in the role of AI in the creative industry. The metaphor of the magic wand comes from her observation of artists who skilfully use new technologies to create their work, which made her feel as if she were watching a magical performance.

Secondly, Minyue wanted to raise awareness of the fact that using AI image generators still requires human skill, creativity and imagination. A wizard can only perform magic if they are trained to use the wand. In the same way, AI can assist artists to create, but artists must learn how to use the technology to develop innovative, appealing, and meaningful works of art. 

Minyue’s early sketch shows how she wanted to distinguish between the human (wizard) and AI (in the magic wand)

Finally, she hopes to dispel the idea that AI art will limit creativity or the work of human artists – instead, if creators choose to work with AI, it could also enhance their capabilities and usher in a new genre of art. 

Based on the Better Images of AI criteria for selecting images, we had to make the difficult decision not to upload this image to our library. We made this choice after closer scrutiny of the magic wand metaphor, which could be misread as promoting the idea that AI is magic (a rhetoric commonly pursued by technology companies). 

What do you think of this feedback, and was this idea something that you ever considered in your process? 

Minyue understood the concerns and appreciated the feedback from Better Images of AI, which made her reconsider how her work could be misleading in some respects, and the challenges of relying on metaphors to communicate difficult ideas. Her intention was that the magic wand metaphor would prompt individuals to think more deeply about who is in control of AI art, and about how AI can advance the creative industry if used safely and ethically. However, she is aware that, given the technology industry’s widespread use of magical symbols to represent AI (for example, the logos for Zoom’s AI Smart Assistant and Google’s AI chatbot Gemini), her image could unintentionally be perceived to suggest that AI alone is magical.

Was there a specific reason that you focussed on dispelling misconceptions about the human element of AI art, especially in relation to image generation? 

Minyue strongly believes that the creative power of AI comes from human inspiration and human creativity. She hopes her work will help convey that AI art is rooted in human creativity and labour, something often overlooked in media discourse about AI replacing artists, which leads to misunderstandings.

Much of this view comes from Minyue’s reflections on how past technologies have been integrated into the creative industry. For example, painters were originally fearful of the widespread adoption of photography, since it offered a faster and cheaper means of reproducing and disseminating images. Over time, however, photography developed its own unique styles and languages, with photographers moving away from imitating traditional art to explore distinctly photographic expressions. Minyue believes that AI may similarly evolve into a tool for the production of a new art form. 

Can you describe your process for creating ‘Let AI Become Your Magic Wand’?

Minyue detailed the very long process that led to the final creation. She recalled how having the Better Images of AI Guide was helpful, but she still struggled because her initial understanding of AI was quite limited.

Therefore, Minyue took time to carefully research the more technical aspects of AI image generation so she could more accurately represent how AI image generators work and their relationship with human creators. Below you can see how she researched the technical elements of AI image generation as well as its use in different contexts. 

Minyue’s research about technical aspects of AI image generators and their applications

Minyue’s initial sketches also show how she was interested in portraying the relationship between humans and technology.

One of Minyue’s initial sketches when exploring ideas for the Better Images of AI project

Minyue aims to create more engaging and approachable AI images to help non-experts understand AI technology and reduce public fear of new technologies. This was also one of her reasons for choosing to participate in the Better Images of AI project.

What have been your biggest challenges in creating a better image of AI? Did you encounter any challenges in trying to represent AI in a more nuanced and realistic way?

Minyue faced difficulties when challenging her previous views on AI that were presented to her by the media. Contrary to a lot of the other images in the Better Images of AI library, Minyue also wanted to promote a more optimistic narrative about AI – that AI can be beneficial to humans and enhance our own creative outputs. 

Another challenge Minyue faced was distinguishing between AI and computers or robots. One of her initial sketches shows how, in the early stages of the project, she overlooked the fact that AI has numerous applications beyond its use within computer applications.

Another of Minyue’s sketches, showing the challenges she faced in deciding how to illustrate AI

What have you learned through this process that you would like to share with other artists or the public? 

Minyue says that while artists are often driven by their passions when creating their works, it is important to consider how art might cause misunderstandings if creators are not guided by in-depth research and detailed expression. Minyue’s hope is that other artists will focus on this in order to promote a more realistic and accurate understanding of AI. 


Minyue Hu (she/her) is about to graduate from Kingston University with a Master’s degree in Illustration. In the coming year, she will be staying in the UK to continue her work as an artist and actively create new pieces. Minyue’s inspiration often centres on human experience and emotion, with the aim of combining personal stories with social contexts to prompt viewers to reflect on their own experiences. Her final project, Daughters of the Universe, is set to be released soon, and she hopes you will look out for it. 

Co-creating Better Images of AI

Yasmine Boudiaf (left) and Tamsin Nooney (right) deliver a talk during the workshop ‘Co-creating Better Images of AI’

In July 2023, Science Gallery London and the London Office of Technology and Innovation co-hosted a workshop helping Londoners think about the kind of AI they want. In this post, Dr Peter Rees reflects on the event, describes its methodology, and celebrates some of the new images that resulted from the day.


Who can create better images of Artificial Intelligence (AI)? Misleading tropes dominate the images in our culture: white humanoid robots, glowing blue brains, and various iterations of the extinction of humanity. Better Images of AI is on a mission to increase AI literacy and inclusion by countering such unhelpful images. Everyone should get a say in what AI looks like and how they want to make it work for them. No single perspective or group should dominate how AI is conceptualised and imagined.

This is why we were delighted to be able to run the workshop ‘Co-creating Better Images of AI’ during London Data Week. It was a chance to bring together over 50 members of the public, including creative artists, technologists, and local government representatives, to each make our own images of AI. Most images of AI that appear online and in the newspapers are copied directly from existing stock image libraries. This workshop set out to see what would happen when we created new images from scratch. We experimented with creative drawing techniques and collaborative dialogues to create images. Participants’ amazing imaginations and expertise went into a melting-pot which produced an array of outputs. This blogpost reports on a selection of the visual and conceptual takeaways! I offer this account as a personal recollection of the workshop—I can only hope to capture some of the main themes and moments, and I apologise for all that I have left out. 

The event was held at the Science Gallery in London on 4th July 2023 between 3 and 5pm, and was hosted in partnership with London Data Week, funded by the London Office of Technology and Innovation. In keeping with the focus of London Data Week and LOTI, the workshop set out to think about how AI is used every day in the lives of Londoners, to help Londoners think about the kind of AI they want, and to re-imagine AI so that we can build systems that work for us.

Workshop methodology

I said the workshop started out from scratch—well, almost. We certainly wanted to make use of the resources already out there, such as Better Images of AI: A Guide for Users and Creators, co-authored by Dr Kanta Dihal and Tania Duarte. This guide was helpful because it not only suggested some things to avoid, but also provided stimulation for what kind of images we might like to make instead. What made the workshop a success was the wide-ranging and generous contributions—verbal and visual—from invited artists and technology experts, as well as public participants, who all offered insights and produced images, some of which can be found below (or even in the Science Gallery).

The workshop was structured in two rounds, each with a live discussion and a creative drawing ‘challenge’. The approach was to stage a discussion between an artist and a technology expert (approx. 15 minutes), after which all members of the workshop had some time (again, approx. 15 minutes) for creative drawing. The purpose of the live discussion was to provide an accessible introduction to the topic and its challenges, after which we all tackled the challenge of visualising and representing different elements of AI production, use and impact. I will now briefly describe these dialogues and unveil some of the images created.

Setting the scene

Tania Duarte (Founder, We and AI) launched the workshop with a warm welcome to all. Then, workshop host Dr Robert Elliot-Smith (Director of AI and Data Science at Digital Catapult) introduced the topic of Large Language Models (LLMs) by reminding the audience that such systems are like ‘autocorrect on steroids’: the model is simply very good at predicting words, it does not have any deep understanding of the meaning of the text it produces. He also discussed image-generators, which work in a similar way and with similar problems, which is why certain AI-produced images end up garbling images of hands and arms: they do not understand anatomy.
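Robert’s ‘autocorrect on steroids’ description can be illustrated with a toy next-word predictor built from simple word counts. This is a deliberately crude stand-in for the neural networks real LLMs use, but it shows the same principle: the system predicts likely words without any understanding of what they mean.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then 'predict' the next
# word by picking the most frequent follower. No meaning is involved at any
# point: only statistics over the training text.

corpus = "the cat sat on the mat the cat ate the fish".split()

next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1


def predict(word):
    """Return the statistically most common follower of `word` in the corpus."""
    return next_word[word].most_common(1)[0][0]


print(predict("the"))  # 'cat', because 'cat' follows 'the' most often here
```

Scaled up to billions of parameters and vast corpora, this predict-the-next-word objective produces fluent text, but the model still has no deep understanding of the meaning of what it writes.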

In response to this preliminary introduction, one participant who described herself as a visual artist expressed horror at the power of such image-generating and labelling AI systems to limit and constrain our perception of reality itself. She described how, as artists, we must avoid seeing everything simply in terms of fixed categories, which conservatively restrain the imagination, keeping it within a set of known categorisations and limiting not only our imagination but also our future. For instance, why is the thing we see in front of us necessarily a ‘wall’? Could it not, seen more abstractly, simply be a straight line? 

From her perspective, AI models seem to be frighteningly powerful mechanisms for reinforcing existing categories for what we are seeing, and therefore also of how to see, what things are, even what we are, and what kind of behaviour is expected. Another participant agreed: it is frustrating to get the same picture from 100 different inputs and they all look so similar. Indeed, image generators might seem to be producing novelty, but there is an important sense in which they are reinforcing the past categories of the data on which they were trained.

This discussion raised big questions leading into the first challenge: the limitations of large language models.

Round 1: The Limitations of Large Language Models

A live discussion was staged between Yasmine Boudiaf (recognised as one of ‘100 Brilliant Women in AI Ethics 2022,’ and fellow at the Ada Lovelace Institute) and Tamsin Nooney (AI Research, BBC R&D) about the process of creating LLMs.

Yasmine asked Tamsin about how the BBC, as a public broadcaster, can use LLMs in a reliable manner, and invited everyone in the room to note down any words they found intriguing, as those words might form a stimulus for their creative drawings.

Tamsin described an example LLM use case for the BBC: in producing a podcast, an LLM could summarise the content, add key markers and metadata labels, and help to process the material. She emphasised that rigorous testing is required to gain confidence in the LLM’s reliability for a specific task before it can be used. A risk is that a lot of work might go into developing a model only for it never to be usable at all.

Following Yasmine’s line of questioning, Tamsin described how the BBC deal with the significant costs and environmental impacts of using LLMs. The BBC calculated that training their own LLM, even a very small one, would take up all their servers at full capacity for over a year, so they won’t do that! The alternative is to pay other services, such as Amazon, to use their models, which means balancing costs: there are limits due to scale, cost, and environmental impact.

This was followed by a quieter, but by no means silent, 15 minutes of drawing time in which all participants drew…

Drawing by Marie Jannine Murmann. Abstract cogwheels suggesting that AI tools can be quickly developed to output nonsense but, with adequate human oversight and input, AI tools can be iteratively improved to produce the best outputs they can.

One participant used an AI image generator for their creative drawing, making a picture of a toddler covered in paint to depict the LLM and its unpredictable behaviours. Tamsin suggested that this might be giving the LLM too much credit! Toddlers, like cats and dogs, have a basic and embodied perception of the world and base knowledge, which LLMs do not have.

Drawing by Howard Elston. An LLM is drawn as an ear, interpreting different inputs from various children.

The experience of this discussion and drawing also raised, for another participant, more big questions. She discussed poet David Whyte’s work on the ‘conversational nature of reality’ and reflected on how the self is not just inside us but is created through interaction with others and through language. For instance, she mentioned that when you read or hear the word ‘yes’, you have a physical feeling of ‘yesness’ inside, and similarly for ‘no’. She suggested that our encounters with the machine-made language produced by LLMs are similar. This language shapes our conversations and interactions, so there is a sense in which the ‘transformers’ (the technical term for the LLM machinery) are also helping to transform our senses of self and the boundary between what is reality and what is fantasy. 

Here, we have the image made by artist Yasmine based on her discussion with Tamsin:

Image by Yasmine Boudiaf. Three groups of icons representing people have shapes travelling between them and a page in the middle of the image. The page is a simple rectangle with straight lines representing data. The shapes traveling towards the page are irregular and in squiggly bands.

Yasmine writes:

This image shows an example of Large Language Model in use. Audio data is gathered from a group of people in a meeting. Their speech is automatically transcribed into text data. The text is analysed and relevant segments are selected. The output generated is a short summary text of the meeting. It was inspired by BBC R&D’s process for segmenting podcasts, GPT-4 text summary tools and LOTI’s vision for taking minutes at meetings.

Yasmine Boudiaf
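The data flow in Yasmine’s image can be sketched, very loosely, in code. The functions below are hypothetical stand-ins invented for illustration (a real system would use a speech-to-text model and an LLM); they only show the shape of the pipeline: audio to transcript, transcript to relevant segments, segments to summary.

```python
# Hypothetical sketch of the meeting-summary pipeline Yasmine depicts.
# Each stage is a toy stand-in for a real component.

def transcribe(audio_segments):
    """Stand-in for speech-to-text: here the 'audio' is already text."""
    return [s.strip() for s in audio_segments if s.strip()]


def select_relevant(sentences, keywords):
    """Stand-in for segment analysis: keep sentences mentioning key terms."""
    return [s for s in sentences if any(k in s.lower() for k in keywords)]


def summarise(sentences, max_sentences=2):
    """Stand-in for LLM summarisation: keep only the top segments."""
    return " ".join(sentences[:max_sentences])


meeting_audio = [
    "Minutes of the data ethics meeting.",
    "We decided to publish the planning data next month.",
    "Someone mentioned the weather.",
    "Action: Sam will draft the residents' privacy notice.",
]

transcript = transcribe(meeting_audio)
relevant = select_relevant(transcript, ["decided", "action"])
print(summarise(relevant))
```

The output keeps the decisions and actions while dropping the chatter about the weather, which is the essence of the summarisation use case described above.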

You can now find this image in the Better Images of AI library, and use it with the appropriate attribution: Image by Yasmine Boudiaf / © LOTI / Better Images of AI / Data Processing / CC-BY 4.0. With the first challenge complete, it was time for the second round.

Round 2: Generative AI in Public Services

This second and final round focused on use cases for generative AI in the public sector, specifically by local government. Again, a live discussion was held, this time between Emily Rand (illustrator and author of seven books, recognised by the Children’s Laureate, Lauren Child, to be featured in Drawing Words) and Sam Nutt (Researcher & Data Ethicist, London Office of Technology and Innovation). They built on the previous exploration of LLMs by considering the new generative AI applications they enable for local councils, and how these might transform our everyday services.

Emily described how she illustrates by hand, with her work focusing on the tangible and the real. Making illustrations about AI, whose workings are not obviously visible, was an exciting new topic for her. See her illustration and commentary below. 

Sam described his role as part of the innovation team which works across 26 of London’s boroughs and with the Mayor of London, helping boroughs think about how to use data responsibly. In the context of local government data and services, a lot of the data collected about residents is statutory (meaning they cannot opt out of giving it), such as council tax data. There is a strong imperative when dealing with such data, especially sensitive personal health data, to protect privacy and minimise bias. He considered some use cases. For instance, council officers can use ChatGPT to draft letters to residents to increase efficiency, but they must not put any personal information into ChatGPT, otherwise data privacy can be compromised. Another example is the use of LLMs to summarise large archives of local government data, such as planning permission applications or the minutes of council meetings, which are lengthy and often technical; these could be made significantly more accessible to members of the public and researchers. 

Sam also raised the concern that residents must know how councils use their data so that councils can be held accountable; this has to be explained and made understandable to residents. Note that 3% of Londoners are totally offline, not using the internet at all. That is 270,000 people, who also have an equal right to understand how the council uses their data, and who need to be reached through offline means. This example brings home the importance of increasing inclusive public AI literacy.

Again, we all drew. Here are a couple of striking images made by participants who also kindly donated their pictures and words to the project:

Drawing by Yokako Tanaka. An abstract blob is outlined encrusted with different smaller shapes at different points around it. The image depicts an ideal approach to AI in the public sector, which is inclusive of all positionalities.
Drawing by Aisha Sobey. A computer claims to have “solved the banana” after listing the letters that spell “banana” – whilst a seemingly analytical process has been followed, the computer isn’t providing much insight nor solving any real problem.
Practically identical houses are lined up at the bottom of the image. Out of each house's chimney, columns of binary code – 1's and 0's – emerge.
“Data Houses,” by Joahna Kuiper. Here, the author described how these three common houses are all sending a distress signal—a new kind of smoke signal, but in binary code. And in her words: ‘one of these houses is sending out a distress signal, calling out for help, but I bet you don’t know which one.’ The problem of differentiating who needs what when.
A big eye floats above rectangles containing rows of dots and cryptic shapes.
“Big eye drawing,” by Hui Chen. Another participant described their feeling that ‘we are being watched by big eye, constantly checking on us and it boxes us into categories’. Certain areas are highly detailed and refined, certain other areas, the ‘murky’ or ‘cloudy’ bits, are where the people don’t fit the model so well, and they are more invisible.
Rows of people are randomly overlayed by computer cursors.
An early iteration of Emily Rand’s “AI City.”

Emily started by illustrating the idea of bias in AI. Her initial sketches showed rows of people of various sizes, ages, ethnicities and bodies, with cursors selecting the cis, white, able-bodied people over the others. Emily also sketched the shape of a city, and ended up combining the two. She added frames to show the way different people are clustered; each frame shows the area around a person where they might have a device sending data about them.

Emily’s final illustration is below, and can be downloaded from here and used for free with the correct attribution: Image by Emily Rand / © LOTI / Better Images of AI / AI City / CC-BY 4.0.

Building blocks are overlayed with digital squares that highlight people living their day-to-day lives through windows. Some of the squares are accompanied by cursors.

At the end of the workshop, I was left with feelings of admiration and positivity. Admiration of the stunning array of visual and conceptual responses from participants, and in particular the candid and open manner of their sharing. And positivity because the responses often highlighted the dangers of AI as well as the benefits—its capacity to reinforce systemic bias and aid exploitation—but these critiques did not tend to be delivered in an elegiac or sad tone; rather, they expressed an optimistic desire to understand the technology and make it work in an inclusive way. This seemed a powerful approach.

The results

The Better Images of AI mission is to create a free repository of better images of AI with more realistic, accurate, inclusive and diverse ways to represent AI. Was this workshop a success and how might it inform Better Images of AI work going forward?

Tania Duarte, who coordinates the Better Images of AI collaboration, certainly thought so:

It was great to see such a diverse group of people come together to find new and incredibly insightful and creative ways of explaining and visualising generative AI and its uses in the public sector. The process of questioning and exploring together showed the multitude of lenses and perspectives through which often misunderstood technologies can be considered. It resulted in a wealth of materials which the participants generously left with the project, and we aim to get some of these developed further to refine the metaphors and visual language. We are very grateful for the time participants put in, and the ideas and drawings they donated to the project. The Better Images of AI project, as an unfunded non-profit, is hugely reliant on volunteers and donated art, and it is a shame such work is so undervalued. Often stock image creators get paid $5 – $25 per image by the big image libraries, which is why they don’t have time to spend researching AI and considering these nuances, and instead copy existing stereotypical images.

Tania Duarte

The images created by Emily Rand and Yasmine Boudiaf are being added to the Better Images of AI Free images library on a Creative Commons licence as part of the #NewImageNovember campaign. We hope you will enjoy discovering a new creative interpretation each day of November, and will be able to use and share them as we double the size of the library in one month. 

Sign up for our newsletter to get notified of new images here.

Acknowledgements

A big thank you to organisers, panellists and artists:

  • Jennifer Ding – Senior Researcher for Research Applications at The Alan Turing Institute
  • Yasmine Boudiaf – Fellow at Ada Lovelace Institute, recognised as one of ‘100 Brilliant Women in AI Ethics 2022’
  • Dr Tamsin Nooney – AI Research, BBC R&D
  • Emily Rand – illustrator and author of seven books and recognised by the Children’s Laureate, Lauren Child, to be featured in Drawing Words
  • Sam Nutt – Researcher & Data Ethicist, London Office of Technology and Innovation (LOTI)
  • Dr Tomasz Hollanek – Research Fellow, Leverhulme Centre for the Future of Intelligence
  • Laura Purseglove – Producer and Curator at Science Gallery London
  • Dr Robert Elliot-Smith – Director of AI and Data Science at Digital Catapult
  • Tania Duarte – Founder, We and AI and Better Images of AI

Also many thanks to the We and AI team, who volunteered as facilitators to make this workshop possible: 

  • Medina Bakayeva, UCL master’s student in cyber policy & AI governance, communications background
  • Marissa Ellis, Founder of Diversily.com, Inclusion Strategist & Speaker @diversily
  • Valena Reich, MPhil in Ethics of AI, Gates Cambridge scholar-elect, researcher at We and AI
  • Ismael Kherroubi Garcia FRSA, Founder and CEO of Kairoi, AI Ethics & Research Governance
  • Dr Peter Rees was project manager for the workshop

And a final appreciation for our partners: LOTI, the Science Gallery London, and London Data Week, who made this possible.

Related article from BIoAI blog: ‘What do you think AI looks like?’: https://blog.betterimagesofai.org/what-do-children-think-ai-looks-like/

Open Call for Artists | Apply by 25th September

AI x Design Open Call poster – we now invite artists from the EU and affiliated countries to join the Open Call

We and AI have teamed up with AIxDesign to commission three artists to encourage a better understanding of AI. Thanks to AI4Media’s support, each of the successful artists will be offered a €1,500 stipend for their contributions. The resulting images will be added to the Better Images of AI gallery for free and public use.

The main aim is to create a set of imagery that avoids perpetuating unhelpful myths about artificial intelligence (AI) by inviting artists from different backgrounds to develop better images while tackling questions such as:

  • Is the image representing a particular part of the technology or is it trying to tell a wider story?
  • Does it help people understand the technology and is it an accurate representation?

Each commissioned artist will work independently to create images, meeting two times with the project team to present concepts, ask questions, and receive feedback as we iterate towards the final images.

If you find this challenge exciting, take a look at the open call and apply by 25th September (midnight, CET)!

The wonderful team at AIxDESIGN are also running a series of info sessions throughout September in case you want to know more:

  • 7th September, 6pm CET / 12pm EST / 9am PST
  • 14th September, 11am CET / 6pm Philippines
  • 21st September, 6pm CET / 12pm EST / 9am PST

To join one of the info sessions, follow the “Open call and application” button above and find the RSVP links under “Project timeline”.

Since 2021, We and AI have been curating informative and engaging images through the Better Images of AI project. Better Images of AI challenges common misconceptions about AI, thereby enabling more fruitful discussions. Our continued public engagement initiatives and research have shown that images for responsible and explainable AI are still hard to come by, and we always welcome artists to help solve this problem. The challenges posed in the open call result from research conducted in collaboration with AI4Media and funded by AHRC.

AIxDESIGN are a self-organised community of over 8,000 computationally curious people who work in the open and are dedicated to conducting critical AI design research for people (not profit). We warmly welcome their alliance, and their continued work informing AI with feminist thought and a philosophy of care.

We also applaud AI4Media’s efforts not only to encourage and enable the development and adoption of AI systems across media industries, but also to engage with how the media can better represent AI.

Image by Alan Warburton / © BBC / Better Images of AI / Nature / CC-BY 4.0

Illustrating Data Hazards

A person with their hands on a laptop keyboard is looking at something happening over their screen with a worried expression. They are white, have shoulder length dark hair and wear a green t-shirt. The overall image is illustrated in a warm, sketchy, cartoon style. Floating in front of the person are three small green illustrations representing different industries, which is what they are looking at. On the left is a hospital building, in the middle is a bus, and on the right is a siren with small lines coming off it to indicate that it is flashing or making noise. Between the person and the images representing industries is a small character representing artificial intelligence made of lines and circles in green and red (like nodes and edges on a graph) who is standing with its ‘arms’ and ‘legs’ stretched out, and two antenna sticking up. A similar pattern of nodes and edges is on the laptop screen in front of the person, as though the character has jumped out of their screen. The overall image makes it look as though the person is worried the AI character might approach and interfere with one of the industry icons.

We are delighted to start releasing some useful new images donated by the Data Hazards project into our free image library. The images are stills from an animated video explaining the project, and offer a refreshing take on illustrating AI and data bias. They take an effective and creative approach to making visible the role of the data scientist and the impact of algorithms, and the project behind the images uses visuals in order to improve data science itself. Project leaders Dr Nina Di Cara and Dr Natalie Zelenka share some background on the Data Hazards labels, and the inspiration behind the animation from which the new images are taken.

Data science has the potential to do so much for us. We can use it to identify new diseases, streamline services, and create positive change in the world. However, there have also been many examples of ways that data science has caused harm. Often this harm is not intended, but its weight falls on those who are the most vulnerable and marginalised. 

Often too, these harms are preventable. Testing datasets for bias, talking to communities affected by technology or changing functionality would be enough to stop people from being harmed. However, data scientists in general are not well trained to think about ethical issues, and even though there are other fields that have many experts on data ethics, it is not always easy for these groups to intersect. 

The Data Hazards project was developed by Dr Nina Di Cara and Dr Natalie Zelenka in 2021, and aims to make it easier for people from any discipline to talk together about data science harms, which we call Data Hazards. These Hazards take the form of labels. Like chemical hazards, we want Data Hazards to make people stop and think about risk, not to stop using data science altogether. 

A person is illustrated in a warm, cartoon-like style in green. They are looking up thoughtfully from the bottom left at a large hazard symbol in the middle of the image. The Hazard symbol is a bright orange square tilted 45 degrees, with a black and white illustration of an exclamation mark in the middle where the exclamation mark shape is made up of tiny 1s and 0s like binary code. To the right-hand side of the image a small character made of lines and circles (like nodes and edges on a graph) is standing with its ‘arms’ and ‘legs’ stretched out, and two antennae sticking up. It faces off to the right-hand side of the image.
Yasmin Dwiputri & Data Hazards Project / Better Images of AI / Managing Data Hazards / CC-BY 4.0

By making it easier for us all to talk about risks, we believe we are more likely to see them early and have a chance at preventing them. The project is open source, so anyone can suggest new or improved labels, which means that we can keep responding to new and changing ethical landscapes in data science. 

The project has now been running for nearly two years and in that time we have had input from over 100 people on what the Hazard labels should be, and what safety precautions should be suggested for each of them. We are now launching Version 1.0 with newly designed labels and explainer animations! 

Chemical hazards are well known for their striking visual icons, which many of us see day-to-day on bottles in our homes. By having Data Hazard labels, we wanted to create similar imagery that would communicate the message of each of the labels. For example, how can we represent ‘Reinforces Existing Bias’ (one of the Hazard labels) in a small, relatively simple image? 

Image of the ‘Reinforces Existing Bias’ Data Hazard label

We also wanted to create some short videos describing the project, which included a data scientist character interacting with ‘AI’, and we faced the challenge of deciding how to create a better image of AI than the typical robot. We were very lucky to work with illustrator and animator Yasmin Dwiputri, and Vanessa Hanschke, who is doing a PhD at the University of Bristol on understanding responsible AI through storytelling. 

We asked Yasmin to share some thoughts from her experience working on the project:

“The biggest challenge was creating an AI character for the films. We wanted to have a character that shows the dangers of data science, but can also transform into doing good. We wanted to stay away from portraying AI as a humanoid robot and have a more abstract design with elements of neural networks. Yet, it should still be constructed in a way that would allow it to move and do real-life actions.

We came up with the node monster. It has limbs which allow it to engage with the human characters and story, but no facial expressions. Its attitude is portrayed through its movements, and it appears in multiple silly disguises. This way, we could still make him lovable and interesting, but avoid any stereotypes or biases.

As AI is becoming more and more present in the animation industry, it is creating a divide in the animation community. While some people are praising the endless possibilities AI could bring, others are concerned it will also replace artistic expressions and human skills.

The Data Hazard Project has given me a better understanding of the challenges we face even before AI hits the market. I believe animation productions should be aware of the impact and dangers AI can have, before only speaking of innovation. At the same time, as creatives, we need to learn more about how AI, if used correctly, and newer methods could improve our workflow.”

Yasmin Dwiputri

Now that these wonderful resources have been created, we have released them on our website and will be using them for the training, teaching and workshops that we run as part of the project. You can view the labels and the explainer videos on the Data Hazards website. All of our materials are licensed as CC-BY 4.0 and so can be used and re-used with attribution. 

We’re also really excited to see some on the Better Images of AI website, and hope they will be helpful to others who are trying to represent data science and AI in their work. A crucial part of AI ethics is ensuring that we do not oversell or exaggerate what AI can do, so the way we visualise AI is hugely important, both to public perception of the technology and to the practice of ethical data science! 

Cover image by Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / CC-BY 4.0

Handmade, Remade, Unmade A.I.

Two digitally illustrated green playing cards on a white background, with the letters A and I in capitals and lowercase calligraphy over modified photographs of human mouths in profile.

The Journey of Alina Constantin’s Art

Alina’s image, Handmade A.I., was one of the first additions to the Better Images of AI repository. The description affixed to the image on the site outlines its ‘alternative redefinition of AI’, bringing back into play the elements of human interaction which are so frequently excluded from discussions of the tech. Yet now, a few months on from the introduction of the image to the site, Alina’s work itself has undergone some ‘alternative redefinition’. This blog post explores the journey of this particular image, from the details of its conception to its numerous uses since. How has the image itself been changed, its significance adapted, its meaning put to new uses? 

Alina Constantin is a multicultural game designer, artist and organiser whose work focuses on unearthing human-sized stories out of large systems. For this piece, some of the principles of machine learning like interpretation, classification, and prioritisation were encoded as the more physical components of human interaction: ‘hands, mouths and handwritten typefaces’, forcing us to consider our relationship to technology differently. We caught up with Alina to discuss further the process (and meaning) behind the work.

What have been the biggest challenges in creating Better Images of AI?

Representing AI comes with several big challenges. The first is the ongoing inundation of our collective imagination with skewed imagery, falsely representing these technologies in practice, in the name of simplification, sensationalism, and our human impulse towards personification. The second challenge is the absence of any single agreed-upon definition of AI, and obviously the complexity of the topic itself.

What was your approach to this piece?

My approach was largely an intricate process of translation. To stay focused upon the ‘why of A.I’ in practical terms, I chose to focus on elements of speech, also wanting to highlight the human sources of our algorithms in hand drawing letters and typefaces. 

I asked questions, and selected imagery that could be both evocative and different. For the back side of the cards, not visible in this image, I bridged the interpretive logic of tarot with the mapping logic of sociology, choosing a range of 56 words from varying fields starting with A/I to allow for more personal and specific definitions of A.I. To take this idea further, I then mapped the idea to 8 different chess moves, extending into a historical chess puzzle that made its way into a theatrical card deck, which you can play with here. You can see more of the process of this whole project here.

This process of translating A.I via my own artist’s tool set of stories/gameplay was highly productive, requiring me to narrow down my thinking to components of A.I logic which could be expressed and understood by individuals with or without a background in tech. The importance of prototyping, and discussing these ideas with audiences both familiar and unfamiliar with AI helped me validate and adjust my own understanding and representation–a crucial step for all of us to assure broader representation within the sector.

So how has Alina’s Better Image been used? Which meanings have been drawn out, and how has the image been redefined in practice? 

One implementation of ‘Handmade A.I.’, on the website of one of our affiliated organisations, We and AI, remains largely aligned with the artist’s reading of it. According to We and AI, the image was chosen for its re-centring of the human within the AI conversation: the human hands still hold the cards, and humanity is responsible for their shuffling and their design (though not necessarily in complete control of which ones are dealt). Human agency continues to direct the technology, not the other way round. As a key tenet of the organisation, and a key element of the image identified by Alina, this all adds up. 

https://weandai.org/, use of Alina’s image

Usage by the Universität Hamburg, to accompany a lecture on responsibility in the AI field, follows a similar logic. The additional slant of human agency considered from a human rights perspective again broadens Alina’s initial image. The components of human interaction which she featured expand into a more universal representation of not just human input to these technologies but human culpability–the blood, in effect, is on our hands. 

Universität Hamburg use of Alina’s image

Another implementation, this time by the Digital Freedom Fund, accompanies an article concerning the importance of our language around these new technologies. Deviating slightly from the visual, and more into the semantics of artificial intelligence, the use may at first seem slightly unrelated. However, as the content of the article develops, concern with ‘technocentrism’ rather than anthropocentrism in our discussions of AI becomes a focal point. Alina’s image captures the need to reclaim language surrounding these technologies, placing the cards firmly back in human hands. The article directly states, ‘Every algorithm is the result of a desire expressed by a person or a group of persons’ (Meyer, 2022). Technology is not neutral. Like a pack of playing cards, it is always humanity which creates and shuffles the deck. 

Digital Freedom Fund use of Alina’s image

This is not the only instance in which Alina’s image has been used to illustrate the relation of AI and language. The question “Can AI really write like a human?” seems to be on everyone’s lips, and ‘Handmade A.I.’, with its deliberately humanoid typeface, is its natural visual partner. In a blog post for LSE, Marco Lehner (of BR AI+) discusses the employment of a GPT-3 bot and, whilst allowing for slightly more nuance, ultimately reaches a similar crux: human involvement remains central, no matter how much ‘automation’ we attempt.

Even as ‘better’ images such as Alina’s are provided, we still see the same stock images used over and over again. Issues surrounding the speed and need for images in journalistic settings, as discussed by Martin Bryant in our previous blog post, mean that people will continue to reach almost instinctively for the ‘easy’ option. But when asked to explain what exactly these images provide to the piece, there is often a marked silence. The stock image of a humanoid robot is meaningless; Alina’s images are specific. They deal in the realities of AI, in a real facet of the technology, and are thus not universally applicable. They relate to considerations of human agency and responsible AI practice, and do not (unlike the stock photos) act to the detriment of public understanding of our tech future.

Branching Out: Understanding an Algorithm at a Glance

A window of three images. On the right is a photo of a big tree in a green field of grass under a bright blue sky. The two on the left are simplifications created with a decision tree algorithm. The work illustrates a popular type of machine learning model: the decision tree. Decision trees work by splitting the population into ever smaller segments. I try to give people an intuitive understanding of the algorithm. I also want to show that models are simplifications of reality, but can still be useful, or in this case visually pleasing. To create this I trained a model to predict pixel colour values, based on an original photograph of a tree.
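The technique the artist describes can be sketched in a few lines: treat each pixel's (x, y) position as the input and its RGB colour as the target, then cap the number of leaves to control how coarse the "simplification" is. This is a minimal sketch using scikit-learn, with a synthetic gradient standing in for the photograph of the tree; the `simplify` helper is our own illustrative name.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]
coords = np.column_stack([xs.ravel(), ys.ravel()])  # features: pixel (x, y)

# Synthetic stand-in "photo": a smooth RGB gradient with values in 0..1
image = np.dstack([xs / w, ys / h, (xs + ys) / (w + h)])
colours = image.reshape(-1, 3)                      # targets: RGB per pixel

def simplify(n_leaves):
    """Fit a tree capped at n_leaves and render its prediction as an image."""
    tree = DecisionTreeRegressor(max_leaf_nodes=n_leaves, random_state=0)
    tree.fit(coords, colours)
    return tree.predict(coords).reshape(h, w, 3)

# Two panels of a triptych: a crude 8-leaf version and a finer 256-leaf one;
# more leaves means smaller segments and a closer match to the original.
coarse, fine = simplify(8), simplify(256)
```

Each leaf of the tree paints one rectangular segment a single colour, which is exactly why the low-leaf panels look like abstract blocks and the high-leaf panels approach the photograph.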

The impetus for the most recent contributions to our image repository was described by the artist as promoting understanding of present AI systems. Rens Dimmendaal, Principal Data Scientist at GoDataDriven, discussed with Better Images of AI the need to cut through the unnecessary complication of ideas within the AI field, a goal which he believes is best achieved through visual media. 

Discussions of the ‘black box’ of AI are not exactly new, and the recent calls for explainability statements to accompany new tech from Best Practice AI are certainly attempting to address the problem at some level. Tim Gordon writes of the greater ‘transparency’ required in the field, as well as the implicit acknowledgement that any wider impacts have been considered. Yet, for the broader spectrum of individuals whose lives are already being influenced by AI technologies, an extensive, jargon-filled document on the various inputs and outputs of any single algorithm is unlikely to provide much relief. 

This is where Dimmendaal comes in: to provide ‘understanding at a glance’ (and also to ‘make a pretty picture’, in his own words). The artist began with the example of the decision tree. All existing tutorials on this topic, in his view, use datasets which only make the concept more difficult to understand–have a look at ‘decision tree titanic’ for a clear illustration of this. Another explanation was provided by r2d3, yet for Rens this still employed an overly complicated ‘use case’. Hence this selection of images.

Rens cites his inspiration for this particular project as Roger Johansson’s recreation of the ‘Mona Lisa’, using genetic programming. In the original, Johansson attempts to reproduce the piece with a combination of semi-transparent polygons and an evolutionary algorithm, gradually mutating the initial randomly generated polygons to move closer and closer to the original image. Rens recreated elements of this code as a starting point, then with the addition of the triptych format and implementation of a decision tree style algorithm made the works his own. 
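Johansson's original used semi-transparent polygons and a genetic algorithm; a heavily simplified sketch of the same idea is below, substituting rectangles for polygons, a plain mutate-and-keep-if-better loop for the full evolutionary machinery, and a small synthetic gradient for the target painting. All names and parameters here are illustrative, not taken from Johansson's or Dimmendaal's code.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 32, 32
ys, xs = np.mgrid[0:h, 0:w]
target = np.dstack([xs / w, ys / h, np.zeros((h, w))])  # toy target image

def random_rect():
    """One semi-transparent rectangle: two corners plus an RGB colour."""
    x0, y0 = rng.integers(0, w - 1), rng.integers(0, h - 1)
    x1, y1 = rng.integers(x0 + 1, w + 1), rng.integers(y0 + 1, h + 1)
    return (x0, y0, x1, y1, *rng.random(3))

def render(rects):
    """Paint the rectangles onto a white canvas at 50% opacity."""
    canvas = np.ones((h, w, 3))
    for x0, y0, x1, y1, r, g, b in rects:
        canvas[y0:y1, x0:x1] = 0.5 * canvas[y0:y1, x0:x1] + 0.5 * np.array([r, g, b])
    return canvas

def error(img):
    return float(np.mean((img - target) ** 2))

rects = [random_rect() for _ in range(20)]
initial = best = error(render(rects))
for _ in range(300):   # mutate one shape; keep the change only if it helps
    candidate = rects.copy()
    candidate[rng.integers(len(rects))] = random_rect()
    score = error(render(candidate))
    if score < best:
        rects, best = candidate, score
```

Saving the intermediate renders at different points in the loop would produce the abstraction-to-image progression that Dimmendaal displays across his triptychs.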

Rens Dimmendaal / Better Images of AI / Man / CC-BY 4.0

In keeping with his motivations–making a ‘pretty picture’, but chiefly contributing to the greater transparency of AI methodologies–Dimmendaal chose the triptych format to present his outputs. The mutation of the image is shown as a fluid, interactive process, morphing across the triptych from left to right, from abstraction to the original image itself. Getting a glimpse inside the algorithm in this manner allows for the ‘understanding at a glance’ which the artist wished to provide–the image shifts before our eyes, from the initial input to the final output. 

Rens Dimmendaal & David Clode / Better Images of AI / Fish / CC-BY 4.0

Rens Dimmendaal & Jesse Donoghoe / Better Images of AI / Car / CC-BY 4.0

Engaging with the decision tree was not only a practical decision, related to the prior lack of adequate tutorial, but also an artistic one. As Dimmendaal explains, ‘applying a decision tree to an actual tree was just too poetic an opportunity to let slide.’ We think it paid off… 

Dimmendaal has worked with numerous algorithmic systems previously (including k-means, nearest neighbours, linear regression and SVMs) but cites this particular combination of genetic programming, decision trees and the triptych format as producing the nicest outcome. More of his work can be found both in our image repository, and on his personal website.

Whether or not a detailed understanding of algorithms is something you are interested in, you can input your own images to the tool Rens created for this project here and play around with making your own decision tree art. What do images relevant to your industry, product or interests look like seen through this process? Make sure to tag Better Images of AI in your AI artworks, and credit Rens. We’re excited to see what you come up with!

More from Better Images: Twitter | LinkedIn

More from the artist: Twitter | Linkedin

Humans (back) in the Loop

Pictures of artificial intelligence often erase the human side of the technology completely, removing all traces of human agency. Better Images of AI seeks to rectify this. Yet picturing the AI workforce is complex and nuanced. Our new images from Humans in the Loop attempt to present more of the positive side, as well as bringing the human back into the centre of AI’s global image. 

The ethics of AI supply chains are not newly under fire. Yet, separate from the material implications of its production, the ‘new digital assembly line’ which Mary L. Gray and Siddharth Suri explore in their book Ghost Work holds a much more immediate (and largely unrecognised) human impact: in particular, the all-too-frequent exploitation characterising so-called ‘Clickwork’. Better Images of AI has recently coordinated with award-winning social enterprise Humans in the Loop to attempt to rectify this endemic removal of the human from discussion, with a focus on images concerning the AI supply chain and the field of artificial intelligence more broadly.

‘Clickwork’, more appropriately referred to as ‘data work’, is an umbrella term signifying a whole host of human involvements in AI production. One of the areas in which human input is most needed is data annotation, an activity that provides training data for artificial intelligence. What used to be considered “menial” and “low-skilled” work is today a nascent field with its own complexities and skills requirements, involving extensive training. However, tasks such as this, often ‘left without definition and veiled from consumers who benefit from it’ (Gray & Suri, 2019), result in these individuals finding themselves relegated to the realm of “ghost work”.
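To make concrete what data annotation produces, here is a hypothetical example of annotation output: a human labeller marks objects in images, and each mark becomes one training record for a computer-vision model. The field names, filenames and labels below are invented for illustration, loosely echoing common object-detection formats.

```python
from collections import Counter

# Each record pairs an image with a human-drawn label and bounding box;
# all values here are hypothetical, for illustration only.
annotations = [
    {"image": "street_001.jpg", "label": "pedestrian", "bbox": [34, 50, 80, 190]},
    {"image": "street_001.jpg", "label": "bicycle",    "bbox": [120, 90, 210, 160]},
    {"image": "street_002.jpg", "label": "car",        "bbox": [15, 40, 300, 220]},
]

# One routine quality check an annotation team might run: is the dataset
# balanced across classes, or will the model mostly see one label?
label_counts = Counter(record["label"] for record in annotations)
```

Checks like this hint at why the work is skilled rather than menial: consistent, well-balanced labels are what make the downstream model usable at all.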

While the nature of ‘ghost work’ is not inherently positive or negative, the resulting lack of protection for these data workers can produce some highly negative outcomes. Recently, Time Magazine uncovered practices which were not only hidden but deliberately misrepresented. The article collates testimonies from Sama employees contracted as outsourced Facebook content moderators, revealing a workplace characterised by ‘mental trauma, intimidation, and alleged suppression’. It ultimately concludes that, through the hidden nature of this sector of the supply chain, Facebook profits from exploitation, and from the exportation of trauma away from the West and toward the developing world.

So how can we help to mitigate these associated risks of ‘ghost work’ within the AI supply chain? It starts with making the invisible, visible. As Noopur Raval (2021) puts it, to collectively ‘identify and interrupt the invisibility of work’ constitutes an initial step towards undermining the ‘deliberate construction and maintenance of “screens of invisibility”‘. The prevalent images of AI circulated in the West as an extension of ‘AI imperialism’ (an idea further engaged with by Karen Hao (2022)) remove any semblance of human agency or production, and conceal the potential for human exploitation. To counter them, we were keen to show the people involved in creating the technology.

These people are very varied and not just the homogeneous Silicon Valley types portrayed in popular media. They include silicon miners, programmers, data scientists, product managers, data workers, content moderators, managers and many others from all around the globe; these are the people who are the intelligence behind AI. Our new images from Humans in the Loop attempt to challenge wholly negative depictions of data work, whilst simultaneously bringing attention to the exploitative practices and employment standards within the fields of data labelling and annotation. There is still, of course, work to do, as the Founder, Iva Gumnishka, detailed in the course of our discussion with her. The glossy, more optimistic look at data work which these images present must not be taken as licence to excuse the ongoing poor working conditions, lack of job stability, or exposure to damaging or traumatic content which many of these individuals still face.

As well as meeting our aim of portraying the daily work at Humans in the Loop and showcasing the ‘different faces behind [their] projects’, our discussions with the Founder gave us the opportunity to explore and communicate some of the potential positive outcomes of roles within the supply chain. These include the greater flexibility which employment such as data annotation can allow for, in contrast to the more precarious side of gig-style working economies.

In order to harness the positive potential of new employment opportunities, especially those for displaced workers, Humans in the Loop navigates major geopolitical factors impacting its employees (for example the Taliban government in Afghanistan, the embargoes on Syria and, more recently, the war in Ukraine). Gumnishka also described issues connected with this brand of data work, such as convincing ‘clients to pay dignified wages for something that they perceive as “low-value work”’ and attempting to avoid the ‘race to the bottom’ within this arena. Another challenge lies in enabling the workers themselves to acknowledge their central role in the industry and the impact their work is having. When asked what she would identify as the central issue within present AI supply chain structures, her emphatic response was that ‘AI is not as artificial as you would think!’. The cloaking of the hundreds of thousands of people working to verify and annotate the data, all in the name of selling products as “fully autonomous” and possessing “superhuman intelligence”, only acts to the detriment of its very human components. By including more of the human faces behind AI, as a completely normal and necessary part of it, Gumnishka hopes to trigger the unveiling of AI’s hidden labour inputs. In turn, by sparking widespread recognition of the complexity, value and humanity behind work such as data annotation and content moderation–as in the case of Sama–the ultimate goal is an overhaul of data workers’ employment conditions, wages and acknowledgement as a central part of AI futures. 

In our gallery we attempt to represent both sides of data work. Max Gruber, another contributor to the Better Images of AI gallery, engages with the darker side of gig work in greater depth through his work, included in our main gallery and below. It presents ‘clickworkers’ as they predominantly are currently: precariously paid workers in a digital gig economy, performing monotonous work for little to no compensation. His series of photographs depicts 3D-printed figures stationed in front of their computers, to the uncomfortable effect of quite literally illustrating the term “human resources”, as well as the rampant anonymity which perpetuates exploitation in the area. The figure below, ‘Clickworker 3d-printed’, is captioned as ‘anonymized, almost dehumanised’; the obscured face and the identical ‘worker’ in the background of the image cement the individual’s status as unacknowledged labour in the AI supply chain. 

Max Gruber / Better Images of AI / Clickworker 3d-printed / CC-BY 4.0

We can contrast this with the stories behind Humans in the Loop’s employees.

Nacho Kamenov & Humans in the Loop / Better Images of AI / Data annotators labeling data / CC-BY 4.0

This image, titled ‘Data annotators labelling data’, immediately offers up two very real data workers, their faces clear and their contribution to the production of AI clearly outlined. The accompanying caption details the function of data annotation, when it is needed and what purpose it serves; there is no masking, no hidden element to their work, as there was previously.

Gumnishka shares that some of the people who appear in the images have continued their path as migrants and refugees to other European countries, for example the young woman in the blog cover photo. Others have other jobs: one of the pictures shows an architect who, having now found work in her field, continues to come to training and remains part of the community. For others, like the woman in the colourful scarf, data work has become their main source of livelihood and they are happy to pursue it as a career.

By adding the human faces back into the discussions surrounding artificial intelligence, we see not just the Silicon Valley or business-suited tech workers who usually appear in pictures, but the vast armies of workers across the world, many of them women, many of them outside the West.

The image below is titled ‘A trainer instructing a data annotator on how to label images’. It helps address the lack of clarity on what exactly data work entails, and the level of training, expertise and skill required to carry it out, showing some of that extensive training in visible action, in this case delivered by the Founder herself.

a young woman sitting in front of a computer in an office while another woman standing next to her is pointing at something on her screen
Nacho Kamenov & Humans in the Loop / Better Images of AI / A trainer instructing a data annotator on how to label images / CC-BY 4.0 (Also used as cover image)

Although these images cannot, of course, represent the experience of all data workers, they complement a growing awareness of working conditions enabled by contributions such as the recent Time article, the work of Gray and Suri, and Kate Crawford’s book Atlas of AI. Together with the counterbalance provided by Max Gruber’s images, the photographs from Humans in the Loop provide inspiration for others. 

We hope to keep adding images of the real people behind AI, especially those most invisible at present. If you work in AI, could you send us your pictures? How would you show the real people behind AI? Who is still going unnoticed or unheard? Get involved with the project here: https://betterimagesofai.org/contact.

Better Images of AI’s first Artist: Alan Warburton

A photographic rendering of a young black man standing in front of a cloudy blue sky, seen through a refractive glass grid and overlaid with a diagram of a neural network

In working towards providing better images of AI, BBC R&D are commissioning some artists to create stock pictures for open licence use. Working with artists to find more meaningful and helpful yet visually compelling ways to represent AI has been at the core of the project.

The first artist to complete his commission is London-based Alan Warburton. Alan is a multidisciplinary artist exploring the impact of software on contemporary visual culture. His hybrid practice feeds insight from commercial work in post-production studios into experimental arts practice, where he explores themes including digital labour, gender and representation, often using computer-generated images (CGI). 

His artwork has been exhibited internationally at venues including BALTIC, Somerset House, Ars Electronica, the National Gallery of Victoria, the Carnegie Museum of Art, the Austrian Film Museum, HeK Basel, Photographers Gallery, London Underground, Southbank Centre and Channel 4. Alan is currently doing a practice-based PhD at Birkbeck, London looking at how commercial software influences contemporary visual cultures.

Warburton’s first encounters with AI are likely familiar to us all through the medium of disaster and science fiction films that presented assorted ideas of the technology to broad audiences through the late 1990s and early 2000s. 

As an artist, Warburton says it is over the past few years that technological examples have jumped out for him to help create his work. “In terms of my everyday working life, I suppose that rendering – the process of computing photorealistic images – has always been an incredibly slow and complex process but in the last four or five years various pieces of software that are part of the rendering process have begun to incorporate AI technologies in increasing degrees,” he says. “AI noise reduction or things like rotoscoping are affected as the very mundane labour-intensive activities involved in the work of an animator and visual effects artists or image manipulator have been sped up. 

“AI has also affected me in the way it has affected everyone else through smart phone technology and through the way I interact with services provided by energy companies or banks or insurance people. Those are the areas that are more obscured, obtuse or mysterious because you don’t really see the systems. But with image processing software I have an insight into the reality of how AI is being used.” 

Warburton’s knowledge of software and AI tools has ensured that he is able to critically analyse which tools are beneficial. “I have been quite discriminatory in the way I use AI tools. There’s workflow tools that speed things up as well as image libraries and 3D model libraries. But the latter ones provide politically charged content even though it’s not positioned as such. Presets available in software will give you white skinned caucasian bodies and allow you to photorealistically simulate people but, for example, there’s hair simulation algorithms that default to caucasian hair. There’s this variegated tapestry of AI software tools, libraries, databases that you have to be discriminatory in the use of or be aware of the limitations and bias and voice those criticisms.” 

The artist’s personal use of technology is also careful and thought through. “I don’t have my face online,” he says. “There’s no content of me speaking online, I don’t have photographs online. That’s slightly unusual for someone who works as an artist and has necessary public engagement as part of my job, but I’m very aware that anything I put online can be used as training data –  if it’s public domain (materials available to the public as a whole, especially those not subject to copyright or other legal restrictions) then it’s fair game.

“Whilst my image is unlikely to be used for nefarious ends or contribute directly to a problematic database, there’s a principle that I stick to and I have stuck to for a very long time. There’s some control over my data, my presence and my image that I like to police although I am aware that my data is used in ways that I don’t understand. Keeping control over that data requires labour, you have to go through all of the options in consent forms and carefully select what you are willing to give away and not. Being discriminatory about how your data is used to construct powerful systems of control and AI is a losing game. You have to some extent to accept that your participation with these systems relies on you giving them access to your data.”

When it comes to addressing the issues of AI representation in the wider world, Warburton can see the issues that need to be solved and acknowledges that there is no easy answer. “Over the past five or ten years we have had waves of visual interpretations of our present moment,” he says. “Unfortunately many of those have reached back into retro tropes. So we’ve had vaporwave and post-internet aesthetics and many different Tumblr vibes trying to frame the present visual culture or the technological now but using retro imagery that seemed regressive. 

“We don’t have a visual language for a dematerialised culture.”

“We don’t have a visual language for a dematerialised culture. It’s very difficult to represent the culture that comes through the conduit of the smartphone. I think that’s why people have resorted to these analogue metaphors for culture. We may have reached the end of these attempts to describe data or AI culture, we can’t use those old symbols anymore and yet we still don’t have a popular understanding of how to describe them. I don’t know if it’s even possible to build a language that describes the way data works. Resorting to metaphor seems like a good way of solving that problem but this also brings in the issue of abstraction and that’s another problem.”

Alan’s experience and interest in this field of work have led to some insightful and recognisable visualisations of how AI operates and what is involved, which can act as inspiration for other artists with less knowledge of the technology. Future commissions from BBC R&D for the Better Images of AI project will enable other artists to use their different perspectives to help evolve this new visual language for dematerialised culture.