Better Images of AI’s Student Stewards

Better Images of AI is delighted to be working with Cambridge University’s AI Ethics Society to create a community of Student Stewards. The Student Stewards are working to empower people to use more representative images of AI and celebrate those who lead by example. The Stewards have also formed a valuable community to help Better Images of AI connect with its artists and develop its image library. 

What is Cambridge University’s AI Ethics Society? 

The Cambridge University AI Ethics Society (CUAES) is a group of students from the University of Cambridge who share a passion for advancing the ethical discourse surrounding AI. Each year, the Society chooses a campaign to support and, through events and workshops, introduces its members to the issues that the chosen organisation is trying to solve. In 2023, CUAES supported Stop Killer Robots. This year, the Society chose to support Better Images of AI. 

The Society’s Reasons for Supporting Better Images of AI 

Better Images of AI’s mission resonated strongly with the CUAES committee. The impact that visual media can have on public discourse about AI has been overlooked – especially in academia, where the focus is on the written word. Yet stock images of humanoid robots, white men in suits and the human brain all embed certain values and preconceptions about what AI is and who makes it. CUAES believes that Better Images of AI can help cultivate more thoughtful and constructive discussions about AI. 

Members of the CUAES are privileged enough to be fairly well informed about the nuances of AI and its ethical implications. Nevertheless, the Society has recognised that even its own logo of a robot incorporates reductive imagery that misrepresents the complexities and current state of AI. This oversight in its own decisions showed CUAES that further work needs to be done.

CUAES is eager to share the importance of Better Images of AI not only with industry actors but also with members of the public, whose perceptions are likely to be shaped most by these sensationalist images. CUAES hopes that by creating a community of Student Stewards, it can disseminate Better Images of AI’s message widely and work together to revise its logo to better reflect the Society’s values. 

The Birth of the Student Steward Initiative

Better Images of AI visited the CUAES earlier this year to introduce members to its work and encourage students to think more critically about how AI is represented. During the workshop, participants were given the tough task of designing their own images of AI – we saw everything from illustrations depicting how generative AI models are trained to the duality of AI symbolised by the yin and yang. The students who attended the workshop were fascinated by Better Images of AI’s mission and wanted to use their skills and time to help – this was the start of the Student Steward community. 

A few weeks after this workshop, individuals were invited to a virtual induction to become Student Stewards so they could introduce more nuanced understandings of AI to the wider public. Whilst this initiative was born out of CUAES, students (and others) from all around the globe are invited to join the group to shape a more informed and balanced public perception of AI.

The Role of the Student Stewards

The Student Stewards are on the frontline of spreading Better Images of AI’s mission to journalists, researchers, communications professionals, designers, and the wider public. Here are some of the roles that they champion: 

  1. The Guidance Role: if our Student Stewards see images of AI that are misleading, unrepresentative or harmful, they will attempt to contact the authors and make them aware of the Better Images of AI Library and Guide. The Stewards hope that they can raise awareness of the problems associated with the images used and guide authors towards alternative options that avoid reinforcing dangerous AI tropes. 
  2. The Gratitude Role: we realise that it is equally important to recognise instances where authors have used images from the Better Images of AI library. Images from the library have been spotted in international media, adopted by academic institutions and utilised by independent writers. Every decision to opt for more inclusive and representative images of AI plays a crucial role in raising awareness of the nuances of AI. Therefore, our Stewards want to thank authors for being sensitive to these issues and encourage the continued use of the library. 
  3. Connecting with artists: the stories and motivations behind each of the images in our library are often interesting and thought-provoking. Our Student Stewards will be taking the time to connect with the artists who contribute images to our library. By learning more about how artists have been inspired to create their works, we can better appreciate the diverse perspectives and narratives that these images provide to wider society. 
  4. Helping with image collections: Better Images of AI carefully selects the images that are published in its library. Each image is scrutinised against the project’s requirements to ensure that it avoids reinforcing harmful stereotypes and embodies the principles of honesty, humanity, necessity and specificity. Our Student Stewards will be assisting with many of the tasks involved from submission to publication, including liaising with artists, data labelling, evaluating initial submissions, and writing image descriptions. 
  5. Sharing their views: each of our Student Stewards comes with different interests related to AI and its associated representations, narratives, benefits and challenges. We are eager for our students to share their insights on our blog to introduce others to new debates and ideas in these domains.

As Better Images of AI is a non-profit organisation, our community of Stewards operates on a voluntary basis, but this allows for flexibility around other commitments. Stewards are free to take on additional tasks based on their own availability and interests, and there are no minimum time requirements for the role – we are just grateful for their enthusiasm and willingness to help! 

If you are interested in becoming a Student Steward at Better Images of AI, please get in touch. You do not need to be affiliated with the University of Cambridge or be a student to join the group.

What do children think AI looks like?

Selection of Post-it notes representing children’s views of AI

The BBC Research and Development team asked hundreds of children this question as part of their Get Curious event at the Manchester Science Festival. The event aimed to help children and families understand what AI is and share the interesting ways that it is used at the BBC.

“What do you think AI looks like?”

That was the question we posed to hundreds of children and families passing through the 2022 Manchester Science Festival at the Science and Industry Museum. Representing the work of BBC R&D, we set up shop in the main hall, primed with demos of intelligent wildlife cameras used on BBC productions, and interactive games that explain how AI works.

However, one task was something that all ages could have a go at. We handed each passerby a post-it note, asked them to draw what they thought artificial intelligence looked like, and encouraged them to stick it on our wall of AI images.

As well as being an artsy refuge from the busy museum, this collective mind map-cum-collaborative art project had a purpose. We wanted to see how early unhelpful AI image tropes set in, and to explore what inspiration can be taken from the youngest of all generations in creating Better Images of AI.

So, with an empty wall, we started collecting drawings.

With such a range of ages and understanding of artificial intelligence, a lot of this exercise involved the team helping kids understand what AI is and where they might come across it. Getting a 7-year-old to understand what we meant by AI called for a lot of obvious reference points. Talking about apps on smartphones and voice assistants like Alexa both proved useful, and of course, robots! As a result, plenty of sketches of iPads, smart speakers and wacky androids lined the wall.

Some drawings were also inspired by our other activities demonstrating AI. Many latched on to the idea of birds and smart cameras from our wildlife identification demo. A few also tried to represent the confusion seen when AI comes across something it is not trained to recognise.

The older children at the festival were also curious about what was going on under the hood: “But how does it actually work?” These explanations and discussions prompted more literal interpretations of what AI looks like: an overworked laptop, computer chips and even sketches of streams of coded data.

A number of drawings pulled from the biological tropes of AI, including the classic disembodied brain to make a comparison with human intelligence. Another sketch used a DNA double helix, presumably to represent a kind of ‘programmed’ intelligence. Other less helpful tropes also emerged; to one participant, the answer to “what do you think AI looks like?” was the Meta logo.

My favourite image of AI from the festival came from a father trying to explain AI to his son. “AI is just like…” He paused, before suggesting:

“Magic?”

The two then sketched an image that perfectly encapsulated the wonder of AI, along with the mystery that many feel when faced with results from ‘black box’ algorithms. A rabbit appearing from a magician’s hat. 

At the end of the day, we were left with a wall containing over one hundred creative images of AI. I was also left with two conclusions. Firstly, people’s images of AI are shaped heavily by how AI has been explained to them. If the explanation contains certain tropes, so will their understanding of what AI looks like.

Secondly, asking children, families, and other non-technical people the simple question of “what do you think AI looks like?” showed how curious the public really are about AI. The imaginative responses to this question provide fresh inspiration for what to do — and what not to do — when creating images of AI.

About the Authors

Ben Hughes is a research engineer at BBC R&D. His work in AI and ML has involved research in music information retrieval and creating experiences for explaining machine learning to the general public. The latter work has led to school workshops and outreach on AI education.

Tristan Ferne is the lead producer for the Internet Research & Future Services team, where he develops and runs projects that use technology and design to prototype the future of media. He has over 15 years’ experience in R&D for the web, TV and radio. 

Learn more about this project

This project was conducted as part of BBC R&D’s Get Curious event at the Manchester Science Festival. The event aimed to help children and families understand what AI is and share the interesting ways that it is used at the BBC.

Illustrating the Materiality of AI

Silicon block on a plain black background

The physical materials involved in designing, producing, and running artificially intelligent systems are all too frequently absent from discussions of AI itself. As a result, the implications of AI’s intense materiality continue to be overlooked and unremedied.

By picturing the physicality of artificial intelligence within the Better Images of AI repository, with the contributions of Catherine Breslin and Fritzchens Fritz, we hope to foster more accurate representations of these emerging technologies.

Picturing Silicon

Catherine Breslin

Silicon is a crucial component of AI manufacture. A block like the one pictured here would be sliced into 12-inch-diameter wafers to form the base of CPUs. Picturing silicon visually illustrates that the ‘mining’ of AI is not purely metaphorical (e.g. data mining) but also a literal, material undertaking. Catherine Breslin, the photographer, knows the AI supply chain first-hand through her work as a machine-learning voice engineer and consultant, having previously been involved in the production of Amazon’s Alexa.

A block of silicon (also known as a mono-crystal) placed on a plain black background and photographed in HD to make its rich, reflective and complex surface visible.
Catherine Breslin / Better Images of AI / Silicon on Black 1 / CC-BY 4.0
A block of silicon (also known as a mono-crystal) placed on a plain black background and photographed in HD to make its rich, reflective and complex surface visible.
Catherine Breslin / Better Images of AI / Silicon Closeup / CC-BY 4.0

GPUs, etched.

Fritzchens Fritz

Three colorful GPUs with their packaging cleanly removed lying on a white surface
Fritzchens Fritz / Better Images of AI / GPU shot etched 2 / CC-BY 4.0

The GPU (Graphics Processing Unit) is an essential part of modern AI infrastructure. It is a special type of chip, or electronic circuit, originally designed to process images and render graphics and now used for other computational tasks, including training neural networks in deep learning. Die shots are close-up photographs of computer chips from which the “packaging” has been removed, usually through a rather dangerous etching process involving sulfuric acid and high temperatures. The artist has used a combination of external light sources, polarising filters on the camera lens and image post-production to create the colourful effect, capturing in this shot three NVIDIA Turing chips (TU104, TU106, TU116).

Abstract microscopic photography of a Graphics Processing Unit resembling a satellite image of a big city
Fritzchens Fritz / Better Images of AI / GPU shot etched 5 / CC-BY 4.0

Why these images?

Tania Duarte, who coordinates the Better Images of AI collaboration, explains why the project has elected to commission and include these images as part of their repository:

“All too often we see images of AI in virtual, holographic forms, or find ourselves repeatedly presented with circuit brains in shiny 3D outlines suspended in blue space. These images of AI can make the technology seem intangible and ungovernable; something removed from real-world origins and consequences, perhaps even magical.

Catherine Breslin’s striking silicon rock images show the materiality of AI, and allude to the environmental impact: the physical reality of extracting natural resources for the industry and its toll on people and the planet. They also showcase the stunning beauty of the natural rock, in an iconic image echoing the shiny sci-fi robots in representations of AI, but falling much closer to its physical reality.

The next images, of GPUs – themselves made from silicon – are fascinating in that they show a further stage in the production of the hardware which enables AI systems. They are also visually compelling, showing a vibrant use of colour much more reflective of the many outputs of AI, and they make me wonder why, in trying to make AI exciting, organisations use such limited and clichéd colour palettes.”

Press release: Better Images of AI launches a free stock image library of more realistic images of artificial intelligence


  • Non-profit collaboration starts to make and distribute more accurate and inclusive visual representations of AI
  • Follows research showing that currently popular images of AI, which rely on themes like white human-like robots, glowing brains and blue backgrounds, create barriers to understanding the technology, to trust, and to diversity
  • Available for technical, science, news and general media and marketing communications

December 14, 2021 08:00 AM Coordinated Universal Time (UTC)

LONDON, UK. Today sees the launch of the Better Images of AI Image Library, which makes available the first commissioned and curated stock images of artificial intelligence (AI), in response to research studies that have substantiated concerns about the negative impacts of the existing available imagery.

betterimagesofai.org is a collaboration between various global academics, artists, diversity advocates, and non-profit organisations. It aims to help create a more representative and realistic visual language for AI systems, themes, applications and impacts. It is now starting to provide free images, guidance and visual inspiration for those communicating on AI technologies. 

At present, the available downloadable images on photo libraries, search engines, and content platforms are dominated by a limited range of images, for example, those based on science fiction inspired shiny robots, glowing brains and blue backgrounds. These tropes are often used as inspiration even when new artwork is commissioned by media or tech companies.

The first few images to be released on the library showcase different approaches to visually communicating technologies such as computer vision and natural language processing, and to communicating themes such as the role of ‘click workers’ who annotate data used in machine learning training and other human input to machine learning.

A photographic rendering of a young black man standing in front of a cloudy blue sky, seen through a refractive glass grid and overlaid with a diagram of a neural network
Image by Alan Warburton / © BBC / Better Images of AI / Quantified Human / Licenced by CC-BY 4.0
Two digitally illustrated green playing cards on a white background, with the letters A and I in capitals and lowercase calligraphy over modified photographs of human mouths in profile.
Alina Constantin / Better Images of AI / Handmade A.I / Licenced by CC-BY 4.0
A banana, a plant and a flask on a monochrome surface, each one surrounded by a thin white frame with letters attached that spell the name of the objects
Max Gruber / Better Images of AI / Banana / Plant / Flask / Licenced by CC-BY 4.0

Better Images of AI is coordinated by We and AI and includes research, development and artistic input from BBC R&D, with academic partner the Leverhulme Centre for the Future of Intelligence. Founding supporters of the initiative include the Ada Lovelace Institute, The Alan Turing Institute, The Institute for Human-Centred AI, Digital Catapult, International Centre for Ethics in the Sciences and Humanities (IZEW), All Tech is Human, Feminist Internet and the Finnish Center for Artificial Intelligence (FCAI). These organisations will advise on the creation of images, ensuring that social and technical considerations and expertise underpin the creation and distribution of compelling new images.

Octavia Reeve, Interim Lead, Ada Lovelace Institute said:

“The images that depict AI play a fundamental role in shaping how we perceive it. Those perceptions shape the ways AI is built, designed, used and adopted. To ensure these technologies work for people and society we must develop more representative, inclusive, diverse and realistic images of AI. The Ada Lovelace Institute is delighted to be a Founding Supporter of the Better Images of AI initiative.”

Dr. Kanta Dihal, Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge said:

“Images of white plastic androids, Terminators, and blue brains have been increasingly widely criticized for misinforming people about what AI is, but until now there has been a huge lack of suitable alternative images. I am incredibly excited to see the Better Images of AI project leading the way in providing these alternatives.”

Dr. Charlotte Webb, Co-founder of Feminist Internet said: 

“The images we use to describe and represent AI shape not only how it is understood in the public imaginary, but also how we build, interact with and subvert it. Better Images is trying to intervene in the picturing of AI so we can expand beyond the biases and lack of imagination embedded in today’s stock imagery.”  

Professor Teemu Roos, Finnish Center for Artificial Intelligence, University of Helsinki said:

“Images are not just decoration – especially in today’s fast-paced media environment, headlines and illustrations count at least as much as the actual story. But while it’s easy to call out bad stock photos, it’s very hard to find good alternatives. I’m extremely happy to see an initiative like Better Images of AI filling a huge gap in the way we can communicate about AI without perpetuating harmful misconceptions and mystification of AI.”

David Ryan Polgar, Founder and Director of All Tech Is Human said:

“Visual representation of artificial intelligence greatly influences our overall conception of how AI is impacting society, along with signalling inclusion of who is, and who should be, involved in the process. Given the ubiquitous nature of AI and its broad impact on most every aspect of our lives, Better Images of AI is a much-needed shift away from the intimidatingly technical and often mystical portrayal of AI that assumes an unwarranted neutrality. AI is made by humans and all humans should feel welcome to participate in the conversation around it.”

Tania Duarte, Co-Founder of We and AI said:

“We have found that misconceptions about AI make it hard for people to be aware of the impact of AI systems in their lives, and the human agency behind them. Myths about sentient robots are fuelled by the pictures they see, which are overhyped, futuristic, colonial, and distract from the real opportunities and issues. That’s why We and AI are so pleased to have coordinated this project which will build greater public engagement with AI, and support more trustworthy AI.”

The Better Images of AI project has so far been funded by volunteers at We and AI and BBC R&D, and now invites sponsors, donations in kind and other support in order to grow the repository and ensure that more images from artists from underrepresented groups, and from the global south can be included. 

Better Images of AI invites interest from organisations who wish to know more about the briefs developed as part of the project and to get involved in working with artists to represent their AI projects. They also wish to make contact with artists and art organisations who are interested in joining the project.

Contact

For further information: info (at) betterimagesofai.org

For funding offers: tania.duarte (at) weandai.org

Website: https://www.betterimagesofai.org

Twitter: https://twitter.com/ImagesofAI

Notes

We and AI are a UK non-profit organisation engaging, connecting and activating communities to make AI work for everybody. Their volunteers develop programmes including the Race and AI Toolkit, and AI Literacy & AI in Society workshops. They support a greater diversity of people to get involved in shaping the impact and opportunities of AI systems.
Website: https://weandai.org/ Email: hello (at) weandai.org