Better Images of AI is delighted to be working with Cambridge University’s AI Ethics Society to create a community of Student Stewards. The Student Stewards are working to empower people to use more representative images of AI and celebrate those who lead by example. The Stewards have also formed a valuable community to help Better Images of AI connect with its artists and develop its image library.
What is Cambridge University’s AI Ethics Society?
The Cambridge University AI Ethics Society (CUAES) is a group of students from the University of Cambridge who share a passion for advancing the ethical discourse surrounding AI. Each year, the society chooses a campaign to support and, through events and workshops, introduces its members to the issues that the chosen organisation is trying to solve. In 2023, CUAES supported Stop Killer Robots. This year, the Society chose to support Better Images of AI.
The Society’s Reasons for Supporting Better Images of AI
Better Images of AI's mission really resonated with the CUAES committee. The impact that visual media can have on public discourse about AI has been overlooked – especially in academia, where the focus is on the written word. Yet stock images of humanoid robots, white men in suits and the human brain all embed certain values and preconceptions about what AI is and who makes it. CUAES believes that Better Images of AI can help cultivate more thoughtful and constructive discussions about AI.
Members of the CUAES are privileged enough to be fairly well-informed about the nuances of AI and its ethical implications. Nevertheless, the Society has recognised that even its own logo of a robot incorporates reductive imagery that misrepresents the complexities and current state of AI. This oversight in its own choices showed CUAES that further work still needs to be done.
CUAES is eager to share the importance of Better Images of AI not only with industry actors, but also with members of the public, whose perceptions are likely to be shaped most by sensationalist images. CUAES hopes that by creating a community of Student Stewards, it can disseminate Better Images of AI's message widely and work to revise the Society's logo to better reflect its values.
The Birth of the Student Steward Initiative
Better Images of AI visited the CUAES earlier this year to introduce members to its work and encourage students to think more critically about how AI is represented. During the workshop, participants were given the tough task of designing their own images of AI – we saw everything from illustrations depicting how generative AI models are trained to the duality of AI symbolised by the yin and yang. The students who attended the workshop were fascinated by Better Images of AI's mission and wanted to use their skills and time to help – this was the start of the Student Steward community.
A few weeks after this workshop, individuals were invited to a virtual induction to become Student Stewards so they could introduce more nuanced understandings of AI to the wider public. Whilst this initiative was born out of CUAES, students (and others) from all around the globe are invited to join the group and help shape a more informed and balanced public perception of AI.
The Role of the Student Stewards
The Student Stewards are on the frontline of spreading Better Images of AI’s mission to journalists, researchers, communications professionals, designers, and the wider public. Here are some of the roles that they champion:
The Guidance Role: if our Student Stewards see images of AI that are misleading, unrepresentative or harmful, they will attempt to contact authors and make them aware of the Better Images of AI Library and Guide. The Stewards hope that they can help to raise awareness of the problems associated with the images used and guide authors towards alternative options that avoid reinforcing dangerous AI tropes.
The Gratitude Role: we realise that it is equally important to recognise instances where authors have used images from the Better Images of AI library. Images from the library have been spotted in international media, adopted by academic institutions and utilised by independent writers. Every decision to opt for more inclusive and representative images of AI plays a crucial role in raising awareness of the nuances of AI. Therefore, our Stewards want to thank authors for being sensitive to these issues and encourage continued use of the library.
Connecting with artists: the stories and motivations behind each of the images in our library are often interesting and thought-provoking. Our Student Stewards will be taking the time to connect with artists who contribute images to our library. By learning more about how artists have been inspired to create their works, we can better appreciate the diverse perspectives and narratives that these images provide to wider society.
Helping with image collections: Better Images of AI carefully selects the images that are published in its library. Each image is scrutinised against a set of requirements to ensure that it avoids reinforcing harmful stereotypes and embodies the principles of honesty, humanity, necessity and specificity. Our Student Stewards will be assisting with many of the tasks involved from submission to publication, including liaising with artists, data labelling, evaluating initial submissions, and writing image descriptions.
Sharing their views: each of our Student Stewards comes with different interests related to AI and its associated representations, narratives, benefits and challenges. We are eager for our students to share their insights on our blog and introduce others to new debates and ideas in these domains.
As Better Images of AI is a non-profit organisation, our community of Stewards operates on a voluntary basis, but this allows for flexibility around other commitments. Stewards are free to take on additional tasks based on their own availability and interests, and there are no minimum time requirements for undertaking this role – we are just grateful for their enthusiasm and willingness to help!
If you are interested in becoming a Student Steward at Better Images of AI, please get in touch. You do not need to be affiliated with the University of Cambridge or be a student to join the group.
In July 2023, Science Gallery London and the London Office of Technology and Innovation (LOTI) co-hosted a workshop helping Londoners think about the kind of AI they want. In this post, Dr Peter Rees reflects on the event, describes its methodology, and celebrates some of the new images that resulted from the day.
Who can create better images of Artificial Intelligence (AI)? Our culture is dominated by common misleading tropes: white humanoid robots, glowing blue brains, and various iterations of the extinction of humanity. Better Images of AI is on a mission to increase AI literacy and inclusion by countering these unhelpful images. Everyone should get a say in what AI looks like and how they want to make it work for them. No one perspective or group should dominate how AI is conceptualised and imagined.
This is why we were delighted to be able to run the workshop ‘Co-creating Better Images of AI’ during London Data Week. It was a chance to bring together over 50 members of the public, including creative artists, technologists, and local government representatives, to each make our own images of AI. Most images of AI that appear online and in the newspapers are copied directly from existing stock image libraries. This workshop set out to see what would happen when we created new images from scratch. We experimented with creative drawing techniques and collaborative dialogues to create images. Participants’ amazing imaginations and expertise went into a melting-pot which produced an array of outputs. This blogpost reports on a selection of the visual and conceptual takeaways! I offer this account as a personal recollection of the workshop – I can only hope to capture some of the main themes and moments, and I apologise for all that I have left out.
The event was held at the Science Gallery in London on 4th July 2023, between 3 and 5pm, and was hosted in partnership with London Data Week, funded by the London Office of Technology and Innovation (LOTI). In keeping with the focus of London Data Week and LOTI, the workshop set out to think about how AI is used every day in the lives of Londoners, to help Londoners think about the kind of AI they want, and to re-imagine AI so that we can build systems that work for us.
Workshop methodology
I said the workshop started out from scratch – well, almost. We certainly wanted to make use of the resources already out there, such as Better Images of AI: A Guide for Users and Creators, co-authored by Dr Kanta Dihal and Tania Duarte. This guide was helpful because it not only suggested some things to avoid, but also provided stimulation for what kind of images we might like to make instead. What made the workshop a success was the wide-ranging and generous contributions – verbal and visual – from invited artists and technology experts, as well as public participants, who all offered insights and produced images, some of which can be found below (or even in the Science Gallery).
The workshop was structured in two rounds, each with a live discussion and a creative drawing ‘challenge’. The approach was to stage a discussion between an artist and a technology expert (approx. 15 mins), after which all members of the workshop had some time (again, approx. 15 mins) for creative drawing. The purpose of the live discussion was to provide an accessible introduction to the topic and its challenges, after which we all tackled the challenge of visualising and representing different elements of AI production, use and impact. I will now briefly describe these dialogues, and unveil some of the images created.
Setting the scene
Tania Duarte (Founder, We and AI) launched the workshop with a warm welcome to all. Then, workshop host Dr Robert Elliot-Smith (Director of AI and Data Science at Digital Catapult) introduced the topic of Large Language Models (LLMs) by reminding the audience that such systems are like ‘autocorrect on steroids’: the model is simply very good at predicting words; it does not have any deep understanding of the meaning of the text it produces. He also discussed image generators, which work in a similar way and share similar problems, which is why certain AI-produced images end up garbling hands and arms: the models do not understand anatomy.
In response to this preliminary introduction, one participant, who described herself as a visual artist, expressed horror at the power of such image-generating and labelling AI systems to limit and constrain our perception of reality itself. She described how, if we are to behave as artists, we have to avoid seeing everything simply in terms of fixed categories, which conservatively restrain the imagination and keep it within a set of known categorisations, limiting not only our imagination but also our future. For instance, why is the thing we see in front of us necessarily a ‘wall’? Could it not be, seen more abstractly, simply a straight line?
From her perspective, AI models seem to be frighteningly powerful mechanisms for reinforcing existing categories of what we are seeing – and therefore also of how to see, of what things are, even of what we are and what kind of behaviour is expected. Another participant agreed: it is frustrating to get the same picture from 100 different inputs that all look so similar. Indeed, image generators might seem to be producing novelty, but there is an important sense in which they are reinforcing the past categories of the data on which they were trained.
This discussion raised big questions leading into the first challenge: the limitations of large language models.
Round 1: The Limitations of Large Language Models.
A live discussion was staged between Yasmine Boudiaf (recognised as one of ‘100 Brilliant Women in AI Ethics 2022,’ and fellow at the Ada Lovelace Institute) and Tamsin Nooney (AI Research, BBC R&D) about the process of creating LLMs.
Yasmine asked Tamsin about how the BBC, as a public broadcaster, can use LLMs in a reliable manner, and invited everyone in the room to note down any words they found intriguing, as those words might form a stimulus for their creative drawings.
Tamsin described an example LLM use case at the BBC: in producing a podcast, an LLM could summarise the content, add key markers and metadata labels, and help to process it. She emphasised how rigorous testing is required to gain confidence in an LLM’s reliability for a specific task before it can be used. A risk is that a lot of work might go into developing a model only for it never to be usable at all.
Following Yasmine’s line of questioning, Tamsin described how the BBC deals with the significant costs and environmental impacts of using LLMs. The BBC calculated that training its own LLM, even a very small one, would take up all of its servers at full capacity for over a year – so they won’t be doing that! The alternative is to pay other services, such as Amazon, to use their models, which means balancing costs: there are limits due to scale, cost and environmental impact.
This was followed by a quieter, but by no means silent, 15 minutes of drawing time in which all participants drew…
One participant used an AI image generator for their creative drawing, making a picture of a toddler covered in paint to depict the LLM and its unpredictable behaviours. Tamsin suggested that this might be giving the LLM too much credit! Toddlers, like cats and dogs, have a basic and embodied perception of the world and base knowledge, which LLMs do not have.
The experience of this discussion and drawing also raised, for another participant, more big questions. She discussed poet David Whyte’s work on the ‘conversational nature of reality’ and reflected on how the self is not just inside us but is created through interaction with others and through language. For instance, she mentioned that when you read or hear the word ‘yes’, you have a physical feeling of ‘yesness’ inside, and similarly for ‘no’. She suggested that our encounters with machine-made language produced by LLMs are similar. This language shapes our conversations and interactions, so there is a sense in which the ‘transformers’ (the technical term for the LLM machinery) are also helping to transform our sense of self and the boundary between what is reality and what is fantasy.
Here, we have the image made by artist Yasmine based on her discussion with Tamsin:
Yasmine writes:
This image shows an example of a Large Language Model in use. Audio data is gathered from a group of people in a meeting. Their speech is automatically transcribed into text data. The text is analysed and relevant segments are selected. The output generated is a short summary text of the meeting. It was inspired by BBC R&D’s process for segmenting podcasts, GPT-4 text summary tools and LOTI’s vision for taking minutes at meetings.
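For readers who want a more concrete sense of the pipeline Yasmine depicts, here is a minimal sketch of a ‘meeting audio → transcript → summary’ flow. It is illustrative only: the model names, the prompt and the use of the openai Python client are assumptions for the sketch, not the BBC’s or LOTI’s actual tooling.

```python
# Illustrative sketch only: a generic "meeting audio -> transcript -> summary"
# pipeline of the kind the image depicts. Model names, the prompt and the use
# of the `openai` client library are assumptions, not any organisation's setup.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def summarise_meeting(audio_path: str) -> str:
    # 1. Gather audio data and transcribe the speech into text.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )

    # 2. Ask an LLM to pick out the relevant segments and produce a short summary.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Summarise this meeting transcript in five bullet "
                           "points, highlighting decisions and action items.",
            },
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarise_meeting("meeting.mp3"))
```

Even a sketch like this makes the points raised in the discussion visible: every step has a cost, and the resulting summary is only as trustworthy as the testing behind it.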
Round 2: Generative AI in the Public Sector

This second and final round focused on use cases for generative AI in the public sector, specifically in local government. Again, a live discussion was held, this time between Emily Rand (illustrator, author of seven books, and recognised by the Children’s Laureate, Lauren Child, to be featured in Drawing Words) and Sam Nutt (Researcher & Data Ethicist, London Office of Technology and Innovation). They built on the previous exploration of LLMs by considering the new generative AI applications they enable for local councils and how these might transform our everyday services.
Emily described how she illustrates by hand, her work focusing on the tangible and the real. Making illustrations about AI, whose workings are not obviously visible, was an exciting new topic. See her illustration and commentary below.
Sam described his role as part of the innovation team that works across 26 of London’s boroughs and the Mayor of London, helping boroughs think about how to use data responsibly. In the context of local government data and services, a lot of the data collected about residents is statutory (meaning they cannot opt out of giving it), such as council tax data. There is a strong imperative, especially for sensitive personal health data, to protect privacy and minimise bias. He considered some use cases. For instance, council officers can use ChatGPT to draft letters to residents to increase efficiency, but they must not put any personal information into ChatGPT, otherwise data privacy can be compromised. Another example is the use of LLMs to summarise large archives of local government data, such as planning permission applications or the minutes from council meetings, which are lengthy and often technical; these could be made significantly more accessible to members of the public and researchers.
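To make the data-protection point concrete, here is a small, hypothetical sketch of the kind of redaction step a council tool might apply before any resident text reaches an external service such as ChatGPT. It is not LOTI’s or any council’s actual process, and simple pattern matching is nowhere near sufficient anonymisation on its own; it only illustrates the principle of keeping personal information out of the prompt.

```python
import re

# Hypothetical illustration: strip obvious personal identifiers from text
# before it is sent to an external LLM. Pattern matching like this is NOT
# sufficient anonymisation on its own; it only sketches the principle.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44\s?7\d{3}|\(?07\d{3}\)?)\s?\d{3}\s?\d{3}\b"),
    "uk_postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}\b", re.I),
    "national_insurance": re.compile(
        r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b", re.I
    ),
}


def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text


draft_request = (
    "Please draft a reply to Ms Jones (jones@example.com, 07700 900123) "
    "about her council tax query at SW1A 1AA."
)
print(redact(draft_request))
# -> "Please draft a reply to Ms Jones ([EMAIL REDACTED], [UK_PHONE REDACTED]) ..."
```

In practice, as Sam’s comments suggest, safeguards like this would sit inside a wider governance process rather than replace it.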
Sam also raised the concern that it is very important that residents know how councils use their data so that councils can be held accountable; this therefore has to be explained and made understandable to residents. Note that 3% of Londoners are totally offline, not using the internet at all – that is around 270,000 people, who have an equal right to understand how the council uses their data and who need to be reached through offline means. This example brings home the importance of increasing inclusive public AI literacy.
Again, we all drew. Here are a couple of striking images made by participants who also kindly donated their pictures and words to the project:
Emily started by illustrating the idea of bias in AI. Her initial sketches showed lines of people of various sizes, ages, ethnicities and bodies, with various cursors selecting the cis, white, able-bodied people over the others. Emily also sketched the shape of a city and ended up combining the two. She added frames to show the way different people are clustered; each frame shows the area around a person, where they might have a device sending data about them.
At the end of the workshop, I was left with feelings of admiration and positivity. Admiration at the stunning array of visual and conceptual responses from participants, and in particular the candid and open manner of their sharing. And positivity because, although the responses often highlighted the dangers of AI as well as its benefits – its capacity to reinforce systemic bias and aid exploitation – these critiques did not tend to be delivered in an elegiac or sad tone; they seemed more like an optimistic desire to understand the technology and make it work in an inclusive way. This seemed a powerful approach.
The results
The Better Images of AI mission is to create a free repository of better images of AI with more realistic, accurate, inclusive and diverse ways to represent AI. Was this workshop a success, and how might it inform Better Images of AI’s work going forward?
Tania Duarte, who coordinates the Better Images of AI collaboration, certainly thought so:
It was great to see such a diverse group of people come together to find new and incredibly insightful and creative ways of explaining and visualising generative AI and its uses in the public sector. The process of questioning and exploring together showed the multitude of lenses and perspectives through which often misunderstood technologies can be considered. It resulted in a wealth of materials which the participants generously left with the project, and we aim to get some of these developed further to work on the metaphors and visual language. We are very grateful for the time participants put in, and the ideas and drawings they donated to the project. The Better Images of AI project, as an unfunded non-profit, is hugely reliant on volunteers and donated art, and it is a shame such work is so undervalued. Often stock image creators get paid $5 – $25 per image by the big image libraries, which is why they don’t have time to spend researching AI and considering these nuances, and instead copy existing stereotypical images.
Tania Duarte
The images created by Emily Rand and Yasmine Boudiaf are being added to the Better Images of AI Free images library on a Creative Commons licence as part of the #NewImageNovember campaign. We hope you will enjoy discovering a new creative interpretation each day of November, and will be able to use and share them as we double the size of the library in one month.
Sign up for our newsletter to get notified of new images here.
Acknowledgements
A big thank you to organisers, panellists and artists:
Jennifer Ding – Senior Researcher for Research Applications at The Alan Turing Institute
Yasmine Boudiaf – Fellow at Ada Lovelace Institute, recognised as one of ‘100 Brilliant Women in AI Ethics 2022’
Dr Tamsin Nooney – AI Research, BBC R&D
Emily Rand – illustrator and author of seven books and recognised by the Children’s Laureate, Lauren Child, to be featured in Drawing Words
Sam Nutt – Researcher & Data Ethicist, London Office of Technology and Innovation (LOTI)
Dr Tomasz Hollanek – Research Fellow, Leverhulme Centre for the Future of Intelligence
Laura Purseglove – Producer and Curator at Science Gallery London
Dr Robert Elliot-Smith – Director of AI and Data Science at Digital Catapult
Tania Duarte – Founder, We and AI and Better Images of AI
Also many thanks to the We and AI team, who volunteered as facilitators to make this workshop possible:
Medina Bakayeva, UCL master’s student in cyber policy & AI governance, communications background
Marissa Ellis, Founder of Diversily.com, Inclusion Strategist & Speaker @diversily
Valena Reich, MPhil in Ethics of AI, Gates Cambridge scholar-elect, researcher at We and AI
Ismael Kherroubi Garcia FRSA, Founder and CEO of Kairoi, AI Ethics & Research Governance
Dr Peter Rees was project manager for the workshop
And a final appreciation for our partners: LOTI, the Science Gallery London, and London Data Week, who made this possible.
Three new workshops have been announced for September and October by the Better Images of AI project team. We will once again bring a range of AI practitioners and communicators together with artists and designers working in different creative fields, to explore in small groups how to represent artificial intelligence technologies and their impacts in more helpful ways.
Following an insightful first workshop in July, we’re inviting anyone in relevant fields to apply to join the remaining workshops, taking place both online and in person. We are particularly interested in hearing from journalists who write about AI. However, if you are interested in critiquing and exploring new images in an attempt to find more inclusive, varied and realistic visual representations of AI, we would like to hear from you!
Our next workshops will be held on:
Monday 12 September, 3:30 – 5:30pm UTC+1 – ONLINE
Wednesday 28 September, 3 – 5pm UTC+1 – ONLINE
Thursday 6 October, 2:30 – 4:30pm UTC+1 – IN PERSON – The Alan Turing Institute, British Library, 96 Euston Road, London NW1 2DB
If you would like to attend, or know anyone in these fields who might, please email research@betterimagesofai.org, specifying which date. Please include some information about your current field and, ideally, a link to an online profile or portfolio.
The workshops will look at approaches to meet the criteria of being a ‘better image of AI’, identified by stakeholders at earlier roundtable sessions.
The discussions in all four workshops will inform an Arts and Humanities Research Council-funded research project undertaken by the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and organised by We and AI.
Our first workshop was held on 25 July and brought together over 20 individuals from the creative arts, communications, technology and academia to discuss sets of curated and created images of AI and to explore next steps in meeting the needs identified for providing better images of AI.
The four workshops follow a series of roundtable discussions, which set out to examine and identify user requirements for helpfully communicating visual narratives, metaphors, information and stories related to AI.
The first workshop was incredibly rich in terms of generating creative ideas and giving feedback on gaps in current imagery. Not only has it surfaced lots of new concepts for the wider Better Images of AI project to work on, but the series of workshops will also form part of a research paper to be published in January 2023. This process is really critical to ensuring that our mission to communicate AI in more inclusive, realistic and transparent ways is informed by a variety of stakeholders and underpinned by good evidence.
Dagmar Monett, Head of the Computer Science Department at Berlin School of Economics and Law and one of the July workshop attendees, said: “Better Images of AI also means better AI: coming forward in AI as a field also means creating and using narratives that don’t distort its goals nor obscure what is possible from its actual capacities. Better Images of AI is an excellent example of how to do it the right way.”
The academic research project is being led by Dr Kanta Dihal, who has published many books, journal articles and papers related to emerging technology narratives and public perceptions.
The workshops will ultimately contribute to research-informed design brief guidance, which will then be made freely available to anyone commissioning or selecting images to accompany communications – such as news articles, press releases, web communications, and research papers related to AI technologies and their impacts.
To register interest: Email our team at research@betterimagesofai.org, letting us know which date you’d like to attend and giving us some information about your current field as well as a link to your LinkedIn profile or similar.