How not to communicate about AI in education

Seventeen multicoloured post-it notes are roughly positioned in a strip on a whiteboard. Each has a hand-drawn sketch in pen, answering the prompt written on one of the post-it notes: "AI is...". The sketches are all very different: some are patterns representing data, some are cartoons, some show drawings of things like data centres or stick-figure drawings of the people involved.

Camila Leporace – journalist, researcher, and PhD in Education – argues that innovation may not be in artificial intelligence (AI) but in our critical capacity to evaluate technological change.


Searching for "AI in education" on Google Images here in Brazil, in November 2023, yields a clear predominance of images of robots. The first five images that appeared for me were:

  1. A robot teaching numeracy in front of a school blackboard; 
  2. A girl looking at a computer screen, from which the icons she is viewing "spill out"; 
  3. A series of icons and a hand catching them in the air; 
  4. A robot finger and a human finger trying to find each other as in Michelangelo's "Creation of Adam," but with a brain between them keeping the fingers from touching; the robot finger touches the left half of the brain (which is "artificial" and blue), while the human finger touches the right half (which is coloured); and
  5. A drawing (not a photo) of a girl sitting with a book and, next to her, a robot sitting on two books, opposite a screen.

It is curious (and harmful) how inaccurately the images associated with artificial intelligence (AI) in education represent what is actually happening with the introduction of these technologies in Brazilian schools – in fact, in almost every school in the world. AI is not a technology that can be "touched." Instead, it is a resource present in the programming of the systems we use in an invisible, intangible way. For example, Brazilian schools have been adopting AI tools in writing activities, such as the correction of students' essays, or question-and-answer adaptive learning platforms. In Denmark, teachers have been using apps to audit students' 'moods' through data collection and the generation of bar charts. In the UK, the surveillance of students and teachers as a consequence of data harvesting is a topic getting a lot of attention.

AI, however, is not restricted to educational resources designed for teaching and learning; it is present in various devices useful for learning beyond formal contexts. We all use "learning machines" in our daily lives, as machine learning is now all around us, trying to gather information about us to provide content and keep us connected. While we do so, we provide data to feed this machinery, and algorithms classify the large masses of data they receive from us. Often, it is young people who – in contact with algorithmic platforms – provide their data while browsing and, in return, receive content that, in theory, matches their profiles. This is quite controversial, raising questions about data privacy, ethics, transparency and what these data generation and harvesting procedures can add (or not) to the future of children and young people. Algorithmic neural networks are based on prediction, applying statistics and other techniques to process data and obtain results. We humans, however, are not predictable.

The core problem with images of robots and "magic" screens in education is that they don't properly communicate what is happening with AI in the context of teaching and learning. These uninformative images end up diverting attention from what is really important: interactions on social networks, chatbots, and the countless emotional, psychological and developmental implications arising from these environments. While there is speculation about teachers being replaced by AI, teachers have actually never been more important in supporting parents and carers as they educate children about navigating the digital world. That's why the prevalence of robot teachers in the imagination doesn't seem to help at all. And this prevalence is definitely not new!

When we look into the history of automation in education, we find that one hundred years ago, in the 1920s, Sidney Pressey developed analog teaching machines essentially to administer tests to students. Pressey's machines preceded those developed by the behaviourist B. F. Skinner in the 1950s, promising – just as today's adaptive AI teaching platforms do – to personalise learning, make the process more fun and relieve the teacher of repetitive tasks. When they emerged, those inventions not only promised benefits similar to those claimed for AI systems today, but also raised concerns similar to those we face today, including the possibility of replacing the teacher entirely. We might then ask: where is the real innovation in educational automation, if the old analog machines are so similar to today's in their assumptions, applications and the discourse they carry?

Innovation doesn’t lie in big data or deep neural networks, the basic ingredients that boost the latest technologies we are aware of. It lies in our critical capacity to look  at the changes brought about by AI technologies with restraint and to be careful about delegating to them what we can’t actually give up. It lies in our critical thinking on how the learning processes can or cannot be supported by learning machines.

More than ever, we need to analyse what is truly human in intelligence, cognition and creativity; this can guide us in not delegating to artificial systems what cannot be delegated, no matter how powerful those systems are at processing data. Communication through images requires special attention. After all, images generate impressions, shape perceptions and can completely alter the general audience's sense of an important topic. The apprehension we've had towards technology for decades is enough. In the midst of the technological hype, we need critical thinking, shared ideas, imagination and accuracy. And we certainly need better images of AI.

Better images of AI can support AI literacy for more people

Marika Jonsson's book cover; a simple yellow cover with the title (in Swedish): "En bok om AI"

Marika Jonsson, doctoral student at KTH Royal Institute of Technology, reflects on overcoming the challenge of developing an Easy Read book on artificial intelligence (AI) with so few informative images about AI available.


There are many things that I take for granted. One of them is that I should be able to easily find information about things I want to know more about. Like artificial intelligence (AI). I find AI exciting and interesting, and I see the possibilities of AI helping me in everyday life. And because I have been able to read about AI, I have also realised that AI can be used for bad things; that it creates risks and can promote inequality in society. Most of us use or are exposed to AI daily, sometimes without being aware of it.

Between May 2020 and June 2023, I participated in a project called AllAgeHub in Sweden, where one of the aims was to spread knowledge about how to use welfare technology to empower people in their everyday lives. The project included a course on AI for the participants, who worked in the public healthcare and social care sectors. The participants then wanted to spread knowledge about AI to clients in their respective sectors. The clients could be, for example, people working in adapted workplaces or living in supported housing. There was a demand for information in Easy Read format. Easy Read format is when you write in easy-to-read language, with common words, short sentences and in simple chronological order. The text should be spaced out and have short lines, and the texts are often supported by images. Easy Read is both about how you write and about how you present what is written. The only problem was that I found almost no Easy Read information about AI in Swedish. My view is that the lack of Easy Read information about AI is a serious matter.

A basic principle of democracy is that all people are equal and should have the same rights. Therefore, I believe we must have access to information in an understandable form. How else can you express your opinion, vote or consent to something in an informed way? That was the reason I decided to write an Easy Read book about AI. My ambition was to write concretely and support the text with pictures. Then I stumbled on the huge problem of finding informative pictures about AI. The images I found were often abstract or inaccurate. They often depicted AI as robots, conveying the impression that AI is a creature that could take over the earth and destroy humanity. With images like that, it was hard to explain that, for example, personalised ads, which can entice me to buy things I don't really need, are based on AI technology. Many people don't know that we are exposed to AI that affects us in everyday life through cookie choices on the internet. Such images might also make people afraid of using practical AI tools that can make everyday life easier, such as natural language processing (NLP) tools that convert speech to text or read text aloud. So, I had to create my own pictures.

I must confess, it was difficult to create clear images that explain AI. I chose to create images that show situations where AI is used, and tried to visualise how certain kinds of AI might operate. One example: I visualised why a chatbot might give the wrong answer by showing how a word can mean two different things, with a picture of each meaning. The two different meanings give the AI tool two possible interpretations of what issue is at hand. The images are by no means perfect, but they are an attempt at explaining some aspects of AI.

Two images with Swedish text explaining them. 1. A box of raspberries. 2. A symbol of a person carrying a bag. The Swedish word ”bär” is present in both explanations.
The word for 'carry' and the word for 'berry' are the same in Swedish. The text says: “The word berry can mean two things. Berries that you eat. A person carrying a bag.”
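
To make the same ambiguity concrete in code rather than pictures, here is a toy sketch of my own (not from the book): a bot that looks words up without any surrounding context has no way to choose between the two senses of "bär".

```python
# Toy illustration: "bär" means both "berries" and "carries" in Swedish.
SENSES = {
    "bär": ["berries (fruit you eat)", "carries (as in carrying a bag)"],
}

def naive_sense(word: str) -> str:
    # With no surrounding context, the bot just takes the first sense it
    # knows, which is wrong whenever the other meaning was intended.
    return SENSES.get(word, ["unknown"])[0]

print(naive_sense("bär"))  # always "berries (...)", even in
                           # "Hon bär en väska" ("She carries a bag").
```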

The work of creating concrete, comprehensible images that support our understanding of AI can strengthen democracy by giving more people the opportunity to understand information about the tools they use in their day-to-day lives. I hope more people will be inspired to write about AI in Easy Read, and create and share clear and descriptive images of AI.

As they say, ”a picture is worth a thousand words,” so we need to choose images that tell the same story as the words we use. At the time of writing this blog post, I feel there are very few images to choose from. I am hopeful we can change this, together!


The Easy Read book about AI includes a study guide. It is in Swedish, and is available for free as a pdf on AllAgeHub’s website:

https://allagehub.se/2023/06/29/nu-finns-en-lattlast-bok-om-ai-att-ta-del-av/

Co-creating Better Images of AI

Yasmine Boudiaf (left) and Tamsin Nooney (right) deliver a talk during the workshop ‘Co-creating Better Images of AI’

In July 2023, Science Gallery London and the London Office of Technology and Innovation co-hosted a workshop helping Londoners think about the kind of AI they want. In this post, Dr Peter Rees reflects on the event, describes its methodology, and celebrates some of the new images that resulted from the day.


Who can create better images of Artificial Intelligence (AI)? Common misleading tropes dominate our culture, such as white humanoid robots, glowing blue brains, and various iterations of the extinction of humanity. Better Images of AI is on a mission to increase AI literacy and inclusion by countering unhelpful images. Everyone should get a say in what AI looks like and how they want to make it work for them. No one perspective or group should dominate how AI is conceptualised and imagined.

This is why we were delighted to run the workshop 'Co-creating Better Images of AI' during London Data Week. It was a chance to bring together over 50 members of the public, including creative artists, technologists, and local government representatives, to each make our own images of AI. Most images of AI that appear online and in the newspapers are copied directly from existing stock image libraries. This workshop set out to see what would happen when we created new images from scratch. We experimented with creative drawing techniques and collaborative dialogues to create images. Participants' amazing imaginations and expertise went into a melting pot which produced an array of outputs. This blog post reports on a selection of the visual and conceptual takeaways! I offer this account as a personal recollection of the workshop—I can only hope to capture some of the main themes and moments, and I apologise for all that I have left out.

The event was held at the Science Gallery in London on 4th July 2023, between 3pm and 5pm, in partnership with London Data Week and funded by the London Office of Technology and Innovation (LOTI). In keeping with that focus, the workshop set out to think about how AI is used every day in the lives of Londoners, to help Londoners think about the kind of AI they want, and to re-imagine AI so that we can build systems that work for us.

Workshop methodology

I said the workshop started from scratch—well, almost. We certainly wanted to make use of the resources already out there, such as Better Images of AI: A Guide for Users and Creators, co-authored by Dr Kanta Dihal and Tania Duarte. This guide was helpful because it not only suggested some things to avoid, but also provided stimulation for the kinds of images we might like to make instead. What made the workshop a success was the wide-ranging and generous contributions—verbal and visual—from invited artists and technology experts, as well as public participants, who all offered insights and produced images, some of which can be found below (or even in the Science Gallery).

The workshop was structured in two rounds, each with a live discussion and a creative drawing 'challenge'. The approach was to stage a discussion between an artist and a technology expert (approx. 15 mins), after which all members of the workshop had some time (again, approx. 15 mins) for creative drawing. The purpose of the live discussion was to provide an accessible introduction to the topic and its challenges, after which we all tackled the challenge of visualising and representing different elements of AI production, use and impact. I will now briefly describe these dialogues, and unveil some of the images created.

Setting the scene

Tania Duarte (Founder, We and AI) launched the workshop with a warm welcome to all. Then, workshop host Dr Robert Elliot-Smith (Director of AI and Data Science at Digital Catapult) introduced the topic of Large Language Models (LLMs) by reminding the audience that such systems are like 'autocorrect on steroids': the model is simply very good at predicting words; it does not have any deep understanding of the meaning of the text it produces. He also discussed image generators, which work in a similar way and with similar problems, which is why certain AI-produced images end up garbling hands and arms: they do not understand anatomy.
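
As a rough feel for that 'predicting words' idea, here is a minimal toy sketch (my own illustration, not anything the speakers showed): it predicts each next word purely from bigram counts over a tiny corpus. Real LLMs are incomparably more sophisticated, but the principle of picking a statistically likely next token is the same.

```python
# Toy "autocorrect on steroids": predict the next word from bigram counts.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in the corpus."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "."

# Generate a short continuation greedily, one word at a time. Note the model
# "understands" nothing: it soon loops, because it only follows statistics.
word, generated = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))  # prints: the cat sat on the cat sat
```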

In response to this preliminary introduction, one participant, who described herself as a visual artist, expressed horror at the power of such image-generating and labelling AI systems to limit and constrain our perception of reality itself. She described how artists must avoid seeing everything simply in terms of fixed categories, which can conservatively restrain the imagination, keeping it within a set of known categorisations and thereby limiting not only our imagination but also our future. For instance, why is the thing we see in front of us necessarily a 'wall'? Could it not be, seen more abstractly, simply a straight line?

From her perspective, AI models seem to be frighteningly powerful mechanisms for reinforcing existing categories for what we are seeing—and therefore also for how to see, what things are, even what we are, and what kind of behaviour is expected. Another participant agreed: it is frustrating to get essentially the same picture from 100 different inputs, all looking so similar. Indeed, image generators might seem to produce novelty, but there is an important sense in which they reinforce the past categories of the data on which they were trained.

This discussion raised big questions leading into the first challenge: the limitations of large language models.

Round 1: The Limitations of Large Language Models

A live discussion was staged between Yasmine Boudiaf (recognised as one of ‘100 Brilliant Women in AI Ethics 2022,’ and fellow at the Ada Lovelace Institute) and Tamsin Nooney (AI Research, BBC R&D) about the process of creating LLMs.

Yasmine asked Tamsin about how the BBC, as a public broadcaster, can use LLMs in a reliable manner, and invited everyone in the room to note down any words they found intriguing, as those words might form a stimulus for their creative drawings.

Tamsin described an example LLM use case for the BBC: in producing a podcast, an LLM could summarise the content, add key markers and metadata labels, and help process it. She emphasised the rigorous testing required to gain confidence in an LLM's reliability for a specific task before it can be used. A risk is that a lot of work might go into developing a model only for it never to be usable at all.

Following Yasmine's line of questioning, Tamsin described how the BBC deal with the significant costs and environmental impacts of using LLMs. The BBC calculated that training their own LLM, even a very small one, would take up all their servers at full capacity for over a year—so they won't do that! The alternative is to pay other services, such as Amazon, to use their models, which means balancing costs: there are limits due to scale, cost and environmental impact.

This was followed by a quieter, but by no means silent, 15 minutes in which all participants drew…

Drawing by Marie Jannine Murmann. Abstract cogwheels suggesting that AI tools can be quickly developed to output nonsense but, with adequate human oversight and input, AI tools can be iteratively improved to produce the best outputs they can.

One participant used an AI image generator for their creative drawing, making a picture of a toddler covered in paint to depict the LLM and its unpredictable behaviours. Tamsin suggested that this might be giving the LLM too much credit! Toddlers, like cats and dogs, have a basic and embodied perception of the world and base knowledge, which LLMs do not have.

Drawing by Howard Elston. An LLM is drawn as an ear, interpreting different inputs from various children.

The experience of this discussion and drawing also raised, for another participant, more big questions. She discussed poet David Whyte's work on the 'conversational nature of reality' and reflected on how the self is not just inside us but is created through interaction with others and through language. For instance, she mentioned that when you read or hear the word 'yes', you have a physical feeling of 'yesness' inside, and similarly for 'no'. She suggested that our encounters with machine-made language produced by LLMs are similar. This language shapes our conversations and interactions, so there is a sense in which the 'transformers' (the technical term for the LLM machinery) are also helping to transform our sense of self and the boundary between what is reality and what is fantasy.

Here, we have the image made by artist Yasmine based on her discussion with Tamsin:

Image by Yasmine Boudiaf. Three groups of icons representing people have shapes travelling between them and a page in the middle of the image. The page is a simple rectangle with straight lines representing data. The shapes travelling towards the page are irregular and in squiggly bands.

Yasmine writes:

This image shows an example of a Large Language Model in use. Audio data is gathered from a group of people in a meeting. Their speech is automatically transcribed into text data. The text is analysed and relevant segments are selected. The output generated is a short summary text of the meeting. It was inspired by BBC R&D's process for segmenting podcasts, GPT-4 text summary tools and LOTI's vision for taking minutes at meetings.

Yasmine Boudiaf

You can now find this image in the Better Images of AI library, and use it with the appropriate attribution: Image by Yasmine Boudiaf / © LOTI / Better Images of AI / Data Processing / CC-BY 4.0. With the first challenge complete, it was time for the second round.
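
For readers who want the pipeline in Yasmine's image spelled out step by step, here is a minimal sketch. The three stage functions are deliberately toy stand-ins of my own invention, not the BBC's or LOTI's actual tooling: in a real system, each would be a separate model or service.

```python
# Toy sketch of the pipeline: audio in, short meeting summary out.
def transcribe(audio: bytes) -> str:
    # Stand-in for a speech-to-text (ASR) model.
    return audio.decode("utf-8")

def select_relevant(transcript: str) -> list[str]:
    # Toy relevance filter: keep sentences that record a commitment.
    return [s for s in transcript.split(". ") if "will" in s]

def summarise(segments: list[str]) -> str:
    # Stand-in for an LLM summarisation step.
    return "Meeting summary - action points: " + "; ".join(segments)

audio = b"We met at noon. Sam will draft the letter. It rained. Ana will book the room."
print(summarise(select_relevant(transcribe(audio))))
# Meeting summary - action points: Sam will draft the letter; Ana will book the room.
```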

Round 2: Generative AI in Public Services

This second and final round focused on use cases for generative AI in the public sector, specifically in local government. Again, a live discussion was held, this time between Emily Rand (illustrator, author of seven books, and recognised by the Children's Laureate, Lauren Child, to be featured in Drawing Words) and Sam Nutt (Researcher & Data Ethicist, London Office of Technology and Innovation). They built on the previous exploration of LLMs by considering the new generative AI applications these enable for local councils, and how they might transform our everyday services.

Emily described how she illustrates by hand, and characterised her work as focusing on the tangible and the real. Making illustrations about AI, whose workings are not obviously visible, was an exciting new topic. See her illustration and commentary below.

Sam described his role as part of the innovation team which works across 26 of London's boroughs and the Mayor of London. He helps boroughs think about how to use data responsibly. In the context of local government data and services, a lot of the data collected about residents is statutory (meaning they cannot opt out of giving it), such as council tax data. There is a strong imperative when dealing with such data, especially sensitive personal health data, to protect privacy and minimise bias. He considered some use cases. For instance, council officers can use ChatGPT to draft letters to residents to increase efficiency, but they must not put any personal information into ChatGPT, otherwise data privacy can be compromised. Or, for example, LLMs could summarise large archives of local government data, such as planning permission applications or the minutes from council meetings, which are lengthy and often technical, making them significantly more accessible to members of the public and researchers.
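
As a minimal sketch of Sam's 'no personal information into ChatGPT' rule, one could imagine redacting obvious identifiers before a draft ever leaves the council's systems. The regex patterns below are simplistic, invented illustrations, not production-grade PII detection, and this is not LOTI's actual process.

```python
# Toy redaction pass: mask obvious personal identifiers before sending a
# draft letter to an external LLM service. Patterns are illustrative only.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\+44|0)\d(?:[\s-]?\d){8,9}\b"), "[PHONE]"),
    (re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}\b"), "[POSTCODE]"),
]

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

draft = "Dear resident, contact j.smith@example.org or 020 7946 0000 re SW1A 1AA."
print(redact(draft))
# Dear resident, contact [EMAIL] or [PHONE] re [POSTCODE].
```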

Sam also stressed that it is very important that residents know how councils use their data, so that councils can be held accountable; this therefore has to be explained and made understandable to residents. Note that 3% of Londoners are totally offline, not using the internet at all: that's roughly 270,000 people (3% of London's roughly nine million residents) who also have an equal right to understand how the council uses their data, and who need to be reached through offline means. This example brings home the importance of increasing inclusive public AI literacy.

Again, we all drew. Here are a couple of striking images made by participants who also kindly donated their pictures and words to the project:

Drawing by Yokako Tanaka. An abstract blob is outlined encrusted with different smaller shapes at different points around it. The image depicts an ideal approach to AI in the public sector, which is inclusive of all positionalities.
Drawing by Aisha Sobey. A computer claims to have “solved the banana” after listing the letters that spell “banana” – whilst a seemingly analytical process has been followed, the computer isn’t providing much insight nor solving any real problem.
Practically identical houses are lined up at the bottom of the image. Out of each house's chimney, columns of binary code – 1's and 0's – emerge.
“Data Houses,” by Joahna Kuiper. Here, the artist described how these three near-identical houses are each emitting columns of binary code—a new kind of smoke signal. In her words: ‘one of these houses is sending out a distress signal, calling out for help, but I bet you don’t know which one.’ The problem of differentiating who needs what, and when.
A big eye floats above rectangles containing rows of dots and cryptic shapes.
“Big eye drawing,” by Hui Chen. Another participant described their feeling that ‘we are being watched by big eye, constantly checking on us and it boxes us into categories’. Certain areas are highly detailed and refined; other areas, the ‘murky’ or ‘cloudy’ bits, are where people don’t fit the model so well, and they are more invisible.
Rows of people are randomly overlayed by computer cursors.
An early iteration of Emily Rand’s “AI City.”

Emily started by illustrating the idea of bias in AI. Her initial sketches showed lines of people of various sizes, ages, ethnicities and bodies, with various cursors selecting the cis, white, able-bodied people over the others. Emily also sketched the shape of a city, and ended up combining the two. She added frames to show the way different people are clustered; each frame shows the area around a person, where they might have a device sending data about them.

Emily’s final illustration is below, and can be downloaded from here and used for free with the correct attribution: Image by Emily Rand / © LOTI / Better Images of AI / AI City / CC-BY 4.0.

Building blocks are overlaid with digital squares that highlight people living their day-to-day lives through windows. Some of the squares are accompanied by cursors.

At the end of the workshop, I was left with feelings of admiration and positivity. Admiration of the stunning array of visual and conceptual responses from participants, and in particular the candid and open manner of their sharing. And positivity because the responses often highlighted the dangers of AI as well as the benefits—its capacity to reinforce systemic bias and aid exploitation—yet these critiques did not tend to be delivered in an elegiac or sad tone; they seemed more like an optimistic desire to understand the technology and make it work in an inclusive way. This seemed a powerful approach.

The results

The Better Images of AI mission is to create a free repository of images offering more realistic, accurate, inclusive and diverse ways to represent AI. Was this workshop a success, and how might it inform Better Images of AI work going forward?

Tania Duarte, who coordinates the Better Images of AI collaboration, certainly thought so:

It was great to see such a diverse group of people come together to find new and incredibly insightful and creative ways of explaining and visualising generative AI and its uses in the public sector. The process of questioning and exploring together showed the multitude of lenses and perspectives through which often misunderstood technologies can be considered. It resulted in a wealth of materials which the participants generously left with the project, and we aim to develop some of these further, working on the metaphors and visual language. We are very grateful for the time participants put in, and the ideas and drawings they donated to the project. The Better Images of AI project, as an unfunded non-profit, is hugely reliant on volunteers and donated art, and it is a shame such work is so undervalued. Often stock image creators get paid $5–$25 per image by the big image libraries, which is why they don’t have time to spend researching AI and considering these nuances, and instead copy existing stereotypical images.

Tania Duarte

The images created by Emily Rand and Yasmine Boudiaf are being added to the Better Images of AI free image library under a Creative Commons licence as part of the #NewImageNovember campaign. We hope you will enjoy discovering a new creative interpretation each day of November, and will be able to use and share them as we double the size of the library in one month.

Sign up for our newsletter to get notified of new images here.

Acknowledgements

A big thank you to organisers, panellists and artists:

  • Jennifer Ding – Senior Researcher for Research Applications at The Alan Turing Institute
  • Yasmine Boudiaf – Fellow at Ada Lovelace Institute, recognised as one of ‘100 Brilliant Women in AI Ethics 2022’
  • Dr Tamsin Nooney – AI Research, BBC R&D
  • Emily Rand – illustrator and author of seven books and recognised by the Children’s Laureate, Lauren Child, to be featured in Drawing Words
  • Sam Nutt – Researcher & Data Ethicist, London Office of Technology and Innovation (LOTI)
  • Dr Tomasz Hollanek – Research Fellow, Leverhulme Centre for the Future of Intelligence
  • Laura Purseglove – Producer and Curator at Science Gallery London
  • Dr Robert Elliot-Smith – Director of AI and Data Science at Digital Catapult
  • Tania Duarte – Founder, We and AI and Better Images of AI

Also many thanks to the We and AI team, who volunteered as facilitators to make this workshop possible:

  • Medina Bakayeva, UCL master’s student in cyber policy & AI governance, communications background
  • Marissa Ellis, Founder of Diversily.com, Inclusion Strategist & Speaker @diversily
  • Valena Reich, MPhil in Ethics of AI, Gates Cambridge scholar-elect, researcher at We and AI
  • Ismael Kherroubi Garcia FRSA, Founder and CEO of Kairoi, AI Ethics & Research Governance
  • Dr Peter Rees, project manager for the workshop

And a final appreciation for our partners: LOTI, the Science Gallery London, and London Data Week, who made this possible.

Related article from BIoAI blog: ‘What do you think AI looks like?’: https://blog.betterimagesofai.org/what-do-children-think-ai-looks-like/

A new Better Image of AI – every day for November

Visit the free image library throughout November to see a range of new images from exciting artists. 30 New Images in 30 Days!

Announcing 30 New Images in 30 Days – one new image being added to the Better Images of AI Library each day of November! We and AI Founder and Better Images of AI coordinator Tania Duarte reflects on the excitement and challenges involved in this next stage of the Better Images of AI project.


In December 2021, Better Images of AI launched what was, at the time, intended to be a small set of inspirational images. The hope was that providing some images which attempted to show alternative ways to represent AI technologies and their impacts, based on research about how currently available images are harmful or unhelpful, would inspire other creators; that the images would prompt thought from journalists and other communicators, throw down the gauntlet to image libraries, get more people to share ideas with a growing community, and help viewers develop better mental models about AI. So, nearly two years in, how is it going?

The good

On the one hand, we have been overwhelmed by the response. The images, most of which are donated and all of which are by insightful and talented artists, have clearly helped a wide range of people and organisations communicate in ways that better represent their message and provide more interesting and engaging moments with audiences. They have also provided creative provocations and learning opportunities, helped users differentiate themselves from the boring blue brains and white robots, and enabled them to avoid fostering misunderstandings about AI.

Images have been downloaded from the library across the world; they have been used in news media, business and academic presentations, blogs, websites, event banners, brochures, and reports; and they have been viewed by millions of people. We have been pleased to see them bring life to stories in publications such as TIME, the Washington Post and the Guardian, but also to statements from influential AI-related organisations, and in academia and on courses, where they are reaching the next generations.

We have seen new images influenced by some of the approaches and learned from the novel interpretations and adaptations people have made. We’ve had feedback and insights from users and stakeholders via a research project which resulted in a Guide to help make the case for better images. 

The bad

However, the job is far from over. New text-to-image generators trained on the existing tropes are being used to illustrate AI and, unsurprisingly, are replicating them and feeding back anthropomorphic representations into a seemingly never-ending production line of scary robots.

As more parts of the internet, more industries and more parts of society become occupied with AI for the first time, new users are being brought to the still limited range of stock images labelled “AI.” The boom in generative AI, and the increased coverage given to narratives around existential risk and superintelligent AGI, have breathed new life into the sci-fi narratives which displace more accurate and insightful discussions about AI.

While we have received some funding to create new images (more about that soon!), our core operations and project remain unfunded, and, indeed, we have lost many funding applications despite such demonstrable impact. This means that the non-profit volunteer organisation We and AI, which manages the collaboration and coordinates the project and site, has also taken on the running costs, despite not being funded to do so. It takes time to explore and produce impactful and meaningful visual representations of complex topics; to consult with and for a wide range of image users, volunteers, creatives, advocates, and advisors across the world; to communicate, support and answer queries about the project; to build new proposals and potential partnerships; and to evaluate, prepare and upload images and liaise with artists. It takes money to host and maintain the website, and to build new functionality in advance of making it more scalable.

As a result, we have had a backlog of images and articles, and have not yet launched some upgrades to the site that were made to enable the library to grow. This has been frustrating, as we know that many users have exhausted the existing images and are keen to have a wider selection. And there is a greater need than ever for more pictures related to AI!

The beautiful

It’s therefore with great joy that we can announce that, with support from volunteers at We and AI, we have finally been able to get together and process all of these images, and will upload one a day for the next 30 days!

We also have some new blog articles written to help share experiences and insight into visual communication of AI from a range of We and AI community members, and a couple of new supporter announcements. 

We will share the stories, projects and motivations behind all of these images over the month of November, as we often find that these discussions prompt important conversations about AI and our relationship with it. We hope you will enjoy discovering a new creative interpretation every day, and will be able to use and share them as we double the size of the library in one month. Check out the first one today.

We are extremely grateful to all the artists and everybody involved in the creation of the images we host.