How not to communicate about AI in education

Seventeen multicoloured post-it notes are roughly positioned in a strip across a whiteboard. Each one has a hand-drawn pen sketch answering the prompt written on one of the notes: "AI is...". The sketches are all very different: some are patterns representing data, some are cartoons, and some show drawings of things like data centres, or stick-figure drawings of the people involved.

Camila Leporace – journalist, researcher, and PhD in Education – argues that innovation may not be in artificial intelligence (AI) but in our critical capacity to evaluate technological change.


When I searched for "AI in education" on Google Images here in Brazil, in November 2023, there was a clear predominance of images of robots. The first five images that appeared for me were:

  1. A robot teaching numeracy in front of a school blackboard; 
  2. A girl looking at a computer screen, with the icons she is viewing "spilling out" of it; 
  3. A series of icons and a hand catching them in the air; 
  4. A robot finger and a human finger reaching for each other as in Michelangelo's "Creation of Adam," but with a brain between them keeping the fingers from touching; the robot finger touches the left half of the brain (which is "artificial" and blue), while the human finger touches the right half (which is coloured); and
  5. A drawing (not a photo) of a girl sitting with a book, and a robot sitting on two books next to her, opposite a screen.

It is curious (and harmful) how inaccurately the images associated with artificial intelligence (AI) in education represent what is actually happening as these technologies enter Brazilian schools – in fact, almost every school in the world. AI is not a technology that can be "touched." It is a resource present in the programming of the systems we use, in an invisible, intangible way. For example, Brazilian schools have been adopting AI tools in writing activities, such as the correction of students' essays, and question-and-answer adaptive learning platforms. In Denmark, teachers have been using apps to audit students' 'moods' through data collection and the generation of bar charts. In the UK, the surveillance of students and teachers that results from data harvesting is a topic receiving a lot of attention.

AI, however, is not restricted to educational resources designed for teaching and learning; it is present in various devices useful for learning beyond formal learning contexts. We all use "learning machines" in our daily lives, as machine learning is now everywhere around us, trying to gather information about us to provide content and keep us connected. While we do so, we provide data to feed this machinery. Algorithms classify the large masses of data they receive from us. Often, it is young people who – in contact with algorithmic platforms – provide their data while browsing and, in return, receive content that – in theory – matches their profiles. This is quite controversial, raising questions about data privacy, ethics, transparency, and what these data generation and harvesting procedures can add (or not) to the future of children and young people. Algorithmic neural networks are based on prediction, applying statistics and other techniques to process data and obtain results. Yet we humans are not predictable.

The core problem with images of robots and "magic" screens in education is that they don't properly communicate what is happening with AI in the context of teaching and learning. These uninformative images divert attention from what is really important: interactions on social networks, chatbots, and the countless emotional, psychological and developmental implications arising from these environments. While there is speculation about teachers being replaced by AI, teachers have actually never been more important in supporting parents and carers as they educate children about navigating the digital world. That's why the prevalence of robot teachers in the public imagination doesn't seem to help at all. And this prevalence is definitely not new!

When we look into the history of automation in education, we find that one hundred years ago, in the 1920s, Sidney Pressey developed analogue teaching machines, essentially to administer tests to students. Pressey's machines preceded those developed by the behaviourist B. F. Skinner in the 1950s, which promised – just as today's AI platforms for adaptive teaching do – to personalise learning, make the process more fun, and relieve the teacher of repetitive tasks. When they emerged, those inventions not only promised benefits similar to those claimed for AI systems today, but also raised similar concerns, including the possibility of replacing the teacher entirely. We could then ask: where is the real innovation in educational automation, if the old analogue machines are so similar to today's in their assumptions, applications and the discourse around them?

Innovation doesn't lie in big data or deep neural networks, the basic ingredients behind the latest technologies. It lies in our critical capacity to look at the changes brought about by AI technologies with restraint, and to be careful about delegating to them what we cannot actually give up. It lies in our critical thinking about how learning processes can – or cannot – be supported by learning machines.

More than ever, we need to analyse what is truly human in intelligence, cognition and creativity; this can guide us in not delegating what cannot be delegated to artificial systems, no matter how powerful they are at processing data. Communication through images requires special attention. After all, images generate impressions, shape perceptions and can completely alter the general audience's sense of an important topic. The apprehension we have felt towards technology for decades is enough. In the midst of the technological hype, we need critical thinking, shared ideas, imagination and accuracy. And we certainly need better images of AI.

Better images of AI can support AI literacy for more people

Marika Jonsson's book cover: a simple yellow cover with the title (in Swedish) "En bok om AI" ("A book about AI").

Marika Jonsson, doctoral student at KTH Royal Institute of Technology, reflects on the challenge of developing an Easy Read book on artificial intelligence (AI) when so few informative images of AI are available.


There are many things that I take for granted. One of them is that I should be able to easily find information about things I want to know more about – like artificial intelligence (AI). I find AI exciting and interesting, and I see the possibilities of AI helping me in everyday life. Thanks to being able to read about AI, I have also realised that AI can be used for bad things: it creates risks and can promote inequality in society. Most of us use or are exposed to AI daily, sometimes without being aware of it.

Between May 2020 and June 2023, I participated in a project called AllAgeHub in Sweden, one of whose aims was to spread knowledge about how to use welfare technology to empower people in their everyday lives. The project included a course on AI for the participants, who worked in the public healthcare and social care sectors. The participants then wanted to spread knowledge about AI to clients in their respective sectors – for example, people working in adapted workplaces or living in supported housing. There was a demand for information in Easy Read format. Easy Read means writing in easy-to-read language, with common words, short sentences and a simple chronological order. The text should be spaced out and have short lines, and it is often supported by images. Easy Read is about both how you write and how you present what is written. The only problem was that I found almost no Easy Read information about AI in Swedish. In my view, the lack of Easy Read information about AI is a serious matter.

A basic principle of democracy is that all people are equal and should have the same rights. Therefore, I believe we must all have access to information in an understandable form. How else can you express your opinion, vote, or consent to something in an informed way? That was why I decided to write an Easy Read book about AI. My ambition was to write concretely and support the text with pictures. Then I stumbled on the huge problem of finding informative pictures about AI. The images I found were often abstract or inaccurate. They frequently depicted AI as robots, conveying the impression that AI is a creature that could take over the earth and destroy humanity. With images like that, it was hard to explain that, for example, the personalised ads that can entice me to buy things I don't really need are based on AI technology. Many people don't know that we are exposed to AI that affects our everyday lives through cookie choices on the internet. Such images might also make people afraid of using practical AI tools that can make everyday life easier, such as natural language processing (NLP) tools that convert speech to text or read text aloud. So, I had to create my own pictures.

I must confess, it was difficult to create clear images that explain AI. I chose to create images that show situations where AI is used, and tried to visualise how certain kinds of AI might operate. For example, I visualised why a chatbot might give the wrong answer by showing how a single word can mean two different things, with a picture of each meaning. The two meanings give the AI tool two possible interpretations of what is being asked. The images are by no means perfect, but they are an attempt to explain some aspects of AI.

Two images with Swedish text explaining them: 1. A box of raspberries. 2. A symbol of a person carrying a bag. The Swedish word "bär" appears in both explanations.
The words for "carry" and "berry" are the same in Swedish ("bär"). The text says: "The word bär can mean two things. Berries that you eat. A person carrying a bag."
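The ambiguity illustrated above can also be sketched in code. Below is a minimal, hypothetical example (not from the book, and not how any particular chatbot is implemented): a toy lexicon maps the Swedish word "bär" to both of its senses, so a program looking the word up without context gets two possible readings and must guess or ask.

```python
# A toy illustration of lexical ambiguity: the Swedish word "bär"
# means both "berry" (noun) and "carry/carries" (verb).
# The lexicon and function names here are invented for this sketch.

TOY_LEXICON = {
    "bär": ["berry (noun)", "carry (verb)"],  # ambiguous word
    "väska": ["bag (noun)"],                  # unambiguous word
}

def possible_readings(word: str) -> list[str]:
    """Return every sense the toy lexicon lists for a word."""
    return TOY_LEXICON.get(word, ["<unknown>"])

readings = possible_readings("bär")
print(readings)           # ['berry (noun)', 'carry (verb)']
print(len(readings) > 1)  # True: without context, the tool must guess
```

Real NLP systems use surrounding words to pick a sense, but when that context is missing or misleading, they can settle on the wrong one – which is exactly the kind of wrong answer the pictures try to explain.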

The work of creating concrete, comprehensible images that support our understanding of AI can strengthen democracy by giving more people the opportunity to understand information about the tools they use in their day-to-day lives. I hope more people will be inspired to write about AI in Easy Read, and create and share clear and descriptive images of AI.

As they say, "a picture is worth a thousand words," so we need to choose images that tell the same story as the words we use. As I write this blog post, I feel there are very few images to choose from. I am hopeful we can change this, together!


The Easy Read book about AI includes a study guide. It is in Swedish, and is available for free as a PDF on AllAgeHub's website:

https://allagehub.se/2023/06/29/nu-finns-en-lattlast-bok-om-ai-att-ta-del-av/