Camila Leporace – journalist, researcher, and PhD in Education – argues that innovation may not be in artificial intelligence (AI) but in our critical capacity to evaluate technological change.
When I searched for “AI in education” on Google Images here in Brazil, in November 2023, robots clearly and overwhelmingly predominated. The first five images that appeared for me were:
- A robot teaching numeracy in front of a school blackboard;
- A girl looking at a computer screen, from which the icons she is viewing “spill out”;
- A series of icons and a hand catching them in the air;
- A robot finger and a human finger reaching for each other as in Michelangelo’s “Creation of Adam,” but with a brain between them, keeping the fingers from touching; whilst the robot finger touches the left half of the brain (which is “artificial” and blue), the human finger touches the right half (which is colourful); and
- A drawing (not a photo) showing a girl sitting with a book and a robot sitting on two books next to her, opposite a screen.
It is curious (and harmful) how images associated with artificial intelligence (AI) in education so inaccurately represent what is actually happening with the insertion of these technologies in Brazilian schools – in fact, in almost every school in the world. AI is not a technology that can be “touched.” Instead, it is a resource present in the programming of the systems we use in an invisible, intangible way. For example, Brazilian schools have been adopting AI tools in writing activities, such as the correction of students’ essays, or question-and-answer adaptive learning platforms. In Denmark, teachers have been using apps to audit students’ moods, through data collection and the generation of bar charts. In the UK, the surveillance of students and teachers as a consequence of data harvesting is a topic receiving a great deal of attention.
AI, however, is not restricted to educational resources designed for teaching and learning; it is also present in various devices useful for learning beyond formal learning contexts. We all use “learning machines” in our daily lives, as machine learning is now all around us, trying to gather information on us so as to provide content and keep us connected. While we do so, we provide data to feed this machinery. Algorithms classify the large masses of data they receive from us. Often, it is young people who – in contact with algorithmic platforms – provide their data while browsing and, in return, receive content that – in theory – matches their profiles. This is quite controversial, raising questions about data privacy, ethics, transparency, and what these data generation and harvesting procedures can add (or not) to the futures of children and young people. Algorithmic neural networks are based on prediction, applying statistics and other techniques to process data and obtain results. We humans, however, are not so predictable.
The core problem with images of robots and “magic” screens in education is that they don’t properly communicate what is happening with AI in the context of teaching and learning. These uninformative images divert attention from what is really important: interactions on social networks, chatbots, and the countless emotional, psychological and developmental implications arising from these environments. While there is speculation about teachers being replaced by AI, teachers have actually never been more important in supporting parents and carers in educating children about navigating the digital world. That’s why the prevalence of robot teachers in the public imagination doesn’t seem to help at all. And this prevalence is definitely not new!
When we look into the history of automation in education, we find that one hundred years ago, in the 1920s, Sidney Pressey developed analog teaching machines, essentially to administer tests to students. Pressey’s machines preceded those developed by the behaviourist B. F. Skinner in the 1950s, which promised – just as today’s AI platforms for adaptive teaching do – to personalise learning, make the process more fun, and relieve the teacher of repetitive tasks. When they emerged, those inventions not only promised benefits similar to those which fuel AI systems today, but also raised concerns similar to those we face now, including the prospect of replacing the teacher entirely. We could then ask: where is the real innovation in automation in education, if the old analog machines are so similar to today’s in their assumptions, applications and the discourse they carry?
Innovation doesn’t lie in big data or deep neural networks, the basic ingredients that power the latest technologies. It lies in our critical capacity to look at the changes brought about by AI technologies with restraint, and to be careful about delegating to them what we cannot actually give up. It lies in our critical thinking about how learning processes can or cannot be supported by learning machines.
More than ever, we need to analyse what is truly human in intelligence, cognition and creativity; this can guide us in not delegating to artificial systems what cannot be delegated, no matter how powerful those systems are at processing data. Communication through images requires special attention. After all, images generate impressions, shape perceptions and can completely alter the general audience’s sense of an important topic. The apprehension we’ve felt towards technology for decades is enough. In the midst of the technological hype, we need critical thinking, shared thoughts, imagination and accuracy. And we certainly need better images of AI.