What Do I See in ‘Ways of Seeing’ by Zoya Yasmine


Artist contributions to the Better Images of AI library have always played an important role in fostering understanding and critical thinking about AI technologies and their context. Images facilitate deeper inquiries into the nature of AI, its history, and its ethical, social, political and legal implications.

When artists create better images of AI, they often have to grapple with these narratives in their attempts to portray the technology more realistically and point towards its strengths and weaknesses. And as artists freely share these images in our library, others can learn from the artists’ own motivations (provided in the image descriptions), while the images can also inspire users’ own musings.

In this series of blog posts, some of our volunteer stewards are each taking turns to choose an image from the Archival Images of AI collection and unpack the artist’s processes and explore what that image means to them. 

At the end of 2024, we released the Archival Images of AI Playbook with AIxDESIGN and the Netherlands Institute for Sound and Vision. The playbook explores how existing images – especially those from digital heritage collections – can help us craft more meaningful visual narratives about AI. Through various image-makers’ own attempts to make better images of AI, the playbook shares numerous techniques which can teach you how to transform existing images into new creations. 

Here, Zoya Yasmine unpacks ‘Ways of Seeing’, image-maker Nadia Piet’s own better image of AI, created for the playbook. Zoya comments on how valuable the image is for depicting the way that text-to-image generators ‘learn’ to generate their outputs. She considers how the image relates to copyright law (she’s a bit of an intellectual property nerd) and to the discussions about whether AI companies should be able to use individuals’ work to train their systems without explicit consent or remuneration.

ALT text: Diptych contrasting a whimsical pastel scene with large brown rabbits, a rainbow, and a girl in a red dress on the left, and a grid of numbered superpixels on the right - emphasizing the difference between emotive seeing and analytical interpretation.

Nadia Piet + AIxDESIGN & Archival Images of AI / Better Images of AI / Ways of Seeing / CC-BY 4.0

‘Ways of Seeing’ by Nadia Piet 

This diptych contrasts human and computational ways of seeing: one riddled with memory and meaning, the other devoid of emotional association and capable of structural analysis. The left pane shows an illustration from Tom Seidmann-Freud’s Book of Hare Stories (1924) which portrays a whimsical, surreal scene that is both playful and uncanny. On the right, the illustration is reduced to a computational rendering, with each of its superpixels (16×16) fragmented and sorted by visual complexity with a compression algorithm. 
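The process described above – fragmenting an image into superpixels and ordering them by visual complexity – can be sketched in code. The snippet below is a rough illustration only, not Nadia Piet’s actual algorithm: it assumes a grayscale image held as a NumPy array, and uses the zlib-compressed size of each block as a stand-in for ‘visual complexity’ (detailed blocks compress less, so their compressed size is larger).

```python
import zlib
import numpy as np

def superpixel_sort(img, grid=16):
    """Split a square grayscale image into a grid x grid set of superpixels
    and sort them by a crude complexity proxy: the zlib-compressed size of
    each block (more visual detail -> less compressible -> larger size)."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    blocks = []
    for r in range(grid):
        for c in range(grid):
            block = img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            complexity = len(zlib.compress(block.tobytes()))
            blocks.append((complexity, (r, c), block))
    blocks.sort(key=lambda b: b[0])  # simplest superpixels first
    return blocks

# Toy 64x64 image: flat (empty) left half, noisy (detailed) right half
rng = np.random.default_rng(0)
img = np.zeros((64, 64), dtype=np.uint8)
img[:, 32:] = rng.integers(0, 256, size=(64, 32), dtype=np.uint8)

ordered = superpixel_sort(img, grid=16)
print(len(ordered))  # 256 superpixels, as in a 16x16 grid
```

Sorting by compressed size is one of several plausible proxies; an artist might equally rank blocks by edge density or colour variance. The point the image makes survives any of them: the scene is reduced to interchangeable fragments ranked by a statistic.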


Copyright and training AI systems 

Training AI systems requires substantial amounts of input data – images, videos, texts and other content. From these materials, AI systems can ‘learn’ to make predictions and provide outputs. However, many of the materials used to train AI systems are protected by copyright owned by other parties, which raises complex questions about ownership and the legality of using such data without permission.

In the UK, Getty Images filed a lawsuit against Stability AI (developer of a text-to-image model called Stable Diffusion), claiming that 7.3 million of its images were unlawfully scraped from its website to train Stability AI’s model. Similarly, Mumsnet has launched a legal complaint against OpenAI, the developer of ChatGPT, accusing the AI company of scraping content from its site (with over 6 billion words shared by community members) without consent.

The UK’s Copyright, Designs and Patents Act 1988 (the Act) provides companies like Getty Images and Mumsnet with copyright protection over their databases and assets. So unless an exception applies, permission (through a license) is required if other parties wish to reproduce or copy the content. Section 29A of the Act provides an exception which permits copies of any copyright-protected material to be made for the purposes of Text and Data Mining (TDM) without a specific license. But this lenient provision is for non-commercial purposes only. Although the status of AI systems like Stable Diffusion and ChatGPT has not yet been tested before the courts, they are likely to fall outside the scope of non-commercial purposes.

TDM is the automated technique used to extract and analyse vast amounts of online material to reveal relationships and patterns in the data. It has become an increasingly valuable tool for training lucrative generative AI systems on masses of material scraped from the Internet. It becomes clear that AI models cannot be developed or built efficiently without input data created by human artists, researchers, writers, photographers, publishers, and creators. However, as much of this work is used without payment or attribution, big tech companies are essentially ‘freeriding’ on the works of the creative industry, which has invested significant time, effort, and resources into producing them.


How does this image relate to current debates about copyright and AI training? 

When I saw this image, it really prompted me to think about the training process of AI systems and the purpose of the copyright system. ‘Ways of Seeing’ has stimulated my own thoughts about how computational models ‘learn’ and ‘see’ in contrast to human creators.

Text-to-image AI generators (like Stable Diffusion or Midjourney) are trained on many thousands of images, which allows the models to ‘learn’ to identify patterns – like what common objects and colours look like – and then reproduce these patterns when instructed to create new images. While Piet’s image was designed to illustrate a ‘compression algorithm’ process, I think it also serves as a useful visual for how AI processes visual data computationally, reducing it to pixels, patterns, or latent features.

It’s important to note that often the images generated by AI models will not necessarily be exact copies of the original images used in the training process – but instead, they serve as statistical approximations of training data which have informed the model’s overall understanding of how objects are represented. 

It’s interesting to think about this in relation to copyright and what that legal framework serves to protect. Copyright protects the creative expression of works – for example, the lighting, exposure, filter, or positioning of an image – but not the ideas themselves. Copyright law focuses on these elements because they reflect the creator’s own unique thoughts and originality. However, as Piet’s illustration usefully demonstrates, what is significant about the AI training process for copyright law is that TDM is often not used to extract the protected expression of the materials.

To train AI models, it is often the factual elements of a work that are the most valuable (as opposed to the creative aspects). The training process relies on the broad visual features of images rather than specific artistic choices. For example, when training text-to-image models, TDM is not typically used to extract data about the lighting techniques that make an image of a cat particularly appealing. Instead, what matters is access to images of cats that detail the features resembling a cat (fur, whiskers, big eyes, paws). In Piet’s image, the protectable parts of the illustration from the ‘Book of Hare Stories’ would subsist in the artistic style and execution – for example, the way the hare and other elements are drawn, the placement and interaction of the elements, and the overall design of the image.

The specific challenge for copyright law is that AI companies are unable to capture these ‘unprotectable’ factual elements of materials without making a copy or storing the protected parts (Lemley and Casey, 2020). I think Nadia’s image really highlights the transformation of artwork into fragmented ‘data’ for training systems which challenges our understanding of creativity and originality. 

My thoughts above are not to suggest that AI companies should be able to freely use copyright-protected works as training data for their models without remunerating or seeking permission from copyright owners. Instead, the way that TDM and generative AI ‘re-imagine’ the value of these ‘unprotectable’ elements means that AI companies still freeride on creators’ materials. Therefore, AI companies should be required to explicitly license the copyright-protected materials used to train their systems, so that creators have proper control over their works (you can read more about my thoughts here).

Also, I do not deny that there are generative AI systems that aim to reproduce a particular artist’s style – see here. In these instances, I think it would be easier to prove copyright infringement, since these are clear reproductions of ‘protected elements’. However, where this is not the purpose of the AI tool, developers try to avoid the outputs replicating training data too closely, as this can more easily expose them to copyright infringement claims over both the input (as discussed in this piece) and the output image (see here for a discussion).


My favourite part of Nadia Piet’s image

I think my favourite part of the image is the choice of illustration used to represent computational processing. As Nadia writes in her description, Tom Seidmann-Freud’s illustration depicts a “whimsical, surreal scene that is both playful and uncanny”. Tom, an Austrian-Jewish painter and children’s book author and illustrator (and Sigmund Freud’s niece), led a short life; she died of an overdose of sleeping pills in 1930, at age 37, a few months after the death of her husband.

“The Hare and the Well” (left), “Fable of the Hares and the Frogs” (middle), and “Why the Hare Has No Tail” (right) by Tom Seidmann-Freud, via the Public Domain Review

After Tom’s death, the Nazis came to power and attempted to destroy much of the art she had created as part of the purge of Jewish authors. Luckily, Tom’s family and art lovers were able to preserve much of her work. I think Nadia’s choice of this image critiques what might be ‘lost’ when rich, meaningful art is reduced to AI’s structural analysis. 

A second point, although not related exactly to the image, is the very thoughtful title, ‘Ways of Seeing’. ‘Ways of Seeing’ was a 1972 BBC television series and book created by John Berger. In the series, Berger criticised traditional Western cultural aesthetics by raising questions about hidden ideologies in visual images, like the male gaze embedded in the female nude. He also examined what had changed in our ‘ways of seeing’ between the time the art was made and the present day. Side note: I think Berger would have been a huge fan of Better Images of AI.

In a similar vein, Nadia has used Seidmann-Freud’s art to explore new parallels with technology like AI which would not have been thought about at the time the work was created. In addition, Nadia’s work serves as an invitation to see and understand AI differently, and like Berger’s, her work supports artists around the world.


The value of Nadia’s ‘better image of AI’ for copyright discussions

As Nadia writes in the description, Tom Seidmann-Freud’s illustration was derived from the Public Domain Review, where it is written that “Hares have been known to serve as messengers between the conscious world and the deeper warrens of the mind”. From my perspective, Nadia’s whole image acts as a messenger to convey information about the two differing modes of seeing between humans and AI models. 

We need better images of AI like this, especially for the purposes of copyright law, so we can have more meaningful and informed conversations about the nature of AI and its training processes. All too often in conversations about AI and creativity, the images used depict humanoid robots painting on a canvas or hands snatching works.

‘AI art theft’ illustration by Nicholas Konrad (Left) and Copyright and AI image (Right)

These images create misleading visual metaphors that suggest that AI is directly engaging in creative acts in the same way that humans do. Additionally, visuals showing AI ‘stealing’ works reduce the complex legal and ethical debates around copyright, licensing, and data training to overly simplified, fear-evoking concepts.

Thus, better images of AI, like ‘Ways of Seeing’, can serve a vital role as messengers to represent the reality of how AI systems are developed. This paves the way for more constructive legal dialogues around intellectual property and AI that protect creators’ rights, while allowing for the development of AI technologies based on consented, legally acquired datasets.


About the author

Zoya Yasmine (she/her) is a current PhD student exploring the intersection between intellectual property, data, and medical AI. She grew up in Wales and in her spare time she enjoys playing tennis, puzzling, and watching TV (mostly Dragon’s Den and Made in Chelsea). Zoya is also a volunteer steward for Better Images of AI and part of many student societies including AI in Medicine, AI Ethics, Ethics in Mathematics & MedTech. 


This post was also kindly edited by Tristan Ferne – lead producer/researcher at BBC Research & Development.


If you want to contribute to our new blog series, ‘Through My Eyes’, by selecting an image from the Archival Images of AI collection and exploring what the image means to you, get in touch (info@betterimagesofai.org)

Explore other posts in the ‘Through My Eyes’ Series

How not to communicate about AI in education

Seventeen multicoloured post-it notes are roughly positioned in a strip shape on a white board. Each one of them has a hand drawn sketch in pen on them, answering the prompt on one of the post-it notes "AI is...." The sketches are all very different, some are patterns representing data, some are cartoons, some show drawings of things like data centres, or stick figure drawings of the people involved.

Camila Leporace – journalist, researcher, and PhD in Education – argues that innovation may not be in artificial intelligence (AI) but in our critical capacity to evaluate technological change.


When searching for “AI in education” on Google Images here in Brazil, in November 2023, there is a clear predominance of images of robots. The first five images that appeared for me were:

  1. A robot teaching numeracy in front of a school blackboard; 
  2. A girl looking at a computer screen, from which the icons she is viewing “spill out”; 
  3. A series of icons and a hand catching them in the air; 
  4. A robot finger and a human finger trying to find each other, as in Michelangelo’s “Creation of Adam,” but with a brain between them keeping the fingers from touching; the robot finger touches the left half of the brain (which is “artificial” and blue), while the human finger touches the right half (which is coloured); and
  5. A drawing (not a photo) of a girl sitting with a book, and a robot sat on two books next to her, opposite a screen.

It is curious (and harmful) how images associated with artificial intelligence (AI) in education so inaccurately represent what is actually happening with regard to the insertion of these technologies in Brazilian schools – in fact, in almost every school in the world. AI is not a technology that can be “touched.” Instead, it is a resource that is present in the programming of the systems we use in an invisible, intangible way. For example, Brazilian schools have been adopting AI tools in writing activities, like the correction of students’ essays, or question-and-answer adaptive learning platforms. In Denmark, teachers have been using apps to audit students’ ‘moods’ through data collection and the generation of bar charts. In the UK, surveillance of students and teachers as a consequence of data harvesting is a topic getting a lot of attention.

AI, however, is not restricted to educational resources designed for teaching and learning; it is also present in various devices useful for learning beyond formal learning contexts. We all use “learning machines” in our daily lives, as machine learning is now all around us, trying to gather information on us to provide content and keep us connected. While we do so, we provide data to feed this machinery, and algorithms classify the large masses of data they receive from us. Often it is young people who – in contact with algorithmic platforms – provide their data while browsing and, in return, receive content that in theory matches their profiles. This is quite controversial, raising questions about data privacy, ethics, transparency and what these data generation and harvesting procedures can add (or not) to the future of children and young people. Algorithmic neural networks are based on prediction, applying statistics and other techniques to process data and obtain results. We humans, by contrast, are not predictable.

The core problem with images of robots and “magic” screens in education is that they don’t properly communicate what is happening with AI in the context of teaching and learning. These uninformative images divert attention from what is really important: interactions on social networks, chatbots, and the countless emotional, psychological and developmental implications arising from these environments. While there is speculation about teachers being replaced by AI, teachers have actually never been more important in supporting parents and carers in educating children about navigating the digital world. That’s why the prevalence of robot teachers in the popular imagination doesn’t seem to help at all. And this prevalence is definitely not new!

When we look into the history of automation in education, we find that one hundred years ago, in the 1920s, Sidney Pressey developed analog teaching machines, basically to administer tests to students. Pressey’s machines preceded those developed by the behaviourist B. F. Skinner in the late 1960s, promising – just like today’s AI platforms for adaptive teaching – to personalise learning, make the process more fun and relieve the teacher of repetitive tasks. When they emerged, those inventions not only promised benefits similar to those which fuel AI systems today, but also raised concerns similar to those we face now, including the hypothesis of replacing the teacher entirely. We could then ask: where is the real innovation in automation in education, if the old analog machines are so similar to today’s in their assumptions, applications and the discourse they carry?

Innovation doesn’t lie in big data or deep neural networks, the basic ingredients that boost the latest technologies we are aware of. It lies in our critical capacity to look at the changes brought about by AI technologies with restraint, and to be careful about delegating to them what we can’t actually give up. It lies in our critical thinking about how learning processes can or cannot be supported by learning machines.

More than ever, we need to analyse what is truly human in intelligence, cognition and creativity; this is a way of guiding us in not delegating what cannot be delegated to artificial systems, no matter how powerful they are at processing data. Communication through images requires special attention. After all, images generate impressions, shape perceptions and can completely alter the general audience’s sense of an important topic. The apprehension we’ve held towards technology for decades is enough. In the midst of the technological hype, we need critical thinking, shared thoughts, imagination and accuracy. And we certainly need better images of AI.

Better images of AI can support AI literacy for more people

Marika Jonsson's book cover; a simple yellow cover with the title (in Swedish): "En bok om AI"

Marika Jonsson, doctoral student at KTH Royal Institute of Technology, reflects on overcoming the challenge of developing an Easy Read book on artificial intelligence (AI) with so few informative images about AI available.


There are many things that I take for granted. One of them is that I should be able to easily find information about things I want to know more about. Like artificial intelligence (AI). I find AI exciting and interesting, and I see the possibilities of AI helping me in everyday life. And thanks to the fact that I have been able to read about AI, I have also realised that AI can be used for bad things; that AI creates risks and can promote inequality in society. Most of us use or are exposed to AI daily, sometimes without being aware of it.

Between May 2020 and June 2023, I participated in a project called AllAgeHub in Sweden, where one of the aims was to spread knowledge about how to use welfare technology to empower people in their everyday lives. The project included a course on AI for the participants, who worked in the public healthcare and social care sectors. The participants then wanted to spread knowledge about AI to clients in their respective sectors. The clients could be, for example, people working in adapted workplaces or living in supported housing. There was a demand for information in Easy Read format. Easy Read format is when you write in easy-to-read language, with common words, short sentences and in simple chronological order. The text should be spaced out and have short lines, and the texts are often supported by images. Easy Read is both about how you write and about how you present what is written. The only problem was that I found almost no Easy Read information about AI in Swedish. My view is that the lack of Easy Read information about AI is a serious matter.

A basic principle behind democracy is that all people are equal and should have the same rights. Therefore, I believe we must have access to information in an understandable form. How else can you express your opinion, vote or consent to something in an informed way? That was the reason I decided to write an Easy Read book about AI. My ambition was to write concretely and support the text with pictures. Then I stumbled on the huge problem of finding informative pictures about AI. The images I found were often abstract or inaccurate. They often depicted AI as robots and conveyed the impression that AI is a creature that can take over the earth and destroy humanity. With images like that, it was hard to explain that, for example, personalised ads, which can entice me to buy things I don’t really need, are based on AI technology. Many people don’t know that we are exposed to AI that affects us in everyday life through cookie choices on the internet. Such images might also make people afraid of using practical AI tools that can make everyday life easier, such as natural language processing (NLP) tools that convert speech to text or read text aloud. So, I had to create my own pictures.

I must confess, it was difficult to create clear images that explain AI. I chose to create images that show situations where AI is used, and tried to visualise how certain kinds of AI might operate. One example is that I visualised why a chatbot might give the wrong answer by showing how a word can mean two different things with a picture of each word’s meaning. The two different meanings give the AI tool two possible interpretations about what issue is at hand. The images are by no means perfect, but they are an attempt at explaining some aspects of AI.

Two images with Swedish text explaining them: 1. A box of raspberries. 2. A symbol of a person carrying a bag. The Swedish word ”bär” is present in both explanations.
The word for carry and berry is the same in Swedish. The text says: “The word berry can mean two things. Berries that you eat. A person carrying a bag.”

The work of creating concrete, comprehensible images that support our understanding of AI can strengthen democracy by giving more people the opportunity to understand information about the tools they use in their day-to-day lives. I hope more people will be inspired to write about AI in Easy Read, and create and share clear and descriptive images of AI.

As they say, ”a picture is worth a thousand words,” so we need to choose images that tell the same story as the words we use. At the time of writing this blog post, I feel there are very few images to choose from. I am hopeful we can change this, together!


The Easy Read book about AI includes a study guide. It is in Swedish, and is available for free as a pdf on AllAgeHub’s website:

https://allagehub.se/2023/06/29/nu-finns-en-lattlast-bok-om-ai-att-ta-del-av/