👤 Behind the Image with Ying-Chieh from Kingston School of Art

This year, we collaborated with Kingston School of Art to give MA students the task of creating their own better images of AI as part of their final project. 

In this mini-series of blog posts called ‘Behind the Images’, our Stewards are speaking to some of the students who participated in the module to understand the meaning of their images, as well as the motivations and challenges that they faced when creating their own better images of AI. Based on our assessment criteria, some of the images will also be uploaded to our library for anyone to use under a Creative Commons licence. 

In our first post, we go ‘Behind the Images’ with Ying-Chieh Lee about her images, ‘Can Your Data Be Seen?’ and ‘Who’s Creating the Kawaii Girl?’. Ying-Chieh hopes that her art will raise awareness of how biases in AI emerge from homogeneous datasets and unrepresentative groups of developers, who can create AI that marginalises members of society, like women. 

You can freely access and download ‘Who’s Creating the Kawaii Girl?’ from our image library by clicking here.

‘Can Your Data Be Seen?’ is not available in our library as it did not meet all the criteria, due to challenges which we explore below. However, we greatly appreciate Ying-Chieh letting us publish her images and talking to us. We are hopeful that her work and our conversation will serve as further inspiration for other artists and academics who are exploring representations of AI.

Can you tell us a bit about your background, and what drew you to the MA at Kingston University?

Ying-Chieh originally comes from Taiwan and has been creating art since she was about 10 years old. She studied sculpture as an undergraduate and then worked for a year. Whilst working, she really missed drawing, so she decided to start freelance illustration. Wanting to develop her art skills further, she came to Kingston School of Art. 

Could you talk me through the different parts of your images and the meaning behind them?

‘Can Your Data Be Seen?’

‘Can Your Data Be Seen?’ shows figures representing different subjects in datasets, but the cast light illustrates how only certain groups are captured in the training of AI models. Furthermore, the uniformity and factory-like depiction of the figures criticises how AI datasets often quantify the rich, lived experiences of humans into data points which do not capture the nuances and diversity of many human individuals. 

Ying-Chieh hopes that the image highlights the homogeneity of AI datasets and also draws attention to the invisibility of certain individuals who are not represented in training data. Those who are excluded from AI datasets are usually from marginalised communities, who are frequently surveilled, quantified and exploited in the AI pipeline, but are excluded from the benefits of AI systems due to the domination of privileged groups in datasets. 

‘Who’s Creating the Kawaii Girl’

In ‘Who’s Creating the Kawaii Girl’, Ying-Chieh shows a young female character in a school uniform which represents the Japanese artistic and cultural ‘Kawaii’ style. The Kawaii aesthetic symbolises childlike innocence, cuteness, and the quality of being lovable. Kawaii culture began to rise in Japan in the 1970s through anime, manga and merchandise collections – one of the most recognisable is the Hello Kitty brand. The ‘Kawaii’ aesthetic is often characterised by pastel colours, rounded shapes, and features which evoke vulnerability, like big eyes and small mouths. 

In the image, Ying-Chieh has placed the Kawaii Girl in the palm of an anonymous, sinister figure – this suggests a sense of vulnerability and power over the Girl. The faint web-like pattern on the figures and the background symbolises the unseen influence that AI has on how media is created and distributed, which often reinforces stereotypes or facilitates exploitation. The image criticises the overwhelmingly male-dominated AI industry, which frequently uses technology and content generation tools to reinforce ideologies about women being controlled by and subservient to men. For example, there has been a rise in nonconsensual deepfake pornography created by AI tools, and regressive stereotypes about gender roles are being reinforced by information provided by large language models, like ChatGPT. Ying-Chieh hopes that ‘Who’s Creating the Kawaii Girl?’ will challenge people to think about how AI can be misused and its potential to perpetuate harmful gender stereotypes that sexualise women. 

What was the inspiration/motivation for creating your images, ‘Can Your Data Be Seen?’ and ‘Who’s Creating the Kawaii Girl?’? 

At the outset, Ying-Chieh wasn’t very familiar with AI or the negative uses and implications of the technology. To explore how it was being used, she looked on Facebook and found a group that was sharing lots of offensive, AI-generated images of women. Looking into the group further, she realised that it was not small; it had a large number of active users, who were mostly men. This was Ying-Chieh’s initial inspiration for the image, ‘Who’s Creating the Kawaii Girl?’. 

However, this Facebook group also prompted Ying-Chieh to think more deeply about how the users were able to generate these sexualised images of women and girls so easily. Many of the images represented a very stereotypical model of attractiveness, which led her to consider how the underlying datasets of these AI models were most probably very unrepresentative, reinforcing stereotypical standards of beauty and attractiveness. 

Was there a specific reason you focussed on issues like data bias and gender oppression related to AI?

Gender equality has always been something that Ying-Chieh has been passionate about, but she had never considered how the issue related to AI. She came to realise that AI’s relationship to gender inequality wasn’t so different from that of other industries which oppress women, because AI is fundamentally produced by humans and fed by data that humans have created. Therefore, the problems with AI being used to harm women are not isolated to the technology, but rooted in systemic social injustices that have long mistreated and misrepresented women and other marginalised groups.

Ying-Chieh’s sketch of the AI ‘bias loop’

In her research stages, Ying-Chieh explored the ‘bias loop’, which represents how AI models trained on data selected by humans, or derived from historical data, will create biased images. At the same time, the images created by AI will serve as new training data, further embedding our historical biases into future AI tools. The concept of the ‘bias loop’ resonated with Ying-Chieh’s interest in gender equality and made her concerned about uses and developments of AI which privilege some groups at the expense of others, especially where this repeats itself and causes inescapable cycles of injustice. 

Can you describe the process for creating this work?

Ying-Chieh started by developing some initial sketches and engaging in discussions with Jane, the programme coordinator, about her work. As you can see below, ‘Who’s Creating the Kawaii Girl?’ has evolved significantly from its initial sketch, while ‘Can Your Data Be Seen?’ has remained quite similar to Ying-Chieh’s original design. 

The initial sketches of ‘Can Your Data Be Seen?’ (left) and ‘Who’s Creating the Kawaii Girl?’ (right)

Ying-Chieh also engaged in some activities during classes which helped her to learn more about AI and its ethical implications. One of these games, ‘You Say, I Draw’, involved one student describing an image and the other student drawing it, relying purely on their partner’s description without knowing what they were drawing.

This game highlighted the role that data providers and prompters play in the development of AI and challenged Ying-Chieh to think more carefully about how data was being used to train content generation tools. During the game, she realised that the personality, background, and experiences of the prompter really influenced what the resulting image looked like. In the same way, the type of data and the developers creating AI tools can really influence the final outputs and results of a system. 

An image of the results from the ‘You Say, I Draw’ activity

Better Images of AI aims to counteract common stereotypes and misconceptions about AI. How did you incorporate this goal into your artwork? 

Ying-Chieh’s aim was to explore and address biases present in AI models in order to contribute to the Better Images of AI mission so that the future development of AI can be more diverse and inclusive. She hopes that her illustrations will make it easier for the public to understand issues about biases in AI which are often inaccessible or shielded from wider comprehension.

Her images draw attention to how AI’s training data is biased and how AI is being used to reinforce gender stereotypes about women. From this, Ying-Chieh hopes that further action can be taken to improve data collection and processing methods, as well as to introduce laws and rules limiting image generation where it exploits or harms individuals. 

What have been the biggest challenges of creating a ‘better image of AI’? Did you encounter any challenges in trying to represent AI in a more nuanced and realistic way? 

Ying-Chieh spoke about the challenge of striking the right balance: designing images that audiences could widely recognise as related to AI, without falling into common tropes that misrepresent it (like robots, descending code, or the colour blue). She also found it difficult not to make the images so metaphorical that audiences might misinterpret them.

Based on our criteria for selecting images, we were pleased to accept ‘Who’s Creating the Kawaii Girl?’, but had to make the difficult decision not to upload ‘Can Your Data Be Seen?’ because it didn’t communicate and conceptualise AI clearly enough. What do you think of this feedback, and was it something that you considered in the process? 


Ying-Chieh shared that, throughout the design process, she had been conscious that her images might not be easily recognisable as communicating ideas about AI. She made some efforts to counteract this: for example, in ‘Can Your Data Be Seen?’ she made the figures all identical to represent data points, and the lighter coloured lines on the faces and bodies of the figures represent the technical elements behind AI image recognition technology.

How has working on this project influenced your own views on AI and its impact? 

Before starting this project, Ying-Chieh said that her opinion towards AI had been quite positive. She was largely influenced by things that she had seen and read in the news about how AI was going to benefit society. However, from her research on Facebook, she has become increasingly aware that this is not entirely true. There are many dangerous ways that AI can be used which are already lurking in the shadows of our daily lives.

What have you learned through this process that you would like to share with other artists or the public?

The biggest takeaway from this project for Ying-Chieh is how camera angles, zooming, or object positioning can strongly influence the message that an image conveys. For example, in the initial sketches of ‘Can Your Data Be Seen?’, Ying-Chieh explored how she could best capture relationships of power through different depths of perspective. 

Various early sketches of ‘Can Your Data Be Seen’ from different depths of perspective

Furthermore, when exploring how to reflect the oppressive nature of AI, Ying-Chieh enlarged the shadow’s presence in the frame of ‘Who’s Creating the Kawaii Girl?’. The enlarged shadow reinforces the power that elite groups hold over the creation of content about marginalised groups, a power which is often hidden and kept secret from wider knowledge. 

Ying-Chieh’s exploration of how the photographer’s angle can reflect different positions of power and vulnerability

Ying-Chieh Lee (she/her) is a visual creator, illustrator, and comic artist from Taiwan. Her work often focuses on women-related themes and realistic, dark-style comics.


Better Images of AI’s Partnership with Kingston School of Art

An image with a light blue background that reads, 'Let's Collab!' at the top, the word 'Collab' underlined in burgundy. Below that, it says 'Better Images of AI x Kingston School of Art' with 'Kingston School of Art' in teal. Below the text is an illustration of two hands high-fiving, with black sleeves and white hands. Around the hands are burgundy stars.

This year, we were pleased to partner with Kingston School of Art to run an elective for their MA Illustration, Animation, and Graphic Design students to create their own ‘better images of AI’. Following this collaboration, some of the students’ images have been published in our library for anyone to use freely. Their images focus on communicating different ideas about the current state of AI – from the connection between the technology and gender oppression to breaking down the interactions between humans and AI chatbots.

In this blog post, we speak to Jane Cheadle, the course leader for the MA Animation course at Kingston School of Art, about partnering with Better Images of AI for the elective. The MA is a new course, focussed on critical and research-led animation design processes.

If you’re interested in running a similar module/elective or incorporating Better Images of AI’s work into your university course, we would love to hear from you – please contact info@betterimagesofai.org.

How did the collaboration with Better Images of AI come about?

AI is having an impact on various industries and the creative domain is no exception. Jane explains how she and the staff in the department were asked to work towards developing a strategy addressing the use of AI in the design school. At the same time, Jane was also in contact with Alan Warburton – a creator that works with various technologies, including computer generated imagery, AI, virtual reality, and augmented reality to develop art. Alan introduced Jane to Better Images of AI and she became interested in the work that we are doing, and how this linked to their future strategy for the use of AI in the design school.

Therefore, instead of solely creating rules about the use of AI in the school, Jane thought that working with the students to explore the challenges, limits, and benefits of the technology would be more meaningful as it would provide better learning opportunities for the students (as well as herself!) about this topic. 

Where does the elective fit within the school’s curriculum?

Kingston University’s Town House Strategy aims to prepare graduates for advances in technology which will alter our future society and workplaces. The strategy aims to equip students with enhanced entrepreneurial, digital, and creative problem-solving skills so they can better advance their careers and professional practice. As part of this strategy, Kingston University encourages collaboration and partnership with businesses and external bodies to help advance students’ knowledge and awareness of the different aspects of the working world.

As part of this, the Kingston School of Art runs a cross-disciplinary design module open to students from three different MA courses (Graphic Design, Illustration, and Animation). In this module, students are asked to think about the role of the designer now, and what it might look like in the future. The goal is to prompt students to situate their creative practice within the contemporary paradigms of precarity and uncertainty, providing space for students to understand and address issues such as climate literacy, design education, and the future of work. There are multiple electives within this module and each works with a partner external to the university.

Better Images of AI were fortunate enough to be approached by Jane to be the external partner for their elective. This elective was run by Jane as well as researcher and artist, Maybelle Peters. Jane explains that this module had a dual aim: first, to allow students to develop better images of AI which could be published to our library; and second, to educate students about AI and its impact on society. For Jane, it was important that this exploration of AI was applied to the students’ own practice and positionality, so they could understand how AI is influencing the creative industry as well as political power structures more broadly.

How did the elective run?

Jane shares that there was a real divide amongst the students in their familiarity with AI and its wider context. Some students had been dabbling with AI tools and wanted to develop a position on their creative and ethical use. Meanwhile, others were not using AI at all and expressed being somewhat wary of it, alongside a real sense of amorphous fear around automated image generation and other capabilities that impact the markets for their creative works.

Better Images of AI worked with the Kingston School of Art to provide a brief for the elective, and students also used our Guide to help them understand the problems with current stock imagery that is used to illustrate AI so they could avoid these common tropes in their own work.

Following this, the students worked in special interest groups to research different aspects of AI. Each group then used this research to develop practical workshops to run with the wider class. This enabled the students to develop their own better images of AI based on what they had learnt from leading and participating in workshops and research tasks. Better Images of AI also visited Kingston School of Art to provide guidance and feedback to the students in the development stages of their images.

Some of the images that were submitted as part of the elective can be seen below. Each image shows a thoughtful approach, and they are varied in nature – some are super low-fi and others are hilarious – but all the students drew upon their own design/drawing/making skills to develop their unique images. 

Why did you think it was important to partner with Better Images of AI for this elective?

As designers and image makers, we agreed that there is a responsibility to accurately and responsibly represent aspects of the world, such as AI. It was important to allow students to work with real constraints and build towards a future that they want to live in. While the brief provided to the students was to create images that accurately represent what AI looks like right now, much of the student workshops focussed on what kind of AI they wanted to see, what safeguards need to be put in place, and what power relations we might need to change in order to get there.

Jane Cheadle (she/they) is an animator, researcher, and educator. Jane is currently senior lecturer and MA Animation course leader in the design school at Kingston School of Art. Jane’s practice and research are both cross-disciplinary and experimental, with a focus on drawing, collaboration, and expanded animation. 


We are super thankful to Jane and Maybelle, as well as the Kingston School of Art, for incorporating Better Images of AI into their elective. We are so appreciative of all the students who participated in the module and shared their work with us. Jane is excited to hopefully run the elective again, and we are looking forward to more work together with the students and staff at Kingston School of Art.

This blog post is the first in a series of posts about Better Images of AI’s collaboration with the Kingston School of Art. In a series of mini interview blog posts, we speak to three students who participated in the elective and designed their own better images of AI. Some of the students’ images even feature in our library – you can view them here.