This year, we collaborated with Kingston School of Art to give MA students the task of creating their own better images of AI as part of their final project.
In this mini-series of blog posts called 'Behind the Images', our Stewards are speaking to some of the students that participated in the module to understand the meaning of their images, as well as the motivations and challenges that they faced when creating their own better images of AI. Based on our assessment criteria, some of the images will also be uploaded to our library for anyone to use under a Creative Commons licence.
In our first post, we go 'Behind the Images' with Ying-Chieh Lee about her images, 'Can Your Data Be Seen?' and 'Who's Creating the Kawaii Girl?'. Ying-Chieh hopes that her art will raise awareness of how biases in AI emerge from homogenous datasets and unrepresentative groups of developers, who can create AI that marginalises members of society, like women.
‘Can Your Data Be Seen’ is not available in our library as it did not match all the criteria due to challenges which we explore below. However, we greatly appreciate Ying-Chieh letting us publish her images and talking to us. We are hopeful that her work and our conversation will serve as further inspiration for other artists and academics who are exploring representations of AI.
Can you tell us a bit about your background, and what drew you to the MA at Kingston University?
Ying-Chieh originally comes from Taiwan and has been creating art since she was about 10 years old. She studied sculpture as an undergraduate and then worked for a year. Whilst working, she really missed drawing, so she decided to start freelance illustration; wanting to develop her art skills further, she came to Kingston School of Art.
Could you talk me through the different parts of your images and the meaning behind them?
'Can Your Data Be Seen?' shows figures representing different subjects in datasets, but the cast light illustrates how only certain groups are captured in the training of AI models. Furthermore, the uniformity and factory-like depiction of the figures criticises how AI datasets often quantify the rich, lived experiences of humans into data points which do not capture the nuances and diversity of many human individuals.
Ying-Chieh hopes that the image highlights the homogeneity of AI datasets and also draws attention to the invisibility of certain individuals who are not represented in training data. Those who are excluded from AI datasets are usually from marginalised communities, who are frequently surveilled, quantified and exploited in the AI pipeline, but are excluded from the benefits of AI systems due to the domination of privileged groups in datasets.
In 'Who's Creating the Kawaii Girl?', Ying-Chieh shows a young female character in a school uniform which represents the Japanese artistic and cultural 'Kawaii' style. The Kawaii aesthetic symbolises childlike innocence, cuteness, and the quality of being lovable. Kawaii culture began to rise in Japan in the 1970s through anime, manga and merchandise collections – one of the most recognisable is the Hello Kitty brand. The 'Kawaii' aesthetic is often characterised by pastel colours, rounded shapes, and features which evoke vulnerability, like big eyes and small mouths.
In the image, Ying-Chieh has placed the Kawaii Girl in the palm of an anonymous, sinister figure, suggesting vulnerability and power over the Girl. The faint web-like pattern on the figures and the background symbolises the unseen influence that AI has on how media is created and distributed, in ways that often reinforce stereotypes or facilitate exploitation. The image criticises the overwhelmingly male-dominated AI industry, whose members frequently use technology and content-generation tools to reinforce ideologies of women being controlled by and subservient to men. For example, there has been a rise in nonconsensual deepfake pornography created with AI tools, and regressive gender-role stereotypes are being reinforced by information provided by large language models, like ChatGPT. Ying-Chieh hopes that 'Who's Creating the Kawaii Girl?' will challenge people to think about how AI can be misused and its potential to perpetuate harmful gender stereotypes that sexualise women.
What was the inspiration/motivation for creating your images, 'Can Your Data Be Seen?' and 'Who's Creating the Kawaii Girl?'?
At the outset, Ying-Chieh wasn't very familiar with AI or the negative uses and implications of the technology. To explore how it was being used, she looked on Facebook and found a group that was being used to share lots of offensive, AI-generated images of women. Looking into the group further, she realised that it was not small; it had a large number of active users, most of whom were men. This was Ying-Chieh's initial inspiration for the image 'Who's Creating the Kawaii Girl?'.
However, this Facebook group also prompted Ying-Chieh to think more deeply about how the users were able to generate these sexualised images of women and girls so easily. A lot of the images represented a very stereotypical model of attractiveness, which led her to suspect that the underlying datasets of these AI models were probably very unrepresentative, reinforcing stereotypical standards of beauty and attractiveness.
Was there a specific reason you focussed on issues like data bias and gender oppression related to AI?
Gender equality has always been something that Ying-Chieh has been passionate about, but she had never considered how the issue related to AI. She came to realise that the relationship wasn't that different from other industries which oppress women, because AI is fundamentally produced by humans and fed by data that humans have created. Therefore, the problems with AI being used to harm women are not isolated in the technology, but rooted in systemic social injustices that have long mistreated and misrepresented women and other marginalised groups.
In her research stages, Ying-Chieh explored the 'bias loop': AI models trained on data selected by humans, or derived from historical data, will create biased images. At the same time, the images created by AI will serve as new training data, further embedding our historical biases into future AI tools. The concept of the 'bias loop' resonated with Ying-Chieh's interest in gender equality and made her concerned about uses and developments of AI which privilege some groups at the expense of others, especially where this repeats itself and causes inescapable cycles of injustice.
Can you describe the process for creating this work?
Ying-Chieh started by developing some initial sketches and engaging in discussions with Jane, the programme coordinator, about her work. As you can see below, 'Who's Creating the Kawaii Girl?' has evolved significantly from its initial sketch, but 'Can Your Data Be Seen?' has remained quite similar to Ying-Chieh's original design.
The initial sketches of 'Can Your Data Be Seen?' (left) and 'Who's Creating the Kawaii Girl?' (right)
Ying-Chieh also engaged in some activities during classes which helped her to learn more about AI and its ethical implications. In one of these games, 'You Say, I Draw', one student described an image and the other drew it relying purely on their partner's description, without knowing what they were drawing.
This game highlighted the role that data providers and prompters play in the development of AI and challenged Ying-Chieh to think more carefully about how data was being used to train content generation tools. During the game, she realised that the personality, background, and experiences of the prompter really influenced what the resulting image looked like. In the same way, the type of data and the developers creating AI tools can really influence the final outputs and results of a system.
Better Images of AI aims to counteract common stereotypes and misconceptions about AI. How did you incorporate this goal into your artwork?
Ying-Chieh's aim was to explore and address biases present in AI models in order to contribute to the Better Images of AI mission, so that the future development of AI can be more diverse and inclusive. She hopes that her illustrations will make it easier for the public to understand issues about biases in AI which are often inaccessible or shielded from wider comprehension.
Her images draw attention to how AI's training data is biased and how AI is being used to reinforce gender stereotypes about women. From this, Ying-Chieh hopes that further action can be taken to improve data collection and processing methods, as well as to establish laws and rules limiting image generation where it exploits or harms individuals.
What have been the biggest challenges of creating a 'better image of AI'? Did you encounter any challenges in trying to represent AI in a more nuanced and realistic way?
Ying-Chieh spoke about the challenge of striking the right balance between designing images that audiences could widely recognise as relating to AI and not falling into common tropes that misrepresent AI (like robots, descending code, or the colour blue). She also found it difficult to keep the images from becoming so metaphorical that audiences might misinterpret them.
Based on our criteria for selecting images, we were pleased to accept 'Who's Creating the Kawaii Girl?', but had to make the difficult decision not to upload 'Can Your Data Be Seen?' because it didn't communicate and conceptualise AI clearly enough. What do you think of this feedback, and was it something that you considered in the process?
Ying-Chieh shared that, throughout the design process, she had been conscious that her images might not be easily recognisable as communicating ideas about AI. She made some efforts to counteract this: for example, in 'Can Your Data Be Seen?' she made the figures all identical to represent data points, and the lighter-coloured lines on the faces and bodies of the figures represent the technical elements behind AI image-recognition technology.
How has working on this project influenced your own views on AI and its impact?
Before starting this project, Ying-Chieh said that her opinion towards AI had been quite positive. She was largely influenced by things that she had seen and read in the news about how AI was going to benefit society. However, from her research on Facebook, she has become increasingly aware that this is not entirely true. There are many dangerous ways that AI can be used which are already lurking in the shadows of our daily lives.
What have you learned through this process that you would like to share with other artists or the public?
The biggest takeaway from this project for Ying-Chieh is how camera angles, zooming, or object positioning can strongly influence the message that an image conveys. For example, in the initial sketches of 'Can Your Data Be Seen?', Ying-Chieh explored how she could best capture the relationship of power through different depths of perspective.
Furthermore, when exploring ideas about how to reflect the oppressive nature of AI, Ying-Chieh enlarged the shadow's presence in the frame of 'Who's Creating the Kawaii Girl?'. By doing this, the shadow reinforces the strong power that elite groups hold over the creation of content about marginalised groups, a power which is often hidden and kept from wider knowledge.
Ying-Chieh Lee (she/her) is a visual creator, illustrator, and comic artist from Taiwan. Her work often focuses on women-related themes and realistic, dark-style comics.