Illustrating Data Hazards

A person with their hands on a laptop keyboard is looking at something happening over their screen with a worried expression. They are white, have shoulder-length dark hair and wear a green t-shirt. The overall image is illustrated in a warm, sketchy, cartoon style. Floating in front of the person are three small green illustrations representing different industries, which is what they are looking at. On the left is a hospital building, in the middle is a bus, and on the right is a siren with small lines coming off it to indicate that it is flashing or making noise. Between the person and the images representing industries is a small character representing artificial intelligence, made of lines and circles in green and red (like nodes and edges on a graph), who is standing with its ‘arms’ and ‘legs’ stretched out and two antennae sticking up. A similar pattern of nodes and edges is on the laptop screen in front of the person, as though the character has jumped out of their screen. The overall image makes it look as though the person is worried the AI character might approach and interfere with one of the industry icons.

We are delighted to start releasing some useful new images donated by the Data Hazards project into our free image library. The images are stills from an animated video explaining the project, and offer a refreshing take on illustrating AI and data bias. They take an effective and creative approach to making visible the role of the data scientist and the impact of algorithms, and the project behind the images uses visuals to improve data science itself. Project leaders Dr Nina Di Cara and Dr Natalie Zelenka share some background on the Data Hazards labels, and the inspiration for the animation from which the new images are taken.

Data science has the potential to do so much for us. We can use it to identify new diseases, streamline services, and create positive change in the world. However, there have also been many examples of ways that data science has caused harm. Often this harm is not intended, but its weight falls on those who are the most vulnerable and marginalised. 

Often, too, these harms are preventable. Steps such as testing datasets for bias, talking to the communities affected by a technology, or changing functionality would often be enough to stop people from being harmed. However, data scientists are generally not well trained to think about ethical issues, and even though other fields have many experts on data ethics, it is not always easy for these groups to intersect.

The Data Hazards project was developed by Dr Nina Di Cara and Dr Natalie Zelenka in 2021, and aims to make it easier for people from any discipline to talk together about data science harms, which we call Data Hazards. These Hazards take the form of labels. Like chemical hazard symbols, Data Hazard labels are meant to make people stop and think about risk, not to stop them from using data science altogether.

A person is illustrated in a warm, cartoon-like style in green. They are looking up thoughtfully from the bottom left at a large hazard symbol in the middle of the image. The hazard symbol is a bright orange square tilted 45 degrees, with a black and white illustration of an exclamation mark in the middle, where the exclamation mark shape is made up of tiny 1s and 0s like binary code. To the right-hand side of the image, a small character made of lines and circles (like nodes and edges on a graph) is standing with its ‘arms’ and ‘legs’ stretched out and two antennae sticking up. It faces off to the right-hand side of the image.
Yasmin Dwiputri & Data Hazards Project / Better Images of AI / Managing Data Hazards / CC-BY 4.0

By making it easier for us all to talk about risks, we believe we are more likely to spot them early and have a chance of preventing them. The project is open source, so anyone can suggest new or improved labels, which means we can keep responding to new and changing ethical landscapes in data science.

The project has now been running for nearly two years and in that time we have had input from over 100 people on what the Hazard labels should be, and what safety precautions should be suggested for each of them. We are now launching Version 1.0 with newly designed labels and explainer animations! 

Chemical hazards are well known for their striking visual icons, which many of us see day-to-day on bottles in our homes. For the Data Hazard labels, we wanted to create similar imagery that would communicate the message of each label. For example, how can we represent ‘Reinforces Existing Bias’ (one of the Hazard labels) in a small, relatively simple image?

Image of the ‘Reinforces Existing Bias’ Data Hazard label

We also wanted to create some short videos describing the project that included a data scientist character interacting with ‘AI’, which gave us the challenge of deciding how to create a better image of AI than the typical robot. We were very lucky to work with illustrator and animator Yasmin Dwiputri, and with Vanessa Hanschke, who is doing a PhD at the University of Bristol on understanding responsible AI through storytelling.

We asked Yasmin to share some thoughts from her experience working on the project:

“The biggest challenge was creating an AI character for the films. We wanted to have a character that shows the dangers of data science, but can also transform into doing good. We wanted to stay away from portraying AI as a humanoid robot and have a more abstract design with elements of neural networks. Yet, it should still be constructed in a way that would allow it to move and do real-life actions.

We came up with the node monster. It has limbs which allow it to engage with the human characters and story, but no facial expressions. Its attitude is portrayed through its movements, and it appears in multiple silly disguises. This way, we could still make him lovable and interesting, but avoid any stereotypes or biases.

As AI is becoming more and more present in the animation industry, it is creating a divide in the animation community. While some people are praising the endless possibilities AI could bring, others are concerned it will also replace artistic expressions and human skills.

The Data Hazards project has given me a better understanding of the challenges we face even before AI hits the market. I believe animation productions should be aware of the impact and dangers AI can have, before only speaking of innovation. At the same time, as creatives, we need to learn more about how AI and newer methods, if used correctly, could improve our workflow.”

Yasmin Dwiputri

Now that these wonderful resources have been created, we have been able to release them on our website and will be using them for the training, teaching and workshops that we run as part of the project. You can view the labels and the explainer videos on the Data Hazards website. All of our materials are licensed as CC-BY 4.0 and so can be used and re-used with attribution.

We’re also really excited to see some of them featured on the Better Images of AI website, and hope they will be helpful to others who are trying to represent data science and AI in their work. A crucial part of AI ethics is ensuring that we do not oversell or exaggerate what AI can do, so the way we visualise AI is hugely important to the public’s perception of AI, and to our ability to do ethical data science!

Cover image by Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / CC-BY 4.0

Launch of a Guide for Users and Creators of Images of AI

Some screenshots of the new Better Images of AI Guide for Users and Creators

On 24 January, the Better Images of AI project launched a Guide for Users and Creators of images of AI at a reception in London. The aim of the Guide is to lay out some key findings from Dr Kanta Dihal’s research Better Images of AI: Research-Informed Diversification of Stock Imagery of Artificial Intelligence, in a format which makes it easy for users, creators and funders of images relating to AI to refer to. 

Mark Burey, Head of Marketing and Communications at the Alan Turing Institute, welcomed an audience of AI communicators, researchers, journalists, practitioners and ethicists. The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, hosted the event and is one of Better Images of AI’s key founding supporters.

Dr Kanta Dihal, of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, introduced the Guide, summarised its contents, and gave an overview of the research project.

Dr Kanta Dihal presents the new Better Images of AI guide at the Turing Institute

This Guide presents the results of a year-long study into alternative ways of creating images of AI. The research, led by Dr Dihal, included roundtable and workshop conversations with over 100 experts from a range of different fields. Participants from media and communications, the tech sector, policy, research, education and the arts dug down into the issues surrounding how we communicate visually and appraised the utility and impact of the images already published in the Better Images of AI library.

Dr Dihal took the opportunity to thank the many research participants in attendance, as well as the team at We and AI who coordinated the Arts and Humanities Research Council-funded project, and expressed appreciation to BBC R&D for donations in kind.

Finishing the presentations was Tania Duarte, who managed the research project team at We and AI and who also coordinates the global community which makes up the Better Images of AI collaboration. Tania highlighted the contributions of the volunteers and non-profit organisations who have contributed to the mission to explore how to create more realistic, varied and inclusive images of AI. Their drive to address various issues caused by the misconceptions fuelled by current trends in visual messaging about AI has been inspiring and informative.

Tania expressed the hope that the recommendations from Dr Dihal’s new research will motivate funders and sponsors to support the Better Images of AI project so that it can meet the demand for more images. The Guide describes the need, expressed by participants, for images representing a greater diversity of perspectives, covering more topics, and offering more image choices within those topics. This need is also voiced by users of the gallery, a selection of whom Tania shared during the presentation; many have now used all the available images and cannot easily find more.

Logos of various organisations and publications which have used images from the Better Images of AI library
Organisations which have used images from the Better Images of AI library

The Q&A became a fascinating discussion with the expert audience, covering topics including the use of AI-generated images, typing robots used to illustrate ChatGPT, and the design of assistive robots.

A pdf version of Better Images of AI: A Guide for Users and Creators is now available to download here.

You can download images for free under Creative Commons licences here.

For more detailed advice on creating specific briefs and working with designers, the team at Better Images of AI can be commissioned to work on visual communications projects.

Once again, we thank the research participants, attendees, project team and wider community for helping to produce this Guide, which we hope will help increase the provision and use of better images of AI!