Humans (back) in the Loop

Pictures of Artificial Intelligence often strip out the human side of the technology completely, erasing all traces of human agency. Better Images of AI seeks to rectify this. Yet picturing the AI workforce is complex and nuanced. Our new images from Humans in the Loop attempt to present more of the positive side, while bringing the human back into the centre of AI’s global image.

The ethics of AI supply chains have long been under fire. Yet, separate from the material implications of its production, the ‘new digital assembly line’, which Mary L. Gray and Siddharth Suri explore in their book Ghost Work, has a much more immediate (and largely unrecognised) human impact: in particular, the all-too-frequent exploitation characterising so-called ‘clickwork’. Better Images of AI has recently coordinated with award-winning social enterprise Humans in the Loop to attempt to rectify this endemic removal of the human from discussions, with a focus on images concerning the AI supply chain and the field of artificial intelligence more broadly.

‘Clickwork’, more appropriately referred to as ‘data work’, is an umbrella term signifying a whole host of human involvements in AI production. One of the areas in which human input is most needed is data annotation, an activity that provides training data for Artificial Intelligence. What used to be considered “menial” and “low-skilled” work is today a nascent field with its own complexities and skill requirements, involving extensive training. However, tasks such as this, often ‘left without definition and veiled from consumers who benefit from it’ (Gray & Suri, 2019), leave these individuals relegated to the realm of “ghost work”.

While the nature of ‘ghost work’ is not inherently positive or negative, the resulting lack of protection for these data workers can produce some highly negative outcomes. Recently, Time Magazine uncovered practices that were not only hidden but deliberately misrepresented. The article collates testimonies from Sama employees contracted as outsourced Facebook content moderators. These testimonies reveal a workplace characterised by ‘mental trauma, intimidation, and alleged suppression’. The article ultimately concludes that, through the hidden nature of this sector of the supply chain, Facebook profits from exploitation and from the exportation of trauma away from the West and toward the developing world.

So how can we help to mitigate these risks of ‘ghost work’ within the AI supply chain? It starts with making the invisible visible. As Noopur Raval (2021) puts it, to collectively ‘identify and interrupt the invisibility of work’ constitutes an initial step towards undermining the ‘deliberate construction and maintenance of “screens of invisibility”’. The prevalent images of AI, circulated as an extension of ‘AI imperialism’ within the West (an idea further engaged with by Karen Hao, 2022), remove any semblance of human agency or production and conceal the potential for human exploitation. To counter them, we were keen to show the people involved in creating the technology.

These people are highly varied, not just the homogeneous Silicon Valley types portrayed in popular media. They include silicon miners, programmers, data scientists, product managers, data workers, content moderators, managers and many others from all around the globe; these are the people who are the intelligence behind AI. Our new images from Humans in the Loop attempt to challenge wholly negative depictions of data work, whilst simultaneously bringing attention to the exploitative practices and employment standards within the fields of data labelling and annotation. There is still, of course, work to do, as the Founder, Iva Gumnishka, detailed in the course of our discussion with her. The glossy, more optimistic look at data work which these images present must not be taken as licence to excuse the ongoing poor working conditions, lack of job stability, or exposure to damaging or traumatic content which many of these individuals still face.

As well as meeting our aim of portraying the daily work at Humans in the Loop and showcasing the ‘different faces behind [their] projects’, our discussions with the Founder gave us the opportunity to explore and communicate some of the potential positive outcomes of roles within the supply chain. These include the greater flexibility which employment such as data annotation can allow, in contrast to the more precarious side of gig-style working economies.

In order to harness the positive potential of new employment opportunities, especially those for displaced workers, Humans in the Loop navigates major geopolitical factors impacting their employees (for example the Taliban government in Afghanistan, the embargoes on Syria, and more recently the war in Ukraine). Gumnishka also described issues connected with this brand of data work, such as convincing ‘clients to pay dignified wages for something that they perceive as “low-value work”’ and attempting to avoid the ‘race to the bottom’ within this arena. Another challenge is helping the workers themselves to acknowledge their central role in the industry and the impact their work is having. When asked what she would identify as the central issue within present AI supply chain structures, her emphatic response was that ‘AI is not as artificial as you would think!’. The cloaking of the hundreds of thousands of people working to verify and annotate the data, all in the name of selling products as “fully autonomous” and possessing “superhuman intelligence”, only acts to the detriment of its very human components. By including more of the human faces behind AI, as a completely normal and necessary part of it, Gumnishka hopes to trigger the unveiling of AI’s hidden labour inputs. In turn, by sparking widespread recognition of the complexity, value, and humanity behind work such as data annotation and content moderation (as in the case of Sama), the ultimate goal is an overhaul of data workers’ employment conditions and wages, and their acknowledgement as a central part of AI futures.

In our gallery we attempt to represent both sides of data work. Max Gruber, another contributor to the Better Images of AI gallery, engages with the darker side of gig work in greater depth through his work, included in our main gallery and below. It presents ‘clickworkers’ as they predominantly are at present: precariously paid workers in a digital gig economy, performing monotonous work for little to no compensation. His series of photographs depicts 3D-printed figures stationed in front of their computers, to the uncomfortable effect of quite literally illustrating the term “human resources”, as well as the rampant anonymity which perpetuates exploitation in the area. The figure below, ‘Clickworker 3d-printed’, is captioned ‘anonymized, almost dehumanised’; the obscured face and the identical ‘worker’ in the background of the image cement the individual’s status as unacknowledged labour in the AI supply chain.

Max Gruber / Better Images of AI / Clickworker 3d-printed / CC-BY 4.0

We can contrast this with the stories behind Humans in the Loop’s employees.

Nacho Kamenov & Humans in the Loop / Better Images of AI / Data annotators labeling data / CC-BY 4.0

This image, titled ‘Data annotators labeling data’, immediately offers up two very real data workers, their faces visible and their contribution to the production of AI clearly outlined. The accompanying caption details the function of data annotation, when it is needed and what purpose it serves; there is no masking, no hidden element to their work, as previously.

Gumnishka shares that some of the people who appear in the images have continued their path as migrants and refugees to other European countries, for example the young woman in the blog cover photo. Others have other jobs (one of the pictures shows an architect who, having now found work in her field, continues to come to training and remains part of the community). For others, like the woman in the colourful scarf, data work becomes their main source of livelihood and they are happy to pursue it as a career.

By adding the human faces back into the discussions surrounding artificial intelligence, we see not just the Silicon Valley or business-suited tech workers who occasionally appear in pictures, but the vast armies of workers across the world, many of them women, many of them outside of the West.

The image below is titled ‘A trainer instructing a data annotator on how to label images’. It helps address the lack of clarity on what exactly data work entails, and the level of training, expertise and skill required to carry it out, showing some of that extensive training in action, in this case delivered by the Founder herself.

a young woman sitting in front of a computer in an office while another woman standing next to her is pointing at something on her screen
Nacho Kamenov & Humans in the Loop / Better Images of AI / A trainer instructing a data annotator on how to label images / CC-BY 4.0 (Also used as cover image)

Although these images do not, of course, represent the experience of all data workers, they join a growing awareness of conditions enabled by contributions such as the recent Time article, the work of Gray and Suri, and Kate Crawford’s book Atlas of AI. Together with the counterbalance provided by Max Gruber’s images, we hope the addition of the photographs from Humans in the Loop provides inspiration for others.

We hope to keep adding images of the real people behind AI, especially those most invisible at present. If you work in AI, could you send us your pictures? How would you show the real people behind AI? Who is still going unnoticed or unheard? Get involved with the project here: https://betterimagesofai.org/contact.

Avoiding toy robots: Redrawing visual shorthand for technical audiences

Two pencil-drawn 1960s-style toy robots being scribbled out by a pencil on a pale blue background

Visually describing AI technologies is not just about reaching out to the general public; it also means getting marketing and technical communication right. Brian Runciman is Head of Content at the British Computer Society (BCS), The Chartered Institute for IT. His audience is not unfamiliar with complex ideas, so what are the expectations for accompanying images?

Brian’s work covers the membership magazine for BCS as well as a publicly available website full of news, reports and insights from members. The BCS membership is highly skilled, technically minded and well read, so the content on the site and in the magazine needs to be appealing and engaging.

“We view our audience as the educated layperson,” Brian says. “There’s a base level of knowledge you can assume. You probably don’t have to explain what machine learning or adversarial networks are conceptually and we don’t go into tremendous depth because we have academic journals that do this.” 

Of course, writing for a technical audience also means Brian and his colleagues will get smart feedback when something doesn’t quite fit expectations. “With a membership of over 60 thousand, there are some that are very engaged with how published material is presented, and quite rightly,” Brian says. “Bad imagery affects the perception of what something really is.”

So what are the rules that Brian and his writers follow? As with many publications, there is a house style that they try to keep to, and this includes the use of photography and natural imagery. This is common among news publications, which choose it over illustration, graphics or highly manipulated images, in some cases to encourage trust in the readership that images are accurate and have not been altered. It also tends to mean the use of stock images.

“Stock libraries need to do better,” Brian observes. “When you’re working quickly and stuff needs to be published, there’s not a lot of time to make image choices and searching stock libraries for natural imagery can mean you end up with a toy robot to represent things that are more abstract.”

“Terminators still come up as a visual shorthand,” he says. “But AI and automation designers are often just working to make someone’s use of a website a little bit slicker or easier. If you use a phone or a website to interact with an automated process it does what it is supposed to do and you don’t really notice it – it’s invisible and you don’t want to see it. The other issue is that when you present AI as a robot people think it is embodied. Obviously, there is a crossover but in process automation, there is no crossover, it’s just code, like so much else is.”

Tone things down and make them relatable 

Brian’s decades-long career in publishing means he has some go-to methods for working out the best way to represent an article. “I try to find some other aspect of the piece to focus on,” he says. “So in a piece about weather modelling, we could try and show a modelling algorithm but the other word in the headline is weather and an image of this is something we can all relate to.” 

Brian’s work also means that he has observed trends in the use of images. “A decade or so ago it was more important to show tech,” he says. “In a time when that was easily represented by gadgets and products this was easier than trying to describe technologies like AI. Today we publish in times when people are at the heart of tech stories and those people need to look happy.”

Pictures of people are a good way to show the impact of AI and its target users, but this also raises other questions about diversity, especially if the images are predominantly of middle-aged white men. “It’s not necessary,” says Runciman. “We have a lot of head shots of our members that are very diverse. We have people from minorities, researchers who are not white or middle-aged – of which there are loads. When people say they can’t find diverse people for a panel I find it ridiculous, there are so many people out there to work with. So we tend to focus on the person who is working on a technology and not just the AI itself.”

The use of images is something that Brian deals with every day, so what would be on his wish list when it comes to better images of AI? “No cartoon characters and minimal colour usage – something subtle,” he muses. “Skeletal representations of things – line representations of networks, rendered in subtle and fewer colours.” This nods at the clichés of blue and strange bright lights that a simple search for AI images turns up; but as Brian points out, there are subtler ways of depicting a network, and images for publishing can still be attractive without being an eyesore.