Handmade, Remade, Unmade A.I.

Two digitally illustrated green playing cards on a white background, with the letters A and I in capitals and lowercase calligraphy over modified photographs of human mouths in profile.

The Journey of Alina Constantin’s Art

Alina’s image, Handmade A.I., was one of the first additions to the Better Images of AI repository. The description affixed to the image on the site outlines its ‘alternative redefinition of AI’, bringing back into play the elements of human interaction which are so frequently excluded from discussions of the tech. Yet now, a few months on from the introduction of the image to the site, Alina’s work itself has undergone some ‘alternative redefinition’. This blog post explores the journey of this particular image, from the details of its conception to its numerous uses since: How has the image itself been changed, adapted in significance, and semantically repurposed?

Alina Constantin is a multicultural game designer, artist and organiser whose work focuses on unearthing human-sized stories out of large systems. For this piece, some of the principles of machine learning like interpretation, classification, and prioritisation were encoded as the more physical components of human interaction: ‘hands, mouths and handwritten typefaces’, forcing us to consider our relationship to technology differently. We caught up with Alina to discuss further the process (and meaning) behind the work.

What have been the biggest challenges in creating Better Images of AI?

Representing AI comes with several big challenges. The first is the ongoing inundation of our collective imagination with skewed imagery, falsely representing these technologies in practice, in the name of simplification, sensationalism, and our human impulse towards personification. The second challenge is the absence of any single agreed-upon definition of AI, and obviously the complexity of the topic itself.

What was your approach to this piece?

My approach was largely an intricate process of translation. To stay focused upon the ‘why of A.I’ in practical terms, I chose to focus on elements of speech, also wanting to highlight the human sources of our algorithms in hand drawing letters and typefaces. 

I asked questions, and selected imagery that could be both evocative and different. For the back side of the cards, not visible in this image, I bridged the interpretive logic of tarot with the mapping logic of sociology, choosing a range of 56 words from varying fields starting with A/I to allow for more personal and specific definitions of A.I. To take this idea further, I then mapped the idea to 8 different chess moves, extending into a historical chess puzzle that made its way into a theatrical card deck, which you can play with here. You can see more of the process of this whole project here.

This process of translating A.I via my own artist’s tool set of stories/gameplay was highly productive, requiring me to narrow down my thinking to components of A.I logic which could be expressed and understood by individuals with or without a background in tech. The importance of prototyping, and discussing these ideas with audiences both familiar and unfamiliar with AI, helped me validate and adjust my own understanding and representation–a crucial step for all of us to ensure broader representation within the sector.

So how has Alina’s Better Image been used? Which meanings have been drawn out, and how has the image been redefined in practice? 

One implementation of ‘Handmade A.I.’, on the website of one of our affiliated organisations, We and AI, remains largely aligned with the artist’s reading of it. According to We and AI, the image was chosen for its re-centring of the human within the AI conversation: human hands still hold the cards, and humanity is responsible for their shuffling and their design (though not necessarily completely in control of which ones are dealt). Human agency continues to direct the technology, not the other way round. As a key tenet of the organisation, and a key element of the image identified by Alina, this all adds up.

https://weandai.org/, use of Alina’s image

A usage by the Universität Hamburg, accompanying a lecture on responsibility in the AI field, follows a similar logic. The additional slant of human agency considered from a human rights perspective broadens Alina’s initial image further. The components of human interaction which she featured expand into a more universal representation not just of human input to these technologies but of human culpability–the blood, in effect, is on our hands.

Universität Hamburg use of Alina’s image

Another implementation, this time by the Digital Freedom Fund, comes with an article concerning the importance of our language around these new technologies. Deviating slightly from the visual, and more into the semantics of artificial intelligence, the use may at first seem slightly unrelated. However, as the content of the article develops, concerns surrounding ‘technocentrism’ rather than anthropocentrism in our discussions of AI become a focal point. Alina’s image captures the need to reclaim the language surrounding these technologies, placing the cards firmly back in human hands. The article directly states, ‘Every algorithm is the result of a desire expressed by a person or a group of persons’ (Meyer, 2022). Technology is not neutral. Like a pack of playing cards, it is always humanity which creates and shuffles the deck.

Digital Freedom Fund use of Alina’s image

This is not the only instance in which Alina’s image has been used to illustrate the relation of AI and language. The question “Can AI really write like a human?” seems to be on everyone’s lips, and ‘Handmade A.I.’, with its deliberately humanoid typeface, is its natural visual partner. In a blog post for LSE, Marco Lehner (of BR AI+) discusses the employment of a GPT-3 bot, and whilst allowing for slightly more nuance, ultimately reaches a similar crux–human involvement remains central, no matter how much ‘automation’ we attempt.

Even as ‘better’ images such as Alina’s are provided, we still see the same stock images used over and over again. Issues surrounding the speed and need for images in journalistic settings, as discussed by Martin Bryant in our previous blog post, mean that people will continue to reach almost instinctively for the ‘easy’ option. But when asked what exactly these images contribute to a piece, there is often a marked silence. The ubiquitous image of a humanoid robot is meaningless; Alina’s images are specific. They deal in the realities of AI, in a real facet of the technology, and are thus not universally applicable. They speak to considerations of human agency and responsible AI practice, and, unlike the stock photos, they do not act to the detriment of public understanding of our tech future.

Branching Out: Understanding an Algorithm at a Glance

A window of three images. On the right is a photo of a big tree standing in a field of grass beneath a bright blue sky. The two on the left are simplifications of it created with a decision tree algorithm. The work illustrates a popular type of machine learning model: the decision tree. Decision trees work by splitting the population into ever smaller segments. I try to give people an intuitive understanding of the algorithm. I also want to show that models are simplifications of reality, but can still be useful, or in this case visually pleasing. To create this I trained a model to predict pixel colour values, based on an original photograph of a tree.
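The blocky simplifications can be reproduced in a few dozen lines. The sketch below is illustrative only, not Dimmendaal’s actual implementation: it fits a depth-limited decision tree that predicts a pixel’s grey value from its (x, y) coordinates, so each leaf becomes a flat rectangle of the mean colour, and deeper trees recover more of the original image.

```python
def region_sse(vals):
    """Sum of squared errors around the mean of a list of values."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)

def fit(img, x0, x1, y0, y1, depth):
    """Fit a decision tree predicting img[y][x] from (x, y) on the
    half-open rectangle [x0, x1) x [y0, y1). Greedily choose the
    axis-aligned split that minimises the summed squared error."""
    vals = [img[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    leaf = ("leaf", sum(vals) / len(vals))
    if depth == 0:
        return leaf
    best = None  # (total_sse, axis, cut)
    for cut in range(x0 + 1, x1):  # candidate vertical splits: x < cut
        l = [img[y][x] for y in range(y0, y1) for x in range(x0, cut)]
        r = [img[y][x] for y in range(y0, y1) for x in range(cut, x1)]
        score = region_sse(l) + region_sse(r)
        if best is None or score < best[0]:
            best = (score, "x", cut)
    for cut in range(y0 + 1, y1):  # candidate horizontal splits: y < cut
        t = [img[y][x] for y in range(y0, cut) for x in range(x0, x1)]
        b = [img[y][x] for y in range(cut, y1) for x in range(x0, x1)]
        score = region_sse(t) + region_sse(b)
        if best is None or score < best[0]:
            best = (score, "y", cut)
    if best is None:  # region is a single pixel: nothing left to split
        return leaf
    _, axis, cut = best
    if axis == "x":
        kids = (fit(img, x0, cut, y0, y1, depth - 1),
                fit(img, cut, x1, y0, y1, depth - 1))
    else:
        kids = (fit(img, x0, x1, y0, cut, depth - 1),
                fit(img, x0, x1, cut, y1, depth - 1))
    return ("split", axis, cut, *kids)

def predict(tree, x, y):
    """Walk down to the leaf covering pixel (x, y) and return its value."""
    while tree[0] == "split":
        _, axis, cut, lo, hi = tree
        tree = lo if (x if axis == "x" else y) < cut else hi
    return tree[1]
```

Rendering `predict` over every coordinate at increasing depths yields exactly the left-to-right progression of the triptych: a handful of flat blocks at depth one or two, the recognisable photograph as the tree deepens.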

The impetus for the most recent contributions to our image repository was described by the artist as promoting understanding of present AI systems. Rens Dimmendaal, Principal Data Scientist at GoDataDriven, discussed with Better Images of AI the need to cut through all the unnecessary complication of ideas within the AI field; a goal which he believes is best achieved through visual media. 

Discussions of the ‘black box’ of AI are not exactly new, and the recent calls for explainability statements to accompany new tech from Best Practice AI are certainly attempting to address the problem at some level. Tim Gordon writes of the greater ‘transparency’ required in the field, as well as the implicit acknowledgement that any wider impacts have been considered. Yet, for the broader spectrum of individuals whose lives are already being influenced by AI technologies, an extensive, jargon-filled document on the various inputs and outputs of any single algorithm is unlikely to provide much relief. 

This is where Dimmendaal comes in: to provide ‘understanding at a glance’ (and also to ‘make a pretty picture’, in his own words). The artist began with the example of the decision tree. All existing tutorials on this topic, in his view, use datasets which only make the concept harder to understand–search ‘decision tree titanic’ for a clear illustration of this. Another explanation was provided by r2d3, yet for Rens this still employed an overly complicated use case. Hence, this selection of images.

Rens cites his inspiration for this particular project as Roger Johansson’s recreation of the ‘Mona Lisa’, using genetic programming. In the original, Johansson attempts to reproduce the piece with a combination of semi-transparent polygons and an evolutionary algorithm, gradually mutating the initial randomly generated polygons to move closer and closer to the original image. Rens recreated elements of this code as a starting point, then with the addition of the triptych format and implementation of a decision tree style algorithm made the works his own. 
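The evolutionary loop Johansson describes is simple at heart: propose a random mutation, keep it only if the result is closer to the target. As a toy illustration (not Johansson’s or Dimmendaal’s code), the sketch below applies that hill-climbing loop to a flat list of grey values rather than semi-transparent polygons:

```python
import random

def evolve(target, steps=5000, seed=0):
    """Toy hill-climbing sketch of the evolutionary loop: start from a
    random candidate, apply one random mutation per step, and keep the
    mutant only if it is at least as close to the target image (here a
    flat list of 0-255 grey values standing in for the polygon genome)."""
    rng = random.Random(seed)

    def error(cand):
        return sum((c - t) ** 2 for c, t in zip(cand, target))

    current = [rng.randrange(256) for _ in target]
    for _ in range(steps):
        mutant = list(current)
        i = rng.randrange(len(mutant))  # pick one "gene" to perturb
        mutant[i] = max(0, min(255, mutant[i] + rng.randint(-16, 16)))
        if error(mutant) <= error(current):  # keep non-worsening mutations
            current = mutant
    return current
```

In Johansson’s original, the genome is a set of polygon vertices and colours and the error is measured against the ‘Mona Lisa’ pixel by pixel, but the accept-if-closer structure is the same.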

Rens Dimmendaal / Better Images of AI / Man / CC-BY 4.0

In keeping with his motivations–making a ‘pretty picture’, but chiefly contributing to the greater transparency of AI methodologies–Dimmendaal chose the triptych format to present his outputs. The mutation of the image is shown as a fluid, interactive process, morphing across the triptych from left to right, from abstraction to the original image itself. Getting a glimpse inside the algorithm in this manner allows for the ‘understanding at a glance’ which the artist wished to provide–the image shifts before our eyes, from the initial input to the final output.

Rens Dimmendaal & David Clode / Better Images of AI / Fish / CC-BY 4.0

Rens Dimmendaal & Jesse Donoghoe / Better Images of AI / Car / CC-BY 4.0

Engaging with the decision tree was not only a practical decision, related to the prior lack of adequate tutorials, but also an artistic one. As Dimmendaal explains, ‘applying a decision tree to an actual tree was just too poetic an opportunity to let slide.’ We think it paid off…

Dimmendaal has worked with numerous algorithmic systems previously (including k-means, nearest neighbours, linear regression, and SVMs) but cites this particular combination of genetic programming, decision trees and the triptych format as producing the nicest outcome. More of his work can be found both in our image repository, and on his personal website.

Whether or not a detailed understanding of algorithms is something you are interested in, you can input your own images to the tool Rens created for this project here and play around with making your own decision tree art. What do images relevant to your industry, product or interests look like seen through this process? Make sure to tag Better Images of AI in your AI artworks, and credit Rens. We’re excited to see what you come up with!

More from Better Images: Twitter | LinkedIn

More from the artist: Twitter | Linkedin

Humans (back) in the Loop

Pictures of artificial intelligence often erase the human side of the technology completely, leaving no trace of human agency. Better Images of AI seeks to rectify this. Yet picturing the AI workforce is complex and nuanced. Our new images from Humans in the Loop attempt to present more of the positive side, as well as bringing the human back into the centre of AI’s global image.

The ethics of AI supply chains have long been under fire. Yet, separate from the material implications of its production, the ‘new digital assembly line’ which Mary L. Gray and Siddharth Suri explore in their book Ghost Work carries a much more immediate (and largely unrecognised) human impact: in particular, the all-too-frequent exploitation characterising so-called ‘clickwork’. Better Images of AI has recently coordinated with award-winning social enterprise Humans in the Loop to attempt to rectify this endemic removal of the human from such discussions, with a focus on images concerning the AI supply chain, and the field of artificial intelligence more broadly.

‘Clickwork’, more appropriately referred to as ‘data work’, is an umbrella term signifying a whole host of human involvements in AI production. One of the areas in which human input is most needed is that of data annotation, an activity that provides training data for artificial intelligence. What used to be considered “menial” and “low-skilled” work is today a nascent field with its own complexities and skills requirements, involving extensive training. However, tasks such as this, often ‘left without definition and veiled from consumers who benefit from it’ (Gray & Suri, 2019), result in these individuals finding themselves relegated to the realm of “ghost work”.

While the nature of ‘ghost work’ is not inherently positive or negative, the resultant lack of protection to which these data workers are subject can produce some highly negative outcomes. Recently, Time magazine uncovered practices which were not only being hidden, but deliberately misrepresented. The article collates testimonies from Sama employees, contracted as outsourced Facebook content moderators. These testimonials reveal a workplace characterised by ‘mental trauma, intimidation, and alleged suppression’. The article ultimately concludes that, through the hidden quality of this sector of the supply chain, Facebook profits through exploitation, and through the exportation of trauma away from the West and toward the developing world.

So how can we help to mitigate these associated risks of ghost work within the AI supply chain? It starts with making the invisible visible. To counter the prevalent pictures of AI which remove any semblance of human agency or production, and conceal the potential for human exploitation, we were very keen to show the people involved in creating the technology. These people are very varied, and not just the homogenous Silicon Valley types portrayed in popular media. They include silicon miners, programmers, data scientists, product managers, data workers, content moderators, managers and many others from all around the globe; these are the people who are the intelligence behind AI. Our new images from Humans in the Loop attempt to challenge wholly negative depictions of data work, whilst simultaneously bringing attention to the exploitative practices and employment standards within the fields of data labelling and annotation. There is still, of course, work to do, as the Founder, Iva Gumnishka, detailed in the course of our discussion with her. The glossy, more optimistic look at data work which these images present must not be taken as licence to excuse the ongoing poor working conditions, lack of job stability, or exposure to damaging or traumatic content which many of these individuals are still facing.

As well as meeting our aim of portraying the daily work at Humans in the Loop and showcasing the ‘different faces behind [their] projects’, our discussions with the Founder gave us the opportunity to explore and communicate some of the potential positive outcomes of roles within the supply chain. These include the greater flexibility which employment such as data annotation might allow for, in contrast to the more precarious side of gig-style working economies.

In order to harness the positive potential of new employment opportunities, especially those for displaced workers, Humans in the Loop navigates major geopolitical factors impacting its employees (for example the Taliban government in Afghanistan, the embargoes on Syria and, more recently, the war in Ukraine). Gumnishka also described issues connected with this brand of data work, such as convincing ‘clients to pay dignified wages for something that they perceive as “low-value work”’ and attempting to avoid a ‘race to the bottom’ within this arena. Another challenge lies in enabling the workers themselves to recognise their central role in the industry, and the impact their work is having. When asked what she would identify as the central issue within present AI supply chain structures, her emphatic response was that ‘AI is not as artificial as you would think!’ The cloaking of the hundreds of thousands of people working to verify and annotate data, all in the name of selling products as “fully autonomous” and possessing “superhuman intelligence”, only acts to the detriment of its very human components. By including more of the human faces behind AI, as a completely normal and necessary part of it, Gumnishka hopes to trigger the unveiling of AI’s hidden labour inputs. In turn, by sparking widespread recognition of the complexity, value, and humanity behind work such as data annotation and content moderation–as in the case of Sama–the ultimate goal is an overhaul of data workers’ employment conditions, wages and acknowledgement as a central part of AI futures.

In our gallery we attempt to represent both sides of data work. Max Gruber, another contributor to the Better Images of AI gallery, engages with the darker side of gig work in greater depth through his work, included in our main gallery and below. It presents ‘clickworkers’ as they predominantly are currently: precariously paid workers in a digital gig economy, performing monotonous work for little to no compensation. His series of photographs depicts 3D-printed figures stationed in front of their computers, to the uncomfortable effect of quite literally illustrating the term “human resources”, as well as the rampant anonymity which perpetuates exploitation in the area. The figure below, ‘Clickworker 3d-printed’, is captioned ‘anonymized, almost dehumanised’; the obscured face, and the identical ‘worker’ in the background of the image, cement the individual’s status as unacknowledged labour in the AI supply chain.

Max Gruber / Better Images of AI / Clickworker 3d-printed / CC-BY 4.0

We can contrast this with the stories behind Humans in the Loop’s employees.

Nacho Kamenov & Humans in the Loop / Better Images of AI / Data annotators labeling data / CC-BY 4.0

This image, titled ‘Data annotators labelling data’, immediately offers up two very real data workers, their faces clear and their contribution to the production of AI clearly outlined. The accompanying caption details the function of data annotation, when it is needed, and what purpose it serves; there is no masking, no hidden element to their work, as before.

Gumnishka shares that some of the people who appear in the images have continued their path as migrants and refugees to other European countries, for example the young woman in the blog cover photo. Others have other jobs: one of the pictures shows an architect who, though she has now found work in her field, continues to come to training and remains part of the community. For others, like the woman in the colourful scarf, data work has become their main source of livelihood, and they are happy to pursue it as a career.

By adding the human faces back into discussions surrounding artificial intelligence, we see not just the Silicon Valley or business-suited tech workers who occasionally appear in pictures, but the vast armies of workers across the world, many of them women, many of them outside the West.

The image below is titled ‘A trainer instructing a data annotator on how to label images’. It helps address the lack of clarity about what exactly data work entails, and the level of training, expertise and skill required to carry it out, showing some of that extensive training in visible action–in this case delivered by the Founder herself.

a young woman sitting in front of a computer in an office while another woman standing next to her is pointing at something on her screen
Nacho Kamenov & Humans in the Loop / Better Images of AI / A trainer instructing a data annotator on how to label images / CC-BY 4.0 (Also used as cover image)

Although these images do not, of course, represent the experience of all data workers, they provide inspiration for others–especially in combination with the growing awareness of working conditions enabled by contributions such as the recent Time article, the work of Gray and Suri, Kate Crawford’s book Atlas of AI, and the counterbalance provided by Max Gruber’s images.

We hope to keep adding images of the real people behind AI, especially those most invisible at present. If you work in AI, could you send us your pictures, and how could you show the real people behind AI? Who is still going unnoticed or unheard? Get involved with the project here: https://betterimagesofai.org/contact.

Better Images of AI’s first Artist: Alan Warburton

A photographic rendering of a young black man standing in front of a cloudy blue sky, seen through a refractive glass grid and overlaid with a diagram of a neural network

In working towards providing better images of AI, BBC R&D are commissioning some artists to create stock pictures for open licence use. Working with artists to find more meaningful and helpful yet visually compelling ways to represent AI has been at the core of the project.

The first artist to complete his commission is London-based Alan Warburton. Alan is a multidisciplinary artist exploring the impact of software on contemporary visual culture. His hybrid practice feeds insight from commercial work in post-production studios into experimental arts practice, where he explores themes including digital labour, gender and representation, often using computer-generated images (CGI). 

His artwork has been exhibited internationally at venues including BALTIC, Somerset House, Ars Electronica, the National Gallery of Victoria, the Carnegie Museum of Art, the Austrian Film Museum, HeK Basel, Photographers Gallery, London Underground, Southbank Centre and Channel 4. Alan is currently doing a practice-based PhD at Birkbeck, London looking at how commercial software influences contemporary visual cultures.

Warburton’s first encounters with AI are likely familiar to us all through the medium of disaster and science fiction films that presented assorted ideas of the technology to broad audiences through the late 1990s and early 2000s. 

As an artist, Warburton says it is over the past few years that technological examples have jumped out for him to help create his work. “In terms of my everyday working life, I suppose that rendering – the process of computing photorealistic images – has always been an incredibly slow and complex process, but in the last four or five years various pieces of software that are part of the rendering process have begun to incorporate AI technologies in increasing degrees,” he says. “AI noise reduction or things like rotoscoping are affected, as the very mundane, labour-intensive activities involved in the work of an animator, visual effects artist or image manipulator have been sped up.

“AI has also affected me in the way it has affected everyone else through smart phone technology and through the way I interact with services provided by energy companies or banks or insurance people. Those are the areas that are more obscured, obtuse or mysterious because you don’t really see the systems. But with image processing software I have an insight into the reality of how AI is being used.” 

Warburton’s knowledge of software and AI tools has ensured that he is able to critically analyse which tools are beneficial. “I have been quite discriminatory in the way I use AI tools. There’s workflow tools that speed things up, as well as image libraries and 3D model libraries. But the latter ones provide politically charged content even though it’s not positioned as such. Presets available in software will give you white-skinned Caucasian bodies and allow you to photorealistically simulate people but, for example, there’s hair simulation algorithms that default to Caucasian hair. There’s this variegated tapestry of AI software tools, libraries, databases that you have to be discriminatory in the use of, or be aware of the limitations and bias and voice those criticisms.”

The artist’s personal use of technology is also careful and thought through. “I don’t have my face online,” he says. “There’s no content of me speaking online, I don’t have photographs online. That’s slightly unusual for someone who works as an artist and has necessary public engagement as part of my job, but I’m very aware that anything I put online can be used as training data – if it’s public domain (materials available to the public as a whole, especially those not subject to copyright or other legal restrictions) then it’s fair game.

“Whilst my image is unlikely to be used for nefarious ends or contribute directly to a problematic database, there’s a principle that I stick to and I have stuck to for a very long time. There’s some control over my data, my presence and my image that I like to police although I am aware that my data is used in ways that I don’t understand. Keeping control over that data requires labour, you have to go through all of the options in consent forms and carefully select what you are willing to give away and not. Being discriminatory about how your data is used to construct powerful systems of control and AI is a losing game. You have to some extent to accept that your participation with these systems relies on you giving them access to your data.”

When it comes to addressing the issues of AI representation in the wider world, Warburton can see the issues that need to be solved and acknowledges that there is no easy answer. “Over the past five or ten years we have had waves of visual interpretations of our present moment,” he says. “Unfortunately many of those have reached back into retro tropes. So we’ve had vaporwave and post-internet aesthetics and many different Tumblr vibes trying to frame the present visual culture or the technological now but using retro imagery that seemed regressive. 

“We don’t have a visual language for a dematerialised culture.”

“We don’t have a visual language for a dematerialised culture. It’s very difficult to represent the culture that comes through the conduit of the smartphone. I think that’s why people have resorted to these analogue metaphors for culture. We may have reached the end of these attempts to describe data or AI culture, we can’t use those old symbols anymore and yet we still don’t have a popular understanding of how to describe them. I don’t know if it’s even possible to build a language that describes the way data works. Resorting to metaphor seems like a good way of solving that problem but this also brings in the issue of abstraction and that’s another problem.”

Alan’s experience and interest in this field of work have led to some insightful and recognisable visualisations of how AI operates and what is involved, which can act as inspiration for other artists with less knowledge of the technology. Future commissions from BBC R&D for the Better Images of AI project will enable other artists to use their different perspectives to help evolve this new visual language for dematerialised culture.