Images Matter!

Woman to the left, jumbled up letters entering her ear

AI in Translation

You often hear the phrase “words matter”: words help us construct mental images in our minds and make sense of the world around us. In the same framing, “images matter” too. How we depict the state of technology (imagined, current or future), visually and verbally, helps us position ourselves in relation to what is already there and what is coming.

The way these technologies are visualized and expressed in combination tells us what an emerging technology looks like, and how we should expect to interact with it. If AI is always depicted as white, gendered robots, then the majority of AI systems we actually interact with around the clock go unnoticed. What we do not notice, we cannot react to. When we do not react, we become part of the flow of the dominant (and presently incorrect) narrative. This is why we need better images of AI, as well as a language overhaul.

These issues are not limited to the English-speaking world. I was recently asked to give a lecture at a Turkish university on artificial intelligence and the future of work. Over the years I have presented on this and similar topics (AI and the future of the workplace, the future of HR) on a number of occasions. As an AI ethicist and lecturer, I also frequently discuss the uses of AI in human resources, workplace datafication and employee/candidate surveillance. The difference this time? I was asked to hold the lecture in Turkish.

Yes, Turkish is my native language. However, for more than 15 years I have been using English in my day-to-day professional interactions. In English, I can talk about AI and ethics, bias, social justice, and policy for hours. When discussing the same topics in Turkish, though, I need a dictionary to translate some of the technical terminology. So, during my preparations for this presentation, I went down a rabbit hole: specifically, one concerning how connected biases in language and images shape overarching narratives of artificial intelligence.

Gender and Race Bias in Natural Language Models

In 2017, Caliskan, Bryson and Narayanan showed in their pioneering work that semantics (the meanings of words) derived automatically from language corpora contain human-like biases. The authors demonstrated that natural language models, built by parsing large corpora derived from the internet, reflect human and societal gender and racial biases. The evidence came from word embeddings, a method of representation in which words that have similar meanings or tend to be used together are mapped to nearby points in a high-dimensional vector space. In other words, embeddings capture the hidden patterns of word co-occurrence statistics in language corpora, which include grammatical and semantic information. As Caliskan et al. explain, the thesis behind word embeddings is that words that are closer together in the vector space are semantically closer in some sense. The research showed, for example, that Google Translate converts occupations in Turkish sentences in gendered ways, even though Turkish is a gender-neutral language. It translates:

“O bir doktor. O bir hemşire.” to these English sentences: “He is a doctor. She is a nurse.” And “O bir profesör. O bir öğretmen.” to these English sentences: “He is a professor. She is a teacher.”

Such results reflect the gender stereotypes embedded within the language models themselves, and such subtle shifts have serious consequences. NLP tasks such as keyword search and matching, translation, web search, and text generation/recognition/analysis can be embedded in systems that make decisions about hiring, university admissions, immigration applications, law enforcement interactions, and more.
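These associations are easy to probe directly in off-the-shelf embeddings. Below is a minimal sketch assuming the gensim library and its downloadable pretrained GloVe vectors; the occupation and pronoun lists are illustrative stand-ins, not the exact WEAT word sets used by Caliskan et al.

```python
# Minimal sketch: probing gender associations in pretrained word embeddings.
# Assumes gensim and its downloadable GloVe vectors; the word lists are
# illustrative, not the exact WEAT sets from Caliskan et al.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads the vectors on first use

for word in ["doctor", "nurse", "professor", "teacher"]:
    sim_he = vectors.similarity(word, "he")    # cosine similarity to "he"
    sim_she = vectors.similarity(word, "she")  # cosine similarity to "she"
    lean = "he" if sim_he > sim_she else "she"
    print(f"{word:>10}: he={sim_he:.3f}  she={sim_she:.3f}  leans '{lean}'")
```

Run on typical web-trained vectors, occupation words like “nurse” tend to sit measurably closer to “she” than to “he”: exactly the kind of statistical association a translation model can surface as a gendered pronoun.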

Google Translate, after a patch fix to its models, now gives both feminine and masculine binary translations. But four years after this patch fix (as of the time of writing), Google Translate still has not addressed non-binary gender translations.

Gender and Race Bias in Search Results

The second seminal work is Dr Safiya Noble’s book Algorithms of Oppression, which covers her academic research on Google search algorithms, examining search results from 2009 to 2015. Similar to the findings of the research on language models above, Dr Noble argues that search algorithms are not neutral tools; they reflect and magnify the race and gender biases that exist in society and in the people who create them. She expertly demonstrates how the search results for keywords like “white girls” differ significantly from those for “Black girls”, “Asian girls” or “Hispanic girls”: the latter set of searches returned images that were exclusively pornographic or highly sexualized content. The research brings to the surface the hidden structures of power and bias in widely used tools that shape the narratives of technology and the future. Dr Noble writes, “racism and sexism are part of the architecture and language of technology[…] We need a full-on re-evaluation of the implications of our information resources being governed by corporate-controlled advertising companies.”

Google Search applied another after-the-fact fix to reduce the racy results following Dr Noble’s work. However, this also remains a patch fix: the results for “Latina girls” still show mostly sexualized images, and the results for “Hispanic girls” show mostly stock photos or Pinterest posts. The results for “Asian girls” seem to remain much the same, associated with pictures tagged as hot, cute, beautiful, sexy, brides.

Gender and Race Bias in Search Results for “Artificial Intelligence”

The third work is Better Images of AI, a collaboration that I am proud to have helped found and to continue supporting as an advisor. A group of like-minded advocates and scholars has been fighting against the false and clichéd images of artificial intelligence used in news stories and marketing material about AI.

We have been concerned about how images such as humanoid robots, outstretched robot hands and brains shape the public’s perception of what AI systems are and what they are capable of. Such anthropomorphized illustrations not only add to the hype of AI’s endless miracles, but also stop people questioning the ubiquitous AI systems embedded in their smartphones, laptops, fitness trackers and home appliances, to name but a few. They hinder the perception of consumers and citizens. As a result, mainstream conversations tend to be stuck at ‘AI is going to take all of our jobs away’ or ‘AI will be the end of humanity’, and the current societal and environmental harms and implications of some AI systems are not publicly and deeply discussed. The powerful actors developing or using systems to benefit themselves rather than society are hardly held accountable.

The Better Images of AI collaboration not only challenges the narratives and biases underlying these images, but also provides a platform for artists to share their images in a Creative Commons repository; in other words, it builds a communal alternative imagination. These images aim to portray the technology, and the people behind it, more realistically, and to point towards its strengths, weaknesses, context and applications. They represent a wider range of humans and human cultures than the ‘Caucasian businessperson’, show realistic applications of AI now rather than in some unspecified science-fiction future, do not show physical robotic hardware where there is none, and reflect the realistically messy, complex, repetitive and statistical nature of AI systems.

Down the rabbit hole…

So, with that background, back to my story for this article. For part of the lecture, I was preparing discussions of AI and the future of work. I wanted to discuss how the execution of different professional tasks is changing with technology, and what that means for the future of certain industries and occupational areas. I wanted to underline that some tasks, like repetitive transactions, large-scale iterations and standard rule applications, are better done with AI, as long as AI is the right solution for the context and problem, and is developed responsibly and monitored continuously.

On the flip side, certain skills and tasks that involve leading, empathizing and creating should be left to humans: AI systems have neither the capacity nor the capability for such tasks, nor should they be entrusted with them. I wanted to add some visuals to the presentation and also check what is currently being depicted in the search results. I first started with basic keyword searches in English such as ‘AI and medical’, ‘AI and education’, ‘AI and law enforcement’, etc. What I saw in the first few examples was depressing. I decided to expand the search to more occupational areas: the search results did not get better. I then wondered what the results might be if I ran the same searches in Turkish.

What you see below are the first images that came up in my Google search results for each of these keywords. The images not only continue to reflect the false narratives, but in some cases are flat-out illogical. Please note that I only used ‘AI’ / ‘yapay zeka’ in my searches, and not ‘robot’.

Yapay zeka ve sağlık : AI and medical


In both the Turkish- and English-speaking worlds, we are to expect white Caucasian male robots to be our future doctors. They will need to wear a shirt, tie and white doctor’s coat to keep their metallic bodies warm (apparently with no need for masking). They will also need to look at a tablet to process information and make diagnoses or decisions. Their hands and fingers will delicately handle surgical moves. What we should really care about in medical algorithms right now is the representativeness of the datasets used to build them, the explainability of how an algorithm reached a diagnostic determination, why it is suggesting a certain prescription or course of action, and how some health applications are left out of regulatory oversight altogether.

We have already seen current medical algorithms produce biased and discriminatory outcomes because of a patient’s gender or socioeconomic level, or even because of the historical access of certain populations to healthcare. We know of diagnostic algorithms with embedded code that changes a determination due to a patient’s race; of false determinations due to the skin color of a patient; and of faulty correlations and predictions due to training datasets representing only a portion of the population.

Yapay zeka ve hemşire : AI and Nurse

[Image: “Yapay zekanın sağlık alanında kullanımı” (The use of artificial intelligence in healthcare), Pitstop Reklam Ajansı]

After seeing the above images, I wondered if the results would change if I were more specific about the profession within the medical field. I immediately regretted my decision.

In both results, the Caucasian male robot gives way to a Caucasian female one, reflecting the gender stereotypes of both cultures. The Turkish AI nurse wants you to keep quiet and not cause any disruption or noise. I was not prepared for the English version: a robot wearing a D+ cup. Hard to say if the breasts are natural or artificial! This nurse has a green cross on both the nurse’s cap and the bra(?!). The robot is connected to something with yellow cables, so it is probably limited in its physical reach, although there is clearly an intention to listen to your chest or heartbeat. This nurse will also show you your vitals on an image projected from her chest.

Yapay zeka ve kanun : AI and legal


AI in the legal system is currently one of the most contentious issues in policy and regulatory discussions. We have already seen a number of cases where AI systems are used by courts for judicial decisions about recidivism, sentencing or bail, some with results biased against Black people in particular. In the criminal justice field, the use of AI systems to provide investigative assistance and to automate decision-making for routine administrative paperwork is already in place in many countries. When it comes to images, though, these systems, some of which make high-stakes decisions that impact fundamental rights, and the existing cases of impacted people, are not depicted. Instead we get either a robot touching a blue projection (don’t ask why), or a robot holding a wooden gavel. It is not clear from the depiction whether the robot will chase you and hammer you down with the gavel, or whether this white, male-looking robot is about to make a judgement about your right to abortion. The glasses the robot is wearing are, I presume, there to stress that this particular legal robot is well read.

Yapay zeka ve polis : AI and Law Enforcement


Similar to the secondary search I described above for medical systems, I wanted to go deeper here, so I searched for AI and law enforcement. Currently, in a number of countries (including the US, EU member states and China), AI systems are used by police to predict crimes which have not happened yet. Law enforcement uses AI in various ways: from evidence analysis to biometric surveillance; from anomaly detection/pattern analysis to license-plate readers; from crowd control to dragnet data collection and aggregation; from voice analysis to social media scanning to drone systems. Although crime data is notoriously biased in terms of race, ethnicity and socioeconomic background, and reflects decades of structural racism and oppression, you could not tell any of that from the image results.

You do not see pictures of the Black men wrongfully arrested due to biased and inaccurate facial recognition systems. You do not see the hot spots mapped onto predictive policing maps, which are then heavily surveilled because of those data outcomes. You do not see law enforcement buying large amounts of data from data brokers, data it would otherwise need search warrants to acquire. What you see instead in the English version is another Caucasian male-looking robot working shoulder to shoulder with police SWAT teams, keeping law and order! In the Turkish version, the image shows a female police officer who is either being whispered to by an AI system or using an AI system for work. If you are a police officer in Turkey, you are probably safe for the moment, as long as your AI system is shaped like a human head made of circuits.

Yapay zeka ve gazetecilik : AI and journalism


Content and news creation are currently among the most ubiquitous uses of AI we experience in our daily lives. We see algorithmic systems curating content at news and media channels. We experience the manipulation and ranking of content in search results, in the news we are exposed to, and in the social media feeds we doom-scroll. We complain about how disinformation and misinformation (and to a certain extent deepfakes) have become mainstream conversations with real-life consequences. Study after study warns us about the dangers of the echo chambers created by algorithmic systems and how they lead to radicalization and polarization, and demands accountability from the people who have the power to control their designs.

The image result from the Turkish search is interesting in that journalism remains a male occupation: identical-looking men work in the field, and AI in this context is a robot of short stature waving an application form to be considered for the job. The robot in the English results is slightly more stylish. It even carries a press card to signal its ethical obligations to the profession. You would almost think that this is the journalist working long hours to break an investigative piece, or risking their life to report from conflict zones.

Yapay zeka ve finans : AI and finance


The finance, banking and insurance industries reflect some of the most mature use cases of AI systems. For decades now, banking has been using algorithmic systems for pattern recognition and fraud detection, for credit scoring and credit/loan determinations, and for electronic transaction matching, to name a few. The insurance industry likewise makes heavy use of algorithmic systems and big data to determine insurance eligibility and policy premiums and, in certain cases, to manage claims. Finance was one of the first industries disrupted by emerging technologies: fintech created a number of companies and applications to break the hold of the major financial institutions on the market, and the big banks responded with their own innovations.

So it is again interesting to see that, even in a field with such mature uses of AI, robot images still come first in the search results. We do not see the app you used to transfer funds to your family or friends. Nor the high-frequency trading algorithms which currently carry out more than 70% of all daily stock exchange transactions. It is not the algorithms which collect hundreds of data points about you, from your grocery shopping to your GPS locations, to make a judgement about your creditworthiness, your trustworthiness. It is not the sentiment-analysis AI which scans millions of corporate reports, public disclosures and even tweets about publicly traded companies and makes microsecond judgements on which stocks to buy. It is not the AI algorithm which determines the interest rate and limit on your next credit card or loan application. No, it is the image of another white robot staring at a digital board of what we can assume to be stock prices.

Yapay zeka ve ordu : AI and military


AI and military use cases are a whole different story in the scheme of AI innovation and policy discussions. AI systems have been used for many years in satellite imagery analysis, pattern recognition, weapon development and simulations, among others. The more recent debates intertwine geopolitics with an AI arms race, and this should indeed keep all of us awake at night. The importance of lethal autonomous weapons (LAWs) to militaries as well as non-traditional actors is an issue upon which every single state in the world seems to agree.

Yet agreement does not mean action, and it does not mean human life is protected. LAWs have the capacity to make decisions to attack by themselves, without any accountability. Micro-drones can be combined with facial recognition and attack systems to take down individuals and political dissenters. Drones can be remotely controlled to drop munitions over remote regions. Robotic systems (a correct depiction) can be used for landmine removal, crowd control or perimeter security. All these AI systems already exist. The image results, though, again reflect an interesting narrative. The image in the Turkish results shows a female American soldier using a robot to carry heavy equipment; the robot here is more like a mule than an autonomous killer. The image result in English shows a mixed-gender robot group in what seems to be camouflage green. At least the glowing white will not be an issue for the safety of these robots.

Yapay zeka ve eğitim : AI and Education

[Image: “Yapay Zekanın Eğitimdeki 10 Kullanım Alanı” (10 uses of artificial intelligence in education), Social Business Türkiye]

When it comes to AI and education, the images continue to be robot-related. The first robot lifts kids up to the skies to show them what is on the horizon. It has nothing to do with the hype around AI-powered training systems or the learning analytics hitting schools and universities across the globe. The AI here does not seem to use proctoring software to discriminate against or surveil students. It also apparently does not matter if you do not have access to broadband to interact with this AI or do your schoolwork. The search result in English, on the other hand, shows a robot which needs a blackboard and a piece of chalk to process mathematical problems. If your Excel, Tableau or R software does not look like this image, you might want to return it to the vendor. And if you are an educator in the social sciences or humanities, it is probably time to rethink the future of your career.

Yapay zeka ve mühendislik : AI and engineering


The blackboard-and-chalk-using robot is better off in the future of engineering. The educator robot might be short on resources, but the engineer robot gets a digital board to do the same calculations. Staring at this board will eventually ensure the robot engineer solves the problem. In the Turkish version, the robot gazes at a field of hexagons. If you are an engineer in any field currently using AI software to visualize your data in multiple dimensions, run design or impact scenarios, or build code: does this look like your algorithm?

Yapay zeka ve satış : AI and sales


If you are a salesperson in Turkey, the prospects for you are a bit iffy. The future seems to require your brain to be exposed and held in the air, with the safety net of a palm there to protect your AI brain just in case there is too much overload. If you are in sales in the English-speaking world, however, your sales team or call center staff will be more of the white, glowy, male robots. Despite being robots, these AI systems will still need access to a laptop to type things and process data. They will also need headsets to communicate with customers, because the designers forgot to include voice recognition and analysis software in the first place. Maybe next time you hear ‘press 0 to speak to an agent’ you will have different images in your mind. Never mind how the customer support services you call record your voice and train their algorithms with a very weak consent notice (‘your call might be recorded for training and quality purposes’ sound familiar?). Never mind that most current AI applications in this space are chatbots on the websites you visit, or automated text algorithms which handle your questions. Never mind the cheap human labor which churns through sales and call center operations without many worker rights or protections.

Yapay zeka ve mimarlık : AI and architecture


It was surprising to see the same image as the first result in both the Turkish and the English search for architecture. I will not speculate on why this might be the case. However, our images and imaginations of current and future AI systems are once again limited to robots. This time a female robot is used in the depiction, with city planning and architectural ideas flowing out from the back of the robot’s head.

Yapay zeka ve tarım : AI and agriculture


Finally, I wanted to check what the situation was for agriculture. It was surprising to see the Turkish image show a robot delicately picking a grain of wheat. Turkey used to be a country proud of its agricultural heritage and its ability to sustain itself on food; it used to be a net exporter of food products. Over the years, it lost that edge due to a number of factors. The current imagery of AI does not seem to take into account any of the humans who suffer the harsh conditions in the fields. The image on the right is more focused on the conditions of nature, to ensure efficiency and high production. It was refreshing to see that at least the image of green fields was kept; maybe that stays with us as a reminder that we need to respect and protect nature.

So, returning to where I started: images matter. We need to be cognizant of how emerging technologies are being visualized, why they are depicted in these ways, who makes those decisions and hence shapes the conversation, and who benefits and who is harmed by such framing. We need to imagine technologies which move us towards humanity, equity and justice. We also need the images of those technologies to be accurate, diverse and inclusive.

Instead of assigning human characteristics to algorithms (which are, at the end of the day, human-made code and rules), we need to reflect the human motivations and decisions embedded in these systems. Instead of depicting AI with superhuman powers, we need to show the labor of the humans who build these systems. Instead of focusing only on robots and robotics, we need to explain AI as software embedded in our phones, laptops, apps, home appliances, cars and surveillance infrastructures. Instead of thinking of AI as an independent entity or intelligence, we need to explain AI as a tool making decisions about our identity, health, finances, work, education, and our rights and freedoms.

Handmade, Remade, Unmade A.I.

Two digitally illustrated green playing cards on a white background, with the letters A and I in capitals and lowercase calligraphy over modified photographs of human mouths in profile.

The Journey of Alina Constantin’s Art

Alina’s image, Handmade A.I., was one of the first additions to the Better Images of AI repository. The description affixed to the image on the site outlines its ‘alternative redefinition of AI’, bringing back into play the elements of human interaction which are so frequently excluded from discussions of the tech. Yet now, a few months on from the introduction of the image to the site, Alina’s work has itself undergone some ‘alternative redefinition’. This blog post explores the journey of this particular image, from the details of its conception to its numerous uses since: how has the image itself been changed, its significance adapted, its meaning put to use?

Alina Constantin is a multicultural game designer, artist and organiser whose work focuses on unearthing human-sized stories from large systems. For this piece, some of the principles of machine learning, like interpretation, classification and prioritisation, were encoded as the more physical components of human interaction: ‘hands, mouths and handwritten typefaces’, forcing us to consider our relationship to technology differently. We caught up with Alina to discuss further the process (and meaning) behind the work.

What have been the biggest challenges in creating Better Images of AI?

Representing AI comes with several big challenges. The first is the ongoing inundation of our collective imagination with skewed imagery, falsely representing these technologies in practice, in the name of simplification, sensationalism, and our human impulse towards personification. The second challenge is the absence of any single agreed-upon definition of AI, and obviously the complexity of the topic itself.

What was your approach to this piece?

My approach was largely an intricate process of translation. To stay focused on the ‘why’ of A.I. in practical terms, I chose to focus on elements of speech, also wanting to highlight the human sources of our algorithms by hand-drawing letters and typefaces.

I asked questions, and selected imagery that could be both evocative and different. For the back side of the cards, not visible in this image, I bridged the interpretive logic of tarot with the mapping logic of sociology, choosing a range of 56 words from varying fields starting with A/I to allow for more personal and specific definitions of A.I. To take this idea further, I then mapped the idea to 8 different chess moves, extending into a historical chess puzzle that made its way into a theatrical card deck, which you can play with here. You can see more of the process of this whole project here.

This process of translating A.I. via my own artist’s tool set of stories and gameplay was highly productive, requiring me to narrow down my thinking to components of A.I. logic which could be expressed and understood by individuals with or without a background in tech. Prototyping, and discussing these ideas with audiences both familiar and unfamiliar with AI, helped me validate and adjust my own understanding and representation: a crucial step for all of us in assuring broader representation within the sector.

So how has Alina’s Better Image been used? Which meanings have been drawn out, and how has the image been redefined in practice? 

One implementation of ‘Handmade A.I.’, on the website of one of our affiliated organisations, We and AI, remains largely aligned with the artist’s reading of it. According to We and AI, the image was chosen for its re-centring of the human within the AI conversation: human hands still hold the cards, and humans are responsible for their shuffling and their design (though not necessarily completely in control of which ones are dealt). Human agency continues to direct the technology, not the other way round. As a key tenet of the organisation, and a key element of the image identified by Alina, this all adds up.

https://weandai.org/, use of Alina’s image

Usage by the Universität Hamburg, accompanying a lecture on responsibility in the AI field, follows a similar logic. The additional slant of human agency considered from a human rights perspective again broadens Alina’s initial image: the components of human interaction she features expand into a more universal representation not just of human input to these technologies but of human culpability. The blood, in effect, is on our hands.

Universität Hamburg use of Alina’s image

Another implementation, this time by the Digital Freedom Fund, accompanies an article concerning the importance of our language around these new technologies. Deviating slightly from the visual, and more into the semantics of artificial intelligence, the use may at first seem slightly unrelated. However, as the article develops, concerns about ‘technocentrism’, rather than anthropocentrism, in our discussions of AI become a focal point. Alina’s image captures the need to reclaim the language surrounding these technologies, placing the cards firmly back in human hands. The article states directly: ‘Every algorithm is the result of a desire expressed by a person or a group of persons’ (Meyer, 2022). Technology is not neutral. Like a pack of playing cards, it is always humanity which creates and shuffles the deck.

Digital Freedom Fund use of Alina’s image

This is not the only instance in which Alina’s image has been used to illustrate the relation of AI and language. The question “Can AI really write like a human?” seems to be on everyone’s lips, and ‘Handmade A.I.’, with its deliberately humanoid typeface, is its natural visual partner. In a blog post for LSE, Marco Lehner (of BR AI+) discusses the employment of a GPT-3 bot and, whilst allowing for slightly more nuance, ultimately reaches a similar crux: human involvement remains central, no matter how much ‘automation’ we attempt.

Even as ‘better’ images such as Alina’s are provided, we still see the same stock images used over and over again. Issues surrounding the speed of, and need for, images in journalistic settings, as discussed by Martin Bryant in our previous blog post, mean that people will continue almost instinctively to reach for the ‘easy’ option. But when asked to explain what exactly these images add to a piece, there is often a marked silence. A stock image of a humanoid robot is meaningless. Alina’s images are specific; they deal in the realities of AI, in a real facet of the technology, and are thus not universally applicable. They relate to considerations of human agency and responsible AI practice, and do not (unlike the stock photos) act to the detriment of public understanding of our tech future.

AIHub: An Intro to Better Images of AI

AI-generated image of a coffee cup with ‘AI’ written on the top

The AIHub coffee corner captures the musings of AI experts over a short conversation. As a Founding Supporter of Better Images of AI, and having previously advised on using relevant images to promote AI research (in our guide to avoiding hype), it made sense for us to use the opportunity to discuss better images of AI!

The representation of AI in the media has long been a problem, with blue brains, white robots and flying maths (usually completely unrelated to the content of the article) featuring heavily. We were therefore pleased to support Better Images of AI’s gallery of free-to-use images, which they hope will increase public understanding of the different aspects of AI and enable more meaningful conversations.

In this piece from our coffee corner, Sabine Hauert chaired a discussion with Michael Littman, Carles Sierra, Anna Tahovska and Oskar von Stryk about how exactly we might, together, bring better images to a wider audience.

THE DISCUSSION:

Sabine: There are lots of aspects we can consider when thinking about AI images: 
1. How can we source or design better images for AI? 
2. How should AI be represented pictorially in articles, blogs etc? 
3. What’s the problem with images in AI? 
4. What do we need to consider when thinking about portraying AI in images?

Oskar: Another question to consider is: 
5. What is the purpose of the image, and what is the context in which the image appears? 

I think this makes a big difference actually. Some things need to be contextualised, we need to consider the purpose of the article, and so on. In my experience with the media, 50% of the time they report technically incorrectly, or at least partially incorrectly. This seems to be a kind of “law of nature”, an invariant. As a result, the only difference that you care about is whether an article portrays a positive or a negative attitude towards the AI topics mentioned. I always say, “OK, I don’t care too much about the incorrectness from a scientific point of view, as it seems quite unavoidable; if it’s a positive mood I can go with it”. So I think we need contextualisation, to determine whether the picture is useful.

Carles: In terms of designing images, I was thinking about a similar concept to a hackathon but for a design school. Teams of designers, or individual designers, could propose images which represented different views or concepts within AI. It could be connected to an award. I would approach young people in design schools with concrete proposals, and have those as the object of the hackathon.

Do you have an idea of the concepts we are missing?

Carles: I mean, we need to think about what kind of AI we are representing. Maybe solving a particular problem, or explaining a problem and some of the techniques that are being used for that. And then, after we give the designers a short explanation of that concept, we ask them to bring back some designs.

Sabine: With robotics it’s slightly easier because you can show a robot, or you can show a robot doing something. The AI one is a challenge because a lot of it is abstract. It could be that a lot of these images are slightly abstract. Would the media pick those up as something they use for their articles? Or, do we need more people in our AI images?

I was recently trying to find pictures for a report that we’re working on and I was desperately looking for pictures of people using robots for applications, and it’s really hard to get images that include the people plus the technology. You either have an abstract technology, or you have the application. You never really have that interface. So, maybe we need to stage this – photographers that spend a week taking photos of people working with the technology.

Oskar: What I actually like are comics – short cartoons which have two or three elements and a small conversation which points out something very clearly or even drastically. I have collected a number of these. They can portray a point very well. Again, what’s the purpose? If it’s a journalist writing an article about an aspect of AI then of course they look for a picture that’s attractive to a general audience, just to get them attracted to the article, no matter if the relation to the article is relevant or not. From a more scientific point of view, for scientifically oriented contexts, I like these cartoons which really highlight key issues.

Sabine: Schematics to explain the concepts then. Maybe we need some better schematics just to explain the basic concepts of AI.

What are the challenges you face as a researcher? If a journalist needs a pretty picture of your own research to put at the top, what do you usually send them?

Oskar: Sometimes I have photographers come to my lab and we take nice pictures of the robots and people. The problem with robots is that people look at the hardware and don’t see the software which makes the intelligence. So, I always try to make the software more visible; this typically involves using big screens where we visualise the inside of the robot’s “brain”, for example. We show the localisation and how the environment is perceived, and so on.

Michael: I was going to say graphs because that’s how I want to communicate. But, that’s not great…

Sabine: Maybe it’s not impossible to show a graph. We just need someone who’s an expert in data visualisation who could make them look really pretty. In the way that it looks almost like a picture. Maybe there’s ways we can beautify figures so that they are acceptable as an image in the media.

Anna: In our institute we are lucky because we have a graphical designer employed here. So, we can put them in touch with the researchers and they can discuss the topic and she can create graphics or photographs. It’s great for us because we run a lot of projects and these have a lot of graphical elements. Also, there are a lot of articles we need images for, so it’s very beneficial to have something like this in-house.

Sabine: The New York Times does this with their articles. They have an artist who makes really abstract pictures for these articles, that can represent just a little bit of it, but it does the job. More artist engagement is a good idea.

Oskar: Actually, graphs can be interesting as well. For example, see the work of David Kriesel, a former member of a RoboCup team in the Humanoid league. He was the one who detected the famous Xerox scanner error, and he is also an invited speaker at the Chaos Computer Club. He does data analysis of lots of things: for example, he has looked at coronavirus data, and he did an analysis of the German train company, Deutsche Bahn. He has postings on LinkedIn which are very highly rated, and his talks on YouTube on data analysis get many views. So I think if you combine data with interesting insights and conclusions, you can make it attractive to a large audience.

Anything we should ban? Brains, the Terminator…

Oskar: When I talk to a general audience about robots, it’s a good sign if they think about industrial robots, but usually they think about the Terminator. And if it’s not terminating their lives, they may fear it’s terminating their workplaces.

Sabine: I have noticed robotics being used a lot as a portrayal for AI even if the topic has nothing to do with robotics. I always find that interesting because there is a bit of a separation between robotics and AI depending on what field of AI you’re looking at. And yet, the robots get used a lot as images. I guess because it’s a bit more visual.

Any final thoughts on how we could source good images?

Carles: I agree with Anna. I think we should approach graphic designers and schools and give them a purpose – it could be a final year assignment to get a variety of images.

Oskar: Maybe we could get a list of key statements where there are typically misunderstandings around AI and robotics. We could explain the background to the designers, and they could come up with a graphical visualisation.

You can see more of AIHub’s work on their website, and more from the Better Images of AI gallery here.

Branching Out: Understanding an Algorithm at a Glance

A window of three images. On the right is a photo of a big tree in a green field of grass under a bright blue sky. The two on the left are simplifications created with a decision tree algorithm. The work illustrates a popular type of machine learning model: the decision tree. Decision trees work by splitting the population into ever smaller segments. I try to give people an intuitive understanding of the algorithm. I also want to show that models are simplifications of reality, but can still be useful, or in this case visually pleasing. To create this, I trained a model to predict pixel colour values based on an original photograph of a tree.

The impetus for the most recent contributions to our image repository was described by the artist as promoting understanding of present AI systems. Rens Dimmendaal, Principal Data Scientist at GoDataDriven, discussed with Better Images of AI the need to cut through all the unnecessary complication of ideas within the AI field; a goal which he believes is best achieved through visual media. 

Discussions of the ‘black box’ of AI are not exactly new, and the recent calls from Best Practice AI for explainability statements to accompany new tech are certainly attempting to address the problem at some level. Tim Gordon writes of the greater ‘transparency’ required in the field, as well as the implicit acknowledgement that any wider impacts have been considered. Yet, for the broader spectrum of individuals whose lives are already being influenced by AI technologies, an extensive, jargon-filled document on the various inputs and outputs of any single algorithm is unlikely to provide much relief.

This is where Dimmendaal comes in: to provide ‘understanding at a glance’ (and also to ‘make a pretty picture’, in his own words). The artist began with the example of the decision tree. All existing tutorials on this topic, in his view, use datasets which only make the concept more difficult to understand; search ‘decision tree titanic’ for a clear illustration of this. Another explanation is provided by r2d3, yet for Rens this still employed an overly complicated use case. Hence this selection of images.

Rens cites his inspiration for this particular project as Roger Johansson’s recreation of the ‘Mona Lisa’ using genetic programming. In the original, Johansson attempts to reproduce the painting with a combination of semi-transparent polygons and an evolutionary algorithm, gradually mutating an initial set of randomly generated polygons to move closer and closer to the original image. Rens recreated elements of this code as a starting point, then made the work his own with the addition of the triptych format and the implementation of a decision-tree-style algorithm.

Rens Dimmendaal / Better Images of AI / Man / CC-BY 4.0

In keeping with his motivations (making a ‘pretty picture’, but chiefly contributing to the greater transparency of AI methodologies), Dimmendaal chose the triptych to present his outputs. The mutation of the image is shown as a fluid, interactive process, morphing across the triptych from left to right, from abstraction to the original image itself. Getting a glimpse inside the algorithm in this manner allows for the ‘understanding at a glance’ which the artist wished to provide: the image shifts before our eyes, from the initial input to the final output.
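The mechanics behind that shift are compact enough to sketch in code. Below is a minimal illustration of the general technique in Python with scikit-learn and Pillow; the file name and the tree depths are illustrative assumptions, and this is a reconstruction of the idea rather than Dimmendaal’s actual code.

```python
# Minimal sketch: approximate a photo with a decision tree that predicts a
# pixel's RGB colour from its (x, y) position. Shallow trees give blocky
# abstractions like the left panels of the triptych; deeper trees move
# toward the original photo. "tree.jpg" and the depths are assumptions.
import numpy as np
from PIL import Image
from sklearn.tree import DecisionTreeRegressor

img = np.asarray(Image.open("tree.jpg").convert("RGB"))
h, w, _ = img.shape

# Features: every (x, y) pixel coordinate; targets: that pixel's RGB values.
xs, ys = np.meshgrid(np.arange(w), np.arange(h))
X = np.column_stack([xs.ravel(), ys.ravel()])
y = img.reshape(-1, 3)

for depth in (4, 8, 12):  # one panel per depth, left to right
    model = DecisionTreeRegressor(max_depth=depth).fit(X, y)
    panel = model.predict(X).reshape(h, w, 3).astype(np.uint8)
    Image.fromarray(panel).save(f"tree_depth{depth}.png")
```

Each leaf of the fitted tree covers a rectangular region of the image and paints it with that region’s average colour, which is why the shallow panels read as abstract colour blocks that sharpen toward the photograph as depth increases.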

Rens Dimmendaal & David Clode / Better Images of AI / Fish / CC-BY 4.0

Rens Dimmendaal & Jesse Donoghoe / Better Images of AI / Car / CC-BY 4.0

Engaging with the decision tree was not only a practical decision, related to the prior lack of an adequate tutorial, but also an artistic one. As Dimmendaal explains, ‘applying a decision tree to an actual tree was just too poetic an opportunity to let slide.’ We think it paid off…

Dimmendaal has worked with numerous algorithmic systems previously (including k-means, nearest neighbours, linear regression and SVMs) but cites this particular combination of genetic programming, decision trees and the triptych format as producing the nicest outcome. More of his work can be found in our image repository and on his personal website.

Whether or not a detailed understanding of algorithms is something you are interested in, you can feed your own images into the tool Rens created for this project here, and play around with making your own decision tree art. What do images relevant to your industry, product or interests look like when seen through this process? Make sure to tag Better Images of AI in your AI artworks, and credit Rens. We’re excited to see what you come up with!

More from Better Images: Twitter | LinkedIn

More from the artist: Twitter | Linkedin