AI WHAT’S THAT SOUND? Stories and Sonic Framing of AI

The ‘Better Images of AI’ project is so important because portrayals of AI typically reinforce established and polarised views, distracting from the pressing issues of today. Yet we rarely question how AI sounds…

We are researching the sonic framing of AI narratives. In this blog post, we ask: in what ways does a failure to consider the sonic framing of AI influence or undermine attempts to broaden public understanding of AI? Based on our preliminary impressions, we argue that sonic framing is just as important as other narrative features, and we propose a new programme of research. We use some brief examples here to explore this.

The role of sonic framing in AI narratives and public perception

Music is useful. We employ music every day to change how we feel and think, to distract us, to block out unwanted sound, to help us run faster, to relax, to help us understand, and to send signals to others. Decades of music psychology research have parsed the many roles music serves in our everyday lives. Indeed, the idea that music is ‘functional’ or somehow useful has been with us since antiquity. Imagine receiving a cassette tape in the post, filled with messages of love: music transmits information and messages. Music can also be employed to frame how we feel about things. Or, put another way, music can manipulate how we feel about certain people, concepts, or things. As such, when we decide to use music to ‘frame’ how we wish a piece of storytelling to be perceived, attention and scrutiny should be paid to the resonances and emotional overtones that music brings to a topic. AI is one such topic, and one heavily subject to hype. Hype is arguably an inevitable condition of innovation, at least at its inception, but while the future with AI is so clearly shaped by the stories told about it, the music chosen for those stories may also ‘obscure views of the future.’

Affective AI and its role in storytelling

Thirty years ago, documentarian Michael Rabiger quite literally wrote the book on documentary filmmaking. Now in its 7th edition, Directing the Documentary explores the role and responsibility of the filmmaker in presenting factual narratives to an audience. Crucially, Rabiger discusses the use of music in documentary film, saying it should never be used to ‘inject false emotion’, giving the audience an unreal, amplified, or biased view of proceedings. What is the function of a booming, calamitous impact sound signalling the obliteration of all humankind at the hands of a robot, if not to inject falsified or heightened emotion? Surely this serves only to reinforce dominant narratives of fear and robot uprising: the stuff of science fiction. If we are to live alongside AI, as we already do, we must consider ways to promote positive emotions that move us away from the human-vs-machine tropes which are keeping us, well, stuck.

Moreover, we wonder about notions of authenticity, transparency and explainability. Despite attempts to increase AI literacy through citizen science and initiatives around AI explainability, documentaries and think pieces that promote public engagement with AI and purport to promote ‘understanding’ are often riddled with issues of authenticity or a lack of transparency, doing precisely nothing to educate the public. Complex concepts like neural nets, quantum computing and Bayesian probabilistic networks must be reduced (necessarily so) to a level at which a non-specialist viewer can glean some understanding of the topic. In this coarse retelling of ‘facts’, composers and music supervisors have an even more crucial role in aiding nuanced comprehension; yet we find ourselves faced with the current trend for bombast, extravagance and bias when it comes to soundtracking AI. Indeed, just as attention needs to be paid to those who are creating AI technologies in order to mitigate creeping bias, attention also needs to be paid to those who are composing music, for the same reasons.

Eerie AI?

Techno-pessimism is reinforced by portrayals of AI in visual and sound media that are suggestive of a dystopian future. Eerie music in film, for instance, can reinforce a view of AI uprising or express some form of subtle manipulation by AI agents. Casting an ear over the raft of AI documentaries in recent years, we can observe a trend for approaches to sonic framing which reinforce dominant tropes. At the extreme, Mark Crawford’s original score for Netflix’s The Social Dilemma (a documentary-drama) is a prime example of this in action. A track titled ‘Am I Really That Bad?’ begins as a childish waltz before gently morphing into a disturbing, carnival-esque horror soundtrack. The following track, ‘Server Room’, is merely a texture full of throbbing basses, Hitchcock-style string screeches, atonal vibraphones, and rising tension that serves only to make the listener uncomfortable. Alternatively, ‘Theremin Lullaby’ offers up lush, utopian piano textures Max Richter would be proud of, before plunging us into ‘The Sliding Scale’, a cut that could come straight from Tron: Legacy with its chugging bass and blasts of noise and static. Interestingly, in a behind-the-scenes interview with the composer, we learn that the ‘expert’ cast of The Social Dilemma were interviewed and guided the sound design. However, the film received much criticism for being sensationalist, and the cast themselves were criticised as former tech-giant employees hiding in plain sight. If these unsubtle, polarised positions are the only sonic fare on offer, we should be questioning who is shaping this music and the extent to which it is being used to actively manipulate audience impressions of AI.

Of course, there are other forms of story and documentary about AI which are less subject to dramatisation. In some cases, sound designers, composers and filmmakers are employing the capabilities of music to demonstrate complex ideas and support the viewer’s experience in a nuanced manner. A recent episode of the BBC’s Click programme uses a combination of image and music to demonstrate supervised machine learning techniques to great effect. Rather than the textural clouds of utopian AI, or the dystopian future hinted (or screamed) at by overly dramatic Zimmer-esque scores, the composer Bella Saer and engineer Yoad Nevo create a musical representation of the images, providing positive and negative aural feedback on the machine learning process. Here, the music becomes a sonic representation of the processes we are witnessing played out on screen. Perhaps this represents the kind of narrative society needs.
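
To make the idea concrete, here is a speculative sketch of sonified learning feedback (our own illustration, not Saer and Nevo’s actual production; the filename, frequencies and numbers are invented for the example). Each prediction a model makes produces a short tone, bright and consonant when it is correct and a low dud when it is wrong, so a listener can literally hear the system improve.

```python
# Speculative sketch: sonifying a learner's feedback, one tone per prediction.
# All parameters are illustrative assumptions, not taken from the programme.
import math, random, struct, wave

RATE = 22050  # audio sample rate in Hz

def tone(freq, dur=0.08, vol=0.5):
    """Return a short sine tone as a list of 16-bit PCM samples."""
    n = int(RATE * dur)
    return [int(vol * 32767 * math.sin(2 * math.pi * freq * i / RATE))
            for i in range(n)]

samples = []
accuracy = 0.2                                # stand-in for an untrained model
for step in range(100):
    correct = random.random() < accuracy      # simulate one prediction
    # bright tone for a correct answer, low dud for a mistake
    samples += tone(660.0 if correct else 110.0)
    accuracy = min(0.98, accuracy + 0.008)    # the "model" slowly improves

# Write the result so it can be auditioned in any media player.
with wave.open("training_feedback.wav", "w") as f:
    f.setnchannels(1)
    f.setsampwidth(2)   # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(b"".join(struct.pack("<h", s) for s in samples))
```

Played back, the opening is mostly low thuds punctuated by the odd bright tone; by the end it is almost all consonant. That is the kind of audible, truthful feedback the Click segment achieves musically.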

Future research

We don’t yet have the answers, only impressions. How far sonic framing influences public perception of AI remains a live research and development question, and we are working on documentary as a starting point. As we move closer to understanding the influence of representation in AI discourse, it surely becomes a pressing matter. Just as the BBC is building and commissioning a repository of more inclusive and representative images of AI, we hope to provoke discussion about how we can bring together the creative and technology industries to reframe how we audibly communicate and conceptualise AI.

Still, a question remains about the stories being told about AI: who is telling them, and how are they told? Going forward, our research will investigate and test these ideas by interviewing the composers and sound designers of AI documentaries. For now, we encourage you to pay attention to how AI sounds in the next story you are told about it, or in the next image you see. We call for practitioners to dig a little deeper when sonically framing AI.


About us

Dr Jenn Chubb (@JennChubb) is a Research Fellow at the University of York, now with XR Stories. She is interested in all things ethics, science and stories. Jenn is researching the sonic framing of AI in narratives and sense-making. Jenn plays deliberately heavy and haunting music in a band called This House is Haunted.

Dr Liam Maloney (@liamtmaloney) is an Associate Lecturer in Music & Sound Recording at the University of York. Liam is interested in music, society, disco, and what streaming is doing to our listening habits. When he has a minute to spare, he also makes ambient music.

Jenn and Liam decided not to use any robot-related images. Title image “soundwaves” by seth m (CC BY-NC-ND 2.0).

What does AI look like?

A version of this post was previously published on the BBC R&D blog by Tristan Ferne, Henry Cooke and David Man

We have noticed that news stories or press releases about AI are often illustrated with stock photos of shiny gendered robots, glowing blue brains or the Terminator. We don’t think that these images actually represent the technologies of AI and ML that are in use and being developed. Indeed, we think these are unhelpful stereotypes; they set unrealistic expectations, hinder wider understanding of the technology and potentially sow fear. Ultimately this affects public understanding and critical discourse around this increasingly influential technology. We are working towards better, less clichéd, more accurate and more representative images and media for AI.

Try going to your search engine of choice and searching for images of AI. What do you get?

What are the issues?

The problems with stock images of AI have been discussed and analysed a number of times already, and there are some great articles and papers that describe the issues better than we can. The Is Seeing Believing? project asks how we can evolve the visual language of AI. The Real Scandal of AI also identifies issues with stock photos. The AI Myths project, amongst other topics, includes a feature on how shiny robots are often used to represent AI.

Going a bit deeper, this article explores how researchers have illustrated AI over the decades; this paper discusses how AI is often portrayed as white “in colour, ethnicity, or both”; and this paper investigates the “AI Creation” meme that features a human hand and a machine hand nearly touching. Wider issues with the portrayal and perception of AI have also been studied frequently, for example by the Royal Society.

The style of the existing images is often influenced by science fiction, and there are many visual clichés of technology, such as 0s and 1s or circuit boards. The colour blue predominates; it seems to represent technology, though blue can also be read as representing maleness. The frequent depiction of brains associates these images with human intelligence, although much of the AI and ML in use today is far removed from human intelligence. Robots occur frequently, yet AI applications very often have nothing to do with robots or embodied systems. The robots shown are often white, or they are sexualised female representations, and we also often see “evil” robots from popular culture, like the Terminator.

What is AI?

From reviewing the research literature and interviewing AI engineers and developers, we have identified some common themes which we think are important in describing AI and ML, and which could help when thinking about imagery.

  • AI is all based on maths, statistics and probabilities (a minimal sketch after this list makes this concrete)
  • AI is about finding patterns and connections in data
  • AI works at a very large scale, manipulating almost unimaginable amounts of data
  • AI is often very complex and opaque, and it’s hard to explain how it works. Even experts and practitioners struggle to understand exactly what’s going on inside these systems
  • Most AI systems in use today only really know about one thing; theirs is a “narrow” intelligence
  • AI works quite differently to the human brain; in some ways it is an alien, non-human intelligence
  • AI systems are artificial and constructed and coded by humans
  • AI is a sociotechnical system; it is a combination of computers and humans, creating, selecting and processing the data
  • AI is quite invisible and often hidden
  • AI is increasingly common, becoming pervasive, and affects almost all of us in so many areas. It can be powerful when connected to systems of power and affects individuals, society and the world
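
Since the first two themes above are the hardest to picture, here is a minimal sketch (our illustration; this code does not come from the interviews, and the data are toy numbers). The entire “intelligence” below is a handful of numbers, adjusted by ordinary arithmetic until they capture a pattern in the data.

```python
# Minimal sketch: "AI" as maths and pattern-finding. A logistic-regression
# model, written from scratch, learns a separating line from toy data.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in the plane, labelled by which side of a line they fall on.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)   # the entire "model": two numbers...
b = 0.0           # ...plus one more

for _ in range(500):                      # "learning" = nudging the numbers
    p = 1 / (1 + np.exp(-(X @ w + b)))    # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)   # gradient step on the weights
    b -= 0.1 * float(np.mean(p - y))      # gradient step on the bias

preds = 1 / (1 + np.exp(-(X @ w + b))) > 0.5
print(f"recovered weights: {w}, accuracy: {(preds == (y > 0.5)).mean():.0%}")
```

There are no brains and no robots here, just statistics, which is precisely why the photographic clichés mislead.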

We would like to see more images that realistically portray the technology and point towards its strengths, weaknesses, context and applications. Maybe they could…

  • Represent a wider range of humans and human cultures than ‘caucasian businessperson’ or ‘humanoid robot’
  • Represent the human, social and environmental impacts of AI systems
  • Reflect the realistically messy, complex, repetitive and statistical nature of AI systems
  • Accurately reflect the capabilities of the technology, which is generally applied to specific tasks and is not of human-level intelligence
  • Show realistic applications of AI
  • Avoid monolithic or unknowable representations of AI systems
  • Avoid using electronic representations of human brains, or robots

Towards better images

In creating new stock photos and imagery we need to consider what makes a good stock photo. Why do people use them, and how? Is the image representing a particular part of the technology, or is it trying to tell a wider story? What emotional response should viewers have when looking at it? Does it help them understand the technology, and is it an accurate representation?

Consider the visual style: a diagram, a cartoon or a photo each brings different attributes and will communicate ideas in different ways. Imagery is often used to draw attention, so it may be important to create something that has impact and is recognisable. A lot of existing stock photos of AI may be misrepresentative and unhelpful, but they are distinctive and impactful, and you know them when you see them.

Some of the themes we’ve seen develop from our work include:

  • Putting humans front and centre, and showing AI as a helper, a tool or something to be harnessed.
  • Showing the human involvement in AI; in coding the systems or creating the training data.
  • Positively reinforcing what AI can do, rather than showing the negative and dangerous aspects.
  • Showing the input and outputs and how human knowledge is translated into data.
  • Making the invisible visible.
  • Showing AI getting things wrong.

Some of the interesting metaphors used include sieves and filters (of data), friendly ghosts, training circus animals, social animals such as bees or ants with emergent behaviours, child-like learning, and the past predicting the future.

A new image representing datasets, creating order and digitisation

This is just a starting point and there is much more thinking to be done, sketches to be drawn, ideas to be harnessed, definitions agreed on and metaphors minted.

A coalition of partners is working on this, including BBC R&D, We and AI, and several independent researchers and academics: Creative Technologist Alexa Steinbrück, AI Researcher Buse Çetin, Research Software Engineer Yadira Sanchez Benitez, Merve Hickok and Angela Kim. Ultimately we aim to create a collection of better stock photos for AI; we’re starting to look for artists to commission and we’re looking for more partners to work with. Please get in touch if you’re interested in working with us.

Icon credits
Complexity by SBTS from the Noun Project
Octopus by Atif Arshad from the Noun Project
pattern by Eliricon from the Noun Project
watch world by corpus delicti from the Noun Project
sts by Nithinan Tatah from the Noun Project
narrowing by andriwidodo from the Noun Project
Error 404 by Aneeque Ahmed from the Noun Project
box icon by Fithratul Hafizd from the Noun Project
Ghost by Pelin Kahraman from the Noun Project
stack by Alex Fuller from the Noun Project
Math by Ralf Schmitzer from the Noun Project
chip by Chintuza from the Noun Project