At the end of summer, Better Images of AI were invited to the Tipping Point exhibition commissioned by BRAID in Edinburgh. The exhibition featured works ranging from digital installations to sculptural interventions, zines and comedic sketches, from creators responding to the present realities and near-future horizons of AI.
With the very exciting announcement of a three-year extension of the BRAID programme, which will include another round of funding for commissioned works and an exhibition, Tania Duarte (who visited the exhibition and provided support to BRAID) brings together the themes from the art and panel discussions from her time in Edinburgh.
Although Tipping Point is not strictly about visual representations of AI, in their approaches to reimagining, the artists had to grapple with the same questions that our community often does: how to represent AI more realistically, what inclusive AI looks and feels like, and what role AI plays in society. These discussions could provide inspiration for artists submitting to the Better Images of AI library, or serve as reflections to prompt more thoughtful approaches to the uses of AI, especially the choices we make when using it in creative practice.
All images in this post are © 2025 Chris Scott. All rights reserved.
Edinburgh’s summer festivals are famous throughout the world for the scale of their celebration of arts and culture. This August, Nicola Benedetti (Festival Director of the Edinburgh International Festival) described how its significance is more important than ever:
“This year’s International Festival has been one of extraordinary contrasts, from grandeur and scale to intimacy and informality. I’ve seen this year how art can build bridges, change minds and find connection in a world that so desperately needs it.” – Nicola Benedetti
Building bridges is exactly what Bridging Responsible AI Divides (BRAID) was funded by the UKRI Arts and Humanities Research Council to address. The Tipping Point new artists’ commission and exhibition proved a powerful way to explore topics such as connection, as well as resilience, humour, ecology, mindfulness, resistance, ethics, empowerment and creativity in the context of the present realities and near-future horizons of AI.
Led by The University of Edinburgh in partnership with the Ada Lovelace Institute and the BBC, BRAID’s Inspired Innovation lead Beverley Hood (artist and Reader at Edinburgh College of Art) hosted the launch on 8th August 2025. She introduced the moving and captivating exhibition of seven very different visions of approaching AI with wisdom and care, starting with workshops and a panel discussion with the artists.
The opportunity not only to view the artists’ work at the exhibition, but also to hear them discuss their process, shared challenges and different perspectives together was fascinating, and it added depth to the ideas and imagination which creative practice unlocked. The themes which arose in the discussion gave a rich sense of how artistic representations can allow a more nuanced, open and inclusive exploration of the key questions humanity faces as AI systems change our interactions, roles and society.
We were left dreaming of a world where artists are in charge of creating tools with non-commercial design intents, and the whole exhibition provided a glimpse of how different the world could be.

Redefining AI and encouraging new thinking
Each exhibit has extensive documentation of the different themes related to envisaging how we get to the responsible use of AI. Each artist chose to do this with their own very different artistic methods and backgrounds, showing the plurality and breadth of viewpoints and interpretations which can be applied to our mental models of what is termed “AI”. These were a stark contrast to the hegemonic and often monolithic imaginaries typically seen in the media, in marketing and in popular culture.
It is no surprise, then, that the panel even went on to discuss changing the term and meaning of AI itself. Wesley Goatley’s installation of three possible, but progressive, futures, ‘A Harbinger, a Horizon, and a Hope’, constructs a new way of using technology in the Hope scenario, and Wesley explains that the hope is:
“that they’ve just completely reframed or rephrased AI to stand for Assistive Interface rather than Artificial Intelligence. And if we did that, we made that change, at least in our heads, I think we would shift entirely our expectations of those tools. Shift entirely what we want them for, what we would apply them to, what we were worried about, perhaps how we would design them.”
Throughout the Hope piece in Goatley’s installation, you hear the stories and narratives playing out in small online interactions between the communities who are using the technology. They talk about AI, but they mean a system interface every single time; no one mentions intelligence, artificial or otherwise. This glimpse into a world where we are not obsessed with the idea of intelligence, to the distraction of the actual utility of tools, is refreshing.


Part of Wesley’s installation ‘A Harbinger, a Horizon, and a Hope’
Indeed, throughout the exhibition several themes surfaced which seem lost amid the wider AI discourse’s focus on ‘intelligence’. These were pulled together by the panel members, who discussed how the new thinking and values they were proposing should be represented in AI. Of note were the themes of addressing AI’s environmental impact, the need to stimulate sociotechnical AI literacy, and exposing AI’s extractive nature.
The environmental impact of AI: should we go slow, local, and low resource?
A key theme explored in the exhibition was the huge energy consumption of AI and its resulting environmental implications.
Some of the artworks directly explore the themes of slow and low resource AI and the material aspects:
- Grace Attlee, who worked with Julie Freeman on ‘Models of Care’, described how they explored whether really low resource AI could actually enhance creative practice. They trained low resource models with their own soundscape data collected from glaciers in Iceland. She acknowledged that they had to balance the carbon footprint of doing this, in terms of transportation emissions and production processes, and report on it at the end.
- Perry-James Sugden described how, within the development of ‘(S)Low-Tech AI’, they actively used AI in various ways, such as the algorithm that they created, as well as passively within internet activity which involved interacting with AI.
Collaborator Daria Jelonek expanded on the active part by explaining that, having experimented with AI models ten years ago, they became interested in building smaller models of AI, for example a system based on permutation which gives you a range of outcomes:
“It’s not like learning and training. You give it an input, and in our case, for example, we had four rocks which lead to a permutation and rearrangement of twenty four outcomes. And we thought, this is enough. We deliberately didn’t want to use heavy AI models, because that would be against our concept of the idea. And at the same time, in this project we also created our own audiovisual data sets. So it’s not that we’re relying on heavy AI data sets or training online, but we went across the Scottish landscape and captured audiovisual material there which we use for the work.”
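As a brief aside, the arithmetic behind the artists’ figure is simply the number of orderings of four objects (4 × 3 × 2 × 1 = 24). A minimal sketch in Python illustrates this; the rock names here are our own invented stand-ins, not the artists’ actual inputs:

```python
from itertools import permutations

# Four stand-in input objects, echoing the four rocks in the installation
rocks = ["rock A", "rock B", "rock C", "rock D"]

# Every possible ordering of the four inputs: 4 x 3 x 2 x 1 = 24 arrangements
arrangements = list(permutations(rocks))
print(len(arrangements))  # 24
```

Unlike a trained model, such a system is fully enumerable: every outcome can be listed and inspected, which is what makes “this is enough” a meaningful design choice.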


(S)Low-Tech AI installation (Studio Above&Below)
She explained that their work was born out of a counter-movement to the fast-evolving generative AI landscape which emerged in 2023, as a layered way to bring the challenge into a form. One layer was creating a more physical interface as a reminder of where the viewer is situated, starting with designing a tangible interface using literal rocks to represent the physical elements of AI often forgotten ‘behind shiny screens and in a box far, far away’. They imagined their system as making the user calm down, reflect and have a space where computational tools can actually make you feel good. This is in contrast to current AI tools, which Jelonek describes as being developed to make your life easier or find shortcuts, but which actually just make you feel faster.
Another layer she discussed was a geological layer which, through interaction, gives audiences a visual representation of the impact their use of AI tools has, to make them aware of the broader impact of AI tools on physical reality.
Goatley explained the environmental significance of the ‘local model’ (run in LM Studio) used as part of his research: a ‘light’ locally hosted model which, rather than being accessed through the ‘cloud’ (i.e. a big data centre), can be downloaded and run on your own computer instead.
While being similar to the large language model interfaces that you can question and probe, you avoid the “incredibly, insanely pollutant, consumptive, dangerous technologies relying on huge infrastructure that are growing at scale in the UK and abroad”. Avoiding the carbon costs of large-scale computation by keeping it on your device normally means it is slower, but that little bit of friction is an important part of the art and design:
“It reminds you that there’s a real mechanism here; It’s not just a magic portal to the mystery intelligence in the sky.” – Wesley Goatley, 2025
The need for sociotechnical AI literacy
A theme that ran throughout the exhibits was the idea of the projects as ways both to signal the need for, and to deliver, a degree of sociotechnical literacy in relation to AI.
Goatley’s proposition of three different futures with AI, although seeming to foreground the AI and its capabilities, actually tells the stories behind the tools which have been developed. He aims to make the AI, or the technical aspect of it, disappear as quickly as possible in the context of what’s happening in his piece, and to ground people in their feelings about it. He believes that these narratives, and engaging with the tools, give a kind of literacy and an ability to learn and make decisions about tooling in terms of the objectives surfacing from the narratives: “You gain a sociotechnical literacy about what is possible and what your responsibilities could be”.
Elements of this approach were echoed in ‘Models of Care’ by Freeman: sonic sculptures which, although not directly designed as an interface, were still thought of as something tangible for visitors to interact with. Attlee described how Freeman wished to design something to ground people, especially as spaces like galleries can be unwelcoming. She wished people could get into a sculpture, or hold on to something that actually plays soundscapes through the physicality of the object, to “kind of hold AI”.
The two wooden sound sculptures in the exhibition are the result of this vision of a space that can be entered physically. One emits compositions by Freeman and Norwegian musician Torben Snekkestad. The second, smaller sculpture holds a third composition by Anna Wszeborowska, generated by a low resource AI model trained on glacial field recordings. The interaction between sound and material enables connections to be made between the physical, the audible and the conceptual. Vibrations are felt through the nervous system, making AI less invisible and intangible. This breaks down barriers to learning about AI, and Attlee describes how the choice to use smaller models also represents a prompt for learning about them.
Another approach entirely to the need to scrutinise AI came from Rachel Maclean’s work, which presents imaginary AI-generated characters trained on her own back-catalogue. The generative AI output is displayed on a small Raspberry Pi device with a magnifying lens above, surrounded by scientistic, colonial and industrial motifs such as 3D-printed busts and a towering metal and glass structure. As Gavin Leuzzi from BRAID pointed out, the sculpture “places the viewer in the role of a scientist observing the output of AI critically and dispassionately… Like a warning not to get sucked into fantasies and illusions.”
A global extraction system from south to north and beyond
Inspired by the audience discussion and Q&A, there was some reflection on the perspective from Edinburgh, a city which had benefited from colonialist extraction, and on how this was addressed within the exhibit. Similarly, an audience member noted the role that Lowry and Turner had played in documenting the effects of technology on environments and society, and asked whether AI could be tackled in a similar way.

The parallels between AI, the Industrial Revolution and the British Empire, in terms of technological innovation forcing change, and how they are linked to violence and extraction from the natural world and from human labour, were discussed. A sobering thought was that we are entering a period in which companies like OpenAI are so big that they behave like empires, seen also in the way that they interact with nation states.
Goatley recommended ‘The History of Automation’ by Lutman, which considers de-skilling and upskilling, and concludes that automation doesn’t release people from labour. Goatley commented that thinking about the current moment through a historical lens could be an area for further study.
Imagining and building new possible futures
New thinking is encouraged by the exhibition as a bridge to imagine possible new futures. However, the aim described by Hood was not to speculate, but instead to embed propositional change within the design and the concept of the artworks, so they can demonstrate how such changes might come to be. She described a desire to move beyond an exercise in critique of AI through an arts and humanities lens. Although critique is a common and powerful strategy within the arts, the call invited more direct strategies for potential future impact. This was a difficult brief which was met in a number of different ways.

Centring care within AI
A very relevant but overlooked theme in the context of automation is that of care. Shervington-White, Ashcroft and Attlee spoke about how care might be better considered within AI. In their works, they built different visions of AI tools to enable care, radical care in AI development, and models of care.


Shervington-White worked with a technologist called Luca Chung to develop a workflow that picks up faces within archived footage, shown as computer vision bounding boxes within the video. This use of AI becomes an anchor point for an intimate conversation about technology that is accessible and delivered from a human, community perspective. Speakers from Black communities give their own ideas of what they believe would make AI more responsible for them in their lives. Many of the strategies they talked about involved the communities most underserved by AI being involved in shaping it. The message is that if it works for those who are least protected, then hopefully it should work for everybody in the end.
Louise Ashcroft, another of the exhibiting artists, who had held a workshop earlier in the day, had within her project asked for and documented humorous examples of what AI should be used for, many of which centred care in some way. Beverley contrasted these with the less direct propositional change within Shervington-White’s video installation, which evokes a compelling and emotional mood of radical care, with decisions centred in the community rather than within tech companies.
The right to resist and ability to reclaim
Also in the audience after a morning workshop was Arda Awais from Identity 2.0, who was called on to talk about one of the most direct propositional approaches, ‘AI-Z’, which looks at resisting generative AI models. The project creates spaces for people to engage in different types of resistance, no matter how interested or passionate they are about it. This is documented through a zine which includes a range of strategies identified by activists in different areas, including some which are low effort and individual. These are important because, as Awais explains, people can be disempowered by feeling they need to make a really big change, which can seem overwhelming. Identity 2.0 worked to break down the impact each person can make, and to make it easy and approachable by using a conversational tone and providing an accessible glossary for AI jargon. They have since submitted the zine to zine libraries such as the DAIR Zine library, where it is available to inspire many others and effect change in how people feel empowered to push back against the encroachment of AI in their lives.
Goatley’s exhibit is explicitly propositional in the sense that it creates and foregrounds what diverse communities might want, and how that could be delivered in a tool they have built. It suggests a less complicated form of politics and a lower power use, achieved in a way which is not speculative but which uses what we all have right now: mesh networks, distributed computing, and designing for disabled and older users. These are all tools we already have, and it was really about bringing them together in one object, making the proposition very close to hand, achievable and scalable. He described how he was keen not to fall into a common trap of future thinking by imagining that there will be a speculative way of fixing things fifty years down the line. Instead, by deconstructing and reconstructing elements from low resource existing technologies, he shows how we can get there.
Deconstructing anthropomorphism
One notable thing in the exhibition, compared to many explorations of AI, was the complete absence of anthropomorphic, human-related ideas of AI. Comparisons with human intelligence are often unhelpful and very misleading, but they also hinder creative exploration by anchoring ideas in replications of human embodiment, biases and limitations. Shedding these constraints was one of the ways in which the projects and exhibition were able to interrogate and present more meaningful facets of AI systems, ideas and impacts.
This was not always easy to avoid, and Goatley describes the challenge he had in trying to find the tools to make an LLM voice interface for the project that he could:
“with consistency make it not refer to itself as I, and suggest its own knowledge in some way, and use all these terms that are the sole domain of humans. And it’s largely only used by tech companies to try to manipulate our understanding of what these tools are and what they can do. But it’s a real struggle. I think I did it. At least I haven’t managed to make it break yet. But it took 7 weeks of just tweaking a system prompt over and over and over again, and changing models just to get rid of that one thing, it’s so deeply baked in, it’s really nefarious”.
Maclean reflected on a different way in which interacting with generative AI can lead to a type of anthropomorphisation. Her fascinating and mysterious sculpture illustrates the beguiling and alluring pull of generative AI technologies that make it easy for an artist to simply forget that it’s a data processing machine that they’re engaging with. She cautions that while artists should not identify with generative AI as anything more than a technological tool, the fantastical beings she has created within the sculpture partly make visible the imaginary beings that we can so easily project onto the technology. Maclean warns of the need to check what effect these tools have on how we approach artistic practice and work.
Dominant narratives of all powerful and inevitable AI which we have no option but to embrace or be left behind are therefore strikingly refuted through different visions of what could (and maybe should) be. The different visions in Tipping Point force us to engage with the paucity of ambition seen in the AI we have now in terms of creating systems which work in harmony with nature and enhance the human experience. They question the relentless trajectory of development towards ever moving goalposts of productivity, efficiency, standardisation and surveillance, offering instead different views of what AI might offer us.
About Tipping Point
Tipping Point explores how artists can help us more wisely respond to the present realities and near-future horizons of AI. Featuring seven newly commissioned artworks from across the UK, the exhibition presents new ways of thinking about today’s AI, the futures we want and the communities needed to build them. Artworks, ranging from digital installations to sculptural interventions, zines and comedy sketches, address themes that reimagine AI uptake, inspire activism and resilience, and showcase artistic creativity in the field.
Tipping Point was funded by the Arts and Humanities Research Council (AHRC) and delivered by BRAID.
(S)Low-Tech AI was created by the experimental art and technology practice Studio Above&Below, co-founded by Daria Jelonek and Perry-James Sugden.
(S)Low-Tech AI seeks a shift towards slower, smaller, and more grounded AI systems. By reducing complexity and focusing on what is available and understandable, the artists showcase simplified and transparent forms of computation while connecting it to ecological roots and mindful decision making.
Watch Daria and Perry discuss (S)Low-Tech AI, their captivating installation for BRAID that examines AI through the lens of geology https://edin.ac/45Xqn0t.
AI-Z was a project by creative studio Identity 2.0, co-founded by Savena Surana and Arda Awais. In this clip, Arda discusses collaborating with the activist community beyond tech when developing their artist commission project AI-Z.
See Arda discuss it here – https://edin.ac/4p2RpfJ
AI-Z explores how zine-making can help people to address the pervasive and sometimes unwelcome encroachment of AI into our daily lives through methods of intersectional resistance and play. The project is also about archiving the collaborative process and building resources for community engagement around responsible AI.
Eye Yours! They’ve Ggetuo is a sculpture by Rachel Maclean and represents the first artwork from They’ve Got Your Eyes, a new body of AI-generated work spanning film, sculpture and digital painting.
See Rachel discuss it here – https://edin.ac/4mIlvnl
Eye Yours! They’ve Ggetuo interrogates the tension between what AI is – a system of pattern-recognising algorithms – and what it feels like to interact with it. The artwork invites viewers into a hallucinatory space that questions the assumptions we project onto AI.
“Closer to Go(o)d?” is a powerful Afrofuturist-inspired short film by Kiki Shervington-White which she discusses here- https://edin.ac/4mMiXEM
“Closer to Go(o)d?” draws on participatory workshops undertaken with working-class Black and ethnically diverse communities in Birmingham, with the aim of promoting a demystifying, radical, ethical approach to Responsible AI, one that is centred on care and community.
A Harbinger, a Horizon, and a Hope: Three Heralds of Possible AI Futures is a commission by Wesley Goatley.
You can hear him speaking about the open-source AI devices he created here – https://edin.ac/46bhW2X
A Harbinger, a Horizon, and a Hope presents three voice-enabled AI devices that each represent a distinct and possible near future scenario for AI technologies and their relationship to individuals, communities, and society. Through interacting with these devices, audiences learn more about these potential futures and the experiences of the people living through them.
Some of Wesley’s images have been added to the Better Images of AI library; view them here:
Models of Care sonic sculptures were created by Julie Freeman. You can hear her speak about her resonant art here – https://edin.ac/4mTDnf7
Models of Care explores environmental responsibility and the relationship between artificial intelligence, climate change, and human agency through sculpture and sound.
Real Stupidity was a project by Louise Ashcroft. Hear her talk about her Fringe comedy-inspired commission here – https://edin.ac/4mAaO6b
Real Stupidity is a newly commissioned artwork that takes a humorous approach by joining forces with comedians to create a series of ‘Speculative Gadgets,’ a range of wearable AI devices that tackle contemporary societal issues.
Find out more about the BRAID programme at BRAID UK.

