Dreaming Beyond AI

Dreaming Beyond AI is a multi-disciplinary and collaborative web-based project bringing together artists, researchers, activists, and policymakers to create new narratives and visions around AI technologies. The project aims to foster understanding of the impact AI technologies have on inequity, and to question mainstream AI narratives as well as imposed visions of the future.

I spoke to Nushin Yazdani, Raziye Buse Çetin and Iyo Bisseck about their approaches to visualizing different aspects of AI and the challenge of imagining plural and positive visions of our future with technology.


Alexa: How would you describe “Dreaming Beyond AI” in your own words?

Iyo: Fluidity.

Nushin: Liquidity.

Nushin: Maybe also: Making the interdependencies visible, going away from this top-down, either-or. That’s what we’re aiming for: The pluriverse of ideas, visions, and narratives.

Buse: The process of collaboration is also something that we paid attention to; we thought about how we could do it differently. We thought about how we could embody the values we are preaching, like intersectionality and inclusivity, against a patriarchal, white-supremacist work culture that is focused on productivity and a past record of institutionalized success. We have been lucky enough to receive support for this project. Nushin invited me to the project although I had not been involved with projects at the intersection of tech and art before; so when we were choosing our collaborators and artists we also asked: how can we extend the same values and trust, how can we minimize our attachment to patriarchal, capitalist parameters of “success” and “reliability”?

Alexa: That was also my impression – that it’s less a website and more a platform that invites people to contribute…

Buse: I am a bit cautious with the word “platform”. I mean, we’re still using this term, but we are trying to find another one – something like a “container”, a “space”, a recipient for people to come together in a way that makes their work, contributions, stories, and standpoints visible.

Nushin: The wish or idea for us – I can speak for the whole group, I think – is that this is not something that we only invite people to, but that people can also approach us with their ideas and wishes. We can make it, as Buse said, like a container that everybody can fill, not just us. Not in an exclusive way – everybody is invited to contribute.

Alexa: I am intrigued by this “container” term; it appears a lot, also as a kind of metaphor for technology in general. Compared to this idea of technology as a stick, a weapon, this tool thingy – the opposite would be the container. There is this sci-fi author, Ursula K. Le Guin. You told me that her essay “The Carrier Bag Theory of Fiction” from 1986 was one of the foundational inspirations for the project. Could you tell me more about why it matters to you?

Ursula Le Guin
Picture of sci-fi author Ursula Le Guin (by Marian Wood Kolisch, Oregon State University, CC BY-SA 2.0, via Wikimedia Commons)

Buse: Of course! Ursula Le Guin says that maybe the first technology that we had was not a weapon but a recipient – a carrier bag in which we could collect our things, because we were living as nomads, going from place to place. What is a more important invention than this?

We thought that this approach is missing when we talk about technology in the sense of “move fast and break things” and “disrupt” – aggressively changing the market, predicting and optimizing, etc. We need to come back and go deeper into creating space for other visions.

Scene from “2001: A Space Odyssey” – the apes find a bone and start hitting each other

When we think about what is considered a technology, I feel it’s very much gendered and intertwined with capitalism and this myth of weapons. If you look at the most developed AI applications, they are in the domain of the military. The military applications of technology basically drive where technology is going overall. And I go a little bit into a spiritual realm with this, but for me it also brings up masculine and feminine energy – not in the sense of gender, but maybe like “yin and yang” in Eastern spiritual traditions. In the sense that one is outward-looking and outward-going, achieving, going to Mars, etc., while the other is mostly… magnetic, receptive, and reflective, and creates nothingness but space within that nothingness… nurturing like nature, the Earth, and similar archetypes.

Alexa: When you said the words “magnetic” and “receptive” and “fluidity” I really felt the links to the visuals! These concepts are very well reflected in the designs. What was the process of transforming the visual concepts into the actual design and 3D graphics?

Iyo: It was a bit complicated because we wanted our design to be accessible in a way. The real question was how to create an experience while still giving people who are not comfortable with technology the opportunity to just try it. We talked about relationality and all the things being related and connected. We wanted to avoid the image of the brain to represent AI because it lacks the potential for transformation. I talk about fluidity because we really like water as a way to be one at one moment and alone at another, to have the possibility to connect and disconnect, and to see it through time.

Nushin: Maybe I can add to the water imagery a little bit. There is a whole body of knowledge, and our idea is to elevate some drops of it that we think are missing in this context but really important to showcase: other narratives of what AI could be, of what technology could be for us. Showing specific drops of ideas that come from this whole collective knowledge of the world, from different places. Showing knowledge that is maybe not the knowledge of academia, or what the industry accepts as proper knowledge.

Screenshot of "Dreaming Beyond AI"
Screenshot of “Dreaming Beyond AI” (experiential Pluriverse view)

Alexa: Was there concrete inspiration in terms of visual vibe?

Nushin: As Buse said, Ursula Le Guin’s idea runs contrary to this vision of technology as something that has corners, is fixed, is a concrete thing. We tried to turn it around visually and bring it closer to nature, making it something that is maybe not hard and pointy and hurtful but soft, something that can adjust to what is coming, that is flexible…

Buse: We also want to help and support people to understand what AI is. The project aimyths.org is a great inspiration for that too. Understanding what is actually happening, questioning, beyond technology. For example, when we’re discussing “algorithmic bias”, it’s a question of social justice and inequity in our systems, and not only something in the interpersonal realm. And we were thinking about how to design a space where everything can coexist. We thought of visiting the website as a journey for the visitor. The first frame when you enter the website represents the status quo. Then you go into the Pluriverse – that’s also a reference that we like; the term comes from Arturo Escobar’s thinking. How things are connected to each other, the patterns in each water drop – it’s all linked to the topics that we are exploring.

Alexa: I am curious about your thoughts regarding the designs/moods of the different sections/aspects (e.g. “Intelligence”, “Patterns”)! How did you come up with the specific designs and colour schemes? What were you aiming to communicate?

Buse: In general we wanted the visual imagery to be “reminiscent” of the themes (e.g. “AI violence”, “intelligence” etc.) that we were exploring. When this is not easy or when we were actually trying to question, unpack, (re)define the usual interpretations of these concepts we sometimes also opted for what would be perceived as “the opposite” or “not usual” way of depicting the concept in question. Different colours, patterns, and images are also visual cues about how the word, concept, or idea makes us feel because we believe “knowledge” doesn’t exclude feeling. 

Iyo: To represent the themes, I collaborated with Nushin and Buse, who gave me names of moods and feelings to get an idea of each theme. The idea behind this was not to represent them in a frontal way, but to form a basis for a more general interpretation.

We can go over each of these theme designs one by one. I can tell you the mood and feeling words and talk a bit about the choice of images.

Patterns

Iyo: The first idea of this repeating pattern is that of the enclosure, which to some extent can lock you into repeating, normalized patterns. What I appreciated while trying it is the great transparency of this theme.

Once inside, it is one of the places where you can see the landscape the most, and where this landscape communicates with the grid. In this representation, there was the idea of being able to go further, to have a view of one’s environment while making explicit the trap that this can create.

Machine Vision & Feeling

Iyo: For this texture, I was instructed to have something related to the eye.

I was inspired by the imagery of thermal cameras to represent machine vision.

These thermal cameras are also used in research laboratories to recognize emotions. Although this use is questionable, I found that the graphic universe that emerged could correspond to Machine vision and feeling.

Intelligence

Iyo: I believe that this visual is not definitive. Intelligence is complex, dynamic and contextual. That’s why we opted for vagueness in this visual.

AI Violence

Iyo: For this theme, I was given the word pink. I thought of technology that is sold as inherently progressive and innocent – en rose – and the violence it creates being hidden inside this rose-tinted vision.

Refusal

Iyo: For this theme, I took the cross to symbolize refusal. Repetition of the motif, affirmation of this refusal, inevitable refusal. Technological refusal is generally a taboo, again because technology is widely considered inherently progressive. The cross is strong and straightforward, and also provocative. I wanted to amplify our right to refuse.

AI & Relationality

Buse: Rhizome, mycelium networks, connectedness.

Planet Earth & Outrastructure

Iyo: The texture of this theme is related to the earth, the inspirations were around something earthy and mossy.

Future-Present Vibrations

Iyo: For this, the words were: “colourful, fun”. The chosen visual is optimistic and vibrant.

Alexa: For me, there is always a tension between representing AI or technology as it is now versus visions of the future and the technology we want to have. In the BIOAI image database, we have some red flags – for example, really futuristic depictions in this very common sci-fi aesthetic. But I feel that there is also a big need for better visions, better futures of technology and of AI especially. I feel that your project is also about preferable futures, and the images and aesthetics are trying to provide an alternative. Was there some tension in the representation – being afraid of becoming too futuristic – or was that something that you wanted?

Nushin: I think we’re all brought up with these images of what technology could be. Either “robots are gonna rule us” – this very dystopian Black Mirror vision – or “AI is gonna solve humanity’s problems”, like it is now depicted as a major option to “solve” the climate crisis. It’s already such a big step to get away from this imagery and see these as just possible depictions, not the ones that definitely have to come. And it’s so much harder to show the plural and positive visions that could be there. The dystopian vision is so much easier to depict and imagine, since we see it in the media so much. I think it’s actually pretty crazy that it’s so much easier to imagine all the things that could go wrong than to actually work collectively on what we could imagine.

Buse: The intention is not only to create this repertoire of positive visions but to try to open a space, a place where people can feel good in their bodies and become able to imagine something else. I think that’s hard when you’re just in front of your laptop, stuck in a kind of trauma response – fight, flight, or freeze. Because we are disconnected from our bodies, feelings, and sensations in an autopilot mode, and our neurocognitive, neurobiological “weaknesses” are exploited via dark patterns – all the scrolling, the notifications, design that pushes you to feel urgency, the urge to buy… You’re overstimulated then; you can’t be like “Oh, let me imagine something positive about the future” – I don’t think that’s possible on autopilot mode. This is usually how we are served “information” (again, in a very limited conception of information). Even if you don’t read anything and just look at the “Dreaming Beyond AI” website on a big screen, if you have one, listening to the sound and doing the meditations at the beginning – first of all, it calms you down and brings you back to your body. And ideally, hopefully, this would make you feel something, maybe relaxed enough to envision something else, relaxed enough to ask yourself some questions. Maybe it would make something resonate with you so that you can join us in imagining, or just feel inspired.

Alexa: An open question: what do you wish for the media representation of AI? Do you have quick solutions that people could implement to make it better?

Iyo: What is really interesting for me is the process of making it, more than the results. When we think about artificial intelligence and machine learning, there’s a lot of focus on results and efficiency. What’s interesting to me is adding a reflection on the data extraction process. Who extracts the data? Where is it extracted from? Who owns it? What type of data is extracted, in what context? For what purpose? What I find important is really the whole process of digitization and extraction of our data – to analyze it and observe the relationships of domination in this process, in order to find alternatives, to do otherwise. Even before questioning efficiency.

Alexa: Showing more of the process behind it and how it’s made?

Iyo: Yes, but also allowing its democratization: allowing people to create, understand, select, and own their data, because behind these issues there are questions of power. So what I would like to see in the media representation of AI is that this representation can be created by a large number of people, especially by people marginalized by the existing representations.

Alexa: Buse, Nushin and Iyo – Thank you so much for the interview!




Nushin Yazdani (Concept, Curation)
Nushin Isabelle Yazdani is a transformation designer, artist, and AI design researcher. She works with machine learning, design justice, and intersectional feminist practices, and writes about the systems of oppression of the present and the possibilities for just and free futures. At Superrr Lab, Nushin works as a project manager on creating feminist tech policies. With her collective dgtl fmnsm, she curates and organizes community events at the intersection of technology, art, and design. Nushin has lectured at various universities, is a Landecker Democracy Fellow and a member of the Design Justice Network. She has been selected as one of 100 Brilliant Women in AI Ethics 2021.

Raziye Buse Çetin (Concept, Curation)
R. Buse Çetin is an AI researcher, consultant, and creative. Her work revolves around the ethics, impact, and governance of AI systems. Buse’s work aims to demystify the intersectional impact of AI technologies through research, policy advocacy, and art. Watch: Buse’s TEDx talk “Why is AI a Social Justice Issue?”.

Iyo Bisseck (Webdesign & Development)
Iyo Bisseck is a Paris-based designer, researcher, artist, and coder extraordinaire. She holds a BA in media interaction design from ECAL in Lausanne and an MA in virtual and augmented reality research from Institut Polytechnique Paris. Interested in how biases reveal the link between technologies and systems of domination, she explores the limits of virtual worlds to create alternative narratives.

Sarah Diedro Jordão (communications strategy)
Sarah Diedro Jordão is a communications strategist, a social justice activist, and a podcast producer. She was formerly a UN Women and Youth Ambassador and has served as a strategic advisor to the North-South Center of the Council of Europe on intersectionality in policymaking. Sarah currently works as a freelance consultant in storytelling, communications strategy, event moderation, and educational workshop creation.

How do blind people imagine AI? An interview with programmer Florian Beijers

A human hand touching a glossy round surface with cloudy blue texture that resembles a globe
Florian Beijers

Note: We acknowledge that there is no one way of being blind and no one way of imagining AI as a blind person. This is an individual story. And we’re interested in hearing more of those! If you are blind yourself and want to share your way of imagining AI, please get in touch with us. This interview has been edited for clarity.

Alexa: Hi Florian! Can you introduce yourself?

Florian: My name is Florian Beijers. I am a Dutch developer and accessibility auditor. I have been fully blind since birth and use a screen reader. And I give talks, write articles, and give interviews like this one.

Alexa: Do you have an imagination of Artificial Intelligence?

Florian: I was born fully blind, so I have never actually learned to see images – I don’t see them in my mind or in my dreams either. I think in modalities I can somehow interact with in the physical world: sound, tactile images, sometimes even flavours or scents. When I think of AI, it really depends on the type of AI. If I think of Siri, I just think of an iPhone. If I think of (Amazon) Alexa, I think of an Amazon Echo.

It really depends on what domain the AI is in

I am somewhat proficient in knowing how AI works. I generally picture scrolling code or a command-line window with responses going back and forth. Not so much an actual anthropomorphic image of, say, Cortana, or like these Japanese anime characters. It really depends on what domain the AI is in.

Alexa: When you read news articles about AI and they have images there, do you skip these images or do you read their alt text?

Florian: Often they don’t have any alt text, or only a very generic alt like “image of computer screen” or something like that. Actually, it’s so not on my radar. When you first asked me that question about a week ago – “Hey, we’re researching images of AI in the news” – I was like: Is that a thing?

(laughter)

Florian: I had no clue that that was even happening. I had no idea that people make up their own images for AI. I know in Anime or in Manga, there’s sometimes this evil AI that’s actually a tiny cute girl or something.

I had no idea that people make up their own images for AI

Alexa: Oh yes, AI images are a thing! Especially the images from these big stock photo websites make up such a big part of the internet. We as the team behind Better Images of AI say: These images matter because they shape our imagination of these technologies. Just recently there was an article about an EU Commission meeting about AI ethics, and they illustrated it with an image of the Terminator…

(laughter)

Alexa: … I kid you not, that happens all the time! A lot of people don’t have the time to read the full article; what they stick with is the headline and the image, and this is what stays in their heads. In reality, the ethical aspects mentioned in the article were about targeted advertisements or upload filters – stuff that has no physical representation whatsoever, and it’s not even about evil, conscious robots. But this has an influence on people’s perception of AI: Next time they hear somebody say “Let’s talk about the ethics of AI”, they think of the Terminator and they think “I have nothing to add to this discussion” – but actually they might, because it affects them as well!

Florian: That is really interesting, because 9 out of 10 times this just goes right by me.

Alexa: You are quite lucky then!

Florian: Yes, I am kind of immune to this kind of brainwashing.

Alexa: But you know what the Terminator looks like?

Florian: Yeah, I mean I’ve seen the movie. I’ve watched it once with audio description. But even if I am not told what it looks like I make it a generic robot with guns…

Alexa: Do you own a smart speaker?

Florian: Yes. I currently have a Google Home. I am looking into getting an Amazon Echo Dot as well. I enjoy hacking on them too, like creating my own skills for them.

Alexa: In the past, I did some research on how voice assistants are anthropomorphised and how they’re given names, a gender, a character and whole detailed backstories by their makers. All this storytelling. And the Google Assistant stood out because there’s less of this storytelling. They didn’t give it a human name, to begin with.

Two smart speakers: a Google Home and an Amazon Echo. Image: Jonas Nordström, CC BY 2.0

Florian: No it’s just “Google”. It’s like you are literally talking to a corporation.

Alexa: Which is quite transparent! I like it. Also in terms of gender: they have different voices, at least in the US, and they are colour-coded instead of being labelled “female” or “male”.

Florian: It’s a very amorphous AI, it’s this big block of computing power that you can ask questions to. It’s analogous to what Google has always been: The search giant, you can type things into it and it spits answers back out. It’s not really a person.

Alexa: Yeah, it’s more like infrastructure.

Florian: Yeah, a supercomputer.

Alexa: I wonder, if you were using a voice assistant like Amazon Alexa, which is more heavily anthropomorphised and has all this character – how would you imagine that entity?

Florian: Difficult. Because I kind of know how things work AI-wise, and I have played with voice assistants in the past, it’s really hard to give it the proper Hollywood finish of an actual physical shape.

Alexa: Maybe for you, AI technology has a more acoustic face than a visual appearance?

Florian: Yes! The shape it has is the shape it’s in. The physical device it’s coming from. Cortana is just my computer, Siri is just my phone.

The shape AI has is the shape it’s in

Alexa: Would you say that there is a specific sound to AI?

Florian: Computers have been talking to me ever since I can remember. This is essentially just another version of that. When Siri first started out, it used the voice from VoiceOver (the iOS screen reader). Before Siri got its own voice, it used a voice called Samantha, a voice that has been in computers since the 1990s. It’s very much normal for devices to talk at me. That’s not really a special AI thing for me.

A sound example of a screen reader

Alexa: When did you start programming?

Florian: Pretty much since I was 10 years old, when I did a little HTML tutorial that I found on the web somewhere. Then off and on through my high school career, until I switched to studying informatics. I’ve been a full-time developer since 2017.

Computers have been talking to me ever since I can remember

Alexa: I think I first got in touch with you on Twitter via a post you did about screen readers for programmers. There was a video, and I was mind-blown by how fast everything is.

Florian: It’s tricky! Honestly, I haven’t mastered it to the point other blind programmers have. I use a Braille display – a physical device that shows you the code line by line in Braille – as a bit of a help. I know people, especially in the US, who don’t use Braille displays. Here in Europe it’s generally a bit better arranged in terms of getting funding for these devices, because they are prohibitively expensive, like 4,000-6,000 euros. In the Netherlands, the state will pay for those if you’re sufficiently beggy and blindy. Over in the US, that’s not as much of a given, so a lot of people tend not to deal with Braille, and Braille literacy is down as a result over there.

I use a Braille display to get more of a physical idea of what the code looks like. That helps me a lot with bracket matching and things like that. I do have to listen out for it as well otherwise things just go very slowly. It’s a bit of a combination of both.

Alexa: So a Braille display is like an actual physical device?

Florian: It’s a bar-shaped device on which you can show a line of Braille characters at a time. Usually, it’s about 40 or 80 characters long. And you can pan and scroll through the currently visible document.

I use a Braille display to get more of a physical idea of what the code looks like

Alexa: How do you get the tactile response?

Florian: It’s tiny little pins that go up and down – piezo cells. The dots forming the Braille characters come up and fall as new characters replace them. It’s a refreshable line of Braille cells.

A person's hands using a Braille display on a desk next to a regular computer keyboard
A person using a braille display. Image: visualpun.ch, CC BY-SA 2.0, https://www.flickr.com/photos/visualpunch/

Alexa: Would that work for images as well? Could you map the pixels to those cells on a Braille display?

Florian: You could, and some people have been trying that. Obviously the big problem there is that the vast majority of blind people will not know what they’re looking at, even if it’s tactile, because they completely lack the frame of reference. It’s like a big 404.

(laughing)

Florian: In that sense, yes, you could. People have been doing that by embossing it on paper, which essentially raises the lines and slopes out of a particular type of thick paper, making it tactile. This is done, for example, for mathematical graphs and diagrams. It wouldn’t be able to reproduce colour, though.
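As a side note to this exchange: the idea of mapping pixels onto the 2×4 dot grid of Braille cells can be sketched in a few lines of Python using the Unicode Braille Patterns block (U+2800). This is only an illustration of the mapping idea – it is not how actual Braille hardware is driven, and the example bitmap is made up:

```python
# Render a tiny black-and-white bitmap as Unicode Braille patterns.
# Each Braille cell covers a 2-wide x 4-tall block of pixels; the bit
# offsets below follow the Unicode Braille Patterns encoding (dot n = bit n-1).
DOT_BITS = {(0, 0): 0, (0, 1): 1, (0, 2): 2, (1, 0): 3,
            (1, 1): 4, (1, 2): 5, (0, 3): 6, (1, 3): 7}

def bitmap_to_braille(bitmap):
    """bitmap: list of rows of 0/1 pixels. Returns lines of Braille characters."""
    h = len(bitmap)
    w = max(len(row) for row in bitmap)
    lines = []
    for ty in range(0, h, 4):            # each Braille row covers 4 pixel rows
        line = ""
        for tx in range(0, w, 2):        # each cell covers 2 pixel columns
            bits = 0
            for (dx, dy), bit in DOT_BITS.items():
                y, x = ty + dy, tx + dx
                if y < h and x < len(bitmap[y]) and bitmap[y][x]:
                    bits |= 1 << bit
            line += chr(0x2800 + bits)   # offset into the Braille block
        lines.append(line)
    return "\n".join(lines)

# A 4x4 diagonal line becomes two Braille cells:
diagonal = [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]
print(bitmap_to_braille(diagonal))
```

As Florian points out, the harder problem is not the mapping itself but that a tactile rendering is only meaningful to someone with a frame of reference for the image.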

Alexa: You are a web accessibility expert. What are some low hanging fruits that people can pick when they’re developing websites?

Florian: If you want to be accessible to everyone, make sure that you can navigate and use everything from the keyboard. Make sure there is a proper organizational hierarchy. Important images need alt text. If there’s an error in a form a user is filling out, don’t just make it red – do something else as well, for blind and colourblind people. Make sure your form fields are labelled. And much more!
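Two of these tips – alt text on images and labels on form fields – can even be checked mechanically. Here is a toy sketch (not a real auditing tool, and not from the interview; the HTML snippet and messages are made up) using Python’s standard-library HTML parser:

```python
# Toy accessibility check for two of the tips above:
# images should have alt text, and form fields should have an associated <label>.
from html.parser import HTMLParser

class A11yCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.labelled_ids = set()   # ids referenced by <label for="...">
        self.field_ids = set()      # ids of form fields encountered
        self.issues = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "img" and not a.get("alt"):
            self.issues.append("img without alt text: %s" % a.get("src", "?"))
        elif tag == "label" and "for" in a:
            self.labelled_ids.add(a["for"])
        elif tag in ("input", "select", "textarea"):
            self.field_ids.add(a.get("id", ""))

    def report(self):
        # Fields whose id never appears in a label's "for" attribute
        for field_id in sorted(self.field_ids - self.labelled_ids):
            self.issues.append("form field without label: id=%r" % field_id)
        return self.issues

checker = A11yCheck()
checker.feed('<img src="robot.png">'
             '<label for="email">Email</label>'
             '<input id="email"><input id="age">')
print(checker.report())
```

A real audit would of course cover far more (keyboard navigation, heading hierarchy, colour contrast, ARIA attributes), which is why tools and human auditors like Florian are needed.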

Alexa: Florian, thank you so much for this interview!


Links

Florian on Twitter: @zersiax
Florian’s blog: https://florianbeijers.xyz/
Article: “A vision of coding without opening your eyes”
Article: “How to Get a Developer Job When You’re Blind: Advice From a Blind Developer Who Works Alongside a Sighted Team” on FreeCodeCamp.org
Youtube video “Blindly coding 01”:  https://www.youtube.com/watch?v=nQCe6iGGtd0
Audio example of a screen reader output: https://soundcloud.com/freecodecamp/zersiaxs-screen-reader

Other links

Accessibility on the web: https://developer.mozilla.org/en-US/docs/Learn/Accessibility/What_is_accessibility
Screen reader: https://en.wikipedia.org/wiki/Screen_reader
Refreshable Braille display: https://en.wikipedia.org/wiki/Refreshable_braille_display
Paper embossing: https://www.perkinselearning.org/technology/blog/creating-tactile-graphic-images-part-3-tips-embossing

Cover image:
“Touching the earth” by Jeff Kubina from Columbia, Maryland, CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0, via Wikimedia Commons

Panel discussion with the director of photography of ZEIT ONLINE

A screenshot of a video conference showing the participants of the panel discussion

The German conference “KI und Wir” organized a panel discussion on the visual representation of AI in the media. The guests were:

Alexa Steinbrück – AI researcher and educator at University of Art and Design, Burg Giebichenstein
Amelie Goldfuß – Designer and educator at University of Art and Design, Burg Giebichenstein
Michael Pfister – director of photography at German newspaper ZEIT ONLINE

https://youtu.be/8l1IpckiKuk