An abstract image containing stylized black cubes and a half-transparent veil in front of a night street scene

From Black Box to Algorithmic Veil: Why the image of the black box is harmful to the regulation of AI

The following is based on an excerpt from the forthcoming book “Self-imposed Algorithmic Thoughtlessness and the Automation of Crime Control” (Nomos/Hart 2022) by Lucia Sommerer


Language is never innocent: words possess a secondary memory, which in the midst of new meanings mysteriously persists.

Roland Barthes1

The societal as well as the scholarly discussion about new technologies is often characterized by the use of metaphors and analogies. When it comes to the legal classification of new technologies, Crootof even speaks of a ‘battle of analogies’2. Metaphors and analogies offer islands of familiarity when legally navigating the floods of complex technological evolution. Metaphors often begin where the intuitive understanding of new technologies ends.3 The less familiar we feel with a technology, the greater our need for visual language as a set of epistemic crutches. The words that we choose to describe our world, however, have a direct influence on how we perceive the world.4 Wittgenstein even argues that they represent the boundaries of our world.5 Metaphors and analogies are never neutral or ‘innocent’, as Barthes puts it, but come with ‘baggage’6; that is, metaphors in the digital realm are loaded with the assumptions of the analogue world from which the imagery is borrowed.7 Consider the following question about one of the most widespread metaphors on the subject of algorithms, the black box:

What do you see before your inner eye, when you hear the term ‘black box’?

Some people may think of a monolithic, robust, opaque, dark and angular shape.

What few people will see is humans.

This demonstrates both the strengths and the weaknesses of the black box image, and thus its Janus-faced nature. In the discussion about algorithms, the black box narrative was originally intended as a ‘wake-up call’8 to direct our attention – through memorable visual language – towards certain risks of algorithmic automation, namely the risks of a loss of (human) control and of understandability. The black box terminology fulfils this task successfully.

But it also threatens to obscure our view of the people behind algorithmic systems and their value judgements. The black box image conceals the opportunity to check the human decisions behind an algorithmic system and falsely suggests that algorithms are independent of human prejudices. By drawing attention to one problem area of the use of algorithms (non-transparency), the black box narrative threatens to distract from others (controllability, hidden human value judgements, lack of neutrality). The term black box hides the fact that algorithms are complex socio-technical systems9 that are based on a multitude of different human decisions10. Further, when algorithmic technology is presented as a monolithic, unchangeable and incomprehensible black box, connotations such as ‘magical’ and ‘oracular’ often arise.11 Instead of provoking criticism, such terms often lead to awe and ultimately to surrender to the opacity of the black box. Our options for dealing with algorithms are reduced to ‘use vs. do not use’. Opportunities for nuanced intervention in the human design process behind the black box go unnoticed. The inner processes of the black box as a system are sealed off from humans and attributed an inevitability that strongly resembles the inevitability of the forces of nature; forces that can be ‘tamed’ but never systematically controlled.12 The black box narrative also ascribes such problematic inevitability to negative side effects such as the discriminatory effects of an algorithm. This view diverts attention away from the very human-made sources of algorithmic discriminatory behaviour, such as the selection of training data (see the sketch below). The black box narrative in its most widespread form – as an unreflective catchphrase – thus paradoxically achieves the opposite of what it is intended to do, namely to protect us from a loss of control over algorithms.
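To make the point about training-data selection concrete, consider the following minimal sketch. All numbers, district names and probabilities are hypothetical and invented for illustration (they are not taken from the book); the sketch merely shows how a human recording decision, not anything inside the ‘box’, can skew what a model later treats as reality.

```python
# Minimal sketch (all numbers hypothetical): how a human recording/selection
# decision, not the algorithm itself, produces a skewed training set.

true_rate = {"A": 0.10, "B": 0.10}      # identical underlying incident rates
recording_prob = {"A": 0.3, "B": 0.9}   # human decision: district B is patrolled
                                        # (and therefore recorded) more heavily

population = 10_000
observed = {d: int(population * true_rate[d] * recording_prob[d])
            for d in true_rate}
print(observed)  # {'A': 300, 'B': 900}

# Any model trained on 'observed' will 'learn' that district B is three times
# riskier than A, although the difference stems entirely from the human
# data-selection decision, not from the underlying reality.
```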

In reality, however, it is possible to disclose a number of the human value judgements that stand behind even supposedly black-box algorithms, for example through logging requirements in the design phase or through output testing, as the sketch below illustrates.
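As a concrete, if simplified, illustration of output testing, the following sketch audits the decisions of some already-trained risk classifier for group-level disparities. The predictions, group labels and the four-fifths heuristic used here are illustrative assumptions, not material from the book; the point is only that such an audit needs no access to the model’s inner workings.

```python
# Minimal sketch of 'output testing': probing a trained model's decisions
# for group-level disparities without opening the 'black box'.
from collections import defaultdict

def positive_rates(predictions, groups):
    """Share of positive (e.g. 'high-risk') decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest positive rate; values well below
    1.0 flag a disparity worth investigating (cf. the 'four-fifths rule')."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: model outputs (1 = flagged as high risk)
# and the group membership of each affected person.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = positive_rates(preds, groups)
print(rates)                          # {'A': 0.6, 'B': 0.4}
print(disparate_impact_ratio(rates))  # 0.4 / 0.6 ≈ 0.67 -> flag for review
```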

The challenge posed by the regulation of algorithms, therefore, is more appropriately described as an ‘algorithmic veil’ than as a black box; an ‘algorithmic veil’ that is placed over human decisions and values. One advantage of the metaphor of the veil is that it almost inherently invites us to lift it. A black box, on the other hand, issues no such invitation. Quite the opposite: a black box signals that any attempt to gain insight is unlikely to succeed. The metaphors we use in the discussion about algorithms, therefore, can directly influence what we think is possible in terms of algorithm regulation. By conjuring up the image of the flowing fabric of an algorithmic veil, which merely has to be lifted, instead of a massive black box, which has to be broken open, my intention is not to minimize the challenges of algorithm regulation. Rather, the veil should be understood as an invitation to society, programmers and scholars: instead of talking about what algorithms ‘do’ (as if they were independent actors), we should talk about what the human programmers, statisticians and data scientists behind the algorithm do. Only when this perspective is adopted can algorithms be more than just ‘tamed’ – that is, actually systematically controlled by regulation.


1 Barthes, Writing Degree Zero, New York 1968, 16.
2 Thomson-DeVeaux, FiveThirtyEight, 29.5.2018, https://perma.cc/YG65-JAXA.
3 So-called cognitive metaphor, cf. Drewer, Die kognitive Metapher als Werkzeug des Denkens. Zur Rolle der Analogie bei der Gewinnung und Vermittlung wissenschaftlicher Erkenntnisse, Tübingen 2003.
4 Lakoff/Johnson, Metaphors We Live By, Chicago 2003; Jäkel, Wie Metaphern Wissen schaffen: die kognitive Metapherntheorie und ihre Anwendung in Modell-Analysen der Diskursbereiche Geistestätigkeit, Wirtschaft, Wissenschaft und Religion, Hamburg 2003.
5 Wittgenstein, Tractatus Logico-Philosophicus – Logisch-Philosophische Abhandlung, Berlin 1963, Proposition 5.6.
6 Lakoff/Wehling, „Auf leisen Sohlen ins Gehirn.“ Politische Sprache und ihre heimliche Macht, 4th ed., Heidelberg 2016, 1 ff., speak of the so-called ‘Issue-Defining Frame’.
7 See, for example, how different metaphors frame the data we unconsciously leave behind on the Internet: data as the ‘new oil’ (Mayer-Schönberger/Cukier, Big Data – A Revolution That Will Transform How We Live, Work and Think, New York 2013, 20), as ‘data waste’ (Harford, Significance 2014, 14 (15)) or as ‘data extortion’ (Singer/Maheshwari, The New York Times, 25.4.2017, https://perma.cc/9VF8-J7F7). A metaphor’s starting point has great significance for the outcome of a discussion, as behavioural economics research on ‘anchoring’ has shown; see Kahneman, Thinking, Fast and Slow, London 2011, 119 ff.
8 In this sense, Pasquale, The Black Box Society – The Secret Algorithms That Control Money and Information, Cambridge et al. 2015.
9 Cf. Simon, in: Floridi (ed.), The Onlife Manifesto – Being Human in a Hyperconnected Era, Heidelberg et al. 2015, 145 ff., 146; for the corresponding work in Science & Technology Studies see Simon, Knowing Together: A Social Epistemology for Socio-Technical Epistemic Systems, PhD diss., University of Vienna 2010, 61 ff. with further references.
10 See Lehr/Ohm, UCDL Rev. 2017, 653 (668) (‘Out of the ether apparently springs a fully formed “algorithm”’).
11 Elish/boyd, Communication Monographs 2017, 1 (6 ff.); Garzcarek/Steuer, Approaching Ethical Guidelines for Data Scientists, arXiv 2019, https://perma.cc/RZ5S-P24W (‘algorithms act very similar to ancient oracles’); science-fiction framings and references to the book/film Minority Report, in which human oracles predict murders with the help of technology, are also frequently found; see Brühl/Steinke, Süddeutsche Zeitung, 4.3.2019, https://perma.cc/6J55-VGCX; Stroud, The Verge, 19.2.2014, http://perma.cc/T678-AA68.
12 Similarly, as early as the mid-1990s, Nissenbaum, Science and Engineering Ethics 1996, 25 (34).

Title image by Alexa Steinbrück