Experiences and fun facts from videogame practice & research
18 March 2021
Originating from a mis-transcription of the name of al-Khwārizmī, a 9th-century mathematician, the term “algorithm” and interpretations of the concept hidden behind it have been part of human history for centuries. The present-day meaning treats algorithms as “a set of mathematical procedures whose purpose is to expose some truth or tendency about the world” [Striphas, 2015]. There is also another, more archaic meaning, which casts algorithms as “coding systems that might reveal, but that are equally if not more likely to conceal” [ibid.]. Algorithms are, in essence, recipes followed by human or non-human entities to accomplish an intended goal.
These almost enigmatic explanations, and the often vague ideas of what algorithms actually are and what they consist of, long made them a puzzle pursued by mathematicians, logicians, and computer scientists; today they remain an ongoing quest and problem for social scientists as well.
Early examples of algorithmic theory can be traced back to Claude Shannon's work of 1948. Although Shannon is best known for his mathematical theory of communication, which formalizes the process of communication, in that work he also showed how to parse noise from signal. By devising a set of procedures for doing so, Shannon accidentally invented the first algorithmic theories of information. Shannon operated at the junction of both previously mentioned explanations of what algorithms are - a set of mathematical procedures AND coding systems - because he was essentially ciphering and deciphering communication [Striphas, 2015]. For Shannon, communication was akin to cryptography: he was trying to find the right way to transform noisy, fuzzy input into something that pointed the way toward order.
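To make Shannon's move concrete, here is a minimal Python sketch of his entropy measure - the quantity his 1948 theory uses to tell informative signal apart from predictable redundancy. The function and the example strings are mine, for illustration only, not Shannon's own notation:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Bits of information per symbol, following Shannon's entropy formula."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A repetitive (orderly) message carries less information per symbol
# than a varied one.
print(shannon_entropy("aaaaaaaa"))  # 0.0
print(shannon_entropy("abcdefgh"))  # 3.0
```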
From this point on, the main goal of researchers in the computational domains was to understand the inner logic of algorithms, to specify their performance precisely, and to make them more effective at the same time. The hidden logic of algorithms in particular appeared to be the most salient problem to solve. For example, Kowalski [1979] dedicated his article to describing the logic component (the “what” of an algorithm) and the control component (the “how” of an algorithm) as the two essential parts of every algorithm, and to showing how a precise definition of both would make computer programs better, easier to run, and easier to modify in the long run. Others in the field argue along similar lines [Blass & Gurevich, 2004]: a clear and correct statement of the problem an algorithm is supposed to solve, and of the more efficient way to solve it, is essential both for the proper working of every algorithm and for understanding it.
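Kowalski's separation is easy to illustrate with a toy example - one written in this article's spirit rather than taken from his paper, which worked in predicate logic. The logic component below (the definition of the factorial relation) stays fixed, while two different control components compute it:

```python
# Logic component (the "what"): n! = 1 if n == 0, otherwise n * (n - 1)!
# Control component (the "how"): two execution strategies for the same logic.

def factorial_top_down(n: int) -> int:
    # Control by recursion: unfold the definition from n down to 0.
    return 1 if n == 0 else n * factorial_top_down(n - 1)

def factorial_bottom_up(n: int) -> int:
    # Control by iteration: build the product from 1 up to n.
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

# Same logic, different control, identical results.
assert factorial_top_down(6) == factorial_bottom_up(6) == 720
```

Changing the control component alters efficiency without touching the logic, which is precisely the modifiability Kowalski was after.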
However, these purely logical and mathematical perspectives on algorithms started to develop epistemological holes when a call for empirical research on algorithms emerged in opposition to the deductive approach to algorithmic science practiced in the 1990s [Hooker, 1994]. Computer scientists of the day were criticized for their reluctance to do empirical research and to publish work of that sort, mostly for reasons of reproducibility. The main argument was that analytical approaches relying on the methods of deductive mathematics, supplemented by documenting a set of benchmarks, do not usually tell researchers how an algorithm is going to work on practical problems, or why. Among others, Hooker proposed empirical research consisting of experimental design, heuristic use of experimentation, and statistical analysis carried out through computational testing as the next logical step in researching algorithms. What was entirely novel was his suggestion to invent empirically inspired explanations borrowed from other disciplines, e.g. perceiving algorithms as living organisms [Hooker, 1994, p. 211].
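What Hooker's computational testing might look like in practice can be sketched in a few lines. The setup below is illustrative (the choice of insertion sort, input sizes, and trial counts is mine, not from his paper): instead of proving worst-case bounds, we run an algorithm on many randomly generated instances and summarize the observed behavior statistically:

```python
import random
import statistics
import time

def insertion_sort(xs: list) -> list:
    # A deliberately simple algorithm to serve as the object of study.
    xs = list(xs)
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs

def measure(sort_fn, n: int, trials: int = 30):
    """Observe runtime empirically over random instances of size n."""
    times = []
    for _ in range(trials):
        data = [random.random() for _ in range(n)]
        start = time.perf_counter()
        sort_fn(data)
        times.append(time.perf_counter() - start)
    return statistics.mean(times), statistics.stdev(times)

# Experimental design: vary the input size, repeat, report mean and spread.
for n in (250, 500, 1000):
    print(n, "insertion:", measure(insertion_sort, n), "built-in:", measure(sorted, n))
```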
Approaching algorithms from an empirical angle, and borrowing from disciplines other than mathematics, opened up possibilities for further elaborating on the existing body of knowledge and for understanding algorithms in a new way. The computationally oriented fields laid down the understanding of algorithms' complex inner workings; the new fields entering this space opened it up to empirical and philosophical questions, often associated with the involvement of humans in the process or in the product. For example, Nissenbaum [2001] reflects on this emerging shift by contemplating how computer and information systems embody values. She argues that the best perspective to employ is to see values as part of technology, which was not the normative way of thinking in the early 2000s when she published her work.
Not only do technologies shape values; it also goes the other way around - values affect the shape of technologies. This is a logical conclusion from what Heidegger stated in his essays: technology is anything but neutral and value-free, because it was made by humans [Heidegger, 1977]. For him, anything that is not part of nature is a human invention, and it can never be neutral because it was made with a purpose and with values inherent to the humans who made it. Humans develop technology to reveal something that is hidden and that they want revealed. The invented technology is also in pursuit of something yet to be discovered, unknown even to its inventors. Algorithms, as defined at the beginning of this article - “a set of mathematical procedures whose purpose is to expose some truth or tendency about the world” and “coding systems that might reveal, but that are equally if not more likely to conceal” [Striphas, 2015] - come closest to the original notion of technology Heidegger talks about. Heidegger's thoughts, combined with Nissenbaum's observations, are philosophical and empirical arguments supporting the claim that neither technology nor algorithms can be perceived as neutral.
Just as some researchers wanted to complement deductive mathematics with an empirical approach, another wave [Nissenbaum, 2001] wanted to expand the set of criteria researchers would normally use to evaluate systems, to also include bias, anonymity, privacy, security, decentralization, transparency, etc. Researchers to this day follow this lead and conceptualize the problems of studying algorithms in their entirety, making algorithm studies today a field defined by a critical approach within the social sciences.
Scholars have been concerned with algorithms because we act on the knowledge they produce and certify, which makes them what Gillespie calls public relevance algorithms [Gillespie, 2014]. This is an especially relevant issue because we interact with this type of knowledge daily through services such as Amazon, Facebook, Netflix, and others. Some researchers warned about this shift as early as the introduction of Web 2.0, a shift further promoted and upheld by the development of participatory web cultures and an architecture of personal profiles and metadata [Beer, 2009]. One early study [Baker & Potts, 2013] points out how Google's auto-complete search algorithm suggested racist, sexist, and homophobic terms solely on the basis of users' input data.
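The mechanism behind such findings is simple to sketch. The toy auto-completer below is a deliberate simplification with an invented query log (real systems are vastly more elaborate), but it shows the core dynamic: suggestions are ranked purely by how often users typed them, so whatever patterns - or prejudices - dominate the log will dominate the suggestions:

```python
from collections import Counter

# Hypothetical query log standing in for aggregated user input.
query_log = [
    "why do cats purr",
    "why do cats purr",
    "why do cats sleep so much",
    "why do dogs bark",
]

def autocomplete(prefix: str, log: list, k: int = 3) -> list:
    # Rank completions by raw frequency: the algorithm has no notion of
    # which completions are appropriate, only of which are common.
    counts = Counter(q for q in log if q.startswith(prefix))
    return [query for query, _ in counts.most_common(k)]

print(autocomplete("why do cats", query_log))
# ['why do cats purr', 'why do cats sleep so much']
```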
These findings, together with those from more recent work on Google's algorithmic oppression [Noble, 2018], are just some of the evidence for Gillespie's warning that algorithms carry a specific algorithmic assessment of information and specific presumptions about what knowledge is and how one should identify its most relevant components [Gillespie, 2014]. For example, critical scholars argue that algorithms have the capacity to infer categories of identity onto users based on their habits. This not only flattens the experience of being human, but also implies that everything - from gender to the label “terrorist” that secret agencies use to give their enemies better definitions - is categorizable from data alone [Cheney-Lippold, 2019].
The problem is that the analyzed data are not impartial - on the contrary, they are partial and contextual, and without actual human intervention and interpretation they can produce biased results. Race, gender, political values, and other categories are just some of the value-laden variables that have been entered into the data by human minds. In general, machines operate on assumptions made by the people who programmed them, and the analyzed data carry hidden structurations of categories and values. After all, algorithms are composed of collective human practices [Seaver, 2017] and, as products of human invention, represent a particular knowledge logic. These systems will embody values whether or not we intend or want them to, and we should not ignore this and let chance take the wheel of what is going to happen [Nissenbaum, 2001]. As researchers, we should be aware of this when studying algorithms and the ecosystems they are embedded in.
Researchers have so far attempted to cover the complex interplay between systems or devices [Nissenbaum, 2001], including algorithms, and those who built them - what they had in mind, how the systems are supposed to be used, and the natural, cultural, social, and political context in which they are embedded [Barocas et al., 2013; Crawford, 2016; Ziewitz, 2016; Kitchin, 2017]. All of them make essentially the same argument: researchers must identify the possible political connotations of algorithms and the contested field they operate within. Many go even further, inquiring into the image of algorithms as powerful yet inscrutable entities, and into how to research politics and governance in something as hard to grasp as algorithms. We as a public lack clarity about how algorithms exercise their power over us. Moreover, the picture usually presented by computer scientists and technology companies - that algorithms are impartial, reliable, and legitimate - is misleading. The call to understand the black-boxed nature of algorithms, and the fact that they are hardly straightforward to deconstruct and understand due to their perpetually changing nature, is just one of the concerns of studying “the full socio-technical assemblage of algorithms” [Kitchin, 2017]. The important question remains the same: where to start?
The discrepancy between how computer scientists and how critical social science scholars perceive algorithms creates the first difficulty in researching algorithms fully. Each group often lacks the other's knowledge - social and technological, respectively. Critical scholars, in their effort to tackle algorithms, may have overlooked what algorithms actually are, focusing too much on what they wish algorithms to be from their own perspective [Seaver, 2017]. Computer scientists, on the other hand, are not much interested in the social aspects of algorithmic governance; their goal is to make algorithms as effective as possible. This often results in hesitance on the developers' side about what is really going on in the “machine” they themselves programmed [ibid.]. The developers simply cannot be sure anymore.
Another practical problem preventing researchers from understanding algorithms comes from the complexity of code and data as the building blocks of algorithms. Computer code has long been perceived as law [Ziewitz, 2016]; however, in early discussions researchers [Kowalski, 1979; Hooker, 1994] proposed treating data structures and code as two separate entities and researching them as such. The argument was developed in favor of understanding how data structures and coding practices affect our own observations of algorithmic behavior, ideally by distinguishing the phenomenon (the algorithm) from the apparatus used to investigate it (the data structures, code, etc.). By separating data structures from procedures, the higher levels of the program can be written before the data structure is fixed, without interfering with it [Kowalski, 1979]. This is currently more a concern for database solutions and the infrastructures underlying algorithms than for the social problems of algorithms, but it shows how deep one can go in untangling algorithmic complexity. As of now, the usual practice is that not one piece of code but multiple source codes combined work on the same data; algorithms should therefore be perceived as “multiples” - unstable objects that people interact with, or merely observe as outsiders, as researchers do [Seaver, 2017].
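Kowalski's point about writing the higher levels of a program before the data structure translates naturally into modern interface-based design. In the minimal sketch below (the names and classes are mine, purely illustrative), the process procedure is written against an abstract queue and never changes when the underlying data structure does:

```python
from abc import ABC, abstractmethod
from collections import deque

class Queue(ABC):
    """The abstraction the higher-level program is written against."""
    @abstractmethod
    def push(self, item) -> None: ...
    @abstractmethod
    def pop(self): ...

class ListQueue(Queue):
    # One concrete data structure: a plain list (O(n) pop from the front).
    def __init__(self):
        self._items = []
    def push(self, item) -> None:
        self._items.append(item)
    def pop(self):
        return self._items.pop(0)

class DequeQueue(Queue):
    # Another concrete data structure: a deque (O(1) pop from the front).
    def __init__(self):
        self._items = deque()
    def push(self, item) -> None:
        self._items.append(item)
    def pop(self):
        return self._items.popleft()

def process(queue: Queue, jobs: list) -> list:
    # Written before, and independently of, any concrete data structure.
    for job in jobs:
        queue.push(job)
    return [queue.pop() for _ in jobs]

assert process(ListQueue(), [1, 2, 3]) == process(DequeQueue(), [1, 2, 3])
```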
Methodologically, all the dimensions of what algorithm(s) consist of pose multiple opportunities for research. In addition to examining code and pseudo-code, Kitchin [2017] proposes reverse engineering, reflexively producing code, interviewing designers or conducting an ethnography of a coding team, unpacking the full socio-technical assemblage of algorithms, and more. None of us has found the best approach. Yet. It seems like we are creating more questions than we are answering. But we are all, respectively, doing the best we can.
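Of these methods, reverse engineering is perhaps the easiest to demonstrate in miniature: treat the algorithm as a black box and infer its workings from input/output behavior alone. In the contrived sketch below, the hidden scoring function is invented purely for illustration:

```python
def _hidden_score(text: str) -> int:
    # Stands in for an opaque system whose source we cannot inspect.
    return len(text) + 5 * text.count("!")

def probe(black_box, variants: list) -> dict:
    """Vary one input feature at a time and record how the output moves."""
    return {v: black_box(v) for v in variants}

print(probe(_hidden_score, ["hello", "hello!", "hello!!"]))
# {'hello': 5, 'hello!': 11, 'hello!!': 17}
# Each added "!" raises the score by 6 (one extra character plus a weight
# of 5), suggesting the opaque system rewards exclamation marks heavily.
```

Real algorithmic audits follow the same logic at scale, though production systems change constantly, which is exactly the instability noted above.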
Algorithms are products of different types of culture, blending the technical and the non-technical together - programmers tweak them to perform better, men use them differently than women, some game them, some blindly follow their directions, and so on. They also don't exist in a vacuum; they are embedded in a broader socio-technical environment that is influenced by governments, institutions, marketplaces, finance, and social and technical infrastructures. They don't work as flawlessly as companies wish them to and people might believe. That's why everyone is talking about them and researching them from different angles.
Baker, P., & Potts, A. (2013). ‘Why do white people have thin lips?’ Google and the perpetuation of stereotypes via auto-complete search forms. Critical Discourse Studies, 10(2), 187-204.
Barocas, S., Hood, S., & Ziewitz, M. (2013). Governing algorithms: A provocation piece. Available at SSRN 2245322.
Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985-1002.
Blass, A., & Gurevich, Y. (2004). Algorithms: A quest for absolute definitions. In Current Trends in Theoretical Computer Science: The Challenge of the New Century Vol 1: Algorithms and Complexity Vol 2: Formal Models and Semantics (pp. 283-311).
Cheney-Lippold, J. (2019). Chapter 1: “Categorization”. In: We Are Data. NYU Press.
Crawford, K. (2016). Can an algorithm be agonistic? Ten scenes from life in calculated publics. Science, Technology, & Human Values, 41(1), 77-92.
Gillespie, T. (2014). The relevance of algorithms. In Media Technologies: Essays on Communication, Materiality, and Society. MIT Press.
Heidegger, M. (1977). The question concerning technology. In The Question Concerning Technology and Other Essays. Harper & Row.
Hooker, J. N. (1994). Needed: An empirical science of algorithms. Operations Research, 42(2), 201-212.
Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14-29.
Kowalski, R. (1979). Algorithm = logic + control. Communications of the ACM, 22(7), 424-436.
Nissenbaum, H. (2001). How computer systems embody values. Computer, 34(3), 118-120.
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.
Striphas, T. (2015). Algorithmic culture. European Journal of Cultural Studies, 18(4-5), 395-412.
Zarsky, T. (2016). The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision making. Science, Technology, & Human Values, 41(1), 118-132.
Ziewitz, M. (2016). Governing algorithms: Myth, mess, and methods. Science, Technology, & Human Values, 41(1), 3-16.