The official launch event for the Society and Space MSc blog, which also served as a celebration of the course’s 25 years of existence, took place in Geographical Sciences on Wednesday 27th April. The panel for the event was made up of six Society and Space alumni, who spoke highly of their experience on the course and responded to the event’s theme in light of their current research. JD Dewsbury, a Reader based in the School and an alumnus of the course, also gave a retrospective view of the course through the stages of its development, reflecting on what has been unique in its culture and pedagogy. The event drew a diverse audience including past and current students, staff from the School and other disciplines, and members of the public. After the panel presentations, the audience joined in a discussion about the possibility for academic research to address current social issues, to imagine not-yet-thinkable ways of doing, and to create more expansive ways of organising and co-existing within planetary boundaries.
In his talk, Sam Kinsley spoke about “the idea of an ‘imaginary’ (in the vein of ‘geographical’ or ‘sociological’ imaginaries) to offer a critical reading of how particular stories about automation and agency are taking hold”. Kinsley frames this argument firstly “in terms of ‘anticipation’ and in terms of ‘stupidity’.” The following is an excerpt from a post on his academic blog about anticipation and stupidity within the imaginary of the algorithm and the ways algorithmic imaginaries represent their own form of ‘worlding’ that does not entirely coincide with the realities these algorithms are designed to represent.
Firstly, the phenomena labelled ‘algorithms’ are suggested to anticipate the activities of people, organisations and (other) mechanisms. This is one of the substantive claims of ‘big data’ analytics in relation to any form of ‘social’ data, for example. It is certainly true that, building on ever-larger datastores, software (with its programmers, users etc.) has a capacity to make certain kinds of prediction. Nevertheless, and as many have pointed out, these are predictions based upon a model (derived from data) that I argue constitutes a world (it does not reflect the world — these predictions are ontogenetic, calling entities/relations into being, rather than descriptive).
Further, precisely because these anticipatory mechanisms are often a part of systems that use their outputs in order to select what may be seen, or not, and thus what may be acted upon, or not, they are arguably a form of self-fulfilling prophecy. The anticipation is ‘proven’ accurate precisely because it functions within a context where the data and its structures (the model) are geared towards their efficient calculation by the ‘algorithm’. Thus, we might choose to be more cautious about the claims of large social media experiments that are focused on a single platform, precisely because they are self-validating. A social media platform is a world unto itself, not a reflection of ‘reality’ (whatever we choose that to mean). Indeed, it has been highlighted by others (Mackenzie 2005, Kitchin 2014) that the outcomes of ‘algorithms’ can be unexpected in terms of their work in world-ing.
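The feedback loop described above can be made concrete in a minimal, entirely hypothetical sketch (not any real platform’s system): a ranking model selects what may be seen, and is then ‘validated’ by data that its own selections produced. Here two items have identical appeal, yet an arbitrary initial score means one of them is never shown, so the model’s judgement of it is never tested against evidence.

```python
# Hypothetical sketch of a self-validating ranking loop.
# Two items have *identical* appeal: a user clicks on every
# second impression of whichever item they are shown.
def clicked(nth_impression):
    return nth_impression % 2 == 0

score = {"A": 0.9, "B": 0.1}      # arbitrary initial model scores
impressions = {"A": 0, "B": 0}
clicks = {"A": 0, "B": 0}

for _ in range(1000):
    # The system selects what may be seen: only the top-scored item
    # is ever surfaced, so data is only ever collected about it.
    item = max(score, key=score.get)
    impressions[item] += 1
    if clicked(impressions[item]):
        clicks[item] += 1
    # The model is then 'retrained' on the logged data it generated
    # (smoothed click rate), confirming its own selection.
    score[item] = (clicks[item] + 1) / (impressions[item] + 2)

print(impressions)   # {'A': 1000, 'B': 0} -- B is never shown
print(score["B"])    # 0.1 -- the initial judgement is never challenged
```

The point of the sketch is that the model’s ‘accuracy’ on its own logs looks impeccable: within the world the system has constituted, A really does get all the clicks, precisely because B was never allowed to appear.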
Yet, the supposition of such an anticipation is, itself, a form of anticipation – a kind of imagining of agency. The capacity to ‘predict’ is suggested to have effects, and those effects produce particular kinds of experience, or spaces. Visions of a world are conjured with what we imagine ‘algorithms’ can do. Thus it is a double-bind of anticipation: to write anticipatory programmes, a programmer must imagine what kinds of things the programme can/should anticipate. There is accordingly a geographical imaginary of anticipatory systems. Furthermore, that imaginary is becoming normative – in two senses: normative or prescriptive in the sense of the double-bind just mentioned; and normative, in the Wittgensteinian sense, such that the imaginary becomes the criterion by which we judge one another on whether how and what we say about something (e.g. ‘algorithms’) is appropriate, or not, to the context of discussion.
Secondly, ‘algorithms’, as socio-technical apparatuses, can, if we allow, act as a mirror in which we might reflect upon the generation and use of sets of rules, and how they are followed. In order for contingencies to be accommodated, the anticipatory ‘world-ing’ of the programmer must be complex (and a form of catastrophism – always planning for the potential error or breakdown). Such a reflection upon ‘algorithms’ is, in effect, a reflection upon reason and stupidity. For the purposes of this post, I identify two elements to this reflection: the reification of the apparatus we call ‘algorithms’; and the idiomaticity and untranslatability of language in terms of the conventions of programming ‘code’.
Much of the recent discourse of ‘algorithms’ invites, or even assumes, a belief in the validity and sovereignty of the black-boxed system named an ‘algorithm’. The ‘algorithm’ is reportedly capable of extraordinary and perhaps fear-inducing feats. We are often directed to focus upon the apparent agencies of the code as such, perhaps ignoring the context of practices in which the ‘algorithm’ is situated: practices of ‘coding’, ‘compiling’ (perhaps), ‘designing’, ‘managing’, ‘running’ and many others that involve the negotiation of different rationales for how and why the ‘algorithm’ can and should function. There is nothing in-and-of-itself “bad” about the apparently hidden agencies of an ‘algorithm’ — although, of course, sometimes questionable activities are enabled by such secrecy — and focusing upon that hiddenness elides those contexts of practice.
By ‘reifying’ (following Adorno and Horkheimer 2002; Stiegler 2015) the black-boxed ‘algorithm’ we submit to a form of stupidity. We allow those practitioners that enable the development and functioning of the ‘algorithm’, and ourselves as critical observers, to “vanish […] before the apparatus” (Adorno and Horkheimer 2002, xvii). This is inherently an act of positioning ourselves in a peculiarly subordinate relation to the apparatus; it is a debasement of our theoretical knowledge (because, of course, we understand the context of practices, we understand the kinds of ‘world-ing’ discussed above), and of our critical ‘know-how’. Such a ‘stupidity’ is a tendency towards an incapacity: an inability to meet the future, deferring instead to the calculative capacities of the apparatus, and its (arguably) impoverished world-ing.
You can read more here. Many thanks to Sam Kinsley and the rest of the panellists for their excellent contributions, and to Sam for allowing us to reblog his writing here!