The Wired Mind - The Myth of Objectivity
Welcome to my new newsletter segment The Wired Mind! In these essays I reflect on philosophical and ethical theories and concepts inspired by encounters with popular culture.
I was reading
a review of Murderbot in the Dutch newspaper De Volkskrant, when something piqued my interest. At one point, the reviewer casually remarks:
“Skarsgård, meanwhile, plays the robot as if he were fairly high on the spectrum, with great difficulty with any form of intimacy or even making or maintaining eye contact, something he should, as a robot, actually be insensitive to.” (note: this is my translation - for the original Dutch, see footnote1)
This made me wonder about the way we compare machines to neurodivergent people, or more specifically, to people diagnosed with autism.
The Machine, or more specifically the humanoid robot, is characterised by its possession of human intelligence, and beyond. In the early 1900s, neurological research led to new insights into the workings of our brain. Experiments with electrodes on brains gave researchers a new understanding of the role of electrical pulses in firing the neurons in our brains; these insights quickly piqued the interest of mathematicians and computer scientists such as Alan Turing. The idea of electricity as the driving force of the human brain thus translated into an interest in creating a thinking machine. This burning desire to recreate the human brain in machine form has driven AI research ever since. Meanwhile, the thinking machine has been bestowed an almost mythical status: when AGI finally arrives, it won’t just be a humanoid machine, but better than a human. The field of AI leans heavily on the underlying assumption that the machine is a purely objective construct - with no emotions to hinder its decision-making and unlimited computing power, it will be able to make decisions faster, more fairly and with lower margins of error.
But it is not only the humanoid robot that is characterised by an unfailing impassiveness and objectivity, at the root of which lies the all-powerful force of mathematics. Parallel to the robot masquerading as a human, we see the high-functioning autistic mind, or the ‘savant’, fulfilling the same assumptions: a kind of unemotional, objective observer, powered by a supreme understanding of mathematics.
The idea of the autistic mind as a machine
may be traced back to the general conceptualisation of (high-functioning) autism as fitting within the framework of the (white male) math genius. Cara-Julie Kather gives an interesting description of this stereotype in the essay Constructing Autism: Norming Thought through Mathematics, Masculinity, Whiteness and Fascism. Mathematics is a discipline that, as Kather states, seems to be “an image of thought and a mode of thinking that favours binaries and is entrenched in goals of epistemic certainty and universal truth”2. This focus on mathematics as representing universal truth is not niche, even in philosophy. The Wiener Kreis3, an early 20th-century group of philosophers chaired by Moritz Schlick, attests to this. Its members attributed value only to logic, the natural sciences and mathematics, deeming metaphysics, art and natural language ‘too vague’ and open to too many interpretations to let us discover the ‘truth’ of things. This idea of logic as the measure of truth (describing existent ‘states of affairs’) was also present in the early Wittgenstein4, although he later reconsidered this view in his Philosophical Investigations.
That the ‘savant’ shows parallels with our idealised humanoid robot may also be encouraged by a persistent theory about the autistic mind that leads people to associate it with the unemotional robot. I am referring to the ‘empathy-deficiency theory’, which has its origin in research by Simon Baron-Cohen from the early 2000s. In short, this theory assumes that people diagnosed with autism lack empathy in two ways: they are unable to understand others (a failing grasp of other people existing as ‘other minds’ similar to their own mind), and they have a more general inability to feel empathy, leaving them unable to display empathetic concern. The empathy-deficiency theory has, however, since been doubted and criticised. Even though some autistic people may identify with ‘lacking’ empathy, others feel differently5. Some experience empathy very strongly, to the point of it being overwhelming; others do experience it, but find it harder to identify the situations in which showing empathy is expected. In other cases, autistic people may find it easier to be empathetic towards those close to them or towards animals.
The problem of other minds
has long been prominent in philosophy, and was ‘popularised’ by Descartes in his Meditations, where he performs an experiment of radical doubt. Simply put, the problem of other minds is an epistemological (knowledge) problem about any person other than myself. How can I be sure that someone else has an inner life? I can infer it from their behaviour - if I see someone stub their toe and curse loudly, I will assume this person is in pain - but this inference never gives certainty as to whether they actually possess so-called ‘qualia’, the mental states caused by experiences (in this case, whether they actually experienced pain). For all I know, other people might just be sophisticated zombies who know exactly how to act to fool me into thinking they have a rich inner life comparable to my own. Coming back to Wittgenstein: he wrote a pretty cool thought experiment about this problem in his Philosophical Investigations, called the Beetle in the Box. Imagine we all have a small box (I always imagine an old-fashioned matchbox). Everyone tells everyone else that they have a beetle in their box, but we are never allowed to look into the other boxes. We have to imagine what is in theirs going by our own beetle. Of course, everyone may picture a beetle somewhat differently. I may think of a Hercules beetle. Someone else of a scarab, or one of those cute-looking green weevils. Or somebody may think of another creature entirely (and how about the VW Beetle?!).
I am drawing on the other minds problem because, even though we place great value on AI and robots, we usually do not view them as other minds, but as the above-mentioned philosophical zombies - creatures adept at appearing to be other minds, but which aren’t (that is, until ChatGPT had the general population doubting this, thanks to its sophisticated answers). For me, this resonates with how we view the autistic mind when we adhere to stereotypes like the empathy-deficient savant.
Some of the great humanoid robots in science fiction history
, however, may defy the stereotype of the machine as an empathy-deficient assembly of mechanical parts. Think of the robots gaining self-consciousness in the recent Westworld TV series, or the Keats hybrid from the Hyperion novels. The real problem with the way we view humanoid robots, and the ‘savant’, I think, has to do with their purported state of objectivity.
In his Foundation novels, Asimov adds the ‘Zeroth Law’ to his three Laws of Robotics. According to the Zeroth Law, “A robot may not injure humanity or, through inaction, allow humanity to come to harm6”. I think this law embodies the assumption that, through the mathematical essence that drives the robot and its behaviours, the humanoid robot will be capable of making decisions ‘for the greater good’. Even though the books themselves raise doubts about how a robot (or anyone, for that matter) could decide what is harmful and what is not, AI companies want to convince themselves and us that the objectivity of mathematics and logic will magically show the way to the most ‘ethical’ decision. The ‘savant’ is often viewed as a creature similarly driven by what Kather’s paper calls a mathematical mode. In line with this, we often see that humanoid robots that do express emotion are subsequently designated as dangerous: showing emotion marks them as aberrant, deviating from the norm of ‘objectivity’, which makes them unpredictable and thus uncontrollable. An example is the way the SecUnit from Murderbot is treated after he manages to hack his governor module and begins to explore his own thoughts and experiences in a more subjective way.
It appears, then,
that both the humanoid robot and the ‘savant’ are viewed as being simultaneously Prime and Imperfect. They are Prime insofar as they conform to the idea of embodying a mode of mathematical objectivity that will solve problems and thus make them useful; they are Imperfect when they challenge this image of objectivity by unexpectedly showing emotion, which makes them ‘faulty’.
Ultimately, I think this glorification of objectivity may harm our ethical decision-making rather than safeguard it. I am by no means a relativist when it comes to truth - I think that in contexts like scientific research, we have to be pragmatic and take a Popperian approach, designating some things as ‘true’ for as long as we cannot prove their falsehood. But this does not mean that everything can be objectified, or that emotions should be discounted as a valid compass in our (moral) decision-making. Both the humanoid robot and the ‘savant’, then, should be judged on more than their mathematical accuracy. They have an intrinsic value beyond their ‘analytic’ utility, and the other minds problem applies as much to us as to them when we re-examine Wittgenstein’s beetle.
“Skarsgård speelt de robot ondertussen alsof hij behoorlijk op het spectrum zit, met grote problemen met elke vorm van intimiteit of zelfs maar oogcontact, waar hij als robot eigenlijk ongevoelig voor zou moeten zijn.”
https://krisis.eu/article/view/41462/39676
https://plato.stanford.edu/entries/vienna-circle/
Specifically: in his Tractatus Logico-Philosophicus
For example, see: https://www.autism.org.uk/advice-and-guidance/professional-practice/autism-and-empathy and https://pmc.ncbi.nlm.nih.gov/articles/PMC9804307/
Isaac Asimov, Robots and Empire, Prelude to Foundation, Foundation and Earth