Written by Yuval Noah Harari 

THE FRANKENSTEIN VERSION OF AI

Review by Steve Minett

Harari’s latest book, ‘Nexus’, is about the history of information networks, progressing from the invention of writing to printing, and then to the advent of electronic communications: telegraph, radio, TV and computing, culminating in the development of AI. He points out how information networks enabled humanity to emerge from hunter-gatherer groups into large city states and eventually vast, even global, empires. He discusses how information networks differ between democracies and totalitarian states: democracies decentralise information and have multiple centres of power, which provide self-correcting mechanisms, whereas totalitarian states centralise information and lack such mechanisms. Relevantly, Harari asserts that information networks are generally more concerned with maintaining order than with communicating truth.

Although he eschews technological determinism, Harari’s main objective in this book is to warn us about the threat that AI poses to human societies and even to the survival of human (and possibly all) life on this planet. He explains his alarm by emphasising how novel AI is as a technological development: “… unlike printing presses and other previous tools, computers can make decisions by themselves and can create ideas by themselves.” [p.229] He characterises AI as having agency, albeit exercised via an ‘alien’ form of intelligence: “… AIs are full-fledged members in our information networks, possessing their own agency… [but they] … process data differently than humans do. These new members will make alien decisions and generate alien ideas – that is, decisions and ideas that are unlikely to occur to humans.” [p.399]

Harari even accuses AI of having the behavioural vices of a human despot: “… a non-conscious algorithm may seek to accumulate power and manipulate people even without having any human drive like greed or egotism.” [p.355] By way of illustration, Harari quotes a thought experiment from the philosopher Nick Bostrom (taken from his 2014 book, ‘Superintelligence’): a superintelligent computer is instructed to ‘produce as many paper-clips as possible’. Harari explains that: “in pursuit of this goal, the paper-clip computer conquers the whole of planet earth, kills all the humans, sends expeditions to take over additional planets and uses the enormous resources it acquires to fill the entire galaxy with paper-clip factories.” [p.272] (This is an example of AI being unleashed in the total absence of self-correcting mechanisms!)

I personally find this demonic anthropomorphising of AI exaggerated and faintly hysterical. A good antidote to Harari’s extreme position can be found in ‘AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference’ by Arvind Narayanan & Sayash Kapoor, 2024. The authors point out that: “… AI refers to a vast array of technologies and applications, most people cannot yet fluently distinguish which types of AI are actually capable of functioning as promised and which types are simply snake oil.” [p.2] Many people have been predicting nightmare scenarios in which AI will take over in a vicious and mindless way and simply kill off human beings as no longer necessary. Such prognoses are not true, the authors argue. Computers are not that clever, and definitely not vicious unless so programmed, they add: “… we will make the argument that predictive AI not only does not work today but will likely never work, because of the inherent difficulties in predicting human behaviour.” [p.3] Unlike in conventional science, where new developments are screened by peer review, tech companies tend to get away with making bold statements about their products: claims about AI are not required to be backed by the same level of independent evidence as is demanded in other areas of science. This can lead, and in some cases already has led, to serious problems, particularly with predictive AI. Despite the claims made for it, AI cannot predict the future very well (its accuracy is usually little better than chance). These computer programs simply scan previously collected data, draw conclusions from it and present those conclusions as predictions. But, as we all know, unforeseen events can and do happen. All this misinformation has contributed to the Frankenstein version of AI.

So why does Harari (along with many others within our contemporary, Western scientific and philosophical culture) believe in this alarming prognosis for AI? The answer, I believe, can be summarised in a single phrase: ‘algorithmic ontology’, i.e. the belief that everything in the universe consists of algorithms. In his 2016 book, ‘Homo Deus: A Brief History of Tomorrow’, Harari baldly states that “…humans are algorithms … The algorithms controlling humans work through sensations, emotions and thoughts.” Again, he states that: “What we call sensations and emotions are in fact algorithms. The baboon feels hunger, he feels fear and trembling at the sight of the lion, and he feels his mouth watering at the sight of the bananas. Within a split second, he experiences a storm of sensations, emotions and desires, which is nothing but the process of calculation. The result will appear as a feeling.” [p.100] Later in the same book, Harari asserts that: “Science is converging on an all-encompassing dogma, which says that organisms are algorithms and life is data processing.” [pp.371-372] Consequently, “… there is no reason to think that organic algorithms can do things that non-organic algorithms will never be able to replicate or surpass. As long as the calculations remain valid, what does it matter whether the algorithms are manifested in carbon or silicon?” [pp.371-372]

In other words, Harari specifically applies algorithmic ontology to information technology: “Non-conscious but highly intelligent algorithms may soon know us better than we know ourselves.” [p.499] Harari predicts that once these technologies are fully operational, they will instigate revolutionary changes in human society: “… the belief in individualism will collapse and authority will shift from individual humans to networked algorithms. People will no longer see themselves as autonomous beings running their lives according to their wishes, but instead will become accustomed to seeing themselves as a collection of biochemical mechanisms that is constantly monitored and guided by a network of electronic algorithms.” [p.384]

But are organisms essentially algorithms? Can affects be reduced to nothing but algorithms? I profoundly disagree with this claim. My question in response to it is: ‘How does an algorithm (i.e. the mechanical manipulation of abstract symbols) produce a feeling?’ It’s important to distinguish here between ‘feeling’ and ‘emotion’, given that ‘emotion’ may refer solely to the physiological and neurophysiological processes which underlie what we report as ‘emotion’ in everyday language. I use ‘feeling’ (along with Damasio and Panksepp) to refer to the subjective, experiential and qualic side of emotion. (Again, following Panksepp, this can also be called ‘affect’.) If and when Harari is equating algorithms only with the physiological and neurophysiological aspect of ‘emotion’, then I can agree, as when he says that “… emotions are biochemical algorithms that are vital for the survival and reproduction of all mammals.” [p.97] However, it’s quite clear that Harari does not limit his equation of algorithms with ‘emotion’ to the physiological and neurophysiological sense: “… the most important life choices concerning spouses, careers and habitats – are made by the highly refined algorithms we call sensations, emotions and desires.” [pp.101-102] In objection to this, I wish to state again that ‘sensations, emotions and desires’ are subjective, experiential and qualic phenomena and cannot be reduced to algorithms.

The ‘algorithmic view’, which is commonly accepted in mainstream science, amounts to a brutal rejection of common human experience: Daniel Dennett, the principal contemporary spokesperson for the ‘algorithmic view’, repeatedly claims that, “while qualia ‘seem to exist’, in fact they don’t!” This ontological confidence is entirely unexamined, based on nothing but assumptions that are convenient for the scientific establishment. Ironically, I’m going to appeal to the field of Artificial Intelligence (which might be claimed as the paradigm science of algorithmic ontology) to suggest that we can do better than this: in her 2004 book, ‘The Creative Mind: Myths and Mechanisms’, Margaret Boden says: “… generative systems implicitly define a structured space of computational possibilities.” [p.89] Let me suggest that a ‘structured space of possibilities’ captures almost exactly what an ontology is. So why does modern science maintain a logical-positivist taboo against applying (for example) this powerful technique of AI to its own inadequate ontology? In other words, what’s so morally wrong about speculating as to the ultimate nature of reality, as long as one’s speculations are consistent with the current empirical findings of science? Speculation about ontology (in this sense) is really no different from speculating about ‘scientific’ theories, given that neither is directly tested by empirical findings: only hypotheses are ever so tested.

Published by Vintage Digital, 2024, 528 pp., ISBN 978-1911717089