November 21, 2024
AI does not have a ‘ghost in the machine’

Author: Donald Clark

AI seems to have acquired the concept of a soul, a ‘ghost in the machine’, the phrase Gilbert Ryle coined when he critiqued the idea as hopelessly Cartesian in his wonderful book ‘The Concept of Mind’. I am also uncomfortable with this Cartesian shift towards maths and software having a soul. We tend to see objects as having subjects behind them, so we readily invent ghosts in the machine, even seeing our own minds as souls. The irreducible ‘I’ of Descartes seems to have been resurrected as the Artificial ‘I’. But there is no ‘I’ in ‘AI’.

Promethean myth

From the Promethean myth onwards, we have imagined technology as a monster. Frankenstein was the first modern Prometheus (literally the subtitle of Mary Shelley’s book), a monster in literature made manifest as a man in the movies. The Terminator generation has its own Promethean icons. The tech kids, brought up on a diet of sci-fi, are fond of these thought experiments, which they mistake for philosophy. They speculate about tech that ‘has a mind of its own’ and will ‘destroy us’, with no evidence beyond imagining a possible consequence, free of any mitigating design or control. Let’s not turn fiction too readily into non-fiction.

First, there is no such thing as ‘AI’ in the singular. It is not an entity; it is many things, and these many things talk to each other, work in tandem and form a nexus of technologies. John McCarthy, who coined the phrase in 1956, regretted it. It is a collection of data techniques: training on that data, along with several statistical and algorithmic methods for producing output word by word. It is competent without comprehension.
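
To see why ‘word by word output’ matters, consider a minimal, purely illustrative Python sketch of autoregressive sampling. A hand-made probability table stands in for a trained network here (a real LLM computes these probabilities with billions of learned parameters, not a lookup table); generation is just repeated sampling of a next token. Competence, no comprehension.

```python
import random

# Toy table of next-token probabilities. In a real LLM these come from
# a trained neural network, not a hand-written dictionary.
next_token_probs = {
    "<start>": {"the": 0.9, "a": 0.1},
    "the": {"cat": 0.6, "mat": 0.4},
    "a": {"cat": 1.0},
    "cat": {"sat": 0.9, ".": 0.1},
    "sat": {"on": 1.0},
    "on": {"the": 1.0},
    "mat": {".": 1.0},
    ".": {},
}

def generate(max_tokens: int = 10) -> str:
    """Emit output one token at a time by sampling from the table."""
    token, output = "<start>", []
    for _ in range(max_tokens):
        dist = next_token_probs.get(token)
        if not dist:  # nothing follows a full stop in this toy model
            break
        candidates, weights = zip(*dist.items())
        token = random.choices(candidates, weights=weights)[0]
        output.append(token)
    return " ".join(output)

print(generate())  # e.g. "the cat sat on the mat ."
```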

Neither is there one unitary form of intelligence, in human or animal form. Intelligence is a complex set of cognitive capabilities. With AI it is even more diffuse and complex: not a set of discrete brains housed in bony skulls, all approximately the same size with roughly the same limitations, but an instantly communicating network. Even if we were to use human identity and intelligence as the benchmark, our brains have an incredibly limited working memory and a limited, fallible and declining long-term memory; we forget almost everything we try to learn; we have dozens of identified intrinsic biases; we are emotionally unstable; we can become dysfunctional through illness, be delusional, even hallucinate; we can’t network; we often get dementia and Alzheimer’s; and we die.

AI is many things

AI can learn separately but share instantly. Imagine that one person learns something and, instantaneously, everyone else knows it. There is no person here, no single AI entity, only a growing network of competence. To be fair, that is where the danger lies, but let’s not read intention, deception and agency into this network.
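
The point is architectural, not mystical. Here is a minimal, hypothetical Python sketch of what ‘learn separately but share instantly’ means: many deployed copies of a model reference one shared set of weights, so a single update reaches all of them at once. Everything here (the names, the dict-as-weights) is an illustrative stand-in, not any real system’s API; real deployments ship updated weights to servers rather than sharing Python references.

```python
# One shared set of 'weights' for a whole fleet of model copies.
shared_weights = {"new_skill": 0.0}

class ModelInstance:
    """One deployed copy of the 'same' model."""
    def __init__(self, weights: dict):
        self.weights = weights  # a reference to the shared weights

    def has_skill(self) -> bool:
        return self.weights["new_skill"] > 0.5

# A fleet of a thousand copies, none of them a single 'entity'.
fleet = [ModelInstance(shared_weights) for _ in range(1000)]

shared_weights["new_skill"] = 1.0          # one 'learning' event...
assert all(m.has_skill() for m in fleet)   # ...and every copy 'knows' it
```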

In particular, Generative AI is not a person; it is us. We are staring into a mirror of our own collective, cultural capital. We are speaking to a hive mind that has competence without comprehension. It can be any of us and all of us, as it knows most of what we know.

In our rush to see an essence in everything, we see AI as one thing. We talk about AI as if it were a being or entity that could be good or evil. A common conceptual leap is to do what Descartes did with his thought experiment of the Evil Demon and see the ghost in the machine as malevolent. A consequence of this ghost-in-the-machine thinking is attributing evil intent to the ghost, a demon that will become autonomous, escape our control, conjure up existential threats to our species and eliminate us. Without that ghost, soul or demon, we strip AI of moral intention, either good or bad.

What exacerbates this mode of thought, that there is an intentional entity that lies to or deceives us, is poor prompting: provoking and poking the bear. If you ask it weird things, it will come back with weird stuff. In that sense it merely reflects the stupidity or intent of us humans. As something competent without comprehension, it speaks back to us and so seems like another person, but it is not. We are actually speaking to ourselves as a species. Our social and cultural heritage has been captured and seems to speak back at us, but we are speaking with ourselves. It is a mass act of social generation, mediation and reflection. It seems human not because it is human but because it reflects our social production and heritage. You are not talking to an individual with a soul; you are looking into a wide and deep pool of socially structured knowledge.

Anthropomorphising

This is the root cause of much of the anthropomorphism, speculative causality and doomsaying around AI, which posits sci-fi levels of autonomy and agency and ignores control. Yann LeCun is the doomsayers’ most prominent critic and constantly describes this sci-fi speculation as out of control. For him, we ignore the good design and engineering that can mitigate these wildly speculative risks. There is no form of existing, or even anticipated, AI that suggests an extinction event for humanity. The risk is negligible compared to risks such as climate change, and that is where we need to use AI: to help solve our more intractable problems.

Argument from authority

Giving ‘hero scientists’ too much credence is also to fall victim to the ‘argument from authority’. This is not the way to deal with such issues. The simple idea of being cautious and careful as we go is being swept aside by relentless rhetoric focused on existential and extinction risk. The lack of debate on how to accelerate beneficial uses of AI is infuriating. This lack of balance means the benefits in education and healthcare are in danger of not being realised because hyperbolic speculation has become apocalyptic.

AGI

AGI just seems an unnecessary concept, the most common application of the ghost-in-the-machine principle: the idea that there is a single threshold, some oddly varied human bar, that will be transcended, rather than lots of separate capabilities, all of which can be designed to be controlled. We can create many AI technologies with different specialisations without seeing AI as crossing a single line.

“How can you design seatbelts for a car if the car does not yet exist?” asks Yann LeCun. Indeed, the self-driving car is a good example of why we should not be concerned about AGI: we are nowhere near having self-driving cars on our roads, which confirms that we are nowhere near AGI. In truth, abstract researchers are often far from the coalface of implementation and tend to exaggerate thought experiments and abstractions.

Copyright

The issue of creative copyright rests on a misunderstanding of the technology as some sort of single copying machine or single source that LLMs retrieve from or sample in some way. This is not how it works. There are both input and output issues here. On input, namely the data used for training, there is a big difference between ‘learning from’ and ‘copying from’. The models learn from the training data; they do not copy it. Japan has already confirmed in law that there is no copyright issue here. On output, the models spit out freshly minted words, probabilistically, one by one. Again, there is no copyright problem. We might as well say that all human brains break copyright in every creative act, since they take in text and imagery through reading, hearing and perceiving when learning, then output new words, speech or images. There is no ghost in the machine copying or sampling.
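
One way to make ‘learning from’ versus ‘copying from’ concrete is a toy statistical model, sketched below. This is illustrative only and orders of magnitude simpler than an LLM: it counts word-pair statistics from a tiny made-up ‘training corpus’, after which the corpus itself plays no further role, and generation produces freshly minted sequences that need never appear verbatim in the training data.

```python
import random
from collections import Counter, defaultdict

# A tiny 'training corpus'. Real models train on vast amounts of text,
# but the principle is the same: what is kept afterwards is statistics
# about the data, not the data itself.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat lay on the rug",
]

# 'Learning from': extract bigram counts; the corpus can then be discarded.
counts = defaultdict(Counter)
for line in corpus:
    words = ["<s>"] + line.split()
    for prev, curr in zip(words, words[1:]):
        counts[prev][curr] += 1

def generate(max_tokens: int = 8) -> str:
    """Sample a new sentence, one word at a time, from the statistics."""
    token, output = "<s>", []
    for _ in range(max_tokens):
        dist = counts.get(token)
        if not dist:
            break
        words, weights = zip(*dist.items())
        token = random.choices(words, weights=weights)[0]
        output.append(token)
    return " ".join(output)

# Possible output: "the dog sat on the mat" -- a sentence that appears
# nowhere in the corpus. The model recombines; it does not retrieve.
print(generate())
```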

Fakery

AI is far more likely to protect us from fake and false information than to make us believe in it. Around 400 million to 1.5 billion fake accounts are taken down by AI every quarter (the figure fluctuates), and hate speech is just 0.01-0.02% of all content, 82% of which is taken down automatically by AI before human moderation. This is a known problem with known solutions. We saw this in the clickbait drone story, which turned out to be fake news: a USAF official who was quoted as saying the Air Force ran a simulated test in which an AI drone killed its human operator later said he had ‘misspoken’ and that the Air Force never ran this kind of test, in a computer simulation or otherwise.

Conclusion

In positing an ego, soul, agent or ghost in the machine, we make a category mistake: taking a concept from one domain, the human, and applying it to maths and software. There is no ‘I’ in ‘AI’.
