How we define and use, or misuse, words can have cataclysmic, long-lasting effects. In the 1980s, most Western governments adopted policies of ‘financial deregulation’, apparently failing to notice that the phrase itself is a contradiction, a logical error. Finance is rules, so it cannot be deregulated. Fast-forward four decades, after a vast number of financial players invented their own forms of money – that is what ‘deregulation’ turned out to really mean – and we are locked in endless confusion about what money actually is. Once you start in the wrong place, you can never find your way back. It is reminiscent of being lost in Dante’s dark forest at the beginning of The Divine Comedy.
A similar semantic error is happening with Artificial Intelligence (AI). Unless corrected, it, too, will have dire long-term consequences. Computers cannot create human thinking and intelligence, artificial or otherwise. Neither will they become ‘smarter’ than us. They can only scan data and identify patterns, which allows for faster and better deductions by humans. It would be more accurate to rename AI the ‘Artificial Deduction Simulator’ (ADS).
We are being repeatedly warned that AI could end the human race unless controlled, and it certainly sounds impressive when former Google scientist Geoffrey Hinton says he ‘suddenly switched’ his views on whether AI ‘is going to be more intelligent than us,’ resigning his position as a consequence. Sam Altman, chief executive of OpenAI, is sounding the alarm, saying his work at OpenAI will lead to most people being worse off, and sooner than ‘most people believe’. Elon Musk is calling for a six-month moratorium because of the dangers; he co-wrote a letter with Apple co-founder Steve Wozniak to that effect.
It is true that AI technology will be economically and socially disruptive, as many new technologies are. It will especially affect some areas of repetitive work that require accuracy rather than thoughtfulness. But ask the question: ‘What does “more intelligent” mean?’ The response from technologists will likely be that AI will be able to process information billions of times faster than us. But that is not intelligence. It does not involve the creation of meaning, which, amongst other things, requires self-awareness.
The central problem is that AI proponents, like financiers with ‘deregulation’, have been captured by their own metaphors. Unfortunately, because they are respected as ‘experts’, they go on to capture the wider population with those same metaphors.
If AI represents a great peril to the human race, it will not be because AI will become smarter than us. It will be because we aren’t sensible enough to realise that a computer cannot emulate the full range of human thinking. If we are foolish enough to let AI control physical infrastructure, or worse, weapons systems, we could do ourselves great harm. But it will be because of our own foolishness, not an inevitable consequence of the march of technology.
The mistake can be seen by asking a few basic questions. Can an AI computer doubt itself? What with? Yet humans are capable of self-doubt; indeed, it is crucial to intellectual rigour. Can AI come up with a postulation from sketchy or incomplete data? No. It is always a case of garbage in, garbage out. But humans can, and do, routinely. Unlike AI, which can only scan data sets for the purposes of deduction, humans can induce: come up with ideas or theories by making connections from very incomplete information.
Humans have imaginations, which allow them, for example, to have empathy for the situation of others. What is the software code for imagining? Humans can have an understanding of truth, in part because that usually involves some sort of moral position, notably that dishonesty is wrong. Computers have no such constraints: they can spit out false information as easily as truths.
There is another deception. The purpose of thinking is to create meaning. It is not to demonstrate that you are ‘intelligent’, which is something you might assess after the fact, such as with an IQ test. So even the use of the word ‘Intelligence’ in AI is something of a misdirection.
The whole thing is an exercise in ‘personification’, a literary term describing how writers invest non-sentient objects with human qualities they cannot have (such as ‘depressed clouds’ or a ‘cheerful sun’). The AI experts are personifiers, investing a machine with human capabilities.
It is hard to be optimistic that these errors will be corrected. It is too easy to adopt the latest metaphors and run down a false path. Neither is the quasi-scientific nonsense confined to thinking about AI. Witness, for example, the sloppy thinking behind the current efforts by the Federal government to legislate against ‘harmful misinformation and disinformation in Australia’. Information is passive; it cannot meaningfully be the subject of legislation because it is only acted upon – it does not do anything. What the government really wants to do is prevent people intending to convey meanings that it does not approve of. But in order to conceal that, it characterises consumers of online material as robots having data fed into them, not humans capable of vastly different responses and interpretations. That this language subterfuge seems to go largely unnoticed does not augur well.
We seem to be in a headlong rush to rob ourselves of our own natures. The more that happens, the more likely it is that we will end up becoming slaves of what we have created.
David James is the managing editor of personalsuperinvestor.com.au. He has a PhD in English literature and is author of the musical comedy The Bard Bites Back, which is about Shakespeare's ghost.