ECONOMICS

The rise of the machines

  • 31 January 2022

There is a great deal of commentary about the growing importance of artificial intelligence, or AI, especially in business circles. To some extent this is a self-fulfilling prophecy — if people think something will have a seminal effect then it probably will. But if the supposed commercial benefits are significant, the dangers are potentially enormous.

Elon Musk, a person intimately familiar with AI, frets that it could become ‘an immortal dictator from which we would never escape’, suggesting that it will overtake human intelligence within five years. He, and many others, envisage a ‘technological singularity’: a point when machine intelligence surpasses human intelligence and then accelerates at an incomprehensible rate.

At one level these claims are nonsense that reveal just how degraded our understanding of ourselves has become. The first and most obvious problem is that AI can never replicate the complexity and range of human intelligence. At best, it can improve on one small part of our thinking: computation. But computation is only one part of our cognition, and cognition is only one slice of the range and depth of human thought.

There are other errors, which perhaps suggests that a little more intelligence is required when thinking about AI. Computers do not have intentionality (will), which is self-evidently necessary for thinking. They have no sense of their own mortality. Anything that involves our understanding of qualities rather than quantities, such as the beauty of a painting or a piece of music, is outside the range of AI, or of any computer. Computers cannot think, and to call what they do ‘intelligence’ only confirms how narrow our measurements of thinking are (IQ measures, basically).

Then there is the problem of consciousness: humans’ ability to be aware of their own thoughts and of themselves. It is possible to program software that can continuously produce new software configurations in response to the computer’s interaction with its environment. That is what using AI to get a computer to ‘learn’ means. But no machine will ever be aware of the experience of having learned. It is a machine. It is not merely lacking in self-consciousness; it is inanimate.
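To make that concrete, here is a minimal, purely illustrative sketch (in Python, with invented names, not drawn from any particular AI system) of what such ‘learning’ amounts to: a program that repeatedly adjusts its internal configuration in response to a feedback signal from its environment, with no awareness of having done so.

```python
# A toy 'learning' loop: the program nudges a numeric parameter in response
# to feedback from its environment. Nothing here is aware of anything;
# it is arithmetic applied over and over.

def environment_feedback(guess: float, target: float = 7.0) -> float:
    """Return an error signal: how far the current guess is from the target."""
    return target - guess

def learn(steps: int = 50, learning_rate: float = 0.1) -> float:
    guess = 0.0  # the machine's initial 'configuration'
    for _ in range(steps):
        error = environment_feedback(guess)
        guess += learning_rate * error  # update the configuration from the signal
    return guess

if __name__ == "__main__":
    print(learn())  # drifts towards 7.0 without 'knowing' anything at all
```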

"The danger is that it will lead to a massive degradation of our humanity, reduce us to nothing but industrial outputs, transactions and binary behaviours."

Human self-awareness is impossible to deal with in mathematical terms because it is an infinite regress. There will never be an algorithm that plots self-consciousness because it could never capture that regress: the self aware of being aware of being aware, without end.
