
Why the threat of AI is invisible

13 June 2023
Over the last few weeks, various tech magnates and computer scientists have told us that we are standing on the precipice of a new epoch, one in which highly developed AI is likely to eclipse our own cognitive capacities.

One recent piece in Time magazine by Eliezer Yudkowsky, a decision theorist at the Machine Intelligence Research Institute, seems intended to generate particular alarm. It's a response to an open letter, run by the Future of Life Institute, that called for ‘all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.’ Yudkowsky comes at the topic with a little more force:

‘Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.’

This feels reminiscent of a scene in Adam McKay’s film Don’t Look Up where Leonardo DiCaprio’s character melts down on a talk show, screaming that a comet will hit Earth and destroy everyone. The hosts exchange awkward glances and cut to an ad break. 

Because to someone who is not an AI expert, it feels a little hyperbolic. Tristan Harris, a technology ethicist and co-founder of the Center for Humane Technology, is more measured but expresses similar concerns, likening recent warnings about AI to a conversation with Robert Oppenheimer in 1944 about the civilization-ending potential of nuclear weapons.

'You don’t have to look far to see a burgeoning sort of fatalistic complacency. We live in a world brimming with apocalyptic prognostications – climate change, nuclear war, pandemics – it is, perhaps, too easy to become resigned, or even to treat these threats as inevitable endpoints to the human story.'  

Our collective response has been, so far, little more than a tremor of worry amidst bemused nonchalance. I've noticed the same failure of intuition in myself. There's something about this topic that, instead of filling us with the abject terror necessary to enact the required course-corrections, gives us a sort of momentary, curious glee.

It feels more like science fiction than a