Welcome to Eureka Street


ChatGPT and the apocalypse


 

A friend recently sat me down to demonstrate the wonders of the AI chatbot ChatGPT. ‘Write me a short story about an old man missing his dog and do it in the style of the King James Bible,’ he typed. Seconds later, there it was. ‘Create a lesson plan for a Year 10 Geography class on the cities of the world,’ he instructed. One sip of my tea and voilà! An exercise that would usually take about two hours was complete in eight seconds. What?! I was astonished. Open-mouthed.

If you haven’t yet encountered ChatGPT you soon will, perhaps unknowingly, as a recipient of artificial intelligence-generated content. This chatbot has been trained to interact with users in a conversational way and can generate nuanced, sophisticated responses drawing on billions of data points.

My friend’s unalloyed enthusiasm was obvious—I could see his (strategic) mind whirring with possibility. He’s a marketer and can immediately see how this technology will make his life way easier. Anyone dependent on the production of fresh copy for business plans, emails to clients, marketing campaigns or … almost anything actually, is salivating at the prospects of this tool. Anyone other than copywriters perhaps! 

This AI technology will unlock speed and efficiency and open up enormous possibilities of scale and exponential growth for businesses. I am assured that we haven’t even scratched the surface of its potential, and that it won’t be long before companies are stacking video with voice-generating AI software to provide, for instance, a full suite of customer support services that capture each business’s unique offerings. Sooner than you might think, we could arrive at our local telco store to be met by a truly knowledgeable, service-oriented hologram that knows what it’s talking about. Imagine!

The world of AI is here, and vistas of immense and exciting possibility lie before us. That much is certain. 

But I must admit, my initial amazement at first encountering ChatGPT quickly gave way to unease and a sense that something essential could be about to be lost. I guess it’s possible that we’ll be able to channel the wonders of AI in a direction that contributes to our collective flourishing … but do we really have the foresight and resources to guide us and avoid unexpected consequences? Moderns have frequently been guilty of naivety about the impact of technology, and have too rarely had the wisdom to consider carefully the wider implications of these instruments of magical power.

Cultural scholar Christopher Watkin, drawing on the work of French philosopher Jacques Ellul, speaks of the dangers of an unquestioning commitment to ‘technique’ — the insatiable drive for efficiency that is a pervasive presence in everything today from manufacturing to education to healthcare and even relationships. Watkin believes that to build society primarily on the notion of ‘efficiency for efficiency’s sake’ inevitably subordinates human beings to an economic function. They become a means to an end. Ironically, what is intended to liberate ends up enslaving. The pursuit of efficiency through technology and rationality can unintentionally lead to incredibly irrational outcomes that diminish our humanity. The net cost/benefit needs to be in view.

 


 

In 2020, Alex Karp, CEO of the U.S. data company Palantir, offered a surprising warning about the dangers of technology’s future directions in a letter to investors quoted in the Financial Times. ‘Our society has effectively outsourced the building of software that makes our world possible to a small group of engineers in an isolated corner of the country,’ he wrote. ‘The question is whether we also want to outsource the adjudication of some of the most consequential moral and philosophical questions of our time.’ Elaborating on this theme Karp wrote, ‘The engineering elite in Silicon Valley may know more than most about building software. But they do not know more about how society should be organised or what justice requires.’

That kind of assessment requires a sober appraisal of who we are as human beings, and the ways we can engage with tools of weighty consequence – both good and bad. 

Sam Altman is the 37-year-old CEO of OpenAI, the company that has released ChatGPT. In a recent YouTube interview with Connie Loizos for StrictlyVC, Altman appears genuine in his aim to introduce AI in a responsible manner that reduces harm. But he can barely contain his zeal for where this technology can take us. He envisages a growth in our knowledge of the universe and everything in it that in one year will achieve what would have taken thousands of years without AI. He speaks of ‘unbelievable abundance’, finding ways to resolve deadlocks and ‘improve all aspects of reality’ such that we ‘all live our best lives.’ It’s a thoroughly utopian vision and in that sense, we have been here before. 

Alarmingly, even someone as invested as Altman can see the downsides and potential catastrophes that could eventuate should AI be directed in negative directions. ‘Lights out for all of us,’ he says as a description of the worst-case scenario. ‘I think it’s impossible to overstate the importance of AI safety and alignment work,’ he says. That all sounds pretty serious to me, but fascinatingly, Altman is mostly worried about the accidental misuse of AI rather than sinister deployment.

His faith in progress and the essential goodness of humankind is unmistakable. ‘What I think is going to have to happen is society will have to agree and, like, set some laws on what an AI can never do or what one of these systems can never do.’ Even the most superficial understanding of human history would give us reason to wonder where Altman finds his optimism about society’s ability to ‘agree’ on matters as complex as this, and how he is able to imagine that the deception of the human heart would not be in play in this dance with the superpower now before us.

American author and commentator Andy Crouch is deeply concerned that we use technology in ways that help us thrive as human beings. Unquestioning acceptance of all the gadgetry that increasingly dominates our lives won’t cut it. He characterises the great promise of technology as ‘We will no longer have to do X’ (wash our clothes by hand, build a huge fire every night to stay warm) and ‘We will now be able to do Y’ (move easily from town to town, or country to country, by driving and flying). What we don’t give enough consideration to, however, is how those ‘promises’ are coupled with another set of imperatives: ‘We now won’t be able to do X’ (remember how to build a fire) and, in fact, ‘We now will be compelled to do Y’ (put a smartphone in the hands of children in order for them to function in society). There are times when this deal doesn’t serve us well.

Part of the complexity we now face lies in where and in what ways artificial intelligence is deployed into our lives.

When legendary songwriter Nick Cave was sent the lyrics to a song written by ChatGPT, supposedly in Cave’s style, his reaction was swift and unequivocal: ‘This song sucks,’ he wrote on The Red Hand Files, labelling it a ‘grotesque mockery of what it is to be human.’ Constructing emails to clients or a marketing plan for a product might be fine, but don’t mess with something as profoundly human as songwriting or poetry. ‘Songs arise out of suffering … and are predicated upon the complex, internal human struggle of creation,’ wrote Cave. ‘… as far as I know, algorithms don’t feel. Data doesn’t suffer.’

Nick Cave’s blunt reaction to the songwriting potential of AI carried with it a not unreasonable concern about where all this might be heading. ‘ChatGPT’s melancholy role is that it is destined to imitate and can never have an authentic human experience, no matter how devalued and inconsequential the human experience in time may become,’ he wrote.

That ‘melancholy role’ of technology in the human future is echoed in Kazuo Ishiguro’s futuristic novel from 2021, Klara and the Sun. Among many other things, this haunting story invites us to consider human nature in light of advances in technology: the promise that we can eventually supersede our physical limitations, the dream that we can replace all that hinders and decays.

The whole story is told from the perspective of Klara, an AF (artificial friend), a class of android employed to look after teenagers. Klara is purchased to care for Josie, one of the elites who’ve been ‘lifted’, or genetically improved and given access to the best educational opportunities.

But Josie is seriously ill, likely because of that process, and her parents are terrified she will succumb to her illness. Klara is there to emulate Josie to the extent that she can ‘replace’ her if that becomes necessary. Will it be enough?

There is a darkness in all this manipulation of what it is to be human and a desperation in characters convincing themselves that technology will be the means of our salvation. ‘The second Josie won’t be a copy,’ implores a true believer. ‘She’ll be the exact same and you’ll have every right to love her just as you love Josie now.’

The pathos lies in the fact that, as technologically marvellous as she is, it’s clear that Klara won’t ever reach the depths of human emotion, memory, and complexity necessary to be Josie in any way that matters. And in the process of reaching for this outcome everyone is diminished. We are embodied beings — both glorious and fragile. Is there a place for accepting our limitations and setting corresponding limits to where our technology can take us?

We will need help to navigate such complexity: the wisdom, both ancient and modern, of philosophers, ethicists and theologians.

Considering what is essential to our human nature would be an important place to start. The Jewish and Christian conception of each and every person ‘made in the image of God’ can’t be surpassed in ascribing irreducible value to human life. But that same designation recognises that we are also flawed, limited and capable of great harm – even unintended harm. The implication of both of these truths could serve as a vital foundation and guardrail for wherever the AI journey is about to take us. Buckle up!

 

 

 


 

Simon Smart is the Executive Director of the Centre for Public Christianity and the host of Life & Faith podcast.

Main image: Machine learning and artificial intelligence concept. (Getty Images)

Topic tags: Simon Smart, ChatGPT, Apocalypse, AI

 

 


Existing comments

Love that final paragraph, Simon, my seat belt is fastened. But I have to keep checking. Not checking about my flaws, limitations and capability for great harm but checking on my irreducible value. At the time Kindle became available the demise of the printed word in the form where we could make messy notes in the margins and hold the book was confidently predicted. However, bookshops and bibliophiles continue to flourish. Reference books and remembering snippets of loved books cannot be replaced by automation. Imagine all the people living life in books….ones that fall off the shelf and need dusting.


Pam | 02 February 2023  

Waaay back in 1980s the fledgling software Q&A (Question and Answer) was not much different to Chatbot; it was just a "stand-alone" database that analysed data and reported on the basis of a similarly typed phrased request... like "tell me who is most likely to do xyz"; it was used by Victoria Police so any idiot could use it, me too. What data can't do is put a particular bias unless there is some weighted programming to start with: the human element.
Where we introduce human values or deny the technology because it doesn't have empathy or "lived" experience it's a bit farcical to dismiss or reject a broader technical project because it doesn't have a heart. Elvis Presley's song "Kentucky Rain" didn't necessitate he or the song writer actually walk through the soggy state to write the tear-jerker...are we any more "cheated" that they didn't get wet feet in the production than if it was authored by a high-tech AI system that scientifically knows what extracts emotional responses but never got wet? We've progressed a long way from cuneiform hieroglyphics, slate, quills, press and fountain pens to strive to be able to communicate efficiently. Perhaps the greater challenge of Chatbots is if we, the "edjerkated" will allow the empowering of the semi-literate to draft treatise to a quality which they may struggle otherwise. Knowledge is power... are we going to facilitate those who don't possess knowledge "in their head" the same rights of authority as those with a PhD...and perhaps a murky understanding of what they remember correctly.


ray | 02 February 2023  

Knowledge is power, but unlimited access to information is not. Already we have more and more ways to access information and yet our ability to communicate is increasingly poorer as the ability to listen, interpret, evaluate information before reacting or responding to it is poorer. Good communication requires far more than access to more information.


Jacqueline B | 07 February 2023  

Hackable Animals?
In a foreword by Michael E. Zimmerman in, "Big Picture Perspectives on Planetary Flourishing: Metatheory for the Anthropocene, Volume 1", a book looking at how the integral metatheories of Ken Wilber and Roy Bhaskar may be enacted for human flourishing in the world, in agreement with you, states on page xxiii that, "[s]cientific and technical knowledge alone cannot determine praxis; those who think otherwise are committed to technocracy."
One of those technocratic institutions is the World Economic Forum (WEF) – "you will own nothing and be happy" – and a worrying sign coming from their annual Davos, Switzerland, meetings is the statement by WEF advisor, psychologist Yuval Noah Harari, that humans are nothing more than "hackable animals", and who also debunks anything spiritual.
Thus, it would seem that the technocrats have forgotten about God, and we humans are already considered as nothing more than lab rats to be used as they please.


Richard Bull | 03 February 2023  

Richard, while I think you're on the right track of observation I'd suggest that humans have been susceptible to influence and sooth-sayers for a lot longer than techno-philes and denigrators would care to consider. A Chatbot is a bit like Cyrano de Bergerac; aiding the otherwise incapacitated to appear eloquent... and is that not the rub? Do we allow our ingenuous selves to be charmed or "fooled" by illiterazzi without the knowledge that the crafted words and reasons while perhaps not fully corrupt have been construed to determine a desired outcome. We're subject to spin every day and unwittingly programmed by our education and technology devices to respond to the world... and sometimes the "spritual" may be more corporeal in nature than we'd care to consider. One can only hope that the next "logical" technology will be smart-safe to alert us when we're being exposed to rhetoric, influence and lies. Even I'd pay for that subscription.


ray | 05 February 2023  
