In the last fortnight, the shadow of Artificial Intelligence has covered the earth. In England, representatives of national governments and large corporations met to consider the dangers posed by its uncontrolled use. In Australia, university exams began with increasingly stringent precautions against cheating. The spectre of instant answers to questions drifted over them.
During Melbourne Cup Week, everything can be bought and sold, including the names of races. In addition to the horses and their entourage of trainers, owners, and other attendants, a host of reporters descended upon the racecourse to find an angle on and photograph the same rose garden, the same races, the same jockeys, the same celebrities, the same fashionable dress and alcohol-fuelled behaviour, and the same weather. The spectre of the future replacement of reporters by AI haunted the racecourse. Last week, too, a report made available to a Parliamentary Inquiry was found to contain erroneous accusations generated by AI. The spectre of misinformation flutters from the pages of all reports on which governments rely.
These are all large and heavy issues. My own musings have been less weighty and more personal. As a regular writer on social, cultural, and religious questions, I wonder about the future of such writing and writers in the age of AI. The spectre of a use-by date drifts close to my craft. Even in its present and presumably primitive form, AI can provide a reasonable analysis and comment on most controversial issues. It draws on other analyses, opinions, and the evidence on which they are based. It can also provide arguments to support conflicting judgments on issues – for and against a peace treaty in Gaza, for example. As it develops, too, AI will be able to present its argument in the distinctive style of an undying individual writer. The opinion will then carry the weight of a personal judgment and not that of a generic opinion. What merely human commentator could compete with a Jonathan Swift redivivus or an Alexis de Tocqueville?
My attitude to senior commentators in the mainstream media on contemporary issues does not allay my fears. Some, certainly, I read carefully. They consistently offer fresh insights. But not the majority. I can anticipate the position they will take on issues, the kind of arguments they will make, and the way in which they will represent the people they approve or disapprove of. A glance at their columns will suffice to see if they bring something new. The language they use, too, will generally reveal and sometimes constitute their argument. Any use of the word ‘woke’, for example, or generic appeal to Western Civilization provides a reliable excuse for not reading further. So does any mention of fascism and appeal to progress. Such commentary and its attendant style could well be provided more cheaply by AI and attributed to a pseudonymous staff writer. This would have the added advantage of overcoming mortality. There is no reason why AI could not continue to contribute such columns forever if, in Samuel Johnson’s phrase, it abandoned itself to them.
Such astringent judgments ought to encourage self-reflection in the person who makes them. Knowing that AI is breathing down my neck certainly leads me to reflect on the value of what I write and on characteristic quirks of style that could easily be imitated. In my case, the latter include long sentences, such as this one, which include a list of associated ideas; the repeated use of the word ‘reflection’; the juxtaposing of two opposed opinions or considerations in the same sentence; and the knowing appeal to literary or historical parallels to situate an issue in its broader cultural context. All are overdone and easily imitable. AI would also quickly pick up the repeated appeal in argument to the unique value of each human being, to the social nature of human beings, and to the consequent responsibility of persons, groups, and governments to care for one another. Nor would AI overlook an esteem for Pope Francis which extends beyond respect for the position he holds within the Catholic Church. Given all these and other even more annoying characteristics of which I am happily unaware, it would not be difficult for AI to compose columns echoing my convictions and my style on different topics, if someone were silly enough to bankroll the enterprise.
These personal reflections press the larger question of what, if anything, would be lost or gained if I and other merely human commentators of different stripes were to be replaced by AI. I suspect that the answer lies precisely in the human frailty of columnists. It is evident in the non sequiturs, the dismissive phrases, the omissions, the inconsistencies, the nervousness, and the overweening pride which the reader registers without being aware of it. They bear testimony to the unfinished and open-ended nature of all human affairs and of judgments on them. Their rough texture reminds us that all judgments on human affairs are provisional and need constantly to be revisited. Artificial intelligence is too smooth by half.
Andrew Hamilton is consulting editor of Eureka Street, and writer at Jesuit Social Services.
Main image: (Getty Images)