
Why are we being forced to buy into AI?

 

When Bitcoin first took off, I used to wonder if it had been purposefully created to sabotage the world’s efforts to combat climate change. You couldn’t design a much better way to do it. The algorithm is ingenious, really: as more computing power is devoted to ‘mining’ Bitcoin, the network automatically makes mining harder, so the cost (in computing power and, hence, energy) of creating new coins keeps climbing, even as rising prices lure in new miners. This creates a vicious cycle for would-be crypto entrepreneurs – one that demands ever-increasing computing power to sustain the currency’s growth.

That growth has been astounding. In 2022, Bitcoin mining consumed around 150 terawatt-hours of electricity – more than the entire country of Argentina uses in a year – and that was just one corner of what has become a crowded cryptocurrency market.

It seems insane. Societies around the world are grappling with the need to transition our economies to renewable energy and reduce emissions to combat climate change – and here is a whole new sector of the economy demanding ever-increasing access to energy resources, willing to go wherever it can find cheap power.

When China banned crypto mining in 2021, miners simply moved their operations to other jurisdictions – including the USA – with far less environmentally friendly power sources. Beyond the environmental impacts, the industry is also putting stress on host communities: residents in Texas have blamed the proliferation of Bitcoin mining facilities for power outages and health issues, not to mention the pressure on electricity prices.

And now there’s Artificial Intelligence (AI). A great deal has been said and written about the potential for AI to transform our industries, with many already well down the path of developing and implementing the technology. However, we’re only now beginning to grapple with how our obsession with AI is transforming our energy environment, and thus potentially hampering our efforts to reduce emissions.

AI requires enormous computing power, and hence enormous amounts of energy. Leading technology companies that previously committed to reducing emissions are now seeing their targets recede further and further away as they pivot towards establishing data centres to support AI. Google revealed recently that its emissions had increased by 48 per cent over the last five years, while Microsoft’s emissions have risen by around 30 per cent since 2020. ‘The moon is five times as far away as it was in 2020, if you just think of our own forecast for the expansion of AI and its electrical needs’, said Microsoft President Brad Smith in a recent interview with Bloomberg.

While companies leading the AI revolution and building new data centres argue that AI will be crucial (somehow) to combatting climate change, the establishment of these data centres is making the renewable transition more difficult. In the USA, coal-fired power plants that were scheduled for closure have been kept online to power proposed new data centres. Even if tech companies manage to power their data centres with renewable energy, the drive to build these centres is still pushing up energy demand across the grid and making power more expensive (and less clean) for other consumers, including families.

 

'AI’s benefits at the moment are mostly theoretical, but its impact on our efforts to combat climate change is becoming more and more clear.'

 

It’s clear that a revolution is taking place across the world to create space for AI in our society. It’s been sold to consumers as a fait accompli, but should it be? We’re now in a situation where our energy sector is being restructured and new power sources are being commissioned to support a technological innovation whose promise is still mostly theoretical.

It’s not clear yet what AI might be capable of doing well. Having played around with it myself to discover its uses in media, it’s clear that it can be used for ‘copyright laundering’ – i.e. ‘creating’ (for want of a better term) text, sound and images that mimic original works reasonably well, but without much substance below the surface. AI may make ‘creation’ more accessible to the masses, but rather than providing opportunities for more people to create works expressing themselves, it’s producing an explosion of mimicry that expresses very little at all. If you want to create something meaningful, it still needs a human mind behind it.

But perhaps AI isn’t for creative types. What about other industries?

A number of health applications have been rolled out, using AI to track people’s medical history or detect symptoms of depression and anxiety. There are tools to help prescribe medications or diagnose patients. But sceptics point out that most of its current promises are ‘hype and magical thinking’. At best, AI tools can offer suggestions to doctors. They can’t ‘understand’ a particular person or situation better than someone in the room with all the data at their fingertips. Indeed, it’s argued that if AI solutions are rushed out or relied on too heavily, they might in fact cause more problems than they solve.

Similarly, in scientific research, AI tools have a worrying tendency to produce ‘hallucinations’ that make any conclusions drawn from their data problematic. Even if those can be somehow ironed out, you can’t replace human thinking with artificial thinking (‘thinking’ isn’t even the right term for what AI does). Some warn that ‘the proliferation of AI tools in science risks producing a phase of scientific enquiry in which we produce more but understand less’. If you want science to produce understanding, you still need a human mind to do the ‘work’ of understanding.

Then there’s the far more opaque, and perhaps most problematic, use of AI in surveillance and armed conflict. There are reports that AI has been used by the Israeli military to identify potential Hamas members and collaborators in Gaza, picking out tens of thousands of potential targets. Just how it identifies particular targets hasn’t been revealed, although reports indicate that even local police officers have been flagged as collaborators. Even if the system’s reported ’90 per cent accuracy rate’ is to be believed, that still leaves thousands of innocent people mistakenly targeted – victims of a technology that’s potentially being rolled out before it’s properly understood, let alone tested.

Meanwhile, billions of dollars are being poured into building data centres across the globe to serve whatever ends we eventually make of the technology. It’s estimated that by 2034, data centres will consume 1,580 TWh annually – roughly the amount of energy currently used by all of India. That dwarfs the energy consumed by cryptocurrency. This is power that will come off the same grids used by homes and local industries, and it will thus affect plans to transition communities away from fossil fuels.

AI’s benefits at the moment are mostly theoretical, but its impact on our efforts to combat climate change is becoming more and more clear. Why are we being forced to buy into it? At the very least, we should be having a more robust conversation about whether its benefits are worth the cost.

 

 


Michael McVeigh is Head of Publishing and Digital Content at Jesuit Communications, publishers of Eureka Street.

Topic tags: Michael McVeigh, AI, Climate, Environment, Energy, Tech

 

 

