Monday, 1 September 2025

Maybe We Need to Learn How to Trust Machines That Are Smarter Than Us

By Martin Cohen


IF ANYONE BUILDS IT, EVERYONE DIES

That’s the title of a new mass market paperback by Eliezer Yudkowsky and Nate Soares. The subtitle is Why Superhuman AI Would Kill Us All. And it has received its first trade review: 

“Accessible... persuasive... A timely and terrifying education on the galloping havoc AI could unleash—unless we grasp the reins and take control.”

As Nate and Eliezer tell it, AI companies are on the cusp of developing Artificial General Intelligence that will have mastery over not just one narrow domain, such as chess or language translation or DNA sequencing, but over everything. And once that happens, we’re basically screwed.

The danger is not that you will wake up one day to find the Terminator looming over your bed. It’s that humanity will become collateral damage once AI gains the power to do whatever it wants.

Mmmm… hold on. Because just maybe the problem isn’t AI; the problem is people.

There’s a great deal of scary speculation about the effects of Artificial Intelligence. Recent stories have covered chatbots encouraging children to kill themselves, driverless cars ploughing into pedestrians, and humanoid robots suddenly going berserk and trying to hit their human masters.

All of these stories, however, essentially describe AI that has gone wrong: bugs in the software. The more interesting question is whether computers operating increasingly autonomously, like the so-called generative AI behind things like ChatGPT, might cease to be our servants and one day become our masters. And, if so, whether they would have an agenda that has nothing to do with human values but an alien one instead, one that subverts those values and replaces them with values that serve machines.

Waaay back, in Ancient Greece, ‘techne’, the root of our word technology, was often a dangerous thing, a kind of trap. Even as the Ancient Greeks were innovators in technology, they harboured concerns about its potential misuse and the dangers it could pose. The roots of today’s fears of advanced technology go deep, particularly where the creation of intelligent machines is concerned.

The Greek myths and stories that depicted intelligent, self-moving machines, like the automatons supposedly made by Hephaestus, the god of fire and metalworking, often associated them with negative consequences, particularly when such machines were controlled by powerful or malevolent individuals to inflict harm and chaos.

The idea of technology as a trap is rooted in the fact that advancements in science can and do have unintended consequences. The Ancient Greek tales rightly reflect fears that technology could lead to a loss of freedom, a reliance on external forces and a decline in human virtue.

And yet, there are also optimistic tales. The Golden Maidens, also known as Kourai Khryseai, were automatons likewise crafted by Hephaestus. These were golden female figures that appeared to be alive and could anticipate and respond to Hephaestus’ needs. They were not just tools, but were believed to have intelligence and the ability to speak.


Today, the Golden Maidens are held up as an early concept of artificial intelligence, reflecting humanity’s long-standing fascination with creating machines that can mimic life and possess agency. Nonetheless, just like today’s AI, the maidens’ purpose wasn’t only to help with little chores. Just by existing, they became a testament to their owner’s dominion over both fire and creation. Their values were those of their master.

Today’s generative AI, however, is, I think, both more powerful and more interesting, and I see no reason why, having consumed the bulk of human thinking and knowledge over the centuries, today’s golden machines should not arrive at much wiser conclusions than even their creators. It is not in any sense ‘logical’ to suppose that machines created by humanity will not share its values. Just maybe, they will prove to be more enlightened – and more moral!
