Monday, 1 September 2025

Maybe we Need to Learn how to Trust Machines that are Smarter than Us

By Martin Cohen


IF ANYONE BUILDS IT EVERYONE DIES

That’s the title of a new mass market paperback by Eliezer Yudkowsky and Nate Soares. The subtitle is Why Superhuman AI Would Kill Us All. And it has received its first trade review: 

“Accessible... persuasive... A timely and terrifying education on the galloping havoc AI could unleash —unless we grasp the reins and take control.”

As Nate and Eliezer tell it, AI companies are on the cusp of developing Artificial General Intelligence that will have mastery over not just one narrow domain such as chess or language translation or DNA sequencing, but over everything. And once that happens we’re basically screwed.

The danger is not that you will wake up one day to find the Terminator looming over your bed. It’s that humanity will become collateral damage once AI gains the power to do whatever it wants.

 Mmmm… hold on. Because just maybe the problem isn’t AI, the problem is people.

There’s a great deal of scary speculation about the effects of Artificial Intelligence. Recent stories have covered chatbots encouraging children to kill themselves, driverless cars ploughing into pedestrians, and humanoid robots suddenly going berserk and trying to hit their human masters.

All of these stories, however, essentially describe AI that has gone wrong - bugs in the software. The more interesting question is whether computers operating increasingly autonomously, like the so-called generative AI behind things like ChatGPT, might cease to be our servants and instead one day become our masters. And if so, whether they would have an agenda that has nothing to do with human values - an alien one that subverts those values and replaces them with values serving machines.

Waaay back, in Ancient Greece, ‘techne’, the root of our word technology, was often a dangerous thing, a kind of trap. Even as the Ancient Greeks were innovators in technology, they harboured concerns about its potential misuse and the dangers it could pose. The roots of our fears of advanced technology today go deep, particularly when it comes to the creation of intelligent machines.

Often, the Greek myths and stories that depicted intelligent, self-moving machines, like the automatons supposedly made by Hephaestus, the god of fire and metalworking, also associated them with negative consequences, particularly when they were controlled by powerful or malevolent individuals to inflict harm and chaos.

The idea of technology as a trap is rooted in the fact that advancements in science can and do have unintended consequences. The Ancient Greek tales rightly reflect fears that technology could lead to a loss of freedom, a reliance on external forces and a decline in human virtue.

And yet, there are also optimistic tales. The Golden Maidens, also known as Kourai Khryseai, were automatons also crafted by Hephaestus. These were golden, female figures that appeared to be alive and could anticipate and respond to Hephaestus' needs. They were not just tools, but were believed to have intelligence and the ability to speak.


Today, the Golden Maidens are held up as an early concept of artificial intelligence, reflecting humanity's long-standing fascination with creating machines that can mimic life and possess agency. Nonetheless, just like today’s AI, the maidens’ purpose wasn’t just to help with little chores. Just by existing, they became a testament to their owner’s dominion over both fire and creation. Their values were those of their master. Today’s generative AI, however, I think is both more powerful and more interesting, and I see no reason why, having consumed the bulk of human thinking and knowledge over the centuries, today’s golden machines should not arrive at much wiser conclusions than even their creators. It is not in any sense ‘logical’ to suppose that machines created by humanity will not share its values. Just maybe, they will prove to be more enlightened – and more moral!

Monday, 11 August 2025

Postmodernism Collides With the ‘Theory of Everything’


By Keith Tidman

 

The term postmodernism entered the lexicon in the second half of the twentieth century, critical of modernity and of notions like objective natural reality, absolute truth, grand ideologies or belief systems, and what were referred to from the outset as metanarratives: more specifically, an “incredulity toward metanarratives,” as couched by French philosopher Jean-François Lyotard in 1979.

 

Metanarratives were seen as single, universal truths, or overarching accounts of reality that attempt to provide an all-encompassing (some say ‘totalizing’) description of the world. These metanarratives are based on society’s faith in Enlightenment principles: where science and technology are viewed as among the key wellsprings of human advancement, and where inquiry is grounded in greater certainty, in the binary illumination of true and false, and in the objectivity of knowledge. All of which falls under the banner of realism, which postmodernists tend roundly to reject.

 

Instead, postmodernists assert that knowledge, rather than being absolute, is blemished: subjectively dependent on the vagaries of individual perception and the ways by which the human mind thinks, interprets, and presumes to understand the world. In that view, what we claim to know is swayed by the biasing influences of social constructs: institutional, structural, methodological, dialectical, linguistic, cultural, normative, and sociopolitical. One cornerstone of postmodernism’s philosophical viewpoint is skepticism toward the methods of scientific realism (the scientific method) used in attempting to grasp the world as it really is.

 

Certainly, postmodernism brings worthwhile nuggets to deliberations over the contributions of science and technology, bringing to the table fair criticisms and countervailing beliefs. There are of course different ways of knowing, some more favored and effective than others. To that extent, let’s be clear: there’s no getting around the fact that no discipline is immune from fault and censure. There are only degrees of deserved censure. So it may be argued, as postmodernists indeed do, that knowledge is, importantly, only ever provisional.

 

That being said, I suggest that postmodernist thinking at times unjustly overreaches in its criticisms, going so far as to vilify science and technology. Yet, context matters. The breadth of knowledge that these two symbiotic fields of inquiry have contributed should awe: from countless advancements that materially make our daily lives less feral, to understanding the colossal cosmos we inhabit — and ever so much in-between. Through what-if pondering, structured hypotheses, tests and retests, and confirmation or refutation, progress in what we know — even if only provisional — comes in two forms: the gradualism of incremental, accreting change, with its measurable lurches forward in knowledge and understanding; and wholesale, inventive shifts from one paradigmatic model of reality to another.

 

Now let’s pivot. In particular, how might these doctrines of postmodernism magnificently collide with science? I suggest the question points us to what we might regard as science’s holy grail, namely its quest to develop what’s called a theory of everything, or ToE. An all-embracing, coherent theory resolutely pursued for decades by Albert Einstein, Stephen Hawking, and a line of other prominent theoretical physicists. The ToE, which sometimes also carries the moniker ‘unified field theory,’ is arguably science’s greatest affront to postmodernism’s disdain for the metanarratives mentioned above.

 

The basic aims of the envisioned ToE, in its narrowest form, can be summarized as this: developing a single theoretical scaffold that unifies all the forces of nature and particles into a master theory of the universe, describing all physical phenomena. Where no incompatibilities or unsolvable contradictions can exist. At the very least, unification must encompass gravity, electromagnetism, the weak nuclear force (responsible for particle decay), and the strong nuclear force (responsible for binding the fundamental particles of matter to form larger particles). Three of these forces – all but gravity – are encapsulated in what’s called the Standard Model; together, the four govern everything that happens in the universe.

 

However, currently there is a rather thorny incompatibility. It comes about while trying to unite quantum mechanics, which applies to very small scales, with Einstein’s theory of general relativity, which applies to very large scales. Although each of these two fields has been repeatedly validated as working spectacularly in its separate domain, the flawless unification of quantum mechanics and general relativity has thus far proven elusive. Which some postmodernists might find confirming.

 

More likely, however, there just needs to be a search for a still deeper reality than these fields, which would amicably and seamlessly integrate both into a single reigning reality or force, as they well might have been earlier in the life of the universe. That all-inclusive force would be described by the master theory, or ToE, that scientists envision. Among the different instances of research playing to these interests is ‘string theory,’ where particles are actually minuscule, uniquely vibrating strings with as many as ten dimensions to spacetime, rather than the points we usually think of. Another among the theories is a quantum version of gravity, by which spacetime would be seen in terms of quantum mechanical laws. 

 

Even from a science standpoint, when it comes to a theory of everything, there remains the question: how truly everything is everything? Might it lead to understanding the whys and wherefores of all natural laws? Or might a ToE metanarrative always leave out some aspect of what composes this model of the world — that is, a tantalizingly missing something whose description requires another set of equations and axioms, and then yet another set, indefinitely? Because of such uncertainties and incompleteness, humankind will irresistibly continue to hunt for a ToE, as doing so is illustrative of our natural exploratory constitution. In time, it might even help soften Lyotard’s disapproving ‘incredulity toward metanarratives.’

 

Humanity is thus riveted to probing for a greater understanding of those first principles that make the cosmos tick in such orderly and decipherable fashion. That is, a comprehensible theory of everything that answers the following fundamental queries about the universe: Why is there this particular ‘something’ that composes the universe rather than an entirely different something or rather than ‘nothing’ at all? And what is the ToE metanarrative — the single, all-unifying theoretical framework — which describes that something?

 

Monday, 30 June 2025

The Blind Philosophers and the Elephant: A Parable About Reality

Illustration by Pamela Zagarenski

By Keith Tidman


The parable of The Blind Men and the Elephant originated on the Indian subcontinent around 500 BCE, from Buddhist, Hindu, and Jain sources, and afterwards spread widely. The story is very simple: some blind men, for the first time in their lives, encounter an elephant and attempt to determine what kind of thing it is just through their sense of touch. But here’s the catch: their descriptions of the animal vary greatly, based on the particular part of the elephant each got to experience. The elephant has thus stood as a metaphor for reality.

Here, I offer a different take on the parable, raising the stakes by sampling historically rival ideas about reality. In my version, blindfolded philosophers similarly gather around an elephant, again each touching a different part: tusk, ear, head, trunk, leg, shoulders, tail, tongue, foot, and so forth. The philosophers describe the elephant based on their partial impressions. Each claims that their own description is the most accurate, despite their limited knowledge. That is, each philosopher extrapolates to what they presume to be the totality of reality, tending to discount the others’ descriptions.

What does this exercise tell us about what we empirically know regarding reality: especially subjectivity versus objectivity, and each philosopher’s role as presumed witness to reality? A matter as much about epistemology as about reality and truth. In this quest, how well can we get beyond faint apparitions, toward something more reified? Are we bound by principles of uncertainty? In the following discussions, I attempt to unspool several iconic philosophers’ reactions to touching isolated parts of the elephant, drawing on what we know about each philosopher’s historical framing of reality, truth, and theories of knowledge.

Thales: Feeling the elephant’s drenched tongue, Thales of Miletus (c. 624–546 BCE) believes the experience confirms his conviction that water is the essential nature of reality — the single element from which all other things in the cosmos derive. The term used by the Hellenic philosophers to characterize that underlying, reality-revealing substance was arche, the original stuff from which the world came to be. For Thales, arche was water, whereas for other Ancients it was the air, or fire, or earth. For him, the qualities of the elephant demonstrated water as arche, grounded in objective, hard-and-fast materialism rather than in the mythology, lore, or religion of the day.

Plato: Feeling the elephant’s coarse back, Plato might believe the experience confirms his model of a dualistic reality. The dualism stems from there being an imperfect, sensory world of observations and a concurrently existing ideal world of immutable, timeless Forms (or Ideas). Plato considered Forms to be the highest manifestation of reality, from which he developed his theory of knowledge. Plato believed that true knowledge of reality emanates from understanding the Forms rather than derived from one’s bodily senses, like touch. To that extent, what people experience (perceive) in their day-to-day lives — in this case, the rough hide on the animal’s back — is but a flawed representation of ultimate reality — the whole elephant, as reality’s metaphor.

Thomas Aquinas: Feeling the large crown of the elephant’s head, Aquinas would conclude that the skull must contain a large, complexly structured brain. He would not doubt that this impressive head, and the brain it housed, must have been the evolving product of a succession of causes directed toward attaining perfection. This succession of causes and effects is traceable all the way back to the uncaused first cause or prime mover, which he defined as God. Aquinas viewed this striving toward excellence — the most fundamental aspects of being — as the natural order of the cosmos. For Aquinas, reality is split between essence (what makes the thing it is) and existence (the fact of being present in reality).

René Descartes: Perhaps feeling the elephant’s thick, pillar-like leg would convince Descartes that this thing was real, prompting him to ponder the fundamental nature of the object (the trunk of a tree or a column?) he was handling. In so doing, Descartes would be reminded that the acts of pondering and wondering are forms of human thought, which in turn would recall his axiom: I think, therefore I am. But also underpinning Descartes’ philosophy is something called mind-body dualism. That is, the idea that the mind (mental substance) is immaterial — from which “formal reality” emanates as an idea — while the body (in particular, the brain) is physical substance — from which “objective reality” emanates independent of the mind.

David Hume: In his case, feeling the hard-to-overlook enormous chest of the elephant, Hume might be reinforced in his staunch empiricism, observation, and skepticism. He might at the same time concede the entrenched limits of our knowledge, as well as uncertainty as to whether even rigorous inductive reasoning and investigation would be enough to confirm the true nature of external reality, in this instance the whole elephant. In this vein, Hume might split mental perceptions between ideas (thoughts) and impressions (sensations and feelings), making the argument that ideas are faint copies of impressions. 

Immanuel Kant: Feeling the undulating tail of the elephant, and trying to figure out what it might be — a snake or stretch of rope, perhaps — Kant would surely remind himself to distinguish between phenomena (the world of appearances, derived from the innate structure of our minds) and noumena (the world of things as they truly are in themselves, independent of our minds). We can only know the world of phenomena, he would say, and not the external, objective nature of things, as the latter is beyond our cognitive capacity and thus unknowable. Kant would be puzzled, unable to fathom with clarity and certainty the essence of this rope-like thing that he intently grasped.    

Georg Hegel: Feeling the elephant’s expansive shoulders, Hegel might be inspired to reflect on his metaphysics, grounded in idealism, which is that the utmost expression of reality actually stems from the mind, or what he labeled the “absolute spirit.” As the mind evolves, and self-awareness and knowledge of truth are gained, it does so channeled by a procedure he called dialectics. This starts with a hypothesis (the thesis), then leads to a counterargument (the antithesis), and concludes by reconciling the best of the two prior propositions (the synthesis). Hegel concludes that the synthesis of all the philosophers’ collective experiences with the elephant’s body parts would best reflect the physical world — the elephant in its meaningful entirety — that the philosophers were encountering.

Friedrich Nietzsche: Feeling the slowly waving ear of the elephant, Nietzsche might think that he was in contact with a fan. This would fit with his belief that reality is a matter of individual viewpoint, shaped by people’s instincts and interpretation of what they experience through the senses. This view was steeped in a denial of ultimate reality — of an objective, unchanging reality — resting instead on empiricism and what Nietzsche referred to as the “will to power”: that is, the alluring urge to stamp reality with our own values, convictions, passions, and predispositions.

Ludwig Wittgenstein: In his case, feeling the hard tusk of the elephant, Wittgenstein might decide the object was a spear. That being said, he would choose with care the language to describe the object and its presumed functions, convinced that reality (our worldview) is shaped by the words, phrases, and logical structure of our language — bearing on what we think. As Wittgenstein pointed out, there is a direct correspondence between the limits of language, consisting of propositions that provide pictures of reality such as the whole elephant, and the limits of our understanding (perception) of facts and context-based reality. In other words, language’s meaning is derived from the societal and cultural conditions in which it’s used, differing among languages, which Wittgenstein referred to as “language games.”

Daniel Dennett: Finally, Dennett, the last of our philosophers, feels the trunk of the elephant, giving him the impression it is a tube through which materials pass. Manipulating the trunk to discern its function might fit nicely, for him, with his physicalist model of reality. Dennett considers that experience requires both consciousness and mind to happen, translatable through the neurophysiological operations of the brain. The brain, as the material seedbed of consciousness, relates to the reality of subjective experience. That is, the mind is not dualistically separate, mythically hovering apart from the brain as some have insisted. He believed science is a key path to better understand the processes involved. However, he acknowledges that experiences, like his contact with the trunk, do not always precisely mirror external reality, given the biased preconceptions about the reality and truth we harbor and which notionally influence us all.

I propose that the parable can be used in this manner to illustrate the diverse ways, over the centuries, that a sampling of key Western philosophers described the world. Some painted reality as subjective and empirically knowable, others as coming in both subjective and objective form, though they would be unsure how to parse the two. In every case here, there’s an instinctive yearning for symmetry between ultimate reality and the bounded information captured by the senses and curated and interpreted by the brain. Yet, beyond our sample, other philosophers argue that ultimate reality is opaque, obscure, and even changeable, and so to those extents it’s a reality that eludes certainty.


Monday, 9 June 2025

The Eyes of Gaza: A Diary of Resilience

 

Book cover

By Martin Cohen
 

Eyes are, indeed, on Gaza, although many in Israel and the US still seem to be both oblivious and unashamed. This week, the European Union – at long last! – agreed to at least review its policy of financially supporting Israel, and hence facilitating its policies in Gaza. Books like The Eyes of Gaza: A Diary of Resilience are drops, tear drops indeed, in an ever-expanding ocean of Palestinian sorrow, but surely contribute to the understanding of those prepared to listen.

The author, Plestia Alaqad, is a Palestinian journalist and author who has been forced to bear witness to the destruction in Gaza. “At just twenty-one years old, she captivated audiences with her raw and poignant coverage of her surroundings”, the publisher says, adding that she has offered “an unfiltered glimpse into the harrowing realities of life under siege”.

Rather than say anything myself, I thought to just choose a few paragraphs from the book – and let them speak for themselves.

7 October 2023

I’m familiar with what the steps taken during an emergency situation look like. One: you start stocking your house with bread, flour and lots of groceries. Two: you open the windows a little bit so they won’t break from the pressure released by bombs and airstrikes. 

You know what always inspires me? The spirit of Palestinians. How after every loss, you only find us stronger and trying even harder to live and love life. In 2021, I thought I was experiencing the worst days of my life. Buildings and houses were being bombed by the IOF; even the streets were being bombed, making it harder for paramedics to reach injured people. Yet, once the Aggression was over, there was a community initiative to clean the streets of Gaza (I immediately took up a broom and joined in). Only a few weeks later, Gaza’s streets were full with Palestinians striving to live despite the harsh reality that surrounded us. 


10 October 2023

I see that Dana’s house is wide open, so I go inside and I call her to take her on a virtual tour of her home. Her mom’s and brother’s rooms are completely burned out, almost unrecognizable, while her room is full of debris and broken glass. But it’s still her home, just like my building is still my building. In Gaza, even if your house is destroyed and the ceilings have fallen to the floor, it is still your house, and you’re going to claim it as your house. Even if it’s not safe, even if it’s ground down to rubble, you will put a tent down on the flattened remains and you will call it your home. The connection between a Palestinian and their house is a sacred one…


11 October 2023

Mohamed and I go to report on Al-Krama district, which was bombed yesterday. My heart breaks when I see family photographs randomly scattered under the rubble, and I feel terror thinking of the day when Israel will kill me, and random people will walk in the street, see my diaries discarded under debris, and wonder who Plestia Alaqad was and why she died when she did. 


12 October 2023

I wake up to a notice from Israel, warning any Palestinians in North Gaza to flee to the south within twenty-four hours, which is nearly impossible. How are approximately 1.1 million people supposed to evacuate when there are barely any cars left working? We don’t have any gas. And where are we supposed to evacuate to? To a tent? Not everyone has family and friends in the south.
The world can’t pretend that there are two sides here any more. There is no humanity, no equity, no semblance of justice. It’s a calculated, deliberate and ruthless ethnic cleansing, and nobody seems to care.


13 October 2023

I am shocked by what I see in the streets on the way to Rasha’s house. People are just walking, walking, walking, carrying their lives in their bags with them. I see the 1948 Nakba in front of my eyes, just as my grandfather once described it to me. I remember him telling me how he was forcibly displaced from his home, and how Israel’s goal was to ethnically cleanse Palestine of Palestinians. And here I was, seeing it for myself.


9 November 2023

In the morning, I watch as over 50,000 people are forcibly displaced from their homes in North Gaza to camps in the south. It is absolutely harrowing. I stand by as thousands of people file through the safe corridor, their whole lives packed into suitcases in the space of five minutes. It is like a scene out of a dystopian novel – my mind goes straight to the prose in Nineteen Eighty-Four – come to life.
And yet there is a kid, Waleed, standing there with sweets, handing them out to people as they pass him by. He is wearing a cute ‘Happy Birthday’ hat.


16 November 2023

The hospitals in Gaza are full of amputee kids. They’re the saddest stories by far. A week ago, I met a baby girl, Fatma – my grandmother’s name. Fatma had lost both of her legs. I spoke to her mother, and she just kept repeating how she wished that it was her legs that had been amputated instead of her daughter’s. She told me that Fatma had come as a blessing after fourteen years of infertility. And I just stood there beside her, blankly reporting on the scene, privately wishing that I could somehow alleviate her and Fatma’s pain. 


20 November 2023

What’s the point of wearing a safety helmet and press vest? I don’t want to wear them any more; they’re like giant targets instead of safety nets. Israel is targeting journalists. And doctors. And lawyers. And engineers. Basically, anybody who might practically be able to help rebuild Gaza in the future.

How long until they get to me?

I knew Gaza before 7 October 2023. I’ve known Gaza throughout the Genocide. But I have yet to know the Gaza of tomorrow. 



The Eyes of Gaza: A Diary of Resilience, by Plestia Alaqad, was published by Macmillan in 2025

Monday, 19 May 2025

Hog-Tied Truth

Truth coming out of the Well (1898) by E. Debat-Ponsan

by Andrew Porter


Human affairs are tricky, and perhaps for this reason it is especially important to respect how vital ‘truth’ is in one’s relationship to self and others, in institutions, the presiding thrusts of culture, and any form of leadership. The cord is too commonly cut between what is real and acceptance of it. A society that abandons the inalienable value of truth-telling wrecks a whole host of ripe possibilities. The desire for confirmation bias cannot be the foundation of a democracy.

The current United States’ leadership and its sycophants willfully disregard truth and make falsity king. A propaganda mouthpiece called OAN (One America News), which purports to be a “news network”, has been selected to be the Voice of America. A political pundit calls the organization “just another font of lies”. The United States is fundamentally divided between those who back veracity and those who are willing to accept lies and the injustice that attends them.

Untruth is like radon to a culture, a slow-acting poison. You can’t run a society or a government on deception and misrepresentation. Unsupportable views on justice, adherence to the Constitution, and principles of right simply erode the integrity of what cannot afford to be eroded. Lives depend on whether ‘truth’ is honored – or annihilated.

The hopeful outlook is that assaults on ‘the rule of law’ and the beauty of truth will be a crucible, forcing clarity and inspiring, in democratic countries, a new determination to back ‘truth’. Once loved and defended, ‘truth’ shines the brighter. Truth is the string in a string telephone; what can we hear if it is not there?

The problem of extensive falsity dogs the world currently. It will never not be surprising that there are a good number who do not particularly care about the ‘truth’ if it gets in the way of their suppositions.

A few years ago, Masha Gessen, the Russian-American journalist and activist, commented on Hannah Arendt’s ideas in an essay entitled ‘Is Politics Possible in the Absence of Truth?’, concluding: 

“When lies overpower truth, politics dies. When politics dies, our world collapses, and we humans die too—because it is only in the world, among other humans, that we exist”.
Which is why a commitment to the truth ought to be seriously reenergised. The unthinkably awful is not just one viable option among others. 

‘Truth’ has, for a good while, been undermined by moral (as opposed to cultural) relativism. This dismisses or denigrates a single or universal set of moral principles. If one person's truth or ethical take is as good as another’s, something essential is and will be in decline. This may well contribute to a loss of moral vocabulary and a shared set of facts, causing people to be unable to distinguish patent falsehood from accuracy. There may be other factors as well, but moral relativism certainly encourages an animus against ‘central or accepted authority’. This paves the way for the siloing of media choices and people's susceptibility to a demagogue or authoritarian.

The veteran American journalist Edward Miller points out that: 

“the disciplines of science and the rule of law have the same purpose: to find the truth through the careful, unbiased weighing of evidence.”

Miller wonders whether we will stand firm in the fight for this. Because lies are no basis for governance or for conducting any aspect of life. Lies—if they can be understood as such—cause harm to a whole range of things. They damage personal relationships; they undermine trust in institutions; they make government the opposite of a vehicle for advancing the common good.

The question today is, even if truth wins out in the long run, will we be able to weather anti-truth in the short run?

Monday, 28 April 2025

Trump-Harris 2024: The figures that just don’t add up

Three reasons to think that the 2024 US election was stolen
 
 

By Martin Cohen

OKAY, EVERYONE KNOWS that there were big surprises in the 2024 election. The election that for months was “too close to call” overnight became something of a landslide for Trump.

Pollsters who had called states correctly for thirty years suddenly had everything wrong.

There were no checks needed for this vote - instead, the Democrats folded the same day.

Only much later - too late - did sceptical voices start to be heard. Because there were - are - some very odd features to this election day.

Three examples. 

First one: in a typical election, if there is a 'swing' to a party, it applies widely. Yet the voting patterns showed a 7% jump for Trump in the swing states against a mere 0.6% shift elsewhere. That's not just odd! It's unbelievable. (Add to which, in five of the seven swing states where election results show Donald Trump as the victor, the Senate races and sometimes the majority of down-ballot races were swept by Democrats!)


Similarly, the election showed Trump flipping 88 counties from Democrat to Republican and zero counties going the other way. Yet Trump won less than half of the popular vote! In 1984, when Reagan won nearly 60% of the popular vote, the Democrats still flipped 30 counties!

There are 3,141 counties in the United States. 3,053 voted for the same party in the last two elections. Trump flipped eighty-eight of them; Harris flipped zero.
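The arithmetic here is easy to check. A quick sketch, using only the figures quoted in this article (which are the article's claims, not independently verified data), shows they are at least internally consistent:

```python
# Sanity-check of the county figures cited in the text.
# All numbers are the article's claims, not verified election data.
total_counties = 3141
same_party_both_elections = 3053   # counties voting the same party in 2020 and 2024

flipped_counties = total_counties - same_party_both_elections

# The text reports Trump's flips as 54 newly flipped counties
# plus 34 counties that flipped back after voting Biden in 2020.
trump_flips = 54 + 34
harris_flips = 0

print(flipped_counties)                  # 88
print(trump_flips == flipped_counties)   # True: the article's numbers agree
print(trump_flips - harris_flips)        # 88 - the asymmetry of the 88-0 split
```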

Trump’s achievement is not only remarkable but literally unprecedented. Newsweek noted that nearly a third of flipped counties were also longstanding Democratic strongholds, spanning decades of solid blue elections.

Starr County, TX - blue for 132 years
Duval County, TX - 112 years
Webb County, TX - 108 years
Carlton County, MN - 92 years
Maverick County, TX - 92 years
Iberville Parish, LA - 48 years
St. James Parish, LA - 48 years
Marshall County, MS - 48 years
Anson County, NC - 48 years
Jasper County, SC - 48 years
Hidalgo County, TX - 48 years
Willacy County, TX - 48 years
Surry County, VA - 48 years
Scott County, IA - 36 years
Imperial County, CA - 32 years
Jefferson County, GA - 32 years
Miami-Dade County, FL - 32 years
Tensas Parish, LA - 32 years
Pasquotank County, NC - 32 years
Atlantic County, NJ - 32 years
Cumberland County, NJ - 32 years
Socorro County, NM - 32 years
Nassau County, NY - 32 years
Bucks County, PA - 32 years
Passaic County, NJ - 28 years
Clinton County, NY - 28 years
Trump “apparently” flipped 54 counties that had previously voted for both Clinton and Biden, and flipped back 34 counties that had voted for him in 2016 but switched to Biden in 2020.
And Harris failed to ‘flip’ any counties. 

The last time a candidate did this (flipped no counties) was nearly 100 years ago, in the midst of the Great Depression, when Herbert Hoover failed to flip a single county from blue to red in 1932. In that election, Franklin D. Roosevelt ended up with 472 electoral votes to Hoover’s 59, eventually carrying no fewer than 42 states to Hoover’s paltry six.

Even in the infamous 1984 landslide, in which Ronald Reagan won 49 states, a few red counties still flipped to Walter Mondale.

But back to the huge difference in voting patterns between the swing states and the national vote. Everyone in US politics knew that just seven states would decide the election. And then we come to Musk's $1m-a-day giveaway to swing-state voters, which the hapless Democrats muttered was 'deeply concerning'. As part of this, Musk essentially hoovered up voting information, saying he wanted to get “over a million, maybe two million, voters in the battleground states to sign the petition”.

Yet in early November, a lawyer for Elon Musk admitted in a Philadelphia courtroom that the winners of Musk’s $1 million daily prize giveaway in election swing states were not chosen at random, contradicting what Musk had said when he announced the contest in October.

Christopher Peterson, a University of Utah law professor who specialises in consumer protection, said in an email to NBC News of the disclosure: “This is absolutely, unambiguously illegal.” Adding:
“You cannot lawfully lie to the public about conducting a random sweepstakes, lottery, or contest and then rig the results to hand-select the winners. It really is not complicated. This is just fraud; a simple, ugly fraud on the public.”

Clearly Musk was prepared to lie and cheat – and in public view – to help Trump win. The idea that he might also use this bizarrely conducted trawl of voter names and addresses (just in those extraordinarily pro-Trump ‘Swing States’) to tamper with the vote tabulators does not seem completely out of character. Quite the reverse!

To sum up: in modern, competitive elections, this 88–0 split is a unicorn. It has never happened before; nothing remotely like it has ever been seen.

And yet… in her concession speech Harris urged all Americans to accept the results of the election, saying:

“A fundamental principle of American democracy is that when we lose an election, we accept the results. And anyone who seeks the public trust must honour it.”

Fine words, from a politician for whom politics is a well-paid hobby. From a politician whose ethics allowed for overt support of a likely genocide. But history may also show them to be foolish words; indeed, Harris’s most politically tin-eared comment yet.

Tuesday, 15 April 2025

Was Alvy Right? Does the Universe’s Fate Affect Purpose?

 

By Keith Tidman

In the 1977 movie “Annie Hall,” Woody Allen played the fictional protagonist Alvy Singer, who iconically portrayed a nebbish character: timid, anxious, insecure. All in all, vintage Woody Allen. But equally, these less-than-stellar traits were apparent in Alvy as a young boy. Which is why, when Alvy and his mother went to the doctor’s, she reported that her son was depressed and refusing to do his homework. She thought that Alvy’s unease stemmed from “something he read.”

 

In response to the doctor’s inquiries, Alvy gingerly elaborated on the whys and wherefores of his disquiet: “The universe is expanding…. Well, the universe is everything, and if it’s expanding, someday it will break apart, and that would be the end of everything.” At which point, Alvy’s mother interjects, “Why is that your business? What has the universe got to do with it? You’re here in Brooklyn! Brooklyn is not expanding!” To which Alvy, perhaps channeling Albert Camus’s absurdism, concludes dejectedly, “What is the point?”

 

The way in which this dialog unfolds has been dubbed “Alvy’s error”. That is to say, Alvy — along with the philosophers and scientists who similarly argue over the meaninglessness of life in a universe seemingly on track to die, taking with it our species and civilization — has been accused of “assessing purpose at the wrong level of analysis”.

 

As the reasoning behind this ‘error’ goes, instead of focusing on a timescale involving billions, or even trillions, of years, we should keep the temporal context within the frame of our own lifespans, spanning days to years to decades. That being said, one might reasonably ask why timescales, cosmic or otherwise, should matter at all in calculating the purpose of human life; the two are untethered.

 

On balance, I suggest Alvy actually got it right, and his mom got it wrong. It’s a conclusion, however, that requires context — the kind provided by the astrophysicists who reported on a recent study’s stunning new insights into the universe’s life cycle. At the center of the issue is what’s called “dark energy,” a mysterious substance that astrophysicists believe exists based on its cosmological effects. It’s a repulsive force that pushes apart the lumpy bits of the universe — the galaxies, stars, and planets — incidentally setting off Alvy’s bout of handwringing by causing the universe to expand ever faster.

 

To be clear, dark energy is no trifle. It is estimated to compose seventy percent of the universe. (In addition, equally unseen dark matter composes another twenty-five percent of the universe. By comparison, what we experience around us every day as observable matter — when we agonizingly stub our toe on the table or gaze excitedly upon vast cosmic swaths, star nurseries, and black holes — composes just a tiny five-percent sliver of cosmic reality.)
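To put those proportions in one place (the text rounds them; commonly cited estimates are closer to ~68% dark energy, ~27% dark matter, and ~5% ordinary matter):

```python
# The cosmic composition figures as the essay rounds them.
# These are the article's round numbers, not precise measurements.
dark_energy = 70       # percent of the universe
dark_matter = 25       # percent of the universe
ordinary_matter = 5    # percent: everything we can actually observe

assert dark_energy + dark_matter + ordinary_matter == 100

unseen = dark_energy + dark_matter
print(f"{unseen}% of the universe is unseen")  # 95% of the universe is unseen
```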

 

That our species is able to persistently ponder alternative models of cosmology, adjusting as new evidence comes in, is remarkable. That our species can apply methods to rigorously confirm, revise, or refute alternative models is similarly remarkable. The paradox is that three of the four models now in play by astrophysicists will lead to humankind’s extinction, along with that of all other sophisticated intelligent species and their civilizations ever to inhabit the universe. How can this be so?

 

For starters, the universe’s expansion has consequences. Recent observations and research have added a new twist to what we understand regarding issues of cosmology — from how the universe’s initial spark happened 13.8 billion years ago to, especially, how things might end sometime in the future. Alvy was gripped by angst over one such consequence: he contemplated that the accelerating expansion might continue until the universe experiences a so-called Big Rip. Which is when everything, from galaxy clusters to atomic nuclei, fatally “breaks apart,” to borrow Alvy’s words, leading to a grand-scale extinction.

 

But, according to the most recent studies of the standard cosmological model, and of the increasingly understood role of dark energy, there’s a possible cosmic end state other than a Big Rip. In this alternative outcome of accelerating expansion, the distance between stars and galaxies greatly increases, such that the universe eventually goes cold and dark. This is sometimes called the thermodynamic death of the universe, moved along by the destructive role of entropy, which increases the universe’s (net) state of disorder. No less fearful, surely, from the standpoint of an already-timorous Alvy.

 

The third possibility that dark energy creates is that its pushing (repulsive) effect on the cosmic lumps starts to weaken, in turn causing the universe’s expansion to slow and reverse, eventually leading to contraction and a so-called Big Crunch. Whether the crunch segues to another Big Bang remains a hypothesis; the recent cosmological and dark-energy research doesn’t yet speak to this point about a cosmic bounce. Either way, extinction of our species and of all other intelligent life forms and civilizations remains inevitable as our full cosmic history plays out.

 

The fourth and last option is less existentially nihilistic than the preceding three — and is one that might be expected to have had a calming effect on Alvy, had he only known. In this less likely cosmological model, cosmic expansion might slow and then stabilize, rather than rip apart, freeze, or reverse. In averting the fate of a Big Rip, a heat death, or a Big Crunch, there would be no extinction event. Rather, circumstances would lead to a universe existing stably into infinity.

 

No matter how one dices reality, the existentialism and nihilism espoused by, for example, Schopenhauer, Sartre, and Nagel, as well as ideas about will-to-power advanced by Nietzsche, hang over the inconvenient realities of a universe fated to reach an all-encompassing expiry date, depending on the longer-term influences of dark energy.

 

From a theist’s standpoint, striving to live Aquinas’s “beatific vision,” one might wonder why a god would create a highly intelligent, conscious species like ours — along with innumerable cosmic neighbors (extraterrestrials) of unimaginably greater intelligence and sophistication because of earlier starts — when every species is assured to go poof. There will be no exceptions; the scale of annihilation will be cosmic. 


So, what’s the meaning and intent, if any, of such teasing capriciousness? “What is the point?”, as Alvy muttered with deep resignation. And how realistic can a transcendental force be, purportedly serving as a prime first cause of us and of our cosmic co-inhabitants, subject to such conditions? Besides, contrary to some assumptions, even the existence of a god does not vouchsafe purpose for our species; nor does it vouchsafe purpose for the universe itself.

 

On the other hand, from a secular, naturalistic viewpoint, life might be imagined as meaningful in the sense of “purpose in life.” That is, where we make decisions and perform deeds as moral, empathic individuals and community members — not on the scale of an entire species. By definition, these secular events occur in the absence of a divine plan, such that we emerge from the physical laws of nature and go on to create personal value, purpose, and social norms. Where meaning is defined on the scale of a single person, the prospect of “the end of everything” might be seen as less vexing.

 

In the sense, however, that’s conveyed by the slightly altered phrase “purpose of life” — where the one-word change shifts the focus from the individual to the species — cosmic extinction looms more consequentially in terms of the lack of purpose and meaning. Given the prospect of such cosmic annihilation, Alvy might be excused his existential musings.

 

Monday, 10 March 2025

The Omnipotence Paradox

Averroes was the first philosopher to address the omnipotence paradox in the 12th century.

By Keith Tidman


People of faith, in defining their god, credit god with some extraordinary characteristics: omnipotence, omniscience, omnipresence, omnibenevolence, and omnisapience (being ‘all wise’). However, the first of these properties, omnipotence, has for centuries bumped up against a particularly curious paradox, with consequences for theists seeking to reason logically about their god.

 

The paradox has been posed in multiple ways. Here’s the version, inspired by the 12th-century polymath Averroes (Ibn Rushd), that people are perhaps most familiar with: 

 

Could a god create a boulder so heavy that even he could not lift it? 

 

Because if god cannot create such a boulder, then he’s not all powerful; and if he cannot lift the boulder he created, he’s likewise not all powerful. It’s a lose-lose scenario. Here’s another: Could an omnipotent god build a safe so impenetrable that even he cannot break into it? There are innumerable similar cases, another being this one, which has implications for whether or not we have free will: Can an omnipotent god create a person he could not control? And, as the philosopher J.L. Mackie additionally asked, can a god ‘make rules which bind himself’?

 

The crux of such paradoxes is that a god, if all powerful, should be able to do things simultaneously possible and impossible. So, for instance, contrary to Euclidean axioms, an all-powerful being should be capable of creating a situation where things equal to the same thing can be unequal to one another. Another thought experiment involves an all-powerful god who’s the universe’s best player at the complicated Oriental game of Go (a bit like draughts/checkers or chess, but played with many more white and black counters), while also creating an opponent able to beat him. One other example includes arranging for an irresistible force to successfully overpower an immovable object. 

 

To the point of such paradoxes, it’s rational and fair to define the word ‘omnipotence’ as a god’s possessing limitless abilities. That is, his being a maximally powerful god, unconstrained by seeming illogic or by our arbitrarily redefining the word for convenience: what we might call strong omnipotence. There should be no exceptions made to the meaning of omnipotence that compromise the word. We might call such a redefinition weak omnipotence.

 

Supposed degrees of power, rather than an all-powerful being, further complicate the picture. One reason is that the phrase ‘degrees of power’ leads to the claim that omnipotence is reducible to mere semantics — that is, the meaning we assign to words, subject to interpretation and change. After all, we know that language is highly bendable. Depending on the effects of context upon natural language, such meanings can prove vague, subjective, and contentious.

 

By extension, unmodified power — where the word omnipotence has not been self-servingly tinkered with — can accommodate what we might regard as two mutually exclusive situations. That is to say, strong omnipotence likely eclipses the (known) laws of logic, where we regard those recognized laws as still both incomplete and imperfect.

 

So, in terms of the usefulness of the literal definition of the word omnipotence, all outcomes — including potentially contradictory ones — are possible, despite gaps in our understanding. But we cannot, based on such gaps alone, perfunctorily dismiss the paradox. Over time, these holes in our comprehension will be filled, and the paradoxes duly resolved. 


This, despite literary scholar C.S. Lewis’s attempt to narrow the definition of omnipotence, saying the following: ‘Omnipotence means power to do all that is intrinsically possible, not to do the intrinsically impossible.’ The term again being subjected to dilution — to something much less than literal omnipotence, despite Lewis trying to make up for it by contending counterintuitively that ‘this is no limit to [god’s] power.’

 

Meantime, unconditional (strong) omnipotence implies a god ought to be morally impeccable. Yet, the study of theodicy — why and how there’s natural and behavioral evil in the world, despite a supposedly all-powerful and all-kind divinity — challenges this notion of moral perfection. The incongruity stems from the expectation that an all-powerful god has the ability to proscribe all evil, if so willing.

 

So, there’s either a divine being gifted with all-capable power or there’s not; it’s binary. If there’s not, we should discard the term omnipotence on grounds it’s inherently meaningless. Among other things, there cannot be random situations where absolute omnipotence applies and other situations where it doesn’t. We shouldn’t, then, opportunistically conspire to match up definitions and applications of all-inclusive omnipotence to accommodate our comfort levels.

 

Even theists who subscribe to a transcendental being’s all-embracing power face the same conundrums. Some theists allow, as an example, that according to the so-called ‘ontological argument’ for a god’s existence, even an all-powerful being cannot create something greater than himself, as the argument provided by St. Anselm in the 11th century defines god as ‘that than which nothing greater can be conceived’ (meaning imagined or thought). That is, a ‘necessary being’.

 

This line strikes me as unsupportable, however. In part that’s because the argument depends exclusively on semantics, absent the empirical validation or refutation that’s provided, for example, by the argument from design, offered by the complexities and intricacies of the material world. It’s also unsupportable in part because the ontological argument capriciously limits god’s power, even though something greater is indeed imaginable and thinkable.

 

The omnipotence paradox manacles the term with redefinitions that bring us to what we have called weak omnipotence (degrees of power short of absolute), and suggests that power unencumbered by checks cannot and does not exist. Because of the paradox, the unqualified version of the term ‘omnipotence’ may have interesting applications in mythology and lore — but flounders in reality.


Tuesday, 28 January 2025

Can Free Will Exist in an Otherwise Deterministic World?


By Keith Tidman

It’s probably fair to say that most people believe free will and determinism cannot exist side by side. The notion is counterintuitive, even a little odd. Why? Well, the argument is that these two accounts of how things happen in the world necessarily cancel one another out. The thinking goes that, out of common sense, you’ve got to pick one or the other.

 

But there’s a different way to look at it that says “not so fast,” and instead argues that free will and determinism are mutually compatible. According to this approach, free will and determinism coexist, interlaced with one another — a school of thought referred to as compatibilism, or soft determinism.

 

Yet, according to the first group — those who assert incompatibility — hard determinism necessarily precludes free will of any kind or degree. Free will, on this view, is an illusion: in reality, if any action transpires, it could not have failed to happen, nor could it have happened any differently than it did.

 

Compatibilist arguments, by contrast, attempt to have one’s cake — unbridled free choice — and eat it too, by keeping hard determinism as well. For me, however, they are unconvincing. Thomas Hobbes’ comment that free will is “the liberty of the man [to do] what he has the will, desire, or inclination to do” seems to shed little to no light. Nor does the approach embraced by Immanuel Kant, contending that we are free when we exercise reason.

 

Unenlightening, too, is John Stuart Mill’s proposal that a person is free when “his habits or his temptations are not his masters, but he theirs.” The line may be catchy, but offers little to support free will. The same could be said about A.J. Ayer, for circularly proclaiming that “to say I could have acted otherwise is to say I should have acted otherwise if I had so chosen.” These are just some of many instances of the so-called weakening of free will, to opportunistically fit the case that free will and determinism can unify.

 

So what is at the nub of such thinking? Well, start by remembering that the concept of personal agency says that when people act of their own free will, they could readily have acted otherwise. So, for example, someone who takes his dog for a walk or eats a fig or invests in tech stocks could instead have lounged with the dog on the sofa, baked fresh bread, or invested in cryptocurrency.

 

Yet, are we really so free in our daily choices? What if, instead, all our decisions and all our actions are baked into our lives by two consequential factors: the sequence and paths of all past happenings, taking in the whole universe, back to its beginning, where one thing follows another; plus the irresistible laws of physics and other natural laws that animate and describe the universe? Here, decisions and deeds are determined by a river of ceaselessly branching causes and effects.

 

With that river in mind, let’s return to Hobbes’ support of compatibilism. The English philosopher ventured, metaphorically, that: 

“Liberty and necessity are consistent: as in the water [a river] that hath not only liberty, but a necessity of descending by the channel.” 

However, the picture Hobbes paints seems woefully incomplete. After all, the river’s flow is determined not only by channel banks, which Hobbes pointed out, but also by tree roots, rocks, tributaries entering the river, erosion over time, floods and droughts, dams, gradient of the slope, climate, soil type, industrial activity — and more. In short, the river’s flow is determined by many influences.

 

The same complex dynamic applies to the flow of human decisions and deeds. The flow of behaviors becomes deterministically set in myriad ways — chiseled in time (the when), place (the where), and manner (the how) — whereby whatever happens at this moment in time or happens later become unalterable. The paradox is that the past, present, and future are equally explainable in deterministic terms. That is, even if we were to attempt changing events to ostensibly exercise free choice, such behavioral change would itself happen deterministically.

 

Let’s look at an example. Given that natural law impinges upon probability — as, for instance, with the rolling of dice — the outcome of each toss is predetermined. It depends on the uncountable variables and constants, subtle and blatant, that describe the initial conditions and the paths along which the cast dice travel. These cause-and-effect conditions deterministically impinge on the toss’s result.

 

To be specific, the interplaying conditions include the force with which the dice are thrown, the material the dice are made from, the effects of gravity and air resistance, the weight distribution, the release angle, the friction of the table surface, the centrifugal force, sweat on the palm, and other factors that perturb the roll. In short, many predetermining elements ungovernably affect the toss, even though we remain largely oblivious to them.
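The determinist's claim can be made concrete with a toy sketch: a 'dice toss' whose face is a pure function of its initial conditions. The function and its constants below are invented purely for illustration — they stand in for the physics rather than model it — but they capture the two points at issue: identical conditions always yield the identical outcome, and a minute change in the throw can change the face that comes up.

```python
import math

def toss(force: float, angle: float) -> int:
    """Toy deterministic 'dice toss': the face shown (1-6) is a fixed
    function of the initial conditions. Illustrative only, not physics."""
    # A chaotic-looking but fully deterministic mixing of the inputs;
    # the constants are arbitrary and chosen just to scramble the values.
    x = math.sin(force * 12.9898 + angle * 78.233) * 43758.5453
    return int((x - math.floor(x)) * 6) + 1  # fractional part mapped to 1..6

# Identical initial conditions: identical outcome, every single time.
assert toss(2.0, 0.7) == toss(2.0, 0.7)

# A minute perturbation of the throw may land on a different face,
# yet that different result is itself fully determined by the inputs.
print(toss(2.0, 0.7), toss(2.0000001, 0.7))
```

The point of the sketch is not that dice are computable in practice, but that probability here is only a name for our ignorance of the inputs: given the inputs, the output never varies.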

 

Yet, societies’ institutions need at least the illusion of free will out of expediency, to hold citizens accountable for behaviors that breach legal norms. Retributive justice requires laws, calibrated to align with belief in free will, for two reasons: to hold people responsible for their adjudged deeds, and by extension to prevent society unraveling into disorder. Both are noble goals on behalf of accountability and justice.

 

There’s also moral, not just legal, accountability — again aimed to marshal order. To these ends, individuals and communities (social, cultural, religious institutions) establish codes of ethics and social standards. Our language includes words like ‘benevolence,’ to capture behavioral expectations. All the while, determinism puts moral responsibility in peril. Duly, even just a degree of free choice serves the purpose of compatibilists (those who believe free will and determinism coexist). Their purpose is this: to be behaviorally free enough to have done otherwise, at least in some instances.

 

The preceding phrase, ‘at least in some instances,’ is tellingly how some compatibilists specify why marrying free will to determinism might work. However, calculatedly tinkering with free will so as to link it to determinism invariably dilutes commonsense notions of free will — as the writings of Hobbes, Mill, Kant, and Ayer, to mention just a few, show.

 

The takeaway is that compatibilism — no matter how free will may offhandedly be redefined and weakened to compel a partnership with determinism — seems not to work. Instead, it appears that determinism alone defines destiny.

 

Monday, 30 December 2024

What’s Next for Artificial Intelligence?


By Keith Tidman

For many years now people have been sounding the clarion call of Artificial Intelligence, buying into its everyday promise in ways we’ve grown accustomed to, as it scrapes the internet trove for usable information while focusing largely on single tasks. But the call of what’s being referred to as Artificial General Intelligence, also known as ‘strong AI’ or simply AGI, has fallen on less-attentive ears. Often, its potentially vast abilities, acting as a proxy for the human brain’s rich neural network, have been relegated by popular culture’s narrow vision to the realm of science fiction.

Yet, the more likely impacts of strong AI will manifest themselves in the form of major shifts in how we model reality across all aspects of civilization, from the natural sciences to the social sciences and the full breadth of the humanities, where ultimately very few, if any, domains of human intellectual and other activity will be left untouched. In some cases, adjustments to theories of knowledge will accrete like coral; in others they will reflect vast paradigm shifts, as such fundamental change was termed by the philosopher of science Thomas Kuhn. These sweeping, forward-leaping shifts in knowledge and understanding will serve in turn as the fertile seedbeds of what’s designated AGI superintelligence.

 

We can expect, in the coming years, a steady stream of eureka moments, as physicists, neuroscientists, biologists, chemists, computer scientists, philosophers of mind, and others working on aspects of strong AI’s development explore the frontiers of what’s possible. Even so, there’s still a way to go in order to grasp the full vision, and the precise timeline is the subject of earnest debate. This, despite the fact that Nobel prizes were chalked up in 2024 for investigations into this field, including for the development of machine-learning technology using artificial neural networks. (Geoffrey Hinton, often referred to as the ‘godfather of AI’, and physicist John Hopfield were among these awardees.)


Deep questions and much learning remain, however, around what’s necessary for even approximating the complexities of the human mind and consciousness, ranging from thinking about thinking to the insatiability of wide-eyed curiosity. After all, unlike the relatively more brute-force-like tactics of today’s narrower, so-called ‘weak AI’, the robustness of Artificial General Intelligence at its pinnacle will allow it to do all sorts of things: truly think, understand, ideate, experience, solve problems in unheard-of ways, experiment, deconstruct and reconstruct, intuit, engage in what-if thought experimentation, critically analyze, and innovate and create on grand scales.

 

Increasingly, the preceding abilities will be the stuff of science fact, not science fiction. And eventually, through the ensuing possibility of AGI’s self-optimization — that is, absent intervention by biased human algorithm-builders — Artificial General Intelligence will be able to do all that, and more, much better than humans can. Self-optimization translates to the technology managing its own evolutionary journey. That tipping point in the dash toward superintelligence will likely become a matter for irrepressibly curious, enterprising humans and strong AI itself to figure out how to accommodate and catalyze each other, for the best outcome.


Within the philosophy of science there is a posture of scientific observation called epistemic humility. It is rooted in the acceptance that knowledge of the world is always interpreted, structured, and filtered by the observer, and that, consequently, pronouncements need to be built on a recognition of how hard the world is to grasp. The approach has implications in the heady sprint toward superintelligence. Epistemic humility refers to the limits on what we know or think we know (provisional knowledge); the degrees of certainty or uncertainty with which we know it; what we don’t know but later might with further investigation; and what’s deemed, at least for now, flat-out unknowable. In other words, don’t just assume; instead, rationally and empirically verify or falsify, and then verify or falsify again, with our minds open to new information and calls for changed models. Artificial General Intelligence will be a critical piece of that humbling puzzle.

 

Other links between AGI and things, events, and conditions in the world will include, in the longer term, consciousness-like abilities such as awareness, perception, sentience, identity, presence in time and space, visions of alternative futures, anchors to history, pondering, volition, imagination, adaptation, innovation, sense of agency, memory — and more: to know that it itself purposely exists. Just as the whole range of human cognitive capabilities emerges from the neurophysiological activity of a person’s brain, so will they emerge from the inner network of Artificial General Intelligence, its nonbiological foundation notwithstanding. Certainly, the future commercial scaling up of quantum computers, with their stunningly ultra-fast processing compared even with today’s supercomputers (quantum computing is projected to be many millions of times faster), will help fast-track AGI’s reach. The international race is on.

 

Critics warn, though, that the technology could lead to civilizational and human extinction. Two years ago, one advocacy organization hyperbolically framed humanity’s challenge in the arena of Artificial Intelligence as equivalent to mitigating the risk posed by the trajectory of climate change, the prospect of future pandemics, and the world’s bristling nuclear arsenals. I suspect such apocalyptic anxieties, although admittedly palpable, will ultimately prove to be unhelpful miscues and distractions on the upcoming AGI stage. Ridding ourselves more and more of what today’s AI industry daintily refers to as ‘hallucinations’ — or, in everyday parlance, errors — will prove a critical early step in moving toward strong AI. ‘Red teaming’ AGI models in structured environments, while such models evolve in capability and complexity, will test for flaws, harms, vulnerabilities, and misbehaviors, in order to continually inform remediation strategies.


Guardrails are, of course, necessary, but they must not unduly hinder progress. It won’t be enough for even thoughtful protagonists and antagonists of AGI to contest ideas. Rather, the intellectual capital invested in ideas needs to be wide-ranging and inclusive. Humanity will therefore be best served if it allows informed, clear-minded multidisciplinary teams of specialists — ethicists, physicists, legal scholars, anthropologists, philosophers, technologists, neuroscientists, historians, sociologists, psychologists, government policymakers — along with the public at large to share their respective expertise and opinions in contemplating prudent ways forward, and for what purposes. Even, perhaps, to consider the potential rights and responsibilities of such stellarly smart systems.

 

In those contexts, we might expect that future development of Artificial General Intelligence will help enrich our understanding of what it means for us to be us in such a world of superintelligent, creative, expert systems. It will irrepressibly bring us to a place where human learning and machine learning intersect in mutually force-multiplying ways. As the technology evolves, the real challenge will be, in the long run, to fathom the world-altering, pan-visionary promise of what AGI can know, understand, innovate, and do as part of our common enterprises.