Tuesday 5 December 2023

Chernobyl's Philosophical Lesson

How to Slay the Nuclear Zombie? 

By Martin Cohen

Review article on the occasion of the publication of ‘Chernobyl’ by Emin Altan

Now here's a coffee-table, debate-starting book with a difference. Emin Altan’s photographic tale of the nuclear power station that exploded on 26 April 1986 is both a grim journey and yet somehow a poetic one. Page after page of evocative images – black and white with just a hint of lost colour – speak not only about the folly of nuclear power, but of the greater folly of human conceit.

The images in the book for the most part fall into two categories. There are the ones from the radiation-soaked exclusion zone that could have been taken almost anywhere human plans have been thwarted and decay has set in. A basketball court strewn with rubble, juxtaposed with a rediscovered photo – hopelessly mouldy – of children in gym gear exercising with sticks, is an example that caught my eye. You sense that these children were imagining themselves as future world-beaters, and the reality of human transience is brought home by the peeling decay of the abandoned gym.

There is a beauty in these decaying photographs that Altan’s book powerfully conveys. The book plays with images of life that are also images of death. This is a photographic essay about much more than Chernobyl; better to say it is about existential questions of human existence. Scenes of life abruptly halted blend with decades of inevitable decay. But then, you might wonder, how does nuclear energy, always keen to claim to be the brave and the new, fit in? It fits very well because, as I say in my contribution to the book, nuclear energy is a zombie technology… a technology that rises from the grave, if not every night then seemingly every decade, before stalking the Earth in pursuit of hapless victims.

Nuclear energy is eye-wateringly expensive, with effectively unlimited downstream costs for dealing with shuttered power stations and radioactive waste. It is the only human strategy for energy generation that also comes with a very real risk of one day destroying all human life on the planet.

Another paradox is that, in recent years, the nuclear industry has sold its reactors not to wealthy countries but to the world’s poorest: Sudan, Nigeria, Egypt, the Philippines, Indonesia… Why do such countries sign up for nuclear? The answer is finance deals, and dirty money for regimes. Which is why India and China, countries in which millions of people live below the poverty line and can’t afford electricity at all, are the world’s biggest spenders on nuclear.

However, the reasons why, once upon a time, all self-respecting environmentalists hated nuclear power are still there. It produces invisible pollution— radiation— with the potential to seep everywhere, causing genetic diseases that interfere with nature. After the explosion at Chernobyl, an invisible cloud slowly spread across the Earth poisoning food chains and leaving toxic residues in the seas and soils. Residues that would be toxic for thousands of years… And Chernobyl could have been far worse, had it not been for the heroism coupled with (ironically) the ignorance of the people who fought to prevent the plant exploding.

When I researched nuclear’s real share of the world energy pie for my book, The Doomsday Machine, a few years ago, what emerged very clearly was that renewables, including old technologies like hydroelectric, played a secondary but significant part in the energy mix – but nuclear did not. It was, I wrote then, merely ‘the cherry’ on the top of the energy pie.

Because, while the technology of renewables steadily becomes cheaper and more efficient, nuclear energy steadily increases in cost, while efficiency gains remain purely speculative. Put another way, energy is a very complex issue, and simple one-size-fits-all solutions won’t work. It’s true, as the nuclear lobby says, that renewables cannot easily replace nuclear for energy-intensive industries and that their output is by nature erratic. It’s also true that, for all the rhetoric, global primary energy consumption has not only increased over the last century, but has increased exponentially.

The conclusion, then, surely, is that part of the solution to the world’s energy problem – the solution that removes the need for nuclear – is to stop the ever-increasing rate of energy consumption. However, this apparently virtuous aim is complicated, indeed made not virtuous at all, when it is realised that at the moment most of the world’s population already use rather modest amounts of energy, while it is a rich elite who gobble up the lion’s share. Yes, the careless consumption of the world’s rich has to be curbed; but, on the other hand, a more equal distribution of income in the world must inevitably also create higher energy demand. Because, today, hundreds of millions of people lack access to sufficient energy, often with dire consequences for themselves and the environment. When people lack access to electricity for cooking and heating, they rely on solid fuel sources – mostly firewood, but also dung and crop waste. The use of wood for fuel often contributes to deforestation – even if in principle wood can be cropped sustainably. Electricity, on the other hand, allows refrigeration of food, washing machines for clothes, and light to read at night. In some countries today, children can be found sitting under street lights to do their homework. The energy problem of half of the world is energy poverty.

Fortunately, the kinds of energy needed by these families and individuals are increasingly within the ability of renewables to provide, while the demands of industry are flat or declining and possible to meet within the current energy mix – without needing nuclear. Without, likewise, needing us to answer all the existential questions. Which is just as well, as surely these have no easy answers.

Monday 6 November 2023

A Poem comparing Confucius and Marx (with a twist)

By Chengde Chen

I just asked ChatGPT, one of the new artificial intelligence systems, to write a poem about Confucius and Marx. You can see what I got below:

Confucius and Marx 
In ancient halls where wisdom's flame did burn,
Two minds emerged, in different times they'd yearn.
Confucius, sage of old, with noble grace,
His thoughts engraved in time, a lasting trace.

In golden words, he spoke of virtue's way,
Harmony and order, like the break of day.
His Analects, a guide for hearts and minds,
A path to walk, where virtue intertwines.

Marx, a modern thinker, bold and keen,
In times of change, a revolutionary dream.
He saw the struggle, class against class,
A call for justice, echoing like glass.

Confucius sought the moral man's design,
Harmony in life, like a vintage wine.
Marx, the critic, saw the chains of fate,
Called for a revolution, a new state.

Two voices echo through the corridors,
Philosophers distinct, like distant shores.
Confucius with tradition, hand in hand,
Marx, the visionary, reshaping the land.

In ancient scrolls and communist tomes,
Their words live on, in intellectual homes.
Confucius and Marx, an unlikely pair,
Yet their ideas dance in the cosmic air.

I really think we poets, and writers generally, are close to being made redundant by it, and I feel lucky to have got my novel published before it came along! A philosopher friend (Donald Gillis) recently talked to me about this question: how should we define AI knowledge – and what is the difference between a computer gaining from ‘deep learning’ from numerous documents and humble humans learning from experience?

(Oh, and the image for the poem was also created by artificial intelligence!)

Thursday 26 October 2023

Why Don't People Seem to Care about Palestinian Lives?

Palestine is being ‘ethnically cleansed’ in plain sight - yet the West seems indifferent

By Martin Cohen

Palestine is being ‘ethnically cleansed’ in plain sight - yet the West seems indifferent. Why is this? Wherever you start, the trail soon leads back to US politics.

How close is the current U.S. President, Joe Biden, to Israel, and how much influence does the US have over Israeli policy? The answers are “very” and “not much”. In 2010, in the middle of the then-vice president’s trip to Israel, the ‘Bibi’ Netanyahu government embarrassed Biden by announcing 1,600 new homes for Jews in East Jerusalem, which was supposed to be the future capital of a future Palestinian rump state. Biden is notoriously aggressive and won’t normally tolerate any disagreement. Thus, in a 2022 article for Axios, entitled ‘Old Yeller: Biden’s Private Fury’, Alex Thompson notes how:

“Being yelled at by the president has become an internal initiation ceremony in this White House, aides say — if Biden doesn't yell at you, it could be a sign he doesn't respect you.”

But with Israel, it seems the situation is rather different.

One of Netanyahu’s advisors, Uzi Arad, later revealed that when Prime Minister Netanyahu met with Biden soon after publicly humiliating him, Biden threw his arm around “Bibi” and said with a smile, “Just remember that I am your best fucking friend here.” Likewise, in 2012, Biden publicly said to Netanyahu: 

“Bibi, I don’t agree with a damn thing you say, but I love you.”

In vain, it seems, do advisors try to educate Biden about the complex politics of the region. About memories like that of the Nakba, at the heart of this ignored history. This is a term which means “catastrophe” in Arabic; it refers to the mass displacement and dispossession of Palestinians during the 1948 Arab-Israeli war. Prior to this, contrary to claims that Arabs and Jews cannot live together, Palestine was a multi-ethnic and multi-cultural society. However, the conflict between Arabs and Jews intensified in the 1930s with the increase of Jewish immigration, driven by persecution in Europe, and with the Zionist movement aiming to establish a Jewish state in Palestine. It is always unpopular to state it, but in fact Hitler supported the idea, which surely tells you what a terrible one it always was.

Today, the politics of America – and of many other countries too, including the U.K. – with respect to Israel is characterised by three things: prejudice against Arabs, who are seen as various kinds of “terrorist”; ignorance of the history of the region; and indifference to it. However, American politics adds one other ingredient, and a most dangerous one too, which is an irrational conviction that the Bible predicts the Second Coming of the Messiah – but only once the Holy Land is reunited under Israeli control. It has even been suggested that Joe Biden is part of this evangelical cult, though I have no way of knowing if this rumour is true. What I do know is that this ridiculous and irrational view has considerable influence on both the Democratic and Republican parties. It feeds into a political consensus that, one way or bloody another, Palestine needs to become “Israel”.

Nonetheless, in November 1947, the UN General Assembly passed a resolution partitioning Palestine into two states, one Jewish and one Arab (with Jerusalem under UN administration). When, understandably, the Arab world rejected the plan, Jewish militias launched attacks against Palestinian towns and villages, forcing tens of thousands to flee. The situation escalated into a full-blown war in 1948. The result of this war was the permanent displacement of more than half of the Palestinian population.

Today, most of the inhabitants of Gaza are refugees or descendants of refugees from the 1948 Nakba and the 1967 war, and more than half are under the age of 18. Apart from the tragedy of forcibly displacing children, attempts to blame the inhabitants of Gaza for either “voting for” Hamas or not resisting them are hollow given this age distribution.

Today too, due to Israel’s siege of Gaza, the majority of Palestinians there no longer have access to basic needs such as healthcare, water, sanitation services, and electricity. Prior to the siege, their situation was already pretty desperate: according to the UN, 63 percent of the population was dependent on international aid; 80 percent lived in poverty and 95 percent did not have access to clean water.

Alas, many American voters have been encouraged to feel indifference to Palestinian suffering for decades, and instead have passively accepted an alternative reality in which the Jewish people not only there, but worldwide, are a persecuted but courageous minority. Never mind that nearly six million Americans are Jewish and live pretty safely there…

The bottom line, then, is that, in the normal way, there is NO political price to be paid by the Democrats for supporting the Israeli government in its latest, murderous expansion of “Jewish areas”. However, this time, I actually think, is NOT normal.

The catch is, despite Biden's "unconditional" support, Israel knows the Palestinians won't conveniently flee abroad (despite so many being killed at the moment, with highly publicised strategies of cutting off water and bombing hospitals), so its strategy becomes one of just killing. But Gaza alone contains some 600,000 people - mostly children. If they won’t flee, then they need to be killed. After all, Gaza was already a kind of prison. It will be hard to square that circle.

When I was younger, I remember meeting some of the "IDF heroes" of the last war - certainly they fought at a significant disadvantage against well-armed foes. Could it be that today the 360,000 reservists begin to doubt their commanders? I think it is possible. If not, they will soon find themselves wading through civilian bodies in the rubble of Palestinian homes.

But back to a question posed recently on Quora: will Biden pay a price for his indifference to the plight of millions of Palestinians? No, in the short term, I don’t see Biden or anyone else paying a price for this. However, in the longer term – indeed, maybe as soon as within a few months – I think things will look very different. At which point, either Israel corrects itself (as Netanyahu represents only a small minority) – or history will do it for them.

Monday 28 August 2023

A Word to the Wise

Philosophy is a sailboat that deftly catches the fair breeze…

By Andrew Porter

We live in a time in which most people, were you to ask them ‘Do you think you’re wise?’, would look askance or confused and not answer straightforwardly. They are not prepared for the question by long anticipation, by living in that habitat. You might hear answers such as ‘I’m wise about some things’ or ‘I’m pretty savvy when it comes to how to handle people’. But your question would remain unanswered.

Maybe it’s the circles I run in, but it seems that there's little to no hankering for wisdom; it is not prevalent. It is as if many people feel that moral relativism – the common zeitgeist – has taken them off the hook and they are relieved. But choices have a way of illuminating obvious help or harm. There’s really no getting off the hook.

Wisdom can be encapsulated in a reasoned decision by an individual, but it is always in tune with larger reason. One of the great things about Plato as a philosopher is that he walks around and into the thick of the question of wisdom with boldness and perspective. A champion of reason, he grounds human morality in virtue, but emphasises that it is part of a ‘virtue’ of reality: the nature and function of the ontologically real is to be good, true, and beautiful.

This immersion of humankind and personal choices in a larger environment seems a crucial lesson for our times. This odd and ungrounded era we live in does not have a ready and able moral vocabulary; it, more often than not, leaves moral nuance like an abandoned shopping cart in the woods. Why is Plato one of the best voices to re-energise as his philosophy applies to current-day issues and angst?

One of the problems of individuals and institutions in contemporary times is that they think they are wise without ever examining how and if that’s true. So often, they – whether you yourself, a spouse, a boss, politicians, or fellow citizens – assume a virtue they own not. This is exactly what Socrates, in Plato's hands, addresses. What are some of the problems in the world open to reform or transformation?

Certainly, social justice issues continue to rear their head and undermine an equitable society. Entrenched power systems and attendant attitudes are not only slow to respond, but display no moral understanding. Today, it seems there is a raft of problems, from psychological to philosophical, and the consequences turn dire. At the root of all actual and potential catastrophes, it seems, is a lack of that one thing that has been waylaid, discarded, and ignored: wisdom.

Plato crafted his philosophy about soul and virtue, justice and character, in alignment with his metaphysics. This is its genius: making a harmony of inner and outer.

In the Republic, Plato himself oscillates between saying that a philosopher-king – the only assurance that the city would be happy and just – would be a lover of wisdom, and saying that he would be actually wise. In our time, the problem is a lack of desire to find or inculcate wisdom. Societies have, in general, hamstrung themselves. We do not have ready tools to care about and value wisdom, however far off. We do not, to any cogent degree, educate children to be philosopher-kings of their own lives.

Western societies, and perhaps Eastern ones as well, have not increased in wisdom because they have abandoned the pursuit. The task is left unattended. The current problem is not that the world (or smaller entities such as companies, schools, and individuals) cannot find a truly wise person; so-called civilisation acts wilfully against finding, or even thinking about finding, such a person. It is a mobile home that has been put up on blocks.

Philosophy can inculcate the kind of consciousness that the twentieth-century Swiss philosopher Jean Gebser called integral reality, which perceives a truth that, as he says, ‘transluces’ both the world and humankind (in the sense of shining light through). In short, philosophy holds the promise of educating. It is not a crazy old man on his porch, moving his cane to tell the traffic to slow down; rather, philosophy is a sailboat that deftly catches the fair breeze – and moves us forward.

Monday 7 August 2023

The Dubious Ethics of the Great Food Reset


By Martin Cohen

There’s a plan afoot to change the way you eat. Meat is destroying the land, fish and chips destroy the sea, and dairy is just immoral. Open the paper and you'll see a piece on how new biotechnologies are coming to the rescue. It's all presented as a fait accompli, with the result that today we are sleepwalking towards not only a "meat-free" future, but one in which there are no farm animals, no milk, no cheese, no butter - no real food, in short. And that's not in our interests, nor (less obviously) in the interest of biodiversity and the environment. There's just the rhetoric that it is "for the planet".

According to researchers at the US think-tank RethinkX, “we are on the cusp of the fastest, deepest, most consequential disruption” of agriculture in history. And it's happening fast. They say that by 2030, the entire US dairy and cattle industry will have collapsed, as “precision fermentation” – producing animal proteins more efficiently via microbes – “disrupts food production as we know it”.

There’s trillions of dollars at stake and very little public debate about it. Instead, there’s a sophisticated campaign to persuade people that this revolution is both inevitable and beyond criticism.

No wonder Marx declared that food lay at the heart of all political structures and warned of an alliance of industry and capital intent on both controlling and distorting food production.

The Great Food Reset is a social and political upheaval that affects everyone, yet at the moment the debate is largely controlled by the forces promoting the changes: powerful networks of politicians and business leaders, such as the United Nations Environment Programme, the so-called EAT-Lancet "Commission" (it's not really a commission – how words mislead!) and the World Economic Forum, all sharing a rationale of 'sustainable development', market expansion, societal design, and resource control. Vocal supporters are the liberal media and academics who, perversely, present the movement as though it were part of a grassroots revolution.

There have been plenty of political programmes designed to push people into ‘the future’. Often, they flirt with increasingly intolerant compulsion. So too, with The Great Food Reset. Governments are already imposing heavy burdens on traditional farming and attempting to penalise the sale of animal products in the marketplace - either on the grounds that they are ‘unhealthy’ or, even more sweepingly, that they are bad for the environment.

In recent months, the steam has gone out of the “vegan food revolution”, mainly because people like their traditional foods more than the new ones, which typically are made from the four most lucrative cash crops: wheat, rice, maize and soybean. Incredibly, and dangerously, of the over half a million plant species on the planet, we currently rely on just these four crops for more than three-quarters of our food supply. Animal-sourced foods are our link to food variety.

But there's another reason to defend animal farming, which is that for much of the world, small farms are humane farms, with the animals enjoying several years of high-quality life in the open fields and air. The new factory foods have no need for animals, and the argument that, well, better dead than farmed, just doesn't hold water – at least for traditional farms. It's the fundamental ethical dilemma: yes, death is terrible – but is it worse to have never lived?

In recent decades, we’ve seen many areas of life remodelled, whether we wanted them to be or not. But to dictate how we grow food, how we cook food, and how we eat it, may just be a step too far.

Monday 17 July 2023

When Is a Heap Not a Heap? The Sorites Paradox and ‘Fuzzy Logic’

By Keith Tidman

Imagine you are looking at a ‘heap’ of wheat comprising some several million grains, and just one grain is removed. Surely you would agree with everyone that afterward you are still staring at a heap. And that the onlookers were right to continue concluding ‘the heap’ remains a reality if another grain were to be removed — and then another and another. But as the pile shrinks, the situation eventually gets trickier.


If grains continue to be removed one at a time, in incremental fashion, when does the heap no longer qualify, in the minds of the onlookers, as a heap? Which numbered grain makes the difference between a heap of wheat and not a heap of wheat? 


Arguably we face the same conundrum if we were to reverse the situation: starting with zero grains of wheat, then incrementally adding one grain at a time, one after the other (n + 1, n + 2 ...). In that case, which numbered grain causes the accumulating grains of wheat to transition into a heap? Put another way, what are the borderlines between true and not true as to pronouncing there’s a heap?


What we’re describing here is called the Sorites paradox, invented by the fourth-century BC Athenian Eubulides, a philosopher of the Megarian school – named after Euclides of Megara, one of the pupils of Socrates. The school, or group, is famous for paradoxes like this one. ‘Sorites’, by the way, derives not from a particular person, but from the Greek word soros, meaning ‘heap’ or ‘pile’. The focus here is on the boundary of ‘being a heap’ or ‘not being a heap’, which is indistinct when single grains are either added or removed. The paradox is deceptive in appearing simple, even simplistic; yet any number of critically important real-world applications attest to its decided significance.


A particularly provocative case in point, exemplifying the central incrementalism of the Sorites paradox, concerns deciding when a fetus transitions into a person. Across the milestones of conception, birth, and infancy, the fetus-cum-person acquires increasing physical and cognitive complexity and sophistication, occurring in successively tiny changes. This involves not just the number of features, but of course also the particular type of features (that is, qualitative factors). Leading us to ask: what are the borderlines between true and not true as to pronouncing there’s a person? As we know, this example of gradualism has led to highly consequential medical, legal, constitutional, and ethical implications being heatedly and tirelessly debated in public forums.


Likewise, with regard to this same Sorites-like incrementalism, we might assess which ‘grain-by-grain’ change rises to the level of a ‘human being’ close to the end of a life — when, let’s say, deep dementia increasingly ravages aspects of a person’s consciousness, identity, and rationality, greatly impacting awareness. Or, say, when some other devastating health event results in gradually nearing brain death, and alternative decisions hover perilously over how much to intervene medically, given best-in-practice efforts at a prognosis and taking into account the patient’s and family’s humanity, dignity, and socially recognised rights.


Or take the stepwise development of ‘megacomplex artificial intelligence’. Again, this involves consideration of not just ‘how many features’ (n + 1 or n - 1), but also ‘which features’, the latter entailing qualitative factors. The discussion has stirred intense debate over the race for intellectual competitiveness, prompting hyperbolic public alarms about ‘existential risks’ to humanity and civilisation. The machine equivalence of human neurophysiology is speculated to transition, over years of gradual optimisation (and, down the road, even self-optimisation), into human-like consciousness, awareness, and cognition. Leading us to ask: where are the borderlines between true and not true as to pronouncing it has consciousness and greater-than-human intelligence?


In the three examples of Sorites ‘grain-by-grain’ incrementalism above — start of life, end of life, and artificial general intelligence — words like ‘human’, ‘consciousness’, ‘perception’, ‘sentience’, and ‘person’ provide grist for neuroscientists, philosophers of mind, ethicists, and AI technologists to work with, until the desired threshold is reached. The limitations of natural language, even in circumstances mainly governed by the prescribed rules of logic and mathematics, might not make it any easier to concretely describe these crystalising concepts.


Given the nebulousness of terms like personhood and consciousness, which tend to bob up and down in natural languages like English, bivalent logic — where a statement is either true or false, but not both or in-between — may be insufficient. The Achilles’ heel is that the meaning of these kinds of terms may obscure truth as we struggle to define them. Whereas classical logic says there either is or is not a heap, with no shades in the middle, there’s something called fuzzy logic that scraps bivalence.


Fuzzy logic recognises there are both large and subtle gradations between categorically true and categorically false. There’s a continuum, where statements can be partially true and partially false, while also shifting in their truth value. A state of becoming, one might say. A line may thus be drawn between concepts that lie on such continuums. Accordingly, as individual grains of wheat are removed, the heap becomes, in tiny increments, less and less a heap — arriving at a threshold where people may reasonably concur it’s no longer a heap.
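The continuum that fuzzy logic describes is easy to make concrete. The short Python sketch below assigns each grain count a degree of ‘heapness’ between 0 (definitely not a heap) and 1 (definitely a heap). The two thresholds and the linear ramp are purely illustrative assumptions, since the paradox itself fixes no such numbers.

```python
def heapness(grains: int, lower: int = 10, upper: int = 10_000) -> float:
    """Degree of truth, in [0, 1], of the statement 'this is a heap'.

    The thresholds are illustrative assumptions: at or below `lower`
    grains the statement is wholly false (0.0); at or above `upper`
    it is wholly true (1.0). In between, truth accrues gradually, so
    removing one grain changes the degree only slightly rather than
    flipping it at a single decisive grain.
    """
    if grains <= lower:
        return 0.0
    if grains >= upper:
        return 1.0
    # Linear interpolation between the two thresholds.
    return (grains - lower) / (upper - lower)
```

On this picture, the bivalent question (‘heap or not?’) is replaced by a graded one, and any sharp cut-off, say declaring a heap wherever heapness exceeds 0.5, is an explicit and revisable convention rather than a discovery.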


That tipping point is key, for vagueness isn’t just a matter of logic, it’s also a matter of knowledge and understanding (a matter of epistemology). In particular, what do we know, with what degree of certainty and uncertainty do we know it, when do we know it, and when does what we know really matter? Also, how do we use natural language to capture all the functionality of that language? Despite the gradations of true and false that we just talked about in confirming or refuting a heap, realistically the addition or removal of just one grain does in fact tip whether it’s a heap, even if we’re not aware which grain it was. Just one grain, that is, ought to be enough in measuring ‘heapness’, even if it’s hard to recognise where that threshold is.


Another situation involves the moral incrementalism of decisions and actions: what are the borderlines between true and not true as to pronouncing that a decision or action is moral? An important case is when we regard or disregard the moral effects of our actions. Such as, environmentally, on the welfare of other species sharing this planet, or concerning the effects on the larger ecosystem in ways that exacerbate the extreme outcomes of climate change.


Judgments as to the merits of actions are not ethically bivalent, either — by which I mean they do not tidily split between being decidedly good or decidedly bad, leaving out any middle ground. Rather, according to fuzzy logic, judgments allow for ethical incrementalism between what’s unconditionally good at one extreme and what’s unconditionally bad at the other extreme. Life doesn’t work quite so cleanly, of course. As we discussed earlier, the process entails switching out from standard logic to allow for imprecise concepts, and to accommodate the ground between two distant outliers.


Oblique concepts such as ‘good versus bad’, ‘being human’, ‘consciousness’, ‘moral’, ‘standards’ — and, yes, ‘heap’ — have very little basis from which to derive exact meanings. A classic example of such imprecision is voiced by physics’ uncertainty principle: we cannot know both the momentum and the position of a particle with simultaneously equal accuracy. As our knowledge of one factor increases in precision, knowledge of the other decreases in precision.


The assertion that ‘there is a heap’ becomes less true the more we take grains away from a heap, and becomes increasingly true the more we add grains. Finding the borderlines between true and not true in the sorts of consequential pronouncements above is key. And so, regardless of the paradox’s ancient provenance, the gradualism of the Sorites metaphor underscores its value in making everyday determinations between truth and falsity.

Monday 26 June 2023

Ideas Animate Democracy

By Keith Tidman

The philosopher Søren Kierkegaard once advised, ‘Life can only be understood backwards … but it must be lived forward’ — that is, life is understood with one eye turned to history, and presaged with the other eye turned to competing future prospects. An observation about understanding and living life that applies across the board: to individuals, communities, and nations. Another way of putting it is that ideas are the grist for thinking not only about ideals but about the richness of learnable history and the alternative futures from which society asserts agency in freely choosing its way ahead.

Of late, though, we seem to have lost sight of the fact that one way for democracy to wilt is to shunt aside ideas that might otherwise inspire minds to think, imagine, solve, create, discover and innovate — the source of democracy’s intellectual muscularity. For reflexively rebuffing ideas and their sources is really about constraining inquiry and debate in the public square. Instead, there has been much chatter about democracies facing existential grudge matches against exploitative autocratic regimes that issue their triumphalist narratives and view democracy as weak-kneed.

In mirroring the decrees of the Ministry of Truth in the dystopian world of George Orwell’s book Nineteen Eighty-Four — where two plus two equals five, war is peace, freedom is slavery, and ignorance is strength — unbridled censorship and historical revisionism begin and end with the fear of ideas. Ideas snubbed by authoritarians’ heavy hand. The short of it is that prohibitions on ideas end up a jumbled net, a capricious exercise in power and control. Accordingly, much exertion is put into shaping society’s sanctioned norms, where dissent isn’t brooked. A point about which the philosopher Hannah Arendt cautioned, ‘Totalitarianism has discovered a means of dominating and terrorising human beings from within’. Trodden-upon voting and the ardent circulation of propagandistic themes, both of which torque reality, hamper free expression.


This tale about prospective prohibitions on ideas is about the choice between richness of thought and poverty of thought — a choice we must get right, and can get right only by making it possible for new intellectual shoots to sprout from the raked seedbed. The optimistic expectation is that we thereby come to understand and act on firmer notions of what’s real and true. But which reality? One reality is that each idea arbitrarily embargoed delivers yet another chink in democracy’s armour; a very different reality is that each idea, however provocative, allows democracy to flourish.


Only a small part of the grappling over ideas is for dominion over which ideas will reasonably prevail long term. The larger motive is to honour, and celebrate, the openness of ideas’ free flow. This exercise brims with questions about knowledge. Like these: What do we know, how do we know it, with what certainty or uncertainty do we know it, how do we confirm or refute it, how do we use it for constructive purposes, and how do we allow for change? Such fundamental questions crisscross all fields of study. New knowledge ferments to improve insight into what’s true. Emboldened by this essential exercise, an informed democracy is steadfastly enabled to resist the siren songs of autocracy.


Ideas are accelerants in the public forum. Ideas are what undergird democracy’s resilience and rootedness, on which standards and norms are founded. Democracy at its best allows for the unobstructed flow of different social and political thought, side by side. As Benjamin Franklin, polymath and statesman, prophetically said: ‘Freedom of speech is a principal pillar of a free government’. A lead worth following. In this churn, ideas soar or flop by virtue of the quality of their content and the strength of their persuasion. Democracy allows its citizens to pick which ideas normalise standards — through debate and subjecting ideas to scrutiny, leading to their acceptance or refutation. Acid tests, in other words, of the cohesion and sustainability of ideas. At its best, debate arouses actionable policy and meaningful change.


Despite society being buffeted daily by roiling politics and social unrest, democracy’s institutions are resilient. Our institutions might flex under stress, but they are capable of enduring the broadsides of ideological competitiveness as society makes policy. The democratic republic is not existentially imperiled. It’s not fragilely brittle. America’s Founding Fathers set in place hardy institutions, which, despite public handwringing, have endured challenges over the last two-and-a-half centuries. Historical tests of our institutions’ mettle have inflicted only superficial scratches — well within institutions’ ability to rebound again and again, eventually as robust as ever.


Yet, as Aristotle importantly pointed out by way of a caveat to democracy’s sovereignty and survivability, 

‘If liberty and equality . . . are chiefly to be found in democracy, they will be attained when all persons share in the government to the utmost.’

A tall order, as many have found, but a worthy and essential one, teed up for democracies to assiduously pursue. Democracy might seem scruffy at times. But at its best, democracy ought not fear ideas; that fear commonly bubbles up from overwrought narrative and unreasoned parochialism, in the form of ham-handed constraints on thought and expression.


The fear of ideas is often more injurious than the content of ideas, especially in the shadows of disagreeableness intended to cause fissures in society. Ideas are thus to be hallowed, not hollowed. To countenance contesting ideas — majority and minority opinions alike, forged on the anvil of rationalism, pluralism, and critical thinking — is essential to the origination of constructive policies and, ultimately, how democracy is constitutionally braced.



Monday 12 June 2023

The Euthyphro Dilemma: What Makes Something Moral?

The sixteenth-century nun and mystic Saint Teresa, who wrote in her autobiography that she was very fond of St. Augustine … for he was a sinner too

By Keith Tidman  

Consider this: ‘Is the pious being loved by the gods because it is pious, or is it pious because it is being loved by the gods?’ (Plato, Euthyphro)

Plato has Socrates asking just this of the Athenian prophet Euthyphro in one of his most famous dialogues. The characteristically riddlesome inquiry became known as the Euthyphro dilemma. Another way to frame the issue is to flip the question around: Is an action wrong because the gods forbid it, or do the gods forbid it because it is wrong? This version presents what is often referred to as the ‘two horns’ of the dilemma.


Put another way, if what’s morally good or bad is only what the gods arbitrarily make something, called the divine command theory (or divine fiat) — which Euthyphro subscribed to — then the gods may be presumed to have agency and omnipotence over these and other matters. However, if, instead, the gods simply point to what’s already, independently good or bad, then there must be a source of moral judgment that transcends the gods, leaving that other, higher source of moral absolutism yet to be explained millennia later. 


In the ancient world the gods notoriously quarreled with one another, engaging in scrappy tiffs over concerns about power, authority, ambition, influence, and jealousy, on occasion fueled by unabashed hubris. Disunity and disputation were the order of the day. Sometimes making for scandalous recounting, these quarrels comprised the stuff of modern students’ soap-opera-styled mythological entertainment. Yet, even when there is only one god, disagreements over orthodoxy and morality occur aplenty. The challenge mounted by the dilemma is as important to today’s world of a generally monotheistic god as it was to the polytheistic predispositions of ancient Athens. The medieval theologians’ explanations are not enough to persuade:

‘Since good as perceived by the intellect is the object of the will, it is impossible for God to will anything but what His wisdom approves. This is, as it were, His law of justice, in accordance with which His will is right and just. Hence, what He does according to His will He does justly: as we do justly when we do according to the law. But whereas law comes to us from some higher power, God is a law unto Himself’ (St. Thomas Aquinas, Summa Theologica, First Part, Question 21, first article reply to Obj. 2).

In the seventeenth century, Gottfried Leibniz offered a firm challenge to ‘divine command theory’, asking whether right and wrong can be known only by divine revelation. He suggested, rather, that there ought to be reasons, apart from religious tradition alone, why particular behaviour is moral or immoral:


‘In saying that things are not good by any rule of goodness, but sheerly by the will of God, it seems to me that one destroys, without realising it, all the love of God and all his glory. For why praise him for what he has done if he would be equally praiseworthy in doing exactly the contrary?’ (Discourse on Metaphysics, 1686).


Meantime, today’s monotheistic world religions offer, among other holy texts, the Bible, Qur’an, and Torah, bearing the moral and legal decrees professed to be handed down by God. Despite the dissimilarity of the situations — the ancient world of Greek deities and modern monotheism (as well as some of today’s polytheistic practices) — both serve as examples of ‘divine command theory’. That is, what’s deemed pious is presumed to be so precisely because God chooses to love it, in line with the theory. That pious something or other is not independently adrift, noncontingently virtuous in its own right, with nothing transcendent making it so.


This presupposes that God commands only what is good. It also presupposes that, for example, the giving of charity, the avoidance of adultery, and the refraining from stealing, murdering, and ‘graven images’ are morally good if, and only if, God loves these and other commandments. The complete taxonomy (or classification scheme) of edicts aims at placing guardrails on human behaviour in the expectation of a nobler, more sanctified world. But God loving what’s morally good for its own sake — that is, apart from God making it so — clearly denies ‘divine command theory’.


For, if the pious is loved by the gods because it is pious, which is one of the interpretations offered by Plato (through the mouth of Socrates) in challenging Euthyphro’s thinking, then it opens the door to an authority higher than God. Where matters of morality may exist outside of God’s reach, suggesting something other than God being all-powerful. Such a scenario pushes back against traditionally Abrahamic (monotheist) conceptualisations.


Yet, whether the situation calls for a single almighty God or a yet greater power of some indescribable sort, the philosopher Thomas Hobbes, who like St. Thomas Aquinas and Averroës believed that God commands only what is good, argued that God’s laws must conform to ‘natural reason’. Hobbes’s point makes for an essential truism, especially if the universe is to have rhyme and reason. This being true even if the governing forces of natural law and of objective morality are not entirely understood or, for that matter, not compressible into a singularly encompassing ‘theory of all’. 


Because of the principles of ‘divine command theory’, some people contend the necessary takeaway is that there can be no ethics in the absence of God to judge something as pious. In fact, Fyodor Dostoyevsky, in The Brothers Karamazov, presumptuously declared that ‘if God does not exist, everything is permitted’. Surely not so; you don’t have to be a theist of faith to spot the shortsighted dismissiveness of his assertion. After all, an atheist or agnostic might recognise the benevolence of, even the categorical need for, adherence to manmade principles of morality, to foster the welfare of humanity at large for its own sufficient sake. Secular humanism, in other words, which greatly appeals to many people.


Immanuel Kant’s categorical imperative supports these human-centered, do-unto-others notions: ‘Act only in accordance with that maxim through which you can at the same time will that it become a universal law’. An ethic of respect toward all, as we mortals delineate between right and wrong. Even with ‘divine command theory’, it seems reasonable to suppose that a god would have reasons for preferring that moral principles not be arrived at willy-nilly.


Monday 29 May 2023

Life in the Slow Lane

Illustration by Clifford Harper/Agraphia.co.uk
By Andrew Porter

Three common plagues were cited in the early New England settlements: wolves, rattlesnakes, and mosquitoes. Our current-day ‘settlements’ – cities and towns – now have their own plagues: a crush of too many people, crummy attitudes, pollution, and retrogressive political actions. How do freedom and power play out amongst individuals and communities?

One lens that can help us gain perspective on our life in relation to necessities and obligations beyond us is to think about our agency and our values. If we get it right about what freedom and power are, we might clarify what values we want to exercise and embody.

People pushed back against the wolves and did what they could against other ‘scourges’, most regularly by killing them. This seemed like freedom – power asserted. Over the centuries, peoples around the world – coursing through trials like wars and epidemics and bouts of oppression, as well as various forms of enlightenment and progress on human rights – have struggled to articulate freedom and power to make existence shine. To fulfill purposes is the human juggernaut; but what purposes? It is pretty vital that we figure out what freedom and power are in this time of converging crises, so that actual life might flourish. The trouble is, so many people are commonly thrown off by false and unjustifiable versions of freedom and power.

In our fast-paced life, we so-called civilised humans have to decide how to achieve balance. This means some kind of genuine honouring of life in its physical and spiritual aspects. The old work-life balance is only part of it. What does vitality itself suggest is optimal or possible, and how do we make sense of what's at stake as we prioritise between competing goods?

If a parent decides that it is a priority to take care of a newborn child rather than sacrifice that time and importance to time at work, they may well be making a fine decision. Freedom here is in the service of vital things. We might say that in general freedom is that which makes you whole and that power is the exercise of your wholeness. Or, freedom is the latitude to live optimally and power is potency for good.

Since freedom means eschewing the lesser and opting for, and living out, what has more value, we had better do some good defining. Freedom accrues only with what is healthful and attends flourishing. If one says, “Top functioning for me is having a broad range of options, the whole moral range,” you can see how this is problematic. We as humans have the range, but our freedom lies in limiting ourselves to the good portion.

Power is commonly considered that which lords the most force over others and exerts the biggest influence broadly. Isn’t this what a hurricane does, or a viral infection, or an invasion? If you look around, though, all the people with so-called power actually dominate using borrowed power: that is, power borrowed from others or obtained on the backs of others, whether human or otherwise. This kind of power – often manifesting in greed and exploitation – is mere thievery. And what about power over one’s own liabilities to succumb or other temptations?

For many people, life in the slow lane is much more satisfying than that in the fast one. However, the big deal may be about getting off the highway altogether. What I am suggesting is that satisfaction and contentment are in the proper measure of freedom and power. And the best definition for organisms is probably that long-established by the planet. Earth has in place various forms of ‘nature’ with common value-elements.

For us, to be natural probably means being both like and unlike the rest of nature. It is some kind of unique salubrity. An ever-greater share of the world’s population lives in busy, highly industrialised societies, and the idea of living naturally can seem to go against our human mission to separate ourselves from the natural world. But the question remains: is the freedom and power that come with ‘natural living’ an antiquated thing, or can you run the world on it; can it work for a life?

Kant spoke of our animality in his Religion Within the Boundaries of Mere Reason (1793), part of his investigation of the ethical life. In it, he argues that animality is an ineliminable and irreducible component of human nature and that the human being, taken as a natural being, is an animal being. Kant says that animality is an “original predisposition [Anlage] to the good in human nature”. We increasingly see that being human means selecting the wisdom of nature, often summed up in ecological equipoise, so that we can survive, thrive, and have reason to call ourselves legitimate. Freedom in this consists of developing greater consciousness about our long-term place on Earth (if such is possible), and legitimate power is in exact proportion to the degree we limit ourselves to human ecology.

Life on its own grass-centered lane has figured out what true freedom and power are. The Vietnamese Buddhist monk and global spiritual leader Thích Nhất Hạnh once wrote:
“Around us, life bursts with miracles – a glass of water, a ray of sunshine, a leaf, a caterpillar, a flower, laughter, raindrops.... When we are tired and feel discouraged by life’s daily struggles, we may not notice these miracles, but they are always there.”
Figuring out the most efficacious forms of freedom and power promises to make us treat ourselves and others more justly.

Monday 15 May 2023

‘Game Theory’: Strategic Thinking for Optimal Solutions

Cortés began his campaign to conquer the Aztec Empire by having all but one of his ships scuttled, which meant that he and his men would either conquer the Aztec Empire or die trying. Initially, the Aztecs did not see the Spanish as a threat. In fact, their ruler, Moctezuma II, sent emissaries to present gifts to these foreign strangers.

By Keith Tidman


The Peloponnesian War, chronicled by the historian Thucydides, pitted two major powers of Ancient Greece against each other, the Athenians and the Spartans. The Battle of Delium, which took place in 424 BC, was one of the war’s decisive battles. In two of his dialogues (Laches and Symposium), Plato has Socrates, who actually fought in the war, apocryphally recall the battle, with a bearing on combatants’ strategic choices.


One episode recalls a soldier on the front line, awaiting the enemy to attack, pondering his options in the context of self-interest — what works best for him. For example, if his comrades are believed to be capable of successfully repelling the attack, his own role will contribute only inconsequentially to the fight, yet he risks pointlessly being killed. If, however, the enemy is certain to win the battle, the soldier’s own death is all the more likely and senseless, given that the front line will be routed, anyway, no matter what it does.


The soldier concludes from these mental somersaults that his best option is to flee, regardless of which side wins the battle. His ‘dominant strategy’ being to stay alive and unharmed. However, based on the same line of reasoning, all the soldier’s fellow men-in-arms should decide to flee also, to avoid the inevitability of being cut down, rather than to stand their ground. Yet, if all flee, the soldiers are guaranteed to lose the battle before the sides have even engaged.


This kind of strategic analysis is sometimes called game theory. History provides us with many other examples of its application to the real world, too. In 1519, the Spanish conqueror Cortés landed in the Western Hemisphere, intending to march inland and vanquish the Aztec Empire. He feared, however, that his soldiers, exhausted from the ocean journey, might be reluctant to fight the Aztec warriors, who also greatly outnumbered his own force.


Instead of counting on individual soldiers’ courage or even group esprit de corps, Cortés scuttled his fleet. His strategy was to remove the temptation the ships offered his men to retreat rather than fight — and thus, with no other option, to pursue the Aztecs in a fight-or-die (rather than a fight-or-flee) scenario. The calculus for each of Cortés’s soldiers in weighing his survivalist self-interest had shifted dramatically. At the same time, by brazenly scuttling his ships, Cortés wanted to demonstrate to the enemy that, for reasons the latter couldn’t fathom, his outnumbered force nonetheless appeared fearlessly confident about the upcoming battle.


It’s a striking historical example of one way in which game theory provides means to assess situations where parties make strategic decisions that take account of each other’s possible decisions. The parties aim to arrive at best strategies in the framework of their own interests — business, economic, political, etc. — while factoring in what they believe to be the thinking (strategising) of opposite players whose interests may align or differ or even be a blend of both.


The term, and the theory itself, is much more recent, of course, developed in the early twentieth century by the mathematician John von Neumann and the economist Oskar Morgenstern. They focused on the theory’s application to economic decision-making, given what they considered the game-like nature of economics. Some ten years later, another mathematician, John Nash, along with others, expanded the discipline to include strategic decisions applicable to a wide range of fields and scenarios, analysing how competitors with diverse interests choose to contest with one another in pursuit of optimised outcomes.


Whereas some of the earliest cases focused on ‘zero-sum’ games involving two players whose interests sharply conflicted, later scenarios and games were far more intricate. Such as ‘variable-sum’ games, where there may be all winners or all losers, as in a labour dispute. Or ‘constant-sum’ games, like poker, characterised as pure competition, entailing total conflict. The more intricately constructed games accommodate multiple players, involve a blend of shared and divergent interests, allow successive moves, and give at least one player more information with which to shape his own strategic choices than his competitors hold in hand.


The techniques of game theory and the scenarios examined are notable for their range of applications, including business, economics, politics, law, diplomacy, sports, social sciences, and war. Some features of the competitive scenarios are challenging to probe, such as accurately discerning the intentions of rivals and their behavioural patterns. That being said, many features of scenarios and alternative strategies can be studied by the methods of game theory, grounded in mathematics and logic.


Among the real-world applications of the methods are planning to mitigate the effects of climate extremes; running management-labour negotiations to reach a new contract and head off costly strikes; siting a power-generating plant to reflect regional needs; anticipating the choices of voter blocs; selecting and rejecting candidates for jury duty during voir dire; engaging in a price war between catty-cornered grocery stores rather than both keeping their prices aligned and high; avoiding predictable plays in sports, to make them harder to defend against; foretelling the formation of political coalitions; and negotiating a treaty between two antagonistic, saber-rattling countries to head off runaway arms spending or outright conflict.


Perhaps more trivially, applications of game theory stretch to so-called parlour games, too, like chess, checkers, poker, and Go, which are finite in the number of players and optional plays, and in which progress is achieved via a string of alternating single moves. The contestant who presages a competitor’s optimal answer to their own move will experience more favourable outcomes than one who merely guesses that the opponent will make a particular move with a particular probability.
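This look-ahead reasoning, choosing the move whose worst case is least bad once the opponent replies optimally, is the minimax principle at the heart of the theory of such games. A minimal sketch (the payoff matrix here is an invented illustration, not drawn from the essay):

```python
# Row player's winnings in a made-up two-move, zero-sum game.
# Rows are the row player's moves; columns are the opponent's replies.
MATRIX = [
    [3, -1,  2],
    [1,  0, -2],
]

def minimax_row_choice(matrix):
    """Pick the row whose worst-case payoff, assuming the opponent
    replies with their best counter-move, is highest."""
    worst_cases = [min(row) for row in matrix]  # opponent minimises our winnings
    best = max(range(len(matrix)), key=lambda i: worst_cases[i])
    return best, worst_cases[best]

row, value = minimax_row_choice(MATRIX)
print(row, value)  # row 0 guarantees the row player at least -1
```

The point mirrors the text: anticipating the opponent’s optimal answer (the column-wise minimum) beats betting on any single guessed reply.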


Given the large diversity of ‘games’, there are necessarily multiple forms of game theory. Fundamental to each, however, is that features of the strategising are actively managed by the players rather than left to mere chance, which is why game theory goes several steps further than probability theory alone.


The classic example of a two-person, noncooperative game is the Prisoner’s Dilemma. This is how it goes. Detectives believe that their two suspects collaborated in robbing a bank, but they don’t have enough admissible evidence to prove the charges beyond a reasonable doubt. They need more on which to base their otherwise shaky case. The prisoners are kept apart, out of hearing range of each other, as interrogators try to coax each into admitting to the crime.


Each prisoner mulls their options for getting the shortest prison term. But in deciding whether to confess, they’re unaware of what their accomplice will decide to do. However, both prisoners are mindful of their options and consequences: If both own up to the robbery, both get a five-year prison term; if neither confesses, both are sentenced to a one-year term (on a lesser charge); and if one squeals on the other, that one goes free, while the prisoner who stays silent goes to prison for fifteen years. 


The issue of trust is of course central to weighing the options presented by the ‘game’. In terms of sentences, both prisoners are better off choosing to act unselfishly and remain silent, with each serving one year. But if they choose to act selfishly in expectation of outmaneuvering the unsuspecting (presumed gullible) partner — which is to say, both prisoners picture themselves going free by spilling the beans while mistakenly anticipating that the other will stay silent — the result is much worse: a five-year sentence for both.
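The arithmetic of the dilemma can be checked mechanically. The sketch below (a minimal illustration using the sentences given above, not part of the original essay) enumerates the outcomes and shows that confessing is each prisoner’s best reply whatever the accomplice does, which is exactly what makes it the dominant strategy:

```python
# Years in prison for (this prisoner, the accomplice), indexed by choices.
# 'confess' = betray the accomplice; 'silent' = stay quiet.
PAYOFFS = {
    ('confess', 'confess'): (5, 5),    # both own up
    ('silent',  'silent'):  (1, 1),    # both stay quiet (lesser charge)
    ('confess', 'silent'):  (0, 15),   # squealer goes free
    ('silent',  'confess'): (15, 0),   # the silent one gets fifteen years
}

def best_response(other_choice):
    """The choice minimising this prisoner's own sentence,
    given the accomplice's (unknown) choice."""
    return min(('confess', 'silent'),
               key=lambda mine: PAYOFFS[(mine, other_choice)][0])

# Whatever the accomplice does, confessing yields the shorter sentence...
assert best_response('silent') == 'confess'    # 0 years beats 1 year
assert best_response('confess') == 'confess'   # 5 years beats 15 years

# ...so both confess and serve five years, though mutual silence
# would have cost each of them only one.
print(PAYOFFS[('confess', 'confess')])  # (5, 5)
print(PAYOFFS[('silent', 'silent')])    # (1, 1)
```

Mutual confession is the game’s equilibrium even though it is jointly worse than mutual silence, which is the whole force of the dilemma.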

Presaging these types of game-theoretic arguments, the English philosopher Thomas Hobbes, in Leviathan (1651), described citizens believing, on general principle, that they’re best off with unrestrained freedom. Though, as Hobbes theorised, they come to realise there are occasions when their interests are better served by cooperating, the aim being to jointly accomplish things not doable by an individual alone. However, some individuals may inconsiderately conclude their interests are best served by reaping the benefits of collaboration — that is, soliciting help from a neighbour in the form of physical labour, equipment, and time in tilling — but later defaulting when the time comes for such help to be reciprocated.


Resentment, distrust, and cutthroat competitiveness take hold. Faith in the integrity of neighbours in the community plummets, and the chain of sharing resources to leverage the force-multiplicity of teamwork is broken. Society is worse off — where, as Hobbes memorably put it, life then becomes all the more ‘solitary, poor, nasty, brutish and short’. Hobbes’s conclusion, to avoid what he referred to as a ‘war of all against all’, was that people therefore need a central government — operating with significant authority — holding people accountable and punishing accordingly, intended to keep citizens and their transactions on the up and up.


What’s germane about Hobbes’s example is how its core themes resonate with today’s game theory. In particular, Hobbes’s argument regarding the need for an ‘undivided’, authoritative government is in line with modern-day game theorists’ solutions to protecting people against what theorists label as ‘social dilemmas’. That is, when people cause fissures within society by dishonourably taking advantage of other citizens rather than cooperating and reciprocating assistance, where collaboration benefits the common good. To Hobbes, the strategic play is between what he refers to as the ‘tyranny’ of an authoritative government and the ‘anarchy’ of no government. He argues that tyranny is the lesser ‘evil’ of the two. 


In dissecting real-world ‘games’, people have rationally intuited workable strategies, with those solutions sufficing in many everyday circumstances. What the methodologies of game theory offer are ways to formalise, validate, and optimise the outcomes of such intuitions where outcomes matter more, all the while taking into account the opponent and his anticipated strategy, and extracting the highest benefit from choices based on one’s principles and preferences.