Monday, 28 August 2023

A Word to the Wise

Philosophy is a sailboat that deftly catches the fair breeze…


By Andrew Porter


We live in a time in which most people, were you to ask them ‘Do you think you’re wise?’, would look askance or confused and not answer straightforwardly. They have not been prepared for the question by long anticipation or by living in its habitat. You might hear answers such as ‘I’m wise about some things’ or ‘I’m pretty savvy when it comes to handling people’. But your question would remain unanswered.

Maybe it’s the circles I run in, but it seems that there's little to no hankering for wisdom; it is not prevalent. It is as if many people feel that moral relativism – the common zeitgeist – has taken them off the hook and they are relieved. But choices have a way of illuminating obvious help or harm. There’s really no getting off the hook.

Wisdom can be encapsulated in a reasoned decision by an individual, but it is always in tune with larger reason. One of the great things about Plato as a philosopher is that he walks around and into the thick of the question of wisdom with boldness and perspective. A champion of reason, he grounds human morality in virtue, but emphasises that it is part of a ‘virtue’ of reality: the nature and function of the ontologically real is to be good, true, and beautiful.

This immersion of humankind and personal choices in a larger environment seems a crucial lesson for our times. This odd and ungrounded era we live in does not have a ready and able moral vocabulary; it, more often than not, leaves moral nuance like an abandoned shopping cart in the woods. Why is Plato one of the best voices to re-energise as his philosophy applies to current-day issues and angst?

One of the problems of individuals and institutions in contemporary times is that they think they are wise without ever examining how and if that’s true. So often, they – whether you yourself, a spouse, a boss, politicians, or fellow citizens – assume a virtue they own not. This is exactly what Socrates, in Plato's hands, addresses. What are some of the problems in the world open to reform or transformation?

Certainly, social justice issues continue to rear their head and undermine an equitable society. Entrenched power systems and attendant attitudes are not only slow to respond, but display no moral understanding. Today, it seems there is a raft of problems, from psychological to philosophical, and the consequences turn dire. At the root of all actual and potential catastrophes, it seems, is a lack of that one thing that has been waylaid, discarded, and ignored: wisdom.

Plato crafted his philosophy about soul and virtue, justice and character, in alignment with his metaphysics. This is its genius: making a harmony of inner and outer.

In the Republic, Plato himself oscillates between saying that a philosopher-king – the only assurance that the city would be happy and just – would be a lover of wisdom and saying that he would be actually wise. In our time, the problem is a lack of desire to find or inculcate wisdom. Societies have, in general, hamstrung themselves. We do not have ready tools to care about and value wisdom, however far off. We do not, to any cogent degree, educate children to be philosopher-kings of their own lives.

Western societies and perhaps Eastern ones as well have not increased in wisdom because they have abandoned the pursuit. The task is left unattended. The current problem is not that the world (or smaller entities such as companies, schools, and individuals) cannot find a truly wise person; so-called civilisation acts wilfully against finding or even thinking about finding such. It is a mobile home that's been put up on blocks.

Philosophy can inculcate the kind of consciousness that the twentieth-century Swiss philosopher Jean Gebser called integral reality, which perceives a truth that, as he says, ‘transluces’ both the world and humankind (in the sense of shining light through). In short, philosophy holds the promise of educating. It is not a crazy old man on his porch, moving his cane to tell the traffic to slow down; rather, philosophy is a sailboat that deftly catches the fair breeze – and moves us forward.

Monday, 7 August 2023

The Dubious Ethics of the Great Food Reset



By Martin Cohen
 

There’s a plan afoot to change the way you eat. Meat is destroying the land, fish and chips destroy the sea, and dairy is just immoral. Open the paper and you'll see a piece on how new biotechnologies are coming to the rescue. It's all presented as a fait accompli, with the result that today we are sleepwalking towards not only a "meat-free" future, but one in which there are no farm animals, no milk, no cheese, no butter – no real food, in short. And that's not in our interests, nor (less obviously) in the interest of biodiversity and the environment. There's just the rhetoric that it is "for the planet".

According to researchers at the US think-tank RethinkX, “we are on the cusp of the fastest, deepest, most consequential disruption” of agriculture in history. And it's happening fast. They say that by 2030 the entire US dairy and cattle industry will have collapsed, as “precision fermentation” – producing animal proteins more efficiently via microbes – “disrupts food production as we know it”.
There are trillions of dollars at stake and very little public debate about it. Instead, there's a sophisticated campaign to persuade people that this revolution is both inevitable and beyond criticism.

No wonder Marx declared that food lay at the heart of all political structures and warned of an alliance of industry and capital intent on both controlling and distorting food production.

The Great Food Reset is a social and political upheaval that affects everyone, yet at the moment the debate is largely controlled by the forces promoting the changes: powerful networks of politicians and business leaders, such as the United Nations Environment Program, the so-called EAT-Lancet "Commission" (it's not really a commission – how words mislead!) and the World Economic Forum, all sharing a rationale of 'sustainable development', market expansion, societal design, and resource control. Vocal supporters include the liberal media and academics who, perversely, present the movement as though it were part of a grassroots revolution.

There have been plenty of political programmes designed to push people into ‘the future’. Often, they flirt with increasingly intolerant compulsion. So too, with The Great Food Reset. Governments are already imposing heavy burdens on traditional farming and attempting to penalise the sale of animal products in the marketplace - either on the grounds that they are ‘unhealthy’ or, even more sweepingly, that they are bad for the environment.

In recent months, the steam has gone out of the “vegan food revolution”, mainly because people like their traditional foods more than the new ones, which typically are made from the four most lucrative cash crops: wheat, rice, maize and soybean. Incredibly, and dangerously, from over half a million plant species on the planet, we currently rely on just these four crops for more than three-quarters of our food supply. Animal sourced foods are our link to food variety.

But there's another reason to defend animal farming, which is that for much of the world, small farms are humane farms, with the animals enjoying several years of high-quality life in the open fields and air. The new factory foods have no need for animals, and the argument that, well, better dead than farmed, just doesn't hold water – at least for traditional farms. It's the fundamental ethical dilemma: yes, death is terrible – but is it worse never to have lived?

In recent decades, we’ve seen many areas of life remodelled, whether we wanted them to be or not. But to dictate how we grow food, how we cook food, and how we eat it, may just be a step too far.

Monday, 17 July 2023

When Is a Heap Not a Heap? The Sorites Paradox and ‘Fuzzy Logic’


By Keith Tidman
 

Imagine you are looking at a ‘heap’ of wheat comprising some several million grains and just one grain is removed. Surely you would agree with everyone that afterward you are still staring at a heap. And that the onlookers were right to continue concluding ‘the heap’ remains reality if another grain were to be removed — and then another and another. But as the pile shrinks, the situation eventually gets trickier.

 

If grains continue to be removed one at a time, in incremental fashion, when does the heap no longer qualify, in the minds of the onlookers, as a heap? Which numbered grain makes the difference between a heap of wheat and not a heap of wheat? 

 

Arguably we face the same conundrum if we were to reverse the situation: starting with zero grains of wheat, then incrementally adding one grain at a time, one after the other (n + 1, n + 2 ...). In that case, which numbered grain causes the accumulating grains of wheat to transition into a heap? Put another way, what are the borderlines between true and not true as to pronouncing there’s a heap?

 

What we’re describing here is called the Sorites paradox, invented in the fourth century BC by the philosopher Eubulides of the Megarian school, which was named after Euclides of Megara, one of the pupils of Socrates. The school, or group, is famous for paradoxes like this one. ‘Sorites’, by the way, derives not from a particular person, but from the Greek word soros, meaning ‘heap’ or ‘pile’. The focus here is on the boundary of ‘being a heap’ or ‘not being a heap’, which is indistinct when single grains are either added or removed. The paradox is deceptive in appearing simple, even simplistic, yet any number of critically important real-world applications attest to its decided significance.

 

A particularly provocative case in point, exemplifying the central incrementalism of the Sorites paradox, concerns deciding when a fetus transitions into a person. Across the milestones of conception, birth, and infancy, the fetus-cum-person acquires increasing physical and cognitive complexity and sophistication, occurring in successively tiny changes. This involves not just the number of features, but of course also the particular type of features (that is, qualitative factors). Leading us to ask, what are the borderlines between true and not true as to pronouncing there’s a person? As we know, this example of gradualism has led to highly consequential medical, legal, constitutional, and ethical implications being heatedly and tirelessly debated in public forums.

 

Likewise, with regard to this same Sorites-like incrementalism, we might assess which ‘grain-by-grain’ change rises to the level of a ‘human being’ close to the end of a life — when, let’s say, deep dementia increasingly ravages aspects of a person’s consciousness, identity, and rationality, greatly impacting awareness. Or, say, when some other devastating health event results in gradually nearing brain death, and alternative decisions hover perilously over how much to intervene medically, given best-in-practice efforts at a prognosis and taking into account the patient’s and family’s humanity, dignity, and socially recognised rights.

 

Or take the stepwise development of ‘megacomplex artificial intelligence’. Again, this involves consideration of not just ‘how many features’ (n + 1 or n - 1), but also ‘which features’, the latter entailing qualitative features. The discussion has stirred intense debate over the race for intellectual competitiveness, prompting hyperbolic public alarms about ‘existential risks’ to humanity and civilisation. The machine equivalence of human neurophysiology is speculated to transition, over years of gradual optimisation (and down the road, even self-optimisation), into human-like consciousness, awareness, and cognition. Leading us to ask, where are the borderlines between true and not true as to pronouncing it has consciousness and greater-than-human intelligence?

 

In the three examples of Sorites ‘grain-by-grain’ incrementalism above — start of life, end of life, and artificial general intelligence — words like ‘human’, ‘consciousness’, ‘perception’, ‘sentience’, and ‘person’ provide grist for neuroscientists, philosophers of mind, ethicists, and AI technologists to work with, until the desired threshold is reached. The limitations of natural language, even in circumstances mainly governed by the prescribed rules of logic and mathematics, might not make it any easier to concretely describe these crystallising concepts.

 

Given the nebulousness of terms like personhood and consciousness, which tend to bob up and down in natural languages like English, bivalent logic — where a statement is either true or false, but not both or in-between — may be insufficient. The Achilles’ heel is that the meaning of these kinds of terms may obscure truth as we struggle to define them. Whereas classical logic says there either is or is not a heap, with no shades in the middle, there’s something called fuzzy logic that scraps bivalence.

 

Fuzzy logic recognises there are both large and subtle gradations between categorically true and categorically false. There’s a continuum, where statements can be partially true and partially false, while also shifting in their truth value. A state of becoming, one might say. A line may thus be drawn between concepts that lie on such continuums. Accordingly, as individual grains of wheat are removed, the heap becomes, in tiny increments, less and less a heap — arriving at a threshold where people may reasonably concur it’s no longer a heap.
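To make the idea concrete, here is a minimal sketch, in Python, of what a fuzzy ‘heapness’ function might look like. The grain thresholds are purely illustrative assumptions, not anything fixed by the paradox itself; the point is only that the truth value of ‘this is a heap’ changes by degrees rather than flipping at a single grain.

```python
# A minimal sketch of fuzzy "heapness", using illustrative (assumed) thresholds:
# below LOWER grains the collection is definitely not a heap (membership 0.0),
# above UPPER grains it definitely is (membership 1.0), and in between the
# truth value shifts gradually rather than switching at one particular grain.

LOWER = 10       # hypothetical: fewer grains than this is clearly not a heap
UPPER = 10_000   # hypothetical: at least this many grains is clearly a heap

def heapness(grains: int) -> float:
    """Degree of truth, between 0 and 1, of the statement 'this is a heap'."""
    if grains <= LOWER:
        return 0.0
    if grains >= UPPER:
        return 1.0
    return (grains - LOWER) / (UPPER - LOWER)

# Removing or adding one grain changes the truth value only slightly, which is
# the fuzzy-logic answer to the paradox: no single grain does all the work.
for n in (5, 100, 5_000, 9_999, 10_000):
    print(n, round(heapness(n), 4))
```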

 

That tipping point is key, for vagueness isn’t just a matter of logic, it’s also a matter of knowledge and understanding (a matter of epistemology). In particular, what do we know, with what degree of certainty and uncertainty do we know it, when do we know it, and when does what we know really matter? Also, how do we use natural language to capture all the functionality of that language? Despite the gradations of true and false that we just talked about in confirming or refuting a heap, realistically the addition or removal of just one grain does in fact tip whether it’s a heap, even if we’re not aware which grain it was. Just one grain, that is, ought to be enough in measuring ‘heapness’, even if it’s hard to recognise where that threshold is.

 

Another situation involves the moral incrementalism of decisions and actions: what are the borderlines between true and not true as to pronouncing that a decision or action is moral? An important case is when we regard or disregard the moral effects of our actions. Such as, environmentally, on the welfare of other species sharing this planet, or concerning the effects on the larger ecosystem in ways that exacerbate the extreme outcomes of climate change.

 

Judgments as to the merits of actions are not ethically bivalent, either — by which I mean they do not tidily split between being decidedly good or decidedly bad, leaving out any middle ground. Rather, according to fuzzy logic, judgments allow for ethical incrementalism between what’s unconditionally good at one extreme and what’s unconditionally bad at the other extreme. Life doesn’t work quite so cleanly, of course. As we discussed earlier, the process entails switching out from standard logic to allow for imprecise concepts, and to accommodate the ground between two distant outliers.

 

Oblique concepts such as ‘good versus bad’, ‘being human’, ‘consciousness’, ‘moral’, ‘standards’ — and, yes, ‘heap’ — have very little basis from which to derive exact meanings. A classic example of such imprecision is voiced by science’s uncertainty principle: that is, we cannot simultaneously know both the position and the momentum of a particle with arbitrary precision. As our knowledge of one factor increases in precision, knowledge of the other decreases in precision.
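In the standard notation of physics (stated here for reference, not drawn from the essay itself), the principle puts a hard lower bound on the product of the two uncertainties, where Δx is the spread in position, Δp the spread in momentum, and ħ the reduced Planck constant:

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\]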

 

The assertion that ‘there is a heap’ becomes less true the more we take grains away from a heap, and becomes increasingly true the more we add grains. Finding the borderlines between true and not true in the sorts of consequential pronouncements above is key. And so, regardless of the paradox’s ancient provenance, the gradualism of the Sorites metaphor underscores its value in making everyday determinations between truth and falsity.


Monday, 26 June 2023

Ideas Animate Democracy


Keith Tidman
 

The philosopher Soren Kierkegaard once advised, ‘Life can only be understood backwards … but it must be lived forward’ — that is, life understood with one eye turned to history, and presaged with the other eye turned to competing future prospects. An observation about understanding and living life that applies across the board, to individuals, communities, and nations. Another way of putting it is that ideas are the grist for thinking not only about ideals but about the richness of learnable history and the alternative futures from which society asserts agency in freely choosing its way ahead. 


Of late, though, we seem to have lost sight of the fact that one way for democracy to wilt is to shunt aside ideas that might otherwise inspire minds to think, imagine, solve, create, discover and innovate — the source of democracy’s intellectual muscularity. For reflexively rebuffing ideas and their sources is really about constraining inquiry and debate in the public square. Instead, there has been much chatter about democracies facing existential grudge matches against exploitative autocratic regimes that issue their triumphalist narrative and view democracy as weak-kneed.


In mirroring the decrees of the Ministry of Truth in the dystopian world of George Orwell’s book Nineteen Eighty-Four — where two plus two equals five, war is peace, freedom is slavery, and ignorance is strength — unbridled censorship and historical revisionism begin and end with the fear of ideas. Ideas snubbed by authoritarians’ heavy hand. The short of it is that prohibitions on ideas end up a jumbled net, a capricious exercise in power and control. Accordingly, much exertion is put into shaping society’s sanctioned norms, where dissent isn’t brooked. On this point the philosopher Hannah Arendt cautioned, ‘Totalitarianism has discovered a means of dominating and terrorising human beings from within’. Trodden-upon voting and the ardent circulation of propagandistic themes, both of which torque reality, hamper free expression.

 

This tale about prospective prohibitions on ideas is about choices between the resulting richness of thought or the poverty of thought — a choice we must get right, and can do so only by making it possible for new intellectual shoots to sprout from the raked seedbed. The optimistic expectation from this is that we get to understand and act on firmer notions of what’s real and true. But which reality? One reality is that each idea that’s arbitrarily embargoed delivers yet another chink in democracy’s armour; a very different reality is that each idea, however provocative, allows democracy to flourish.

 

Only a small part of the grappling over ideas is for dominion over which ideas will reasonably prevail long term. The larger motive is to honour the openness of ideas’ free flow, to be celebrated. This exercise brims with questions about knowledge. Like these: What do we know, how do we know it, with what certainty or uncertainty do we know it, how do we confirm or refute it, how do we use it for constructive purposes, and how do we allow for change? Such fundamental questions crisscross all fields of study. New knowledge ferments to improve insight into what’s true. Emboldened by this essential exercise, an informed democracy is steadfastly enabled to resist the siren songs of autocracy.

 

Ideas are accelerants in the public forum. Ideas are what undergird democracy’s resilience and rootedness, on which standards and norms are founded. Democracy at its best allows for the unobstructed flow of different social and political thought, side by side. As Benjamin Franklin, polymath and statesman, prophetically said: ‘Freedom of speech is a principal pillar of a free government’. A lead worth following. In this churn, ideas soar or flop by virtue of the quality of their content and the strength of their persuasion. Democracy allows its citizens to pick which ideas normalise standards — through debate and subjecting ideas to scrutiny, leading to their acceptance or refutation. Acid tests, in other words, of the cohesion and sustainability of ideas. At its best, debate arouses actionable policy and meaningful change.

 

Despite society being buffeted daily by roiling politics and social unrest, democracy’s institutions are resilient. Our institutions might flex under stress, but they are capable of enduring the broadsides of ideological competitiveness as society makes policy. The democratic republic is not existentially imperiled. It’s not fragilely brittle. America’s Founding Fathers set in place hardy institutions, which, despite public handwringing, have endured challenges over the last two-and-a-half centuries. Historical tests of our institutions’ mettle have inflicted only superficial scratches — well within institutions’ ability to rebound again and again, eventually as robust as ever.

 

Yet, as Aristotle importantly pointed out by way of a caveat to democracy’s sovereignty and survivability, 


‘If liberty and equality . . . are chiefly to be found in democracy, they will be attained when all persons share in the government to the utmost.’


A tall order, as many have found, but one that’s worthy and essential, teed up for democracies to assiduously pursue. Democracy might seem scruffy at times. But at its best, democracy ought not fear ideas. Fear that commonly bubbles up from overwrought narrative and unreasoned parochialism, in the form of ham-handed constraints on thought and expression.

 

The fear of ideas is often more injurious than the content of ideas, especially in the shadows of disagreeableness intended to cause fissures in society. Ideas are thus to be hallowed, not hollowed. To countenance contesting ideas — majority and minority opinions alike, forged on the anvil of rationalism, pluralism, and critical thinking — is essential to the origination of constructive policies and, ultimately, how democracy is constitutionally braced.

 

 

Monday, 12 June 2023

The Euthyphro Dilemma: What Makes Something Moral?

The sixteenth-century nun and mystic, Saint Teresa. In her autobiography, she wrote that she was very fond of St. Augustine … for he was a sinner too

By Keith Tidman  

Consider this: Is the pious being loved by the gods because it is pious, or is it pious because it is being loved by the gods? – Plato, Euthyphro


Plato has Socrates asking just this of the Athenian prophet Euthyphro in one of his most famous dialogues. The characteristically riddlesome inquiry became known as the Euthyphro dilemma. Another way to frame the issue is to flip the question around: Is an action wrong because the gods forbid it, or do the gods forbid it because it is wrong? This version presents what is often referred to as the ‘two horns’ of the dilemma.

 

Put another way, if what’s morally good or bad is only what the gods arbitrarily make something, called the divine command theory (or divine fiat) — which Euthyphro subscribed to — then the gods may be presumed to have agency and omnipotence over these and other matters. However, if, instead, the gods simply point to what’s already, independently good or bad, then there must be a source of moral judgment that transcends the gods, leaving that other, higher source of moral absolutism yet to be explained millennia later. 

 

In the ancient world the gods notoriously quarreled with one another, engaging in scrappy tiffs over concerns about power, authority, ambition, influence, and jealousy, on occasion fueled by unabashed hubris. Disunity and disputation were the order of the day. Sometimes making for scandalous recounting, these quarrels comprised the stuff of modern students’ soap-opera-styled mythological entertainment. Yet, even when there is only one god, disagreements over orthodoxy and morality occur aplenty. The challenge mounted by the dilemma is as important to today’s world of a generally monotheistic god as it was to the polytheistic predispositions of ancient Athens. The medieval theologians’ explanations are not enough to persuade:


‘Since good as perceived by the intellect is the object of the will, it is impossible for God to will anything but what His wisdom approves. This is, as it were, His law of justice, in accordance with which His will is right and just. Hence, what He does according to His will He does justly: as we do justly when we do according to the law. But whereas law comes to us from some higher power, God is a law unto Himself’ (St. Thomas Aquinas, Summa Theologica, First Part, Question 21, first article reply to Obj. 2).


In the seventeenth century, Gottfried Leibniz offered a firm challenge to ‘divine command theory’, asking whether right and wrong can be known only by divine revelation. He suggested, rather, that there ought to be reasons, apart from religious tradition alone, why particular behaviour is moral or immoral:

 

‘In saying that things are not good by any rule of goodness, but sheerly by the will of God, it seems to me that one destroys, without realising it, all the love of God and all his glory. For why praise him for what he has done if he would be equally praiseworthy in doing exactly the contrary?’ (Discourse on Metaphysics, 1686). 

 

Meantime, today’s monotheistic world religions offer, among other holy texts, the Bible, Qur’an, and Torah, bearing the moral and legal decrees professed to be handed down by God. But despite the situations’ dissimilarity — the ancient world of Greek deities and modern monotheism (as well as some of today’s polytheistic practices) — both serve as examples of the ‘divine command theory’. That is, what’s deemed pious is presumed to be the case precisely because God chooses to love it, in line with the theory. That pious something or other is not independently sitting adrift, noncontingently virtuous in its own right, with nothing transcendentally making it so.

 

This presupposes that God commands only what is good. It also presupposes that, for example, things like the giving of charity, the avoidance of adultery, and the refrain from stealing, murdering, and ‘graven images’ have their truth value from being morally good if, and only if, God loves these and other commandments. The complete taxonomy (or classification scheme) of edicts being aimed at placing guardrails on human behaviour in the expectation of a nobler, more sanctified world. But God loving what’s morally good for its own sake — that is, apart from God making it so — clearly denies ‘divine command theory’.

 

For, if the pious is loved by the gods because it is pious, which is one of the interpretations offered by Plato (through the mouth of Socrates) in challenging Euthyphro’s thinking, then it opens the door to an authority higher than God. Where matters of morality may exist outside of God’s reach, suggesting something other than God being all-powerful. Such a scenario pushes back against traditionally Abrahamic (monotheist) conceptualisations.

 

Yet, whether the situation calls for a single almighty God or a yet greater power of some indescribable sort, the philosopher Thomas Hobbes, who like St. Thomas Aquinas and Averroës believed that God commands only what is good, argued that God’s laws must conform to ‘natural reason’. Hobbes’s point makes for an essential truism, especially if the universe is to have rhyme and reason. This being true even if the governing forces of natural law and of objective morality are not entirely understood or, for that matter, not compressible into a singularly encompassing ‘theory of all’. 

 

Because of the principles of ‘divine command theory’, some people contend the necessary takeaway is that there can be no ethics in the absence of God to judge something as pious. In fact, Fyodor Dostoyevsky, in The Brothers Karamazov, presumptuously declared that ‘if God does not exist, everything is permitted’. Surely not so; you don’t have to be a theist of faith to spot the shortsighted dismissiveness of his assertion. After all, an atheist or agnostic might recognise the benevolence, even the categorical need, for adherence to manmade principles of morality, to foster the welfare of humanity at large for its own sufficient sake. Secular humanism, in other words – which greatly appeals to many people.

 

Immanuel Kant’s categorical imperative supports these human-centered, do-unto-others notions: ‘Act only in accordance with that maxim through which you can at the same time will that it become a universal law’. An ethic of respect toward all, as we mortals delineate between right and wrong. Even with ‘divine command theory’, it seems reasonable to suppose that a god would have reasons for preferring that moral principles not be arrived at willy-nilly.

  

Monday, 29 May 2023

Life in the Slow Lane


Illustration by Clifford Harper/Agraphia.co.uk
By Andrew Porter

Three common plagues were cited in the early New England settlements: wolves, rattlesnakes, and mosquitoes. Our current-day ‘settlements’ – cities and towns – now have their own plagues: a crush of too many people, crummy attitudes, pollution, and retrogressive political actions. How do freedom and power play out amongst individuals and communities?

One lens that can help us gain perspective on our life in relation to necessities and obligations beyond us is to think about our agency and our values. If we get it right about what freedom and power are, we might clarify what values we want to exercise and embody.

People pushed back against the wolves and did what they could against other ‘scourges’, most regularly by killing them. This seemed like freedom – power asserted. Over the centuries, peoples around the world – coursing through trials like wars and epidemics and bouts of oppression, as well as various forms of enlightenment and progress on human rights – have struggled to articulate freedom and power to make existence shine. To fulfill purposes is the human juggernaut; but what purposes? It is pretty vital that we figure out what freedom and power are in this time of converging crises, so that actual life might flourish. The trouble is, so many people are commonly thrown off by false and unjustifiable versions of freedom and power.

In our fast-paced life, we so-called civilised humans have to decide how to achieve balance. This means some kind of genuine honouring of life in its physical and spiritual aspects. The old work-life balance is only part of it. What does vitality itself suggest is optimal or possible, and how do we make sense of what's at stake as we prioritise between competing goods?

If a parent decides that it is a priority to take care of a newborn child rather than sacrifice that time and importance to time at work, they may well be making a fine decision. Freedom here is in the service of vital things. We might say that in general freedom is that which makes you whole and that power is the exercise of your wholeness. Or, freedom is the latitude to live optimally and power is potency for good.

Since freedom means eschewing the lesser and opting for – and living out – what has more value, we had better do some careful defining. All situations confirm that freedom only accrues with what is healthful and attends flourishing. If one says, “Top functioning for me is having a broad range of options, the whole moral range,” you can see how this is problematic. We as humans have the range, but our freedom is in limiting ourselves to the good portion.

Power is commonly considered that which lords the most force over others and exerts the biggest influence broadly. Isn’t this what a hurricane does, or a viral infection, or an invasion? If you look around, though, all the people with so-called power actually dominate using borrowed power: that is, power borrowed from others or obtained on the backs of others, whether human or otherwise. This kind of power – often manifesting in greed and exploitation – is mere thievery. And what about power over one’s own liabilities to succumb or other temptations?

For many people, life in the slow lane is much more satisfying than that in the fast one. However, the big deal may be about getting off the highway altogether. What I am suggesting is that satisfaction and contentment are in the proper measure of freedom and power. And the best definition for organisms is probably that long-established by the planet. Earth has in place various forms of ‘nature’ with common value-elements.

For us, to be natural probably means being both like and unlike the rest of nature. It is some kind of unique salubrity. An ever-greater bulk of the world lives in a busy, highly industrialised society, and the idea of living naturally seems like something that goes against our human mission to separate ourselves from the natural world. But the question remains: is the freedom and power that comes with ‘natural living’ an antiquated thing, or can you run the world on it; can it work for a life?

Kant spoke of our animality in his Religion Within the Boundaries of Mere Reason (1794), part of his investigation of the ethical life. In this, he argues that animality is an ineliminable and irreducible component of human nature and that the human being, taken as a natural being, is an animal being. Kant says that animality is an “original predisposition [anlage] to the good in human nature”. We increasingly see that being human means selecting the wisdom of nature, often summed up in ecological equipoise, so that we can survive, thrive, and have reason to call ourselves legitimate. Freedom in this consists of developing greater consciousness about our long-term place on Earth (if such is possible), and legitimate power is in exact proportion to the degree we limit ourselves to human ecology.

Life on its own grass-centered lane has figured out what true freedom and power are. The Vietnamese Buddhist monk and global spiritual leader Thich Nhat Hạnh once wrote:
“Around us, life bursts with miracles – a glass of water, a ray of sunshine, a leaf, a caterpillar, a flower, laughter, raindrops....When we are tired and feel discouraged by life’s daily struggles, we may not notice these miracles, but they are always there.”
Figuring out the most efficacious forms of freedom and power promises to make us treat ourselves and others more justly.

Monday, 15 May 2023

‘Game Theory’: Strategic Thinking for Optimal Solutions

Cortés began his campaign to conquer the Aztec Empire by having all but one of his ships scuttled, which meant that he and his men would either conquer the Aztec Empire or die trying. Initially, the Aztecs did not see the Spanish as a threat. In fact, their ruler, Moctezuma II, sent emissaries to present gifts to these foreign strangers. 



By Keith Tidman

 

The Peloponnesian War, chronicled by the historian Thucydides, pitted two major powers of Ancient Greece against each other, the Athenians and the Spartans. The Battle of Delium, which took place in 424 BC, was one of the war’s decisive battles. In two of his dialogues (Laches and Symposium), Plato has Socrates, who actually fought in the war, apocryphally recall the battle in ways that bear on combatants’ strategic choices.

 

One episode recalls a soldier on the front line, awaiting the enemy to attack, pondering his options in the context of self-interest — what works best for him. For example, if his comrades are believed to be capable of successfully repelling the attack, his own role will contribute only inconsequentially to the fight, yet he risks pointlessly being killed. If, however, the enemy is certain to win the battle, the soldier’s own death is all the more likely and senseless, given that the front line will be routed, anyway, no matter what it does.

 

The soldier concludes from these mental somersaults that his best option is to flee, regardless of which side wins the battle. His ‘dominant strategy’ being to stay alive and unharmed. However, based on the same line of reasoning, all the soldier’s fellow men-in-arms should decide to flee also, to avoid the inevitability of being cut down, rather than to stand their ground. Yet, if all flee, the soldiers are guaranteed to lose the battle before the sides have even engaged.

 

This kind of strategic analysis is sometimes called game theory. History provides us with many other examples of game theory applied to the real world, too. In 1519, the Spanish conqueror Cortés landed in the Western Hemisphere, intending to march inland and vanquish the Aztec Empire. He feared, however, that his soldiers, exhausted from the ocean journey, might be reluctant to fight the Aztec warriors, who happened also to greatly outnumber his own force.

 

Instead of counting on the motivation of individual soldiers’ courage or even group esprit de corps, Cortés scuttled his fleet. His strategy was to remove the risk of the ships tempting his men to retreat rather than fight — and thus, with no option, to pursue the Aztecs in a fight-or-die (rather than a fight-or-flee) scenario. The calculus for each of Cortés’s soldiers in weighing his survivalist self-interest had shifted dramatically. At the same time, in brazenly scuttling his ships in the manner of a metaphorical weapon, Cortés wanted to dramatically demonstrate to the enemy that, for reasons the latter couldn’t fathom, his outnumbered force nonetheless appeared fearlessly confident to engage in the upcoming battle.

 

It’s a striking historical example of one way in which game theory provides means to assess situations where parties make strategic decisions that take account of each other’s possible decisions. The parties aim to arrive at best strategies in the framework of their own interests — business, economic, political, etc. — while factoring in what they believe to be the thinking (strategising) of opposite players whose interests may align or differ or even be a blend of both.

 

The term, and the philosophy of game theory, is much more recent, of course, developed in the early twentieth century by the mathematician John von Neumann and the economist Oskar Morgenstern. They focused on the theory’s application to economic decision-making, given what they considered the game-like nature of the field of economics. Some ten years later, another mathematician, John Nash, along with others, expanded the discipline to include strategic decisions applicable to a wide range of fields and scenarios, analysing how competitors with diverse interests choose to contest with one another in pursuit of optimised outcomes. 

 

Whereas some of the earliest cases focused on ‘zero-sum’ games involving two players whose interests sharply conflicted, later scenarios and games were far more intricate. Such as ‘variable-sum’ games, where there may be all winners or all losers, as in a labour dispute. Or ‘constant-sum’ games, like poker, characterised as pure competition, entailing total conflict. The more intricately constructed games accommodate multiple players, involve a blend of shared and divergent interests, involve successive moves, and have at least one player holding more information with which to inform and shape his own strategic choices than his competitors hold in hand.

 

The techniques of game theory and the scenarios examined are notable for their range of applications, including business, economics, politics, law, diplomacy, sports, social sciences, and war. Some features of the competitive scenarios are challenging to probe, such as accurately discerning the intentions of rivals and trying to discriminate behavioural patterns. That being said, many features of scenarios and alternative strategies can be studied by the methods of game theory, grounded in mathematics and logic.

 

Among the real-world applications of the methods are planning to mitigate the effects of climate extremes; running management-labour negotiations to get to a new contract and head off costly strikes; siting a power-generating plant to reflect regional needs; anticipating the choices of voter blocs; selecting and rejecting candidates for jury duty during voir dire; engaging in a price war between catty-cornered grocery stores rather than both keeping their prices aligned and high; avoiding predictable plays in sports, to make it harder to defend against; foretelling the formation of political coalitions; and negotiating a treaty between two antagonistic, saber-rattling countries to head off runaway arms spending or outright conflict.

 

Perhaps more trivially, applications of game theory stretch to so-called parlour games, too, like chess, checkers, poker, and Go, which are finite in the number of players and optional plays, and in which progress is achieved via a string of alternating single moves. The contestant who presages a competitor’s optimal answer to their own move will experience more favourable outcomes than if they try to deduce that their opponent will make a particular move associated with a particular probability ranking.

 

Given the large diversity of ‘games’, there are necessarily multiple forms of game theory. Fundamental to each theory, however, is that features of the strategising are actively managed by the players rather than left to mere chance, which is why game theory goes several steps farther than probability theory alone.

 

The classic example of a two-person, noncooperative game is the Prisoner’s Dilemma. This is how it goes. Detectives believe that their two suspects collaborated in robbing a bank, but they don’t have enough admissible evidence to prove the charges beyond a reasonable doubt. They need more on which to base their otherwise shaky case. The prisoners are kept apart, out of hearing range of each other, as interrogators try to coax each into admitting to the crime.

 

Each prisoner mulls their options for getting the shortest prison term. But in deciding whether to confess, they’re unaware of what their accomplice will decide to do. However, both prisoners are mindful of their options and consequences: If both own up to the robbery, both get a five-year prison term; if neither confesses, both are sentenced to a one-year term (on a lesser charge); and if one squeals on the other, that one goes free, while the prisoner who stays silent goes to prison for fifteen years. 

 

The issue of trust is of course central to weighing the options presented by the ‘game’. In terms of sentences, both prisoners are better off choosing to act unselfishly and remain silent, with each serving one year. But if they choose to act selfishly in expectation of outmaneuvering the unsuspecting (presumed gullible) partner — which is to say, both prisoners picture themselves going free by spilling the beans while mistakenly anticipating that the other will stay silent — the result is much worse: a five-year sentence for both.
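The reasoning can be checked mechanically. Below is a minimal sketch in Python, using only the sentence lengths given above (everything else, including the function names, is illustrative), which enumerates each prisoner's best response to every choice the other might make and reports the one strategy pair that is stable: the game's Nash equilibrium.

```python
# A minimal sketch of the Prisoner's Dilemma described above.
# Payoffs are prison sentences in years (lower is better), taken from the text:
# both confess -> 5 years each; both stay silent -> 1 year each;
# one confesses while the other stays silent -> 0 years and 15 years respectively.

choices = ["confess", "silent"]

sentences = {
    ("confess", "confess"): (5, 5),
    ("silent", "silent"):   (1, 1),
    ("confess", "silent"):  (0, 15),
    ("silent", "confess"):  (15, 0),
}

def best_response_a(b_choice):
    """Prisoner A's sentence-minimising choice, given B's choice."""
    return min(choices, key=lambda a: sentences[(a, b_choice)][0])

def best_response_b(a_choice):
    """Prisoner B's sentence-minimising choice, given A's choice."""
    return min(choices, key=lambda b: sentences[(a_choice, b)][1])

# A pair of choices is a Nash equilibrium when each choice is a best response to the other.
for a in choices:
    for b in choices:
        if best_response_a(b) == a and best_response_b(a) == b:
            print(f"Equilibrium: A {a}, B {b} -> sentences {sentences[(a, b)]}")

# Prints only 'A confess, B confess -> (5, 5)': confessing is each prisoner's
# dominant strategy, even though mutual silence would leave both better off.
```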


Presaging these types of game-theoretic arguments, the English philosopher Thomas Hobbes, in Leviathan (1651), described citizens believing, on general principle, that they’re best off with unrestrained freedom. Though, as Hobbes theorised, they will come to realise there are occasions when their interests will be better served by cooperating. The aim being to jointly accomplish things not doable by an individual alone. However, some individuals may inconsiderately conclude their interests will be served best by reaping the benefits of collaboration — that is, soliciting help from a neighbour in the form of physical labour, equipment, and time in tilling — but later defaulting when the occasion arises for such help to be reciprocated.

 

Resentment, distrust, and cutthroat competitiveness take hold. Faith in the integrity of neighbours in the community plummets, and the chain of sharing resources to leverage the force-multiplicity of teamwork is broken. Society is worse off — where, as Hobbes memorably put it, life then becomes all the more ‘solitary, poor, nasty, brutish and short’. Hobbes’s conclusion, to avoid what he referred to as a ‘war of all against all’, was that people therefore need a central government — operating with significant authority — holding people accountable and punishing accordingly, intended to keep citizens and their transactions on the up and up.

 

What’s germane about Hobbes’s example is how its core themes resonate with today’s game theory. In particular, Hobbes’s argument regarding the need for an ‘undivided’, authoritative government is in line with modern-day game theorists’ solutions to protecting people against what theorists label as ‘social dilemmas’. That is, when people cause fissures within society by dishonourably taking advantage of other citizens rather than cooperating and reciprocating assistance, where collaboration benefits the common good. To Hobbes, the strategic play is between what he refers to as the ‘tyranny’ of an authoritative government and the ‘anarchy’ of no government. He argues that tyranny is the lesser ‘evil’ of the two. 

 

In dicing real-world ‘games’, people have rationally intuited workable strategies, with those solutions sufficing in many everyday circumstances. What the methodologies of game theory offer are ways to formalise, validate, and optimise the outcomes of select intuitions where outcomes matter more. All the while taking into account the opponent and his anticipated strategy, and extracting the highest benefits from choices based on one’s principles and preferences.

 

Monday, 1 May 2023

Problems with the Problem of Evil


By Keith Tidman

  

Do we really reside in what the German polymath Gottfried Wilhelm Leibniz referred to as ‘the best of all possible worlds’, picked by God from among an infinite variety of world orders at God’s disposal, based on the greatest number of supposed perfections? (A claim that the French Enlightenment writer Voltaire satirised in his novella Candide.)

 

How do we safely arrive at Leibniz’s sweeping assessment of ‘best’ here, given the world’s harrowing circumstances, from widespread violence to epidemics to famine, of which we’re reminded every day? After all, the Augustinian faith-based explanation for the presence of evil has been punishment for Adam and Eve’s original sin and expulsion from the Garden of Eden. From this emerged Leibniz’s term ‘theodicy’, created from two Greek words for the expression ‘justifying God’ (Theodicy: Essays on the Goodness of God, the Freedom of Man and the Origin of Evil, 1710).


No, there’s a problem … the ‘problem of evil’. If God is all powerful (omnipotent), all knowing (omniscient), all places (omnipresent), all good and loving (omnibenevolent), and all wise, then why is there evil in the very world that God is said to have designed and created? Why has God not averted or fixed the problem, instead giving evil free rein and abiding by noninterventionism? There is not just one form of evil, but at least two: moral evil (volitionally wrongful human conduct) and natural evil (ranging from illnesses and other human suffering, to natural law causing ruinous and lethal calamities).

 

There are competitor explanations for evil, of course, like that developed by the second-century Greek bishop Saint Irenaeus, whose rationalisation was that evil presented the population with incentives and opportunities to learn, develop, and evolve toward ever-greater perfection. The shortcoming with this Irenaean description, however, is that it fails to account for the ubiquity and diversity of natural disasters, like tsunamis, volcanoes, earthquakes, wildfires, hurricanes, and many other manifestations of natural law taking its toll around the globe.

 

Yet, it has been argued that even harmful natural hazards like avalanches and lightning, not just moral indiscretions, are part of the plan for people’s moral, epistemic growth, spurring virtues like courage, charity, gratitude, patience, and compassion. It seems that both the Augustinian and Irenaean models of the universe adhere to the anthropic principle: that cosmic constants are finely tuned (balanced on a knife’s edge) to allow for human life to exist at this location, at this point in time.

 

Meanwhile, although some people might conceivably respond to natural hazards and pressing moral hardships by honing their awareness, as some claim, other people are overcome by the devastating effects of the hazards. These outcomes point to another in the battery of explanations for evil, in the reassuring form of a spiritual life after death. Some people assert that such rewards may be expected to tower over mundane earthly challenges and suffering, and that the suffering that moral and natural evil evokes conditions people for the enlightenment of an afterlife. 

 

At this stage, the worldly reasons for natural hazards and moral torment (purportedly the intentions behind a god’s strategy) become apparent. Meanwhile, others argue that the searing realities of, say, the Holocaust or any other genocidal atrocities or savagery or warring in this world are not even remotely mitigated, let alone vindicated, by the anticipated jubilation of life after death, no matter the form that the latter might take.

 

Still another contending explanation is that what we label evil in terms of human conduct is not a separate ‘thing’ that happens to be negative, but rather is the absence of a particular good, such as the absence of hope, integrity, forbearance, friendship, altruism, prudence, principle, and generosity, among other virtues. In short, evil isn’t the opposite of good, but is the nonattendance of good. Not so simple to resolve in this model, however, is the following: Would not a god, as original cause, have had to create the conditions for that absence of good to come to be?

 

Others have asserted that God’s design and the presence of evil are in fact compatible, not a contradiction or intrinsic failing, and not preparation either for development in the here and now or for post-death enlightenment. American philosopher Alvin Plantinga has supported this denial of a contradiction between the existence of an all-capable and all-benevolent (almighty) god and the existence of evil:

 

‘There are people who display a sort of creative moral heroism in the face of suffering and adversity — a heroism that inspires others and creates a good situation out of a bad one. In a situation like this the evil, of course, remains evil; but the total state of affairs — someone’s bearing pain magnificently, for example — may be good. If it is, then the good present must outweigh the evil; otherwise, the total situation would not be good’ (God, Freedom, and Evil, 1977).

 

Or then, as British philosopher John Hick imagines, perhaps evil exists only as a corruption of goodness. Here is Hick’s version of the common premises stated and conclusion drawn: ‘If God is omnipotent, God can prevent evil. If God is perfectly good, God must want to prevent all evil. Evil exists. Thus, God is either not omnipotent or not perfectly good, or both’. It does appear that many arguments cycle back to those similarly couched observations about incidents of seeming discrepancy.

 

Yet others have taken an opposite view, seeing incompatibilities between a world designed by a god figure and the commonness of evil. Here, the word ‘design’ conveys similarities between the evidence of complex (intelligent) design behind the cosmos’s existence and complex (intelligent) design behind many things made by humans, from particle accelerators, quantum computers, and space-based telescopes, to cuneiform clay tablets and the carved code of Hammurabi law.


Unknowability matters, however, to this aspect of design and evil. For the presence, even prevalence, of evil does not necessarily contradict the logical or metaphysical possibility of a transcendental being as designer of our world. That being said, some people postulate that the very existence, as well as the categorical abstractness of qualities and intentions, of any such overarching designer are likely to remain incurably unknowable, beyond confirmation or falsifiability.

 

Although the argument by design has circulated for millennia, it was popularised by the English theologian William Paley early in the nineteenth century. Before him, the Scottish philosopher David Hume shaped his criticism of the design argument by paraphrasing Epicurus: ‘Is God willing to prevent evil, but not able? Then he is impotent. Is he able, but not willing? Then he is malevolent. Is he both able and willing? Whence then is evil? Is he neither able nor willing? Then why call him God?’ (Dialogues Concerning Natural Religion, 1779).

 

Another in the catalog of explanations of moral evil is itself associated with a provocative claim: that we have free will. That is, we are presented with the possibility, not inevitability, of moral evil. Left to their own unconstrained devices, people are empowered either to freely reject or freely choose immoral decisions or actions. From among a large constellation, like venality, malice, and injustice. As such, free will is essential to human agency and by extension to moral evil (for obvious reasons, leaving natural evil out). Plantinga is among those who promote this free-will defense of the existence of moral evil. 

 

Leibniz was wrong about ours being ‘the best of all possible worlds’. Better worlds are indeed imaginable, where plausibly evil in its sundry guises pales in comparison. The gauntlet as to what those better worlds resemble, among myriad possibilities, idles provocatively on the ground. For us to dare to pick up, perhaps. However, reconciling evil, in the presence of theistic paradoxes like professed omnipotence, omniscience, and omnibenevolence, remains problematic. As Candide asked, ‘If this is the best ... what are the others?’

 

Wednesday, 19 April 2023

Making the Real

Prometheus in conference…
By Andrew Porter


They say that myth is the communication of the memorable, or imitation of that which is on some level more real. Our inner myths – such as memory – make real what's true for us and we often communicate these lenses in stories, writing, art, and ways of being. What a person communicates, having been on their own hero's journey where they received the boon, is a kind of myth, a display of another place, where the animals are strange and the gods walk among us.

We even make the real in creating a fiction. But isn’t the real different from fiction? Is it a caveat to say that fiction can be more real than sensible experience? If we are true to the facts and the actual events, as well as to the depth of the characters involved and the flavour of the scenes we’ve lived in, are we not recounting a legitimate ‘inner tradition’? The experience is fresh and new in the telling; storytelling is the power of connection.

In making our own version of the real, teller and listener infuse myth with logos and vice versa. Poetry (of all kinds), for instance, is the intermediary between heroic times and pedestrian hearing. It is in a sense audience to itself, living the amazement in the memory and memorialising. Like any genuine recounting, poetry tries to communicate with respect for the receiver and deep understanding of what may be received. This is as much to say that the poet is more than a bridge; they are the synergy of two depths of being: past heights and current receiver; both, hopefully, sacrifice their separateness for the joining. Is a poet perhaps most authentically themselves in the bringing together of self, experience, and the other?

To locate the real means to get at the meaning beyond the bare events. This is done, I think, via another kind of central dynamic, between knowledge and sensitivity, or between reason and instinct. This middle ground is intuition, perhaps, or understanding of a rich sort, mixing reason and emotion or hearer and other land. Wonder is evoked or elicited in the clarity of ten thousand stars finding their way to eyes and brain.

Communication of the valuable, we might say, promises a complementarity between the transcendent world and the mundane world. It believes in wonder and growth. Its ultimate lesson is the good, even if of human potential. It comprehends that the real must be translated, that an insight cannot be dumped out of a bag with a shrug. At best, the communicator can feel the blazing value of the extraordinariness they have been beautifully exposed to and the worthy receiver carries it on, retains it, preserves it. This is a vital synergy. Aren’t the best times in life of this kind, when existence illuminates itself? Imagine believing what the storyteller imparts, that the gods exist, though they were somewhat mundane at the time. Spirit seems to flow when its electrons are in motion with the charge of it all.

Stories we’ve all heard are ‘invented stories’. Were they true? Art can perhaps convey a truth better than any other way could; even nature, typically banking on sharp reality with no moonshine, yet supports interpretation. If we can produce and reproduce a synergy of muthos and logos, what integration of a person or a society might ensue?

One current issue is how we interpret our place and role in history. What story are we telling ourselves? Is it illusion of the worst kind? Do we need new myths? In our narrowness we likely have a very skewed definition of real. There may be a chance to make ourselves implicate in nature's order in a human way and understand this as true techne. The arts can show us its benefit. But I am not holding my breath.

In ‘making the real’, we make ourselves. Our best selves are likely self-controlled as well as free in a broadly sanctioned way. Why has culture dropped the ball on creating a good story that we can follow? And what blend of myth and logos makes reality sing? Our time is not for dancing around the fire with faux-animal-heads on, but rather, one that tells stories that get it right. Why, it could be that, somewhere, a band of people are creating them even now.

Monday, 3 April 2023

The Chinese Room Experiment ... and Today’s AI Chatbots


By Keith Tidman

 

It was back in 1980 that the American philosopher John Searle formulated the so-called ‘Chinese room thought experiment’ in an article, his aim being to emphasise the bounds of machine cognition and to push back against what he viewed, even back then, as hyperbolic claims surrounding artificial intelligence (AI). His purpose was to make the case that computers don’t ‘think’, but rather merely manipulate symbols in the absence of understanding.

 

Searle subsequently went on to explain his rationale this way: 


‘The reason that no computer can ever be a mind is simply that a computer is only syntactical [concerned with the formal structure of language, such as the arrangement of words and phrases], and minds are more than syntactical. Minds are semantical, in the sense that they have … content [substance, meaning, and understanding]’.

 

He went on to point out, by way of further explanation, that the reigning technological metaphor for representing and trying to understand the brain has shifted repeatedly over the centuries: from Leibniz, who compared the brain to a mill, to Freud, who compared it to ‘hydraulic and electromagnetic systems’, to the present-day computer. None of these, frankly, has yet served as anything like a good analog of the human brain, given what we know today of its neurophysiology, experiential pathways, functionality, expression of consciousness, and the emergence of mind associated with it.

 

In a moment, I want to segue to today’s debate over AI chatbots, but first let’s recall Searle’s Chinese room argument in a bit more detail. It began with a person in a room, who accepts pieces of paper slipped under the door and into the room. The paper bears Chinese characters which, unbeknownst to the people outside, the monolingual person in the room has absolutely no ability to translate; the characters unsurprisingly look like unintelligible patterns of squiggles and strokes. The person in the room then feeds those characters into a digital computer, whose program (metaphorically represented in the original description of the experiment by ‘a book of instructions’) searches a massive database of written Chinese (originally represented by ‘a box of symbols’).

 

The powerful computer program can hypothetically find every possible combination of Chinese words in its records. When the computer spots a match with what’s on the paper, it makes a note of the string of words that immediately follow, printing those out so the person can slip the piece of paper back out of the room. Because of the perfect Chinese response to the query sent into the room, the people outside, unaware of the computer’s and program’s presence inside, mistakenly but reasonably conclude that the person in the room has to be a native speaker of Chinese.
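
Purely to underline how mechanical this procedure is, here is a minimal sketch in Python; the toy corpus and the function name are my own invention for illustration, not part of Searle’s description. The ‘program’ simply finds the incoming string in its records and hands back whatever immediately follows, with no understanding anywhere in the loop.

    # Toy stand-in for the 'book of instructions' plus 'box of symbols':
    # a blind lookup that finds the incoming characters in a stored corpus
    # and returns the text that immediately follows them. Nothing here
    # understands Chinese, or anything else.
    TOY_CORPUS = "什么是智慧 了解知识的界限"

    def reply_from_room(query: str, corpus: str = TOY_CORPUS) -> str:
        """Return the phrase that immediately follows the query in the corpus."""
        position = corpus.find(query)
        if position == -1:
            return ""  # no match: the room has nothing to pass back out
        remainder = corpus[position + len(query):].strip()
        return remainder.split(" ")[0] if remainder else ""

    print(reply_from_room("什么是智慧"))  # prints: 了解知识的界限

The person in the room plays no greater role than that final print statement: passing the result back under the door.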

 

Here, as an example, is what might have been slipped under the door, into the room: 


什么是智慧 


Which is the Mandarin translation of the age-old question ‘What is wisdom?’ And here’s what might have been passed back out, the result of the computer’s search: 


了解知识的界限


Which is the Mandarin translation of ‘Understanding the boundary/limits of knowledge’, an answer (among many) convincing the people gathered in anticipation outside the room that a fluent speaker of Mandarin was within, answering their questions in informed, insightful fashion.

 

The outcome of Searle’s thought experiment seemed to satisfy the criteria of the famous Turing test, designed by the computer scientist and mathematician Alan Turing in 1950 (Turing himself called it ‘the imitation game’). The controversial challenge he posed with the test was whether a computer could think like — that is, exhibit intelligent behaviour indistinguishable from — a human being. And who could tell the difference?


It was in an article for the journal Mind, called ‘Computing Machinery and Intelligence’, that Turing himself set out the ‘Turing test’, which inspired Searle’s later thought experiment. After first expressing concern with the ambiguity of the words machine and think in a closed question like ‘Can machines think?’, Turing went on to describe his test as follows:

The [challenge] can be described in terms of a game, which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The aim of the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’. The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?


Now suppose X is actually A, then A must answer. It is A’s object in the game to try and cause C to make the wrong identification. His answer might therefore be: ‘My hair is shingled, and the longest strands are about nine inches long’.


In order that tones of voice may not help the interrogator, the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively, the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as ‘I am the woman, don’t listen to him!’ to her answers, but it will avail nothing as the man can make similar remarks.


We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’  

Note that as Turing framed the inquiry at the time, the question arises of whether a computer can ‘be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a [person]?’ The word ‘imitation’ here is key, allowing for the hypothetical computer in Searle’s Chinese room experiment to pass the test — albeit importantly not proving that computers think semantically, which is a whole other capacity not yet achieved even by today’s strongest AI.

 

Let’s fast-forward a few decades and examine the generative AI chatbots whose development much of the world has been enthusiastically tracking, in anticipation of what’s to come. When someone engages with the AI algorithms powering the bots, the AI seems to respond intelligently. The result is either a back-and-forth conversation with the chatbot, or the use of carefully crafted natural-language prompts to get the bot to write speeches, correspondence, school papers, corporate reports, summaries, emails, computer code, or any number of other written products. These end products rest on the bots having been ‘trained’ on the massive body of text on the internet, and the output sometimes gets reformulated by the bot in response to the user’s rejiggered prompts.

 

It’s as if the chatbots think. But they don’t. Rather, the chatbots’ capacity to leverage the massive mounds of information on the internet to produce predictive responses is remarkably analogous to what the computer was doing in Searle’s Chinese room forty years earlier. That resemblance carries long-term implications for developments in neuroscience, artificial intelligence and computer science, philosophy of language and mind, epistemology, and models of consciousness, awareness, and perception.

 

In the midst of this evolution, generative AI will expand AI’s reach across the many and varied domains of modern society: education, business, medicine, finance, science, governance, law, and entertainment, among them. So far, so good. Meanwhile, despite machine learning, errors, biases, and nonsensicalness in algorithmic decision-making, should they occur, are more problematic in some domains (like medicine, the military, and lending) than in others. It is worth remembering, though, that gaffes of any magnitude, type, and regularity can quickly erode trust, no matter the field.

 

Sure, current algorithms, natural-language processing, and the underlying engineering are more complex than when Searle first presented the Chinese room argument. But chatbots still don’t understand the meaning of content. They don’t have knowledge as such. Nor do they venture much by way of beliefs, opinions, predictions, or convictions, leaving swaths of important topics off the table. Reassembly of facts scraped from myriad sources is more the recipe of the day — and even then, errors and eyebrow-raising incoherence occur, including inexplicably incomplete and spurious references.

 

The chatbots, revealingly, compose their output by matching the words in a prompt against strings of words found online, and then predicting which words are most likely to follow, building answers through a form of pattern recognition. What they embody is still a computational, rather than a thinking, theory of mind. Sure, what the bots produce would pass the Turing test, but today that is surely a pretty low bar. 
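
To make that contrast concrete, here is a minimal, purely illustrative sketch of next-word prediction in Python: a toy bigram model over an invented twelve-token ‘corpus’. The corpus, names, and scale are mine; real chatbots use vastly larger statistical models, but the predict-the-likely-next-word principle is the same.

    import random
    from collections import defaultdict

    # Toy next-word predictor: record which word follows which in a tiny corpus,
    # then extend a prompt by repeatedly sampling an observed successor.
    corpus = "what is wisdom . wisdom is understanding the limits of knowledge .".split()

    successors = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        successors[current_word].append(next_word)

    def continue_text(word: str, length: int = 8) -> str:
        """Extend a starting word by repeatedly choosing a statistically seen successor."""
        output = [word]
        for _ in range(length):
            options = successors.get(output[-1])
            if not options:
                break
            output.append(random.choice(options))  # pattern-matching, not understanding
        return " ".join(output)

    print(continue_text("wisdom"))  # e.g. 'wisdom is understanding the limits of knowledge .'

Nothing in the sketch knows what wisdom, limits, or knowledge mean; it only knows which tokens have been seen to follow which.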

 

Meantime, people have argued that AI’s writing betrays telltale markers: it lacks the nuance of varied cadence, phraseology, word choice, modulation, creativity, originality, and individuality, as well as the curation of appropriate content, that human beings often display when they write. At the moment, anyway, the products of chatbots tend to have a formulaic feel, posing a remediation challenge for AI’s algorithms.

 

Three decades after first unspooling his ingenious Chinese room argument, Searle wrote, ‘I demonstrated years ago … that the implementation of the computer program is not itself sufficient for consciousness or intentionality [mental states representing things]’. Both then and now, that’s true enough. We’re barely closing in on completing the first lap. It’s all still computation, not thinking or understanding.


Accordingly, the ‘intelligence’ one might perceive in Searle’s computer and the program his computer runs in order to search for patterns that match the Chinese words is very much like the ‘intelligence’ one might misperceive in a chatbot’s answers to natural-language prompts. In both cases, what we may misinterpret as intelligence is really a deception of sorts. Because in both cases, what’s really happening, despite the large differences in the programs’ developmental sophistication arising from the passage of time, is little more than brute-force searches of massive amounts of information in order to predict what the next words likely should be. Often getting it right, but sometimes getting it wrong — with good, bad, or trifling consequences.

 

I propose, however, that the development of artificial intelligence — particularly what is called ‘artificial general intelligence’ (AGI) — will get us there: an analog of the human brain, with an understanding of semantic content. At that point, today’s chatbots will look like mere novelties, however obedient their functional execution, and the ‘neural networks’ of feasibly self-optimising artificial general intelligence will match, or elastically stretch beyond, human cognition, at which stage the hotbed issue of what consciousness is will have to be rethought.