Monday, 29 May 2023

Life in the Slow Lane


Illustration by Clifford Harper/Agraphia.co.uk
Three common plagues were cited in the early New England settlements: wolves, rattlesnakes, and mosquitoes. Our current-day ‘settlements’ – cities and towns – now have their own plagues: a crush of too many people, crummy attitudes, pollution, and retrogressive political actions. How do freedom and power play out amongst individuals and communities?

One lens that can help us gain perspective on our life in relation to necessities and obligations beyond us is to think about our agency and our values. If we get it right about what freedom and power are, we might clarify what values we want to exercise and embody.

People pushed back against the wolves and did what they could against other ‘scourges’, most regularly by killing them. This seemed like freedom – power asserted. Over the centuries, peoples around the world – coursing through trials like wars and epidemics and bouts of oppression, as well as various forms of enlightenment and progress on human rights – have struggled to articulate freedom and power to make existence shine. To fulfill purposes is the human juggernaut; but what purposes? It is pretty vital that we figure out what freedom and power are in this time of converging crises, so that actual life might flourish. The trouble is, so many people are commonly thrown off by false and unjustifiable versions of freedom and power.

In our fast-paced life, we so-called civilised humans have to decide how to achieve balance. This means some kind of genuine honouring of life in its physical and spiritual aspects. The old work-life balance is only part of it. What does vitality itself suggest is optimal or possible, and how do we make sense of what's at stake as we prioritise between competing goods?

If a parent decides that it is a priority to take care of a newborn child rather than sacrifice that time and importance to time at work, they may well be making a fine decision. Freedom here is in the service of vital things. We might say that in general freedom is that which makes you whole and that power is the exercise of your wholeness. Or, freedom is the latitude to live optimally and power is potency for good.

Since freedom is eschewing the lesser and opting for and living what has more value, we had better do some good defining. All situations confirm that freedom only accrues with what is healthful and attends flourishing. If one says, “Top functioning for me is having a broad range of options, the whole moral range,” you can see how this is problematic. We as humans have the range, but our freedom is in limiting ourselves to the good portion.

Power is commonly considered that which lords the most force over others and exerts the biggest influence broadly. Isn’t this what a hurricane does, or a viral infection, or an invasion? If you look around, though, all the people with so-called power actually dominate using borrowed power: that is, power borrowed from others or obtained on the backs of others, whether human or otherwise. This kind of power – often manifesting in greed and exploitation – is mere thievery. And what about power over one’s own liabilities to succumb or other temptations?

For many people, life in the slow lane is much more satisfying than that in the fast one. However, the big deal may be about getting off the highway altogether. What I am suggesting is that satisfaction and contentment are in the proper measure of freedom and power. And the best definition for organisms is probably that long-established by the planet. Earth has in place various forms of ‘nature’ with common value-elements.

For us, to be natural probably means being both like and unlike the rest of nature. It is some kind of unique salubrity. An ever-greater bulk of the world lives in a busy, highly industrialized society, and the idea of living naturally seems like something that goes against our human mission to separate ourselves from the natural world. But the question remains: is the freedom and power that comes with ‘natural living’ an antiquated thing, or can you run the world on it; can it work for a life?

Kant spoke of our animality in his Religion Within the Boundaries of Mere Reason (1793), part of his investigation of the ethical life. In this, he argues that animality is an ineliminable and irreducible component of human nature and that the human being, taken as a natural being, is an animal being. Kant says that animality is an “original predisposition [Anlage] to the good in human nature”. We increasingly see that being human means selecting the wisdom of nature, often summed up in ecological equipoise, so that we can survive, thrive, and have reason to call ourselves legitimate. Freedom in this consists of developing greater consciousness about our long-term place on Earth (if such is possible), and legitimate power is in exact proportion to the degree we limit ourselves to human ecology.

Life on its own grass-centered lane has figured out what true freedom and power are. The Vietnamese Buddhist monk and global spiritual leader Thich Nhat Hạnh once wrote:
“Around us, life bursts with miracles – a glass of water, a ray of sunshine, a leaf, a caterpillar, a flower, laughter, raindrops....When we are tired and feel discouraged by life’s daily struggles, we may not notice these miracles, but they are always there.”
Figuring out the most efficacious forms of freedom and power promises to make us treat ourselves and others more justly.

Monday, 15 May 2023

‘Game Theory’: Strategic Thinking for Optimal Solutions

Cortés began his campaign to conquer the Aztec Empire by having all but one of his ships scuttled, meaning that he and his men would either conquer the Aztecs or die trying. Initially, the Aztecs did not see the Spanish as a threat. In fact, their ruler, Moctezuma II, sent emissaries to present gifts to these foreign strangers.



By Keith Tidman

 

The Peloponnesian War, chronicled by the historian Thucydides, pitted two major powers of Ancient Greece against each other, the Athenians and the Spartans. The Battle of Delium, which took place in 424 BC, was one of the war’s decisive battles. In two of his dialogues (the Laches and the Symposium), Plato has Socrates, who actually fought in the war, recall the battle in ways that bear on combatants’ strategic choices.

 

One episode recalls a soldier on the front line, awaiting the enemy’s attack and pondering his options in the context of self-interest — what works best for him. For example, if his comrades are believed to be capable of successfully repelling the attack, his own role will contribute only inconsequentially to the fight, yet he risks being pointlessly killed. If, however, the enemy is certain to win the battle, the soldier’s own death is all the more likely and senseless, given that the front line will be routed anyway, no matter what it does.

 

The soldier concludes from these mental somersaults that his best option is to flee, regardless of which side wins the battle, his ‘dominant strategy’ being to stay alive and unharmed. However, based on the same line of reasoning, all the soldier’s fellow men-at-arms should decide to flee too, to avoid the inevitability of being cut down rather than standing their ground. Yet, if all flee, the soldiers are guaranteed to lose the battle before the sides have even engaged.

 

This kind of strategic analysis is sometimes called game theory. History provides us with many other examples of game theory applied to the real world, too. In 1519, the Spanish conqueror Cortés landed in the Western Hemisphere, intending to march inland and vanquish the Aztec Empire. He feared, however, that his soldiers, exhausted from the ocean journey, might be reluctant to fight the Aztec warriors, who also greatly outnumbered his own force.

 

Instead of counting on individual soldiers’ courage or even group esprit de corps, Cortés scuttled his fleet. His strategy was to remove the temptation the ships posed for his men to retreat rather than fight — and thus, with no other option, to pursue the Aztecs in a fight-or-die (rather than fight-or-flee) scenario. The calculus for each of Cortés’s soldiers in weighing his survivalist self-interest had shifted dramatically. At the same time, by brazenly scuttling his ships as a kind of metaphorical weapon, Cortés wanted to demonstrate to the enemy that, for reasons the latter couldn’t fathom, his outnumbered force nonetheless appeared fearlessly confident about the upcoming battle.

 

It’s a striking historical example of one way in which game theory provides means to assess situations where parties make strategic decisions that take account of each other’s possible decisions. The parties aim to arrive at the best strategies in the framework of their own interests — business, economic, political, and so on — while factoring in what they believe to be the thinking (strategising) of the opposite players, whose interests may align, differ, or be a blend of both.

 

The term, and the philosophy of game theory, are much more recent, of course, developed in the early twentieth century by the mathematician John von Neumann and the economist Oskar Morgenstern. They focused on the theory’s application to economic decision-making, given what they considered the game-like nature of the field of economics. Some years later, another mathematician, John Nash, along with others, expanded the discipline to include strategic decisions applicable to a wide range of fields and scenarios, analysing how competitors with diverse interests choose to contest with one another in pursuit of optimised outcomes.

 

Whereas some of the earliest cases focused on ‘zero-sum’ games involving two players whose interests sharply conflicted, later scenarios and games were far more intricate: ‘variable-sum’ games, where there may be all winners or all losers, as in a labour dispute; or ‘constant-sum’ games, like poker, characterised as pure competition entailing total conflict. The more intricately constructed games accommodate multiple players, involve a blend of shared and divergent interests, unfold over successive moves, and give at least one player more information with which to shape his own strategic choices than his competitors hold in hand.

 

The techniques of game theory and the scenarios examined are notable for their range of applications, including business, economics, politics, law, diplomacy, sports, social sciences, and war. Some features of competitive scenarios are challenging to probe, such as accurately discerning the intentions of rivals and their behavioural patterns. That being said, many features of scenarios and alternative strategies can be studied by the methods of game theory, grounded in mathematics and logic.

 

Among the real-world applications of the methods are planning to mitigate the effects of climate extremes; running management-labour negotiations to get to a new contract and head off costly strikes; siting a power-generating plant to reflect regional needs; anticipating the choices of voter blocs; selecting and rejecting candidates for jury duty during voir dire; engaging in a price war between catty-cornered grocery stores rather than both keeping their prices aligned and high; avoiding predictable plays in sports, to make them harder to defend against; foretelling the formation of political coalitions; and negotiating a treaty between two antagonistic, saber-rattling countries to head off runaway arms spending or outright conflict.

 

Perhaps more trivially, applications of game theory stretch to so-called parlour games, too, like chess, checkers, poker, and Go, which are finite in the number of players and possible plays, and in which progress is achieved via a string of alternating single moves. The contestant who anticipates a competitor’s optimal reply to their own move will fare better than one who merely guesses that the opponent will make a particular move with a particular probability.

 

Given the large diversity of ‘games’, there are necessarily multiple forms of game theory. Fundamental to each, however, is that the strategising is actively managed by the players rather than left to mere chance, which is why game theory goes several steps beyond probability theory.

 

The classic example of a two-person, noncooperative game is the Prisoner’s Dilemma. This is how it goes. Detectives believe that their two suspects collaborated in robbing a bank, but they don’t have enough admissible evidence to prove the charges beyond a reasonable doubt. They need more on which to base their otherwise shaky case. The prisoners are kept apart, out of hearing range of each other, as interrogators try to coax each into admitting to the crime.

 

Each prisoner mulls their options for getting the shortest prison term. But in deciding whether to confess, they’re unaware of what their accomplice will decide to do. However, both prisoners are mindful of their options and consequences: If both own up to the robbery, both get a five-year prison term; if neither confesses, both are sentenced to a one-year term (on a lesser charge); and if one squeals on the other, that one goes free, while the prisoner who stays silent goes to prison for fifteen years. 

 

The issue of trust is of course central to weighing the options presented by the ‘game’. In terms of sentences, both prisoners are better off choosing to act unselfishly and remain silent, with each serving one year. But if they choose to act selfishly in expectation of outmanoeuvring the unsuspecting (presumed gullible) partner — which is to say, both prisoners picture themselves going free by spilling the beans while mistakenly anticipating that the other will stay silent — the result is much worse: a five-year sentence for both.
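For readers who want to see the arithmetic behind the ‘dominant strategy’ talk, here is a minimal sketch in Python, using the sentence lengths given above (five and five, one and one, zero and fifteen). The function and variable names are illustrative only, not drawn from any game-theory library; the point is simply that confessing is better for each prisoner whatever the other does, even though mutual confession leaves both worse off than mutual silence.

```python
# Prisoner's Dilemma payoffs as years in prison (lower is better),
# indexed by (prisoner A's choice, prisoner B's choice).
payoffs = {
    ('confess', 'confess'): (5, 5),    # both confess: five years each
    ('silent',  'silent'):  (1, 1),    # both stay silent: one year each (lesser charge)
    ('confess', 'silent'):  (0, 15),   # A squeals, B stays silent
    ('silent',  'confess'): (15, 0),   # B squeals, A stays silent
}

choices = ('confess', 'silent')

def dominant_for_a(choice: str) -> bool:
    """True if `choice` is never worse for prisoner A than the alternative,
    whatever prisoner B does, and strictly better in at least one case."""
    alternative = 'silent' if choice == 'confess' else 'confess'
    never_worse = all(payoffs[(choice, b)][0] <= payoffs[(alternative, b)][0]
                      for b in choices)
    sometimes_better = any(payoffs[(choice, b)][0] < payoffs[(alternative, b)][0]
                           for b in choices)
    return never_worse and sometimes_better

for c in choices:
    print(f"'{c}' is a dominant strategy for A: {dominant_for_a(c)}")
# Confessing dominates, yet (confess, confess) costs both five years,
# while (silent, silent) would have cost each only one.
```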


Presaging these types of game-theoretic arguments, the English philosopher Thomas Hobbes, in Leviathan (1651), described citizens believing, on general principle, that they’re best off with unrestrained freedom. Though, as Hobbes theorised, they will come to realise there are occasions when their interests are better served by cooperating, the aim being to jointly accomplish things not doable by an individual alone. However, some individuals may inconsiderately conclude their interests are best served by reaping the benefits of collaboration — that is, soliciting help from a neighbour in the form of physical labour, equipment, and time in tilling — but later defaulting when the occasion arises for such help to be reciprocated.

 

Resentment, distrust, and cutthroat competitiveness take hold. Faith in the integrity of neighbours in the community plummets, and the chain of sharing resources to leverage the force-multiplicity of teamwork is broken. Society is worse off — where, as Hobbes memorably put it, life then becomes all the more ‘solitary, poor, nasty, brutish and short’. Hobbes’s conclusion, to avoid what he referred to as a ‘war of all against all’, was that people therefore need a central government — operating with significant authority — holding people accountable and punishing accordingly, intended to keep citizens and their transactions on the up and up.

 

What’s germane about Hobbes’s example is how its core themes resonate with today’s game theory. In particular, Hobbes’s argument regarding the need for an ‘undivided’, authoritative government is in line with modern-day game theorists’ solutions to protecting people against what theorists label as ‘social dilemmas’. That is, when people cause fissures within society by dishonourably taking advantage of other citizens rather than cooperating and reciprocating assistance, where collaboration benefits the common good. To Hobbes, the strategic play is between what he refers to as the ‘tyranny’ of an authoritative government and the ‘anarchy’ of no government. He argues that tyranny is the lesser ‘evil’ of the two. 

 

In dissecting real-world ‘games’, people have rationally intuited workable strategies, with those solutions sufficing in many everyday circumstances. What the methodologies of game theory offer are ways to formalise, validate, and optimise the outcomes of select intuitions where outcomes matter more, all the while taking into account the opponent and his anticipated strategy, and extracting the highest benefits from choices based on one’s principles and preferences.

 

Monday, 1 May 2023

Problems with the Problem of Evil


By Keith Tidman

  

Do we really reside in what the German polymath Gottfried Wilhelm Leibniz referred to as ‘the best of all possible worlds’, picked by God from among an infinite variety of world orders at God’s disposal, based on the greatest number of supposed perfections? (A claim that the French Enlightenment writer Voltaire satirised in his novella Candide.)

 

How do we safely arrive at Leibniz’s sweeping assessment of ‘best’ here, given the world’s harrowing circumstances, from widespread violence to epidemics to famine, of which we’re reminded every day? After all, the Augustinian faith-based explanation for the presence of evil has been punishment for Adam and Eve’s original sin and expulsion from the Garden of Eden. From this emerged Leibniz’s term ‘theodicy’, created from two Greek words for the expression ‘justifying God’ (Theodicy: Essays on the Goodness of God, the Freedom of Man and the Origin of Evil, 1710).


No, there’s a problem … the ‘problem of evil’. If God is all powerful (omnipotent), all knowing (omniscient), present in all places (omnipresent), all good and loving (omnibenevolent), and all wise, then why is there evil in the very world that God is said to have designed and created? God has neither averted nor fixed the problem, instead giving evil free rein and abiding by nonintervention. There is not just one form of evil, but at least two: moral evil (volitionally wrongful human conduct) and natural evil (ranging from illnesses and other human suffering, to natural law causing ruinous and lethal calamities).

 

There are competitor explanations for evil, of course, like that developed by the second-century Greek bishop Saint Irenaeus, whose rationalisation was that evil presented the population with incentives and opportunities to learn, develop, and evolve toward ever-greater perfection. The shortcoming with this Irenaean description, however, is that it fails to account for the ubiquity and diversity of natural disasters, like tsunamis, volcanoes, earthquakes, wildfires, hurricanes, and many other manifestations of natural law taking its toll around the globe.

 

Yet, it has been argued that even harmful natural hazards like avalanches and lightning, not just moral indiscretions, are part of the plan for people’s moral and epistemic growth, spurring virtues like courage, charity, gratitude, patience, and compassion. It seems that both the Augustinian and Irenaean models of the universe adhere to the anthropic principle that cosmic constants are finely tuned (balanced on a knife’s edge) to allow for human life to exist at this location, at this point in time.

 

Meanwhile, although some people might conceivably respond to natural hazards and pressing moral hardships by honing their awareness, as some claim, other people are simply overcome by the devastating effects of the hazards. These outcomes point to another in the battery of explanations for evil, in the reassuring form of a spiritual life after death. Some people assert that such rewards may be expected to tower over mundane earthly challenges and suffering, and that the suffering that moral and natural evil evokes conditions people for the enlightenment of an afterlife.

 

At this stage, the worldly reasons for natural hazards and moral torment (purportedly the intentions behind a god’s strategy) become apparent. Meanwhile, others argue that the searing realities of, say, the Holocaust or any other genocidal atrocities or savagery or warring in this world are not even remotely mitigated, let alone vindicated, by the anticipated jubilation of life after death, no matter the form that the latter might take.

 

Still another contending explanation is that what we label evil in terms of human conduct is not a separate ‘thing’ that happens to be negative, but rather is the absence of a particular good, such as the absence of hope, integrity, forbearance, friendship, altruism, prudence, principle, and generosity, among other virtues. In short, evil isn’t the opposite of good, but is the nonattendance of good. Not so simple to resolve in this model, however, is the following: Would not a god, as original cause, have had to create the conditions for that absence of good to come to be?

 

Others have asserted that God’s design and the presence of evil are in fact compatible, not a contradiction or intrinsic failing, and not preparation either for development in the here and now or for post-death enlightenment. American philosopher Alvin Plantinga has supported this denial of a contradiction between the existence of an all-capable and all-benevolent (almighty) god and the existence of evil:

 

‘There are people who display a sort of creative moral heroism in the face of suffering and adversity — a heroism that inspires others and creates a good situation out of a bad one. In a situation like this the evil, of course, remains evil; but the total state of affairs — someone’s bearing pain magnificently, for example — may be good. If it is, then the good present must outweigh the evil; otherwise, the total situation would not be good’ (God, Freedom, and Evil, 1977).

 

Or then, as the British philosopher John Hick imagines, perhaps evil exists only as a corruption of goodness. Here is Hick’s version of the common premises stated and conclusion drawn: ‘If God is omnipotent, God can prevent evil. If God is perfectly good, God must want to prevent all evil. Evil exists. Thus, God is either not omnipotent or not perfectly good, or both’. It does appear that many arguments cycle back to those similarly couched observations about incidents of seeming discrepancy.

 

Yet others have taken an opposite view, seeing incompatibilities between a world designed by a god figure and the commonness of evil. Here, the word ‘design’ conveys similarities between the evidence of complex (intelligent) design behind the cosmos’s existence and the complex (intelligent) design behind many things made by humans, from particle accelerators, quantum computers, and space-based telescopes, to cuneiform clay tablets and the carved Code of Hammurabi.


Unknowability matters, however, to this aspect of design and evil. For the presence, even prevalence, of evil does not necessarily contradict the logical or metaphysical possibility of a transcendental being as designer of our world. That being said, some people postulate that the very existence, as well as the categorical abstractness of qualities and intentions, of any such overarching designer are likely to remain incurably unknowable, beyond confirmation or falsifiability.

 

Although the argument by design has circulated for millennia, it was popularised by the English theologian William Paley early in the nineteenth century. Before him, the Scottish philosopher David Hume shaped his criticism of the design argument by paraphrasing Epicurus: ‘Is God willing to prevent evil, but not able? Then he is impotent. Is he able, but not willing? Then he is malevolent. Is he both able and willing? Whence then is evil? Is he neither able nor willing? Then why call him God?’ (Dialogues Concerning Natural Religion, 1779).

 

Another in the catalogue of explanations of moral evil is itself associated with a provocative claim: that we have free will. That is, we are presented with the possibility, not the inevitability, of moral evil. Left to their own unconstrained devices, people are empowered either to freely reject or freely choose immoral decisions or actions, from among a large constellation of vices like venality, malice, and injustice. As such, free will is essential to human agency and, by extension, to moral evil (for obvious reasons, leaving natural evil out). Plantinga is among those who promote this free-will defence of the existence of moral evil.

 

Leibniz was wrong about ours being ‘the best of all possible worlds’. Better worlds are indeed imaginable, where plausibly evil in its sundry guises pales in comparison. The gauntlet as to what those better worlds resemble, among myriad possibilities, idles provocatively on the ground. For us to dare to pick up, perhaps. However, reconciling evil with professed theistic attributes like omnipotence, omniscience, and omnibenevolence remains problematic. As Candide asked, ‘If this is the best ... what are the others?’

 

Wednesday, 19 April 2023

Making the Real

Prometheus in conference…
By Andrew Porter


They say that myth is the communication of the memorable, or imitation of that which is on some level more real. Our inner myths – such as memory – make real what's true for us and we often communicate these lenses in stories, writing, art, and ways of being. What a person communicates, having been on their own hero's journey where they received the boon, is a kind of myth, a display of another place, where the animals are strange and the gods walk among us.

We even make the real in creating a fiction. But isn’t the real different from fiction? Is it a caveat to say that fiction can be more real than sensible experience? If we are true to the facts and the actual events, as well as to the depth of the characters involved and the flavour of the scenes we’ve lived in, are we not recounting a legitimate ‘inner tradition’? The experience is fresh and new in the telling; storytelling is the power of connection.

In making our own version of the real, teller and listener infuse myth with logos and vice versa. Poetry (of all kinds), for instance, is the intermediary between heroic times and pedestrian hearing. It is in a sense audience to itself, living the amazement in the memory and memorialising. Like any genuine recounting, poetry tries to communicate with respect for the receiver and deep understanding of what may be received. This is as much to say that the poet is more than a bridge; they are the synergy of two depths of being: past heights and current receiver; both, hopefully, sacrifice their separateness for the joining. Is a poet perhaps most authentically themselves in the bringing together of self, experience, and the other?

To locate the real means to get at the meaning beyond the bare events. This is done, I think, via another kind of central dynamic, between knowledge and sensitivity, or between reason and instinct. This middle ground is intuition, perhaps, or understanding of a rich sort, mixing reason and emotion or hearer and other land. Wonder is evoked or elicited in the clarity of ten thousand stars finding their way to eyes and brain.

Communication of the valuable, we might say, promises a complementarity between the transcendent world and the mundane world. It believes in wonder and growth. Its ultimate lesson is the good, even if of human potential. It comprehends that the real must be translated, that an insight cannot be dumped out of a bag with a shrug. At best, the communicator can feel the blazing value of the extraordinariness they have been beautifully exposed to and the worthy receiver carries it on, retains it, preserves it. This is a vital synergy. Aren’t the best times in life of this kind, when existence illuminates itself? Imagine believing what the storyteller imparts, that the gods exist, though they were somewhat mundane at the time. Spirit seems to flow when its electrons are in motion with the charge of it all.

Stories we’ve all heard are ‘invented stories’. Were they true? Art can perhaps convey a truth better than any other way could; even nature, typically banking on sharp reality with no moonshine, yet supports interpretation. If we can produce and reproduce a synergy of muthos and logos, what integration of a person or a society might ensue?

One current issue is how we interpret our place and role in history. What story are we telling ourselves? Is it illusion of the worst kind? Do we need new myths? In our narrowness we likely have a very skewed definition of real. There may be a chance to make ourselves implicate in nature's order in a human way and understand this as true techne. The arts can show us its benefit. But I am not holding my breath.

In ‘making the real’, we make ourselves. Our best selves are likely self-controlled as well as free in a broadly sanctioned way. Why has culture dropped the ball on creating a good story that we can follow? And what blend of myth and logos makes reality sing? Our time is not one for dancing around the fire with faux animal heads on, but one for telling stories that get it right. Why, it could be that, somewhere, a band of people are creating them even now.

Monday, 3 April 2023

The Chinese Room Experiment ... and Today’s AI Chatbots


By Keith Tidman

 

It was back in 1980 that the American philosopher John Searle formulated the so-called ‘Chinese room thought experiment’ in an article, his aim being to emphasise the bounds of machine cognition and to push back against what he viewed, even back then, as hyperbolic claims surrounding artificial intelligence (AI). His purpose was to make the case that computers don’t ‘think’, but rather merely manipulate symbols in the absence of understanding.

 

Searle subsequently went on to explain his rationale this way: 


‘The reason that no computer can ever be a mind is simply that a computer is only syntactical [concerned with the formal structure of language, such as the arrangement of words and phrases], and minds are more than syntactical. Minds are semantical, in the sense that they have … content [substance, meaning, and understanding]’.

 

He continued to point out, by way of further explanation, that the latest technology metaphor for purportedly representing and trying to understand the brain has consistently shifted over the centuries: for example, from Leibniz, who compared the brain to a mill, to Freud comparing it to ‘hydraulic and electromagnetic systems’, to the present-day computer. With none, frankly, yet serving as anything like good analogs of the human brain, given what we know today of the neurophysiology, experiential pathways, functionality, expression of consciousness, and emergence of mind associated with the brain.

 

In a moment, I want to segue to today’s debate over AI chatbots, but first, let’s recall Searle’s Chinese room argument in a bit more detail. It began with a person in a room, who accepts pieces of paper slipped under the door and into the room. The paper bears Chinese characters, which, unbeknownst to the people outside, the monolingual person in the room has absolutely no ability to translate. The characters unsurprisingly look like unintelligible patterns of squiggles and strokes. The person in the room then feeds those characters into a digital computer, whose program (metaphorically represented in the original description of the experiment by a ‘book of instructions’) searches a massive database of written Chinese (originally represented by a ‘box of symbols’).

 

The powerful computer program can hypothetically find every possible combination of Chinese words in its records. When the computer spots a match with what’s on the paper, it makes a note of the string of words that immediately follow, printing those out so the person can slip the piece of paper back out of the room. Because of the perfect Chinese response to the query sent into the room, the people outside, unaware of the computer’s and program’s presence inside, mistakenly but reasonably conclude that the person in the room has to be a native speaker of Chinese.

 

Here, as an example, is what might have been slipped under the door, into the room: 


什么是智慧 


Which is the Mandarin translation of the age-old question ‘What is wisdom?’ And here’s what might have been passed back out, the result of the computer’s search: 


了解知识的界限


Which is the Mandarin translation of ‘Understanding the boundary/limits of knowledge’, an answer (among many) convincing the people gathered in anticipation outside the room that a fluent speaker of Mandarin was within, answering their questions in informed, insightful fashion.
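The exchange just described can be pictured, very roughly, as nothing more than a lookup over stored text. The sketch below is only a toy illustration of that syntactic matching, not Searle’s own formulation, and its single database entry is the example used above; the point is that nothing in the loop translates or understands anything.

```python
# A toy 'Chinese room' lookup: incoming strings of characters are matched
# against stored text, and the stored continuation is returned verbatim.
# The single entry below is illustrative.
database = {
    '什么是智慧': '了解知识的界限',  # 'What is wisdom?' -> 'Understanding the limits of knowledge'
}

def room_reply(slip_of_paper: str) -> str:
    """Return the continuation recorded for a matching string of symbols.
    Nothing is translated or understood; this is pattern matching only."""
    return database.get(slip_of_paper, '')  # empty slip if no match is found

print(room_reply('什么是智慧'))
```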

 

The outcome of Searle’s thought experiment seemed to satisfy the criteria of the famous Turing test (he himself called it ‘the imitation game’), designed by the computer scientist and mathematician Alan Turing in 1950. The controversial challenge he posed with the test was whether a computer could think like — that is, exhibit intelligent behaviour indistinguishable from — a human being. And who could tell.


It was in an article for the journal Mind, called ‘Computing Machinery and Intelligence’, that Turing himself set out the ‘Turing test’, which inspired Searle’s later thought experiment. After first expressing concern with the ambiguity of the words machine and think in a closed question like ‘Can machines think?’, Turing went on to describe his test as follows:

The [challenge] can be described in terms of a game, which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The aim of the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’. The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?


Now suppose X is actually A, then A must answer. It is A’s object in the game to try and cause C to make the wrong identification. His answer might therefore be: ‘My hair is shingled, and the longest strands are about nine inches long’.


In order that tone of voice may not help the interrogator, the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively, the question and answers can be repeated by an intermediary. The object of the game is for the third player (B) to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as ‘I am the woman, don’t listen to him!’ to her answers, but it will avail nothing as the man makes similar remarks.


We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’  

Note that as Turing framed the inquiry at the time, the question arises of whether a computer can ‘be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a [person]?’ The word ‘imitation’ here is key, allowing for the hypothetical computer in Searle’s Chinese room experiment to pass the test — albeit importantly not proving that computers think semantically, which is a whole other capacity not yet achieved even by today’s strongest AI.

 

Let’s fast-forward a few decades and examine the generative AI chatbots whose development much of the world has been enthusiastically tracking in anticipation of what’s to be. When someone engages with the AI algorithms powering the bots, the AI seems to respond intelligently. The result being either back-and-forth conversations with the chatbots, or the use of carefully crafted natural-language input to prompt the bots to write speeches, correspondence, school papers, corporate reports, summaries, emails, computer code, or any number of other written products. End products are based on the bots having been ‘trained’ on the massive body of text on the internet. And where output sometimes gets reformulated by the bot based on the user’s rejiggered prompts.

 

It’s as if the chatbots think. But they don’t. Rather, the chatbots’ capacity to leverage the massive mounds of information on the internet to produce predictive responses is remarkably analogous to what the computer was doing in Searle’s Chinese room forty years earlier, with long-term implications for developmental advances in neuroscience, artificial intelligence and computer science, philosophy of language and mind, epistemology, and models of consciousness, awareness, and perception.

 

In the midst of this evolution, the range of generative AI will expand AI’s reach across the multivariate domains of modern society: education, business, medicine, finance, science, governance, law, and entertainment, among them. So far, so good. Meanwhile, despite machine learning, possible errors, biases, and nonsensicalness in algorithmic decision-making, should they occur, are more problematic in some domains (like medicine, military, and lending) than in others. It is important to remember, though, that gaffes of any magnitude, type, and regularity can quickly erode trust, no matter the field.

 

Sure, current algorithms, natural-language processing, and the underpinnings of developmental engineering are more complex than when Searle first presented the Chinese room argument. But chatbots still don’t understand the meaning of content. They don’t have knowledge as such. Nor do they venture much by way of beliefs, opinions, predictions, or convictions, leaving swaths of important topics off the table. Reassembly of facts scraped from myriad sources is more the recipe of the day — and even then, errors and eyebrow-raising incoherence occur, including unexplainably incomplete and spurious references.

 

The chatbots revealingly write output by muscularly matching words provided by the prompts with strings of words located online, including words then shown to follow probabilistically, predictively building their answers based on a form of pattern recognition. There’s still a mimicking of computational, rather than thinking, theories of mind. Sure, what the bots produce would pass the Turing test, but today surely that’s a pretty low bar. 
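To make the ‘probabilistically, predictively building their answers’ claim concrete, here is a deliberately crude sketch of next-word prediction: a bigram counter trained on a few words of text. Real chatbots use vastly larger models, corpora, and far richer representations, but the underlying move, picking a continuation according to how often it followed what came before, is of the same pattern-matching kind described above. The corpus and names are invented for illustration.

```python
import random
from collections import defaultdict, Counter

# A toy training corpus; real systems train on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Pick a plausible next word, weighted by how often it followed `word`."""
    counts = following[word]
    if not counts:
        return ''
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# 'Generate' a short continuation from a one-word prompt.
word, output = 'the', ['the']
for _ in range(6):
    word = predict_next(word)
    if not word:
        break
    output.append(word)
print(' '.join(output))  # e.g. 'the cat sat on the mat the'
```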

 

Meantime, people have argued that the AI’s writing reveals markers, such as lacking the nuance of varied cadence, phraseology, word choice, modulation, creativity, originality, and individuality, as well as the curation of appropriate content, that human beings often display when they write. At the moment, anyway, the resulting products from chatbots tend to present a formulaic feel, posing challenges to AI’s algorithms for remediation.

 

Three decades after first unspooling his ingenious Chinese room argument, Searle wrote, ‘I demonstrated years ago … that the implementation of the computer program is not itself sufficient for consciousness or intentionality [mental states representing things]’. Both then and now, that’s true enough. We’re barely closing in on completing the first lap. It’s all still computation, not thinking or understanding.


Accordingly, the ‘intelligence’ one might perceive in Searle’s computer and the program his computer runs in order to search for patterns that match the Chinese words is very much like the ‘intelligence’ one might misperceive in a chatbot’s answers to natural-language prompts. In both cases, what we may misinterpret as intelligence is really a deception of sorts. Because in both cases, what’s really happening, despite the large differences in the programs’ developmental sophistication arising from the passage of time, is little more than brute-force searches of massive amounts of information in order to predict what the next words likely should be. Often getting it right, but sometimes getting it wrong — with good, bad, or trifling consequences.

 

I propose, however, that the development of artificial intelligence — particularly what is called ‘artificial general intelligence’ (AGI) — will get us there: an analog of the human brain, with an understanding of semantic content. At that point, today’s chatbots will look like mere novelties, however obedient their functional execution, and the ‘neural networks’ of feasibly self-optimising artificial general intelligence will match, or elastically stretch beyond, human cognition, forcing the hotbed issues of what consciousness is to be rethought.


Sunday, 26 February 2023

Universal Human Rights for Everyone, Everywhere

Jean-Jacques Rousseau

By Keith Tidman


Human rights exist only if people believe that they do and act accordingly. To that extent, we are, collectively, architects of our destiny — taking part in an exercise in the powers of human dignity and sovereignty. Might we, therefore, justly consider human rights as universal?

To presume that there are such rights, governments must be fashioned according to the people’s freely subscribed blueprints, in such ways that policymaking and consignment of authority in society represent citizens’ choices and that power is willingly shared. Such individual autonomy is itself a fundamental human right: a norm to be exercised by all, in all corners. Despite scattered conspicuous headwinds. Respect for and attachment to human rights in relations with others is binding, prevailing over the mercurial whimsy of institutional dictates.

For clarity, universal human rights are inalienable norms that apply to everyone, everywhere. No nation ought to self-immunise as an exception. These human rights are not mere privileges. By definition they represent the natural order of things; that is, these rights are naturally, not institutionally, endowed. There’s no place for governmental, legal, or social neglect or misapplication of those norms, heretically violating human dignity. This point about dignity is redolent of Jean-Jacques Rousseau’s notions of civil society, explained in his Social Contract (1762), which provocatively opens with the famous ‘Man is born free, and everywhere he is in chains’. By this Rousseau was referring to the tradeoff between people’s deference to government authority over moral behaviour in exchange for whatever freedoms civilisation might grant as part of the social contract. The contrary notion, however, asserts that human rights are natural, protected from government caprice in their unassailability — claims secured by the humanitarianism of citizens in all countries, regardless of cultural differences.

The idea that everyone has a claim to immutable rights has the appeal of providing a platform for calling out wrongful behaviour and a moral voice for preventing or remedying harms, in compliance with universal standards. The standards act as moral guarantees and assurance of oversight. The differences among cultures should not translate to the warped misplacement of relativism in calculating otherwise clear-cut universal rights aimed to protect.

International nongovernmental organisations (such as Human Rights Watch) have laboured to protect fundamental liberties around the world, investigating abuses. Several other human rights organisations, such as the United Nations, have sought to codify people's rights, like those spelled out in the UN Declaration of Human Rights. The many universal human rights listed by the declaration include these:
‘All human beings are born free; everyone has the right to life, liberty, and security; no one shall be subjected to torture; everyone has the right to freedom of thought, conscience, and religion; everyone has the right to education; no one shall be held in slavery; all are equal before the law’.

These aims have been ‘hallowed’ by the several documents spelling out moral canon, in aggregate amounting to an international bill of rights to which countries are to commit and abide by. This has been done without regard to appeals to national sovereignty or cultural differences, which might otherwise prejudice the process, skew policy, undermine moral universalism, lay claim to government dominion, or cater to geopolitical bickering — such things always threatening to pull the legs out from under citizens’ human rights.

These kinds of organisations have set the philosophical framework for determining, spelling out, justifying, and promoting the implementation of human rights on as global a scale as possible. Aristotle, in the Nicomachean Ethics, wrote to this core point, saying:
‘A rule of justice is natural that has the same validity everywhere, and does not depend on our accepting it’.
That is, natural justice foreruns social, historical, and political institutions shaped to bring about conformance to their arbitrary, self-serving systems of fairness and justice. Aristotle goes on:
‘Some people think that all rules of justice are merely conventional, because whereas a law of nature is immutable and has the same validity everywhere, as fire burns both here and in Persia, rules of justice are seen to vary. That rules of justice vary is not absolutely true, but only with qualifications. Among the gods indeed it is perhaps not true at all; but in our world, although there is such a thing as Natural Justice, all rules of justice are variable. But nevertheless there is such a thing as Natural Justice as well as justice not ordained by nature’.
Natural justice accordingly applies to everyone, everywhere, where moral beliefs are objectively corroborated as universal truths and certified as profound human goods. In this model, it is the individual who shoulders the task of appraising the moral content of institutional decision-making.

Likewise, it was John Locke, the 17th-century English philosopher, who argued, in his Two Treatises of Government, the case that individuals enjoy natural rights, entirely non-contingent on the nation-state, and that whatever authority the state might lay claim to rests in guarding, promoting, and serving the natural rights of citizens. The natural rights to life, liberty, and property set clear limits to the power of the state. There was no mystery as to Locke’s position: states existed singularly to serve the natural rights of the people.

A century later, Immanuel Kant was in the vanguard in similarly taking a strong moral position on validating the importance of human rights, chiefly the entangled ideals of equality and the moral autonomy and self-determination of rational people.

The combination of the universality and moral heft of human rights clearly imparts greater potency to people’s rights, untethered to legal, institutional force of acknowledgment. As such, human rights are enjoyed equally, by everyone, all the time. It makes sense to conclude that everyone is therefore responsible for guarding the rights of fellow citizens, not just their own. Yet, in practice it is the political regime and perhaps international organisations that bear that load.

And within the ranks of philosophers, human-rights universalism has sometimes clashed with relativism, whose adherents reject universal (objective) moral canon. They paint human rights as contingently shaped by social, historical, and cultural factors, the belief being that rights are deemed apropos only in those countries whose cultures allow them. Yet, surely, relativism still permits the universality of numerous rights. We instinctively know that not all rights are relative. At the least, societies must parse which rights endure as universal and which as relative, and hope the former are favoured.

That optimism notwithstanding, many national governments around the world choose not to uphold, either in part or in whole, fundamental rights in their countries. Perhaps the most transfixing case for universal human rights, as entitlements, is the inhumanity that haunts swaths of the world today, instigated for the most trifling of reasons.

Monday, 13 February 2023

Picture Post #42 Tin Walls



'Because things don’t appear to be the known thing; they aren’t what they seemed to be
neither will they become what they might appear to become.'

 

Posted by Martin Cohen


 


Shanty Town 
 
OK, I said that the next Picture Post should be from Ukraine, where there are so many scenes of urban destruction, yet destruction is not only sadder than dilapidation, it is also somehow less interesting. Destruction tells a story of random violence, or the impersonal power of nature gone mad, but it is not a human story. This image, however, is a tale of human ingenuity and perseverance.

There's a kind of aesthetic too, in the parallel and vertical lines - as if drawn by a rather slapdash artist. Likewise, the rust gives the steel sheets an interest beyond their actual purpose, which would surely be just to keep the rain out.

That people live like this is really rather a terrible indictment of a world in which there is enough wealth for everyone, if it could be shared out, but to me this house is also testament to something more positive: an especially human mix of enthusiasm and tenacity.



Tuesday, 24 January 2023

‘Brain in a Vat’: A Thought Experiment


By Keith Tidman

Let’s hypothesise that someone’s brain has been removed from the body and immersed in a vat of fluids essential for keeping the brain not only alive and healthy but functioning normally — as if it is still in a human skull sustained by other bodily organs.

A version of this thought experiment was laid out by René Descartes in 1641 in the Meditations on First Philosophy, as part of inquiring whether sensory impressions are delusions. An investigation that ultimately led to his celebrated conclusion, ‘Cogito, ergo sum’ (‘I think, therefore I am’). Fast-forward to American philosopher Gilbert Harman, who modernised the what-if experiment in 1973. Harman’s update included introducing the idea of a vat (in place of the allegorical device of information being fed to someone by an ‘evil demon’, originally conceived by Descartes) in order to acknowledge the contemporary influences of neuroscience in understanding the brain and mind.

In this thought experiment, a brain separated from its body and sustained in a vat of chemicals is assumed to possess consciousness — that is, the neuronal correlates of perception, experience, awareness, wonderment, cognition, abstraction, and higher-order thought — with its nerve endings attached by wires to a quantum computer and a sophisticated program. Scientists feed the disembodied brain with electrical signals, identical to those that people are familiar with receiving during the process of interacting through the senses with a notional external world. Hooked up in this manner, the brain (mind) in the vat therefore does not physically interact with what we otherwise perceive as a material world. Conceptualizations of a physical world — fed to the brain via computer prompts and mimicking such encounters — suffice for the awareness of experience.

The aim of this what-if experiment is to test questions not about science or even ‘Matrix’-like science fiction, but about epistemology — queries such as what do we know, how do we know it, with what certainty do we know it, and why does what we know matter? Specifically, issues to do with scepticism, truth, mind, interpretation, belief, and reality-versus-illusion — influenced by the lack of irrefutable evidence that we are not, in fact, brains in vats. We might regard these notions as solipsistic, where the mind believes nothing (no mental state) exists beyond what it alone experiences and thinks it knows.

In the brain-in-a-vat scenario, the mind cannot differentiate between experiences of things and events in the physical, external world and those virtual experiences electrically prompted by the scientists who programmed the computer. Yet, since the brain is in all ways experiencing a reality, whether or not illusionary, then even in the absence of a body the mind bears the complement of higher-order qualities required to be a person, invested with full-on human-level consciousness. To the brain suspended in a vat and to the brain housed in a skull sitting atop a body, the mental life experienced is presumed to be the same.

But my question, then, is this: Is either reality — that for which the computer provides evidence and that for which external things and events provide evidence — more convincing (more real, that is) than the other? After all, are not both experiences of, say, a blue sky with puffy clouds qualitatively and notionally the same: whereby both realities are the product of impulses, even if the sources and paths of the impulses differ?

If the experiences are qualitatively the same, the philosophical sceptic might maintain that much about the external world that we surmise is true, like the briskness of a winter morning or the aroma of fresh-baked bread, is in fact hard to nail down. The reason being that, in the case of a brain in a vat, the evidence of a reality provided by scientists is assumed to resemble that provided by a material external world, yet results in a different interpretation of someone’s experiences. We might wonder how many descriptions there are of how the conceptualized world corresponds to what we ambitiously call ultimate reality.

So, for example, the sceptical hypothesis asserts that if we are unsure about not being a brain in a vat, then we cannot disregard the possibility that all our propositions (alleged knowledge) about the outside physical world would not hold up to scrutiny. This argument can be expressed by the following syllogism:

1. If I know any proposition about external things and events, then I know that I am not a brain in a vat;

2. I do not know that I am not a brain in a vat;

3. Therefore, I do not know any proposition about external things and events in the external world.
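Rendered in bare logical form (a sketch only, writing K(x) for ‘I know that x’ and v for ‘I am a brain in a vat’), the argument is a straightforward modus tollens:

```latex
% p : any proposition about external things and events
% K(x) : 'I know that x'        v : 'I am a brain in a vat'
\begin{align*}
1.\quad & K(p) \rightarrow K(\lnot v)   && \text{(premise)} \\
2.\quad & \lnot K(\lnot v)              && \text{(premise)} \\
3.\quad & \therefore\ \lnot K(p)        && \text{(modus tollens, 1 and 2)}
\end{align*}
```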


Further, given that a brain in a vat and a brain in a skull would receive identical stimuli — and that the latter are the only means either brain is able to relate to its surroundings — then neither brain can determine if it is the one bathed in a vat or the one embodied in a skull. Neither mind can be sure of the soundness of what it thinks it knows, even knowledge of a world of supposed mind-independent things and events. This is the case, even though computer-generated impulses realistically substitute for not directly interacting bodily with a material external world. So, for instance, when a brain in a vat believes that ‘wind is blowing’, there is no wind — no rushing movement of air molecules — but rather the computer-coded, mental simulation of wind. That is, replication of the qualitative state of physical reality.

I would argue that the world experienced by the brain in a vat is not fictitious or unauthentic, but rather is as real to the disembodied brain and mind as the external, physical world is to the embodied brain. Both brains fashion valid representations of truth. I therefore propose that each brain is ‘sufficient’ to qualify as a person: where, notably, the brains’ housing (vat or skull) and signal pathways (digital or sensory) do not matter.

Monday, 9 January 2023

The Philosophy of Science


The solar eclipse of May 29, 1919, forced a rethink of fundamental laws of physics

By Keith Tidman


Science aims at uncovering what is true. And it is equipped with all the tools — natural laws, methods, technologies, mathematics — that it needs to succeed. Indeed, in many ways, science works exquisitely. But does science ever actually arrive at reality? Or is science, despite its persuasiveness, paradoxically consigned to forever wending closer to its goal, yet not quite arriving — as theories are either amended to fit new findings, or they have to be replaced outright?

It is the case that science relies on observation — especially measurement. Observation confirms and grounds the validity of contending models of reality, empowering critical analysis to probe the details. The role of analysis is to scrutinise a theory’s scaffolding, to better visualise the coherent whole, broadening and deepening what is understood of the natural world. To these aims, science, at its best, has a knack for abiding by the ‘laws of parsimony’ of Occam’s razor — describing complexity as simply as possible, with the fewest suppositions to get the job done.

To be clear, other fields attempt this self-scrutiny and rigour, too, in one manner or another, as they fuel humanity’s flame of creative discovery and invention. They include history, languages, aesthetics, rhetoric, ethics, anthropology, law, religion, and of course philosophy, among others. But just as these fields are unique in their mission (oriented in the present) and their vision (oriented in the future), so is science — the latter heralding a physical world thought to be rational.

Accordingly, in science, theories should agree with evidence-informed, objective observations. Results should be replicated every time the tests and observations are run, confirming predictions. This bottom-up process is driven by what is called inductive reasoning: a general principle — a conclusion, such as an explanatory theory — is derived from multiple observations in which a pattern is discerned. An example of inductive reasoning at its best is Newton’s Third Law of Motion, which states that for every action (force) there is an equal and opposite reaction. It is a law that has worked unfailingly in countless instances.
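
The law can be written compactly in the standard vector notation; the subscripts A and B below are simply illustrative labels for two interacting bodies:

\[
\vec{F}_{A \to B} \;=\; -\,\vec{F}_{B \to A}
\]

That is, the force body A exerts on body B is equal in magnitude and opposite in direction to the force body B exerts back on A.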

But such successes do not eliminate inductive reasoning’s sliver of vulnerability. Karl Popper, the 20th-century Austrian-British philosopher of science, considered all scientific knowledge to be provisional. He illustrated his point with the example of a person who, having seen only white swans, concludes that all swans are white. However, the person later discovers a black swan, an event conclusively rebutting the universality of white swans. Of course, abandoning this principle has little consequence. But what if an exception to Newton’s universal law governing action and reaction were to appear instead?
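
Popper’s swan example can be put compactly in logical notation; the predicate names Swan and White are illustrative shorthand, not part of his text. The inductive generalisation

\[
\forall x\,\bigl(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\bigr)
\]

is falsified by a single observed counterexample,

\[
\exists x\,\bigl(\mathrm{Swan}(x) \wedge \neg\mathrm{White}(x)\bigr),
\]

which is why no finite run of confirming observations can establish the universal claim once and for all.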

Perhaps, as Popper suggests, truth, scientific and otherwise, should therefore only ever be parsed as partial or incomplete, with hypotheses offering different truth-values; our striving for unconditional truth remains a task in the making. This is of particular relevance in complex areas: like the nature of being and existence (ontology); or universal concepts, transcendental ideas, metaphysics, and the fundamentals of what we think we know and understand (epistemology). (These areas also attempt to reveal the truth of unobserved things.)

And so, Popper introduced a new test of truth: ‘falsifiability’. That is, all scientific assertions should be subjected to the test of being proven false — the opposite of seeking confirmation. Einstein, too, was more interested in whether experiments disagreed with his bold conjectures, as such experiments would render his theories invalid — rather than merely provide further evidence for them.

Nonetheless, as human nature would have it, Einstein was jubilant when his prediction that massive objects bend light was confirmed by astronomical observations of light passing close to the sun during the total solar eclipse of 1919, an observation that required revision of Newton’s formulation of the laws of gravity.

Testability is also central to another aspect of epistemology: drawing a line between true science — whose predictions are subject to rigorous falsification and thus potential disproof — and pseudoscience, which offers speculative, untestable predictions resting on uncontested dogma. Pseudoscience balances precariously, depending as it does on adopters’ fickle belief-commitment rather than on rigorous tests and critical analyses.

On the plus side, if theories are not successfully falsified despite earnest efforts to do so, their claims may have a greater chance of turning out true. Well, at least until new information surfaces to force change to a model. Or until ingenious thought experiments and insights lead to the sweeping replacement of a theory. Or until investigation explains how to merge models formerly considered stubbornly unalike, yet valid in their respective domains. An example of this last point is general relativity and quantum mechanics, which have so far remained irreconcilable in describing reality (in matters ranging from spacetime to gravity), despite physicists’ attempts to unite them.

As to the wholesale switching out of scientific theories, it may appear compelling to make the switch based on accumulated new findings, or on the sense that the old theory has major fault lines and has run its useful course. The 20th-century American philosopher of science Thomas Kuhn was influential in this regard, coining the formative expression ‘paradigm shift’. The shift occurs when a new scientific theory replaces its problem-ridden predecessor, based on a consensus among scientists that the new theory (paradigm) better describes the world, offering a ‘revolutionary’ new understanding that requires a shift in fundamental concepts.


Among the great paradigm shifts of history is Copernicus’s sun-centred (heliocentric) model of planetary motion, replacing Ptolemy’s Earth-centred model. Another was Charles Darwin’s theory of natural selection as key to the biological sciences, informing the origin and evolution of species. Additionally, Einstein’s theories of relativity ushered in major changes to Newton’s understanding of the physical universe. Also significant was the recognition that plate tectonics explains large-scale geologic change. Significant, too, was the development by Niels Bohr and others of quantum mechanics, replacing classical mechanics at microscopic scales. The story of paradigm shifts is long and continues.


Science’s progress in unveiling the universe’s mysteries entails dynamic processes. One is the enduring sustainability of theories, seemingly etched in stone, that hold up under unsparing tests of verification and falsification. Another is the implementation of amendments as contrary findings chip away at the efficacy of models. And another still is the revolutionary replacement of scientific models as legacy theories become frail and fail. These are reasons for belief in the methods of positivism.


In 1960, the physicist Eugene Wigner wrote what became a famous paper in philosophy and other circles, coining the evocative expression ‘unreasonable effectiveness’. This was in reference to the role of mathematics in the natural sciences, but he could well have been speaking of the role of science itself in acquiring understanding of the world.


Monday, 26 December 2022

Picture Post #41: The Aesthetics of Destruction



'Because things don’t appear to be the known thing; they aren’t what they seemed to be
neither will they become what they might appear to become.'

 

Posted by Martin Cohen

 

 
Herald Weekly image of a store in Fukushima sometime after the nuclear reactor there partially exploded. 
 
I think the next Picture Post should be from Ukraine, where there are so many scenes of urban destruction that are at once both tragic and appalling – yet also somehow (like this scene) rather calming. These are postcards from a post-apocalyptic future, words of chaos that humanity can only briefly put off.

But consider this scene in particular, which has the quality of a paper seascape, its waves created by the numerous documents and papers thrown onto the floor. Or, writing just after Christmas, it might remind some people of the detritus left after an extravagant present-giving ceremony where the parcels and wrapping paper are all that remain.

It is not on a huge scale, this destruction; we could imagine being tasked with cleaning it up. But it’s not the kind of mess that we come across every day either.