Monday, 24 July 2017

Identity: From Theseus's Paradox to the Singularity

Posted by Keith Tidman

A "replica" of an ancient Greek merchant ship based on the remains of a ship that wrecked about 2,500 years ago.  With acknowledgements to Donald Hart Keith.
As the legend goes, Theseus was an imposing Greek hero, who consolidated power and became the mythical king of Athens. Along the way, he awed everyone by leading victorious military campaigns. The Athenians honoured Theseus by displaying his ship in the Athenian harbour. As the decades rolled by, parts of the ship rotted. To preserve the memorial, each time a plank decayed, the Athenians replaced it with a new plank of the same kind of wood. First one plank, then several, then many, then all.

As parts of the ship were replaced, at what point was it no longer the ‘ship of Theseus’? Or did the ship retain its unique (undiminished) identity the entire time, no matter how many planks were replaced? Do the answers to those two questions change if the old planks, which had been warehoused rather than disposed of, were later reassembled into the ship? Which, then, is the legendary ‘ship of Theseus’, deserving of reverence — the ship whose planks had been replaced over the years, or the ship reassembled from the stored rotten planks, or neither? The Greek biographer and philosopher Plutarch elaborated on the paradox in the first century in 'Life of Theseus'.

At the core of these questions about a mythical ship is the matter of ‘identity’. Such as how to define ‘an object’; whether an object is limited to the sum of people’s experience of it; whether an object can in some manner stay the same, regardless of the (macro or micro) changes it undergoes; whether the same rules regarding identity apply to all objects, or if there are exceptions; whether gradual and emergent, rather than immediate, change makes a difference in identity; and so forth.

The seventeenth-century English political philosopher, Thomas Hobbes, weighed in on the conundrum, asking, ‘Which of the two existing ships is numerically one and the same ship as Theseus’s original ship?’ He went on to offer this take on the matter:
‘If some part of the first material has been removed or another part has been added, that ship will be another being, or another body. For, there cannot be a body “the same in number” whose parts are not all the same, because all a body’s parts, taken collectively, are the same as the whole.’
The discussion is not, of course, confined to Theseus’s ship. All physical objects are subject to change over time: suns (stars), trees, houses, cats, rugs, hammers, engines, DNA, the Andromeda galaxy, monuments, icebergs, oceans. So are differently categorised entities, such as societies, institutions, and organisations. And people’s bodies, which change with age of course — but more particularly, whose cells, it is popularly said, get replaced almost in their entirety roughly every seven years throughout one’s life. Yet, we observe that amidst such change — even radical or wholesale change — the names of things typically don’t change; we don’t start calling them something else. (Hobbes is still Hobbes seven years later, despite cellular replacement.)

The examples abound, as do the issues of identity. It was what led the ancient Greek philosopher Heraclitus to famously question whether, in light of continuous change, one can ‘step into the same river twice’—answering that it’s ‘not the same river and he’s not the same man’. And it’s what led Hobbes, in the case of the human body, to conveniently switch from the ‘same parts’ principle he had applied to Theseus’s ship, saying regarding people, ‘because of the unbroken nature of the flux by which matter decays and is replaced, he is always the same man’. (Or woman. Or child.) By extension of this principle, objects like the sun, though changing — emitting energy through nuclear fusion and undergoing cycles — have what might be called a core ‘persistence’, even as aspects of their form change.
‘If the same substance which thinks be changed,
it can be the same person; or remaining
the same, it can be a different person?’ — John Locke
But people, especially, are self-evidently more than just bodies. They’re also identified by their minds — knowledge, memories, creative instincts, intentions, wants, likes and dislikes, sense of self, sense of others, sense of time, dreams, curiosity, perceptions, imagination, spirituality, hopes, acquisitiveness, relationships, values, and all the rest. This aspect of ‘personal identity’, which John Locke encapsulates under the label ‘consciousness’ (self) and which undergoes continuous change, underpins the identity of a person, even over time — what has been referred to as ‘diachronic’ personal identity. In contrast, the body and mind, at any single moment in time, have been referred to as ‘synchronic’ personal identity. We remain aware of both states — continuous change and single moments — in turns (that is, the mind rapidly switching back and forth, analogous to what happens while supposedly 'multitasking'), depending on the circumstance.

The philosophical context surrounding personal identity — what’s essential and sufficient for personhood and identity — relates to today’s several variants of the so-called ‘singularity’, spurring modern-day paradoxes and thought experiments. For example, the intervention of humans to spur biological evolution — through neuroscience and artificial intelligence — beyond current physical and cognitive limitations is one way to express the ‘singularity’. One might choose to replace organs and other parts of the body — the way the planks of Theseus’s ship were replaced — with non-biological components and to install brain enhancements that make heightened intelligence (even what’s been dubbed ultraintelligence) possible. This unfolding may be continuous, undergoing a so-called phase transition.

The futurologist, Ray Kurzweil, has observed, ‘We're going to become increasingly non-biological’ — attaining a tipping point ‘where the non-biological part dominates and the biological part is not important any more’. The process entails the (re)engineering of descendants, where each milestone of change stretches the natural features of human biology. It’s where the identity conundrum is revisited, with an affirmative nod to the belief that mind and body lend themselves to major enhancement. Since such a process would occur gradually and continuously, rather than just in one fell swoop (momentary), it would fall under the rubric of ‘diachronic’ change. There’s persistence, according to which personhood — the same person — remains despite the incremental change.

In that same manner, some blend of neuroscience, artificial intelligence, heuristics, the biological sciences, and transformative, leading-edge technology, with influences from disciplines like philosophy and the social sciences, may allow a future generation to ‘upload the mind’ — scanning and mapping the mind’s salient features — from a person to another substrate. That other substrate may be biological or a many-orders-of-magnitude-more-powerful (such as quantum) computer. The uploaded mind — ‘whole-brain emulation’ — may preserve, indistinguishably, the consciousness and personal identity of the person from whom the mind came. ‘Captured’, in this term’s most benign sense, from the activities of the brain’s tens of billions of neurons and trillions of synapses.

‘Even in a different body, you’d still be you
if you had the same beliefs, the same worldview,
and the same memories.’ — Daniel Dennett
If the process can happen once, it can happen multiple times, for the same person. In that case, reflecting back on Theseus’s ship and notions of personal identity, which intuitively is the real person? Just the original? Just the first upload? The original and the first upload? The original and all the uploads? None of the uploads? How would ‘obsolescence’ fit in, or not fit in? The terms ‘person’ and ‘identity’ will certainly need to be revised, beyond the definitions already raised by philosophers through history, to reflect the new realities presented to us by rapid invention and reinvention.

Concomitantly, many issues will bubble to the surface regarding social, ethical, regulatory, legal, spiritual, and other considerations in a world of emulated (duplicated) personhood. Such as: what might be the new ethical universe that society must make sense of, and what may be the (ever-shifting) constraints; whether the original person and emulated person could claim equal rights; whether any one person (the original or emulation) could choose to die at some point; what changes society might confront, such as inequities in opportunity and shifting centers of power; what institutions might be necessary to settle the questions and manage the process in order to minimise disruption; and so forth, all the while venturing increasingly into a curiously untested zone.

The possibilities are thorny, as well as hard to anticipate in their entirety; many broad contours are apparent, with specificity to emerge at its own pace. The possibilities will become increasingly apparent as new capabilities arise (building on one another) and as society is therefore obliged, by the press of circumstances, to weigh the what and how-to — as well as the ‘ought’, of course. That qualified level of predictive certainty is not unexpected, after all: given sluggish change in the Medieval Period, our twelfth-century forebears, for example, had no problem anticipating what thirteenth-century life might offer. At that time in history, social change was more in line with the slow, plank-by-plank changes to Theseus’s ship. Today, the new dynamic of what one might call precocious change — combined with increasingly successful, productive, leveraged alliances among the various disciplines — makes gazing into the twenty-second century an unprecedentedly challenging briar patch.

New paradoxes surrounding humanity in the context of change, and thus of identity (who and what I am and will become), must certainly arise. At the very least, amidst startling, transformative self-reinvention, the question of what is the bedrock of personal identity will be paramount.

Monday, 17 July 2017

Pity the Fundamentalists

Posted by Mirjam Scarborough*
          with Thomas Scarborough

Ferdinand Hodler - Die Lebensmüden (Tired of Life) 1892

What is it that sustains the fundamentalist?  I should say, the religious fundamentalist.  In particular, the fundamentalist who is willing to give up everything for God.  Of this description, there are fundamentalists of many kinds: missionaries, militants, medics, volunteers – priests, nuns, imams, rabbis – anyone for whom God means the world, and who proves it by his or her sacrifice.

I had the privilege of researching religious fundamentalism through a ten-year study of women missionaries in Central Africa.  These represented the most committed members of fundamentalist faith**.  In a sense, they were the foot soldiers of the avant garde.  They were the leading edge – ready to give up everything in the name of God: friends, comforts, health, security, even freedom, children, life itself.

The reason why they did it, not unexpectedly, was that fundamentalists see themselves as being under orders – and these are orders from God himself.  The orders may go by various names: God's summons, commission, commandment, burden, among other terms.  In the case of the women missionaries, it was a 'call'. 

This call from God is not 'empty', so to speak.  It is rich in content.  Yet one thing characterises it above all.  God’s demands are high.  His paragons are perfect.  One gives much and expects little.  The orders which the religious fundamentalist receives make the highest demands – indeed they represent, generally speaking, the hardest tasks that anyone may aspire to. 

The question of my research was simple: 'What is it that sustains such a call?'

My intuitive answer was: God himself.  It is sufficient to know that one is called by God, to see one through any challenge and hardship on earth – and then, in many cases, to add to it love.  In fact, this proved to be true, and was borne out by the research.  Orders from God – or the perception of orders from God – encouraged the religious fundamentalist to great commitment and endurance, heroism and sacrifice.

However, it didn’t last.  It couldn’t last.  In the long term – which was about four years – the heroes crumbled.  There were intense stresses.  Their expectations were deeply challenged.  They suffered severe emotional trauma and exhaustion.  In fact, it was accepted as the norm that one would 'break down' in year four. Most, if not all, of the missionaries I interviewed needed medical interventions to stabilise their condition.

Even then, statistically, 50% of them were lost to the mission every thirteen years. Worse than that, anecdotal evidence showed that their spouses and children may have suffered the deepest trauma.

The call of God sustained them at first. It inspired them to extraordinary commitment and endurance. Up to a point, my assumptions were on target. Those who feel that they are called by God – perhaps ordered, summoned, commanded, commissioned by him – are sustained by the call. But as months grow into years, they nearly all crumble. They are utterly depleted.

However, there was a surprise.  Some of fundamentalism's foot soldiers – though by no means all – rebounded.  After their first leave of absence, broken and beaten, they returned to the mission field repaired, if not refreshed – to the very same circumstances – never to experience such crisis again.

What changed?  It was not their fundamentalism, really.  They did not lose the sense of being under the call and commandment of God, nor did they feel in any way that his high demands had slipped.  But they let go of personal effort, and they trusted God to do it – in spite of them.

It all turned, therefore, on their understanding of their trust in God. No longer did they trust God to give them super-human powers for the task, or an indomitable will.  Rather, in brokenness, they trusted him to bless their great weakness.  Some called it the purification of the call.  Some called it repentance.

It all hinged on this one thing: God is great – but he does not impart his greatness to us.  It does not rub off on mere mortals.

The religious fundamentalist – the avant garde – missionaries, militants, medics, volunteers – priests, nuns, imams, rabbis – anyone for whom God means the world, and who proves it by his or her sacrifice – all are of only fleeting usefulness to the cause, if any usefulness at all, until their call is purified.  In fact, until then, if the anecdotal evidence is to be believed, they do great damage not only to themselves but to all those close to them.

It is tragedy and ruin – until they find a realistic sense of themselves, and a realistic sense of the God they serve.  Pity the religious fundamentalists, and all those near to them – at least, those whose call has not yet been 'cleansed'.



* Rev. Dr. Mirjam Scarborough (1957-2011) was a doctor of philosophy and a missiologist.
** My study included some who are sooner referred to as 'revivalists'.

Monday, 10 July 2017

Poetry: A Notice Offering Amnesty

Posted by Chengde Chen*



A Notice Offering Amnesty
Written after the Grenfell Tower fire

By Chengde Chen

To determine the numbers of dead,
The police appeal to the survivors:

‘Please let us know your situation
And that of others you may know of.
Don’t worry about your immigration status–
We will not report it to the Home Office,
Nor will the Home Office pursue it.
So, please contact us!’

I seem to be touched by this,
But don’t really know what for.
For humanity in the law?
Or because we’re guilty of so lacking in it,
That we have to sacrifice the law
To compensate?



* Chengde Chen is the author of Five Themes of Today, Open Gate Press, London. chengde@sipgroup.com

Monday, 3 July 2017

Picture Post #26. Life-Matters



'Because things don’t appear to be the known thing; they aren’t what they seemed to be, neither will they become what they might appear to become.'

Posted by Tessa den Uyl and Martin Cohen

Guatemala, 1968. Picture credit: Jill Gibson
A woman with a newborn passes by the word ‘Muerte’ written on the wall. Nothing could be more natural; birth and death simply belong to each other.

Which raises two questions: what happens when death becomes a symbol to reclaim something belonging to the past? What happens when a distinction is made about who should, and who should not, live? Because then the right to live is not the same concept for all of us.

Suppose that birth is a concept about being, and death a concept about non-being; then whatever touches upon these concepts touches upon a principle. The problem is not birth, nor yet death itself. The problem is in the claims being made. To respect life means to respect death. Herein lies something universal.

A note by the photographer, Jill Gibson:

During the years 1966–1968, I was photo-documenting the work and progress of doctors who were examining the medical problems of children living in the pure Mayan village of Santa Maria Cauqué, located in the hills 30 minutes outside of Guatemala City. Some days I travelled in a 4-wheel drive vehicle, up riverbed roads for five to six hours, just to reach remote villages, along with a doctor. The doctor educated me about the United Fruit Company and its influence over the Guatemalan government, and the ramifications of U.S. involvement in the country. So, I believe the word Muerte, as graffiti on the wall, had something to do with the resistance at the time.

There was in fact a lot of death going on then, as the country was immersed in military violence from 1965 through 1995. We saw it again first hand in 1984. During these years, the Mayans were being annihilated.

Monday, 26 June 2017

The Death Penalty: An Argument for Global Abolition


Posted by Keith Tidman

In 1957, Albert Camus wrote an essay called Reflections on the Guillotine. As well as arguing against it on grounds of principle, he also speaks of the ineffectiveness of the punishment:
‘According to one magistrate, the overwhelming majority of the murderers he had tried did not know, when they shaved themselves that morning, that they were going to kill someone that night. In short, capital punishment cannot intimidate the man who throws himself upon crime as one throws oneself into misery.’
For myself, too, the death penalty is an archaic practice, a vestige with no place in a 21st-century world. In the arena of constitutional law, the death penalty amounts to ‘cruel and unusual’ (inhumane) punishment. In the arena of ethics, the death penalty is an immoral assault on human rights, dignity, and life’s preeminence.

Through the millennia, social norms habitually tethered criminal punishment to ‘retribution’ — which, minus the rhetorical dressing, distils to ‘revenge’. ‘Due process of law’ and ‘equal protection under the law’ were random, rare, and capricious. In exercising retribution, societies shunted aside the rule of authentic proportionality, with execution the go-to punishment for a far-ranging set of offences, both big and small — murder only one among them. In some societies, matters like corruption, treason, terrorism, antigovernment agitation, and even select ‘antisocial’ behaviours likewise qualified for execution — and other extreme recourses — shades of which linger today.

Resort through the ages to state-sanctioned, ceremonial killing (and other severe corporal punishment) reflected the prevailing norms of societies, with little stock placed on the deep-rooted, inviolable value of human life. The aim was variously to control, coerce, impose suffering, and ultimately dehumanise — very much as enemies in war find it easier to kill if they create ‘subhuman’ caricatures of the enemy. Despite the death penalty’s barbarity, some present-day societies retain this remnant from humanity’s darker past: According to Amnesty International, twenty-three countries — scattered among the Asia-Pacific, Africa, the United States in the Americas, and Belarus in Europe — carried out executions in 2016, while fifty-five countries sentenced people to death that year.

But condemnation of the death penalty does not, of course, preclude imposing harsh punishment for criminal activity. Even the most progressive, liberally democratic countries, abiding by enlightened notions of justice, appropriately accommodate strict punishment — though well short of society’s premeditatedly killing its citizens through application of the death penalty. The aims of severe punishment may be several and, for sure, reasonable: to preserve social orderliness, disincentivise criminal behaviour, mollify victims, reinforce legal canon, express moral indignation, cement a vision of fairness, and reprimand those found culpable. Largely fair objectives, if exercised dispassionately through due process of law. These principles are fundamental and immutable in civil, working — and rules-based — societies. Nowhere, however, does the death penalty fit in there; and nowhere is it obvious that death is a proportionate (and just) response to murder.
________________________________________

‘One ought not return injustice
for injustice’ — Socrates
________________________________________

Let’s take a moment, then, to look at punishment. Sentencing may be couched as ‘consequentialist’, in which case punishment’s purpose is utilitarian and forward looking. That is, punishment for wrongdoing anticipates future outcomes for society, such as eliminating (or more realistically, curtailing) criminal behaviour. The general interest and welfare of society — decidedly abstract notions, subject to various definitions — serve as the desired and sufficient end state.

Alternatively, punishment may be couched as ‘deontological’. In that event, the deed of punishment is itself considered a moral good, apart from consequences. Deontology entails rules-based ethics — living under the rule of law, as a norm within either liberal or conservative societies and systems of governance — while still attaining retributive objectives. Or, commonly, punishment may be understood as an alliance of both consequentialism and deontology. Regardless of choice — whether emphasis is on consequentialism or deontology or a hybrid of the two — the risk of punishing the innocent, especially given the irreversibility of the death penalty in the case of discovered mistakes, looms large. As such, the choice among consequentialism, deontology, or a hybrid matters little to any attempt to support a case for capital punishment.

Furthermore, the meting out of justice works only if knowledge is reliable and certain. That is, knowledge of individuals’ culpability, the competence of defence and prosecutorial lawyers, unbiased evidence (both exculpatory and inculpatory), the randomness of convictions across demographics, the sense of just deserts, the fairness of particular punishments (proportionality), and the prospective benefits to society of specific punitive measures. Broadly speaking: what do we know, how do we know it, and what weight does each consideration carry — epistemological issues bound up with the ethical ones. In many instances, racial, ethnic, gender, educational, or socioeconomic prejudices (toward defendants and victims alike) skew considerations of guilt and, in particular, the discretionary imposition of the death penalty. In some countries, politics and ideology — even what’s perceived to threaten a regime’s legitimacy — may damn the accused. To those sociological extents, ‘equal protection of the law’ becomes largely moot.

Yet at the core, neither consequentialism — purported gains to society from punishment’s outcomes — nor deontology — purported intrinsic, self-evident morality of particular sentences — rises to the level of sufficiently undergirding the ethical case for resorting to the death penalty. Nor does retribution (revenge) or proportionality (‘eye for an eye, tooth for a tooth’). After all, whether death is the proportionate response to murder remains highly suspect. Indeed, no qualitative or quantitative logic, no matter how elegantly crafted, successfully supports society’s recourse to premeditatedly and ceremoniously executing citizens as part of its penal code.
_____________________________________________

‘Capital punishment is the most
premeditated of murders’ — Albert Camus
_____________________________________________

There is no public-safety angle, furthermore, that could not be served equally well by lifetime incarceration — without, if so adjudged, consideration of rehabilitation and redemption, and thus without the possibility of parole. Indeed, evidence does not point to the death penalty improving public safety. For example, the death penalty has no deterrent value — that is, perpetrators don’t first contemplate the possibility of execution in calculating whether or not to commit murder or other violent crime. The starting position therefore ought to be that human life is sacrosanct — life’s natural origins, its natural course, and its natural end. Society ought not deviate from that principle in normalising particular punishments for criminal — even heinously criminal — behaviour. The guiding moral principle is singular: that it’s ethically unprincipled for a government to premeditatedly take its citizens’ lives in order to punish, a measure that morally sullies the society condoning it.

Society’s applying the death penalty as an institutional sentence for a crime is a cruel vestige of a time when life was less sacred and society (the elite, that is) was less inclined to censor its own behaviour: intentionally executing in order, with glaring irony, to model how killing is wrong. Society cannot compartmentalise this lethal deed, purporting that the death penalty is the exception to the ethical rule not to kill premeditatedly. Indeed, as Salil Shetty, secretary-general of Amnesty International, laconically observed, ‘the death penalty is a symptom of a culture of violence, not a solution to it’.

Although individuals, like victim family members, may instinctively and viscerally want society to lash out in revenge on their behalf — with which many people may equally instinctively and understandably sympathise — it’s incumbent upon society to administer justice rationally, impartially, and, yes, even dispassionately. With no carveout for excepted crimes, no matter how odious, the death penalty is a corrosive practice that flagrantly mocks the basis of humanity and civilisation — that is, it scorns the very notion of a ‘civil’ society.

The death penalty is a historical legacy that should thus be consigned to the dustbin. States, across the globe, have no higher, sober moral stake than to strike the death penalty from their legal code and practices. With enough time, it will happen; the future augurs a world absent state-sanctioned execution as a misdirected exercise in the absolute power of government.

Monday, 19 June 2017

Language: Two Himalayan Mistakes

Seated Woman by Richard Diebenkorn
Posted by Thomas Scarborough
We take a lot on trust. Too much of it, mistakenly. We even have a name for it: argumentum ad verecundiam, the appeal to authority.  With this in mind, there are two things at the heart of our language which we have mistakenly taken on trust. The first is how to circumscribe the meaning of a word; the second is how to qualify that meaning. These are not merely issues of semantics. They have profound implications for our understanding of the world.
There was a time, not too long ago, when we had no dictionaries. In fact, it was not too long ago that we had no printing presses on which to print them. Then, when dictionaries arrived, we decided that words had definitions, and that, where applicable, each of these definitions held the fewest possible semantic features. A woman, for instance, was an ‘adult human female’, no less, and certainly no more – three features in all. While this may be too simple a description of the matter, the meaning will be clear.

We may never know who first gave us permission to do this, or on whose authority it was decided. It may go back to Aristotle. But at some time in our history, two options lay before us. One was to reduce the meaning of a word to the fewest possible semantic features. The other was to include in it every possible semantic feature. We know now what the decision was. We chose artificially and arbitrarily to radically reduce what words are.

We canvassed the literature. We canvassed the people. All had their own vast ideas and experiences about a word. Then we sought the word's pure essence, its abstract core – like the definition of the woman, an ‘adult human female’. This, however, introduced one of the biggest problems of semantics. We needed now to separate semantic features which mattered from those which did not. The artificiality and uncertainty of this dividing line – that is, between denotation and connotation – has filled many books.

Worse than this, it is easy to prove that we took the wrong option at the start.  We are in a position to demonstrate that, when we refer to a word, we refer to its maximal semantic content, not minimal. Some simple experiments prove the point. Take the sentences, ‘I entered the house. The karma was bad,’ or, ‘The car hit a ditch. The axle broke.’ What now does ‘the karma’ or ‘the axle’ refer to? It refers to the maximal content of a word. This is how, intuitively, innately, we deal with words.

Our second big mistake, which follows on from the first, was the notion of subject and predicate. We call these the ‘principal syntactic elements’ of language. They were at the forefront of Kant's philosophy. Today, the universally accepted view is that the predicate completes an idea about the subject. Take as examples the sentences, ‘The woman (subject) dances (predicate),’ and, ‘The penny (subject) drops (predicate).’

Again, ‘the woman’ is taken as the bare-bones concept, ‘adult human female’. Add to this the predicate – the fact that the woman dances – and we expand on the concept of a woman. We already know that a woman dances, of course. We know, too, that she laughs, sleeps, eats, and a great deal more. Similarly, we define ‘the penny’ as a ‘British bronze coin’. Add to this that it can drop, and we have expanded on the concept of a penny. Of course, we know well that it clinks, shines, even melts, and much more besides.

Yet, what if the predicate serves not to expand upon the subject, but to narrow it down? In fact, if words contain every possible semantic feature, so too must subjects. A predicate takes a ‘maximal’ subject, then – the near infinite possibilities contained in ‘the woman’, or ‘the penny’ – and channels them, so to speak. ‘The woman (who can be anything) dances.’ ‘The penny (which offers a multitude of possibilities) drops.’  Predicates, then, are ‘clarifiers’, as it were. They take a thing, and narrow it down and sharpen its contours.

The application to philosophy is simple.  We discard a word’s many possibilities – those of a woman, a penny, a house, a car – in the interests of the arbitrary notion that they represent minimal meanings – so reducing them to the smallest number of semantic features people use, and throwing the rest away.

Day after day, we do this, through force of centuries of habit. With this, we instantly discard (almost) all the possibilities of a word. We meet situations without being open to their possibilities, but cobble a few predicates to bare-bones subjects, and so lose our good sense. Nuclear power is the generation of electricity, a ship is something that floats, a Führer is someone who governs. The words, being stripped of their maximal meanings, do not contain – perhaps most importantly – the possibility of evil. This greatly assists prejudice, bigotry, partiality, and discrimination.

When words are reduced to their minimal features – when we base their meaning on their denotative core – we ‘crop’ them, truncate them, reduce them, and above all, cut away from them a great many meanings which they hold, and so reduce our awareness of the world, and cosmos.  Due in no small part to the way we imagine our language to be – minimal words and minimal subjects – we have entered habits of thinking which are simplistic, reductionistic, technical – and dangerous.

But to understand words in terms of maximal meanings is to reject the reductionism of our present time, and to think expansively, creatively, intuitively, holistically. 

Monday, 12 June 2017

Seeking Reformers

Torture by Kevin (DJ) Ahzee
Posted by Sifiso Mkhonto
Unity has a mixed reputation in South Africa. It was under apartheid that the motto ‘unity is strength’ became the tool of exclusion. Yet even under our new constitutional democracy, with the motto ‘Working together we can do more’, unity became an illusion.
Today we find ourselves with different kinds of unity: political party unity, religious unity, and cultural unity. Yet rather than uniting us, these ‘unities’ exist in menacing tension, and instead of being united, we seem isolated. Furthermore, as the contours of these ‘unities’ have become more apparent, they have revealed parallel power structures in our society:
• Political party unity has promoted ‘party first’, and cadre deployment,
• Religious unity has served religious leaders, who have consorted with political power, and
• Cultural unity has divided society through tribalism.
Over time, each of these unities has polarised us into captives and captors, and united us in bondage. Our ‘unities’ have become what I shall call ‘civilised oppression’. The dynamic is simple, on the surface of it. The major tool which is used to secure our captivity is patronage. Patronage is the support, encouragement, privilege, or financial aid that an organisation or individual bestows on another. It indicates the power to grant favours – but also, importantly, the need to seek them.

Underlying this dynamic, at both extremes, is the cancer we call greed. This greed then becomes institutionalised, and oppression, in the words of Iris Marion Young, becomes ‘embedded in unquestioned norms, habits, and symbols, in the assumptions underlying institutions and rules, and the collective consequences of following those rules’. Chains and prison cells are a mere shadow of the chains and prison cells of mental oppression such as this. ‘The most potent weapon in the hands of the oppressor,’ wrote the South African anti-apartheid activist Steve Biko, ‘is the mind of the oppressed.’

Since greed is embedded in each of us, and this greed has become institutionalised, we cannot eliminate the attendant oppression by getting rid of rulers or by promulgating new laws – because oppressions are systematically reproduced in the major economic, political and cultural institutions. To make matters worse, in the words of the American social psychologist Morton Deutsch, ‘while specific privileged groups are the beneficiaries of the oppression of other groups, and thus have an interest in the continuation of the status quo, they do not typically understand themselves to be agents of oppression’. They, and we, are blind.

Contrast this now with the fact that we do, in fact, live in a constitutional democracy, with a bill of rights and the rule of law. People have lost the will and the desire to insist on law because they are cowed through the dynamics of patronage. Despondency has increased – or rather, our leaders have increased our despondency – as the dynamics of greed have gained the upper hand. Iris Marion Young describes our oppression ‘as a consequence of often unconscious assumptions and reactions of well-meaning people in ordinary interactions that are supported by the media and cultural stereotypes as well as by the structural features of bureaucratic hierarchies and market mechanisms’.

The cancer is within most of us. Now where do we look for restoration? Our greatest need is for Reformers who will press for merit systems, insist that lawmakers respect the law and find strategies to eliminate patronage. They will seek unity under one constitution, one bill of rights, one law. However, this cannot be done by Reformers who do not have the cure for the cancer. I believe that freedom will flourish, citizens will emancipate themselves from mental oppression, and patronage won’t be the big elephant in the room, as soon as we implement this cure.

It is time to turn our house into a home. Find a cure for the cancer. Seek knowledgeable and principled Reformers who won’t give society the cold shoulder when symptoms of the cancer are identified even in them. Now is the time to act on the diagnosis of the cancer, and take the medication that will cure us. Those who are controlled by the disease need to repent and find their way, instead of being sidetracked by patronage.



Also by Sifiso Mkhonto: Breaking the Myth of Equality.

Monday, 5 June 2017

Picture Post #25 The Machine Age


'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'

Posted by Tessa den Uyl and Martin Cohen



1950s advertising image for a new-fangled vending machine

You can just imagine the conversation...  ‘Hi Betty, can I ask you a dumb question?’ ‘Better than anyone I know, Bill!’

Okay, maybe that's not what the image brought to your mind, but it is what the copywriters for the original magazine advertisement came up with – under the heading ‘Sweet ’n’ Snarky’. Don’t ask what ‘snarky’ means exactly, as no one seems to agree, but here the image gives a particular sense to the term: ‘smart, stylish, a little bit roguish’.

Nearly 70 years on, the machine no longer looks snarky; indeed, it looks pretty unstylish and dumb. The green fascia and the plain Helvetica font shouting out the word ‘COFFEE’ in red scarcely impress, as surely they would have done at the time. That’s not even to start on the drab characters in this little play: Bill, the office flirt, and Betty, the attractive secretary.

In those days, the set-up might have seemed attractive, offering new technological developments combined with social engagement. Just like the characters in a popular TV soap series, the image created by others seeks to tell you who you are. Advertising media in particular have long been keen to exploit this role-play, and their success raises a fascinating additional question: just why do people like to be reduced to their function, to a stereotype?
  
Of course, the advertisers were not really interested in what an actual Bill might have to talk about to an actual Betty. Real characters are multifaceted. Why, this Bill and Betty might even have both been academics chatting during a break between lectures!
‘Hi Betty, do you think these coffee machines will increase our happiness in life?’
‘Hmmm. Good question, Bill. And my answer would be ‘Yes and No’.  Soon we’ll find ourselves oppressed with new technologies but first let us celebrate the reflection of change this one represents.’
Welcome to the deep world of everyday expression, not the frothy one of advertisers’ expresso.

Monday, 29 May 2017

Why Absolute Moral Relativism Should Be Off The Table

Posted by Christian Sötemann
In the case of moral statements there can be many degrees between absolute certainty and absolute uncertainty. 
Even empirical truths, which are thoroughly supported by conclusive evidence, cannot, by their empirical nature, have the same degree of certainty as self-evident truths. There may always be an empirical case which escapes us. And so it may be questioned whether a viable moral principle really has to be either one or the other: absolutely certain or absolutely uncertain, valuable or valueless – or whether it is good enough for it to serve as an orientation, a rule of thumb, or something useful in certain types of cases.

With this in mind, given any moral principle in front of us, it could be helpful for us to differentiate between whether:
• it is only universally applicable in an orthodox way

or

• there is an overt denial of any generalisability (even for a limited type of cases) of moral values and principles.
In the first case, we may try to reconcile a concrete situation with an abstract moral rule, without rejecting the possibility of some degree of generalisation – yet in the second case, we have what we previously discussed: generalising that we would not be able to make any kind of general statement. In the second case, we have an undifferentiated position that renders all attempts at gauging arguments about ethics futile, thus condoning an equivalence of moral stances that is hardly tenable.

This liberates the moral philosopher at least in one way: absolute moral relativism can be taken off the table, while all moral standpoints may still be subjected to critical scrutiny. If I have not found any moral philosophy that I can wholeheartedly embrace, I do not automatically have to resort to absolute moral relativism. If I have not found it yet, it does not mean that it does not exist at all. The enquiring mind need not lose all of its beacons.

To put moral relativism in its most pointed form, the doctrine insists that there are moral standpoints, yet that none of them may be considered any more valid than others. This does not oblige the moral relativist to say that everything is relative, or that there are no facts at all, such as scientific findings, or logical statements. It confines the relativism to the sphere of morality.

We need to make a further distinction. The English moral philosopher Bernard Williams pointed out that there may be a 'logically unhappy attachment' between a morality of toleration, which need not be relative, and moral relativism. Yet here we find a contradiction. If toleration is the result of moral relativism – if I should not contest anyone’s moral stance, because I judge that all such stances are similarly legitimate – I am making a general moral statement, namely: 'Accept everybody’s moral preferences.' However, such generalisation is something the moral relativist claims to avoid.

A potential argument that, superficially, seems to speak for moral relativism is that it can be one of many philosophical devices that helps us to come up with counterarguments to moral positions. Frequently, this will reveal that moral principles which were thought to be universal fail to be fully applicable – or applicable at all – in the particular case. However, this can lead to a false dilemma, suggesting only polar alternatives (either this or that, with no further options in between) when others can be found. The fact that there is a moral counterargument does not have to mean that we are only left with the conclusion that all moral viewpoints are now invalid.

Moral propositions may not have the same degree of certainty as self-evident statements, which cannot be doubted successfully – such as these:
a) 'Something is.'

b) 'I am currently having a conscious experience.'
These propositions present themselves as immediately true to me, since a) is something in itself, as would be any contestation of the statement, and b) even doubting or denying my conscious experience happens to be just that: a conscious experience.

Rarely do we really find a philosopher who endorses complete moral relativism, maintaining that any moral position is as valid as any other. However, occasionally such relativism slips in by default – when one shrugs off the search for a moral orientation, or deems moral judgements to be mere personal or cultural preferences.

Now and again, then, we might encounter variants of absolute moral relativism, and what we could do is this: acknowledge their value for critical discussion, then take them off the table.

Monday, 22 May 2017

Healthcare ... A Universal Moral Right

A Barber-surgeon practising blood-letting
Posted by Keith Tidman

Is health care a universal moral right — an irrefutably fundamental ‘good’ within society — that all nations ought to provide as faithfully and practically as they can? Is it a right that all human beings, worldwide, are entitled to share in, as a matter of justice, fairness, dignity, and goodness?

To be clear, no one can claim a right to health as such. As a practical matter, that is an unachievable goal — but there is a perceived right to healthcare. Where health and healthcare intersect — that is, where both are foundational to society — is in the realisation that people have a need for both. Among the distinctions, ‘health’ is a result of sundry determinants, access to adequate healthcare being just one. Other determinants include behaviours (such as smoking, drug use, and alcohol abuse), access to nutritious and sufficient food and potable water, the absence or prevalence of violence or oppression, and rates of criminal activity, among others. And to be sure, people will continue to suffer from health disorders, despite the best intentions of science and medicine. ‘Healthcare’, on the other hand, is something society can and does make choices about, largely as a matter of policymaking and access to resources.

The United Nations, in Article 25 of its ‘Universal Declaration of Human Rights’, provides a framework for theories of healthcare’s essential nature:
“Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including . . . medical care and necessary social services, and the right to security in the event of . . . sickness . . . in circumstances beyond his [or her] control.”
The challenge is whether and how nations live up to that well-intentioned declaration, in the spirit of protecting the vulnerable.

At a fundamental level, healthcare ethics comprises values — judgments as to what’s right and wrong, including obligations toward the welfare of other human beings. Rights and obligations are routinely woven into the deliberations of policymakers around the world. In practice, a key challenge in ensuring just practices — and figuring out how to divvy up finite (sometimes sorely constrained) material resources and economic benefits — is how society weighs the relative value of competing demands. Those jostling demands are many and familiar: education, industrial advancement, economic growth, agricultural development, security, equality of prosperity, housing, civil peace, environmental conditions — and all the rest of the demands on resources that societies grapple with in order to prioritise spending.

These competing needs are where similar constraints and inequalities of access persist across socioeconomic demographics and groups, within and across nations. Some of these needs, besides being important in their own right, also contribute — even if sometimes only obliquely — to health and healthcare. Their interconnectedness and interdependence are folded into what one might label ‘entitlements’, aimed at the wellbeing of individuals and whole populations alike. They are eminently relatable, as well as part and parcel of the overarching issue of social fairness and justice.

The current vexed debate over healthcare provision within the United States among policymakers, academics, pundits, the news media, other stakeholders (such as business executives), and the public at large is just one example of how those competing needs collide. It is also evidence of how the nuts and bolts of healthcare policy rapidly become entangled in the frenzy of opposing dogmas.

On the level of ideology, the healthcare debate is a well-trodden one: how much of the solution to the availability and funding of healthcare services should rest with the public sector, including government programming, mandating, regulation, and spending; and how much (with a nod to the laissez-faire philosophy of Adam Smith in support of free markets) should rest with the private sector, including businesses such as insurance companies, hospitals, and doctors? Yet often missing in all this urgency, and in the decisions about how to ration healthcare, is the fact that the money being spent has not resulted in the best health outcomes, based on comparison of certain health metrics with those of select other countries.

Sparring over public-sector versus private-sector solutions to social issues — as well as over states’ rights versus federalism among the constitutionally enumerated powers — has marked American politics for generations. Healthcare has been no exception. And even in a wealthy nation like the United States, challenges in cobbling together healthcare policy have drilled down into a series of consequential factors. They include whether to exclude specified ailments from coverage, whether preexisting conditions get carved out of (affordable) insured coverage, whether to impose annual or lifetime limits on protections, how much of the nation's gross domestic product to consign to healthcare, and how many tens of millions of people might remain without healthcare or be ominously underinsured, and more — all precariously resting on arbitrary decisions. True reform might require starting with a blank slate, then cherry-picking from among other countries’ models of healthcare policy, based on their lessons learned as to what did and did not work over many years. Ideas as to America’s national healthcare are still on the anvil, being hammered by Congress and others into final policy.

Amid all this policy ‘sausage making’, there’s the political sleight-of-hand rhetoric that misdirects by acts of either commission or omission within debates. Yet, do the uninsured still have a moral right to affordable healthcare? Do the underinsured still have a moral right to healthcare? Do people with preexisting conditions still have a moral right to healthcare? Do people who are older, but who do not yet qualify for age-related Medicare protections, have a moral right to healthcare? Absolutely, on all counts. The moral right to healthcare — within society’s financial means — is universal, irreducible, non-dilutable; that is, no authority may discount or deny the moral right of people to at least basic healthcare provision. Within that philosophical context of morally rightful access to healthcare, the bucket of healthcare services provided will understandably vary wildly, from one country to another, pragmatically contingent on how wealthy or poor a country is.

Of course, the needs, perceptions, priorities — and solutions — surrounding the matter of healthcare differ quite dramatically among countries. And to be clear, there’s no imperative that the provision of effective, efficient, fair healthcare services hinge on liberally democratic, Enlightenment-inspired forms of government. Apart from these or other styles of governance, there is, more fundamentally, no alternative to local sovereignty in shaping policy. Consider another example of healthcare policy: the distinctly different countries of sub-Saharan Africa pose an interesting case. The value of available and robust healthcare systems is as readily recognised in this part of the world as elsewhere. However, there has been a broadly articulated belief that the healthcare provided is of poor quality. Also, healthcare is considered less important among competing national priorities — such as jobs, agriculture, poverty, corruption, and conflict. Yet, surely the right to healthcare is no less essential to these many populations.

Everything is finite, of course, and healthcare resources are no exception. The provision of healthcare is subject to zero-sum budgeting: the availability of funds for healthcare must compete with the tug of providing other services — from education to defence, from housing to environmental protections, from commerce to energy, from agriculture to transportation. This reality complicates the role of government in its trying to be socially fair and responsive. Yet, it remains incumbent on governments to forge the best healthcare system that circumstances allow. Accordingly, limited resources compel nations to take a fair, rational, nondiscriminatory approach to prioritising who gets what by way of healthcare services, which medical disorders to target at the time of allocation, and how society should reasonably be expected to shoulder the burden of service delivery and costs.

As long ago as the 17th century, René Descartes declared that:
‘... the conservation of health . . . is without doubt the primary good and the foundation of all other goods of this life’. 
However, how much societies spend, and how they decide who gets what share of the available healthcare capital, are questions that continue to divide. The endgame may be summed up, to follow in the spirit of the 18th-century English philosopher Jeremy Bentham, as ‘the greatest happiness for the greatest number [of people]’ for the greatest return on investment of public and private funds dedicated to healthcare. How successfully public and private institutions — in their thinking about resources, distribution, priorities, and obligations — mobilise and agitate for greater commitment comes with implied decisions, moral and practical, about good health to be maintained or restored, lives to be saved, and general wellbeing to be sustained.

Policymakers, in channelling their nations’ integrity and conscience, are pulled in different directions by competing social imperatives. At a macro level, depending on the country, these may include different mixes of crises of the moment, political and social disorder, the shifting sands of declared ideological purity, challenges to social orthodoxy, or attention to simply satiating raw urges for influence (chasing power). In that brew of prioritisation and conflict, policymakers may struggle to come to grips with what’s ‘too many’ or ‘too few’ resources to devote to healthcare rather than other services and perceived commitments. Decisions must take into account that healthcare is multidimensional: a social, political, economic, humanistic, and ethical matter holistically rolled into one. Therefore, some models for providing healthcare turn out to be more responsible, responsive, and accountable than others. These concerns make it all the more vital for governments, institutions, philanthropic organisations, and businesses to collaborate in policymaking, public outreach, program implementation, gauging of outcomes, and decisions about change going forward.

A line is thus often drawn between healthcare needs and other national needs — with the tensions of altruism and self-interest opposed. The distinctions between decisions and actions deemed altruistic and those deemed self-interested are blurred, since they must hinge on motives, which are not always transparent. In some cases, actions taken to provide healthcare nationally serve both purposes — for example, what might improve healthcare, and in turn health, on one front (continent, nation, local community) may well keep certain health disorders from spreading to another.

The ground-level aspiration is to maintain people’s health, treat the ill, and crucially, not financially burden families, because what’s not affordable to families in effect doesn’t really exist. That nobly said, there will always be tiered access to healthcare — steered by the emptiness or fullness of coffers, political clout, effectiveness of advocacy, sense of urgency, disease burden, and beneficiaries. Tiered access prompts questions about justice, standards, and equity in healthcare’s administration — as well as about government discretion and compassion. Matters of fairness and equity are more abstract, speculative metrics than are actual healthcare outcomes with respect to a population’s wellbeing, yet the two are inseparable.

Some three centuries after Descartes’ proclamation in favour of health as ‘the primary good’, the United Nations issued to the world the ‘International Covenant on Economic, Social, and Cultural Rights’, thereby placing its imprimatur on ‘the right of everyone to the enjoyment of the highest attainable standard of physical and mental health’. The world has since made headway: many nations have instituted intricate, encompassing healthcare systems for their own populations, while also collaborating with the governments and local communities of financially stressed nations to undergird treatments through financial aid, program design and implementation, resource distribution, teaching of indigenous populations (and local service providers), setting up of healthcare facilities, provision of preventions and cures, follow-up as to program efficacy, and accountability of responsible parties.

In short, the overarching aim is to convert ethical axioms into practical, implementable social policies and programs.

Monday, 15 May 2017

The Philosophy of Jokes

I say, I say, I say...
Posted by Martin Cohen
Ludwig Wittgenstein, that splendidly dour 20th-century philosopher, usually admired for trying to make language more logical, once remarked, in his earnest Eastern European way, that a very serious work – a ‘zery serieuse verk’ – in philosophy could consist entirely of jokes.
Now Wittgenstein probably meant to shock his audience, which consisted of his American friend Norman Malcolm (whom he also once advised to avoid an academic career and to work instead on a farm), but he was also in deadly earnest. Because humour is, as he is also on record as saying, ‘not a mood, but a way of looking at the world’. Understanding jokes, just like understanding the world, hinges on having first adopted the right kind of perspective.

So here's one to test his idea out on.
‘A traveler is staying at a monastery, where the Order has a vow of silence and can only speak at the evening meal. On his first night, as they are eating, one of the monks stands up and shouts ‘Twenty-two!’ Immediately the rest of the monks break out into raucous laughter. Then they return to silence. A little while later, another shouts out ‘One hundred and ten’, to even more uproarious mirth. This goes on for two more nights with no real conversation, just different numbers being shouted out, followed by ribald laughing and much downing of ale. At last, no longer able to contain his curiosity, the traveler asks the Abbot what it is all about. The Abbot explains that the monastery has only one non-religious book in it, which consists of a series of jokes, each headed with its own number. Since all the monks know them by heart, instead of telling the jokes they just call out the number. 
Hearing this, the traveler decides to have a look at the book for himself. He goes to the library and carefully makes a note of the numbers of the funniest jokes. Then, that evening he stands up and calls out the number of his favourite joke – which is ‘seventy six’. But nobody laughs, instead there is an embarrassed silence. The next night he tries again, ‘One hundred and thirteen!’, he exclaims loudly into the silence - but still no response. 
After the meal he asks the Abbot whether the jokes he picked were not considered funny by the monks. ‘Ooh no’, says the Abbot. ‘The jokes are funny – it’s just that some people just don't know how to tell them!’
I like that one! And incredibly, it is one of the oldest jokes around. This, we might say, is a joke with a pedigree. A version of it appears in the Philogelos, or Laughter Lover, a collection of some 265 jokes, written in Greek and compiled around 1,600 years ago. Yet despite its antiquity, the style of this and at least some of the other jokes is very familiar.

Clearly, humour is something that transcends communities and periods in history. It seems to draw on something common to all peoples. Yet jokes are also clearly rooted in their times and places. At the time of this joke, monks and secret books were serious business. But the first philosophical observation to make, and principle to note, is that the joke turns on one of those ‘ah-ha!’ moments.

Humour often involves a sudden, unexpected shift in perspective forcing a rapid reassessment of assumptions. Philosophy, at its best, does much the same thing.

Monday, 8 May 2017

The Pleasures of Idle Thought?

Posted by John Hansen
What is the purpose of thought?  This was the focus of a monumental series of essays, chiefly written by the English lexicographer and essayist Dr. Samuel Johnson.  His essays, however, had a sting in the tail.
During the years 1758 to 1760, the Universal Chronicle published 103 weekly essays, of which 91 were written by Dr. Johnson.  These proved to be enormously popular.  The subject of the essays was a fictional character called The Idler, whose aspiration it was to engage in the pleasures of idle thought, to “keep the mind in a state of action but not labour”. Among other things, Dr. Johnson contemplates the many forms that idleness of thought can take – of which we describe a sample here: 
There is the kind of Idler, Dr. Johnson begins, who carries idleness as a “silent and peaceful quality, that neither raises envy by ostentation, nor hatred by opposition”.  His life will be less dreadful and more peaceful if he refrains from any serious engagement with matters, and yet he should not “languish for want of amusement”.  He needs the beguilement of ideas.

There is the Idler, too, who is on the point of more serious thought, yet “always in a state of preparation”.  It cannot fully be classified as idleness, since he is constantly forming plans and accumulating materials for the “main affair”.  But perhaps he fears failure, or he is simply captivated by the methods of preparation.  The main affair never arrives.

Then there is the Idler who, in his idleness, begins to feel the stirring of a certain unease.  He fills his days with petty business, and while he does so productively, yet he does not “lie quite at rest”.  When he retires from his business to be alone, he discovers little comfort.  His thoughts “do not make him sufficiently useful to others”, and make him “weary of himself”.

In fact, in time, there is the Idler who begins to tremble at the thought that he must go home, so that friends may sleep. At this time, “all the world agrees to shut out interruption”.  While his favourite pastime has been to shut out inner reflection, yet such inner reflection now seems to press in on him from all sides.

As life nears its end, there is the Idler who fears the end, yet in continuing idleness of thought, he seeks to ignore the fact that each moment brings him closer to his demise.  He now finds that his idle thoughts have trapped him.  His own mortality is disconcerting, yet something which he has never known how to face before.

In his final essay, which is written in a “solemn week” of the Church – a week of “the review of life” and “the renovation of holy purposes” – Dr. Johnson expresses the hope that “my readers are already disposed to view every incident with seriousness and improve it by meditation”.  Any other approach to thought will finally be self-defeating.
There are many, writes Dr. Johnson, who when they finally understand this, find that it is too late for them to capture the moments lost.  The last good gesture of The Idler is to warn his readers that the hour may be at hand when “probation ceases and repentance will be vain”.  Idleness of thought is not after all as innocent as it seems.  It comes back to bite you.  The purpose of thought, then, is ultimately to engage with life’s biggest questions.

It seems a remarkable achievement that Dr. Johnson apparently held an overview of about 100 essays in his head, which followed a meaningful progression over a period of three full years.  These essays continue to provoke and inspire today.  All but one – which was thought to be seditious – were bound into a single volume. An edition which is still in print and still being read by “Idlers” today is recommended below.



Read more:

Johnson, Samuel. “The Idler.” Samuel Johnson: Selected Poetry and Prose, edited by Frank Brady and W. K. Wimsatt, University of California Press, 1977, pp. 241-75.

By the same author:

Eastern and Western Philosophy: Personal Identity.

Monday, 1 May 2017

Picture Post # 24 The Privilege of Being Near and Far


'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'

Posted by Tessa den Uyl and Martin Cohen


Image credit: from an original photographic plate created by Thomas Scarborough

The pictured child is not far, but too far to be near; or too close to be far, but not that near.  Instead, they are halfway, as the background and foreground seem to be as well.  In-between is where we make distinctions; the difference is always in-between.  But rather than representing elements between which a difference is made, this picture seems to represent the in-between itself.

Humidity and temperature change have touched the chemicals of this slide, and ‘X-rayed by life’ in this way, existence reaffirms itself as an ever-changing movement. Within the invisible that becomes visible, we might think to collect memories, freeze moments into pictures, and hence even to think of something as permanent...  yet, little by little, these perceptions are all erased by the visible that withdraws into the unknown.

Stability does not exist.  The in-between hands to us that which we think, but do not truly know, and maybe if life could see us, it would say ‘we do not know much’.  Thoughts alone make a thread that by stiff perseverance does not break, however often we may have to observe that the tissue is of dubious nature...

Monday, 24 April 2017

Fact and Value: The Way Ahead

Grateful acknowledgement to Bannor Toys for the image
Posted by Thomas Scarborough
Philosophy may begin to solve a problem as soon as it has identified it.  All too often, it has not.  This post, then, is about defining a problem—no more.  It is one of the most urgent problems of philosophy.
One of the most important aspects of philosophy is ethics.  Yet there is an issue which is prior to ethics, which has to be addressed first.  It is the problem of the fact-value distinction—a problem which, since it first appeared on the philosophical map, has cut a divide between fact and value, and more importantly, philosophy and ethics.  In the words of Ludwig Wittgenstein, ethics has become ‘what we cannot speak about’.  Yet ethics is all that we do, from morning until night, from year to year.  Today, this problem has filtered through to the common person, and has caused profound disorientation in our time.  On a social level, we are conflicted and confused with multiple ethics, while on a global level, our ethics increasingly seem to have come apart, with widespread poverty, social disintegration, and environmental destruction. 

It seems easy to describe the philosophical problem, yet far from easy to offer a solution.  Should I take a walk in the woods today, or should I write letters instead?  Should I be a ‘bachelor girl’, or should I marry Joe?  Should we travel to Mars?  Should we drop the Bomb?  On the surface of it, our reasons for choosing one course of action over another might seem obvious, yet it is not something we find ourselves able to decide on the basis of facts.  The problem is basically this: we know that this is how the world ‘is’—yet how should we know how it ‘ought’ to be?  The philosopher David Hume gave the problem its classical formulation: it is impossible to derive an ‘ought’ from an ‘is’.  It is impossible to establish any value amidst an ocean of facts—and on the surface of it, Hume would seem to be unimpeachably right.  The facts cannot tell us what to do. 

As we seek a solution to the problem—because we must solve this problem if we are to find our way through to any discussion of ethics—Hume’s conclusion would seem to mean only one of two things: either he identified a problem which cannot be solved, or he was thinking in such a way that he created his own problem.  What, therefore, if Hume laid the very foundation on which the fact-value distinction rests? 

Hume considered that all knowledge may be subdivided into relations of ideas on the one hand, and matters of fact on the other.  That is, one begins with a handful of facts, then relates them to one another.  It is the simple matter of a world where facts exist, and these exist in a certain relation to one another—yet one finds no basis on which to determine what that relation ought to be.  Generations later, the philosopher Bertrand Russell wrote that many philosophers, following Kant, have maintained that relations are the work of the mind, while things in themselves have no relations.  While Russell was not saying precisely the same as Hume, he was not far off.  A similar view is reflected in the theory of language.  The philosopher Rudolf Carnap held a similar view of what the philosophy professor Simon Blackburn calls the ‘material mode of speech’: objects and their relations are the topic.  Wittgenstein, too, held this view, in his own unique way, through his multiplicity of language-games.

A pebble is a thing.  A house is a thing.  Even gravity, ideology, taxonomy are ‘things’ in a way (we call them constructs), which in turn may be related to other things.  In a sense, even a unicorn is a thing, although we are unlikely ever to find one.  Things, then, may further be involved in what we call truth conditions—which means that they may be inserted into statements, which can be affirmed or denied.  And when we affirm such statements, we call them facts.  For example, we insert the thing ‘pebble’ into a statement: ‘A pebble sinks’—or we insert the thing ‘unicorn’ into a statement: ‘The Scots keep unicorns.’  Our things are now involved in truth conditions, which means that our world is filled with facts.  And if not facts, then denials of facts.

Here, I think, is where the problem lies—and the way ahead.  To say that there is a fact-value distinction means that we have first divided up our reality into things on the one hand, and relations on the other.  On what basis, then, might we find our way back to a ‘grounded’ ethics?  Personally I believe the solution lies in the direction of levelling both fact and value to value alone—or things and relations to relations alone—in all fields, including science and mathematics.  Yet even then, we would not finally have reached the goal.  Even if we should be able to see everything in terms of value, which values should then be true, and which false?  And having once solved which values are true, we would need to establish on what basis I should—or could—submit to them.

Monday, 17 April 2017

On Quanta and Trees

Does Observation Create Physical Reality?

Image found on Mythapi Facebook page. Author unknown
Posted by Keith Tidman
The intervention of conscious observation into the quantum world — that observing an object to be in a particular location causes it actually to be there — is one of the core tenets of quantum theory, a tenet rigorously upheld through multiple experiments. The observer — his or her consciousness — cannot be separated from that physical reality. There is no reality independent of observation.
As the visionary quantum physicist John Wheeler stated it, “No . . . property is a property until it is observed.” Which seems to apply as much to macro-sized objects — things in everyday life — as to micro-sized objects.

Three hundred-plus years ago, the philosopher George Berkeley prefigured the spirit of quantum theory’s then-future influence on the nature of reality, declaring, esse est percipi [to be is to be perceived]. Perception, he presciently advocated, is the essential benchmark — the necessary condition — for existence. The reality of things thus emerges from perception. As long as conscious observation is involved — a manifestation of the observer's capacity to consummate physical reality — all objects, large and small, acquire their existence.

So, how does this work? Quantum theory explains that until observation occurs, an object is in what’s called a state of ‘superposition’. An object, while in superposition, can be in any number of places, with observation causing it to be in just one location. There is no object isolated in space before it is observed or measured. Upon being observed, the object goes from potentiality to actuality in that one location, the same for everyone.

What’s in superposition is the so-called ‘wave function’ — a mathematical description of all the possible states of an object. Only upon being observed does the wave function instantaneously and irreversibly ‘collapse’, causing the object to be in just one location. There is no distinction between the wave function and the object. According to the physics, the wave function is the object — in one-to-one correspondence with the physical thing.
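In standard notation (an illustration added here, not part of the original post), a wave function for an object that could be found at either of two locations, A or B, is written as a weighted superposition of the two possibilities:

```latex
\[
|\psi\rangle \;=\; \alpha\,|A\rangle + \beta\,|B\rangle,
\qquad |\alpha|^{2} + |\beta|^{2} = 1
\]
```

On observation, the state ‘collapses’ to \(|A\rangle\) with probability \(|\alpha|^{2}\) or to \(|B\rangle\) with probability \(|\beta|^{2}\) (the Born rule); before that, neither location is the object’s actual one.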

The effect of observation and measurement has also been demonstrated by the so-called ‘double-slit experiment’. A stream of photons (light particles) passes one at a time through a screen with two slits. Behind the screen is a photographic plate, to capture what comes through the slits. In the absence of an observer, each photon will have appeared to pass through both slits simultaneously before creating a distinct interference pattern on the back plate — acting, in other words, like a wave, able to pass through both slits at once. However, in the presence of an observer — a person or detecting device in front of or behind each slit to see which slit the photon goes through — the interference pattern no longer shows up. Each photon appears to have passed through only one slit or the other. The photon has no location in spacetime until it’s observed or measured.
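For readers who want the textbook form of that interference pattern (again, an illustration rather than part of the original post), the unobserved two-slit intensity on the back plate, ignoring the single-slit envelope, varies as:

```latex
\[
I(x) \;\propto\; \cos^{2}\!\left(\frac{\pi d x}{\lambda L}\right)
\]
```

where \(d\) is the slit separation, \(\lambda\) the photon wavelength, \(L\) the distance to the plate, and \(x\) the position along it. With a which-slit detector in place, the interference term vanishes and the pattern reduces to the featureless sum of two single-slit distributions.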

As suggested by both examples — the collapse of a wave function and the double-slit experiment — observation may be performed by a person directly and in real time. Or, observation may be accomplished by an apparatus (detector), whose measurements are observed by scientists later. In either case, observation remains critical, as explained by physicist and philosopher Roger Penrose:
“Almost all the interpretations of quantum mechanics . . . depend to some degree on the presence of consciousness for providing the ‘observer’ that is required [for] the emergence of a classical-like world.”
Meanwhile, the effects of these events also play into what’s known as quantum entanglement — what Albert Einstein famously dubbed ‘spooky action at a distance’. Quantum entanglement occurs when two particles remain ‘connected’, without regard to time and distance — that is, instantaneously, even at enormous distances — in such a way that actions performed on one particle are observed to have an immediate and direct effect on the other. This curious phenomenon, spooky or not, has been confirmed.
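The simplest example of such a ‘connected’ pair, in standard notation assumed here for illustration, is the Bell state:

```latex
\[
|\Phi^{+}\rangle \;=\; \frac{1}{\sqrt{2}}\bigl(|0\rangle_{A}|0\rangle_{B} + |1\rangle_{A}|1\rangle_{B}\bigr)
\]
```

Measuring particle A immediately fixes the outcome for particle B, however far apart the two are, although no usable signal passes between them.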

So, to the point, what does all this tell us about physical reality?

Causing the reality of an object by observation points to this initial moment of creation being subjective. It’s where an observer first intervenes — until then, there is only ‘potential reality’. Accordingly, ‘initial reality’, as we might call it, requires intervention by an observer — either a person or a measuring device. Again, that initial moment of reality is subjective.

However, once initial conscious observation has occurred, the object henceforth exists for everyone. Further instances of observation change nothing about the physical reality already having been created. Reality is thus locked in for everyone — everywhere. Everyone who looks will find the object there, already existing. At that moment, reality is objective — the initially observed object remains so, existing for everyone.

In sum, then, the key takeaway is the presence of both a subjective and objective aspect to reality, depending on the moment — initial observation followed by subsequent observation.

At the moment of causing the object to exist, the observer also causes that object’s entire history to exist. Observation causes both the current reality and related past realities (history) to exist. Whether this is so for literally all observed things remains debatable — quantum mechanically, cosmologically, and philosophically.

Might the notion include, for example, the whole universe — the ultimate macro-sized object? As Wheeler postulates, in our looking rearward to the universe’s beginnings, might our observations result in selecting one out of alternative possible cosmic quantum histories, back to the Big Bang almost fourteen billion years ago? And, in line with the ‘anthropic principle’, might that quantum history account for the many finely tuned features of the universe essential for its and our existence — resulting in an objective macro-reality, the same for everyone, throughout the universe?

Accordingly, Berkeley argued that observation accounts for what gives material things their experienced qualities — an object’s initially experienced reality (its presence and qualities) as well as an object’s subsequently experienced reality.

Where Berkeley’s philosophy converges with the core of this discussion regarding the basis of objects’ reality is his argument for observation — perception — being essential for something to exist: what has been characterised as Berkeley’s empirical idealism. Berkeley argued that material objects are dependent on, not independent of, observation. In this important sense, observation and existence are the same. That is, they ‘cohere’.