Monday, 16 January 2017

Are We All Scientists?

Posted by Thomas Scarborough
What is it that separates scientific discourse from our ordinary, everyday discourse? Do the two represent separate, independent languages? Or are they fundamentally the same? Are we all scientists?
I first became aware of this question – not that it was new then – when I witnessed a boatman surfing a reef at high tide. The timing was a special skill that depended on an intimate knowledge of the regularity of the waves which bombarded the reef. Basically, said the boatman, the waves came in threes – although it was more complex than that. Was this science? In fact, where did science begin and where did it end?

Many thinkers suppose that there are two kinds of discourse in this world: the language of science, and the language of mind. The fundamental difference, writes philosophy professor Michael Luntley, is that the language of science allows only for the physical properties of things, while the language of mind has to do with perspective.

This distinction may not in fact be necessary. Is it not a matter of perspective as to how we arrange the physical properties of things?

The novelist and critic Samuel Butler considered (to put it too simply) that science merely has to do with the conventions on which people act, and these conventions vary. This needs only to be noted, however; it is not of great importance to this post, other than to show that it has been considered. More important is individuation:

Our reality – if we try to imagine it before our minds make any sense of it – has been variously described as an undifferentiated stream of experience, a kaleidoscopic flux of impressions, or a swirling cloud without any determinate shape. William James famously wrote of ‘one great blooming, buzzing confusion’.

To make sense of this confusion, then, we need to break up the undifferentiated stream of experience – sounds and sights, surfaces and motions – into individual units. And while the process of doing so may seem to be quite natural and simple to us, what actually happens is extraordinarily complex.

From our earliest childhood, we begin to individuate people, playthings, animals, and a great many things besides. Before long, we begin to look at picture books in which individuated things are represented in pictures, with their names printed underneath: dog, cat, apple, orange, sun, moon – and so on.

Importantly, during this process, we strip off many of the relations which are associated with a thing, and seek instead to create something which is self-contained. In Hegelian-style philosophy, such individuated ‘things’ are said to be abstract, insofar as they are thought of in isolation from the whole to which they belong.

Take the example of a ‘horse’. When we speak of a horse as an individuated thing, we have little interest in what it eats, or if it sleeps, or even whether it has four legs or three. It is something else that makes it a ‘horse’. To put it another way, when we individuate something, it loses some of its informational content. While in reality, it is impossible to imagine a horse without air, or food, or something to stand on – and innumerable things besides – the individuated ‘horse’ needs none of this.

Even at the same time, however, we carry all of the associations of individuated things in the back of our minds. They are present with us even as we exclude them. That is, we do not completely forget what these things are in their totality, even though we individuate them.

Consider the statement, ‘The horse fell from the top of the cliff.’ While we all know that it is likely that the horse is now dead or seriously injured, the individuated unit ‘horse’ does not obviously contain such information. To put it another way, to individuate something does not mean that we truly and completely individuate it. It may be more accurate to say that we allow some aspects of it to recede yet not to leave the picture.

In fact, this is very much what we do in scientific research. In our experiments, in order to make any progress, we screen out unwanted influences on independent variables. Physics, wrote the 20th century philosophers Wilhelm Kamlah and Paul Lorenzen, investigates processes by progressively screening things out. That is, we ignore unwanted relations.

Whether we say, “This cake needs thirty minutes in a hot oven” (a highly abstracted statement), or “I wonder whether it will rain today,” we are doing what the scientist does. We are removing informational content, to relate abstract things, one to the other.

With this in mind, we ‘do science’ all day long. There is little difference, in the most fundamental way, between the Hegelian-style abstraction of our everyday thinking and our scientific pursuits – except that, with science, we make a more rigorous effort to put out of our minds the relations which are unwanted.

Our scientific discourse, therefore, is closely related to our ordinary, everyday discourse. We are all ‘scientists’.

‘Ordinarily, hypotheses used in science are more precise
and less vague than those adopted in everyday affairs.’
—W.V. Quine and J.S. Ullian.

Monday, 9 January 2017

Is Consciousness Bound Inextricably by the Brain?

From Qualia to Comprehension

Posted by Keith Tidman

According to the contemporary American philosopher, Daniel Dennett, consciousness is the ‘last surviving mystery’ humankind faces. Well, that may be overstating human achievements, but at the very least, consciousness ranks among the most consequential mysteries. With its importance acknowledged, does the genesis of conscious experience rest solely in the brain? That is, should investigations of consciousness adhere to the simplest, most direct explanation, where neurophysiological activity accounts for this core feature of our being?

Consciousness is a fundamental property of life—an empirical connection to the phenomenal. Conscious states entail a wide range of (mechanistic) experiences, such as wakefulness, cognition, awareness of self and others, sentience, imagination, presence in time and space, perception, emotions, focused attention, information processing, vision of what can be, self-optimisation, memories, opinions—and much more. An element of consciousness is its ability to orchestrate how these intrinsic states of consciousness express themselves.

None of these states, however, requires the presence of a mysterious dynamic—a ‘mind’ operating dualistically separate from the neuronal, synaptic activity of the brain. In that vein, ‘Consciousness is real and irreducible’, as Dennett's contemporary, John Searle, observed in pointing out that the seat of consciousness is the brain; ‘you can’t get rid of it’. Accordingly, Cartesian dualism—the mind-body distinction—has long since been displaced by today’s neuroscience, physics, mathematical descriptions, and philosophy.

Of significance, here, is that the list of conscious experiences in the neurophysiology of the brain includes colour awareness (‘blueness’ of eyes), pain from illness, happiness in children’s company, sight of northern lights, pleasure in another’s touch, hunger before a meal, smell of a petunia, sound of a violin concerto, taste of a macaroon, and myriad others. These sensations fall into a category dubbed qualia, their being the subjective, qualitative, ‘introspective’ properties of experience.

Qualia might well constitute, in the words of the Australian cognitive scientist, David Chalmers, the ‘hard problem’ in understanding consciousness; but, I would suggest, they’re not in any manner the ‘insoluble problem’. Qualia indeed pose an enigma for consciousness, but a tractable one. The reality of these experiences—what’s going on, where and how—has not yet yielded to research; however, it’s early. Qualia are likely—with time, new technologies, fresh methodologies, innovative paradigms—to also be traced back to brain activity.

In other words, these experiences are not just correlated to the neurophysiology of the brain serving as a substrate for conscious processes, they are inextricably linked to and caused by brain activity. Or, put another way, neurophysiological activity doesn’t merely represent consciousness, it is consciousness—both necessary and sufficient.

Consciousness is not unique to humans, of course. There’s a hierarchy to consciousness, tied approximately to the biological sophistication of a species. Depending on how aware, sentient, deliberative, coherent, and complexly organised any one species might be, consciousness varies in degree down to the simplest organisms. The cutoff point of consciousness, if any, is debatable. Also, if aliens of radically different intelligences and physiologies, including different brain substrates, are going about their lives in solar systems scattered throughout the universe, they too would likely share properties of consciousness.

This universal presence of consciousness is different from the ‘strong’ version of panpsychism, which assigns consciousness (‘mind’) to everything—from stars to rocks to atoms. Although some philosophers through history have subscribed to this notion, there is nothing empirical (measurable) to support it—future investigation notwithstanding, of course. A takeaway from the broader discussion is that the distributed presence of conscious experience precludes any one species, human or alien, from staking its claim to ‘exceptionalism’.

Consciousness, while universal, isn’t unbounded. That said, consciousness might prove roughly analogous to physics’ dark matter, dark energy, force fields, and fundamental particles. It’s possible that the consciousness of intelligent species (with higher-order cognition) is ‘entangled’—that is, one person’s consciousness instantaneously influences that of others across space without regard to distance and time. In that sense, one person’s conscious state may not end where someone else’s begins; instead, consciousness is an integrated, universal grid.

All that said, the universe doesn’t seem to pulse as a single conscious entity or ‘living organism’. At least, it doesn't to modern physicists. On a fundamental and necessary level, however, the presence of consciousness gives the universe meaning—it provides reasons for an extraordinarily complex universe like ours to exist, allowing for what ‘awareness’ brings to the presence of intelligent, sentient, reflective species... like humans.

Yet might not hyper-capable machines too eventually attain consciousness? Powerful artificial intelligence might endow machines with the analog of ‘whole-brain’ capabilities, and thus consciousness. With time and breakthroughs, such machines might enter reality—though not posing the ‘existential threat’ some philosophers and scientists have publicly articulated. Such machines might well achieve supreme complexity—in awareness, cognition, ideation, sentience, imagination, critical thinking, volition, self-optimisation, for example—translatable to proximate ‘personhood’, exhibiting proximate consciousness.

Among what remains of the deep mysteries is this task of achieving a better grasp of the relationship between brain properties and phenomenal properties. The promise is that in the process of developing a better understanding of consciousness, humanity will be provided with a vital key for unlocking what makes us us.

Monday, 2 January 2017

Picture Post #20 Olbers’ Paradox, raising insoluble questions

'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'

Posted by Martin Cohen and Tessa den Uyl

A NASA image from the Hubble Telescope looking into the 'Deep Field'
This is a patch of BLACK sky - empty when initially seen - even through the largest earthbound telescopes. Yet, with the Hubble space telescope and a long-enough exposure time, even the darkness of space soon comes to glowing life. The point is, every bit of sky is actually packed with light - not merely with stars but with uncountable distant galaxies.

Heinrich Olbers (1758–1840) was a doctor in Bremen who only did astronomy in his spare time, but realised that there was a bit of a logical problem about the night sky. And ‘O’ is for ‘Olbers’ Paradox’*, which can be summed up by saying that if the universe is really infinite in size, then the night sky should not only be bright – but should be infinitely bright. Put simply, we should see stars everywhere we look. So why don't we, and why isn't the night sky all lit up?

The paradox touches upon profound issues in cosmology, or the study and theory of the origins of the universe. Simply saying that most of the stars are too far away to see is not enough. Certainly it is true that starlight, like any other kind of light, dims as a function of distance, but at the same time, the number of light sources in the ‘cone of vision’ increases – at exactly the same rate. In fact, on the mathematics of it, given an infinite universe, with galaxies and stars distributed uniformly, the whole night sky should appear to be not black, not speckled, but white!
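
The mathematics gestured at here can be made explicit with the classic ‘shell’ calculation – a standard back-of-the-envelope sketch, not from the original post, which assumes stars of average luminosity L spread through space with a uniform number density n:

```latex
% Flux received from one star of luminosity L at distance r (inverse-square law):
\[
f(r) = \frac{L}{4\pi r^{2}}
\]
% A thin spherical shell of radius r and thickness dr, centred on the observer,
% contains 4*pi*r^2*n*dr stars, so the whole shell contributes a flux of
\[
dF = \frac{L}{4\pi r^{2}} \cdot 4\pi r^{2}\, n \, dr = L n \, dr
\]
% The r^2 factors cancel: every shell, near or far, adds the same flux.
% Summing the shells of an infinite, uniformly populated universe:
\[
F = \int_{0}^{\infty} L n \, dr = \infty
\]
```

The distance-squared dimming of each star and the distance-squared growth in the number of stars per shell cancel exactly, which is why an infinite, uniform universe would add up to an infinitely bright sky.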

Olbers’ paradox is a ‘thought experiment’ in the very good sense that most of the reasoning is done by hypotheticals. What if the universe is infinitely large? And infinitely old? If the stars and galaxies are (on average) spread out evenly?

Various possible explanations have been offered to explain the paradox. Such as that stars and galaxies are not distributed randomly, but rather clumped together leaving most of space completely empty. So, for example, there could be a lot of stars, but they hide behind one another. But in fact, observations reveal galaxies and stars to be quite evenly spread out.

What then, if perhaps the universe has only a finite number of stars and galaxies? Yet the number of stars, finite or not, is definitely still large enough to light up the entire sky…

Another idea is that there may be too much dust in space to see the distant stars. This seems tempting, but ignores known facts – such as that the dust would heat up too, and that space would then have a much higher temperature than it does. The astronomers who took this image claim it shows some kind of spectral shift into the red spectrum. Or is it only the dust? The questions are not really resolved, even yet.

So what is the best answer to Olbers’ riddle? The favoured explanation today is that although the universe may be infinitely large, it is not infinitely old, meaning that the galaxies beyond a certain distance will simply not have had enough time to send their light over to fill our night sky. If the universe is, say, 15 billion years old, then only stars and galaxies less than 15 billion light years away are going to be visible. Add to which, astronomers say that the phenomenon of red shift may mean some galaxies are receding from us so fast that their light has been ‘shifted’ beyond the visible spectrum.

After reading this, and then standing here on planet Earth and watching the night sky, one might feel a little trapped by the questions. Our sight is limited, and it always will be, but maybe this is our hope, for we can continue to philosophise: after all, what are we thinking? The picture above might as well represent pieces of coloured glass, underwater visions where fluorescent life flows in deep dark seas, a pattern for printed cloth. Our brain only represents what we think we see, not necessarily the reality in which we live? In the incredible immensity of space, mankind has always been aware of this, even if, once in a while, the tendency is to forget.

* Although the paradox carries Olbers’ name, it can really be traced back to Johannes Kepler in 1610. Martin’s book, Wittgenstein's Beetle and Other Classic Thought Experiments, talks a little more about all this.

Monday, 26 December 2016

Disruptive Finance

  Posted by Martin Cohen 

It seems like every day, President-elect Trump announces some outrageous new strategy, abandons some long-standing tenet of policy, or upsets long-standing conventions. And that’s of course BEFORE becoming President!

You’d maybe have thought, as a businessman, that he’d appreciate the need for research, consultation, and caution. But if so, you’d not understand the kind of business circles that Donald Trump moves in. He’s not so much a shopkeeper, in the mould of Britain’s Margaret Thatcher, whose father was called (albeit misleadingly) a corner-store grocer and whose motto was that expenditures must match savings – as a financier in the mould of, well, Jordan Belfort – the Wolf of Wall Street.

Trump is part of a new breed of super-wealthy and totally unscrupulous financiers whose motto is DISRUPTION. I followed the activities of some of them in the UK, such as Edi Truell, founder and CEO of Disruptive Capital Finance, and the path led eventually to the spreading chaos (and high stock market prices) that is Britain leaving the EU. Where Trump’s plans will go is anyone’s guess – and that’s exactly how he likes it. Because in uncertainty – and upheaval – disruptive financiers make millions.

The film of that name is based on the true story of Belfort – who ultimately came a cropper. But there’s no reason to suppose that possibility is worrying Trump or his circle of friends and advisors – like Britain’s Nigel Farage. To Americans, Farage is the man who persuaded Britons to vote to leave the European Union – but to those who know him better, Farage is a commodities trader who has worked in both London and New York. And Farage’s campaign to get Britain to up-end all its economic and political commitments was supported by a range of other figures from high finance.

Take Richard Tice, CEO and a partner at Quidnet Capital, and co-chair of Leave.EU, a campaign for ‘Brexit’.

Tice, of course, still insists that leaving the EU can be pulled off without upending the economy. The former head of CLS Holdings Plc, a major property-investment firm, calls it a "very simple process" in which the EU would negotiate a new accord with a separate Britain in one to two years. "I don’t think there’d be any disruption at all."

Fellow Brexit campaigners Crispin Odey, founding partner of Odey Asset Management, and former Tory party treasurer Peter Cruddas, founder of online trading company CMC Markets, both look to a new order in which financiers are freed from regulation. Do you remember the financial crisis of 2007–8 – the one that almost brought the Western world to collapse? Well, they evidently don’t. Instead their mantra is about seizing control of the levers of political power in order to increase the ability of speculators to make money.
As Vote Leave chief executive Matthew Elliott has said: “Far from the picture of gloom painted by the Government, it is clear the City of London would not only retain its pre-eminence as the world’s most important financial centre, but would also thrive after freeing herself from the EU’s regulatory shackles.”
In both the UK and the US, an influential cadre of super-rich have clear professional reasons for wanting to change the political norms: a dislike for what they regard as overburdensome – and profit-reducing – regulation.
According to one source close to the industry: “I think there’s a genuine conviction they have that all regulation is rubbish.” But, he says, the profit potential from leaving is also a factor: “They love taking a view ... Market dislocation is fine if you’re a hedge fund guy.”
Trump is not so much a reaction to the Obama presidency – as he is to the flood of regulation that followed the 2008 financial crash. And so, to understand what’s coming next, ignore all the angry tweets and photo opportunities and instead recall that classic piece of political advice: follow the money. There may be more logic to the desire of Trump and his newly assembled band of bankers and financiers to shake things up than people give him credit for. But it’s the opposite logic to what he claimed to stand for.

And a poem

one drizzled day
donald and nigel
over buttered eggs
and hot crumpet
thought to exchange keys

‘you live in my house
& i in yours donald’
said nigel
‘on the contrary
i in mine you inside’
replied donald

From the booklet 45th President Elect, by Ken Sequin

Monday, 19 December 2016

Is Violence Therapeutic?

Posted by Bohdana Kurylo
In his book, The Wretched of the Earth, the theorist of colonialism Frantz Fanon provides an unprecedented legitimation of violence – passing beyond mere self-defence or the removal of an oppressive social system. Violence becomes a necessary therapy to address the ‘systemised negation of the other’. Yet to what extent is violence really therapeutic? There seems to be a fine line between its utility and its harm.
Fanon offered three major reasons why violence is crucial for resistance:

• Violence may be a liberating force. From his observations of the behaviour of the colonisers, he concluded that the oppressed are not considered to be of equal human value. In contexts where one party possesses a clear dominance over another, universal values, such as justice or equality, apply only to the more powerful. Within this context, nonviolence is not an option, since it simply sustains the violence of the oppressors, whether physical or mental. Nonviolent struggle, for the oppressed, is only a distraction from the concrete demands of emancipation.

• Violence may be a cleansing force. It rids the oppressed of their inferiority complex. Fanon claimed that the belief that emancipation must be achieved by force originates intuitively among the oppressed. He observed that, through generations, the oppressed internalise the tag of worthlessness. Anger at their powerlessness eats them from the inside, begging for an outlet. Violence becomes psychologically desirable, as it proves to the oppressed that they are as powerful and as capable as the oppressor. It forces respect – but more importantly, it gives the oppressed a sense of self-respect. By cleansing them of their inferiority complex, violence reinstates them as human beings.

• Violence may be a productive force. On a grander scale, Fanon saw violence as the means of creating a new world. Through violence, a new humanity can be achieved. Violence is instrumental in raising collective consciousness and building solidarity in the struggle for freedom. This creative characteristic of violence could bring a new political reality that comprised the creation of new values.

Ends justify means for Fanon, who accepts even absolute violence for the purposes of liberation and regeneration. Although he built on the specific case of colonial oppression, his ideas can be applied to violence against any regime in which a group’s rights are severely and systematically violated, whether the oppression be cultural, gender-based, or economic.

The Irish Republican Army (IRA) often referred to Fanon to justify its terrorist violence. One may recall how the partition of Ireland was followed by social, political, and economic discrimination against the Catholic population of Northern Ireland. The attempts of the British government to suppress the IRA by force only reinforced the need to find an outlet for the accumulated frustration and internalised violence. Indeed, Fanon himself claimed that terrorism may be an ‘unfortunate necessity’ to counter the retaliation of a regime after the initial revolt of the oppressed.

Nevertheless, to the extent that the violence of the IRA can be explained by Fanon, this case also disproves Fanon. In particular, the IRA experience disproves the justification of the use of violence as the only means of creating a new culture of politics. Lasting for more than thirty years, the Northern Ireland conflict shows that violence often leads to stalemate, and is unable to deliver the desired results.

The eventual willingness of the British government to recognise the legitimacy of the insurgents’ demands, however limited, offered more possibilities for creating a new culture of politics than continued bloodshed. After all, the fact that Algeria is still torn apart by violence today illustrates that the efficacy of violence in the short term can be mistaken for its efficacy in general. The danger is that the means may overwhelm the ends. Thus Fanon’s belief that, after a period of confrontation, the door would eventually be open for a modern and peaceful society seems unrealistic.

Most importantly, Fanon failed to see that reusing the methods of the oppressor is antagonistic to the idea of creating new values. For Fanon, violence signals the point of no return to the dehumanised past. Yet he was vague as to how a capitulation to anger can help establish a new humanity, for there is nothing new about the use of violence to achieve one’s aims. In fact, is it not merely an imitation of the enemy? A new system of values is rotten from the inside if it is founded on mimicking the perpetrator’s actions.

Monday, 12 December 2016

Poetry: The Name Card

The Name Card

 A poem by Chengde Chen 

Attending a conference,
you receive some name cards.
Sorting through them, you care about
not the name, but the title,
which is the weight of the card.

From it, you assess the function,
estimating the time and place
for any possible uses.
If there is no direct application,
indirect values are explored.
For instance, to refer it to a friend –
there may be a potential return
of some kind in future…

To imagine a relationship from a card
is unlike fantasizing sex from pornography,
which is, more or less, poetic.
The most non-poetic essence
of imagination
is to have interests deduced
from symbols!

Chengde Chen is the author of Five Themes of Today: philosophical poems. Readers can find out more about Chengde and his poems here

Monday, 5 December 2016

Picture Post #19 The Pillars of Creation

'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'

Posted by Keith Tidman

Picture Credit: Hubble Space Telescope (NASA)

A dynamically ‘living universe’, with its own DNA, captured by the Hubble space telescope.
Among the iconic images of space captured by the Hubble space telescope is this Eagle Nebula’s ‘Pillars of Creation’—revealing the majesty and immensity of space. The image opens a window onto the cosmos, for us to wistfully wonder about the what, how, and (especially) why of reality.

The image shows the pillars’ cosmic dust clouds, referred to as ‘elephant trunks’—revealing a universe that, like our species, undergoes evolution. One thought that intrudes is whether such an immense universe is shared by other ‘gifted’ species, scattered throughout. By extension, Hubble’s images make one wonder whether our universe is unique, or one of many—undergoing the ‘creative destruction’ of these pillars.

Does the image evoke a sense of relative peace—like our own speck in our galaxy’s outer spirals? Or a universe more typically characterised by the distantly familiar roiling, boiling violence—expressing itself in the paradoxical simultaneity of creation and destruction?

The ‘Pillars of Creation’ are—were—some 7,000 light-years away! They may even no longer exist, given the time that light takes to reach Hubble. An ironic twist of fate, given the name. The ‘shape’ of the universe’s content is thus transitory – like our own bodies, as time elapses and we react to the environment.

For some, the ‘Pillars of Creation’—their church-like spires—inspire thoughts of divine creation. Alternatively, the evidence suggests that our universe rests in science, where ‘nothingness’ isn’t possible and ‘something’—a universe—is the default.

Sunday, 4 December 2016

God: An Existential Proof

Posted by Thomas Scarborough
Ernest Hemingway has one of his characters say, 'The world breaks everyone.' In crafting this now famous line, did he hand us a new proof for the existence of God?
It all rests on the way we are motivated, and the changes our motivations undergo in the course of a lifetime.

What is it that motivates me to plant a garden (and to plant it thus), to embark on a career, or to go to war? Today there is little disagreement that, basically, I am motivated when I hold up the world in my head to the world itself. Where then I find a difference between the two, I am motivated to act. It is, writes neuropsychologist Richard Gregory, the encounter with the 'unexpected' that motivates me.

Now consider that, in one’s early years, one's motivations are fresh and new. The world in one’s head seems to offer one high hopes, pleasant dreams, a good view of humanity, and enthusiasm to spare. Yet as one progresses through life, 'the world breaks everyone'. It breaks them, not so much through the hardships it brings to bear on the body—if this should matter at all—but because of the way in which it assails the mind and emotions.

Disillusionment sets in. And this, presumably, means coming to see things for the way they are. As we grow and mature, we come to see that the world is a place where hopes wither, dreams die, good turns to bad, and our energies are sapped. We become jaded, tired, and uninterested. 'My hopes were all dead,' Charlotte Brontë has one of her characters say. 'I looked on my cherished wishes, yesterday so blooming and glowing. They lay stark, chill, livid corpses that could never revive.'

With no world now to hold up to the world, because we have finally seen the world for what it is, we lose our motivation—ultimately all motivation—because motivation is the 'unexpected'.

And so we lose the ability to live. Ernest Hemingway had no motivation to go on. He famously shot himself with a double-barrel shotgun. It is 'the very good,' he wrote, 'and the very gentle and the very brave' who go first. As for the rest—they, too, shall be found.

What then to do, when we are broken? How may a person restore any motivation at all, when they have come to see the world as it is?

It needs to be something beyond this world—and though we here 'appeal to consequences'—the argument that it must be so—indeed it must be so. We cannot go on with a view of this world which is born of the world itself. Small wonder, then, that it is central to religious thinking that 'whether we live, we live unto the Lord, and whether we die, we die unto the Lord'. We continue to strive—but we strive for something which is other-worldly.

There may be another, logical possibility. If not something beyond this world, then we need an interventionist God who through his being there, changes our expectations—a God who reaches down into our reality—a God who acts in this world. The world is not, therefore, all that I expect it to be. This, too, is a dominant religious theme: 'For by you I have run through a troop,' writes David. 'By my God have I leaped over a wall.' He could turn the tables, through his God.

What then is that motivation which lies beyond this world? What then are the interventions of God? Such questions would seem to lie beyond the bounds of philosophy, and in the realm of theology.

Paradoxically, if we accept the 'God option' as the basis of all true motivation, then this would seem to be the option of deepest disillusionment—at the very same time as it offers us the greatest hope. One has no need for a new and fundamentally different motivation, in God, unless the world in one’s head is no longer found to be worth holding up to the world.

Monday, 28 November 2016

The Silence of God

Posted by Eugene Alper
Perhaps God is so silent with us for a reason.  If He were to answer, if He were to respond to even one question or one plea, this would spell the end of our free will.
For once we knew His preferences for us, once we could sense His approval or disapproval, we would no longer exercise our own preferences, we would not choose our actions.  We would be like children again, led by His hand.  Perhaps He did not want this.  Perhaps He did not create us to be perpetual children.  Perhaps He designed the world so we could think about it and choose our actions freely.

But mentioning free will and God's design in the same sentence presents a predicament—these two ideas need to be somehow reconciled.  For if we believe that God designed the world in a certain way, and the world includes us and our free will, its design has to be flexible enough for us to exercise our free will within it.  We should be able to choose to participate in the design or not, and if so, to which degree.  Should we choose to do something with our life—however small our contribution may be—maybe to improve the design itself, or at least to try to tinker with it, we should be able to do so.  Should we choose to stay away from participating and become hermits, for example, we should be able to do so too.  Or should we choose to participate only partially, every third Tuesday of the month, we should be free to do so as well.

This thinking smacks of being childish.  We want God's design to be there and not to be there at the same time.  We want God to be a loving father who is not overly strict.  This is how we created His image in the Old Testament: God is occasionally stern—to the point of destroying almost the whole of humankind—but loving and caring the rest of the time.  This is how we created His image in the New Testament, too: God so loved the world that He sent His own Son to redeem it.  Maybe all we really want is a father again; whatever beings we imagine as our gods, we want the familiar features of our parents.  Maybe we are perpetual children after all.  We want to play in our sandbox—freely and without supervision—and build whatever we want out of sand, yet we want our father nearby for comfort and protection.

There is no need to reconcile anything.  This is how it works.  Our free will fits within God's design so well because it is free only to a degree.  Time and space are our bounds.  We have only so much time until we are gone, and we have only so much energy until it runs out.  Gravity will assure that we can jump, but not too high, that we can fly, but not too far.  We cannot cause too much damage.  Sitting in the sand, we can fight with other players, we can even kick them out, we can build our own castles or destroy theirs, but we cannot destroy the sandbox itself.  Maybe this is the secret of the design. 

Monday, 21 November 2016

Individualism vs. Personhood in Kiribati

By Berenike Neneia
The French philosophes thought of the individual as being 'prior to' the group. This has been a point of strenuous debate ever since. But whatever the case, individualism is characteristic, in some way, of the whole of our Western society today.
I myself am privileged to belong to a society which would seem to have been stranded in time – and while individualism now influences us profoundly, the cultural patterns of the past are still near. This short post serves as an introduction to a concept which is central to my culture in Kiribati: te oi n aomata.

Te oi n aomata literally means 'a real or true person'. It includes all people, whether men or women, young or old. This is not merely a living person who has concrete existence, but one who is seen by the community which surrounds him or her to have certain features, whether ascribed or acquired. Therefore it is by these features that a community's recognition of a person is 'weighed': as to whether they are an oi n aomata, 'a real or true person', or not.

Since Kiribati society is patriarchal, there is a distinction between how a man (oi ni mwane) and a woman (oi n aine) are seen as oi n aomata. Men will be considered oi n aomata through their material possessions, while women will be known as oi n aomata by their conduct – which is meant in the sense that a woman will be well mannered, respectful, obedient, and so forth. It is rare for a woman to possess or inherit the family’s vital assets such as land, house, taro pit, and canoe. The only exception is a woman who is an only child.

Prior to the coming of Europeans to the shores of Kiribati, a man who was regarded as an oi n aomata or oi ni mwane (a real or true man) was 'renowned' as one who came from a good family (that is, a family disciplined in cultural norms), in which he had a good reputation. He would be the first-born or only child, he would have many lands, and he would have a 'house' of his own: not of European design, but a cluster of structures used for meeting, cooking, sleeping, and relaxing. These belongings were very valuable, as they indicated that a man was 'in the community'.

In relation to such possessions, a man would further have the skills and the knowledge of how to fish and how to cut toddy, which were vital to the sustenance of his family. He would also know how to build his 'house', and to maintain it. As a man, he was the one who would protect his family from all harm.

These were some of the important skills which characterised an oi ni mwane or 'real or true man'. He was very highly regarded in communities.

Similarly, to be an oi n aomata or oi n aine (a real or true woman), a woman had to come from a good family (again, a family disciplined in cultural norms). She would be well nurtured and well taught, and she herself would behave according to Kiribati cultural norms. She would know how to cook and to look after her family well. This means that everyone in her household would be served first, while she would be served last.

She would know how to weave mats, so that her family would have something to lie on. She would know respect and not talk back, especially to her husband, her in-laws, and elders. Crucially, a woman would remain a virgin until she was married, since this involved the pride of her family. Therefore, she would give no appearance of indiscreet or suspect behaviour.

A woman had to maintain her place within the home, and look after her family well. As such she was considered an oi n aine or 'real or true woman', since she was the backbone of her family.

Today when one speaks about people, there is a saying, 'Ai tiaki te aomata raom anne,' which refers to those who are 'no longer an (ordinary) person'. Rather, they have acquired, inherited, and possessed important things in the context of our culture, which make life much more enjoyable, much easier, and much better for all (with fewer complications, and less suffering).

However, now that globalisation has reached the shores of Kiribati, the definition of an oi n aomata, 'a real or true person', is evolving in relation to the changing patterns, norms, and life-styles of the Kiribati people. We see now the effects of these changing patterns – from a communal life to a more individualistic life-style. While this has brought various benefits to society, in many ways it has not been for the better.

Monday, 14 November 2016

Pseudo Ethics

Posted by Thomas Scarborough
Jean-François Lyotard proposed that efficiency, above all, provides us with legitimation for human action today. If we can only do something more efficiently – or more profitably – then we have found a reason to do it. In fact society in its entirety, Lyotard considered, has become a system which must aim for efficient functioning, to the exclusion of its less efficient elements.
This is the way in which, subtly, as if by stealth, we have come to fill a great value vacuum in our world with pseudo values, borrowed from the realm of fact. Philosophically, this cannot be done – yet it is done – and it happens like this:

The human sphere is exceedingly complex – and inscrutable. It is one thing for us to trace relations in our world, as by nature we all do – quite another to know how others trace relations in this world.  While our physical world is more or less open to view, this is not the case with worlds which exist inside other people's minds – people who further hide behind semiotic codes: the raising of an eyebrow, for instance, or a laugh, or an utterance.

A million examples could not speak as loudly as the fact that we have a problem in principle. Like the chess novice who randomly inserts a move into the grand master's game, as soon as we introduce others into the picture, there is a quantum leap in complexity.  Small wonder that we find it easier to speak about our world in 'factual' terms than in human terms.

Further, in the human sphere we experience frequent reversals and uncertainties – war, famine, and disease, among many other things – while through the natural sciences we are presented with continual novelty and advance. In comparison with the 'factual' sphere, the human sphere is a quagmire. This leads to a spontaneous privileging of the natural sciences.

We come to see the natural sciences as indicating values, where strictly they do not – and cannot. That is, we consider that they give us direction as to how we should behave. And so, economic indicators determine our responses to the economy, clinical indicators determine our responses to a 'clinical situation' (that is, to a patient), environmental indicators determine our responses to the state of our environment, and so on.

Yet philosophers know that we are unable, through facts, to arrive at any values. We call it the fact-value distinction, and it leaves us with only two logical extremes: logical positivism on the one hand, or ethical intuitionism on the other. That is, either we cannot speak about values at all, or we must speak about them in the face of our severance from the facts. 

We automatically, impulsively, instinctively react to graphs, charts, statistics, imagining that they give us reason to act. Yet this is illusory. While the natural sciences might seem to point us somewhere, in terms of value, strictly they do not, and cannot. It is fact seeking to show us value.

Thus we calculate, tabulate, and assess things, writes the sociologist James Aho, on the basis of 'accounting calculations', the value of which has no true basis. Under the banner of efficiency, such calculations have come to colonise virtually every institutional realm of modern society – while this is, and must be, a philosophical mistake.

Of course, efficiency has positive aspects. We receive efficient service, we design an efficient machine, or we have an efficient economy. This alone raises the status of efficiency in our thinking. However, in the context of this discussion, where efficiency represents legitimation for human action, it has no proper place.

The idea of such efficiency has introduced us to a life which many of us would not have imagined as children: we are both processed and we process others, on the basis of data sets – while organic fields of interest such as farming, building, nursing, even sports, have been reduced to something increasingly resembling paint-by-numbers. It is called 'increased objectification'.

With the advance of efficiency as a motive for action, we have come to experience, too, widespread alienation today: feelings of powerlessness, normlessness, meaninglessness, and social isolation, which did not exist in former times. Karl Marx considered that we have been overtaken by commodity fetishism, where the devaluation of the human sphere is proportional to the over-valuation of things.

Theologian Samuel Henry Goodwin summed it up: 'We are just a number.' Through pseudo values, borrowed from the realm of fact, we are dehumanised. In fact, this must be the case as long as we take numerate approaches to human affairs on the basis that they are 'indicated' by facts. Cold fact encroaches on the complex and subtle relations which are represented by the human sciences – in fact, by life as it is lived.

Sunday, 13 November 2016

Does Philosophy Adjust Us Or Convert Us?

Bird's eye view of Manhattan, New York
Posted by Thomas Scarborough
Assuming philosophical thought changes us, does it do so through gradual adjustments to our lives, or does it bring us to a conversion?
In a previous post, I wrote that we are in bondage to an 'unacknowledged metaphysics' – a term which Wilhelm Kamlah and Paul Lorenzen used to describe a metaphysics which lies beneath all metaphysics, and determines what metaphysicians will think before they even think it.

To put it simply, before we even begin to think, we are in bondage to the way our words (and concepts) will arrange themselves. Words are attracted to words like magnets – and snap-snap-snap, we have a metaphysic.  To put it another way, we are slaves to semantic structures, into which our words (and concepts) fit in predetermined ways. 'It is language that speaks,' wrote George Steiner, 'not, or not primordially, man.'

There may be an escape, I wrote, if we take a bird’s eye view of our world.  Rather than working on the inside of semantic structures, we may fly above them, to see the world as an infinite expanse of relations below.  On this basis, we may break with the 'unacknowledged metaphysics' of Kamlah and Lorenzen. A number of things change, when we take the bird’s eye view. Most importantly:
• Where previously we arranged our words (and concepts) in predetermined ways, we are now set free for more expansive thinking.

• Where previously we entertained narratives of mastery and progress, we now recognise our finitude as we survey the infinite. 

• Where previously we sought to conquer infinite relations with infinite control, we now understand our totalising urges.
The bird’s eye view, in fact, represents a complete change of perspective.  It is not merely adjusted thinking.  It is a radical switch.  Where previously we joined words to words (and concepts to concepts), we now seek to discern the meta-features of all words (and concepts). Instead of finding truth on the inside of semantic structures, we now seek it outside.

Instead of tracing with a finger the colourful contours of oils on canvas close up, we may step back to see, say, The Scream.  Instead of exploring the streets of Manhattan footstep by footstep, we may soar above them, to look down on the whole prospect. In practice:
• We abandon the short-sighted thinking which sacrifices holistic considerations to the narrow view: the environment to business interests, family devotion to ambition, personal development to personal indulgence, to give a few examples.

• We set aside pride in our human ability and achievement, which has fuelled our headlong rush into disaster: an overemphasis on the analytic intelligence, empty materialism, and the creation of artificial needs, among other things.

• We no longer pursue totalising ambitions, which have led to untold disaster in this world: metaphysics, ideologies, world views, narratives, causes – and on a more personal level, ideals, traditions, ambitions, principles which enslave our own selves. 
We may go one step further.  The old way of thinking – which is thinking on the inside of semantic structures – leads us into various dead ends.  Dead ends which we may experience intellectually and emotionally – and some would say, spiritually.  Thus there may come a point where this seems to be untenable.  In short, a conversion may be preceded by life crisis.

Perhaps, with this, we may further solve a fundamental problem of ethics.  The first problem of ethics is how we may describe any ethics at all. It is the problem of the fact-value distinction. Yet even if it should be possible for us to describe an ethics, there remains the problem of how we may make it anything other than voluntary.  A conversion, because it represents a different outlook on the world, commits us, simply because we now see things differently.

Monday, 7 November 2016

Picture Post #18 A Somersault for the Suspension of Civilisation

'Because things don’t appear to be the known thing; they aren’t what they seemed to be, neither will they become what they might appear to become.'

Posted by Tessa den Uyl and Martin Cohen

Photo credit: students of  A Mundzuku Ka Hina, communications workshop. 

A life conditioned by the dictates of competition and consumption cannot but bring great social differences along in its train. When we ascribe symbolic values to a consumptive life, ideas will conform to ideals in which our moral duties are the rights of others on us.

The subtle way in which social disproportions are perceived as if a causa sui – something wherein the cause lies within itself – creates a world of facts based upon competitive abstractions that are endlessly rehearsed on a Procrustean bed.

The salto (flying somersault) performed by the boy, who depends for his survival on a rubbish-dump, breaks with this gesture the conditioned life. What he breaks is the need to function – which means to think – according to a certain ‘life-design’. His action shows the incompleteness of our relationships in an abstract world.

His jump is a jump into a space of non-facts.

In the suspension of the movement is the liberating being of lightness.

Monday, 31 October 2016

Nothing: A Hungarian Etymology

'Landing', 2013. Grateful acknowledgement to Sadradeen Ameen
Posted by Király V. István
In its primary and abstract appearance, nothing is precisely 'that' 'which' it is not. However, the word is still there, in the words of all the languages we know. Here we explore its primary meaning in Hungarian.
The Hungarian word for nothing – 'semmi' – is a compound of 'sem' (nor) and 'mi' (we). The negative 'sem' expresses: 'nor here' (sem itt), 'nor there' (sem ott), 'nor then' (sem akkor), 'nor me' (sem én), 'nor him, nor her' (sem ő). That is to say, I or we have searched everywhere, yet have found nothing, nowhere, never.

However much we think about it, the not of 'sem' is not the negating 'not', nor the depriving 'not' which Heidegger revealed in his analysis of 'das Nichts'. The not in the 'sem' is a searching not! It says, in fact, that searching we have not found. By this, it says that the way that we meet, face, and confront the not is actually a search. Thus the 'sem' places the negation in the mode of search, and the search into the mode of not (that is, negation).

What does all this mean in its essence?

Firstly, it means that, although the 'sem' is indeed a kind of search, which 'flows into' the not, still it always distinguishes itself from the nots it faces and encounters. For searching is not simply the repetition of a question, but a question carried around. Therefore the 'sem' is always about more than the tension between the question and its negative answer, for the negation itself – the not – is placed into the mode of search! And conversely.

Therefore the 'sem' never negates the searching itself – it only places and fixes it in its deficient modes. This way, the 'sem' emphasises, outlines, and suffuses the not, yet stimulates the search, until the exhaustion of its final emptiness. The contextually experienced not – that is, the 'sem' – is actually nothing but an endless deficiency of an emptied, exhausted, yet not suspended search.

This ensures on the one hand, the stability of the 'sem', which is inclined to hermetically close up within itself – while it ensures on the other hand, an inner impulse for the search which, emanating from it, continues to push it to its emptiness.

It is in the horizon of this impulse, then, that the 'sem' merges with the 'mi'. The 'mi' in Hungarian is at the same time an interrogative pronoun and a personal pronoun. Whether or not this linguistic identity is a 'coincidence', it conceals important speculative possibilities, for the 'mi' pronoun, with the 'sem' negative, always says that it is 'we' (mi) who questioningly search, but find 'nothing' (semmi).

Merged in their common space, the 'sem' and the 'mi' signify that the questioners – in the plurality of their searching questions – only arrived at, and ran into, the not, the negation. Therefore the Hungarian word for the nothing offers a deeper and more articulated consideration of what this word 'expresses', fixing not only the search and its deficient modes, but also the fact that it is always we who search and question, even if we cannot find ourselves in 'that' – in the nothing.

That is to say, the nothing – in this, which is one of its meanings – is precisely the strangeness, foreignness, and unusualness that belongs to our own self – and therefore all our attempts to eliminate it from our existence will always be superfluous.

Király V. István is an Associate Professor in the Hungarian Department of Philosophy of the Babes-Bolyai University, Cluj, Romania. This post is an extract selected by the Editors, and adjusted for Pi, from his bilingual Hungarian-English Philosophy of The Names of the Nothing.

Monday, 24 October 2016

Shapeshifters, Socks, and Personal Identity

Posted by Martin Cohen
Perhaps the proudest achievement of philosophy in the past thousand years is the discovery that each of us really does know that we exist. Descartes sort-of proved that with his famous saying:

"I think therefore I am."
Just unfortunate then, that there is a big question mark hanging over the word ‘I’ here – over the notion of what philosophers call ‘personal identity’. The practical reality is that neither you nor I are in fact one person but rather a stream of ever so slightly different people. Think back ten years – what did you have in common with that creature who borrowed your name back then? Not the same physical cells, certainly. They last only a few months at most. The same ideas and beliefs? But how many of us are stuck with the same ideas and beliefs over the long run? Thank goodness these too can change and shift.

In reality, we look, feel and most importantly think very differently at various points in our lives.

Such preoccupations go back a long, long way. In folk tales, for example, like those told by the Brothers Grimm, frogs become princes – or princesses! A noble daughter becomes an elegant, white deer, and a warrior hero becomes a kind of snake. In all such cases, the character of the original person is simply placed in the body of the animal, as though it were all as simple as a quick change of clothes.

Many philosophers, such as John Locke, who lived way back in the seventeenth century, have been fascinated by the idea of such ‘shapeshifting’, which they see as raising profound and subtle questions about personal identity. Locke himself tried to imagine what would happen if a prince woke up one morning to find himself in the body of a pauper – the kind of poor person he wouldn’t even notice if he rode past them in the street in his royal carriage!

As I explained in a book called Philosophy for Dummies – confusing many readers – Locke discusses the nature of identity. He uses some thought experiments too as part of this, but not, by the way (per multiple queries!), the sock example. He didn't literally wonder about how many repairs he could make to one of his socks before it somehow ceased to be the original sock. He talks, though, about a prince and a cobbler, and asks which ‘bit’ of a person defines them as that person.

In a chapter called ‘Of Identity and Diversity’ in the second edition of the Essay Concerning Human Understanding, he distinguishes between collections of atoms that are unique, and something made up of the same atoms in different arrangements.

Living things, like people, for example, are given their particular identity not by their atoms (because each person's atoms change regularly, as we know) but rather are defined by the particular way that they are organised. The point argued for in his famous Prince and the Cobbler example is that if the spirit of the Prince can be imagined to be transferred to the body of the Cobbler, then the resulting person is ‘really’ the Prince.

Locke’s famous definition of what it means to be a ‘Person’ is:
‘A thinking intelligent being, that has reason, and can consider it self as it self, the same thinking thing, in different times and places; which it does only by that consciousness, which is inseparable from thinking’
More recently, a university philosopher, Derek Parfit, has pondered a more modern-sounding story, all about doctors physically putting his brain into someone else's body, in such a way that all his memories, beliefs and personal habits were transferred intact. Indeed today, rather grisly proposals are being made for ‘transplants’ like this. But our interest is philosophy, and Derek’s fiendish touch is to ask what would happen if it turned out that only half a brain was enough to do this kind of ‘personality transfer’?

Why is that a fiendish question to ask? Because if that were possible, we could potentially make two new Dereks out of the first one! Then how would anyone know which was the ‘real’ one?!

Okay, that's all very unlikely anyway. And yet there are real questions and plenty of grey areas surrounding personal identity. Today, people are undergoing operations to change their gender – transgender John becomes Jane – or do they? Chronically overweight people are struggling to ‘rediscover’ themselves as thin people – or are they a fat person whose digestion is artificially constrained? Obesity and gender dysphoria alike raise profound philosophical, not merely medical, questions.

On the larger scale, too, nations struggle to decide their identity – some insisting that it involves restricting certain ethnic groups, others that it rests on enforcing certain cultural practices. Yet the reality, as in the individual human body, is slow and continuous change. The perception of a fixed identity is misleading.

“You think you are, what you are not.” 

* The book is intended to introduce children to some of the big philosophical ideas. Copies can be obtained online here:

Monday, 17 October 2016

Does History Shape Future Wars?

Posted by Keith Tidman
To be sure, lessons can be gleaned from the study of past wars, as did Thucydides, answering some of the ‘who’, ‘what’, ‘how’, ‘why’, and ‘so-what’ questions. These putative takeaways may be constructively exploited—albeit within distinct limits.
Exploited, as the military historian Trevor Dupuy said, to “determine patterns of conduct [and] performance . . . that will provide basic insights into the nature of armed conflict.” The stuff of grand strategies and humble tactics. But here’s the rub: What’s unlikely is that those historical takeaways will lead to higher-probability outcomes in future war.

The reason for this conclusion is that the inherent instability of war makes it impossible to pave the way to victory with assurance, regardless of lessons gleaned from history. There are too many variables, which rapidly pile up like grains of sand and get jostled around as events advance and recede. Some philosophers of history, such as Arthur Danto, have shed light on the whys and wherefores of all this. That is, history captures not just isolated events but rather intersections and segues between events—like synapses. These intersections result in large changes in events, making it numbingly hard to figure out what will emerge at the other end of all that bewildering change. It’s even more complicated to sort out how history’s lessons from past wars might translate to reliable prescriptions for managing future wars.

But the grounds for flawed historical prescription go beyond the fact that war’s recipe mixes both ‘art’ and ‘science’. Even in the context of blended art and science, a little historical information is not always better than none; in the case of war, a tipping point must be reached before information is good enough and plentiful enough to matter. The fact is that war is both nonlinear and dynamic. Reliable predictions—and thus prescriptions—are elusive. Certainly, war obeys physical laws; the problem is just that we can’t always get a handle on how and why that happens, in the face of all the rapidly moving, morphing parts. Hence in the eyes of those caught up in war’s mangle, events often appear to play out as if random, at times lapsing into a level of chaos that planners cannot compensate for.

This randomness is more familiarly known as the ‘fog of war’. The fog stems from the perception of confusion in the mind’s eye. Absent a full understanding of prevailing initial conditions and their intersections, this perception drives decisions and actions during war. But it does so unreliably. Complexity thus ensures that orderliness eludes the grasp of historians, policymakers, military leaders, and pundits alike. Hindsight doesn’t always help. Unforeseeable incidents, which Carl von Clausewitz dubbed friction, govern every aspect of war. This friction appears as unmanageable ‘noise’, magnified manifold when war’s tempo quickly picks up or acute danger is at hand.

The sheer multiplicity of, and interactions among, initial conditions make it impossible to predict every possible outcome or to calculate their probabilities. Such unpredictability in war provides a stark challenge to C.G. Hempel’s comfortable expectations:
“Historical explanation . . . [being] aimed at showing that some event in question was not a ‘matter of chance’, but was rather to be expected in view of certain antecedent or simultaneous conditions.” 
To the contrary, it is the very unpredictability of war that makes it impossible to avoid or at least contain. The pioneering of chaos theory, by Henri Poincaré, Edward Lorenz, and others, has shown that events associated with dynamic, nonlinear systems—war among them—are extraordinarily sensitive to their initial conditions. And as Aristotle observed, “the least deviation . . . is multiplied later a thousandfold.”

Wars evolve as events—branching out in fern-like patterns—play out their consequences. The thread linking the lessons from history to future wars is thin and tenuous. ‘Wisdom’ gleaned from the past inevitably bumps up against the realities of wars’ disorder. We might learn much from past wars, including descriptive reconstructions of causes, circumstances, and happenings, but our ability to take prescriptive lessons forward is strictly limited.

In describing the events of the Peloponnesian War, Thucydides wrote:

“If [my history] be judged by those inquirers who desire an exact knowledge of the past as an aid to the interpretation of the future . . . I shall be content.”

Yet is our knowledge of history really so exact? The answer is surely 'no' – whatever the comfortable assurances of Thucydides.

Monday, 10 October 2016

Do We Need Perpetual Peace?

By Bohdana Kurylo
Immanuel Kant viewed war as an attribute of the state of nature, in which ‘the freedom of folly’ has not yet been replaced by ‘the freedom of reason’. His philosophy has influenced the ways in which contemporary philosophers conceive of political violence, and seek to eliminate it from global politics: through international law, collective security, and human rights. Yet is perpetual peace an intrinsically desirable destination for us today?
For Kant, peace was a question of knowledge – insofar as knowledge teaches us human nature and the experience of all centuries. It was a matter of scrutinising all claims to knowledge about human potential, that stem from feelings, instincts, memories, and other results of lived experience. On the basis of such knowledge, he thought, war could be eliminated.

Kant realised, however, that not all human knowledge is true. In particular, the ever-present possibility of war serves as evidence of the inadequacy of existing knowledge to conceive the means and principles by which perpetual peace may be established. Kant’s doctrine of transcendental idealism explained this inadequacy by claiming that humans experience only appearances (phenomena) and not things-in-themselves (noumena). What we think we know is only appearance – our interpretation of the world. Beyond this lies a real world of things-in-themselves, the comprehension of which is simply unattainable for the human mind.

While realists, on this basis, insist on the inevitability of anarchy and war, Kant conceived that the noumenal realm could emancipate our reason from the limitations of empiricism, so enabling us to achieve perpetual peace. He sought to show that we have a categorical moral duty to act morally, even though the empirical world seems to be resistant to it. And since there is no scientific evidence that perpetual peace is impossible, he held that it ought to remain a possibility. Moreover, since moral practical reason claims that war is absolutely evil, humans have a moral duty to discipline their worst instincts to bring about perpetual peace.

Claiming to be guided by universal reason, Kant proposed three institutional principles which could become the platform for a transnational civil society, superseding potential sources of conflict:
• The road to peace starts with the transition from the natural condition to an ‘original contract, upon which all rightful legislation of a people must be founded’, which needs to be republican.
• In order to overcome the natural condition internationally, external lawlessness between states should be solved by creating a ‘Federation of Free States’.
• Finally, a peaceful membership in a global republic would not be possible without ‘the right of a stranger not to be treated with hostility […] on someone else’s territory’ – the cosmopolitan right to universal hospitality.
Yet Kant, in spite of wanting to emancipate humans from natural determination and past experience, seems to have fallen under the same phenomenal influence as the realists. His pessimistic view of human instincts, which he believed needed to be suppressed to avoid war, strongly reflected an internalisation of the social perceptions of human nature of his time. Humans, he thought, by choosing to overcome their instincts, ought to move from the tutelage of human nature to a state of freedom. The problem is that this ‘freedom’ was already socially defined. Therefore, viewing war as a purely negative phenomenon that hinders human progress, Kant never subjected his own reasoning to the total scrutiny which he himself advocated.

Consequently Kant offered a rather deterministic solution, which merely aimed at social ‘tranquillisation’ by feeding people the ready-made values of global peace. Hence his rather excessive emphasis on obedience to authority: ‘all resistance against the supreme legislative power […] is the greatest and most punishable crime’. Kant’s individual requires a master who will ‘break his self-will and force him to obey’. In turn, the master needs to be kept under the control of his own master. Crucially, this would destroy the liberty to decide for oneself whether war is necessarily such a negative phenomenon.

Even such pacification, through obedience to authority, is unlikely to bring perpetual peace, for it declines to understand the underlying factors that lead humans into war with each other. It might be more effective to seek the cause of war before searching for its cure.

Kant missed the idea that war may be the consequence of the current value system, which suppresses the true human will. Thus Friedrich Nietzsche argued for the need to revalue all values. Being unafraid of war, he recognised its creative potential to bring about a new culture of politics. Where Kant’s peace would merely be a temporary pacification, a complete revaluation of values could potentially create a society beyond the issues of war and peace.

Monday, 3 October 2016

Picture Post #17 The Mask

'Because things don’t appear to be the known thing; they aren’t what they seemed to be, nor will they become what they might appear to become.'

Posted by Tessa den Uyl and Martin Cohen

The headquarters of Mussolini's Italian Fascist Party, 1934 via the Rare Historical Photos website
The curious thing about this image is that it looks so much like an over-the-top film set. The dictator looks down on the public hurrying past, from the facade of the Party HQ – which in this case is imaginatively, yet also somehow absurdly, covered in graffiti, in the original sense of writing or drawings that have been scribbled, scratched, or painted. The 'Si, si, si' is of course Italian for 'Yes', which is actually not so sinister. The occasion was the 1934 election, in which Italians were called to vote either For or Against the Fascist representatives on the electoral list. Indeed, the facade was not always covered up like that.

In 1934, Mussolini had already ruled Italy for 12 years, and the election had certain fascistic features: there was only one party – the Fascist one – and the ballot slip for 'Yes' was patriotically printed in the colours of the Italian flag (plus some fascist symbols), while 'No' was, in a fine philosophical sense, a vote for nothing: its ballot sheet was plain white.

The setting of the picture is the Palazzo Braschi in Rome, and the building was the headquarters of the Fascist Party Federation - which was the local one, not the national, Party headquarters.

According to the Fascist government that supervised the vote, anyway, the eventual vote was a massive endorsement of Il Duce with the Fascist list being approved not merely by 99% of voters but by 99.84% of voters!

But back to the building. Part of the grand scheme of Mussolini and his philosopher guru, Giovanni Gentile, was to transform the cities into theatrical stages proclaiming Fascist values. Italian fascism is little understood, and was not identical to the later Nazi ideology – but one thing it did share was the belief in totalitarian power. As George Orwell would later portray in his dystopia, 1984, in this new world 2+2 really would equal five if the government said so. Si!