Monday, 16 July 2018

The Things | Relations Dichotomy

Yin-Yang by Sandi Baker
Posted by Thomas Scarborough
We humans have always been accused of dichotomous thinking: us and them, good and evil, for and against, and so on.  It pervades our thinking, and our existence.  Such dichotomous thinking is closely familiar to us.  Not a day goes by without someone suggesting that we should be more nuanced, less one-sided, better rounded. 
Yet there is a strange dichotomy which is more pervasive still, which passes all but unnoticed in our lives—and, I shall argue, bedevils all of our thinking.  Within its broader bounds, it goes by hundreds of names—which in itself suggests that it has too much escaped our attention.  One might describe it as the static and dynamic, or being and becoming—but there are many ways to describe it besides:
things and relations (Kant)
objects and arrangements (Wittgenstein)
the spatial and the temporal
nouns and verbs
operators and variables
And so on. It is the simple matter of a world where things exist (we include events, which are things that happen), and exist in certain relations to one another.  This dichotomy pervades all of our thinking, at a level so deeply embedded that we seldom notice it—as one sees even in our grammar and our sums, for example.

It all has to do with individuation.  We all begin, apparently, with what William James called ‘one great blooming, buzzing confusion’, then we single out complexes from nature and call them things, entities, objects, even concepts—or events, actions, processes, and so on.  We distinguish these then from the relations between them.

We may have said enough in these few words to identify the presence of this dichotomy at the core of some major philosophical problems, of which here is just a sample:
The fact-value distinction (Hume).  We have facts on the one hand—statements which contain things—yet on the other do not know how we should arrange them or bring them into relation. 
The ‘own goal’ of science (Hawking).  By singling out things from nature, and discarding all that (we think) does not belong to them, we create a world of unforeseen side-effects, as we relate them.
Free will and determinism.  Free will goes to the question of cause and effect, and causation in turn is about the relation between two or more events. This, too, rests on the dichotomy of things and relations.
The mind-body problem.  This problem may rest on our experience that things exist in the world (or so we feel), while only relations can exist in our networking brain—not things, of course.
God.  The problem of God’s existence may rest on the notion of causality, since that which is caused is not influenced by God.  Again, causality rests on the distinction between events and their relations.
We may put it this way.  If we did not have this dichotomy of things (and the like) versus relations, it would be impossible that we should have any of the problems listed above—and many more.  This suggests that we may solve these problems by doing away with one side of the dichotomy—say, things.  This would leave us only with relations, and relations within relations.  It is not an entirely new idea.

Someone might object.  Even if we have no things, objects, entities, events, actions, and so on, we do still have relations—and these relations are governed by scientific law.  But wait a moment.  Without things, there is no scientific law.  At least, not as we know it.

The fact of the dichotomy is presented here simply as food for further thought.  In my view, the dichotomy is artificial and false.  It is a reflection of something in the human fabric that insists first on our individuating things, then on relating them one to the other.  Yet there never has been anything to set this on a firm foundation.

Monday, 9 July 2018

Is Time What It Appears to Be?

Posted by Keith Tidman

Picture credit: Shutterstock

“Time itself flows in constant motion, just like a river; for neither the river nor the swift hour can stop its course; but, as wave is pushed on by wave, and as each wave as it comes is both pressed on and itself presses the wave in front, so time both flees and follows and is ever new.” – Ovid
We understand time both metaphorically and poetically as a flowing river — a sequence of discrete but fleeting moments — coursing linearly from an onrushing future to a tangible present to an accumulating past. Yet, might ‘time’ be different than that?

Our instincts embrace this model of flowing time as reality. The metaphor extends to suppose a unidirectional flow, or an ‘arrow of time’. According to this, a rock flies through a window, shattering the glass; the splinters of glass never reform into a whole window. The model serves as a handy approximation for our everyday experiences. Yet what if the metaphor of time as a flowing river does not reflect reality? What then might be an alternative model of time?

What if, rather than flowing, time actually entails only one now?  Here, an important distinction must be made, for clarity. Time is not a sequence of ‘nows’, as proposed by some, such as the British author of alternative physics, Julian Barbour. That is, time is not points of time — corresponding to frames in a movie reel — with events and experiences following one another as ephemeral moments that, if slowed down, can be distinguished from one another. Rather, time entails just one now: a model of time in which the future is an illusion — it doesn’t exist. The future isn’t a predetermined block of about-to-occur happenings or about-to-exist things. Likewise, the past is an illusion — it doesn’t exist. 

As to the past not existing, let me be specific. The point is that what we label as history, cosmology, anthropology, archaeology, evolution, and the like do not compose a separately distinguishable past. Rather, they are chronicles — memories, knowledge, understanding, awareness, information, insight, evidence — that exist only as seamless components of now. The Battle of Hastings did not add to an accumulating past as such; all that we know and have chronicled about the battle exists only in the now. ‘Now’ is the entirety of what exists — all things and all happenings: absent a future and past, absent a beginning and end. As the 4th-century philosopher St. Augustine of Hippo presciently noted:
‘There are three times: a present time about things past, a present time about things present, a present time about things future. The future exists only as expectations, the past exists only as memory, but expectation and memory exist in the present’.
In this construct, what we experience is not the flow of time — not temporal duration, as we are wont to envision — but change. All the diverse things and events that compose reality undergo change. Individual things change, as does the bigger landscape of which they are a part and to which they are bound. Critically, without change, we would not experience the illusion of time. And without things and events, we would not perceive change. Indeed, as Ernst Mach, the Austrian philosopher-physicist, pointed out: ‘... time is an abstraction, at which we arrive by means of the changes of things’.

It is change, therefore, that renders the apparition of ‘time’ visible to us — that is, change tricks the mind, making time seem real rather than the illusion it is. The illusion of time nonetheless remains helpful in our everyday lives — brown leaves drop from trees in autumn, we commute to work sipping our coffee, an apple rots under a tree, the embers of a campfire cool down, the newspaper is daily delivered to our front door, a lion chases down a gazelle, an orchestra performs Chopin to rapt audience members, and so forth. These kinds of experiences provide grounds for the illusion of time to exist rather than not to exist.

As Aristotle succinctly put it: ‘there is no time apart from change’. Yet, that said, change is not time. The two are often conflated, since change is commonly used as a measurement of the presumed passage (flow) of time. As such, change is more real to the illusion of time’s passing than is our watching the hands of a clock rotate. The movement of a clock’s hands simply marks off arbitrarily conventional units of something we call time; the hands’ rotation doesn’t tell us anything about the fundamental nature of time. Change leads to the orthodox illusion of time: a distinctly separate future, present, and past morphing from one to the other. Aristotle professed regarding this measurement aspect of time’s illusion:
‘Whether if soul [mind] did not exist, time would exist or not, is a question that may be asked; for if there cannot be someone to count, there cannot be anything that can be counted.’
So it is change — or more precisely, the neurophysiological perception of change in human consciousness — that deludes us into believing in time as a flowing river: a discrete future flowing into a discrete present flowing into a discrete past. The one-way arrow of time.

In this way, the expression of dynamic change provides our everyday illusion of time, flowing inexorably and eternally, as if to flow over us. The British idealist philosopher J.M.E. McTaggart wrote in the early years of the twentieth century that ‘in all ages the belief in the unreality of time has proved singularly attractive’. He underscored the point:
‘I believe that nothing that exists can be temporal, and that therefore time is unreal.’
To conclude, then: Although the intuitive illusion of time, passing from the future to the present to the past, serves as a convenient construct in our everyday lives at work, at home, and at play, in reality this model of time and its flow is a fiction. Actual experience exists only as a single, seamless ‘now’; there is no separately discrete future or past. Our sense of time’s allegorical flow — indeed, of time itself — arises from the occurrence of ‘change’ in things and events – and is ultimately an illusion.

Monday, 2 July 2018

PP #37 A Celebration of Brashness!

'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'

Posted by Tessa den Uyl and Martin Cohen

A postcard presentation of Times Square
Times Square, New York.
‘The soft rush of taxis by him, and laughter, laughters hoarse as a crow’s, incessant and loud, with the rumble of the subways underneath - and over all, the revolutions of light, the growings and recedings of light - light dividing like pearls - forming and reforming in glittering bars and circles and monstrous grotesque figures cut amazingly on the sky.’
During the so-called Jazz Age — that optimistic time after ‘the Great War’ and before the Depression, the rise of Nazism and the Second World War — F. Scott Fitzgerald’s metaphor in his book The Beautiful and Damned reflects so well the human despair combined with hope.

Acts of freedom and expression intertwine to be heard and noticed, to forget and to distract, to employ, and to hope... In those days, Times Square must have appeared promising, like a colourful stamp on the continent. But what did its message say?

Ideas about segregation and freedom brought ‘silent’ new horizons and made former distinctions tremble. With all there was to come, in those years of the Roaring Twenties, all the layers that combine to make a society were looking for ‘a voice’ and the call echoed, near and far. 
People rather grandly called Times Square the ‘crossroads of the world’ and in those days, that might have well been so. And today, on the edge of the square, the NASDAQ controls a good slice of the world’s wealth and the New York Times does likewise for the world's news. 
Yet it is after dark, after the office day has finished, that the square really comes alive. It is doubtful whether that liveliness today is filled with the same complexity and struggle, or with that necessity, literal and symbolic, to survive. While the square once stimulated a proper voice, ‘light dividing like pearls’, Times Square now embraces more of a homogenisation and offers monstrous grotesque figures cut amazingly out of the sky.

Monday, 25 June 2018

The Importance of Being Seen

Posted by Simon Thomas
Of all our innermost human desires, nothing seems to trump that of being seen. Humanity is meant for community and for togetherness. There is something very fundamental about being acknowledged, of being seen, as a person.
The Internet, with all its wonders and its ability to connect people from all over the world, does indeed meet some of this need, but virtual reality does not on any level substitute for real human interaction on a personal level. It does, however, point to the fact that there is a real need for it. A person can have untold ‘friends’ or followers on social media, but this does not replace real human conversation. And we are sick because of it.

Everyone needs to know that what they say, or the very fact of their being, is important to somebody. The heart longs for a human embrace, for the attentive ear of another who shows some interest in being with them and listening to what they say. Psychological research has shown that when people are perpetually in a situation where they are ignored, this causes real emotional pain, which in turn leads to physical problems brought on by the stress of being ignored on an ongoing basis.

This is all the more prevalent in our society of individualism. Everyone wants to connect, but there seems to be this incessant preoccupation with connecting with everything and everyone except that which is in our present reality. It is true, a person can feel lonely in a crowd, and experience intense feelings of abandonment even in the company of others. This is especially true in our society with its preoccupation with distraction.

Things become more important than people, virtual friendships on-line become more important than friendships we can experience in real time and in real situations. We see, but we don’t see each other. We put each other in categories, and fail to recognise how much we are the same, with the same need for communicating and the real need for simply communicating with those who share our time and space.

Families today, too, have gone this route. People live under the same roof but do not communicate; there is no or very little interaction. The whole emphasis has shifted from ‘how can I serve’ to ‘how can I get something out of this person’.

I have this kind of relationship with my dog. He comes to me when he is hungry or wants something from me. And that is okay. Animals do not have the complex relationship needs that human beings have. My dog has what I call ‘cupboard love’ -- he loves me for what he can get from me. But that is not to be the way we interact with our fellow human beings. It is the height of selfishness. And very often the cause of much emotional and mental anguish.

I have noticed, however, that this is how much of the Internet works. Someone has something on offer which the other wants, and while felt needs are met on a superficial level, there is no lasting connection. It is understandable that people want the connection, but they don’t want to acknowledge the person they interact with. We come across many people in our daily lives, in the office, at the bus station, in the shops, at church. But as many can testify, even after we exit a party or a group of people we feel drained.

It is important to listen to one another. Even a brief interaction can be meaningful if the person we talk to makes us feel that we have been seen, that we have been acknowledged. It is not uncommon to go through a day and, while we do many things in the course of our day’s activities, be left empty. What is that? Well, I perceive that the reason we fail to connect is that we objectify people and treat them as less than they are.

Human beings are made imago dei -- in the image of God. We were created to interact and communicate; we were made to live in community and not in isolation. To be human is to share in the common human experience, and to live in such a way that we acknowledge one another, and not allow our many distractions to detract from how we relate to one another.

Monday, 18 June 2018

White Lies – Malevolence or Defence?

Little White Lies, by e9Art
Posted by Christian Sötemann
A little thought experiment: In the year 2088, a mentally highly volatile leader of an autocratic world power is undergoing yet another personal crisis. His wife, so he has heard, is secretly planning to leave him. Without her, he sees no meaning in going on. Since he is also a narcissistic megalomaniac, in his dark mood, he decides that the world should perish if she left him. He prepares to give the order for a nuclear strike and confronts his wife on her secret plans.
Now, what would be a wise thing for her to answer, even if she actually planned on leaving him? Surely, most people would say something along those lines: Calm him down, say that everything is fine, just get him away from ordering a nuclear strike. The rest will be sorted out later. Hence she should lie to save the world from a nuclear attack.

That’s that then, right? Not so fast. In ethics, the role of the lie has been hotly debated. Among ethical stances, there are some which emphasise the consequences of an action in determining whether it is moral or not. Many supporters of these approaches would probably have few issues with the wife’s lie. The argument would go like this: lying in this particular case prevents unfathomable damage occurring to millions of people, so it is the right decision.

There are, however, perspectives in ethics that focus more on principles and duties rather than consequences of actions, notably in Kant’s categorical imperative: ‘Act only according to that maxim by which you can at the same time will that it should become a universal law’. From this point of view, in its strictest form, a lie cannot ever be legitimate, because human relationships would become poisoned if everybody lied to each other all the time.

In many cases, there is some validity to that principle. We have to be able, at least most of the time, to trust what people around us tell us. The lie has to be the exception rather than the rule. Our everyday life would be seriously impaired if we all lied to each other all or most of the time. 

Still, there is a point to be made for white lies. Schopenhauer viewed lies as a legitimate form of self-defence in cases of extortion, threat or unauthorised interference or intrusion, among other things. If I am exposed to an evil will, lying can be part of the arsenal to defend myself.

For example, if somebody broke into my house, thus violating my right to privacy, my exclamation telling the burglar that the police were already on their way would represent a perfectly legitimate lie to make this intruder leave my house as quickly as possible. Similarly, a child threatened by bullies on the way home from school might want to use the white lie that their parents or elder brother were just around the corner. There is no malevolent deceit in situations such as these.

It seems that the most important aspect here is that there is a predicament which can make a white lie a suitable means to an end. To avert a catastrophe or a crime, white lies can come into consideration. Besides, from this perspective, the ‘lie’ aspect of the white lie becomes less relevant – rather, it becomes one of several means to defend oneself. It is something one can do to get out of a dangerous situation.

The application of the categorical imperative in this case should therefore not denounce the white lie as harmful, but could be reformulated as: ‘In a dangerous situation threatening the physical and psychological integrity of an individual in an illegitimate way, every individual should have the right to undertake sufficient actions to avert this threat’.

In German, one translation of ‘white lie’ is Notlüge, meaning, literally, ‘emergency lie’. Perhaps this serves to illustrate some cases in which a white lie seems appropriate. It is something that is more a verbal form of defence rather than a mere lie.

Certainly, it would be harmful to lie all of the time. And it can be harmful never to lie at all. A Kantian might counter that this weighs the consequences of actions rather than holding to a principled stance regardless of what happens afterwards — yet that objection, too, can be addressed.

But it represents another example of morality not necessarily being beholden to one orthodoxy throughout.  We may consider principles as well as consequences in our moral deliberations. There is something to be found between the extremes of rigidity and arbitrariness. So, we should not blame the dictator’s wife for her white lie. Those living in the year 2088 will be grateful for our leniency.

Monday, 11 June 2018

BOOK REVIEWS: Back to the Future with the Food Gathering Diet

Posted by Martin Cohen*

Back to the Future with the Food Gatherers Diet

How we imagine hunting and gathering - in this case, on the South Texas Plains

Food Sanity: How to Eat in a World of Fads and Fiction
By David Friedman (Turner 2018).

Psst! Maybe someone should have told David Friedman, well-known media personality as well as the author of this new look at food issues – there are hardly any vegans. So if you pitch a book on 'how to eat' to that crowd, you take the risk of ending up preaching to a much reduced congregation. Add to which, the serious vegans in town won't like some of what Friedman has to say, because vegans don’t eat eggs and certainly don’t eat fish. All of which only goes to show that food is a pretty controversial and divisive issue these days, and if you want to be honest, as Friedman evidently does, you're going to have to risk trampling on the dearly held, indeed dearly munched, beliefs of lots of people.

But I hope Food Sanity does find that wider readership, because I’ve read a lot of books and articles recently about food and this one really does clear out a lot of the deadwood and present some pretty mind-boggling facts (and figures) to ‘put the record straight’, as Jack Canfield (of Chicken Soup for the Soul fame) puts it, by way of an endorsement of the book.

Take one opening salvo, that as I say, will surely lose Friedman lots of readers in one fell swoop: the Paleo or ‘Caveman’ Diet. This is probably the most popular diet going and that’s likely because it fits so excellently people’s dearly held prejudices. Plus, it allows them to eat lots of beef-burgers and chips, while cutting out things like muesli which only hippies eat anyway. But oh no, Friedman has done his research and found out that Stone Age folk didn’t really eat lots of red meat washed down with a beaker of blood, as we like to imagine. Instead, using both archaeological and anthropological research as a guide, he says that the earliest human tribes spent most of their time eating fruits and seeds, which they gathered, and probably only really sharpened the spears (or so, at least, I imagine) for internecine human disputes.

Friedman finishes his deconstruction of Paleo with a consideration of human biology too: notably the fact that we just aren’t built to catch our fellow animals. We lack the right claws, teeth and general physique. He points out, a thing curiously overlooked, that Stone Age people would have been rather short and squat - not the fine figures wielding clubs that we imagine. He retells Jared Diamond’s tale of a hunting trip by one of today’s last remaining ‘stone age’ tribes, in New Guinea. At the end of the hunt, the tribe had caught only some baby birds, frogs and mushrooms.

This is all fascinating to me, but compelling too are Friedman’s physiological observations, most particularly on the acidity of the human stomach. The gastric fluids of carnivores are very acidic (pH 1), which is essential if they are to break down the proteins and to kill bacteria. Our stomachs, however, are much less acidic (pH 5), and simply can’t tolerate much uncooked meat. And if, yes, Stone Age man might have done a bit of cooking, it would probably have been rather rudimentary with parts of the meat not really cooked.

Actually, by the time I had finished reading all of the reasons that ‘humans can't eat meat’, I was left puzzled by Friedman’s conclusion, which was that a significant proportion of the prehistoric human diet (nonetheless) seems to have been meat. Less surprising was Friedman’s hearty endorsement of eggs, which surely everyone has heard by now are really not dangerous, and don’t cause heart attacks after all, and fish, which he carefully defends from claims that they are today dangerously contaminated with things like mercury.

However, dairy gets the thumbs down, with a disdain that I personally felt was unjustified. Dairy, after all, is much more than drinks of cow’s milk - it is goat and sheep milk, cheese and cream too - and an inseparable part of many dishes. We are advised here instead to swap to things like ‘almond milk’ and ‘hemp milk’, but I know these substitutes very well, and, well, they ain’t one. At least Friedman doesn’t try to suggest we switch to soya milk because, as he rightly observes, that is a food disaster just in itself.

There is, to be honest, a bit too much bad news in this book - so much so that I started to skip some sections, which fortunately the book’s modular structure permits. On the other hand, Friedman makes an effort to leaven the mix by including some good news and positive suggestions, including a two-page table of the healthiest foods on earth. What are they? They're all fruits and veggies - the things that Plato and Pythagoras were praising and recommending some two and a half thousand years ago. It seems that it’s time, if not indeed long overdue, to go back to following their advice.

*Martin Cohen is the author of a forthcoming book on food issues too called I Think Therefore I Eat, which is also published by Turner, and due out in November 2018

Monday, 4 June 2018

Picture Post #36 A postcard from Taroudant

'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'

Posted by Tessa den Uyl and Martin Cohen

A postcard from Taroudant, Maroc

One piece of advice offered is to lower the gaze, to not allow it to dwell, as if the eye serves distraction.

The woman seated in front of the painting is possibly homeless. Her posture dissolves into the two figures on the wall, characterised by their carved-out eyes, and urges us to imagine where this woman can put her gaze.

Eyes and hearts, their combination invites a myriad of symbolic attributions. One of them is that a woman with her eyes can reach the man in his heart. The carved-out eyes suggest that women, even when veiled, still look (and distract), which they should not... Or is the image saying something quite different, that the time for women to be veiled is consigned to history and that these days we can 'forget about the eyes’?

An eye is connected with light, and light with reflection. The ‘seduction’ begins with the question of where the reflection should pose its attention.

Monday, 28 May 2018

Occam's Razor: On the Virtue of Simplicity

As a Franciscan monk, William placed simplicity at the heart of his daily life.
Posted by Keith Tidman

The English philosopher and monk, William of Occam (c. 1287–1347), surely got it about right with his ‘law of parsimony’, which asserts, as a general principle, that when there are two competing explanations or theories, the one with the fewest assumptions (and fewest guesses or variables) is more often to be preferred. As the ‘More than Subtle Doctor’ couched the concept in his Summa Logicae, ‘It is futile to do with more what can be done with fewer’ — itself an example of ‘economy’. William’s law is typically referred to as Occam’s razor — the word ‘razor’ signifying a slicing away of arguably unnecessary postulates. In many instances, Occam’s razor is indeed right; in other examples, well, perhaps not. Let’s explore the ideas further.

Although the law of parsimony has always been most closely associated with William of Occam (Occam, now usually spelled ‘Ockham’, being the village where he was born), he hasn’t been the principle’s only proponent. Just as famously, a millennium and a half earlier, the Greek philosopher Aristotle said something similar in his Posterior Analytics:
‘We may assume the superiority ceteris paribus [other things being equal] of the demonstration which derives from fewer postulates or hypotheses.’
And seven centuries after William, Albert Einstein, perhaps thinking of his own formulation of special relativity, noted that ‘the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible’. Many other philosophers, scientists, and thinkers have also admired the concept.

Science’s favoritism toward the parsimony of Occam’s razor is nowhere more apparent than in the search for a so-called ‘theory of everything’ — an umbrella theory unifying harmoniously all the physical forces of the cosmos, including the two cornerstones of 20th-century physics: the general theory of relativity (describing the macro scale) and quantum theory (describing the micro scale). This holy grail of science has proven an immense but irresistible challenge, having occupied much of Einstein’s life, as it has the imagination of other physicists. But the appeal to scientists is in a unified (presumed final or all-encompassing) theory being condensed into a single set of equations, or perhaps just one equation, to describe all physical reality. The appeal of the theory’s potential frugality in coherently and irreducibly explaining the universe remains immense.

Certainly, philosophers, too, often regard parsimony as a virtue — although there have been exceptions. For clarity, we must first note that parsimony and simplicity are usually, as a practical matter, considered one and the same thing — that is, largely interchangeable. For its part, simplicity comes in at least two variants: one concerns the number and complexity of kinds of things hypothesised, and is sometimes referred to as ‘elegance’ or ‘qualitative parsimony’; the second concerns the number and complexity of individual, independent things (entities) hypothesised, and is sometimes referred to as ‘quantitative parsimony’. Intuitively, people in their daily lives usually favor simpler hypotheses; so do philosophers and scientists. For example, we assume that Earth’s gravity will always apply rather than its suddenly ceasing — that is, rather than objects falling upward unassisted.

Among the philosophers who weighed in on the principle was Thomas Aquinas, who noted in Summa Theologica in the 13th century, ‘If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices.’ And the 18th-century German philosopher Immanuel Kant, in the Critique of Pure Reason, similarly observed that ‘rudiments or principles must not be unnecessarily multiplied.’ In this manner, philosophers have sometimes turned to Occam’s razor to criticise broad metaphysical hypotheses that purportedly include the baggage of unnecessary ontological concepts. One theory to fall under such criticism, via the application of Occam’s razor, is Cartesian dualism, which physicalists argue is flawed by an extra category — that is, the notion that the mind is entirely apart from the neuronal and synaptic activity of the brain (the physical and mental purportedly being two separate entities).

Returning to Einstein, his iconic equation, E = mc², is an example of Occam’s razor. This ‘simple’ mathematical formula, which had more-complex precursors, has only two variables and one constant, relating (via conversion) the amount of energy to the amount of matter (mass) multiplied by the speed of light squared. It allows one to calculate how much energy is tied up in the mass of any given object, such as a chickpea or granite boulder. The result is a perfectly parsimonious snapshot of physical reality. But simplicity isn’t always enough, of course. There must also be consistency with the available data, with the model necessarily accommodating new (better) data as they become available.

Other eminent scientists, like the 17th-century physicist and mathematician Isaac Newton, similarly valued this principle of frugality. The first of Newton’s three ‘rules of reasoning in philosophy’ expressed in his Principia Mathematica offers:
‘We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. . . . Nature is pleased with simplicity, and affects not the pomp of superfluous causes.’
But, as noted above, Occam’s razor doesn’t always lead to truth per se. Nor, importantly, does the notion of ‘simplicity’ necessarily equate to ease of explanation or ease of understanding. Here are two examples where frugality arguably doesn’t win the day. One theory presents a complex cosmological explanation of the Big Bang and the physical evolution of a 13.8-billion-year-old universe. A single, though very late-arriving, thread of that cosmological account is the intricate biological evolution of modern human beings. A second, creationist explanation of the current universe and of human beings — with far fewer assumptions and hypotheses — describes both as having roots in a single event some 6,000 to 10,000 years ago, with the cosmos conveniently made to look older. Available evidence suggests, however, that the first explanation is correct, despite the second explanation’s parsimony.

In broad ways, Occam’s razor has been supported by the empirical successes of theories that proved parsimonious in their explanations: with fewer causes, entities, properties, variables, and processes embedded in fewer assumptions and hypotheses. However, even though people tend instinctively and understandably to be drawn toward simpler accounts of hoped-for reality, simplicity hasn’t always triumphed. For example, the earlier nature-versus-nurture debate posed a simpler, albeit false, either-or dichotomy in trying to understand a person’s development and behaviour on the basis of either the environment — the influence of external factors, such as experience and learning, on an otherwise blank slate or perhaps set of instincts — or genes and heritability — that is, biological pre-wiring. Reality is, of course, a complex mix of both nature and nurture, with one influencing the other.

To avoid such pitfalls, as the English mathematician and philosopher Alfred North Whitehead pointedly (and parsimoniously) suggested:
‘. . . every natural philosopher should seek simplicity and distrust it.’

Monday, 21 May 2018

‘Purposeful Living’ Through Grief

Rainy Night In The City, by Alina Madan. Poster: Giclee Print
Posted by Lina Ufimtseva
Grief is like a rude neighbour in the night, knocking at your mind’s door at all kinds of inopportune moments.  Hush, you want to tell it, go away, let me sleep.  But not only is grief rude in its all-encompassing demands for attention, it also is disobedient, and stubbornly stays.  Often, for years.
I am stirring a pot of soup on the stove, and I switch it off.  The boiling liquid quickly settles, and the rolling of the surface stops.  ‘Just like my mother's blood,’ I think instinctively.  Her blood stopped moving, too. ‘Just so,’ I think, ‘a loved one's life can slip away, unceremoniously.’ And so, in the sudden memory which the soup brings back, grief stands rudely knocking.  Go away, go away.

Time allows for the body to regenerate and to heal, provided it is not put under more stress.  Years later, one may feel the strain in a joint from an old injury, but it will often be no more than a lingering nuisance.  Grief, on the other hand, can hit one like a train, no matter how much time has passed since tragedy struck. Why is emotional pain more difficult to bear than physical pain? 

The brain uses a single neural system to detect and feel pain.  The anterior insula cortex and the anterior cingulate cortex are responsible for detecting pain, regardless of whether it is of a physical or emotional nature.  Even painkillers may numb emotional pain temporarily.  But they don’t help in healing.

This raises the question: why does emotional pain not heal as physical pain does?

Asked how her labour went, a mother may underplay the experience and reply that it was ‘painful’ or ‘a lot of pressure’.  Yet mothers who have lain in agony giving birth will voluntarily unleash the same process upon their bodies again and again.  Physical pain lingers only as an awareness that it was indeed, at one time, painful.

Grief, however, has the unique ability to reiterate itself at the most seemingly random moments.  Therein lies a clue.  If we want physical pain to leave our bodies—assuming, as is usually the case, that it affects only a certain limb or area of the body—we may use a crutch to prevent too much strain, say, on a leg.  But how does one rest from grief?

Generally one does not.

Our brains process the pain of grief in a non-linear manner.  Physical trauma leaves scars—smooth scars.  Emotional pain creates what I would call neural scabs of sorts that can be—and often will be—picked at, voluntarily or not.

The psychologist Thomas Crook has noted:
‘Indeed, when brain imaging studies are done on people who are grieving, increased activity is seen along a broad network of neurons.  These link areas associated not only with mood but also with memory, perception, conceptualization, and even the regulation of the heart, the digestive system, and other organs.  This shows the pervasive impact loss or even disappointment can have.’
Grief affects the neural pathways in a far more pervasive and ineluctable manner than physical pain does.  Emotional pain, like a scab, can all too easily be picked open by the casual scratch of an old memory, and the blood of grief starts pouring again.

Those who have been severely distraught by their circumstances often come to the conclusion that the greater meaning in life lies not in seeking happiness and hedonism, but in creating a purposeful living.  The word choice here—‘a purposeful living’ rather than ‘a purposeful life’—is itself deliberate.  Meaning is not stagnant.  One cannot create a purposeful life and leave it at that.  Purpose must continue to be lived out, to be striven for, to continue in some kind of endeavour.

Purpose without struggle often loses its meaning.  In this light, grief can be given a purpose.  Severe emotional pain can be the catalyst to re-evaluate one’s values, choices, and path in life.  It can be one’s very own personal, as well as professional, springboard.

Do you wish to leap into the bounds of further despair?  Go ahead, and grief will get you there.  Do you wish to see an armour around yourself unveiled?  Go ahead, and grief can give you the thickest skin and the thinnest heart you ever imagined.

Grief can and will redefine who you thought you were.  Can you hear it knocking?

Monday, 14 May 2018

African Propaganda In a Nutshell

Posted by Sifiso Mkhonto
Change is happening all over the world. It is impossible to stand still. Yet as we change, there are those who would wish to influence that change—some in a positive and some in a negative way. My intention is to focus on invidious change that others seek to bring about through propaganda. Specifically, in Africa.
Propaganda is biased and misleading, and intends to shape perceptions, manipulate cognitions, and direct behaviour. The Oxford Dictionary of Philosophy defines propaganda as ‘the active manipulation of opinion by means that include distortion or concealment of the truth.’ It usefully distinguishes between ‘agitation propaganda’, which seeks to change attitudes, and ‘integration propaganda’, which seeks to reinforce existing attitudes.

Africa has been the victim of both agitation propaganda and integration propaganda—and while propaganda anywhere in the world may share the same characteristics, I here offer examples which are characteristically African, of which Africans are primarily aware—or ought to be. Mark Nichol, a writer, offers these four useful descriptions of propaganda, from which I develop my analysis:
An appeal to prejudice, or the black-and-white fallacy. Africa is a place of unusually stark contrasts, historical, cultural, social, and geographical. Politicians and religious leaders exploit this by presenting only two alternatives, one of which is identified as undesirable. They do so to exploit an audience’s desire to believe that it is morally or otherwise superior. However, the goal is the pleasure of the propagandists, regardless of whether the victim is in poverty or has riches.

An appeal to fear. Africa still wrestles with fundamental issues, more so than other regions of the world, so that it faces many fears and uncertainties. Propagandists exploit fear and doubt, disseminating false or negative information, to undermine adherence to an undesirable belief or opinion. They do so to exploit audience anxieties or concerns through fear of political identity, gender, race, tribes, and religious or traditional practices.

Half-truths. Governments and political parties in Africa tend to be secretive about information, which may further be difficult for the public to access. Knowing the full truth, they still make statements that are partly true or otherwise deceptive, to further their own agenda. The government often disguises this as a matter of national security, so that the full truth lies under a veil of secrecy.

Obfuscation and glittering generalities. In Africa, the spoken word may have priority over the written word, so that it is received personally, not critically. Propagandists resort to vague communication and word prejudices intended to confuse the audience as it seeks to interpret the message. In South Africa, the ruling party has in each election campaign used this method to continue holding power. It tells the story of apartheid history and how its injustices ought to be fixed—yet holds that they may only be fixed if each person votes in remembrance of the leaders who fought the apartheid system.
Where does the solution lie? It surely lies in our personal choice, as to whether to accept or reject what we see, read, and hear. Our identity and its underlying attitudes are changed over time, through those choices that we make—and our ideology, which is the consequence of what we were and are exposed to, often plays a crucial role in shaping our perception of what is truth and propaganda.

As individuals, we need to examine our judgements of information at the bar of mature reasoning, in order to avoid judging amiss and believing the propaganda. If we continue to fail this test, propaganda will prevail, allowing biased popular opinion to override the judgement of the minority. This then infringes on a right we all have, or ought to have—freedom of speech.

The theologian Isaac Watts gives us this timely advice:
‘When a man of eloquence speaks or writes upon any subject, we are too ready to run into his sentiments, being sweetly and insensibly drawn by the smoothness of his harangue, and the pathetic power of his language. Rhetoric will varnish every error so that it shall appear in the dress of truth, and put such ornaments upon vice, as to make it look like virtue: it is an art of wondrous and extensive influence: it often conceals, obscures, or overwhelms the truth and places sometimes a gross falsehood in a most alluring light.’ 
Let us use logic as the measure of reasoning and sharing information, not the biased opinion of an eloquent man.

Tuesday, 8 May 2018

Picture Post #35: The House Number

'Because things don’t appear to be the known thing; they aren’t what they seemed to be, nor will they become what they might appear to become.'

Posted by Thomas Scarborough

Mountain View township, South Africa

House numbers: laser-cut aluminium, cast-iron plaques, illuminated perspex, oil-rubbed bronze, stencils and paint, carvings in wood. These not only identify the house, but reveal the occupant.

The number on a front door in an African township. We are immediately impressed by the attitude it expresses: a bold and careless statement that this is no. 1251, so put that in your pipe and smoke it.

Monday, 30 April 2018

Is There a Rational Basis For Human Compassion?

By Thomas Scarborough
Søren Kierkegaard wrote that Immanuel Kant’s moral philosophy was ‘utterly without grace’. It was a fierce condemnation of Kant.
Kant favoured autonomy—which is defined as the capacity of an agent to act in accordance with objective morality rather than under the influence of desires. Today this is a view which, by and large, drives all of our ethical thinking. The problem, in Kierkegaard’s eyes, was that it lacked compassion. This is true. We place great emphasis on civil rights, the rule of law, social norms, and so on, while compassion is not comfortably accommodated in the scheme. How may it be possible to bridge the gap—rationally? This is the subject of this post.

Ethics is a very human thing. Regardless of the intellectual debate, or the final framing of our ethics private or public, it always originates in the human person. It is, above all, a person's formation of a certain outlook on the world. Aristotle thought of ethics as ‘the golden mean’—the balanced life—where the ‘mean’ is defined as a quality or action which is equally removed from two opposite extremes. Thus ethics represents the achievement of a balance in the human person—between economic and social goals, individual and communal goals, unity and diversity, novelty and tradition, thought and feeling, and so much more. This is our starting point in this post—that ethics is about balance—though the available space unfortunately denies us room to develop the theme further.

In order to develop the ‘golden mean’, then, it stands to reason that we should weigh a great number of opposites in our minds, not to speak of variations, one against the other. The scope of this is important here: as we do so, we typically have as our goal to balance the world around us, no more and no less. I should say, I have as my goal to balance the world around me—in my own individual mind—so as to develop (I should hope) a balanced outlook on my world. This is true—but it is simplistic. It is a more nuanced view of the process which should help us to open up our ethical thinking to human compassion.

I live in a world of others—tens, thousands, millions, in fact billions of others. As soon as I take these others into account, not merely as numbers, entities, or abstractions, I open up some important considerations. Each of these others carries in their own mind an evaluation of the world—without which my own evaluation of the world cannot be complete. It matters a great deal, not merely that others exist in my world, but that they each arrange the world in their own particular way. Therefore, in a sense, we now have uncountable worlds within a world. It is easy to overlook this. These others perceive things, assess things, plan things, and act upon things which are of critical importance to that ‘golden mean’ which Aristotle spoke about. Perhaps this much goes without saying.

However, this now introduces a quantum leap of complexity to my task of arranging my world, since now I must combine their worlds with mine—tens, thousands, even millions of worlds in other people’s minds. Then, too, this all has to do with semiotic codes, which are the means through which others reveal their own arrangement of the world—codes that are all too often all but inscrutable. A smile, a jig, a nod of the head—candles on the table, or a hush in the hallway—President Kennedy's visit to West Berlin, the Bomb under Mururoa, the public appearances of Her Majesty the Queen, and a host of so-called ‘interpretative devices’. In order to have some command of such things, I need to have an intimate ‘feel’ for others.

The existence of others in my world—further, the existence of their worlds within my world, and the ways in which they communicate their worlds with me—means that ethics may often come down to something all too human. I now need to be sensitive to the expressions, gestures, and postures of others, and a great variety of semiotic codes besides—not to speak of the sufferings, desires, and hopes which lie behind them. I need to understand—to borrow a term from the polymath Thomas Browne—‘the motto of our souls’. This represents a rapport which rests to a very large extent on a careful, sensitive reading of the many others involved in my world, whether this involvement is direct or indirect. Thus we incorporate personal rapport in a rational ethics—which is human compassion.

Monday, 23 April 2018

Metaphysics: Does It Control Us?

Posted by Tom Johnson *

The Starry Night, by Vincent van Gogh, 1889

‘We are all our own metaphysicians,’ wrote the philosophers Godfrey Vesey and Paul Foulkes. We all have our first philosophies. These are the world views which we live by, even when we have not deliberately or carefully worked them out.

Tom Johnson was, at the time that he penned the notes below, a Western graduate and an intern in Africa. He wrote these notes for a supervisor, immediately following a period of burnout. The significance of the notes is that they reveal a close relationship between his burnout and his religious-metaphysical outlook (in his words, his ‘view of God’ and his ‘ideology’). While the notes may suggest an introverted intern, in fact he held a very public position. He finally received a positive report of his internship:
What were the signs that I was headed for a crisis? I break down my experience into five categories: my mental, emotional, and physical health, my patterns of activity, and what I shall call ‘spirituality’.

• Mental: I was ‘unhappy’. More than that, I was just going through the motions of day to day work, looking forward to when I would be finished in Africa, and hoping that things did not get any worse before I left. I was anxious that someone might call me out on my weakness or lack of fervour, or that I would ‘get into trouble’ for not being the type of person I should be. I felt guilty for being here, and not taking better advantage of the situation, not being more disciplined in coming to understand this culture and context better.

• Emotional: I have come to understand my stress as being manifested in different yet related emotions: among them fear, guilt, shame, depression. That is, it was these emotions which dominated my state of mind. Strangely, I am unable to recognise that I have felt this way until I manage to find my way out, and can only look back in retrospect: ‘Yes, I was feeling that way.’ It always seems to come down to a critical moment of intense anxiety. Then something changes, and I feel much better.

• Health: I had been feeling drained of energy, as though, for months, I had a head cold. Occasionally I would wake up with congested sinuses or a sore throat only to have it fade during the day. Most significantly, I was suffering headaches very often. For two consecutive months, I suffered a headache almost daily. Because of this, I was unable to exercise, as too much physical strain just worsened things, and so my health suffered. Inevitably, this made my tasks more cumbersome, too.

• Patterns of activity: I began to isolate myself from friends, started shutting myself in almost completely. I found social interaction to be very difficult. It takes a great deal of energy to go out to a situation where I will be forced to ‘fake it’ and make pleasant conversation. If I did go into social situations, I was submissive and acquiescent, always agreeable, often doubting myself. In keeping with this, I completed my assignments with minimal effort or thought.

• Finally, and perhaps most importantly, the spiritual aspect of things. Prior to burnout, I seemed to take a more authoritarian view of God. That is, I viewed God as the divine task master who told me what to do. Of course, I always fall short of this ‘god's’ expectations, and this only adds to the shame and guilt. I generally feel embarrassed about what I believe at all. I am reluctant to speak of my faith, and become anxious where others are talking about what they believe, or are confronting me on what I believe.

I began to loathe my faith, and wished that there was no God, simply so I could be free from all the fear, guilt, and shame.  I wondered whether this was all related to faith, or whether it was just part of growing to understand who I am and what I believe. I have come to accept that a major cause of my stress is when I am going through significant life changes, or changes of ideology. At this point I find it extremely difficult to commit to one set of beliefs over another, and it seems I am very easily swayed.

This leaves me constantly doubting myself and seeking to take shelter in other people's ideas and ways of being. What I crave is the ability to commit to one belief system, and the confidence to stand by those beliefs without being tempted to jump ship and view it from another perspective again.  

Thus Tom's notes end with a religious-metaphysical reflection, which significantly receives the greatest space, in fact appears to suffuse all earlier sections of his notes. In spite of Tom not being dogmatic about his faith in God, and distancing himself from this ‘god’ (uncapitalised), even wishing that his god did not exist, he is clearly deeply motivated by his faith.  This in itself is interesting. Our religious-metaphysical world view need not be settled in order to dominate us.

Apart from his religious beliefs, a ‘major cause’ of Tom's stress is ‘ideology’, or a ‘set of beliefs’ where there may not be a necessary connection with his belief in God's existence. That is, a basic belief in God does not release him from ideological or metaphysical struggles, or their consequences.

Bearing in mind that this is a single, simple case study, our ‘first philosophies’ may indeed have a profound effect on our mental, emotional, and physical health, our patterns of activity, and our ‘spirituality’. It seems that, yes, our metaphysics controls us.

* Tom Johnson is a pseudonym. His notes are used with permission.

Monday, 16 April 2018

'Evil': A Brief Search for Understanding

In medieval times, evil was often personified in not-quite-human forms

Posted by Keith Tidman

Plato may have been right in asserting that “There must always be something antagonistic to good.” Yet pause a moment, and wonder exactly why. And what is it about ‘evil’ that means it can be understood and defined equally from both religious and secularist viewpoints? I would argue that fundamental to an exploration of both these questions is the notion that for something to be evil, there must be an essential component: moral agency. And as to this critical point, it might help to begin with a case where moral agency and evil arguably have converged.

The case in question is the repeated use of chemical weapons in Syria, made all too real recently. Graphic images of gassed children, women, and men, gasping for air and writhing in pain, have circulated globally and shocked people’s sense of humanity. The efficacy of chemical weapons against populations lies not only in the weapons’ lethality but — just as distressingly and perhaps more to the weapons’ purpose — in the resulting terror, shock, and panic, among civilians and combatants alike. Such use of chemical weapons does not take place, however, without someone, indeed many people, making a deliberate, freely made decision to engage in the practice. Here is the intentionality of deed that infuses human moral agency and, in turn, gives rise to a shared perception that such behaviour aligns with ‘evil’.

One wonders what the calculus was among the instigators (who they are need not concern us, much as it matters from the political standpoint) to begin and sustain the indiscriminate use of chemical weapons. And what were the considerations as to whom to 'sacrifice' (the question of presumed human dispensability) in the name of an ideology or quest for simple self-survival? Were the choices viewed and the decisions made on ‘utilitarian’ grounds? That is, was the intent to maim and kill in such shocking ways to demoralise and dissuade the insurgency’s continuation (short-term consequences), perhaps in expectation that the conflict would end more quickly (longer-term consequences)? Was it part of some larger geopolitical messaging between Russia and the United States? (Some even claim the attacks were orchestrated by the latter to discredit the former...)

Whatever the political scenario, it seems that the ‘deontological’ judgement of the act — the use of chemical weapons — has been lost. This, after all, can only make the use utterly immoral irrespective of consequences. Meanwhile, world hesitancy or confusion fails to stop another atrocity against humanity, and the hesitancy itself has its own pernicious effects. The 19th-century British philosopher John Stuart Mill underscored this point, observing that:
“A person may cause evil to others not only by his actions but by his inaction, and in either case he is justly accountable to them for the injury.”
Keeping the preceding scenario in Syria in mind, let’s further explore the dimensions of rational moral agency and evil. Although the label ‘evil’ is most familiar when used to qualify the affairs of human beings, it can be used more widely, for example in relation to natural phenomena. Yet I focus here on people because, although predatory animals, for example, can and do cause serious harm, even death, I would argue that the behaviour of animals more fittingly falls under the rubric of ‘natural phenomena’, and that only humans are truly capable of evil.

As one distinction, people can readily anticipate — project and understand — the potential for harm, on an existential level; other species probably cannot (with research continuing). As for differentiating between, say, wrongdoing and full-on evil, context is critical. Another instantiation of evil is history’s many impositions of colonial rule, as practised in all parts of the world. Colonial rule not uncommonly oppressed its victims, in all manner of scarring ways: sowing fear and injustice, stripping away human rights, inflicting physical and emotional pain, and destroying indigenous traditions.

This tipping point from wrongdoing — from, say, someone under-reporting taxable income or skipping out on paying a restaurant bill — into full-on evil is made evident in these additional examples. These are deeds that run the gamut: serial murder that preys on communities, terrorist attacks on subway trains, genocide aimed at helpless minority groups, massacres, enslavement of people, torture, abuses of civilians during conflicts, summary executions, and mutilation, as well as child abuse, rape, racism, and environmental destruction. Such atrocities happen because people arrive at freely made choices: deliberateness, leading to causation.

These incidents, and their perpetrators (society condemns both doer and deed), aren’t just ‘wrong’, or ‘bad’, or even ‘contemptible’; they’re evil. Even though context matters and can add valuable explanation — circumstances that mitigate or aggravate deeds, including instigators’ motives — rendering judgements about evil is still possible, even if occasionally tenuously. So, for example, mitigation might include being unaware of the harmful consequences of one's actions, well-meaning intent that unpredictably goes awry, the pernicious effects of a corrupting childhood, or the lack of empathy of a psychopath. Under these conditions, blame and culpability hardly seem appropriate. Aggravation, on the other hand, might involve the deliberate, cruel infliction of pain and the pleasure derived from it, such as might occur during the venal kidnapping of a woman or child.

As for a religious dimension to moral agency, such agency might be viewed as applying to a god, in the capacity as creator of the universe. In this model of creation, such a god is seen as serving as the moral agent behind what I referred to above as ‘natural evil’ — from hurricanes, earthquakes, volcano eruptions, tsunamis, and droughts to illnesses, famine, pain, and grief. They of course often have destructive, even deadly, consequences. Importantly, that such evil occurs in the realm of nature doesn’t award it exceptional status. This, despite occasional claims to the contrary, such as the overly reductionist, but commonplace, assertion of the ancient Roman emperor-philosopher Marcus Aurelius:
 “Nothing is evil which is according to nature.”
In the case of natural events, evil may be seen as stemming not from intentions but only from the consequences of such phenomena — starvation, precarious subsistence, homelessness, broken-up families, desolation, widespread chronic diseases, rampant infant mortality, breakdown of social systems, malaise, mass exoduses of desperate migrants escaping violence, and gnawing hopelessness.

Such things have prompted faith-based debates over evil in the world. Specifically, if, as commonly assumed by religious adherents, there is a god that’s all-powerful, all-knowing, and all-benevolent, then why is there evil, including our examples above of natural evil? In one familiar take on theodicy, the 4th-century philosopher Saint Augustine offered a partial explanation, averring that:
 “God judged it better to bring good out of evil than to suffer no evil to exist.” 
Other philosophers have asserted that the absence of evil, where people could only act for the good (as well as a god’s supposed foreknowledge of people’s choices), would a priori render free will unnecessary and, of note, leave choices predetermined.

Yet, the Gordian knot remains untied: our preceding definition of a god that is all-powerful and all-benevolent would rationally include being able to, as well as wanting to, eliminate evil and the suffering stemming from it. Especially, and surely, in the framework of that god’s own moral agency and unfettered free will. Since, however, evil and suffering are present — ubiquitously and incessantly — a reasonable inquiry is whether a god therefore exists. If one were to conclude that a god does exist, then recurring natural evil might suggest that the god did not create the universe expressly, or at least not entirely, for the benefit of humankind. That is, that humankind isn’t, perhaps, central or exceptional, but rather is incidental, to the universe’s existence. Accordingly, one might presuppose an ontological demotion.

Human moral agency remains core even when it is institutions — for example, governments and organisations of various kinds — that formalise actions. Here, again, the pitiless use of chemical weapons in Syria presents us with a case in point to better understand institutional behaviour. Importantly, however, even at the institutional level, human beings inescapably remain fundamental and essential to decisions and deeds, while institutions serve as tools to leverage those decisions and deeds. National governments around the world routinely suppress and brutalise minority populations, often with little or no provocation. Put another way, it is the people, as they course through the corridors of institutions, who serve as the central actors. They make, and bear responsibility for, policies.

It is through institutions that people’s decisions and deeds become externalised — ideas instantiated in the form of policies, plans, regulations, acts, and programs. In this model of individual and collective human behaviour, institutions have the capacity for evil, even in cases when bad outcomes are unintended. Which affirms, one might note in addressing institutional behaviour, that the 20th-century French novelist and philosopher, Albert Camus, was perhaps right in observing:
“Good intentions may do as much harm as malevolence if they lack understanding.”
So, to the point: an institution’s ostensibly well-intended policy, for example freeing up corporate enterprise to create jobs and boost national productivity, may nonetheless unintentionally cause suffering, such as increased toxins in the soil, water, and air, affecting the health of communities. Here, again, it is effects, not only intentions, that produce bad outcomes.

At other times, however, the moral agency behind the decisions and deeds of institutions’ human occupants may aim intentionally at evil. Cases span the breadth of actions: launching wars overtly with plunder or hegemony in mind; instigating pogroms or killing fields; materially disadvantaging people on the basis of identities like race, ethnicity, religion, or national origin (the harsh treatment of migrants being a recent example); ignoring the dehumanising and stunting effects of child labour; showing policy disregard as society’s poorest elderly live in squalor; allowing industries to seep toxins into the environment for monetary gain. The examples are myriad. Institutions are not, therefore, simply bricks and mortar. They have a pulse, comprising the vision, philosophy, and mission of the people who design and implement their policies, benign or malign.

Evil, then, involves more than what Saint Augustine saw as the ‘privation’ of good: privation of virtuousness, equality, empathy, responsible social stewardship, health, compassion, peace, and so forth. In reality, evil is far less passive than Saint Augustine’s vision. Rather, evil arises from the deliberate, free making of life’s decisions, and from the choice to act on them, in clear contravention of humanity’s well-being. Evil is thus distinguished from the mere absence of good, and is much more than Plato’s insight that there must always be something ‘antagonistic’ to good. In many instances, evil is flagrant, as in our example of the use of chemical weapons in Syria; in other instances, it is more insidious and sometimes veiled, as in the corruption of government plutocrats invidiously dipping into national coffers at the expense of the populace’s quality of life. In either case, it is evident that evil, whether in its manmade or its natural variant, exists in its own right, and thus can be parsed and understood from both the religious and the secular vantage points.
