Monday, 9 December 2019

Is Torture Morally Defensible?

Posted by Keith Tidman

Far from being treated as unconscionable, torture today is all but universal: according to Amnesty International, some 140 countries resort to it, whether through domestic police, intelligence agencies, military forces, or other institutions. Incongruously, many of these countries are signatories to the United Nations Convention Against Torture, which forbids torture, whether practised domestically or outsourced to countries where torture is legal (by so-called ‘renditions’).

Philosophers too are ambivalent, conjuring up difficult scenarios in which torture seems somehow the only reasonable response:
• An anarchist knows the whereabouts of a powerful bomb set to kill scores of civilians.
• A kidnapper has hidden a four-year-old in a makeshift underground box, holding out for a ransom.
• An authoritarian government, feeling threatened, has identified the ringleader of swelling political street opposition, and wants to know his accomplices’ names.
• Soldiers have a high-ranking captive, who knows details of the enemy’s plans to launch a counteroffensive.
• A kingpin drug supplier, with his metastasized network of street traffickers, routinely distributes highly contaminated drugs, resulting in a rash of deaths...

Do any of these hypothetical and real-world events, where information needs to be extracted for urgent purposes, justify resorting to torture? Are there other cases in which society ought morally to consent to torture? If so, for what purposes? Or is torture never morally justified?

One common opinion is that if the outcome of torture is information that saves innocent lives, the practice is morally justified. I would argue that there are at least three aspects to this claim:
  • the multiple lives that will be saved (traded off against the fewer), sometimes referred to as ‘instrumental harm’; 
  • the collective innocence, in contrast to any aspect of culpability, of those people saved from harm; and
  • the overall benefit to society, as best can credibly be predicted with information at hand.
The 18th-century philosopher Jeremy Bentham’s famous phrase that ‘It is the greatest good for the greatest number of people which is the measure of right and wrong’ seems to apply here. Historically, many people have found, rightly or not, that this principle of the ‘greatest good for the greatest number’ rises to the level of common sense, as well as proving simpler to apply in establishing one’s own life doctrine than competing standards — such as discounting outcomes for chosen behaviours.

Other thinkers, such as Joseph Priestley (18th century) and John Stuart Mill (19th century), expressed similar utilitarian arguments, though using the word ‘happiness’ rather than ‘benefit’. (Both terms might, however, strike one as equally cryptic.) Here, the standard of morality is not a rulebook rooted in solemnised creed, but a standard based in everyday principles of usefulness to the many. Torture, too, may be looked at in that light, speaking to factors like human rights and dignity — or whether individuals, by virtue of the perceived threat, forfeit those rights.

Utilitarianism has been criticised, however, for its blunt ‘the ends justify the means’ mentality — an approach complicated by the difficulty of predicting consequences. In response, some ‘bills of rights’ have attempted to push back against the simple calculus of benefiting the greatest number. Instead, they advance legal positions aimed at protecting the welfare of the few (the minority) against the possible tyranny of the many (the majority). ‘Natural rights’ — the right to life and liberty — inform these protective constitutional provisions.

If torture is approved of in some situations — ‘extreme cases’ or ‘emergencies’, as society might tell itself — the bar might then be lowered elsewhere. As a possible fast track to remedying a threat — maybe an extra-judicial fast track — torture is tempting, especially when used ‘for defence’. The uneasiness, however, lies in torture turning into an obligation — shrouded in an alleged moral imperative, perhaps, to exploit a permissive legal system. This dynamic may prove alluring if society finds it expeditious to shoehorn ever more cases into the hard-to-parse category of ‘existential risk’.

What remains key is whether society can be trusted to make such grim moral choices — such as those requiring the resort to torture. This blurriness has propelled some toward an ‘absolutist’ stance, censuring torture in all circumstances. The French poet Charles Baudelaire felt that ‘Torture, as the art of discovering truth, is barbaric nonsense’. Paradoxically, however, absolutism in the total ban on torture might itself be regarded as immoral, if the result is the death of a kidnapped child or of scores of civilians. That said, there’s no escaping the reality that torture inflicts pain (physical and/or mental), shreds human dignity, and curbs personal sovereignty. To some, many even, it thus must be viewed as reprehensible and irredeemable — decoupled from outcomes.

This is especially apparent if torture is administered to inflict pain, terrorise, humiliate, or dehumanise for purposes of deterrence or punishment. But even if torture is used to extract information — information perhaps vital, as per the scenarios listed at the beginning — there is a problem: the information acquired is suspect, tales invented just to stop pain. Long ago, Aristotle stressed this point, saying plainly: ‘Evidence from torture may be considered utterly untrustworthy’. Even absolutists, however, cannot avoid being involved in defining what rises to the threshold of clearer-cut torture and what perhaps falls just below — grist for considerable contentious debate.

The question remains: can torture ever be justified? And, linked to this, which moral principles might society want to normalise? Is it true, as the French philosopher Jean-Paul Sartre noted, that ‘Torture is senseless violence, born in fear’? As societies grapple with these questions, they reduce the alternatives to two: blanket condemnation of torture (and acceptance of possible dire, even existential consequences of inaction); or instead acceptance of the utility of torture in certain situations, coupled with controversial claims about the correct definitions of the practice.

I would argue one might morally come down on the side of the defensible utility of the practice — albeit in agreed-upon circumstances (like some of those listed above), where human rights are robustly aired side by side with the exigent dangers, the potential aftermath of inertia, and the hard choices societies face.

Monday, 2 December 2019

Picture Post #51: Nobody Excluded

'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.' 

Paris, October 2019.
Picture credit: Olivia Galisson

Posted by Tessa den Uyl

Activists draw attention to global ecological devastation in front of the fountain of the Place du Châtelet. This monument was ordered by Napoleon in 1806 and built by the sculptor Boizot. It pays tribute to the victories achieved in battle, and reminds us of Napoleon’s decision to provide free drinking water to all Parisians.

Victories bring along statues, which serve historical commemoration -- though foremost, symbolically, they are built upon the idea of a future. A future that, seen from a once-upon-a-time perspective, might hardly have been imaginable as it would actually turn out.

The beginning of the world, like its end, is not new to our imagination. But things have changed. We have interfered too much in the flux of ecology, for profit. We might think we are smart, but how smart we truly are will have to be proven. For neither rage nor love might provide a statue to remember.

This planet does not care about our extinction. Though we are this planet -- for without it, we simply wouldn’t be. This is not new to our imagination. More recent, instead, is the question of whether our extinction is truly a problem, or whether we make it a problem because we have created a mess. This time, what is foreseen is that nobody is excluded.

Monday, 25 November 2019

Prosthetics of the Brain

Posted by Emile Wolfaardt

Some creatures are able to regrow lost limbs (like crayfish, salamanders, starfish and some spiders). As humans, we are not as advanced in that department. But we can create such limbs – conventional prosthetics – artificial limbs or organs designed to provide (some) return of function. Some replacements, like glass eyes, don’t even provide that – they don’t see better, they simply look better. But a new wave of smart prosthetics is busy changing all that.

Bionic eyes are surgically implanted, and connect with retinal neurons, recreating the transduction of light information back to the brain – so the brain can once again ‘see’. Bionic lenses provide augmented abilities, enabling eyes to see three times better than ‘perfect vision’. Bionic eyes will have all the abilities of modern visual technology – night vision, heat sensing, long-distance, infra-red and x-ray vision – and other augmented abilities. Likewise, other prosthetics will become smart, enhancing the human experience with augmented reality.

The latest innovation in prosthetics is the revolutionary addition of machine learning and AI. Here, the wave of change is going to be of tsunami proportions. Bioengineers are impressively pushing into this frontier, merging the human experience with superhuman abilities. The new field of development is the power of ‘smart brains’ – or neuro-mechanical algorithmic collaboration - where artificial intelligence, machine learning, and the human brain interface to create a brand-new human experience.

Neuro-mechanical algorithmic collaboration may sound like a huge tongue twister – but you already know what it means. Let’s parse it. Neuro- (of the brain), mechanical (of machines), algorithmic (all information, human or machine, is processed by way of algorithms), collaboration (working together). These BMIs (Brain Machine Interfaces) will become the norm of our future. What does that look like? The end result is the human brain having access to any and all information instantly, being able to share it with others seamlessly, and interpolating it into the situation appropriately.

For instance, a doctor in the middle of a surgery observes an unexpected bleed, instantly pulls up in his brain the last 20 occurrences of that bleed in similar situations, and is able to identify the likely cause and select the best solution. Or you and I could have this conversation brain to brain, without the use of telephones or devices - simply using brain to brain communication. While that seems like a huge concept, in one sense it is not very different to what we do all the time. We use technology – the cell-phone – to communicate thoughts from one brain to another brain. Imagine if we could use technology to negate the need for the cell-phone. That is brain to brain communication.

There is a rat in a cage at Duke University, in the USA. In front of him are two glass doors that cannot open. He has a probe in his brain that links to a computer. In Brazil, there is another rat with a similar probe in his brain. In front of him are two wooden doors that he cannot see through. Researchers then place a treat behind one of the glass doors in front of the rat in the USA, and his brain tells the rat in Brazil which door to open. That is brain to brain communication. Remove the probe (go wireless) and we have direct brain to brain communication.

There are many, many challenges before this can become a functional reality – but it is within sight. Amongst the biggest challenges are mapping the human brain sufficiently so we know which neurons to fire up, and creating a wireless connection with enough bandwidth to relay the enormous amount of information required to transmit even a single thought. We are making progress. Elon Musk is one of the innovators in this field. He is currently suggesting he can make changes to the brain to address Parkinson’s, Alzheimer’s, autism and other brain disorders.

Scientists can control the movement of a rat with a PlayStation-style remote control: have it climb a ladder, jump off a ledge that is higher than it would comfortably jump from, then inject endorphins into the rat’s brain to make the jump feel good.

Who knows – perhaps the opportunity lies ahead to correct socially disruptive behaviour, or criminal thinking? Would that be more effective than incarceration? Who knows - perhaps couples will be able to release endorphins into each other’s brains to establish a sense of bliss? Who knows – perhaps we will be able to enhance our brains so that our knowledge is infinite, our character impeccable, and our reality phenomenal? If so, we shall be able to create our own reality, a world in which we and others live in peace and happiness. We can have the life we want in the world we choose.

Who would not want that? Or would they?

Monday, 18 November 2019

Getting the Ethics Right: Life and Death Decisions by Self-Driving Cars

Yes, the ethics of driverless cars are complicated.
Image credit: Iyad Rahwan
Posted by Keith Tidman

In 1967, the British philosopher Philippa Foot, daughter of a British Army major and sometime flatmate of the novelist Iris Murdoch, published an iconic thought experiment illustrating what forever after would be known as ‘the trolley problem’. These are problems that probe our intuitions about whether it is permissible to kill one person to save many.

The issue has intrigued ethicists, sociologists, psychologists, neuroscientists, legal experts, anthropologists, and technologists alike, with recent discussions highlighting its potential relevance to future robots, drones, and self-driving cars, among other ‘smart’, increasingly autonomous technologies.

The classic version of the thought experiment goes along these lines: The driver of a runaway trolley (tram) sees that five people are ahead, working on the main track. He knows that the trolley, if left to continue straight ahead, will kill the five workers. However, the driver spots a side track, where he can choose to redirect the trolley. The catch is that a single worker is toiling on that side track, who will be killed if the driver redirects the trolley. The ethical conundrum is whether the driver should allow the trolley to stay the course and kill the five workers, or alternatively redirect the trolley and kill the single worker.

Many twists on the thought experiment have been explored. One, introduced by the American philosopher Judith Thomson a decade after Foot, involves an observer, aware of the runaway trolley, who sees a person on a bridge above the track. The observer knows that if he pushes the person onto the track, the person’s body will stop the trolley, though killing him. The ethical conundrum is whether the observer should do nothing, allowing the trolley to kill the five workers. Or push the person from the bridge, killing him alone. (Might a person choose, instead, to sacrifice himself for the greater good by leaping from the bridge onto the track?)

The ‘utilitarian’ choice, where consequences matter, is to redirect the trolley and kill the lone worker — or in the second scenario, to push the person from the bridge onto the track. This ‘consequentialist’ calculation, as it’s also known, results in the fewest deaths. On the other hand, the ‘deontological’ choice, where the morality of the act itself matters most, obliges the driver not to redirect the trolley because the act would be immoral — despite the larger number of resulting deaths. The same calculus applies to not pushing the person from the bridge — again, despite the resulting multiple deaths. Where, then, does one’s higher moral obligation lie; is it in acting, or in not acting?

The ‘doctrine of double effect’ might prove germane here. The principle, introduced by Thomas Aquinas in the thirteenth century, says that an act that causes harm, such as injuring or killing someone as a side effect (‘double effect’), may still be moral as long as it promotes some good end (as, let’s say, saving five lives rather than just the one).

Empirical research has shown that most people consider redirecting the runaway trolley toward the one worker the easier choice (a utilitarian basis), whereas they feel overwhelming visceral unease at pushing a person off the bridge (a deontological basis). Although both acts involve intentionality — resulting in killing one rather than five — it’s seemingly less morally offensive to impersonally pull a lever to redirect the trolley than to place hands on a person to push him off the bridge, sacrificing him for the good of the many.

In similar practical spirit, neuroscience has interestingly connected these reactions to regions of the brain, to show neuronal bases, by viewing subjects in a functional magnetic resonance imaging (fMRI) machine as they thought about trolley-type scenarios. Choosing, through deliberation, to steer the trolley onto the side track, reducing loss of life, resulted in more activity in the prefrontal cortex. Thinking about pushing the person from the bridge onto the track, with the attendant imagery and emotions, resulted in the amygdala showing greater activity. Follow-on studies have shown similar responses.

So, let’s now fast forward to the 21st century, to look at just one way this thought experiment might, intriguingly, become pertinent to modern technology: self-driving cars. The aim is to marry function and increasingly smart, deep-learning technology. The longer-range goal is for driverless cars to consistently outperform humans along various critical dimensions, especially human error (the latter estimated to account for some ninety percent of accidents) — while nontrivially easing congestion, improving fuel mileage, and polluting less.

As developers step toward what’s called ‘strong’ artificial intelligence — where AI (machine learning and big data) becomes increasingly capable of human-like functionality — automakers might find it prudent to fold ethics into their thinking. That is, to consider the risks on the road posed to self, passengers, drivers of other vehicles, pedestrians, and property. With the trolley problem in mind, ought, for example, the car’s ‘brain’ favour saving the driver over a pedestrian? A pedestrian over the driver? The young over the old? Women over men? Children over adults? Groups over an individual? And so forth — teasing apart the myriad conceivable circumstances. Societies, drawing from their own cultural norms, might call upon the ethicists and other experts mentioned in the opening paragraph to help get these moral choices ‘right’, in collaboration with policymakers, regulators, and manufacturers.
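To make the stakes concrete, here is a minimal sketch, in Python, of the two ethical frameworks described above applied to a trolley-style dilemma. Everything in it (the Outcome record, the toy numbers, the two decision rules) is invented purely for illustration, not drawn from any automaker's actual software; real systems would face noisy, probabilistic perception rather than clean labels.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One candidate action and the harm it is predicted to cause."""
    action: str                  # e.g. 'stay_course' or 'swerve'
    expected_deaths: float       # predicted loss of life for this action
    requires_intervention: bool  # must the car actively redirect harm?

def utilitarian_choice(outcomes):
    # Pure consequentialism: pick whatever minimises expected deaths.
    return min(outcomes, key=lambda o: o.expected_deaths)

def deontological_choice(outcomes):
    # A crude deontological rule: prefer not to actively redirect harm,
    # even at the cost of more deaths; fall back to the utilitarian
    # choice only if every option requires intervening.
    passive = [o for o in outcomes if not o.requires_intervention]
    return passive[0] if passive else utilitarian_choice(outcomes)

# A trolley-style dilemma reduced to toy numbers.
dilemma = [
    Outcome('stay_course', expected_deaths=5, requires_intervention=False),
    Outcome('swerve', expected_deaths=1, requires_intervention=True),
]

print(utilitarian_choice(dilemma).action)    # -> swerve
print(deontological_choice(dilemma).action)  # -> stay_course
```

The toy's only point is that the two functions disagree on identical inputs; deciding which rule (or which blend of rules) a car should run is precisely the moral question that ethicists, policymakers, regulators, and manufacturers would have to settle before such code is ever written.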

Thought experiments like this have gained new traction in our techno-centric world, including the forward-leaning development of ‘strong’ AI, big data, and powerful machine-learning algorithms for driverless cars: vital tools needed to address conflicting moral priorities as we venture into the longer-range future.

Monday, 11 November 2019

God: a New Argument from Design

The game of our universe does not reveal sameness

Posted by Thomas Scarborough

The venerable ‘argument from design’ proposes that the creation reveals a Creator. More than this, that the creation reveals the power and glory of God. Isaac Newton was one among many who believed it—stating in an appendix to his 1687 Principia, or Mathematical Principles of Natural Philosophy:
‘This most elegant system of the sun, planets, and comets could not have arisen without the design and dominion of an intelligent and powerful being.’
The trouble is, there are alternative explanations for design—in fact complete, coherent explanations. To put it in a nutshell, there are other ways that order and design can come about. So, today, the argument is often said to be inconclusive. The evolutionary biologist Richard Dawkins writes that it is ‘unanswerable’—which is not to say, however, that it is disproven.

Yet suppose that we push the whole argument back—back beyond all talk of power and glory—back beyond the simplest conceptions of design, to a core, a point of ‘ground zero’. Here we find the first and most basic characteristic of design: it is more than chaos or, alternatively, it is more than featurelessness.

On the surface of it, our universe ought to be only one or the other. Our universe is governed by laws which ought not to produce any more than chaos on the one hand, or featurelessness on the other. We might use the analogy of a chess game, although the analogy only goes so far.* A careful observer of a chess match reports that the entire game is governed by rules, and there is no departure from such rules.

Yet there is clearly, at the same time, something happening in the game at a different level. Games get won, and games get lost, and games play out in different ways each time. There is something beyond the laws. We may even infer that there is intelligence behind each game – but let us not rush to go that far.

However, without seeing the players, one could assume that they must exist—or something which resembles them. To put it as basically as we can: the game lacks sameness from game to game—whether this be the sameness of chaos or the sameness of featurelessness. Something else is happening there. Now apply this to our universe. We ought to see complete chaos, or we ought to see complete featurelessness. We ought not to see asymmetry or diversity, or anything of that sort—let alone anything which could resemble design.

The problem is familiar to science. The physicist, Stephen Hawking, wrote:
‘Why is it (the universe) not in a state of complete disorder at all times? After all, this might seem more probable.’
That is, there is no good explanation for it. Given the laws of nature, we cannot derive from them a universe which is as complex as the one we see. On the other hand, biologist Stuart Kauffman writes,
‘We have no adequate theory for why our universe is complex.’
This is the opposite view. We ought not to see any complexity emerging. No matter what degree of complexity we find today, whether it be Newton's system of the universe, or the basic fact that complexity exists, it should not happen. It is as if there is more than the rules—because the game of our universe does not reveal sameness.

This idea of ‘more’—of different levels of reality—has been seriously entertained by various scientists. The science writer Natalie Wolchover says, ‘Space-time may be a translation of some other description of reality,’ and while she does not propose the existence of the supernatural, the idea of some other description of reality could open the door to this.

Call this the ‘ground zero’, the epicentre of the argument from design. There is something going on, at a level we do not see, which we may never discover by examining the rules. In the analogy of the chess game, where we observe something beyond the rules, we may not be able to tell what that something is—yet it is clear that it is.

This argument differs from the familiar version of the theological argument from design, which generally assumes that God created the rules which the design displays. On the contrary, this argument proposes that God may exist beyond the rules, through the very fact that we see order.

* A problem with the analogy is that a chess game manifests complexity to begin with. The important point is, however, that the game reveals more than it should.

Monday, 4 November 2019

'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.' 

Posted by Jeremy Dyer *

This is a detail from a great work of art. Which one? Whose? We are expected to admire it, to marvel and to learn. 

What if I told you that it was a detail from one of Pollock's works? Would you then try to 'see' the elusive essence of it? On the other hand, what if I told you it was merely a photo from above the urinal in a late-night restaurant? Does that make it any more or less 'art'? 

If everything is art—the sacred mantra—then the reverse corollary must also be true. Nothing is art.

* Jeremy Dyer is an acclaimed Cape Town artist.

Monday, 28 October 2019

The Politics of the Bridge

Posted by Martin Cohen

Bridges are the stuff of superlatives and parlour games. Which is the longest bridge in the world? The tallest? The most expensive? And then there's also a prize which few seem to compete for - the prize for being the most political. The British Prime Minister Boris Johnson’s surprise proposal in September for a feasibility study for a bridge to Ireland threatens to scoop the pot.

But then, what is it about bridges and Mr. Johnson? He is fresh from the disaster, at least in public relations terms, of his ‘Garden bridge’ over the river Thames (pictured above) - the one that Joanna Lumley said would be a “floating paradise”, the “tiara on the head of our fabulous city”, and which was forecast to cost £200 million before the plug was pulled on it (leaving Londoners with bills of £48 million for nothing). Now he announces a new bridge - this time connecting Northern Ireland, across seas a thousand feet deep, to Stranraer in Scotland. This one would cost a bit too - albeit Johnson suggests it would be value for money at no more than £15 billion.

If Londoners choked on a minuscule fraction of that for their new bridge, it is hard to see how exactly this new one could have been afforded. Particularly as costs of large-scale public works don't exactly have a good reputation in terms of coming in within budget.
The 55-kilometre bridge–tunnel system of the Hong Kong-Zhuhai-Macau bridge that opened last year was constructed only after delays, corruption and accidents had put its cost up to 48 billion Yuan (about £5.4 billion).

When wear and tear to the eastern span of the iconic San Francisco Bay bridge became too bad to keep patching, an entirely new bridge was built to replace it, at a final price tag of $6.5 billion (about £5.2 billion), a remarkable sum in its own right but all the more indigestible because it represented a 2,500% cost overrun from the original estimate of $250 million.
Grand public works are always political. For a start, there is the money to be made on the contract, but there is also the money to be made from interest on the loans obtained. Money borrowed at a low rate from governments can be re-lent at a higher rate. Even when they are run scrupulously, bridges are, like so many large construction projects, money-go-rounds.

And yet, bridges have a good image, certainly compared to walls. They are said to unite, where barriers divide. "Praise the bridge that carried you safe over" says Lady Duberly at breakfast, in George Colman's play The Heir at Law. But surface appearances can be deceptive. Bridges, as recent history has shown, have a special power to divide.

That Hong Kong bridge is also a way of projecting mainland Chinese power onto its fractious new family member. President Putin's $3.7 billion Kerch Strait Bridge joining Crimea to Russia was hardly likely, as he put it, to bring “all of us closer together”. Ukrainians and the wider international community considered the bridge to reinforce Russia's annexation of the peninsula. And if bridges are often favourably contrasted with walls, this one, it soon emerged, functioned as both: no sooner was the bridge completed than shipping trying to sail under it began to be obstructed. No wonder that Ukraine believes that there was an entirely negative and carefully secret political rationale for the bridge: to impose an economic stranglehold over Ukraine and cripple its commercial shipping industry in the Azov Sea.

In this sense, a bridge to Northern Ireland seems anything but a friendly gesture by the British, rather it smacks of old-style colonialism.

But perhaps the saddest bridge of them all was the sixteenth century Old Bridge at Mostar, commissioned by Suleiman the Magnificent in 1557 and connecting the two sides of the old city. Upon its completion it was the widest man-made arch in the world, towering forty meters (130 feet) over the river. Yet it was constructed and bound not with cement but with egg whites. No wonder, according to legend, the builder, Mimar Hayruddin, whose conditions of employment apparently included his being hanged if the bridge collapsed, carefully prepared for his own funeral on the day the scaffolding was finally removed from the completed structure.

In fact, the bridge was a fantastic piece of engineering and stood proud - until, that is, 1993, when Croatian nationalists, intent on dividing the communities on either side of the river, collapsed it in a barrage of artillery shells. Thus the bridge once compared with a ‘rainbow rising up to the Milky Way’ became instead a tragic monument to hatred.

Monday, 21 October 2019

Humanism: Intersections of Morality and the Human Condition

Kant urged that we ‘treat people as ends in 
themselves, never as means to an end’
Posted by Keith Tidman

At its foundation, humanism’s aim is to empower people through conviction in the philosophical bedrock of self-determination and people’s capacity to flourish — to arrive at an understanding of truth and to shape their own lives through reason, empiricism, vision, reflection, observation, and human-centric values. Humanism casts a wide net philosophically — ethically, metaphysically, sociologically, politically, and otherwise — for the purpose of doing what’s upright in the context of individual and community dignity and worth.

Humanism provides social mores, guiding moral behaviour. The umbrella aspiration is unconditional: to improve the human condition in the present, while endowing future generations with progressively better conditions. The prominence of the word ‘flourishing’ is more than just rhetoric. In placing people at the heart of affairs, humanism stresses the importance of the individual living both free and accountable — to hand off a better world. In this endeavour, the ideal is to live unbound by undemocratic doctrine, instead prospering collaboratively with fellow citizens and communities. Immanuel Kant underscored this humanistic respect for fellow citizens, urging quite simply, in Groundwork of the Metaphysics of Morals, that we ‘treat people as ends in themselves, never as means to an end’. 

The history of humanistic thinking is not attributed to any single proto-humanist. Nor has it been confined to any single place or time. Rather, humanist beliefs trace a path through the ages, being reshaped along the way. Among the instrumental contributors were Gautama Buddha in ancient India; Lao Tzu and Confucius in ancient China; Thales, Epicurus, Pericles, Democritus, and Thucydides in ancient Greece; Lucretius and Cicero in ancient Rome; Francesco Petrarch, Sir Thomas More, Michel de Montaigne, and François Rabelais during the Renaissance; and Daniel Dennett, John Dewey, A.J. Ayer, A.C. Grayling, and Bertrand Russell among the modern humanist-leaning philosophers. (Dewey contributed, in the early 1930s, to drafting the original Humanist Manifesto.) The point is that the story of humanism is one of ubiquity and variety; if you’re a humanist, you’re in good company. The English philosopher A.J. Ayer, in The Humanist Outlook, aptly captured the philosophy’s human-centric perspective:

‘The only possible basis for a sound morality is mutual tolerance and respect; tolerance of one another’s customs and opinions; respect for one another’s rights and feelings; awareness of one another’s needs’.

For humanists, moral decisions and deeds do not require a supernatural, transcendent being. To the contrary: the almost-universal tendency to anthropomorphise God, to attribute human characteristics to God, is an expedient to help make God relatable and familiar that can, at the same time, prove disquieting to some people. Rather, humanists’ belief is generally that any god, no matter how intense one’s faith, can only ever be an unknowable abstraction. To that point, the opinion of the eighteenth-century Scottish philosopher David Hume — ‘A wise man proportions his belief to the evidence’ — goes to the heart of humanists’ rationalist philosophy regarding faith. Yet, theism and humanism can coexist; they do not necessarily cancel each other out. Adherents of humanism have been religious, agnostic, and atheist — though it’s true that secular humanism, as a subspecies of humanism, rejects a religious basis for human morality.

For humanists there is typically no expectation of after-life rewards and punishments, mysteries associated with metaphorical teachings, or inspirational exhortations by evangelising trailblazers. There need be no ‘ghost in the machine’, to borrow an expression from British philosopher Gilbert Ryle: no invisible hand guiding the laws of nature, or making exceptions to nature’s axioms simply to make ‘miracles’ possible, or swaying human choices, or leaning on so-called revelations and mysticism, or bending the arc of human history. Rather, rationality, naturalism, and empiricism serve as the drivers of moral behaviour, individually and societally. The pre-Socratic philosopher Protagoras summed up these ideas about the challenges of knowing the supernatural:

‘About the gods, I’m unable to know whether they exist or do not exist, nor what they are like in form: for there are things that hinder sure knowledge — the obscurity of the subject and the shortness of human life’.

The critical thinking that’s fundamental to pro-social humanism thus moves the needle from an abstraction to the concreteness of natural and social science. And the handwringing over issues of theodicy no longer matters; evil simply happens naturally and unavoidably, in the course of everyday events. In that light, human nature is recognised not to be perfectible, but nonetheless can be burnished by the influences of culture, such as education, thoughtful policymaking, and exemplification of right behaviour. This model assumes a benign form of human centrism. ‘Benign’ because the model rejects doctrinaire ideology, instead acknowledging that while there may be some universal goods cutting across societies, moral decision-making takes account of the often-unique values of diverse cultures.

A quality that distinguishes humanity is its persistence in bettering the lot of people. Enabling people to live more fully — from the material to the cultural and spiritual — is the manner in which secular humanism embraces its moral obligation: the obligation of the individual to family, community, nation, and globe. These interested parties must operate with a like-minded philosophical belief in the fundamental value of all life. In turn, reason and observable evidence may lead to shared moral goods, as well as progress on the material and immaterial sides of life's ledger.

Humanism acknowledges the sanctification of life, instilling moral worthiness. That sanctification propels human behaviour and endeavour: from progressiveness to altruism, a global outlook, critical thinking, and inclusiveness. Humanism aspires to the greater good of humanity through the dovetailing of various goods: ranging across governance, institutions, justice, philosophical tenets, science, cultural traditions, mores, and teachings. Collectively, these make social order, from small communities to nations, possible. The naturalist Charles Darwin addressed an overarching point about this social order:

‘As man advances in civilisation, and small tribes are united into larger communities, the simplest reason would tell each individual that he ought to extend his social instincts and sympathies to all the members of the same nation, though personally unknown to him’.

Within humanism, systemic challenges regarding morality present themselves: what people can know about definitions of morality; how language bears on that discussion; the value of benefits derived from decisions, policies, and deeds; and, thornily, deciding what actually benefits humanity. There is no taxonomy of all possible goods, for handy reference; we’re left to figure it out. There is no single, unconditional moral code, good for everyone, in every circumstance, for all time. There is only a limited ability to measure the benefits of alternative actions. And there are degrees of confidence and uncertainty in the ‘truth-value’ of moral propositions.

Humanism empowers people not only to help avoid bad results, but to strive for the greatest amount of good for the greatest number of people — a utilitarian metric, based on the consequences of actions, famously espoused by the eighteenth-century philosopher Jeremy Bentham and nineteenth-century philosopher John Stuart Mill, among others. It empowers society to tame conflicting self-interests. It systematises the development of right and wrong in the light of intent, all the while imagining the ideal human condition, albeit absent the intrusion of dogma.

Agency in promoting the ‘flourishing’ of humankind, within this humanist backdrop, is shared. People’s search for truth through natural means, to advance everyone’s best interest, is preeminent. Self-realisation is the central tenet. Faith and myth are insufficient. As modern humanism proclaims, this is less a doctrine than a ‘life stance’. Social order, forged on the anvil of humanism and its core belief in being wholly responsible for our own choices and lives, through rational measures, is the product of that shared agency.

Monday, 14 October 2019

A New African Pragmatism

Natalia Goncharova, Exhilarating Cyclist, 1913.
By Sifiso Mkhonto *

Allister Marran, addressing himself to older people in these pages, wrote: 'Your time is over.' Far from being ageism, his attitude represents a new pragmatism in Africa. 

For the past few years, a question has lingered in my mind: are African political and business leaders concerned about the future of this continent, or are they concerned about their turn to eat, and how those in their lineage may benefit from the feast that is dished out in the back kitchen? Judging by the obvious evidence before us, we can only conclude that they are far too often unconcerned. 

We shall not delve into each problem, because history teaches us that we have a tendency to spend our resources and energy on discussing and unpacking problems, rather than executing the solution. In business, leaders do not appreciate you knocking at their door with a problem. They prefer a mere brief of the problem, and a detailed plan of the solution. This philosophy can and should be adapted to our approach to social issues that we face as a continent.

In my understanding, we should pragmatically ask at least four ‘whys’. These should be good enough to assist us in thinking of an amicable solution to major issues, among them the following:
• unemployment
• crime (including femicide, xenophobia, and gang violence)
• poverty, and
• lack of quality education
Here is a basic example of applying the first of these four points:
Why do we have such a high level of unemployment amongst the youth?
• Because there are no jobs.
Why are there no jobs?
• Because policy is not business-friendly, start-up businesses fail to create jobs, there’s too much red tape, and young people study in fields where jobs are scarce.
Why, and why again. All the answers derived should lead us to basic solutions. We do not need ideology and political identity as a continent. These preoccupations set us ten steps back each time a pragmatic, sustainable solution is brought forth. It is the youth, today, which is determined, against all odds, to change the narrative of corrupt states, high crime levels, the stigma of stereotypical prejudices, and many other issues.

Against all the red tape, they still start businesses with no funding, they still pursue education with great sacrifice, to escape the reality of poverty. However, because of those who enjoy the buffet that is prepared and dished out in the back kitchen, many young lions and lionesses are doomed.

The solution is simple. Give young people the space they deserve – they think differently, and they are determined – to advance this continent into one of the most prosperous in the world. 'Grant an idea or belief to be true,' wrote William James, 'what concrete difference will its being true make in anyone's actual life?' Ideology and political identity have failed us. We need a new African pragmatism.

* Sifiso Mkhonto is a logistician and former student leader in South Africa.

Monday, 7 October 2019

Picture Post #49: Vision in a Suitcase

'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.' 

Posted by Tessa den Uyl

Florence, 2019

The Venus by Botticelli, the David by Michelangelo, the Thinker by Rodin: names which resonate, and which celebrate moments in our history that are now in the lap of technology. With new materials and with lasers, these images, and thus the names, are copied and cast into gadgets which we can grasp quickly and transport (even) in hand luggage.

These persons had a vision. In this light it just seems odd to exploit for commerce ready-mades that are not urinals, thinking of Duchamp’s ‘Fountain’ and its placing of a non-art object in an art space. What happens in this shop window might be thought of as the reverse. The art (and its creator) are objects available to everyone. But nothing within these statues reminds us of a vision. They are vision-less, though apparently they remind us of something else.

Does this mean that, when we have merely heard about something, scraps of such something are enough to live through the original, with all its implications and compulsiveness, in which and for which the creation came into being?

Monday, 30 September 2019

What Place for Privacy in a Digital World?

C. S. Lewis, serene at his desk...

Posted by Keith Tidman

When Albert Camus offered this soothing advice in the first half of the twentieth century, ‘Opt for privacy. . . . You need to breathe. And you need to be’, life was still uncomplicated by digital technology. Since then, we have become just so many cogwheels in the global machinery that makes up the ‘Internet of things’ — the multifarious devices that simultaneously empower us and make us vulnerable.

We are alternately thrilled with the power that these devices shower on us — providing an interactive window onto the world, and giving us voice — even as we are dismayed to see our personal information scooped up, stowed, scrutinised for nuggets, reassembled, duplicated, and given up to others. That we may not even see this happening (that our lives are shared without our awareness, without our free choice, and without any power to prevent their commodification and monetisation) only makes it much worse.

Can a human right to privacy, assumed by Camus, still fit within this digitised reality?

Louis Brandeis, a former justice on the U.S. Supreme Court, defined the ‘right to be left alone’ as the ‘most comprehensive of rights, and the right most prized by civilised people’. But that was proffered some ninety years ago. If individuals and societies still value that principle, then today they are challenged to figure out how to balance the intrusively ubiquitous connectivity of digital technology, and the sanctity of personal information implicit in the ‘right to be left alone’. That is, the fundamental human right articulated by the UN’s 1948 Universal Declaration of Human Rights:
‘No one shall be subjected to arbitrary interference with his privacy, family, home, or correspondence’.
It’s safe to assume that we’re not about to scrap our digital devices and nostalgically return to analog lives. To the contrary, inevitable shifts in society will require more dependence on increasingly sophisticated digital technology for a widening range of purposes. Participation in civic life will call for more and different devices, and greater vacuuming and moving around of information. Whether the latter will translate into further loss of the human right to privacy, or whether society will manage change in order to preserve or even recover lost personal privacy, remains open; the draft of that narrative is still being written.

However, it’s important to acknowledge that intervention — by policymakers, regulators, technologists, sociologists, cultural anthropologists, and ethicists, among others — may coalesce to avoid the erosion of personal privacy taking a straight upward trajectory. Urgency, and a commitment to avoid and even reverse further erosion, will be key.

Some contemporary philosophers have argued that claims to a human right to privacy are redundant, for various reasons. An example is when privacy is presumed embedded in other human rights, such as personal property — distinguished from property held in common — and protection of our personal being. But this seems dubious; in fact, one might flip the argument on its head — that is, found other rights on the right to privacy, the latter being more fundamentally based in human dignity and moral values. It’s a more nuanced, ethics-based position that makes the one-dimensional assertion that ‘If you don’t have anything to hide, you have nothing to fear’ all the more specious.

Furthermore, without a right to privacy being carved out in concrete terms, such as codified in law and constitutions, it may simply get ignored, rendering it non-defendable. For all that, we value privacy, and with it the power to prevent other people’s intrusion and meddling in our lives. We cling to the notion of what has been dubbed the ‘inviolate personality’ — the quintessence of being a person. In endorsing this belief in individual interests, one is subscribing to Noam Chomsky’s caution that ‘It’s dangerous when people are willing to give up their privacy’. To Chomsky’s point, the informed, ‘willing’ acceptance of social media’s mining and monetising of our personal data provides a contrast.

One parallel factor is the push-pull between what may become normalised governmental access to our personal information and individuals’ assertion of confidentiality and the ‘reasonable expectation’ of privacy. The style of government — from liberal democracy to authoritarianism — shapes that access: whether it is put to benign use or malign abuse. ‘In good conscience’ is a reasonable guiding principle in establishing the what, when, and how of government access; and in turn, that access bears on the fundamental human right to privacy. Meantime, governments may see a need for tools to combat crime and terrorism, allowing surveillance and intelligence gathering through wiretaps and Internet monitoring.

Two and a half centuries ago, Benjamin Franklin foreshadowed this tension between the liberty implied in personal privacy and the safety implied in government’s interest in self-protection. He cautioned: 
‘Those who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety’. 
Yet, however amorphous these contrary claims to rights might be, as a practical matter society has to resolve the risk-benefit equation and choose how to play its hand. What we conclude is the best solution will likely keep shifting, based on norms and emerging technology.

And notions of a human right to privacy differ as markedly among cultures as they do among individuals; the definition of privacy and its value may differ both among and within cultures. It would perhaps prove unsurprising if cultures in Asia, Africa, Europe, and South or Central America were to frame personal privacy rights differently, if only insofar as both the burgeoning of digital technology and the nature of government influence the privacy-rights landscape.

The reflex may be to anticipate that privacy and human rights will take a straight, if thorny, path. The relentless and quickening emergence of digital technologies drives this impulse. The British writer and philosopher C. S. Lewis provides social context for this impulse, saying:
‘We live … in a world starved for solitude, silence, and privacy.’
Despite the invasion of people’s privacy, by white-hatted parties (with benign intent) and black-hatted parties (with malign intent), I believe our record thus far represents only an embryonic, inelegant attempt to explore — with perfunctory legal, regulatory, or principled restraint — the rich utility of digital technology.

Nonetheless, if we are to steer clear of the potentially unbridled erosion of privacy rights — to uphold the human right to privacy, however measured — then it will require repeatedly revisiting what one might call the ‘digital social contract’ the community adopts, and resolving the contradiction behind being both ‘citizen-creators’ and ‘citizen-users’ of digital technologies.

Monday, 23 September 2019

The Impossibility of Determinism

Posted by Thomas Scarborough

Free will and determinism: it is a classic problem of metaphysics. No matter what we may think about it, we know that we have a problem. We know that things are physically determined. I line up dominoes in a row, and topple the first of them with my finger. It is certain that the whole row of dominoes will fall.

Are people then subject to the same kind of determinism? Are we just so many powerless humanoid shapes waiting to be knocked down by circumstances? Or perhaps, to what extent are we subject to such determinism? Is it possible for us to escape our own inner person? Our own history? Our own future? Are we even free to choose our own thoughts—much less our actions? Are we even free to believe? Each of these questions would seem to present us with a range of mightily confusing answers.

I suggest that it may be helpful to try to view the question from a broader perspective—the particular one that comes from consideration of the phenomenon of cause and effect. If I am controlled by indomitable causes, then I am not free. Yet if I am (freely) the cause of my own thoughts and actions, then I am free. Which then is it? Once we understand the dynamics of cause and effect, we should be in a better position to understand free will and determinism.

What is cause and effect?

In our everyday descriptions of our world, we say that, to paraphrase Simon Blackburn, causation is the relation between two events. It holds when, given that one event occurs, ‘it produces, or brings forth, or necessitates the second’. The burrowing aardvark caused the dam to burst; the lightning strike caused the thatch to burn; the medicine caused the patient to rally, and so on. Yet we notice in this something that is immediately problematic—which is that in order to say that there is causality, we need to have carefully defined events before and after.

But such definition is a problem. The philosopher-statesman Francis Bacon wrote of the ‘evil’ we find in defining natural and material things. ‘The definitions themselves consist of words, and those words beget others.’ Aristotle wrote that words consist of features (say, the features of a house), and those features must stand in a certain relation to one another (rubble, say, is not a house). Therefore, not only do we have words within words, but features and relations, too.

Where does it all end? It all ends nowhere. It is an endless regress. Bacon’s ‘evil’ means that our definitions dissipate into the universe. It seems much like having money in a bank, which has its money in another bank, which has its money in another bank, and so on. It is not hard to see that one will never find the money. Full definitions ultimately reach into the void.

If we want to be consistent about it, there are no events. In order to obtain events, we need to set artificial limits to our words—and artificial limits to reality itself, by excluding unwanted influences on our various constructions. But that is not the way the world really is in its totality. More than this, these unwanted influences always seem to enter the picture again somewhere along the line. This is a big part of the problem in our world today.

Of course, cause and effect quite simply work: he lit the fire; I broke the urn; they split the atom. This is good as far as it goes—yet again, such explanations work because we define before and after—and that very definition strips away a lot of what is really going on.

Where does this leave us? It leaves us without a reason to believe in cause and effect—even if we are naturally disposed to thinking that way. There is no rational framework to support it.

Someone might object. Even if we have no befores and afters, we still have a reality which is bound by the laws of the universe. There is therefore some kind of something which is not free. Yet every scientific law is about events before and after. Whatever is out there, it has nothing in common—that we can know of anyway—with such a scheme.

This may be a new way of putting it, but it is not a new idea. Albert Einstein, as an example, said that determinism is a feature of theories, rather than any aspect of the world directly. While, in the end, we cannot prove free will, we can state that notions of determinism are out of the question in the world as we know it. The world is something else, which we have not yet understood.

Monday, 16 September 2019

Extinction Crisis? The solution may be privatisation

Endangered species can often be protected with comparatively tiny amounts 
of resources. Pictured, the critically endangered Black-flanked rock wallaby whose 
protection needs are measured in thousands of dollars - Image via WWF Australia

Posted by Martin Cohen

Looking around the world, there are so many problems that seem so intractable, and the solutions so far off, that it can seem as if it is better to, well, not look around the world. Take ‘climate change’, for example: the Danish statistician and reformed ‘skeptic’ Bjorn Lomborg has estimated that the cost of reducing the world's temperature by the end of the century by a ‘grand total of three tenths of one degree’ is ... $100 trillion. That's not small beans. In terms of charitable donations, you'd need to find 100 million people ready to chip in a million each.

For any number of reasons, that cash ain't gonna be raised and those abatement measures - however worthy - are not going to be made.

Yet in fact there are a whole range of environmental problems which do have relatively straightforward solutions - and require only tiny investments. These small but vital programmes are often starved of resources.

Take extinctions in Australia, for example, a topic I asked Friends of the Earth (UK) to campaign on back in the 1990s - mainly to highlight UK business links to forest clearance. To run a campaign might have cost a few thousand pounds, but after discussions with the then Head of FoE, and a meeting with the senior staff including the Biodiversity campaigner for a roundtable on the issues, I was told there were no resources for it. They offered to run a Press Release campaign if I wrote it instead. And then reneged on that too.

The point is not that I dislike Friends of the Earth; in fact I think they do a lot of good work (they helped me lead a campaign that saved the Yorkshire Moors from a four-lane motorway, probably the only time the organisation actually reversed a road scheme that had been formally approved). The point is rather that relying on environmentalists to save the world is a mistake. The economics point at a problem and a paradox: environmental pressure groups exist and make money out of environmental horror stories; they have no financial interest in saving anything. A campaign like Climate Change, in which a bottomless pit of money must be raised, suits certain people very well, even though it can never achieve its ends.

Meanwhile time is running out! Talk about an ‘extinction crisis’ ... It is there all right. But the solutions don't require grandiose schemes to control the world’s climate - they require small concrete actions to preserve habitat.

Half of all the species lost in modern times have been lost in Australia. In the last 150 years, one in eight of Australia's mammal species - which live(d) nowhere else on earth - has been driven out of existence, as the Australians literally bulldozed their forests into desert in pursuit of grazing for sheep and cows. At the same time, the land value stolen from the defenceless animals and plundered from Australia's native people is actually tiny.

The Bramble Cay Melomys, which lived only on a tiny island in the Torres Strait, could have been saved if the island had simply been bought and made into a sanctuary. Instead, the fate of the little rodent was sealed by red tape and political indifference.

Land clearing, invasive farming, extermination programmes, lack of monitoring - all these are essentially money-driven failings with economic responses possible. Saving the Spotted-tailed Quoll, for example, requires only that a chunk of land be preserved from the insatiable appetite of Australia's farmers for clearance. Likewise, the Black-flanked Rock-wallaby needs only a small reserve declared over its now much-diminished range. Such things can essentially be investments - yet the world's billionaire philanthropists - I'm looking at you, Mr Gates, Mr Buffett! - have so far directed their wealthy and otherwise worthy Foundations only towards human needs: medicine, education, even governance. Yet biodiversity and species preservation is surely just as vital a part of our shared human inheritance as any other aspect of human life.

At the moment, attention is rightly focussed on land clearance in the Amazon rainforest - clearance often financed directly or indirectly by Western banks and institutions. Yet here's an idea for those with resources: buy up sections of the Amazon and hold them on behalf of their indigenous peoples as ecological parks, scientific resources, and sustainably farmed forests. Such privately owned 'ecofarms' would be able to resist predation by those set on both genocide and ecocide. They only need investors!

It has already been done successfully, for example in the conservation-driven Kruger Private Reserves in Africa. There, the connecting of habitats alone improves the survival chances of many species in the region.

Monday, 9 September 2019

‘Just War’ Theory: Its Endurance Through the Ages

The Illustrious Hugo Grotius of the Law of Warre and Peace: With Annotations, III Parts, and Memorials of the Author's Life and Death. Book with title-page engraving, printed in London, England, by T. Warren for William Lee in 1654.

Posted by Keith Tidman

To some people, the term 'just war' may have the distinct ring of an oxymoron, the more so to advocates of pacifism. After all, as the contention goes, how can the lethal violence and destruction unleashed in war ever be just? Yet not all of the world's contentiousness, whether historical or current, lends itself to nonmilitary remedies. So, coming to grips with the realpolitik of humankind inevitably waging successive wars over several millennia, philosophers dating back to ancient Greece and Rome — like Plato, Aristotle, and Cicero — have thought about when and how war might be justified.

Building on such early luminary thinkers, the thirteenth-century philosopher and theologian Saint Thomas Aquinas, in his influential text, Summa Theologica, advanced the principles of ‘just war’ to a whole other level. Aquinas’s foundational work led to the tradition of just-war principles, broken down into jus ad bellum (the right to resort to war to begin with) and jus in bello (the right way to fight once war is underway). Centuries later came a new doctrinal category, jus post bellum (the right way to act after war has ended).

The rules that govern going to war, jus ad bellum, include the following:
• just authority, meaning that only legitimate national rulers may declare war;

• just cause, meaning that a nation may wage war only for such purposes as self-defence, defence of other nations, and intervention against the gravest inhumanity;

• right intentions, meaning the warring state stays focused on the just cause and doesn’t veer toward illegitimate causes, such as material and economic gain, hegemonic expansionism, regime change, ideological-cultural-religious dissimilarities, or unbridled militarism;

• proportionality, meaning that as best can be determined, the anticipated goods outweigh the anticipated evil that war will cause;

• a high probability of success, meaning that the war’s aim is seen as highly achievable; 

• last resort, meaning that viable, peaceful, diplomatic solutions have been explored — not just between potentially warring parties, but also with the intercession of supranational institutions, as fit — leaving no alternative to war in order to achieve the just cause.

The rules that govern the actual fighting of war, jus in bello, include the following: 
• discrimination, meaning to target only combatants and military objectives, and not civilians or fighters who have surrendered, been captured, or are injured; 

• proportionality, meaning that injury to lives and property must be in line with the military advantage to be gained; 

• responsibility, meaning that all participants in war are accountable for their behaviour;

• necessity, meaning that the least-harmful military means, in choice of weapons, tactics, and amount of force applied, must be employed.

The rules that govern behaviour following war’s end, jus post bellum, typically include the following: 
• proportionality, meaning the terms to end war and transition to peace should be reasonable and even-handed; 

• discrimination, meaning that the victor should treat the defeated party fairly and not unduly punitively; 

• restoration, meaning promoting stability, mapping infrastructural redevelopment, and guiding institutional, social, security, and legal order;

• accountability, meaning that determinations of culpability, and retribution for wrongful actions (including atrocities) during hostilities, are reasonable and measured.

Since the time of the early philosophers like Augustine of Hippo and Thomas Aquinas, and of Hugo Grotius, the ascribed 'father of international law' (The Law of War and Peace, frontispiece above), the principles tied to 'just war', and their basis in moral reciprocity, have shifted. One change has entailed the increasing secularisation of 'just war' from largely religious roots.

Meanwhile, the failure of the seventeenth-century Peace of Westphalia — which ended Europe's devastating Thirty Years' War and Eighty Years' War, declaring that states would henceforth honour other nations' sovereignty — has been particularly stark. As well intentioned as the treaty was, it failed to head off repeated bloody military incursions into others' territory over the last three and a half centuries. Furthermore, the modern means of war have necessitated revisiting the principles of just wars — whatever the theoretical rectitude of wars' aims.

One factor is the extraordinary versatility, furtiveness, and lethality of the modern means of war — and their remarkably accelerating transformation. None of these 'modern means' were, of course, even imaginable as just-war doctrine was being developed over the centuries. The bristling technology is familiar: precision ('smart') munitions, nuclear weapons, drones, cyber weapons, long-range missiles, stealthy designs, space-based systems, biological and chemical munitions, global power projection by sea and air, hypervelocity munitions, and increasingly sophisticated, lethal, hard-to-defeat AI and autonomous weapons (which increasingly take human controllers out of the picture). In their respective ways, these devices are intended to exacerbate the 'friction and fog' and the lethality of war for the opponent, as well as to lessen the exposure of one's own combatants to threats.

Weapons of a different ilk, like economic sanctions, are meant to coerce opponents into complying with demands and adopting certain behaviours, even if civilians are among the most direly affected. Tactics, too, range widely: proxies, asymmetric conflicts, special-forces operations, terrorism (intrinsically episodic), psychological operations, targeted killings of individuals, and mercenary insertion.

So, what does this inventory of weapons and tactics portend regarding just-war principles? The answer hinges on the warring parties: who’s using which weapons in which conflict and with which tactics and objectives. The idea behind precision munitions, for example, is to pinpoint combatant targets while minimising harm to civilians and civilian property.

Intentions aren’t foolproof, however, as demonstrated in any number of currently ongoing wars. Yet, one might argue that, on balance, the results are ‘better’ than in earlier conflicts in which, for example, blankets of inaccurate gravity (‘dumb’) bombs were dropped, and where indifference among combatants as to the effects on innocents — impinging on noncombatant immunity — had become the rule rather than the exception.

There are current 'hot' conflicts to which one might readily apply just-war theory: Yemen, Somalia, Libya, Syria, Ukraine, India/Pakistan, Iraq, and Afghanistan, among sundry others, come to mind (as does brinkmanship, such as with Iran, North Korea, and Venezuela). The nature of these conflicts ranges from international to civil to terrorist to hybrid. Their adherence to jus ad bellum and jus in bello narratives and prescriptions differs radically from one to another. And these conflicts' jus post bellum narratives — the right way to act after war has ended — have still to reveal their final chapter in concrete treaties, as for example in the current negotiations between the Taliban and the United States in Afghanistan, almost two decades into that wearyingly ongoing war.

The reality is that the breach left by these sundry wars, whether they end abruptly or simply peter out in exhaustion, will be filled by others. As long as the realpolitik inevitability of war continues to haunt us, humanity needs Aquinas's guidance.

Just-war doctrine, though developed in another age and necessarily having undergone evolutionary adaptation to parallel wars’ changes, remains enduringly relevant — not to anaesthetise the populace, let alone to entirely cleanse war ethically, but as a practical way to embed some measure of order in the otherwise unbridled messiness of war.
