Monday, 19 October 2020

Is Technology ‘What Makes us Human’?


Posted by Keith Tidman

Technology and human behaviour have always been intertwined, defining us as the species we are. Today, technology’s ubiquity, our lives’ ever-faster turn toward it, and its multiplicity of forms have given it stealth-like properties. Increasingly, for many people, technology seems just to happen, and the human agency behind it appears veiled. Yet at the same time, perhaps counterintuitively, what appears to happen ‘behind the curtain’ hints that technology is fundamentally rooted in human nature. 


Certainly, there is a delicate affinity between science and technology: the former uncovers how the world happens to be, while the latter helps science to convert those realities into artefacts. As science changes, technologists see opportunities: through invention, design, engineering, and application. This restlessly visionary process is not just incidental, I suggest, but rather is intrinsic to us.

 

Our species comprises enthusiastic toolmakers. The coupling of science and technology has led to humanity’s rich array of transformative products, from particle accelerators to world-spanning aircraft, to magnetic-resonance imaging devices, to the space-station laboratory and universe-imaging space telescopes. The alliance has brought us gene-editing technologies and bioengineering, robotics driven by artificial intelligence, energy-generating solar panels, and multifunctional ‘smart phones’.

 

There’s an ‘everywhereness’ of many such devices in the world, reaching into our lives, increasingly creating a one-world community linked by interdependence on many fronts. The role of toolmaker-cum-technologist has become integrated, metaphorically speaking, into our species’ biological motherboard. In this way, technology has become the tipping point of globalisation’s irrepressibility.

 

René Descartes went so far as to profess that science would enable humankind to ‘become the masters and possessors of nature’. An overreach, perhaps — the despoiling of aspects of nature, such as the air, land, and ecosystems, at our over-eager hands convinces us of that — but the trend line today points in the direction Descartes declared, just as electric light frees swaths of the world’s population from dependence on daylight.

 

Technology was supercharged by the science of the Newtonian world, which saw the universe as a machine, and its subsequent vaulting into the world of digits has had obvious magnifying effects. These will be amplified further as machine learning takes centre stage. Yet human imagination and creativity have had a powerfully galvanising influence over the transformation. 

 

Technology itself is morally impartial, and as such neither blameworthy nor praiseworthy. However ‘clever’ it becomes, for the foreseeable future technology has no agency — or preference of any kind. On the horizon, much cleverer, even self-optimising technology might start to exhibit moral partiality. But as to responsibility and accountability, it is how technology is employed, by its users, that gives rise to considerations of morality.

 

A car, for example, is a morally impartial technology. No nefarious intent can fairly be ascribed to either inventor or owner. However, as soon as someone chooses to exercise his agency and drive the car into a crowd with the intent to hurt, he turns the vehicle from its original purpose as an empowering tool for transportation into a weapon of sorts. But no one wags a finger remonstratively at the car.

 

Technology influences our values and norms, prompting culture to morph — sometimes gradually, other times hurriedly. It’s what defines us, at least in large part, as human beings. At the same time, the incorporation and acceptance of technology is decidedly seductive. Witness the new Digital Revolution. Technology’s sway is hard to discount, and even harder to rebuff, especially once it has established roots deep into culture’s rich subsurface soil. But this sway can also be overstated.

 

To that last point, despite technology’s ubiquity, it has not entirely pulled the rug from under other values: community, spirituality, integrity, loyalty, respect, leadership, generosity, and accountability, among others. Indeed, technology might be construed as a multiplier of opportunities for development and improvement, empowering individuals, communities, and institutions alike. The fifteenth-century printing press, which democratised access to knowledge, became a tool that spurred revolutions, and helped spark the Enlightenment, was one instance of this influential effect.


Today, rockets satisfy our impulse to explore space; the anticipated advent of quantum computers promises dramatic advances in machine learning as well as the modelling of natural events and behaviours, unbreakable encryption, and the development of drugs; nanotechnology leads to the creation of revolutionary materials — and all the while the Internet increasingly connects the world in ways once beyond imagination.

 

In this manner, there are cascading events that work both ways: human needs and wants drive technology; and technology drives human needs and wants. Technological change thus is a Janus figure with two faces: one looking toward the past, as we figure out what is important and which lessons to apply; and the other looking toward the future, as we innovate. Accordingly, both traditional and new values become expressed, more than just obliquely, by the technology we invent, in a cycle of generation and regeneration.

 

Despite technology’s occasional failures, few people are really prepared to live unconditionally with nature, strictly on nature’s terms. To do so remains a romanticised vision, worthy of the likes of the American idealist Henry David Thoreau. Rather, rightly or wrongly, we have more often seen it as in our higher interest to make life yet a bit easier, a bit more palatable. 

 

Philosopher Martin Heidegger declared, rather dismally, that we are relegated to ‘remain unfree and chained to technology’. But I think this is an unappreciative, undeservedly dismissive view of technology’s advantages across domains: agriculture, education, industry, medicine, business, sanitation, transportation, building, entertainment, materials, information, and communication, among others. These are domains where considerations like resource sustainability, ethics, and social justice have been key.

 

For me, in its reach, technology’s pulse has a sociocultural aspect, both shaping and drawing upon social, political, and cultural values. And to get the right balance among those values is a moral, not just a pragmatic, responsibility — one that requires being vigilant in making choices from among alternative priorities and goals. 

 

In innumerable ways, it is through technology, incubated in science, that civilisation has pushed back against the Hobbesian ‘nastiness and brutishness’ of human existence. That’s the record of history. In the meantime, we concede the paradox of complex technology championing a simplified, pleasanter life. And as such, our tool-making impulse toward technological solutions, despite occasional failures, will continue to animate what makes us deeply human.

 

Monday, 12 October 2020

REVIEW: The Leader's Bookshelf (2020)

By Thomas Scarborough


BOOK REVIEW: The Leader’s Bookshelf: 25 Great Books and Their Readers

Martin Cohen. Rowman & Littlefield, $32 (288p) ISBN 978-1-53813-576-1

The Philosopher by Marlina Vera 2018
It was Martin Cohen's sideways look at philosophy which propelled him into the limelight with Routledge's 101 Philosophy Problems (1999). In his latest book, the author would seem to recall his offbeat roots—like a band returning to its original sound.

There is an obsession in business and management circles today with leadership theory (I myself hold two Master's degrees in leadership!) and the books which propound it. There are hundreds of them, if not thousands, many of them fresh off the press. Mostly, they adhere to the ‘transformational’ model—which typically advises vision, character, and influence, and a few things besides. Such books are generally written by people who claim to have tried the formula and succeeded (many have not).


Yet, rather than read books by leaders, why not read the books the leaders read? What were their own sources of inspiration? It would seem to make eminent sense. What's more, for the doubters, Martin Cohen meticulously traces how exactly the leaders' reading is connected with their leadership: thought leaders, political leaders, corporate leaders, and leaders of many kinds. While this is not an entirely new idea,* it is still fresh, and reveals approaches to leadership which are in some way the same as—only different from—those of the ‘transformational’ leadership genre.


Martin Cohen selects twenty-five ‘great books’ (by Plato, George Orwell, Herman Melville, Alex Haley, and so on) and twenty-one people who read them (Harry Kroto, Jacob Riis, Rachel Carson, Malcolm X, and so on), mixing them all into ten chapters. With a potpourri like this, one is hardly going to find a systematic leadership theory. A review in Publishers Weekly calls the book a ‘fun yet haphazard survey’. Yet there is ‘method in the madness’. One finds it in the chapter titles. The ten chapters of the book represent an orderly progression of concepts. It seems worth listing the chapter titles here:

Meet the Wild Things (which is to say, tame the wild things of life)

Roll the Dice (which is to say, just give it a go, and see)

Save the Planet—One Page at a Time! (give a care for the wider world)

Search for Life’s Purpose

See the World in the Wider Social Context

Be Ready to Reinvent Yourself

Set Your Thinking Free

Make a Huge Profit—and Then Share It

Recognise the Power of Symbols

Follow Your Personal Legend

In each of these chapters, Martin Cohen describes the books, and describes the people who read them, then ties the two together—and like the best of biographers and historians, drops a sprinkle-sugar of fascinating facts and anecdotes into his text: for instance, John D. Rockefeller’s (miserly) penny in the Sunday School plate, a lost and lonely young Barack Obama’s attachment to a children’s book, or Richard Branson’s zany experiments with chance.


Is there any leadership theory we can glean from the book? In spite of its free-wheeling style, there surely is. All these leaders found a guiding thought which resonated with them, and they stuck to it; they often had a vision for a wider world, and its many subtleties and interconnections; they found, too, the ‘vision, character, and influence’ of the leadership books—yet so very differently. Theirs was vision which was not bound to material outcomes, character which did not always match cultural norms, and influence which seemed an after-effect rather than a carefully nurtured goal in itself.


In an important sense, one needs to note that this book is not a standard work of research—and yet it is thoughtful, balanced, and broad. It represents personal insight and wisdom from a well-informed philosopher. This is what Cohen brings to the book. In fact, even the more serious leadership theory is often little more than unsupported conjecture, with its conjectural nature well disguised.


There is something of an Easter egg for philosophers at the very end of the book—tucked away in the afterword. Cohen says that time and again, in the reading of successful people, ‘philosophers and philosophical works pop up as aspirational or influential texts more often than any others’. At the end of the day, it is philosophers who rule the world—by proxy as it were. And yet, what do the philosophers themselves read? In the case of Ludwig Wittgenstein anyway (one of the thought leaders described in this book) it turns out that it was a work of literary imagination, indeed humour: The Life and Opinions of Tristram Shandy, by Laurence Sterne (1759). This is one of the many surprising literary connections made by Cohen's book.



* A popular book of its kind, also The Leader's Bookshelf (without subtitle), surveys the reading of high-ranking military officers of the US Navy. Published by the Naval Institute Press (2017).

Monday, 5 October 2020

Picture Post 58: The Underpass



'Because things don’t appear to be the known thing; they aren’t what they seemed to be, neither will they become what they might appear to become.'

Posted by Tessa den Uyl


What is graffiti? Urban art, identity statements, politics, distraction, public empowerment, vandalism, property, religion, claiming ownership—graffiti embraces them all. Not always do we know its meanings, though habitually we recognise it when we see it.


Aesthetically speaking, graffiti might not be attractive—though this does not explain why it is so often at one and the same time accepted and erased. Obviously graffiti tends to move against the mainstream, though in its form it is somehow of the same language—eradicated in that contradiction which suits the social order.


This makes graffiti a scribble in a world where its echo is instantaneously consumed. On the other hand, it is a manifestation and a message, noticed for the unconventional way in which it is proposed.


Yet the incompatible is never as discordant as it might initially appear. Graffiti exposes the innate ambivalence of our societies and legal systems, by being an illegal form of expression while also being sold for high prices in mainstream museums. While some graffitists obtain copyright on their work, others are prosecuted for vandalism.


Similar to the man depicted in the picture above who, exceptionally, has become a legend and a symbol for a generation and beyond, it might well be that the influence of graffiti will have clearer definition in the future, to become what it is not yet.


Whatever the case, it seems bound to tell us more about ourselves than we initially imagined some swiftly drawn assumptions of its meaning could provoke.

Monday, 28 September 2020

Hell: A Thought Experiment

by Thomas Scarborough

Going Down with the Cash by Peter Gourfain 1998
Various religions have concepts of hell. However, nowhere is the doctrine clung to so tightly or debated so vigorously as in the Christian faith.

Yet it is, too, a philosophical subject, which has been treated philosophically in recent years by the universities of Oxford, Stanford, Alaska, and Tennessee—among various others. With this in mind, this short post presents a thought experiment—and a fundamentally philosophical one at that.


While the Christian faith rests on revelation, and its central teachings are known through revelation, there are various interpretations of revelation. In the case of hell, a good many. The most basic variations concerning hell—if they are not major views, then notable ones—are these:


The literal or orthodox view, that hell is a place of eternal conscious torment

The metaphorical view, that hell's torments are symbolic, yet real in some way

The ‘circles of hell’: the view that there are degrees of eternal torment, suited to the crimes

A ‘temporary torment’: the view that hell is not eternal, but finite in time

Annihilationism and conditionalism, which hold that the wicked—or unbelieving—do not live on after death

The universalist view, which holds that there is no hell, but all are saved


In reality, these views have many subtleties—even many designations—and it needs to be borne in mind that this brief survey is far too simple. Yet it gives an idea.


With regard to the more literal views on hell, the most basic problem from a human point of view—apart from the question of the existence of hell itself—is that we find it difficult to imagine eternal torment. We revolt against the idea. Also, we find it difficult to reconcile it with a loving God—in spite of Scripture's copious emphasis on the dangers of hell.


On the other hand, many feel it would be a travesty of justice not to have a hell. Many, too, have felt the fear of hell—call it a supernatural fear. This has been particularly prominent at times of spiritual revival.


The theological response to people’s qualms, most generally, is that we do not need to understand hell, or the God who prepared it for some. Ultimately it is about the sovereignty of God, and the revelation of Scripture. Yet have we fully explored the concepts, or driven deep enough with alternatives? A thought experiment may make this clear.


What is hell? Apart from representing some form of torment, there are at least two features which are central to the literal view: it is said to separate people from God, and it offers no hope. There is no exit. Those who are consigned to hell cannot view from afar the perfect person and purpose of God, and perhaps thereby have some small comfort. Nor can they strengthen themselves with the thought that this, too, shall pass.


With this rudimentary overview, then, our thought experiment is this:


If I should find myself in hell—whether I had thought I knew anything about it in my lifetime or not—would anything in my experience of hell contradict its eternity? Even if, that is, hell were not eternal?


Perhaps we may call this a phenomenalist view of hell. Those condemned to hell would experience it as an eternal torment—a place without God, and without hope—which, after all, is by very definition what hell is, at least to those of a more literal persuasion.


In short, would the sense-experience of those in hell be in any way distinguishable from a literal view of hell? Similarly, could Scriptural descriptions of hell as eternal—with banishment from God's presence, and the absence of hope—reveal to us which of the two is true? On the surface of it, no.


On the surface of it, this might promise to solve some critical theological and philosophical problems. One could reconcile the literal view with various other views, because eternity is something which is experienced. There need not be, then, metaphysical truths at stake. One could see complete justice done, while both believing and not believing, as it were, that hell is eternal. And one could ultimately reconcile the unmitigated torments of hell with God’s love.


However, before we congratulate ourselves on having solved the mysteries of hell, there are some further things we need, philosophically, to consider:


Would this not give us the deus deceptor of Descartes—a God who deceives us into believing that the torments of hell are eternal?

How should we distinguish the experience of hell and hell itself, and consider that the one is better than the other? What can be worse than eternal torment?

Would our thought experiment not open the possibility that heaven is not eternal, in the objective sense?


Further, this would surely reflect on views other than a literal one. If the essence of hell lies in the experience of it, then even if we should allow a ‘temporary torment’—namely, the belief that hell is not eternal but finite in time—would we not through this introduce hope to hell? Surely any torment can be borne bravely where there is hope—not to speak of the hope of Paradise! Yet if the ‘circles of hell’ is correct—namely, that there are degrees of torment, but no hope of an exit—is there any judgement without hope which can be a bearable one?


What then might our thought experiment teach us?


It separates objective and subjective views of eternity—which may not have been done before. Yet this seems to offer us little to ameliorate the sufferings of hell. Further, a phenomenalist view of hell might worsen the terrors of the age-old view of the ‘circles of hell’, and—too much, it might be said—improve the situation of those in a ‘temporary torment’.


All in all, there is perhaps little to suggest that one may reduce the concept of hell, no matter which view of hell we espouse—given, that is, that we admit its existence at all. Happily, for those who believe, there would seem to be little to suggest that one may reduce the concept of heaven either.

Monday, 21 September 2020

‘What Are We?’ Self-reflective Consciousness, Cooperation, and the Agents of Our Future Evolution

Cueva de las Manos, Río Pinturas

Posted by John Hands 

‘What are we?’ This is arguably the fundamental philosophical question. Indeed, ‘What are we?’ along with ‘Where do we come from?’ and ‘Why do we exist?’ are questions that humans have been asking for at least 25,000 years. During all of this time we have sought answers from the supernatural. About 3,000 years ago, however, we began to seek answers through philosophical reasoning and insight. Then, around 150 years ago, we began to seek answers through science: through systematic, preferably measurable, observation or experiment. 

As a science graduate and former tutor in physics for Britain's ‘Open University*’, I wanted to find out what answers science currently gives. But I couldn’t find any book that did so. There are two reasons for this.

  • First, the exponential increase in empirical data generated by rapid developments in technology had resulted in the branching of science into increasingly narrow, specialized fields. I wanted to step back from the focus of one leaf on one branch and see what the whole evolutionary tree shows us. 
  • Second, most science books advocate a particular theory, and often present it as fact. But scientific explanations change as new data is obtained and new thinking develops. 

And so I decided to write ‘the book that hadn’t been written’: an impartial evaluation of the current theories that explain how we evolved, not just from the first life on Earth, but where that came from, right back to the primordial matter and energy at the beginning of the universe of which we ultimately consist. I called it COSMOSAPIENS: Human Evolution from the Origin of the Universe* and in the event it took more than 10 years to research and write. What’s more, the conclusions I reached surprised me. I had assumed that the Big Bang was well-established science. But the more I investigated, the more I discovered that the Big Bang Theory had been contradicted by observational evidence stretching back 60 years. Cosmologists had continually changed this theory as more sophisticated observations and experiments produced ever more contradictions with the theory.

The latest theory is called the Concordance Model. It might more accurately be described as ‘The Inflationary-before-or-after-the-Hot Big Bang-unknown-27% Dark Matter-unknown-68% Dark Energy model’. Its central axiom, that the universe inflated at a trillion trillion trillion times the speed of light in a trillion trillion trillionth of a second, is untestable. Hence it is not scientific.

The problem arises because these cosmological theories are mathematical models. They are simplified solutions of Einstein’s field equations of general relativity applied to the universe. They are based on assumptions that the latest observations show to be invalid. That’s one surprising conclusion I found. 

Another surprise came when I examined the theory that has been orthodox for the last 65 years in the UK and the USA of how and why life on Earth evolved into so many different species. It is known as NeoDarwinism, and was popularised by Richard Dawkins in his bestselling book, The Selfish Gene, which argues that biological evolution is caused by genes selfishly competing with each other to survive and replicate.

NeoDarwinism is based on the fallacy of ascribing intention to an acid, deoxyribonucleic acid, of which genes are composed. Dawkins admits that this language is sloppy and says he could express it in scientific terms. But I’ve read the book twice and he never does manage to do this. Moreover, the theory is contradicted by substantial behavioural, genetic, and genomic evidence. When confronted with such evidence, instead of modifying the theory to take account of it, as a scientist should do, Dawkins lamely says ‘genes must have misfired’. 

The fact is, he couldn’t modify the theory because the evidence shows that Darwinian competition causes not the evolution of species but the destruction of species. It is cooperation, not competition, that has caused the evolution of successively more complex species.

Today, most biologists assert that we differ only in degree from other animals. I think that this too is wrong. What marked our emergence as a distinct species some 25,000 years ago wasn’t the size or shape of our skulls, or that we walked upright, or that we lacked bodily hair, or the genes we possess. These are differences in degree from other animals. What made us unique was reflective consciousness.

Consciousness is a characteristic of a living thing as distinct from an inanimate thing like a rock. It is possessed in rudimentary form by the simplest species like bacteria. In the evolutionary lineage leading to humans, consciousness increased with increasing neural complexity and centration in the brain until, with humans, it became conscious of itself. We are the only species that not only knows but also knows that it knows. We reflect on ourselves and our place in the cosmos. We ask questions like: What are we? Where did we come from? Why do we exist? 

This self-reflective consciousness has transformed existing abilities and generated new ones. It has transformed comprehension, learning, invention, and communication, which all other animals have in varying degrees. It has generated new abilities, like imagination, insight, abstraction, written language, belief, and morality that no other animal has. Its possession marks a difference in kind, not merely degree, from other animals, just as there is a difference in kind between inanimate matter, like a rock, and living things, like bacteria and animals. 

Moreover, Homo sapiens is the only known species that is still evolving. Our evolution is not morphological—physical characteristics—or genetic, but noetic, meaning ‘relating to mental activity’. It is an evolution of the mind, and has been occurring in three overlapping phases: primeval, philosophical, and scientific. 

Primeval thinking was dominated by the foreknowledge of death and the need to survive. Accordingly, imagination gave rise to superstition, which is a belief that usually arises from a lack of understanding of natural phenomena or fear of the unknown. 

It is evidenced by legends and myths, ranging from the beliefs in animism, totemism, and ancestor worship among hunter-gatherers, to polytheism in city-states in which the pantheon of gods reflected the social hierarchy of their societies, and finally to a monotheism in which other gods were demoted to angels or subsumed into one God, reflecting the absolute power of king or emperor. 

The instinct for competition and aggression, which had been ingrained over millions of years of prehuman ancestry, remained a powerful characteristic of humans, interacting with, and dominating, reflective consciousness. 

The second phase of reflective consciousness, philosophical thinking, emerged roughly 1500 to 500 BCE. It was characterised by humans going beyond superstition to use reasoning and insight, often after disciplined meditation, to answer questions. In all cultures it produced the ethical view that we should treat all others, including our enemies, as ourselves. This ran counter to the predominant instinct of aggression and competition. 

The third phase, scientific thinking, gradually emerged from natural philosophy around 1600 CE. It branched into the physical sciences, the life sciences, and medical sciences. 

Physics, the fundamental science, then started to converge, rapidly so over the last 65 years, towards a single theory that describes all the interactions between all forms of matter. According to this view, all physical phenomena are lower energy manifestations of a single energy at the beginning of the universe. This is similar in very many respects to the insight of philosophers of all cultures that there is an underlying energy in the cosmos that gives rise to all matter and energy. 

During this period, reflective consciousness has produced an increasing convergence of humankind. The development of technology has led to globalisation, both physically and electronically, in trade, science, education, politics (United Nations), and altruistic activities such as UNICEF and Médecins Sans Frontières. It has also produced a ‘complexification’ of human societies, a reduction in aggression, an increase in cooperation, and the ability to determine humankind’s future. 

This whole process of human evolution has been accelerating. Primeval thinking emerged roughly 25,000 years ago, philosophical thinking about 3,000 years ago, and scientific thinking some 400 years ago, while convergent thinking began barely 65 years ago. 

I think that when we examine the evidence of our evolution from primordial matter and energy at the beginning of the universe, we see a consistent pattern. This shows that we humans are the unfinished product of an accelerating cosmic evolutionary process characterised by cooperation, increasing complexity and convergence, and that – uniquely as far we know – we are the self-reflective agents of our future evolution. 


 

*For further details and reviews of John’s new book, see https://johnhands.com 

Editor's note. The UK’s ‘Open University’ differs from other universities through its policy of open admissions and its emphasis on distance and online learning programmes.

Monday, 14 September 2020

Poetry: The Non-linear Mathematics of History



Posted by Chengde Chen *


Things are so obvious, why can’t we see them?
We are still obsessed with developing technology
as if we wished to hasten our extinction
This is because history is deceptive
We have no understanding of the mathematics of history
hence are immersed in a linear perception of ‘progress’:
history has proved that man controls technology
so technology must do more good than harm
This has been our experience of thousands of years
thus our unshakable faith and confidence

We, of course, need to rely on history
which seems to be the only thing we have
Yet, history is not a piece of repeatable music
but more of non-linear mathematics
Some histories may be mirrors of futures
while some futures have no reflection of history at all

It is hard to establish such a non-linear understanding
as it’s so different from our intuition
Thanks to the difficulty, as a famous tale relates
Dahir, an Indian wise-man of 3000 years ago
almost made the King bankrupt his Kingdom!

One day, the chess-loving King challenged Dahir
by asking him to play the final phase of a losing battle
As it seemed impossible for anyone to turn the tables
the King promised Dahir smugly:
‘If you can win, I’ll grant you a request of any kind!’
Dahir, with his superior intelligence, did win
but he only made a very small request:
‘I would like to have some grain
placed on the chessboard in the following way:
one for the first square
two for the second square
four for the third square
and so on and so forth
so that each square is twice that of the previous one
until all sixty four squares of the chessboard are placed’


What an insignificant request, the King thought
and approved it immediately
He ordered his soldiers to bring in a sack of grain
and to place them in the way requested
When one sack was finished, another was served
Then another, and another…
until they exhausted all the grain in the Kingdom
it was still far from completing the 64 squares
The grains required are of such an astronomical quantity that
even the amount of grain in today’s world
does not come near it (over 1000 billion tonnes)!
It was the modest figures of the early counting
as well as the linear intuition about ‘history’
that made the King miscalculate the matter completely
He is still in debt to Dahir to this day!

Technological progress is this kind of exponential curve
but it is even more deceptive
It had crawled very slowly for very long in ancient times
but rose quicker and quicker in recent centuries
People, however, have considered the change linearly
assuming the rate of growth the same as the past
Hence a common-sense conviction:
we have always progressed through technology
so through it we can always progress into the future
technology has always become more and more advanced
so with it we can always be more and more powerful

Oh, the linear thinking of progress!
History is not optics
nor is the future a mirror image of the past

In the past man was a small member of the club of nature
while today, we have changed the weather, raised oceans
and created new species, as well as new forms of energy
If we cannot see such a world of difference
we are as miscalculating as the old King was!

We cannot, however, afford to miscalculate
as we would have no time even to be surprised
The surface value of history is its usefulness
The deeper value of history is to prove itself useless

The history in which we controlled technology
was only history, no matter how brilliant it was
The future may mean a ruthless breaking away from it!



Editor's note. The amount required is 2 raised to the power of 64 minus one. Wikipedia offers that the total number of grains is eighteen quintillion, four hundred and forty-six quadrillion seven hundred and forty-four trillion seventy-three billion seven hundred and nine million five hundred and fifty-one thousand six hundred and fifteen (18,446,744,073,709,551,615) and that this is “about 2,000 times annual world production”. 
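For readers who want to check the arithmetic, here is a minimal sketch in Python, following directly from the doubling rule in the poem:

    # Grains on a 64-square chessboard, doubling from a single grain:
    # 1 + 2 + 4 + ... + 2**63 = 2**64 - 1
    total = sum(2**square for square in range(64))
    assert total == 2**64 - 1
    print(f'{total:,}')  # prints 18,446,744,073,709,551,615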
 
* Chengde Chen is the author of the philosophical poems collection: Five Themes of Today, Open Gate Press, London. He can be contacted on chengde.chen@hotmail.com

Monday, 7 September 2020

‘Mary’s Room’: A Thought Experiment

Posted by Keith Tidman
Can we fully understand the world through thought and language—or do we only really understand it through experience? And if only through experience, can we truly communicate with one another on every level? These were some of the questions which lay behind a famous thought experiment of 1982:
A brilliant neurophysiologist, Mary, knows all there is to know about her academic specialty, the science of vision: the physics, biology, chemistry, physiology, and neuroscience, including how we see colour.

There’s a catch, however: Mary has lived her entire life in a totally black-and-white room, watching a black-and-white screen, and reading black-and-white books. An entirely monochromatic existence. Then, unexpectedly, her screen reveals a bright-red tomato.

What was it like for Mary to experience colour for the first time? Or, as the Australian philosopher Frank Jackson, who originated this thought experiment, asked: ‘Will [Mary] learn anything or not?’ *

Jackson’s original takeaway from his scenario was that Mary’s first-time experience of red amounted to new knowledge—despite her comprehensive scientific knowledge in the field of colour vision. Jackson believed at the time that colour perception cannot entirely be understood without a person visually experiencing colour.

However, not everyone agreed. Some proposed that Mary’s knowledge, in the absence of first-hand experience, was at best only ever going to be partial, never complete. Indeed, renowned philosopher Thomas Nagel, of ‘what is it like to be a bat’ fame, was in the camp of those who argue that some information can only be understood subjectively.

Yet, Mary's complete acquaintance with the science of vision might well be all there is to understanding the formation of knowledge about colour perception. Philosopher and neurobiologist Owen Flanagan was on board, concluding that seeing red is a physical occurrence. As he put it, 'Mary knows everything about colour vision that can be expressed in the vocabularies of a complete physics, chemistry, and neuroscience.'

Mary would not have learned anything new, then, when the bright-red tomato popped up on her screen. Through the completeness of her knowledge of the science of colour vision, she already fully knew what her exposure to the red tomato would entail by way of sensations. No qualities of the experience were unknowable. The key is in how the brain gives rise to subjective knowledge and experience.

The matter boils down to whether there are nonphysical, qualitative sensations—like colour, taste, smell, feeling, and emotion—that require experience in order for us to become fully familiar with them. Are there limits to our comprehension of something we don’t actually experience? If so, Mary did learn something new by seeing red for the first time.

A few years after Frank Jackson first presented the ‘Mary’s room’ thought experiment, he changed his mind. After considering opposing viewpoints, he came to believe that there was nothing apart from redness’s physical description, of which Mary was fully aware. This time, he concluded that first-hand experiences, too, are scientifically objective, fully measurable events in the brain and thus knowable by someone with Mary’s comprehension and expertise.

This switching of his original position was prompted, in part, by American philosopher and cognitive scientist Daniel Dennett. Dennett asserted that if Mary indeed knew ‘absolutely everything about colour’, as Jackson’s thought experiment presumes, by definition her all-encompassing knowledge would include the science behind people’s ability to comprehend the actual sensation of colour.

To these points, Mary’s factual expertise in the science of colour experience—and the experience’s equivalence and measurability in the brain—appears sufficient to conclude she already knew what red would look like. The experience of red was part of her comprehension of human cognitive functions. Not just with regard to colour, but also to the full array of human mental states: for instance, pain, sweetness, coldness, exhilaration, tedium—ad infinitum.

As Jackson ultimately concluded, the gist is that, given the special particulars of the thought experiment—Mary acquired ‘all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like red and blue’—Mary did not acquire new information upon first seeing the red tomato. She didn’t learn anything. Her awareness of redness was already complete.



* Frank Jackson, 'Epiphenomenal Qualia', Philosophical Quarterly, 32, April 1982.

Monday, 31 August 2020

Thought Experiment: How do you Price the Office Parlour Palm?

Posted by Martin Cohen
Here's one of a collection of short puzzles that might be considered an A-Z of the tricks of high finance: not so much 'P is for Parlour Palm', though, as 'C is for Cheap Collateral'.

This is the idea that if a bank agrees to loan the office parlour palm to the bank next door for a million dollars, and in return to rent their aspidistra for a million dollars, both banks can update their balance sheets accordingly!
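To make the trick concrete, here is a minimal sketch in Python, using hypothetical banks and the million-dollar figures above (not any real accounting system):

    # Two banks swap near-worthless 'collateral' at an agreed price.
    agreed_price = 1_000_000  # what each bank says the other's pot plant is worth
    bank_a_assets = 0
    bank_b_assets = 0

    # Bank A loans its parlour palm to Bank B; Bank B rents its aspidistra to Bank A.
    bank_a_assets += agreed_price  # A books a $1m claim against B
    bank_b_assets += agreed_price  # B books the mirror-image $1m claim against A

    print(bank_a_assets + bank_b_assets)  # $2,000,000 of 'assets' from two pot plants

Nothing of real value has changed hands, yet both balance sheets have grown.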

Now that's magic. But it was also the basis of the B for Bubble that brought down most of the world's banking system in 2008!

Of course, banks don't do silly stuff like buy each others' pot plants. But they do buy each others' packaged securities. And for many years, these packages became more and more complex, and thus more and more about buyer and seller agreeing on what mysterious qualities made the deal realistic. We know where that ended up: with thousands of dodgy loans to people who had no income or maybe had even died being bundled up and sold as top quality assets. Banks are plagued by problems with so-called ‘ghost’ collateral that disappears or is pledged to several lenders at the same time! After the crisis, the European Central Bank looked at the use of such devices and in a discussion paper wrote:

"the use of collateral is neither a sufficient nor a necessary condition for financial stability."*

The logicians could not have put it better!


* https://www.ecb.europa.eu/pub/pdf/scpwps/ecb.wp2107.en.pdf

Monday, 24 August 2020

The Necessity of Free Will

Eric Hanson, ArtAsiaPacific Magazine, Mar/Apr 2013
by Thomas Scarborough

I propose to solve the problem of free will.

The problem is, quite simply, the view that we live in a world where causality reigns supreme. If causality reigns supreme, then there can be no free will. And if we admit quantum indeterminacy into the picture, indeterminacy is not free will either.

I propose that the problem rests on an ancient conceptual dichotomy: the things-relations distinction. I propose, too, that this distinction is illusory. Aristotle called it the features-dispositions distinction. Wittgenstein called it the objects-arrangements distinction. We find it, too, in language (the nouns-verbs distinction), and in maths (variables-operators).

The alternative is obvious: there is no such distinction, but rather a fusion of things.  The philosopher Mel Thompson describes our world as ‘a seamless web of causality that goes forwards and backward in time and outwards in space’. ‘Seamless’, if we take it to mean exactly that, implies that there are no seams; there is no separation between things; therefore there is no relation between them.

Our reality has been variously described as an undifferentiated stream of experience, a kaleidoscopic flux of impressions, a swirling cloud without determinate shape. To make sense of this, then, we need to separate it into sounds and sights, surfaces and motions—which is to say, individual things. We take aspects of a seamless whole, and we isolate them from the whole. Once done, we are able to trace relations between them.

With this, we have the basis of causality.  But in a seamless reality, where there is a fusion of things, all things cause all things. Even the language which we speak has an urge towards such fusion. There is an ‘evil’, wrote the philosopher and statesman Francis Bacon, in defining natural and material things.  ‘Definitions themselves consist of words, and those words beget others’.  Ultimately, our words reach into everything.

In the midst of an undifferentiated expanse, therefore, we create things, and we create causes. We isolate causes from the seamless whole—and with them, effects. But these causes must always strip something off.  This is why our thinking in terms of causality—which is supremely embodied in the modern scientific method—must bring about unwanted side effects of all kinds, through stripped-off relations.

When we say that A causes B we are, as it were, placing our drawing compass on the seamless web of causality and demarcating a circle in the midst of it: 'A'.  Outside of this circle lies the entire, seamless universe, and this knows no 'things'—until we create them in its midst. And when we create them, we create the intractable problem as to what a relation actually is.  A property?  An attribute?

Someone might object. Even if we have no things, no objects, no features (and so on) with which to create causality, we still have a reality which is bound by the laws of the universe. There is therefore some kind of something which is not free. Yet every scientific law is about A causes B. Whatever is out there, it has nothing in common with such a scheme—that we can know of anyway.

One more step is required to prove free will. Every cause that I identify is a creation of my own mind, in that it is freely chosen.  I am free to create it—which is, to demarcate the circle with the drawing compass. When I say that A caused B, I omit C, D, E, and every other possible cause, with the exception of what I want to create.  This is a choice without any kind of necessity.

I fire a shot at a clay pigeon. I choose the cause, and with the cause I choose the effect, and the pigeon shatters in the sky.  Now I see a nearby church bell.   I choose the cause, and I choose the effect, and an entire village awakes from its slumbers on a drowsy afternoon.   In this lies free will.  Cause and effect might seem iron clad—yet it is itself freely chosen.

But did I not cause my causes to be created?  Are not the causes and effects we invent themselves caused in some way?  This possibility is excluded.  We would need to readmit A’s and B’s to our scheme before we could claim cause.

David Bohm wrote that quantum theory is ‘the dropping of the notion of analysis of the world into relatively autonomous parts, separately existent but in interaction’.  In fact, this applies in every sphere.  Causality is illusory.  Not only that, but to say that any such illusion is caused is to admit causality through the back door.  There is no back door. 

Monday, 17 August 2020

And the Universe Shrugged




Posted by Keith Tidman

It’s not a question of whether humankind will become extinct, but when.

To be clear, I’m not talking about a devastatingly runaway climate; the predations of human beings on ecosystems; an asteroid slamming into Earth; a super-volcano erupting; a thermonuclear conflagration; a global contagion; rogue artificial intelligence; an eventual red-giant sun engulfing us; the pending collision of the Milky Way and Andromeda galaxies. Nor am I talking about the record of short-lived survival of our forerunners, like the Neanderthals, Denisovans, and Homo erectus, all of whom slid into extinction after unimpressive spans.

Rather, I’m speaking of cosmic death!

Cosmic death will occur according to standard physics, including cosmology. Because of the accelerating expansion of the universe and the irrepressibility of entropy — the headlong plunge toward evermore disorder and chaos — eventually no new stars will form, and existing stars will burn out. The universe will become uninhabitable long before its actual demise. Eventually a near vacuum will result. Particles that remain will be so unimaginably distanced from one another that they’ll seldom, if ever, interact. This is the ultimate end of the universe, when entropy reaches its maximum or so-called thermodynamic equilibrium, more descriptively dubbed ‘heat death’. There’s no place to duck; spacefaring won’t make a difference. Nowhere in the universe is immune.

Assuredly, heat death will take trillions of years to happen. However, might anyone imagine that the timeframe veils the true metaphysical significance of universal extinction, including the extinction of humans and all other conscious, intelligent life? And does it really make a difference if it’s tens of years or tens of trillions of years? Don’t the same ontological questions about being still searingly pertain, irrespective of timescale? Furthermore, does it really make a difference if this would be the so-called ‘sixth extinction’, or the thousandth, or the millionth, or the billionth? Again, don’t the same questions still pertain? There remains, amidst all this, the reality of finality. The consequences — the upshot of why this actuality matters to us existentially — stay the same, immune to time.

So, to ask ‘what is the meaning of life?’ — that old chestnut from inquiring minds through the millennia — likely becomes moot and even unanswerable, in the face of surefire universal extinction. As we contemplate the wafer-thin slice of time that makes up our eighty-or-so-year lifespans, the question seems to make a bit of sense. That select, very manageable timeframe puts us into our comfort zone; we can assure ourselves of meaning, to a degree. But the cosmological context of cosmic heat death contemptuously renders the question about life’s purpose without an answer; all bets are off. And, in face of cosmic thermodynamic death, it’s easy to shift to another chestnut: why, in light of all this, is there something rather than nothing? All this while we may justifiably stay in awe of the universe’s size and majesty, yet know the timing and inevitability of our own extinction rests deterministically in its hands.

A more suitable question might be whether we were given, evolutionarily, consciousness and higher-order intelligence for a reason, making it possible for us to reflect on and try to make sense of the universe. And where that ‘reason’ for our being might originate: an ethereal source, something intrinsic to the cosmos itself, or other. It’s possible that the answer is simply that humankind is incidental, consigning issues like beginnings to unimportance or even nonsense. After all, if the universe dies, and is itself therefore arguably incidental, we may be incidental, too. Again, the fact that the timeframe is huge is immaterial to these inquiries. Also immaterial is whether there might, hypothetically, be another, follow-on Big Bang. Whereby the cosmological process restarts, to include a set of natural physical laws, the possible evolution of intelligent life, and, let’s not overlook it, entropy all over again.

We compartmentalise our lives, to make sense of the bits and pieces that competitively and sometimes contradictorily impact us daily. And in the case of cosmic death and the extinction of life — ours and everyone else’s possibly dotting the universe — that event’s speck-like remoteness in distant time and the vastness of space understandably mollifies. This, despite the event’s unavoidability and hard-to-fathom, hard-to-internalise conclusiveness, existential warts and all. To include, one might suppose, the end of history, the end of physics, and the end of metaphysics! This end of everything might challenge claims to any singular specialness of our and other species, all jointly riding our home planets to this peculiar end. 

Perhaps we have no choice, in the meantime, but to conduct ourselves in ways that reflect our belief systems and acknowledge the institutional tools (sociological, political, spiritual) used to referee those beliefs. As an everyday priority, we’ll surely continue to convert those beliefs into norms, to improve society and the quality of life in concrete, actionable ways. Those norms and institutions enable us to live an orderly existence — one that our minds can plumb and make rational sense of. Even though that may largely be a salve, it may be our best (realistically, only) default behaviour in contending with daily realities, ranging from the humdrum to the spectacular. We tend to practise what’s called ‘manic defence’, whereby people distract themselves by focusing on things other than what causes their anxiety and discomfort.

The alternative — to capitulate, falling back upon self-indulgent nihilism — is untenable, insupportable, and unsustainable. We are, after all, quite a resilient species. And we live every day with comparatively attainable horizons. There remains, moreover, a richness to our existence, when our existence is considered outside of extraordinary universal timeframes. Accordingly, we go on with our lives with optimism, not dwelling on the fact that something existential will eventually happen — our collective whistling past the graveyard, one might say. We seldom, if ever, factor this universal expiry date into our thinking — understandably so. There would be little to gain, on any practical level, in doing otherwise. Cosmic thermodynamic death, after all, doesn’t concern considerations of morality. Cosmic death is an amoral event, devoid of concerns about its rightness or wrongness. It will happen matter of factly.

Meanwhile, might the only response to cosmic extinction — and with it, our extinction — be for the universe and humanity to shrug?

Monday, 10 August 2020

A New Dark Age?

Genseric's Invasion of Rome, by Karl Bryullov, 1833
By Allister Marran
Are we living through a mini Dark Age in what was supposed to be a time of Enlightenment? Will history see this moment in the same light as it saw the decline of Western civilisation during the thousand lost years from 500 to 1500 AD?
The democratisation of, and free access to, information with the rise of the Internet, mobile phones, and social media should have made people smarter, more knowledgeable, and more aware of the world around them.

Being able to access information previously locked behind the paywall of a university education, a military career, or a scientific laboratory seemed to be a renaissance-like utopia ushering in the next stage of the socio-cultural evolution of humankind.

But instead, we are now ostensibly far dumber for it. Information without context is worthless, and the weaponisation of information and context by nefarious actors, who forge narratives that seek to divide and conquer instead of unite and build, has had a devastating effect on world politics and social cohesion.

The value of information rests on the trust that it is authentic, and this is where the manipulation of data has seen the biggest gains for bad actors seeking power and influence.

A false fact, or distorted perspective, can be drip-fed into the public consciousness using complex social media algorithms which identify those most willing to buy into the lie through confirmation bias. The influencers are encouraged to share and comment, thus lending the lie credibility and ensuring that the common man or woman will continue to share it downstream, until the lie becomes generally accepted as truth by those wanting to believe it.

The right uses innate prejudice and hatred to rally support for bigotry and lies, which the anonymity of the Internet and a carefully chosen Twitter handle protects like a KKK mask of old.

And with the advent of cancel culture and left wing propaganda, people are too scared to challenge obvious falsehoods that emerge on the left, for fear of being cancelled themselves, a modern version of the Salem Witch Trials if you will.

This has permeated into every facet of life, not least of which being science and education. No student or professor dare take on any controversial research project, lest they be cancelled, stripped of tenure, or harmed if their scientifically verifiable results are taken to task by anyone on the left or right. Truth is no longer important; pandering to an already decided audience is the only thing that matters.

This is how progress stalls. This is how the last dark age began, when science and truth were made to conform to the beliefs, norms, and religious conveniences of men and women.

Perhaps humankind was not meant to have unfettered access to knowledge and information, as with great power comes great responsibility, and the average person does not have the ability to filter out the nonsense and internalise the good data.

The current course on which we are headed has dire consequences for everyone. We have not existed this close together physically, yet so far apart ideologically, for almost a hundred years, and without some method of bringing us all back together again, human beings will either end up in conflict or, less desirably, enter another thousand years of Dark Ages.

Monday, 3 August 2020

Picture Post 57: A Clown's Playground



'Because things don’t appear to be the known thing; they aren’t what they seemed to be, neither will they become what they might appear to become.'

Photo credit: Rebecca Tidman
Posted by Tessa den Uyl

Putting on a clown’s nose is a subtle and non-violent gesture to distinguish, but what exactly? The red ball on the nose un-identifies its wearer immediately, almost as if to become part of another species. Clowns may be funny and dramatic, stupid and incredibly smart, poetic without prose, offensive, scary, or sweet. Clowns attain to a world that mirrors the exaggeration of our being human.

The image of the clown offers the spectator a space to de-personalise in its turn, and in this psychological game the clown creates its playground. If a clown communicates, this is by touching all the unfolded layers we carry along within ourselves.


Indeed, clowns could very well have become a branch like ‘action psychotherapy’, only that clowns are much older than psychotherapy itself. Perhaps this is why many ideas about clowns are misapprehended, and a partly negative view or childishness in their regard belongs, not that much to them, but rather to how being human has been abased by their former appearances.

Monday, 27 July 2020

Poem: Fragility

By Jeremy Dyer


Shattered Glass Shoots, by Claus Bellers.

Fragility is a foolish thing
I don’t believe you're made of glass
Stop wallowing in your suffering
It’s just a pose, now move your arse.

Your sensitivity is a sham
You’re hard as nails so drop the scam
Just pull yourself together now
You're not a sacred Indian cow.

Fragility is a hard tiara
With metal thorns to make you bleed
I don't want your psycho-drama
Just tell me what the hell you need.

Fragility is here to stay
First blowing up then tearing down
To get the child her selfish way
Bipolar like a circus clown.

Fragility, the role you wear
Spewing out your evil wrath
Mercenary, the cross you bear
Exploiting all who cross your path.

Fragility, the cruellest mask
Deceiving all with poison smile
Killing the ones you take to task
Victimising all the while.


Editors' note: In recent weeks, fragility as a social term has been covered by, among others, The Guardian, The New York Times, and The Atlantic. When an issue becomes all too familiar, poetry may infuse fresh vigour. 'The function of poetry,' wrote the linguist and literary theorist Roman Jakobson, 'is to point out that the sign is not identical to the referent.'

Monday, 20 July 2020

Miracles: Confirmable, or Chimerical?

Posted by Keith Tidman

Multiplication of the Loaves, by Georges, Mount Athos.
We are often passionately told of claims to experienced miracles, in both the religious and secular worlds. The word ‘miracle’ comes from the Latin mirari, meaning to wonder. But what are these miracles that some people wonder about, and do they happen as told?

Scottish philosopher David Hume, a sceptic on this matter, defined a miracle as ‘a violation of the laws of nature’ — with much else to say on the issue in his An Enquiry Concerning Human Understanding (1748). He proceeded to define the transgression of nature as due to a ‘particular volition of the Deity, or by the interposition of some invisible agent’. Though how much credence might one place in ‘invisible agents’?

Other philosophers, like Denmark’s Søren Kierkegaard in his pseudonymous persona Johannes Climacus, also placed themselves in Hume’s camp on the matter of miracles. Earlier, the Dutch philosopher Baruch Spinoza wrote of miracles as events whose source and cause remain unknown to us (Tractatus Theologico-Politicus, 1670). Yet countless other people around the world, of many religious persuasions, earnestly assert that the appeal to miracles is one of the cornerstones of their faith. Indeed, some three-fourths of survey respondents indicated they believe in miracles, while nearly half said they have personally experienced or seen a miracle (Princeton Survey Research Associates, 2000; Harris poll, 2013).

One line of reasoning as to whether miracles are credible might start with the definition of miracles: transgressions of natural law that scientists or other specialists cannot convincingly contest. Sufficient proof that a miracle really did occur, and was not, deus ex machina, merely imagined or the result of a misunderstanding of the laws underlying nature, is a very tall order, as surely it should be.

Purported proof would come from people who affirm they witnessed the event, raising questions about the witnesses’ reliability and motives. In this regard, one would be required to rule out obvious delusions, fraud, optical illusions, distortions, and the like; the testimony of witnesses in such matters is, understandably, often suspect. There are, moreover, demanding conditions regarding definitions and authentication — such as of ‘natural events’, where scientific hypotheses famously, but for good reason, change to conform to new knowledge acquired through disciplined investigation. These conditions lead many people to dismiss the occurrence of miracles as pragmatically untenable, requiring by extension nothing less than a leap of faith.

But a leap of faith suggests that the alleged miracle happened through the interposition of a supernatural power, like a god or other transcendent, creative force of origin. This notion of an original source gives rise, I argue, to various problematic aspects to weigh.

One might wonder, for example, why a god would have created the cosmos to conform to what by all measures is a finely grained set of natural laws regarding cosmic reality, only later to decide, on rare occasion, to intervene. That is, where a god suspends or alters original laws in order to allow miracles. The assumption being that cosmic laws encompass all physical things, forces, and the interactions among them. So, a god choosing not to let select original laws remain in equilibrium, uninterrupted, seems selective — incongruously so, given theistic presumptions about a transcendent power’s omniscience and omnipotence and omniwisdom.

One wonders, thereby, what’s so peculiarly special about humankind to deserve to receive miracles — symbolic gestures, some say. Additionally, one might reasonably ponder why it was necessary for a god to turn to the device of miracles in order for people to extract signals regarding purported divine intent.

One might also wonder, in this theistic context, whether something was wrong with the suspended law to begin with, to necessitate suspension. That is, perhaps it is reasonable to conclude from miracles-based change that some identified law is not, as might have been supposed, inalterably good in all circumstances, for all eternity. Or, instead, maybe nothing was in fact defective in the original natural law, after all, there having been merely an erroneous read of what was really going on and why. A rationale, thereby, for alleged miracles — and the imagined compelling reasons to interfere in the cosmos — to appear disputable and nebulous.

The presumptive notion of ‘god of the gaps’ seems tenuously to pertain here, where a god is invoked to fill the gaps in human knowledge — what is not yet known at some point in history — and thus by extension allows miracles to substitute for what reason and confirmable empirical evidence might otherwise and eventually tell us.

As Voltaire further ventured, ‘It is . . . impious to ascribe miracles to God; they would indicate a lack of forethought, or of power, or both’ (Philosophical Dictionary, 1764). Yet, unsurprisingly, contentions like Voltaire’s aren’t definitive as a closing chapter to the accounting. There’s another facet to the discussion that we need to get at — a nonreligious aspect.

In a secular setting, the list of problematic considerations regarding miracles grows no easier to resolve; the challenges remain knotty. A reasonable assumption, in this irreligious context, is that the cosmos was not created by a god, but rather was self-caused (causa sui). In this model, there were no ‘prior’ events pointing to the cosmos’s lineage: either a cosmos that possesses integrally within itself a complete explanation for its existence, or a cosmos that has no beginning — a boundless construct having existed infinitely.

One might wonder whether a cosmos’s existence is the default, stemming from the cosmological contention that ‘nothingness’ cannot exist, implying no beginning or end. One might further ponder how such a cosmos — in the absence of a transcendent force powerful enough to tinker with it — might temporarily suspend or alter a natural law in order to accommodate the appearance of a happening identifiable as a miracle. I propose there would be no mechanism to cause such an alteration to the cosmic fabric to happen. On those bases, it may seem there’s no logical reason for (no possibility of) miracles. Indeed, the scientific method does itself call for further examining what may have been considered a natural law whenever there are repeated exceptions or contradictions to it, rather than assuming that a miracle is recurring.

Hume proclaimed that ‘no testimony is sufficient to establish a miracle’; centuries earlier, Augustine of Hippo had articulated a third and broader take on the subject. He pointedly asked, ‘Is not the universe itself a miracle?’ (The City of God, 426 AD). Here, one might reasonably interpret ‘a miracle’ as a synonym for a less emotionally charged, temporal superlative like ‘remarkable’. I suspect most of us agree that our vast, roiling cosmos is indeed a marvel, though debatably not one necessitating an originating spiritual framework like Augustine’s.

No matter how supposed miracles are perceived, internalised, and retold, the critical issue of what can or cannot be confirmed dovetails to an assessment of the ‘knowledge’ in hand: what one knows, how one knows it, and with what level of certainty one knows it. So much of reality boils down to probabilities as the measuring stick; the evidence for miracles is no exception. If we’re left with only gossamer-thin substantiation, or no truly credible substantiation, or no realistically potential path to substantiation — which appears the case — claims of miracles may, I offer, be dismissed as improbable or even phantasmal.

Monday, 13 July 2020

Staring Statistics in the Face

By Thomas Scarborough

George W. Buck’s dictum has it, ‘Statistics don’t lie.’ Yet the present pandemic should give us pause. The statistics have been grossly at variance with one another.

According to a paper in The Lancet, statistics ‘in the initial period’ estimated a case fatality rate or CFR of 15%. Then, on 3 March, the World Health Organisation announced, ‘Globally, about 3.4% of reported COVID-19 cases have died.’ By 16 June, however, an epidemiologist was quoted in Nature, ‘Studies ... are tending to converge around 0.5–1%’ (now estimating the infection fatality rate, or IFR).

Indeed it is not as simple as all this—but the purpose here is not to side with any particular figures. The purpose is to ask how our statistics could be so wrong. Wrong, rather than, shall we say, slanted. The statistical errors have been of a magnitude that is hard to believe. A two-fold error should be an enormity, let alone ten-fold, or twenty-fold, or more.

The statistics, in turn, have had major consequences. The Lancet rightly observes, ‘Hard outcomes such as the CFR have a crucial part in forming strategies at national and international levels.’ This was borne out in March, when the World Health Organisation added to its announcement of a 3.4% CFR, ‘It can be contained—which is why we must do everything we can to contain it’. And so we did. At that point, human activity across the globe—sometimes vital human activity—came to a halt.

Over the months, the figures have been adjusted, updated, modified, revised, corrected, and in some cases, deleted. We are at risk of forgetting now. The discrepancies over time could easily slip our attention, where we should be staring them in the face.

The statistical errors are a philosophical problem. Cambridge philosopher Simon Blackburn points out two problems with regard to fact. Fact, he writes, 'may itself involve value judgements, as may the selection of particular facts as the essential ones'. The first of these problems is fairly obvious. For example, ‘Beethoven is overrated’ might seem at first to represent a statement of fact, where it really does not. The second problem is critical. We select facts, yet do so on a doubtful basis.

Facts do not exist in isolation. We typically insert them into equations, algorithms, models (and so on). In fact, we need to form an opinion about the relevance of the facts before we even seek them out—learning algorithms not excepted. In the case of the present pandemic, we began with deaths ÷ cases × 100 = CFR. We may reduce this to the equation a ÷ b × 100 = c. Yet notice that we have selected the variables a, b, and c, to the exclusion of all others (say, x, y, or z).
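To see how much hangs on this selection, here is a minimal sketch in Python, using purely illustrative figures of my own (not real pandemic data), of how the very same count of deaths yields strikingly different percentages depending on whether we divide by confirmed cases (the CFR) or by estimated total infections (the IFR):

# Minimal sketch: how the choice of denominator changes the rate.
# All figures below are illustrative assumptions, not real pandemic data.

def fatality_rate(deaths, denominator):
    # deaths ÷ denominator × 100, i.e. the rate expressed as a percentage
    return deaths / denominator * 100

deaths = 50                      # a: deaths observed
confirmed_cases = 1_000          # b: laboratory-confirmed cases (CFR denominator)
estimated_infections = 10_000    # b': estimated total infections (IFR denominator)

print(f"CFR: {fatality_rate(deaths, confirmed_cases):.1f}%")       # prints 5.0%
print(f"IFR: {fatality_rate(deaths, estimated_infections):.1f}%")  # prints 0.5%

The arithmetic is trivial; the philosophical work lies entirely in deciding which denominator counts as the ‘essential’ fact.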

What then gave us the authority to select a, b, and c? Before we make any such selection, we need to 'scope the system'. We need to demarcate our enterprise, or we shall easily lose control of it. One cannot introduce any and every variable into the mix. Again, in the words of Simon Blackburn, it is the ‘essential’ facts we need. This requires wisdom—a wisdom we cannot do without. In the words of the statistician William Briggs, we need ‘slow, maturing thought’.

Swiss Policy Research comments on the early phase of the pandemic, ‘Many people with only mild or no symptoms were not taken into account.’ This goes to the selection of facts, and reveals why statistics may be so deceptive. They are facts, indeed, but they are selected facts. For this reason, we have witnessed a sequence of events over recent months, something like this:
At first, we focused on the case fatality rate, or CFR
Then we took into account the infection fatality rate, or IFR
Then we took social values into account (which led to some crisis of thought)
Now we take non-viral fatalities into account (which begins to look catastrophic)
This is too simple, yet it illustrates the point. Statistics require the wisdom to tell how we should delineate relevance. Statistics do not select themselves. Subjective humans do it. In fact, I would contend that the selection of facts in the case of the pandemic was largely subconscious and cultural. It stands to reason that, if we have dominant social values, these will tend to come first in our selection process.

In our early response to the pandemic, we quickly developed a mindset—a mental inertia which prevented us from following the most productive steps and the most adaptive reasoning. Every tragic death reinforced this mindset, and distracted us. Time will tell, but today the general projection is that far more people will die through our response to the pandemic than died from the pandemic itself—let alone the suffering.

The biggest lesson we should take away from this is that we humans are not rational. Knowledge, wrote Confucius, is to know both what one knows and what one does not know. We do not know how to handle statistics.
