Monday, 7 November 2022

Free Will, the ‘Block Universe’, and Eternalism

In this image, the light trail left by traffic illustrates an idea central to the block universe theory of time: that the past, present, and future coexist.  

By Keith Tidman

The block universe is already filled with every event that ever happens. It is where what are traditionally dubbed the past, present, and future exist simultaneously, rather than classically flowing linearly from one to the other. As such, these three distinct aspects of time, which by definition exclude the notion of tense, are equally real. None is in any way advantaged over the others.


The orthodox model of a ‘block universe’ describes a four-dimensional universe, resembling a cube, which merges the three dimensions of space and one of time, along the lines that Albert Einstein theorised in his special relativity.


Might this tell us something about the possibility of free will in such a universe? Before we try to answer, let’s explore more particulars about the block universe itself.

 

If observed from outside, the block would appear to hold all of space and time. The spacetime coordinates of someone’s birth and death — and every occurrence bracketed in between — accordingly exist concurrently somewhere within the block. The occurrences are inalterably and forever in the block. This portrayal of foreverness is sometimes referred to as ‘eternalism’, defined as the complete history of all events.

 

Conventionally, the block is considered static. But maybe it’s not. What if, for example, what we ordinarily call ‘time’ is better called change? After all, the second law of thermodynamics tells us that the entropy of the universe as a whole — a measure of its disorder — always undergoes a net increase. It never decreases, until, that is, the universe ultimately ends. Change, as in the case of entropy, thus moves inexorably in one direction. The inevitability of such change has a special place for humankind, as reality transforms.

 

Entropy is thus consummate change, on a cosmic scale, which is how the illusion of something we call ‘the arrow of time’ manifests itself in our conscious minds. As such, change, not time, is what is truly fundamental in nature. Change defines our world. Which, in turn, means that what the block universe comprises is necessarily dynamical and fluid, rather than frozen and still. By extension, such a dynamical block universe challenges the concept of eternalism.

 

This also means that cause and effect exist (as does correlation) as fundamental features of a universe in which ‘becoming’, in the form of change, is rooted. Despite past, present, and future coexisting within the block universe, causes still necessarily precede, and can never follow, the effects of what appears as relentless change. Such change serves, in place of illusory time, as one axis matched with three-dimensional space. The traditional picture of the block universe comprising nondynamical events would contradict the role of cause in making things happen.

 

So, let’s return to the issue of free will within the block universe.

 

First off, the block universe has typically been described as deterministic. That is, if every event within the universe exists simultaneously at the precise space and time coordinates the model calls for, then everything has been inescapably preordained, or predetermined. It all just is. Free will in such a situation becomes every bit as much an illusion as time.

 

But there’s a caveat pushing back against that last point. In the absence of free will, humans would resemble automatons. We would be contraption-like assemblages of parts that move but lack agency, devoid of meaningful identity and true humanity. We, and events, could be seen as two-dimensional set pieces on a stage, deterministically scripted. With no stage direction or audience — and worse, no meaning. Some might proclaim that our sense of autonomy is yet another illusion, along with time. But I believe, given our species’ active role within this dynamical cosmos, that reality is otherwise.

 

Further, determinism would let us off the hook of accountability and consequences. Fate, bubbling up from nature’s supposed mechanistic forces, would situate us in a world stripped of responsibility. A world in which our lives are pointlessly set to automatic. Where the distinction between good and evil becomes fuzzy. In this world, ethical norms are arbitrary and fickle — a mere stage prop, giving the appearance of consequences to actions.

 

And yet, the blueprint above, replacing the concept of time with that of change, puts free will back into play, allowing a universe in which our conscious minds freely make decisions and behave accordingly. Or, at least, seemingly so. In particular, for there to be events at the space-change coordinates of the block universe, there must be something capable of driving (causing) change. The events aren’t simply fated. That ‘something’ can only be choice associated with truly libertarian free will.

 

There’s one other aspect to free will that should be mentioned. Given that motion within the three-dimensional space of the block universe can occur, not only the what but also the where of events can be changed. Again, agency is required to freely choose. It’s like shuffling cards: the cards remain the same, but their ‘coordinates’ (location) change.

 

In refutation of determinism, the nature of change as described above allows that what decisions we make and actions we take within the block universe are expressions of libertarian free will. Our choices become new threads woven through the block universe’s fabric — threads that prove dissoluble, however, through the ceaselessness of change.

 

Monday, 31 October 2022

Beetle in a Box: A Thought Experiment


By Keith Tidman


Let’s hypothesise that everyone in a community has a box containing a ‘beetle’. Each person can peer into only his or her box, and never into anyone else’s. Each person insists, upon looking into their own box, that they know what a ‘beetle’ is.

But there’s a catch: each box might contain something different from some or all of the others; each box might contain something that continually changes; or each box might actually contain nothing at all. Yet, upon being asked, each person resolutely continues to use the word ‘beetle’ to describe what’s in their box, refusing, even if probed, to describe more fully what they see, and never showing it. The word ‘beetle’ thus simply means ‘that thing inside a person’s box’.

So, what does the thought experiment, set out by the influential twentieth-century philosopher Ludwig Wittgenstein in his book Philosophical Investigations, tell us about language, mind, and reality?

As part of this experiment, Wittgenstein introduced the concept of a ‘private language’: a language with a vocabulary and structure that only its originator and sole user understands, untranslatable and obscure to everyone else. The notion is analogous to the language a person might use in attempting to convey his or her unique experiences, perceptions, and senses — the person’s individualised mental state. One criticism of such a personal language, however, is that, being mostly unfathomable to others, it fails the definitional purpose of a working language as we commonly know it: to communicate with others, using mutually agreed-upon and comprehended guidelines.

Notably, however, the idea of a ‘private language’ has been subject to different interpretations over the years — beyond that of expressing one’s own mental state to others — on account of what some people have held are its inherent ambiguities. Even on its surface, such a private language does seem handicapped, inadequate for faithfully representing external reality among multiple users. It is a language unable to tie external reality to ‘internal’ reality — to a person’s ‘immediate private sensations’, as Wittgenstein put it, such as pain the individual feels. That is, to the user’s subjective, qualitative state of mind. Yet the idea that people’s frames of mind, subjective experiences, and sense of awareness are unknowable by others, or at least uncertainly known, seems to come to us quite naturally.

Conventionally speaking, we become familiar with what something is because of its intrinsic physical characteristics. That ‘something’ has an external, material reality, comfortably and knowingly acknowledged by others in accordance with the norms of the community. The something holds to the familiar terms of the ‘public language’ we use to describe it. It conveys knowledge. It denotes the world as we know it, built up through the habitual awareness of things and events. There’s a reassuringly objective concreteness to it.

So, if you were to describe to someone else some of the conventional features of, say, a sheet of paper or an airplane or a dog, we would imagine that other people could fathom, with minimal cognitive effort and without bewilderment, what you were describing. A ‘private language’ can’t do any of that, denying us a universally agreed-upon understanding of what Wittgenstein’s beetle-in-the-box might actually be. To the point about effectiveness, a ‘private language’ — where definitions of terms may be arbitrary, unorthodox, imprecise, and unfamiliar — differs greatly from a ‘public language’ — where definitions of terms and syntactical form stick to conventional doctrine.

Meanwhile, such a realisation about the shortcomings of a ‘private language’ points to an analogy applicable to a ‘shared’ (or public) language: What happens in the case of expressing one’s personal, private experiences? Is it even possible to do so in an intelligible fashion? The discussion now pivots to the realm of the mind, interrogating aspects such as perception, appearance, attention, awareness, understanding, belief, and knowledge.

For example, if someone is in pain, or feeling joy, fear, or boredom, what’s actually conveyed and understood in trying to project their situation to other people? It’s likely that only they can understand their own mental state: their pain, joy, fear, or boredom. And any person with whom they are speaking, while perhaps genuinely empathetic and commiserative, in reality can only infer the other individual’s pain while understanding only their own.

Put another way, neither person can look into the other’s ‘box’; neither can reach into the other’s mind and hope to know. There are epistemic (knowledge-related) limits to how familiar we can be with another person’s subjective experience, even to the extent of validating that experience. Pain, joy, fear, and boredom are inexpressible and incomprehensible, beyond rough generalisations and approximations, whether one resorts to a ‘private’ or a public language.

What’s important is that subjective feelings lack form — like the mysterious ‘beetle’. They lack the concrete, external reality mentioned previously. The reason is that your feelings and those of the other person are individualised, qualitative, and subjective. They are what philosophy of mind calls qualia. Your worry, pleasure, pride, and anxiety likely don’t squarely align with mine or the next person’s. We default, as Wittgenstein put it, to a ‘language game’ with consequences, with its own puzzling syntactical rules and lexicon. And with that game comes the challenge of translating reality into precise, logical, decipherable meaning.

All of which echoes Wittgenstein’s counsel against the inchoate, rudimentary notion of a ‘private language’, precisely because it lacks the necessary social, cultural, historical, and semiotic context: a social backdrop whereby a language must be predictably translatable into coherent concepts (with the notable exception of qualia), giving things identifiable, inherent form readily perceived by others, according to the norms of social engagement and shared discourse within a community.

Shape-shifting ‘beetles’ are a convenient analogue of shape-shifting mental states, reflecting the altering ways our qualitative, subjective states of mind influence our choices and behaviours, through which other people develop some sense of our states of mind and of how they may define us — a process that, because of its mercurial nature, is seldom reliable. The limitations of Wittgenstein’s ‘private language’ discussed here arguably render such a medium of communication unhelpful to this process.

We make assumptions, based on looking in the box at our metaphorical beetle (the thing or idea or sensation inside), that will uncover a link: a connection between internal, subjective reality — like the pain that Wittgenstein’s theorising demonstrably focused on, but also happiness, surprise, sadness, enthrallment, envy, boredom — and external, objective reality. However, the dynamics of linguistically expressing qualitative, individualised mental states like pain need to be better understood.

So, what truths about others’ states of mind are closed off from us, because we’re restricted to looking at only our own ‘beetle’ (experience, perception, sensation)? And because we have to reconcile ourselves to bridging gaps in our knowledge by imperfectly divining, based on externalities like behaviour and language, what’s inside the ‘boxes’ (minds) of everyone else?

Monday, 17 October 2022

Science and Humanity

by Allister John Marran


NASA's Double Asteroid Redirection Test, 26 September 2022

We have officially transitioned backwards as a species into an era of personal belief over facts, of emotion over intellect, of blind trust over earned authority.

We have striven to become significantly more fallible by sitting at the camp fire exchanging stories, choosing first to believe and then to explore a honeycomb of fictional realities.

Our little rock doesn't stand still. It moves at thousands of kilometres an hour around the sun, while rotating constantly on its own axis, with gravity pulling eternally — and yet some very clever people place explosives in a cylinder and fire it upwards, breaking free of our planet and then moving at thirty times the speed of sound to another celestial body, which is likewise orbiting the sun while rotating on its own axis.

They aim the rocket at an empty point in space, knowing that the other body will arrive there at the exact moment the rocket does.

We can do all of that, and do it safely and reliably, not because of faith or emotion, not because of belief or trust. The numbers tell us it will be there, every time, to the second.

Science and mathematics do not care about your feelings or your complex personal belief structures. They do not worry about offending people or massaging one’s scruples. They simply and succinctly solve a physical or theoretical problem as efficiently as possible.

Mathematics is the universal language. Unchanging and uncompromised.

But emotion and belief and trust are the language of mankind. They are what makes us human: a most endearing quality that allows love and hate, care and neglect, laughter and crying, great triumph and cruelty.

The great works of Shakespeare and Tolkien and King and Koontz could simply not be written in the language of maths. They require a suspension of disbelief and an emotional core.

Because human behaviour hardly ever adds up.

But our strength is our weakness, and it's the exploitation of these analogue traits which has led us to place a greater importance on our beliefs than the facts.

More than ever, nefarious actors are taking the political, religious or social stage, and asking you to forget the truth, ignore the facts, trample the math, destroy the science and just believe them.

Trust them implicitly. Don't over-think, don't look too deeply, don't add it up or use common sense to interrogate the facts. Just trust them.

And so we now live in an age where conspiracy theorists can mobilise an army, televangelists can ask their congregation for another eight hundred million to buy another jet, politicians can command more loyalty the more they lie and cheat and thieve, and Finding Bigfoot can enter a twelfth season without ever finding Bigfoot.

It's not necessary to destroy your humanity in order to defeat these exploitative forces trying to cajole you into believing nonsense. You don't have to stop your suspension of disbelief, or temper your emotion, or stop loving the ones you love.

You just need to compartmentalise or segment various types of knowledge and activity, and treat each one a little differently.

When you read Shakespeare or watch a romantic comedy or praise your God or watch your football team, let it all out, go to town, laugh and weep and give it your best.

But don't ever give a person the keys to your soul or your belief structure. Don't allow a politician to get you worked up. Don't let your guard down when you need to keep your wits.

Know when to use the language of people or the language of maths and science. Become fully bilingual and know when to change between the two.

Monday, 10 October 2022

I Stand in the Middle of the Ocean

by Tioti Timon *



In the middle of the ocean I stand
without anyone to help.
Days, months, and years have left me behind.
I search for my home,
I call you by name – Kiribati, Where are you?
Hear the voice of my song.
Rise up, rise up, you the centre of the world.
Arise from the depth of the Ocean
So you may be seen from afar
Be lifted higher, and higher
With no friends to help me
They left me days and years ago


—Tom Toakai



In the middle of the ocean means ‘the deep sea’ or ‘deep void’ where feet cannot stand. Standing in the middle of the deep sea means living without a grounding, or strong foundation to stand on. The tone of this song harks back to the 1960s, when Kiribati was still under the British Empire. With its limited natural resources, Kiribati relied on the phosphate island of Banaba as the only resource for its economic development.

However, the phosphate was mined by the British, and when it was exhausted, they granted us independence, and left us with a legacy that ignored economic development.** The impact of climate change reflects the continuous roughshod treatment of poor and small island nations like Kiribati, by the powerful nations of the developed world. We have been ignored, and now we are paying the cost of what rich countries are doing for their own benefit, development, and security.

The second line, ‘I stand without anyone to help, days and years’, expresses the complaint of the people of Kiribati, after being used, and then left to stand on their own without a single viable industry on the islands. Being left by the British with limited resources has made it hard for the people of Kiribati to develop their country. Even though Kiribati is poor, and constantly oppressed and victimised by the impacts of climate change, the song encourages the people to fight for their land, their rights and their freedoms.

The Kiribati phrase, ‘Ko mena ia?’ literally means ‘Kiribati, where are you?’ In this song, the composer reminds his people to call out the name of their country, which seems to be lost in the middle of the Pacific Ocean, after being left helpless by the British, and at the same time destroyed by the ignorance of rich countries. The composer suggests that calling out the name of a country is a source of strength, to enable its inhabitants to stand up for their country.

Even though Kiribati was left with very little, we should own the name of our country, and not accept that the situation is lost and hopeless, because we have our islands, and we also have our ocean—our home and refuge, our well-being and our future. We should not remain silent, but must keep on calling the name of our islands to rise from beneath the ocean. This means that we must not rely on other sources to build our lives, but rather to build with our own lands, culture, and ways of living. As islanders, we must return to our home, the home of our ancestors, our cultural ways of living, built by our own ancestral wisdom and knowledge, and not by foreigners.

‘Rise up from the ocean,’ serves to remind the people to rise and stand on their own feet, utilising their own knowledge and skills to bring out what is there in their ocean. It is a wake-up call to the new generation who are caught up with the influences of a new civilisation that replaces traditional ways of living.

‘Arise, arise from the bottom of the ocean, so that you will be seen by those from afar’ is thus a challenge for the islands to rise up, not only to cry out for help, but to do something about it for themselves. It is a call for action by the islanders themselves, to rise up as lights to the world, to tell the world that ‘We are the sea, we are the ocean, we must wake up to this ancient truth and together use it.’

We have a freedom which must not be allowed to be taken away again. We must not allow others to determine our own future, but rather create a future that matches our own plans and dreams. We need to learn from our experiences, the impact of globalisation and climate change ‘to cherish our identities and rediscover ourselves as guardians of the best for the next generations’.***




* Rev. Dr. Tioti Timon is principal of Tangintebu Theological College in the Central Pacific.
** Tabai, Ieremia. ‘A Kiribati View’, 1987, p. 42. *** 9th Assembly of Pacific Conference of Churches Report, 2007, p. 17.

Monday, 3 October 2022

Picture Post # 79: Home



'Because things don’t appear to be the known thing; they aren’t what they seemed to be
neither will they become what they might appear to become.'

 

Posted by Priyanka Gupta *

 

 
Slum Dweller in Bangalore. Photo by Priyanka Gupta
 
Slums fill Bangalore city. Most of the people staying in them are construction workers who set up temporary abodes near the construction area. Sometimes you will see the slum dwellers going on with their regular activities out on the road. What option do they have?

The scene seems so complete and wholesome, with the little girl happily enjoying her playtime in the toy car while watching her mother cook. But if I had to cook on the street on a temporary earthen chulha every day, I would pull my hair out. Or would I?


* Priyanka Gupta, a former investment banker, writes about alternative ways of living, learning, and exploring. Read more about her at On My Canvas.

Monday, 26 September 2022

Where Do Ideas Come From?


By Keith Tidman

Just as cosmic clouds of dust and gas, spanning many light-years, serve as ‘nurseries’ of new stars, could it be that the human mind similarly serves as a nursery, where untold thought fragments coalesce into full-fledged ideas?

At its best, this metaphor for bringing to bear creative ideas would provide us with a different way of looking at some of the most remarkable human achievements in the course of history.

These are things like Michelangelo’s inspired painting, sculpting, architecture, and engineering. The paradigm-shifting science of Niels Bohr and Max Planck in developing quantum theory. The remarkable compositions of Mozart. The eternal triumvirate of Socrates, Plato, and Aristotle — whose intellectual hold remains to this day. The piercing insights into human nature memorably expressed by Shakespeare. The democratic spread of knowledge achieved through Gutenberg’s printing press. And so many more, of course.

To borrow from Newton (with his nod to the generations of luminaries who set the stage for his own influences upon science and mathematics), might humbler souls, too, learn to ‘stand on the shoulders of such giants’, even if in less remarkable ways? Yet still to reach beyond the rote? And, if so, how might that work?

I would say that, for a start, it is essential for the mind to be unconstrained by conformity and orthodox groupthink in viewing and reconceiving the world: a quest for patterns. The creative process must not be sapped by concern over not getting endeavours right the first or second or third time. Doubting ideas, putting them to the test through decomposition and recomposition, adds to the rigour of those that survive exploration and scrutiny.

To find solutions that move significantly beyond the prevailing norms requires the mind to be undaunted, undistracted, and unflagging. Sometimes, how the creative process starts out — the initial conditions, as well as the increasing numbers of branching paths along which those conditions travel — greatly shapes eventual outcomes; other times, not. All part of the interlacing of analysis and serendipitous discovery. I think that tracing the genealogy of how ideas coalesce informs that process.

For a start, there’s a materialistic aspect to innovative thought, whereby the mind is demystified, no longer some unmeasurable, ethereal other. That is, ideas are the product of neuronal activity in the fine-grained circuitry of the brain, where hundreds of trillions of synapses, acting like switches and routers and storage devices, sort out and connect thoughts and deliver clever solutions. Vastly more synapses, one might note, than there are stars in our Milky Way galaxy!

The whispering unconscious mind, present in reposed moments such as twilight or midnight or simply gazing into the distance, associated with ‘alpha brain waves’, is often where creative, innovative insights dwell, being readied to emerge. It’s where the critical mass of creative insights is housed, rising to challenge rigid intellectual canon. This activity finds a force magnifier in the ‘parallel processing’ of others’ minds during the frothy back and forth of collaborative dialogue.

The panoply of surrounding influences helps the mind set up stencils for transitioning inspiration into mature ideas. These influences may germinate from individuals in one’s own creative orbit, or as inspiration derived from the culture and community of which one is a part. And synthesising creative ideas across fields, through multidisciplinary teams whose members complement one another, works effectively to kindle fresh insights and solutions.

Thoughts may be collaboratively exchanged within and among teams, pushing boundaries and inciting vision and understanding. It’s incremental, with ideas stepwise building on ideas in the manner famously acknowledged by Newton. Ultimately, at its best the process leads to the diffusion of ideas, across communities, as grist for others engaged in reflection and the generation of new takes on things. Chance happenings and spontaneous hunches matter, too, with blanks cooperatively filled in with others’ intuitions.

As an example, consider that, in a 1959 talk, the Nobel prize-winning physicist Richard Feynman challenged the world to shrink text to such an extent that the entire twenty-four-volume Encyclopedia Britannica could fit onto the head of a pin. (A challenge perhaps reminiscent of the whimsical question about ‘the number of angels fitting on the head of a pin’, originally intended to mock medieval scholasticism.) Feynman believed there was no reason technology couldn’t be developed to accomplish the task. The challenge was met, through the scaling of nanotechnology, two and a half decades later. Never say never, when it comes to laying down novel intellectual markers.

I suggest that the most fundamental dimension to the origination of such mind-stretching ideas as Feynman’s is curiosity — to wonder at the world as it has been, as it is now, and crucially as it might become. To doggedly stay on the trail of discovery through such measures as what-if deconstruction, reimagination, and reassembly. To ferret out what stands apart from the banal. And to create ways to ensure the right-fitting application of such reinvention.

Related is a knack for spotting otherwise secreted links between outwardly dissimilar and disconnected things and circumstances. Such links become apparent as a result of combining attentiveness, openness, resourcefulness, and imagination. A sense that there might be more to what’s locked in one’s gaze than what immediately springs to mind. Where, frankly, the trite expression ‘thinking outside-the-box’ is itself an ironic example of ‘thinking inside-the-box’.

Forging creative results from the junction of farsightedness and ingenuity is hard — to get from the ordinary to the extraordinary is a difficult, craggy path. Expertise and extensive knowledge are the metaphorical cosmic dust required to coalesce into the imaginatively original ideas sought.

A case in point is the technically grounded Edison, blessed with vision and critical-thinking competencies, who experienced a prolific string of inventive, life-changing eureka moments. Another example is Darwin, prepared to arrive at his long-marinating epiphany about the brave world of ‘natural selection’. Such incubation of ideas, venturing into uncharted waters, has proven immensely fruitful.

Thus, the ‘nurseries’ of thought fragments, coalescing into complex ideas, can provide insight into reality — and grist for future visionaries.

Monday, 19 September 2022

Neo-Medievalism and the New Latin

By Emile Wolfaardt

Medieval Latin (or Ecclesiastical Latin, as it is sometimes called) was the primary language of the church in Europe during the Dark Ages. The Bible and its laws and commands were all in Latin, as were the punishments to be meted out for those who breached its dictates. This left interpretation and application up to the proclivities of the clergy. Because the populace could not understand Latin, there was no accountability for those who wielded the Latin sword.

We may have outgrown the too-simplistic ideas of infanticidal nuns and the horror stories of medieval torture devices (for the most part, anyway). Yet the tragedy of the self-serving ecclesiastical economies, the gorgonising abuse of spiritual authority, the opprobrious intrusion of privacy, and the disenfranchisement of the masses still cast a dark shadow of systemic exploitation and widespread corruption over that period. The few who were born into the ranks of the bourgeoisie ruled with deleterious absolutism and no accountability. The middle class was all but absent, and the subjugated masses lived in abject poverty without regard or recourse, with no pathway to better their station in life. It was effectively a two-class social stratification system that enslaved by keeping people economically disenfranchised and functionally dependent. Their beliefs were defined, their behaviour was regulated, and their liberties were determined by those whose best interest was to keep them stationed where they were.

It is the position of this writer that there are some alarming perspectives and dangerous parallels to that abuse in our day and age that we need to be aware of.

There has been a gargantuan shift in the techno-world that is obfuscatory and ubiquitous. With the ushering in of the digital age, marketers realised that the more information they could glean from our choices and conduct, the better they could influence our thinking. They started analysing our purchasing history, listening to our conversations, tracking key words, and identifying our interests. They learned that people who say or text the word ‘camping’ may be in the market for a tent, and that people who buy rifles, are part of a shooting club, and live in a particular area are more likely to affiliate with a certain party. They learned that there was no such thing as excess data – that all data is useful and could be manipulated for financial gain.

Where we find ourselves today is that the marketing world has ushered in a new economic model that sees human experiences as free raw material to be taken, manipulated, and traded at will, with or without the consent of the individual. Google's vision statement for 2022 is ‘to provide access to the world's information in one click’. Everything is garnering your data: your heart rate read by your watch, your texts surveyed by your phone’s software, your words recorded by the myriad listening devices around you, your location identified by twenty apps on your phone, by your GPS, by your doorbell, and by the security cameras around your home. And we even pay for these things. It is easier to find a route using a GPS than a map, and the convenience of smart technology seems, at first glance anyway, like a reasonable exchange.

Our data is being harvested systematically, and sold for profit without our consent or remuneration. Our search history, buying practices, biometric data, contacts, location, sleeping habits, exercise routine, self-discipline, the articles we pause our scrolling to peruse, even whether we use exclamation marks in our texts – the list continues almost endlessly – a trillion other bits of data are recorded each day. This data is then analysed for behavioural patterns, organised to manipulate our choices, and sold to assist advertisers to prise the hard-earned dollars out of our hands. It is written in a language very few people can understand, imposed upon us without our understanding, and used for financial gain by those who do not have our best interest at heart. Our personal and private data is then traded for profit without our knowledge, consent, or benefit.

A new form of economic oppression has emerged, ruthlessly designed, implemented by the digital bourgeoisie, and built exclusively on harvesting our personal and private data – and we gladly exchanged it for the conveniences on offer. As a society, we have been gaslighted into accepting this new norm. We are fed the information they choose to feed us, are subject to their manipulation, and are simply fodder for their profit machine. We are indeed in the oppressive age of Neo-Medievalism, and computer code is the new Latin.

It seems to have happened so quickly, permeated our lives so completely, and all without our knowledge or consent.

But it is not hopeless. As oppressive as the Dark Ages were, that period came to an end. Why? Because there were people who saw what was happening, vocalised and organised themselves around a healthier social model, and educated themselves about human rights, oppression, and accountable leadership. After all – look at us now. We were born out of that period by those who ushered in the Enlightenment and ultimately Modernity.

Reformation starts with being aware, with educating oneself, with speaking up, and with joining our voices with others. There is huge value to this digital age we have wholeheartedly embraced. However, instead of allowing it to oppress us, we must take back control of our data where we can. We must do what we need to, to maximise the opportunities it provides, join with those who see it for what it is, help others to retain their freedom, and be a part of the wave of people and organisations looking for integrity, openness, and redefinition in the process. The digital age with its AI potential is here to stay. This is good. Let’s be a part of building a system that serves the needs of the many, that benefits humanity as a whole, and that lifts us all to a better place.

Monday, 12 September 2022

The Uncaused Multiverse: And What It Signifies


By Keith Tidman

Here’s an argument that seems like common sense: everything that exists has a cause; the universe exists; and so, therefore, the universe has a cause. A related argument goes on to say that the events that led to the universe must themselves ultimately originate from an uncaused event, bringing the regress of causes to a halt.

But is such a model of cosmic creation right?


Cosmologists assert that our universe was created by the Big Bang, an origin story developed by the Belgian physicist and Catholic priest Georges Lemaître in 1931. However, we ought not to confuse the so-called singularity — a tiny point of infinite density — and the follow-on Big Bang event with creation or causation per se, as if those events preceded the universe. Rather, they were early components of a universe that by then already existed, though in its infancy.

It’s often considered problematic to ask ‘what came before the Big Bang’, given the event is said to have led to the creation of space and time (I address ‘time’ in some detail below). By extension, the notion of nothingness prior to the Big Bang is equally problematic, because, correctly defined, nothingness is the total, absolute absence of everything — even energy and space. Although cosmologists claim that quantum fluctuations, or short bursts of energy in space, allowed the Big Bang to happen, we are surely then obliged to ask what allowed those fluctuations to happen.

Yet, it’s generally agreed you can’t get something from nothing. Which makes it all the more meaningful that by nothingness, we are not talking about space that happens to be empty, but rather the absence of space itself.

I therefore propose, instead, that there has always been something, an infinity where something is the default condition, corresponding to the impossibility of nothingness. Further, nothingness is inconceivable, in that we are incapable of visualising it. As soon as we attempt to imagine nothingness, our minds — through the very act of thinking about it — turn the abstraction of ‘nothingness’ into the concreteness of ‘something’: a thing with features. We cannot resist that outcome, for we have no basis in reality or experience to match up with this absolute absence of everything, including space, no matter how hard we try to picture it in our mind’s eye.

The notion of infinity in this model of being excludes not just a ‘first universe’, but likewise excludes a ‘first cause’ or ‘prime mover’. By its very definition, infinity has no starting point: no point of origin; no uncaused cause. That’s key; nothing and no one turned on some metaphorical switch, to get the ball rolling.

What I wish to convey is a model of multiple universes existing — each living and dying — within an infinitely bigger whole, where infinity excludes a ‘first cause’ or ‘first universe’.

In this scenario, where something has always prevailed over nothingness, the topic of time inevitably raises its head and needs to be addressed. But, I suggest, time appears problematic only because it is misconceived. Time is not something that suddenly lurches out of the starting gate upon the occurrence of a Big Bang, in the manner cosmologists and philosophers have typically described it. Instead, properly understood, time is best reflected in the unfolding of change.

The so-called ‘arrow of time’ traditionally appears to us in the three-way guise of the past leading to (causing) the present leading to the future. Allegorically, like a river. However, I propose that past and future are artificial constructs of the mind that simply give us a handy mechanism by which to live with the consequences of what we customarily call time: by that, meaning the consequences of change, and thus of causation. Accordingly, it is change through which time (temporal duration) is made visible to us; that is, the neurophysiological perception of change in human consciousness.

As such, only the present — a single, seamless ‘now’ — exists in the context of our experience. To be sure, future and past give us a practical mental framework for modelling the world in ways that conveniently help us make sense of it on an everyday level: for hypothesising about what might lie ahead, and for chronicling events for possible retrieval in the ‘now’. However, future and past are figments, of which we have to make the best. ‘Time reflected as change’ fits the cosmological model described here.

A process called ‘entropy’ lets us look at this time-as-change model on a cosmic scale. How? Well, entropy is the irresistible increase in net disorder — that is, evolving change — in a single universe. Despite spotty semblances of increased order in a universe — from the formation of new stars and galaxies to someone baking an apple pie — such localised instances of increased order are more than offset by the governing physical laws of thermodynamics.

These physical laws result in increasing net disorder, randomness, and uncertainty during the life cycle of a universe. That is, the arrow of change playing out as universes live and peter out because of heat death — or as a result of universes reversing their expansion and unwinding, erasing everything, only to rebound. Entropy, then, is really super-charged change running its course within each universe, giving us the impression of something we dub time.

I propose that in this cosmological model, the universe we inhabit is no more unique and alone than our solar system or beyond it our spiral galaxy, the Milky Way. The multiplicity of such things that we observe and readily accept within our universe arguably mirrors a similar multiplicity beyond our universe. These multiple universes may be regarded as occurring both in succession and in parallel, entailing variants of Big Bangs and entropy-driven ‘heat deaths’, within an infinitely larger whole of which they are a part.

In this multiverse reality of cosmic roiling, the likelihood of dissimilar natural laws from one universe to another, across the infinite many, bears on each world’s developmental direction. For example, in both the science and philosophy of cosmology, the so-called ‘fine-tuning principle’ — known, too, as the anthropic principle — argues that with enough different universes, there is a high probability that some worlds will have natural laws and physical constants allowing for the kick-start and evolution of complex, intelligent forms of life.

There is one last consequence of the infinite, uncaused multiverse described here: the absence of intent, and thus of intelligent design, when it comes to the physical laws and the materialisation of sophisticated, conscious species pondering their home worlds. I propose that the fine-tuning of constants within these worlds does not undo the incidental nature of such reality.

The special appeal of this kind of multiverse is that it alone allows for the entirety of what can exist.

Monday, 5 September 2022

Picture Post #78 Human Loss



'Because things don’t appear to be the known thing; they aren’t what they seemed to be
neither will they become what they might appear to become.'

 

Posted by Jeremy Dyer *


Prague, Czech Republic. Monument to the Victims of Communism

I have viewed this powerful, symbolic artwork in Prague, which also makes an arresting image. If asked to interpret it, we might imagine it depicts the misery of loss in some form—perhaps Alzheimer’s disease, loss of identity, or personal catastrophe.

Today it might represent alienation from society, as aspects of our literal and ideological worlds are constantly being buffeted around us. What are you busy losing? What parts of you have faded away, and how do you grieve for that? What things are gone forever and what might still be resurrected in your life? How do you mourn that which has been forgotten by you? Does it speak to your life?

Officially, though, the installation represents the personal human cost brought about by the historical evil of Communism. And today, passers-by ignore it as they go about their daily business, even as a steady trickle of tourists take selfies there.

------------------------------------------

* Jeremy Dyer is a psychologist and artist.

Monday, 29 August 2022

Replacing Nature

by Thomas Scarborough

Koeberg Nuclear Power Station, Cape Town

The 2017 film Blade Runner 2049 was ‘visually amazing’, receiving eight nominations and two awards at the 71st British Academy Film Awards, among other important accolades. But beyond the visuals, there was some serious philosophy. Blade Runner 2049 portrays a world which, according to Laura Holt of the Centre for the Study of Existential Risk, cuts the ‘umbilical cord’ which connects human survival with the biosphere.

Today, this cutting of the umbilical cord would seem to be a slow but relentless process. The more organised we become, the more there is to go wrong. The more there is to go wrong, the more we need to insure life against it. The ‘progress’ of the Enlightenment has become the progress of human domination. This has come at the cost, according to the World Wildlife Fund, of the massive retreat of nature: an average 68% drop in biodiversity since 1970.

The theologian Dietrich Bonhoeffer, before his execution by the Nazi regime in 1945, wrote a synopsis of an envisaged book. His notes were published posthumously in English in 1953, in Prisoner for God. Since these were abbreviated, I paraphrase here (the original translation appears below):

‘The Coming of Age of Humanity.

‘Humanity will seek to insure life against accident and ill-fortune. If the elimination of danger proves to be impossible, they will seek at least to minimise it. Insurance, while it thrives upon accidents, seeks also to mitigate their effects. This is a Western phenomenon. The goal is ultimately to be independent of nature. Our immediate environment is destined, not to be nature as before, but organisation. Yet this immunity from nature will produce a new crop of dangers, which is the very organisation.’

This was a prescient observation by a man who wrote nearly eighty years ago. At a glance, one might suppose that he was speaking of totalitarianism. It is, however, not the totalitarianism of the state, but what we now call ‘the science of scarcity’: how to provide more, and more, with less, across all lands and seas, for a global population.

Apart from being a relentless process, this becomes more and more dangerous to human stability. Close to my home in Cape Town, there is a nuclear power station. On some days, its twin domes rise hauntingly above the mists on the shore. In 2006, apparently, a single bolt broke loose in a generator, so disabling half the nuclear plant. It went into a controlled shutdown, and could not be raised to life for months. The reason for this was that replacement parts needed to be imported from France.

The incident showed how perilously close human organisation may sometimes be to disintegration. Fuel distribution, desalination plants, food production, transportation, communications, and any number of things besides, may be laid lame through fairly small and localised problems. The war in Ukraine, while not small, has revealed how a localised catastrophe can now destabilise the whole world. Too often, where we engineer things to create a more predictable and dependable world than nature provides, we come another step closer to the edge.

While we have applied much attention to the problems, we seem to find no reason to stop the latent and relentless process of separating ourselves from nature. And those who perhaps see clearly, do not have the power to prevent it. It is not ‘as before’, wrote Bonhoeffer. ‘Before’ (in his continuing notes), humankind had the spiritual vitality to defeat ‘the blasphemies of hybris’. He wrote, ‘Man is once more faced with the problem of himself. He can cope with every danger except the danger of human nature itself.’

Prominent thinkers have said no, wait, stop. Let go of the steering wheel. We are headed for, as it were, Blade Runner 2049. The late biologist and naturalist Edward O. Wilson proposed that half the earth should be rewilded. Most recently, Laura Holt called for ‘relinquished areas’ of nature. I myself have proposed that large areas of the planet be prohibitos autem terra: under a ban. I propose that we are not capable of stopping ourselves in any other way.

-----------------------

* Original translation: ‘The coming of age of humanity (along the lines already suggested). The insuring of life against accident, ill-fortune. If elimination of danger impossible, at least its minimisation. Insurance (which although it thrives upon accidents, seeks to mitigate their effects) a western phenomenon. The goal, to be independent of nature. Our immediate environment not nature, as formerly, but organization. But this immunity produces a new crop of dangers, i.e. the very organisation.’

Monday, 22 August 2022

Thence We Will Create Superhumans

by Corinne Othenin-Girard *


IMAGINE A WORLD IN WHICH parents have the option to go to a geneticist to discuss the ‘genetic fix’ choices of their unborn child.

If you should think that this is the fantasy of dystopian fiction, you would be mistaken. Not only is the above, to a point, technologically possible today, but the parents’ option could well become available in the not-too-distant future.

Human Genome Editing is a kind of genetic engineering, where DNA is deleted and inserted, modified and replaced. 

The main argument in support of this technology is that it would be used to prevent the transmission of genetic diseases from one generation to the other. 

There seems now to be an instrumentalisation of individuals with disability, in the sense that concepts become instruments which serve as a guide to action. The proponents of (Germline) Genome Editing are using ‘the prevention of disability’ as a concept that coincides with how people with disabilities are usually portrayed and viewed by the broad public.

There are two kinds of such editing—Somatic Genome Editing, and Germline Genome Editing—and there are, broadly, three possible applications:

1. Somatic Genome Editing is performed in non-reproductive cells, and may contribute to treating diseases in existing individuals. It is said to have the potential to revolutionise healthcare. A stunning success of this method was shown recently in the (possibly permanent) cure of haemophilia. By now, nearly 300 experimental gene-based therapies are in clinical testing. Changes made by somatic genome therapy are not passed down to future generations.

2. Germline Genome Editing is performed in the early-stage embryo (before ‘it’ is even called an embryo), or in germ cells (sperm and egg cells). These modifications affect all cells of the potential future child, and will also be passed on to future generations. This technology would be used to prevent the transmission of diseases from one generation to the next. In other words, genome editing would be used to fix genetic ‘defects’ or ‘variations’ which cause rare diseases. Germline Genome Editing does not treat, cure, or prevent disease in any living individual. It is used to create embryos with altered genomes.

3. From there on, the technology of Germline Genome Editing will inevitably expand into the area of generating ‘new’ or ‘improved’ abilities. Any gene could be changed, based on the ability-development it promises. ‘Treating disease’ or ‘preventing disability’ would therefore merge with ‘enhancement’. If genome editing should be deemed ‘sufficiently safe’, it could be applied to all kinds of gene variations, and what is seen as ‘normal’ might be up for debate. The proponents of enhancement by genome editing mean to improve the human body and mind to its maximum potential. They conceive of the natural human body as limited, defective, and in need of improvement, and support functioning beyond species-typical boundaries.

Assuming that the so-called ‘glitches’ of gene editing would be overcome, is it ethically acceptable to use this technology in order to ‘design’ future babies? It has already been done, in fact, and this issue has already come up, through the so-called CRISPR-Baby Scandal. In 2018, the Chinese researcher He Jiankui created the first CRISPR-edited babies, twin girls called Lulu and Nana. Many researchers condemned his action; the actual editing was not executed well.

At the moment, public opinion is thought to carry a lot of weight, and various polls have been conducted to assess it. They ask, for example, whether gene editing for (unborn) babies is acceptable when a parent has a severe heritable muscle disease and editing would greatly reduce the child’s risk of serious diseases or conditions. Assuming, again, that the technology is safe and effective.

But for the technology to be declared as safe, don’t individuals with changed DNA need to be monitored throughout their life? 

The emerging field of enhancement medicine is set to push the boundaries through genetic manipulation, and will shift what counts as the human norm.

Would using genome editing technology to create the 'perfect' or 'ideal' human risk making us become less tolerant of 'imperfections'? A person who couldn't embrace the norm of perfection would be perceived as 'disabled' and not as a person with a difference that needs to be sustained.

A genuinely inclusive and pro-equality society has no preferences among possible future persons. Instead, all existing and future individuals are perceived as having equal worth and value.

-------------------------------------

* Corinne Othenin-Girard is a PhD student in sociology in Basle, Switzerland. She is currently working on a participatory project on the topic of Human Germline Genome Editing. Corinne invites readers of Pi to join a Zoom Conference, 9 September 2022 on Human Germline Gene editing (HGGE), more specifically on how it could change the future of humanity.

Monday, 15 August 2022

The Tangled Web We Weave


By Keith Tidman
 

Kant believed, as a universal ethical principle, that lying was always morally wrong. But was he right? And how might we decide that?

 

The eighteenth-century German philosopher asserted that everyone has ‘intrinsic worth’: that people are characteristically rational and free to make their own choices. Lying, he believed, degrades that moral worth, denying others the ability to exercise autonomy and make rational decisions, as we presume they might if they possessed the truth.

 

Kant’s ground-level belief in this regard was that we should value others strictly ‘as ends’, and never see people ‘as merely means to ends’. It is a maxim valued and commonly espoused in human affairs today, too, even if people sometimes come up short.

 

The belief that judgements of morality should be based on universal principles, or ‘directives’, without reference to the practical outcomes, is termed deontology. For example, according to this approach, all lies are immoral and condemnable. There are no attempts to parse right and wrong, to dig into nuance. It’s blanket censure.

 

But it’s easy to think of innumerable drawbacks to the inviolable rule of wholesale condemnation. Consider how you might respond to a terrorist demanding to know the place and time of a meeting to be held by his intended target. Lying to protect the target seems the obvious response; yet deontologists like Kant would consider such a lie immoral.

 

Virtue ethics, to this extent compatible with Kant’s beliefs, also says that lying is morally wrong. Its reasoning, though, is that lying violates a core virtue: honesty. Virtue ethicists are concerned to protect people’s character, where ‘virtues’ — like fairness, generosity, compassion, courage, fidelity, integrity, prudence, and kindness — lead people to behave in ways others will judge morally laudable.

 

Other philosophers argue that, instead of turning to the rules-based beliefs of Kant and of virtue ethicists, we ought to weigh the (supposed) benefits and harms of a lie’s outcomes. This principle is called consequentialist ethics, mirroring the utilitarianism of the eighteenth- and nineteenth-century philosophers Jeremy Bentham and John Stuart Mill, emphasising the greatest happiness.

 

Advocates of consequentialism claim that actions, including lying, are morally acceptable when the results of behaviour maximise benefits and minimise harms. A tall order! A lie is not always immoral, as long as outcomes on net balance favour the stakeholders.

 

Take the case of your saving a toddler from a burning house. Perhaps, however, you believe in not taking credit for altruism, concerned about being perceived as conceitedly self-serving. You thus tell the emergency responders a different story about how the child came to safety, a lie that harms no one. Per Bentham’s utilitarianism, the ‘deception’ in this instance is not immoral.

 

Kant’s dyed-in-the-wool unforgiveness of lies invites examples that challenge the concept’s wisdom. Take the historical case of a Jewish woman concealed, from Nazi military occupiers, under the floorboards of a farmer’s cottage. The situation seems clear-cut, perhaps.

 

If grilled by enemy soldiers as to the woman’s whereabouts, the farmer lies rather than dooming her to being shot or sent to a concentration camp. The farmer chooses good over bad, echoing consequentialism and virtue ethics. His choice answers the question of whether the lie elicits a better outcome than the truth would. It would have been immoral not to lie.

 

Of course, the consequences of lying, even for an honourable person, may sometimes be hard to get right, differing in significant ways from reality or from the greater good as subjectively judged. One may overvalue or undervalue benefits — nontrivial possibilities.

 

But maybe what matters most in gauging consequences are motive and goal. As long as the purpose is to benefit, not to beguile or harm, then trust remains intact — of great benefit in itself.

 

Consider two more cases as examples. In the first, a doctor knowingly gives a cancer-ridden patient and family false (inflated) hope for recovery from treatment. In the second, a politician knowingly gives constituents false (inflated) expectations of benefits from legislation he sponsored and pushed through.

 

The doctor and politician both engage in ‘deceptions’, but critically with very different intent: Rightly or wrongly, the doctor believes, on personal principle, that he is being kind by uplifting the patient’s despondency. And the politician, rightly or wrongly, believes that his hold on his legislative seat will be bolstered, convinced that’s to his constituents’ benefit.

 

From a deontological — rules-focused — standpoint, both lies are immoral. Both parties know that they mislead — that what they say is false. (Though both might prefer to say something like they ‘bent the truth’, as if more palatable.) But how about from the standpoint of either consequentialism or virtue ethics? 

 

The Roman orator Quintilian is supposed to have advised, ‘A liar should have a good memory’. Handy practical advice, for those who ‘weave tangled webs’, benign or malign, and attempt to evade being called out for duplicity.

 

And damning all lies seems like a crude, blunt tool, with no real value, being wholly unworkable outside Kant’s absolutist disposition toward the matter; no one could unswervingly meet that rigorous standard. Indeed, a study by the psychologist Robert Feldman claimed that people lie two to three times, in trivial and major ways, for every ten minutes of conversation!

 

However, consequentialism and virtue ethics have their own shortcomings. They leave us with the problematic task of figuring out which consequences and virtues matter most in a given situation, and tailoring our decisions and actions accordingly. No small feat.

 

So, in parsing which lies on balance are ‘beneficial’ or ‘harmful’, and how to arrive at those assessments, ethicists still haven’t ventured close to crafting an airtight model: one that dots all the i’s and crosses all the t’s of the ethics of lying. 


At the very least, we can say that, no, Kant got it wrong in overbearingly rebuffing all lies as immoral; not allowing reasonable exceptions may have been obvious folly. Yet that may be cold comfort for some people, as lapses into excessive risk — weaving ever more tangled webs — court danger for unwary souls.


Meantime, while some more than others may feel they have been cut some slack, they might be advised to keep Quintilian’s advice close.




* ’O what a tangled web we weave / When first we practice to deceive’, Sir Walter Scott, poem, ‘Marmion: A Tale of Flodden Field’.

 

Monday, 8 August 2022

A Linguistic Theory of Creation

by Thomas Scarborough

Creation of the Earth, by Wenceslas Hollar (1607-1677)

Perhaps it has been obscured through familiarity. There is an obvious curiosity in the opening chapters of Genesis (the creation of the world). Step by step, God creates the world, then names the world—repeatedly both coupling and separating his* creating and his naming.

Would it not be more natural simply to describe God’s creative acts without embellishment? Would not a description of his creative acts alone suffice? Unless God's naming has some special significance in the narrative, it may seem quite superfluous.

Under any circumstances, the opening chapters of Genesis are supremely difficult to interpret. Bearing this very much in mind, the purpose here is to present an alternative view—unfinished, unrefined—as a new possibility.

Existing interpretations of Genesis include the following:

  • Heaven and earth were created in six days
  • The six days were six (longer) periods of time
  • The earth’s great age was ‘created into’ a six-day sequence
  • Genesis represents the re-creation of the world
  • Genesis stitches various creation stories together
  • Its purpose is to glorify God, not first to be factual
  • It is a synopsis, which may not be sequential
  • It is a myth
  • It is a spiritual allegory
  • It describes a dream of Moses

Here, then, is a new alternative—presented merely as a possibility—for greater minds to examine, with its rough edges and (possibly) inadmissible ideas on an exceedingly complex text.

We begin with a simple linguistic fact. Names, in the Bible, were often commemorative. The ATS Bible Dictionary sums it up well: ‘Names were assumed afterwards to commemorate some striking occurrence in one’s history.’ Therefore, an event took place—then it, or the place of its happening, was named: Babel, Israel, the Passover, and so on. In fact, often with a pause.

If we assume that the creation account in Genesis includes, similarly, a commemorative naming, then the account may separate a stage-by-stage creation of the world from a stage-by-stage naming of it. With this in mind, there would then be four stages to each act of creation in Genesis. For example, in the NASB translation of the Bible (abridged):

  • ‘Then God said, Let there be light.’
  • ‘And there was light.’
  • ‘And God called the light day.’
  • ‘And there was evening and there was morning, one day.’

One may reduce this to two stages:

  • God created.
  • Then God named it.
 
And with some nuance, we may possibly say:

  • God created, within unspecified periods of time.
  • God named his creation during equal pauses (days), as commemorative acts.


In this case, Genesis could be viewed as a series of linguistic events. Its opening verses could set the tone, as a linguistic announcement: ‘And the earth was formless and void’—reminiscent of the linguist Ferdinand de Saussure, ‘In itself, thought is like a swirling cloud, where no shape is intrinsically determinate. No ideas are established in advance, and nothing is distinct, before the introduction of linguistic structure.’ 

Further, one may see a major linguistic shift in Genesis 3:7: ‘Then the eyes of both of them were opened …’ We have, from this point, the language of ‘ought’, as the first rational creatures ostensibly discern right from wrong. Then, needless to say, Babel represents a major linguistic shift in Genesis chapter 11, as languages (plural) appear.

From this, two major issues arise.

Firstly, is God's creating, in each stage of creation, coincident with his naming of it? In other words, did God name things on the same day that he created them, or did he name them afterwards? 

If it was on the same day that he created them, then the theory suggested here would presumably unravel. But arguably, in its favour, each naming is preceded by the word ‘And … ,’ which in the creation account is mostly used to indicate sequences in time. ‘And God called ...’ may represent separate periods of time in which namings occurred, after acts of creation.

A possible problem lies in Genesis 5:2, ‘God named them … in the day they were created.’ However, the word ‘day’ may here encompass every day, as we find in Genesis 2:4. ‘In the day’ may not refer to the separate stages of creation of Genesis chapter 1.

A second issue arises: God's naming does not seem to appear in the text consistently. ‘God called …’ appears only three times in Genesis 1, in connection with the first three days of creation. 

However, Genesis in general liberally makes use of related words. Take the key words ‘God created ...’ Alternatives that we find in the text are ‘made’, ‘formed’, ‘brought forth’, and so on. The same is true of the key words ‘God called …’ Alternatives are ‘saw’, ‘blessed’, ‘sanctified’. An act of commemoration may be implied in all of these words.

In short, the time periods which are described in Genesis may be attached, not first to the creation of the world, but to God’s naming of it—and, incidentally, to man's naming of it. On the sixth day, ‘the man gave names …’

Such a theory would potentially remove major problems of other creation theories. In particular, it could possibly move beyond both literal and liberal readings of Genesis, without colliding with them.

----------------------------------

* I follow Rabbi Aryeh Kaplan: “We refer to G-d using masculine terms simply for convenience’s sake.”

Also by Thomas Scarborough: Hell: A Thought Experiment.