Monday 25 April 2022

The Dark Future of Freedom

by Emile Wolfaardt

Is freedom really our best option as we build a future enhanced by digital prompts, limits, and controls?

We have already surrendered many of our personal freedoms for the sake of safety – and yet we are just on the brink of a general transition to a society totally governed by instrumentation. Stop! Please read that sentence again! 

Consider, for example, how vehicles unlock automatically as authorised owners approach them, warn drivers when their driving is erratic, alter the braking system for the sake of safety, and resist switching lanes unless the indicator is on. We are rapidly moving to a place where vehicles will not start if the driver has more alcohol in their system than is allowed, if the licence has expired, or if the monthly payments fall into arrears.

There is a proposal in the European Union to equip all new cars with a system that monitors where people drive, when, and above all at what speed. The data would be transmitted in real time to the authorities.

Our surrender of freedoms, however, has advantages. Cell phones alert us when people carrying contagious diseases are near us, and Artificial Intelligence (AI) and smart algorithms now land our aeroplanes and park our cars. When it comes to driving, AI has a far better track record than humans: in a recent study, Google claimed that its autonomous cars were ‘10x safer than the best drivers,’ and ‘40x safer than teenagers.’ AI promises, reasonably, to provide health protection and disease detection. Today, hospitals are using solutions based on Machine Learning and Artificial Intelligence to read scans. Researchers from Stanford developed an algorithm to assess chest X-rays for signs of disease. This algorithm can recognise up to fourteen types of medical condition – and was better at diagnosing pneumonia than several expert radiologists working together.
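The mechanics are worth seeing. Below is a minimal sketch, in Python with PyTorch, of how such a multi-label X-ray classifier is typically structured. It is not the Stanford team’s actual code: the input size and the 0.5 flagging threshold are assumptions for illustration, though the DenseNet-121 backbone is of the kind the Stanford work reported using.

```python
# Illustrative sketch only, not the Stanford algorithm itself: a multi-label
# classifier that outputs an independent probability for each of fourteen
# conditions from a single chest X-ray.
import torch
import torch.nn as nn
from torchvision import models

NUM_CONDITIONS = 14  # 'up to fourteen types of medical condition'

def build_model() -> nn.Module:
    # DenseNet-121 backbone; the final layer is replaced so that each output
    # is scored independently (one X-ray can show several conditions at once).
    model = models.densenet121(weights=None)
    model.classifier = nn.Linear(model.classifier.in_features, NUM_CONDITIONS)
    return model

model = build_model().eval()
xray = torch.randn(1, 3, 224, 224)        # one preprocessed scan (dummy data)
with torch.no_grad():
    probs = torch.sigmoid(model(xray))    # per-condition probabilities
flagged = (probs > 0.5).nonzero()         # candidates to refer to a radiologist
```

The design point is the sigmoid head: unlike a single-diagnosis classifier, each condition gets its own yes/no score, which is what lets one algorithm screen for fourteen findings at once.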

Not only that, but AI promises to both reduce human error and intervene in criminal behaviour. PredPol is a US-based company that uses Big Data and Machine Learning to predict the time and place of a potential offence. The software looks at existing data on past crimes and predicts when and where the next crime is most likely to happen – it has demonstrated a 7.4% reduction in crime across US cities, and has opened a new avenue of study in Predictive Policing. It already knows the type of person who is likely to commit the crime, and tracks their movement toward the place of anticipated criminal behaviour.
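PredPol’s actual model is proprietary, so the following is only a toy baseline in the same spirit, with every detail – the grid size, the thirty-day half-life, the scoring rule – an assumption for illustration: divide the map into cells, weight past offences by recency, and flag the highest-scoring cells.

```python
# A toy hotspot baseline, not PredPol's proprietary model: score each map
# grid cell by its recent crime history and flag the top cells for patrol.
from collections import Counter
from datetime import datetime

def hotspot_cells(incidents, cell_size=0.005, half_life_days=30, top_n=10):
    """incidents: list of (latitude, longitude, datetime) for past offences."""
    scores = Counter()
    now = datetime.now()
    for lat, lon, when in incidents:
        cell = (round(lat / cell_size), round(lon / cell_size))
        age_days = (now - when).days
        scores[cell] += 0.5 ** (age_days / half_life_days)  # recent counts more
    return [cell for cell, _ in scores.most_common(top_n)]
```

Even this crude version makes the dependency plain: the prediction is only as good, and only as fair, as the historical records fed into it.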

Here is the challenge – this shift to AI, or ‘instrumentation’ as it is commonly called, has been both covert and ubiquitous. And here are the two big questions about this colossal shift that nobody is talking about.

Firstly, the entire move to the instrumentation of society is predicated on the wholesale surrender of personal data. Phones, watches, GPS systems, voicemails, e-mails, texts, online tracking, transaction records, and countless other instruments capture data about us all the time. This data is used to analyse, predict, influence, and control our behaviour. In the absence of any governing laws or regulations, the Googles, Amazons, and Facebooks of the world have obfuscated the fact that they collect hundreds of billions of bits of personal data every minute – where you go, when you sleep, what you look at on your watch or phone or other device, which neighbour you speak to across the fence, how your pulse increases when you listen to a particular song, how many exclamation marks you put in your texts, and so on. And they collect your data whether or not you want or allow them to.

Opting out is nothing more than donning the Emperor’s new clothes. Your personal data is collated and interpreted, and then sold on a massive scale to companies, without your permission and without remuneration. Not only are Google, Amazon, Facebook, and the rest marketing products to you; they are altering you, based on their knowledge of you, to purchase the products they want you to purchase. Perhaps they know a user has a particular love for animals, and that she bought a Labrador after seeing it in the window of a pet store. She has fond memories of sitting in her living room talking to her Lab while ‘How Much Is That Doggie in the Window’ played in the background. She then lost her beautiful Labrador to cancer. And would you know it – an ad ‘catches her attention’ on her phone or her Facebook feed, with a Labrador just like hers and a familiar voice singing a familiar song, taking her back to her warm memories – and then the ad turns to collecting money for canine cancer. This is known as active priming.

According to Google, an elderly couple were recently caught in a life-threatening emergency and needed to get to the doctor urgently. They headed to the garage and climbed into their car – but because they were late on their payments, the AI shut their car down: it would not start. We have moved from active priming to invasive control.

Secondly, data harvesting has become so essential to the business model that it is already past the point of reversal. It is ubiquitous. When challenged about this by the US House recently, Mark Zuckerberg offered that Facebook would be more conscientious about regulating itself. The fox offered to guard the henhouse. Because this transition was both hidden and wholesale, by the time lawmakers started to see the trend it was too late – and too many Zuckerbucks had been ingested by the political system. The harvesting of big data has become irreversible, and now practically defies regulation.

We have transitioned from the Industrial Age, where products were developed to ease our lives, to the Age of Capitalism, where marketing focused on attracting our attention by appealing to our innate desire to avoid pain and attract pleasure. We are now in what is called the Age of Surveillance Capitalism. In this sinister market we are surveilled and adjusted to buy what AI tells us to buy. While it used to be true that ‘if the service is free, you are the product,’ it is now more accurate to say that ‘if the service is free, you are the carcass, ravaged of all of your personal data and freedom to choose.’ You are no longer the product; your data is the product, and you are simply the nameless carrier that funnels it.

And all of this is marketed under the reasonable promise of a more cohesive and confluent society, where poverty, disease, crime, and human error are minimised, and where a Global Basic Income is promised to everyone. We are told we are now safer than in a world where criminals have the freedom to act at will, dictators can obliterate their opponents, and human errors cost tens of millions of lives every year. Human behaviour is regulated and checked when necessary, disease is identified and cured before it ever proliferates, and resources are protected and maximised for the common betterment. We are now only free to act in conformity with the common good.

This is the dark future of freedom to which we are already committed – albeit unknowingly. The only question remaining is this: whose common good are we free to act in conformity with? We may have come far down the road of the subtle and ubiquitous loss of our freedoms, but it may not be too late to take back control. We need to educate ourselves, stand together, and push back against the wholesale surrender of our freedom without our awareness.

6 comments:

Keith said...

You suggest, Emile, that ‘AI promises to both reduce human error and intervene in criminal behaviour’. And that ‘big data and machine learning [have been employed] to predict the time and place of a potential offence’.

Interesting; but this strikes me as a tall order. And it reminds me of the storyline of Steven Spielberg’s 2002 movie “Minority Report,” in which a ‘Precrime’ unit uses technology to ‘arrest and convict murderers before they commit the crime’. Shades, maybe, of the ‘butterfly effect’.

My concern is the ability of human-coded AI, based in the squishy sociology and psychology of human behaviour, ‘to predict the time and place of a potential offence’ without the possibility of at-least-inadvertent system biases. I wonder what assumptions the algorithms make.

I’m a big fan of the future of AI, especially if years from now it can be teamed up with quantum computers for all kinds of sophisticated uses. Longer term, AI may indeed be capable of doing all that better than people can. Meantime, I’m keeping a healthily skeptical eye on assumptions of systems being bias-free.

Thomas O. Scarborough said...

I think this is a very real concern, Keith, concerning 'system biases'. I consider it the biggest danger of AI or machine learning. Not only does it make assumptions about human behaviour, but it excludes a vast amount of information in that area -- and in the area of the environment. That might be all right in moderation, but the calculations are done an uncountable number of times per second in every town and city.

Emile said...

@Keith - your concerns reflect the weakness in the system, and the innate concern for the loss of control that we should all have. The fear should provoke a discussion not only of the biases of the programmers, but also of those of the learning algorithms. A clinical world bereft of true empathy and genuine concern is sterile indeed; the ask, it seems, is that we sacrifice our humanity and our brokenness for the sake of the collective good. The clinical sterility of instrumentation may offer safety and productivity, but the surrender of control and of our humanness may be too bitter a cup to moisten our lips on.

Emile said...

@Thomas O. Scarborough - the bias is either going to be flavoured by the programmer’s choices or, ultimately, by the Learning Machine’s bent toward some algorithmic-instrumentation version of objectivism. The great loss is the mutualism we experience in our collective humanism, the compassion and empathy we experience in our collective limitations, and the fundamental engagement in choices that comes from the consequences of our ability to fail.

Keith said...

Picking up on your observations, Emile, I suggest it’s little wonder that artificial intelligence replicates existing prejudices. The designer carries to the task his or her own baggage of predispositions that skew decision-making – which business analytics often show to be for the worse. No matter the care taken to make the algorithms bias-free, at least some of the designer’s prejudices worm their way into the product.

However, it’s also important to recognize another critical aspect of bias, apart from the role of coders. That is, in some cases, as part of machine learning, prejudicial decision-making — such as who may or may not be approved for, say, a business loan — might stem from the AI recognizing patterns in data related to an institution’s (like a company’s or organization's) past decisions and concluding, falsely, that’s the model to replicate.
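Keith’s loan example is easy to demonstrate in miniature. The sketch below is contrived, and every number in it is fabricated for illustration: a classifier is trained on past decisions in which equally creditworthy applicants from one group were refused, and it dutifully reproduces that pattern.

```python
# A contrived demonstration of bias replication, with fabricated data: the
# model is trained on historical loan decisions that refused equally
# creditworthy applicants from 'group 1', and it learns that refusal as if
# it were a genuine pattern in creditworthiness.
from sklearn.linear_model import LogisticRegression

# Each row is [credit_score (scaled 0-1), group membership (0 or 1)].
X = [[0.90, 0], [0.85, 0], [0.80, 0],              # group 0, high scores: approved
     [0.90, 1], [0.85, 1], [0.80, 1],              # group 1, same scores: refused
     [0.40, 0], [0.35, 1], [0.30, 0], [0.30, 1]]   # low scores: refused
y = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]

model = LogisticRegression(C=10).fit(X, y)

# Two new applicants with identical credit scores, differing only in group:
print(model.predict([[0.87, 0], [0.87, 1]]))  # typically [1 0]: same score, different outcome
```

Nothing in the code mentions prejudice; the bias arrives entirely through the training labels – which is why auditing the data matters as much as auditing the algorithm.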

It’s therefore necessary to audit algorithms continually, to keep purging quirks. I would argue that to do that effectively, developers should, during the design phase, routinely convene multidisciplinary, multiethnic, multiracial, multigender “murder boards” of critical reviewers to examine algorithms diagnostically, from diverse social perspectives, before product launch.

Emile said...

Exactly, Keith. With the amount of responsibility and authority we are devolving upon the machines, one can only hope that their learning bends toward greater objectivity and more obvious fairness. I am not sure we can ever teach them wisdom – only to replicate wise choices.
