The Future of Humanity

By John Glynn

“If you want a vision of the future, imagine a boot stamping on a human face – forever” – George Orwell, 1984

In a brilliant article for The New York Times, Geoffrey Nunberg, the widely respected linguist, researcher, and adjunct professor at the UC Berkeley School of Information, observed that “Orwellian” is “the most widely used adjective derived from the name of a modern writer.” In the words of Dr. Nunberg, “it’s more common than ‘Kafkaesque,’ ‘Hemingwayesque’ and ‘Dickensian’ put together. It even noses out the rival political reproach ‘Machiavellian’, which had a 500-year head start.”

Just about anything, it appears, can be rendered “Orwellian.” A quick Google search tells us that we live in Orwellian times; Facebook and Google are Orwellian entities; the legal system is Orwellian; Donald Trump, Vladimir Putin, Viktor Orban, Theresa May, Rodrigo Duterte and Kim Jong-un each govern in an Orwellian manner; Trump engages in Trumpthink; and American universities now operate on an Orwellian level. Quite simply, everything is Orwellian.

Overused to the point of meaninglessness, especially by pseudo-intellectuals, the term “Orwellian” needs a day off. However, before it takes a well-earned rest, I need to use it a few more times. Why? Because of all the great writers, nobody described danger as deliciously as George Orwell.

Picture a primary school in China. What do you see? Whatever you happen to be picturing, it probably doesn’t involve brainwave-reading headbands. I recently came across a story involving an elite primary school in Hangzhou, Zhejiang Province. Here, students are made to wear brainwave-reading headbands that monitor attention levels throughout the day.

In a series of photos, you can clearly see students at Jiangnan Experimental, each one kitted out with a black electronic headband. Is this classroom a prototype? Can we expect most classrooms to look like this in the future? Instead of reminding kids not to forget their lunch and homework, exasperated parents will be shouting, “Your headband, dear, don’t forget your brainwave-reading headband.”

Interestingly, the devices weren’t produced in China; they were produced in Boston by BrainCo Inc., a Harvard University-funded startup. The high-tech company, which has benefited from millions of dollars in investment, specializes in the development of data-heavy technology.

The aforementioned students were sporting BrainCo’s flagship product. Aptly named Focus 1, this piece of headgear detects and quantifies students’ concentration levels.

Moreover, the not-so-stylish headbands feed into something called Focus EDU, a portal which, according to the company’s website, allows teachers “to assess the effectiveness of their teaching methods in real time and make adjustments accordingly.”

To many, this technology may appear useful, even beneficial. However, on closer inspection of the photos, you can clearly see a digital screen that displays real-time ranking of students’ concentration levels. When the class ends, the portal provides the teacher with a report that ranks students’ concentration scores, from highest to lowest.

Doesn’t this all sound a little Orwellian to you? In this classroom, Big Brother is keeping watch, quite literally. 

The thought of a discerning device on a child’s head is a little eerie, a little creepy, a little Orwellian-y. However, when it comes to China and questionable practices, what else would you expect?

After all, over the last couple of years, China has locked up close to a million Chinese Uighur Muslims. And, if the Chinese authorities are prepared to subject young children to dubious practices, one shudders to think what many of the Uighur Muslims have been subjected to (and continue to be subjected to). 

This is a country that excels at using digital technology to monitor its citizens. This is a country that has implemented a fully functional social credit system replete with an inescapable network of surveillance cameras keyed to facial recognition.

Of course, communist China operated a repressive regime long before savvy technology arrived on the scene; now, though, with advancements in technology, one can’t help but think that we are entering an age of unprecedented despotic control.  

Tyranny 2.0

In 2018, the South China Morning Post ran with a particularly disturbing story: at a factory in Hangzhou, China, production line workers were pictured wearing brain-reading headwear. The technology, which has become a new norm in this particular factory, enables employers to read workers’ emotions and use sophisticated algorithms to detect emotional spikes in happiness, anxiety and rage. Of course, the thought of an employer forcing an employee to wear such a device is sickening, but – again – not necessarily surprising.

Orwell once said, “Political language . . . is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind.” Can the same be said for this type of technology, where powerful entities fetishize data and analytics to ostensibly “optimize” the lives of students and workers? Is this the “calm” before the runaway technocratic storm, before many of the social designs people hold most dear are abolished?

What if the Chinese government decides to implement such a procedure on a grand scale? What if the government makes it compulsory to wear headsets at all times? What if it becomes illegal to remove the headset without permission? 

Yes, this is all very disturbing… but these oppressive occurrences are confined to China; surely it could never happen in the West.

Wait a second. Define “it.” 

If by “it” you mean a loss of freedom due to tyrannical technological advancements, what makes you so sure that “it” can’t happen here?

After all, artificial intelligence could spur the creation of a robot dictator that could rule mankind forever. Elon Musk’s words, not mine. And I think we all agree on one thing — Elon Musk is a man who knows a thing or two about technology. 

Musk made these remarks in an excellent 2018 documentary called “Do You Trust This Computer?” When asked about the future of human agency, the South African-born entrepreneur and businessman replied, “If one company or small group of people manages to develop god-like superintelligence, they could take over the world.” 

Not finished there, the 47-year-old was eager to point out the one major difference between a human dictator and a domineering form of AI: “When there’s an evil dictator, that human is going to die. But for an AI, there will be no death. It would live forever. And then you would have an immortal dictator from which we could never escape.”


Musk’s warnings should concern us all, on many levels. He’s saying that governments or other entities have the potential to create a treacherous AI that could outlive human leaders and never be destroyed. Not just in China, folks. The only way to avoid this is to democratize AI. What would this involve?

Broadly, AI democratization involves standardizing and automating data processing so that more people can build with and apply the technology. The ultimate goal for proponents of democratization is to make AI accessible to every application, every business process and every employee.

Alas, going forward, AI will probably be the domain of large players with deep pockets who can set standards and manipulate policies. 

Do you still have doubts about the possible threats that await us in the future? If so, try this little experiment: approach someone in their 50s or 60s. If you don’t know these people, please approach with caution. Ask these folks gently about their childhood, and whether they ever pictured a time when ‘selfies’ and social networks like Facebook would dominate society. Or a day when something called Google would exist, or Alexa, or Siri, or Amazon, or Uber, or Netflix.

Ask them if they saw a future filled with catfishing and ghosting. Ask them if they pictured a world full of endless digital distractions and irresistible entertainment, a time when humanity would be swept away by a huge technological current.

What’s my point?

Just because you can’t picture something occurring, it doesn’t mean that this “something,” whatever it may be, can’t occur. The truth is that very few thought, back in the ’60s and ’70s, that technologies like smartphones and GPS systems could exist, never mind pocket-sized supercomputers or artificial intelligence.

Even those futurists who predicted the ubiquity of technological devices failed to imagine a day when millions of us would be sharing dick pics and cat videos.

When it comes to technology, change occurs rapidly.  The change is often so profound that people who’ve lived pre-leap struggle to find their footing once the tectonic plates of technology begin to move. 

All of this ties in with the singularity, a moment when technology manages to trump the human brain; a moment when the limitations of human intelligence are surpassed by artificial intelligence. 

Some futurists argue that such a paradigm shift is inevitable. In 1993, Vernor Vinge, the much-lauded scientist and science fiction writer, had this to say: “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

Ray Kurzweil, one of the most respected futurists of all time, basically agrees with Vinge. He believes that the human race will reach the singularity by creating a super-human artificial intelligence. 

Kurzweil also believes that such a system could conceive of ideas that no human being has ever comprehended. The system will create technological tools that will revolutionize society. These tools could be beneficial to mankind, but they could also prove to be detrimental.

All of this brings to mind Mary Shelley’s Frankenstein, written some 200 years ago. This is not just a tale of tragedy; it’s a cautionary tale that shows the dangers of playing God.

Human beings have – knowingly or unknowingly – adopted the role of the “mad scientist,” and we may have to deal with an “uncontrollable creature” very soon.  Far from an implausible fantasy, Shelley’s novel imagined what could happen if people, particularly immoral or irrational scientists, went too far.

How long do we have until the scientists go too far and this “creature” becomes uncontrollable? Well, Elon Musk thinks that superintelligence will happen in his lifetime. If this super-intelligent AI has a goal, it’s very possible that it may try to achieve it in a way that humans might not agree with. If this is the case, there would be no way to stop it.

After all, AI doesn’t have to be evil to destroy humanity — if AI has a goal and humanity just happens to be in the way, it could very well destroy humanity as a matter of course without even thinking about it – no hard feelings. 

In the words of Musk, “It’s just like if we’re building a road and an anthill happens to be in the way, we don’t hate ants, we’re just building a road. Goodbye anthill.”

We live in a world that likes its morality to be black and white, that likes its heroes on one side and villains on the other; but the lines are becoming blurrier by the day, especially when you can’t even imagine what a possible villain may look like.

As advertisers know only too well, no audience is easier to trick than one that is smugly confident of its own superiority. We may be a sophisticated bunch, but our refusal to accept the possibility of oppressive technology could be our greatest downfall.

