
Artificial Intelligence. If there were ever two words uttered that could strike a cold, deep fear into humanity, those two would be second on a list of two. The other pair being Nuclear War.
Consequently, the two doom-laden pairs seem to be fatally intertwined. If you pay attention to science fiction, that is. Total Recall gave us false memory implants and mutants born of pollution and radiation; Terminator had Skynet declare humanity unfit to live and launch a nuclear war against it, then, to clean up the remnants, the terrifying A.I. created humanoid killing machines to track down the survivors; Judge Dredd gave us megacities designed to protect humanity after a nuclear war engineered by President Booth, who later deregulated A.I. so that smarter robots could be created to wage another war against his own Judges.
The Matrix showed us how we’d lose our fight against the Machines thanks to our invention of A.I., and how we’d all end up as their equivalent of batteries, keeping them charged whilst our minds are distracted by an A.I. simulation of the former real world.
The short story I Have No Mouth, and I Must Scream, by Harlan Ellison, tells of an Allied Mastercomputer that takes control of a future Cold War by assuming responsibility for all weapons, leading to a mass genocide that almost wipes out humanity.
Other stories, while they don’t directly link A.I. and Nuclear War, hint at the relationship: Interstellar, the Avatar films, Dune, 1984, Brave New World and the Alien films, to name a few. They all come to the same inevitable conclusion. Either A.I. will cause a nuclear event that wipes out humanity, or it will somehow be involved in one.
In many ways, such dystopian visions of the future have become more prevalent with the large-scale removal of religion in most of the developed world. We’ve replaced faith with science and technology. Trust with facts. Hope with authoritarian projections. We are largely Godless now in the West. Is it such a stretch to think we’d convert to the religion of A.I. if it became Godlike? Without a deity to put our faith in, we have no teachings to help guide us into a future that can be more fulfilling for ourselves. We just have the word of other humans who don’t always have humanity’s best intentions at heart. Without faith, we are left with fear and with fear, we move into the realm of the uncertain. If we spend too long with uncertainty, we become depressed before developing resentment towards those we believe responsible. Faith is the key component of applying structure to uncertainty.
It’s a bleak outlook, such is the nature of dystopian storytelling. But what about current day A.I.?
In truth, current forms of Artificial Intelligence are far from those that instill a sense of existential dread. The only thing artificial about them is that they’re intelligent. A.I. requires data, a lot of data, before it can start to do anything. Data Pools, Data Lakes, Big Data, it’ll use it all. Essentially, it’s a massive bookworm. Give it plenty to read, then ask it questions, and it will use what it’s read to give you an answer. It might not be right but, at the least, it could be humorous or, more likely, frustrating. If it’s starved of information, it’s useless. What we call A.I. is nothing more than an active program that has access to a lot of data. However, it can organise that data to match the context of the question asked of it, so there is something intelligent about that. Isn’t there?
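To make the bookworm idea concrete, here is a deliberately crude sketch in Python. It is not how any real assistant is built (the library contents and keyword matching are entirely made up for illustration); it just shows the shape of the claim above: the program answers only from what it has been fed, and is useless without it.

```python
# A toy "well-read bookworm": no understanding, just matching a question
# against text it has already been given. All passages below are invented
# for illustration only.

library = {
    "skynet": "Skynet is the fictional A.I. that starts a nuclear war in Terminator.",
    "matrix": "The Matrix depicts humans kept as an energy source by machines.",
    "autopilot": "An autopilot holds the altitude, heading and speed a pilot has set.",
}

def answer(question: str) -> str:
    q = question.lower()
    # Return whichever stored passage shares a keyword with the question.
    for keyword, passage in library.items():
        if keyword in q:
            return passage
    return "No idea. Feed me more data."

print(answer("Who or what is Skynet?"))      # found in its reading material
print(answer("What is consciousness?"))      # starved of information, useless
```

The point of the sketch is the failure case: ask it anything outside its reading pile and it has nothing intelligent to say.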
No. Unfortunately, the code built into A.I. assistants is given a series of shortcut prompts based on the most likely questions they’ll be asked, and it’s then programmed on how to answer those questions. At present, we have nothing more than a lot of humans doing all the thinking and then having the result masquerade as A.I.
Ah, but wait! How come my Alexa/Siri/Google smart device couldn’t understand my accent when I asked for the latest Taylor Swift album, I hear you ask? Well, some poor bugger (or a team of poor buggers) has to sit and listen to every request that Alexa didn’t understand. Each failure is detailed in an error log and fed back to Amazon, Apple, Google, etc., where the aforementioned poor buggers work through every request that wasn’t executed. Once deciphered, the correct interpretation is entered against that individual request so that when you next ask for something whilst a bit inebriated, the A.I. assistant will know what you asked for and play it. It’s smart humans making all this work.
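As a rough sketch of that human-in-the-loop correction cycle, consider the following. None of these names or data structures belong to any vendor’s actual pipeline; they’re stand-ins to show the flow: a misheard request goes into an error log, a human enters the correct interpretation, and the next identical request succeeds.

```python
# Illustrative only: a misheard phrase either matches a human-entered
# correction or goes back into the error log for review.

failed_requests = []  # the "error log" of things the assistant couldn't parse

# Corrections entered by the poor buggers who reviewed the audio.
corrections = {
    "play the latest tailor swift albm": "play the latest Taylor Swift album",
}

def resolve(request: str) -> str:
    if request in corrections:
        return f"OK: {corrections[request]}"
    failed_requests.append(request)  # queued for a human to decipher
    return "Sorry, I didn't catch that."

print(resolve("play the latest tailor swift albm"))  # understood, thanks to a human
print(resolve("pley sum wee ooh"))                   # off to the error log it goes
```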
But what about Tesla’s Autopilot?
Whilst the code is extremely advanced, the template for the system’s responses is based on how human drivers operate. At present, the system is Level 2 autonomous (Level 5 puts us into full self-driving territory), so it can assist with long-distance drives but still requires human supervision for the more complex tasks. It’s not unlike aviation autopilot (invented in 1912 by Lawrence Sperry of the Sperry Gyroscope Company), which takes over once the plane is at cruising altitude and speed, freeing the pilot and co-pilot to do more important tasks, like figuring out how to avoid a flock of geese.
Use of autopilot as a sort of A.I. assist is pretty simple. Once the pilot gets the plane to the right altitude, heading and speed, the autopilot maintains what the pilot’s already done. Driving is very different: far more complex, with far more factors and nuances to account for.
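The “maintain what the pilot has already set” idea can be sketched as a simple feedback loop. The numbers and the single proportional correction below are toy assumptions, a real autopilot is considerably more involved, but the principle is the same: measure the error against the target the human chose, and nudge back towards it.

```python
# Minimal sketch: hold an altitude the pilot has already set by repeatedly
# correcting a fraction of the error. Toy values, not real flight dynamics.

target_altitude = 35_000.0   # feet, chosen by the pilot
altitude = 34_200.0          # current altitude after a bit of turbulence
gain = 0.4                   # how aggressively each step corrects the error

for step in range(6):
    error = target_altitude - altitude
    altitude += gain * error          # climb if below target, descend if above
    print(f"step {step}: {altitude:,.0f} ft (error was {error:,.0f} ft)")
```

Driving has no equivalent single target to hold, which is why the car version is so much harder.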
But A.I. should highlight the wondrous complexity of the human brain. A person of average intelligence can be taught to operate a car and then be allowed to use one autonomously whenever they like. The brain just does it: the information is consciously processed, passed to the subconscious, then stored in memory. Most people can be taught a complex skill like driving within 24 hours of instruction, yet we have spent years developing systems that will allow a car to essentially drive itself.
But why? Why bother with A.I.? Who asked for it? It seems we all did.
Technology has been an inherent part of the human experience ever since we learned to rub two bits of flint or wood together and make a fire. Then we used animals to catch other animals. Then we made weapons to better kill the animals the other animals caught. Then we used bigger animals to cover distances faster or move heavy things. Then we ditched animals and built machines to do the heavy work we couldn’t.
It’s all been the same pattern: improve and replace. Accountants were up in arms with fear when Microsoft introduced Excel. They thought their jobs were going to be replaced by a spreadsheet. Of course, in reality, that didn’t happen. What did happen was that the more time-consuming parts of accountancy, the calculations, were largely taken over by Excel, freeing the accountants up to do more.
So, they were right. Their jobs were being taken by the software. Just the boring parts.
And that’s what we do with each new tool. We make it because it increases efficiency and improves productivity whilst reducing the workload allocated to menial, time-consuming tasks. Where I would wash the dishes when I lived with my parents, my younger brother has to load a dishwasher and switch it on to wash overnight.
A.I. is simply a tool. ChatGPT has been doing the rounds recently and with it, a fair bit of scaremongering. On the one hand, it could be trained to give therapy whilst, on the other, it could be used to filter language perceived to be offensive.
An A.I. chatbot like ChatGPT has great potential. It could be used to offer suggestions when a person is alone and needs help. It may be able to search for help if someone is in trouble and unable to speak, assuming there’s signal and the person has access to the device running the app. I’ve seen A.I. programs realise certain what-if scenarios such as ‘What if Lord of the Rings was made as an 80’s dark fantasy film?’. Text-to-image A.I., like Midjourney, does just that. You can see an example of the output below:
If you’ve watched the above video, or even just a bit of it, you can see what happens with a fairly simple but well-articulated prompt. The system has referenced the books, the films and the aesthetics of dark fantasy films from the 1980’s to give us a glimpse of what could have been if the films were made when fantasy was at its peak. The fact that it’s been able to create exactly what most people would think of is astounding. For humans to do that, you’d need highly creative and skilled artists to draw, paint or sculpt what this A.I. could do. Does that mean that we’ll be getting an 80’s-style dark fantasy version of Lord of the Rings soon? Not really. Making a film is extremely complex and the foundation of any film is the script, which means writing. And literary A.I. is far from convincing because it’s doing the same thing: it’s accessing existing material to create a story in accordance with the prompt it’s been given. It just doesn’t understand language and story well enough yet. And it doesn’t have imagination, or the ability to refer to its own experiences to create a relatable narrative.
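For the curious, the prompt-to-image idea itself takes only a few lines. Midjourney is driven through Discord rather than code, so as a stand-in here is a sketch using the open-source Stable Diffusion model via Hugging Face’s diffusers library; the model name and the prompt wording are assumptions for illustration, and it needs the diffusers, transformers and torch packages installed.

```python
# Sketch of text-to-image generation with an open-source model.
# Requires: pip install diffusers transformers torch

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")  # assumes a CUDA GPU; omit this line to run (slowly) on CPU

prompt = ("The Lord of the Rings as a 1980s dark fantasy film, "
          "grainy 35mm, practical effects, moody lighting")
image = pipe(prompt).images[0]   # generate a single image from the prompt
image.save("lotr_80s_dark_fantasy.png")
```

All the “creativity” is in the prompt and in the training data the model has already absorbed, which is precisely the point made above.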
So, if an A.I. can create a fantasy nerd’s wet dream then what else can it do? A lot.
According to this tutorial site, A.I. is expected to have a heavy impact on 18 industry sectors this year. From harvesting crops to checking a car’s build quality to detecting fraud, A.I. is being pushed to do a lot of work this year and in the next few.
Now, where Excel caused accountants everywhere to go all skittish over being replaced, there is a very real chance a huge number of people will be left without a job because A.I. can do it better, faster and cheaper.
And what will happen to the people that have been replaced?
According to the World Economic Forum’s (apply all the pinches of salt you need) “The Future of Jobs Report 2020”, it’s estimated that 85 million jobs will be displaced but 97 million new jobs will be created by 2025. That’s a net gain of 12 million, so, by that estimation, all the people replaced by A.I. will have a new job to go to. Maybe becoming a human supervisor for the A.I. that’s now doing their job.
That’s certainly one avenue but, as this article rightly points out, there will always be a need for human input to improve the technology. Humans created it, therefore humans improve it. We’re a long way off from creating a technology that can think for itself and make a plan on how to improve itself. This isn’t even at the baby stage; we’re at the cell-clustering stage. Right now, what we have is very sophisticated code that runs a lot of laborious tasks. That’s it.
But what about in several decades or centuries? The real problem there is that we may very well have made something that could threaten us. Or put us in our place. Like the gods of old, we may well create something so transcendent that its judgement of us is without question. And then what? We make the myths anew? Super strength, speed, agility, healing, intellect: we could well end up with new versions of Hercules, Athena, Isis, Thor, Zeus, Achilles, Medusa and the rest. Gorgons, Titans, Gods and Demigods could end up stalking the Earth, waging war with each other. It seems far-fetched, but a century ago we were pulling things by horse and steam engine. Now we have huge supply ships, cargo planes, trucks, lorries and electric vans delivering things all over the world. If you went back to the 1920’s with a Lockheed C-5 Galaxy or an Airbus A300-600ST Beluga, I don’t think it would be an overstatement to say people might react as if it were a god or demon of some kind. New technology will always put fear into those yet to understand it. If you’re keeping track, you just roll with it.
And if we did end up with sentient artificial beings, we could find ourselves at their mercy. If they deem us worthless, we could be removed from this Earth. If they develop a God complex, they may demand we bow before their magnificence and superiority to ensure our survival. A whole new form of religion could be created, dedicated to appeasing these new Gods at the expense of our own freedom. Whilst such scenarios could be some way off, it’s worth keeping in mind.
The question just now isn’t so much, “Will A.I. kill us all?” but, more pertinently, “Will A.I. be used by organisations for sinister purposes?”. Looking at what’s been coming out after the COVID-19 pandemic, A.I. is certainly being used to accelerate certain organisational goals, and as it advances, those organisations will be able to more fully realise whatever agendas they wish to pursue. The Chinese Social Credit Scoring system is on its way here in Europe. Smart speakers are always listening, and not just to your music requests. Deepfake technology seamlessly makes one person look and sound like another. 3D body scanners take complete scans of your body at the airport; throw in a good bit of Deepfake and, according to a camera, you could be the suspect in a serious crime. Smart monitors that allow communication throughout a house could be hacked, allowing kidnappers and paedophiles access to your child. Neuralink, Elon Musk’s brain-computer interface, whilst initially being developed to help restore sensory and motor functions, could also be hacked or monitored, maybe to the point where our own thoughts are no longer private. Or worse, you could be physically or mentally punished for having certain thoughts, so you become conditioned into complying with whatever behaviour these organisations deem appropriate.
Without God, as fictional as such an entity may be, certain men are making it their business to take His place since we so readily lost faith.
Do not question why the tool exists. Instead, question the motives of those who created it and why they’re the ones wielding it.