You’re just the afterbirth….

To be brutally honest, in the way only Daniel Plainview can be: I'm not sure how his confrontation with Eli is any different from the inevitable confrontation between man and machine. And I certainly think we're Eli.

Consider the following:

  1. We’re a society becoming ever more dependent on engineering and science that is effectively powered by ever-improving computers.
  2. Society’s behaviour generally follows a greedy-algorithm-like approach: we usually pick the best option available at any moment in time. At what stage would we ever decide that it’s in our best interest to halt or reverse technological progress?
  3. Homo sapiens is a biological species that has evolved over time and only recently gained the technological ability to manipulate its natural environment. At a macro level, if we need space for houses, we can clear a forest. At a micro level, if a genetic condition causes epilepsy, we can undergo neurosurgery and have the affected regions of the brain removed. While our biology has brought unparalleled success against other species, it remains our greatest weakness. The vast majority of deaths can be explained simply with, “the body just couldn’t cope”.
  4. Assuming that no biological disaster occurs and no collective, conscious effort is made to halt progress, our society will continue building better computers.

In short, we’re a society with fatal biological flaws that’s obsessed with outperforming our past accomplishments. Unless an external force can stop us, we will create a superintelligent computer.
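The greedy mentality from point 2 can be sketched in a few lines. Everything below is hypothetical and purely illustrative: the option names and payoff numbers are made up, the point is only that a greedy chooser never weighs long-term cost.

```python
# A minimal sketch of greedy decision-making: at each step, pick the
# option with the highest immediate payoff, ignoring long-term cost.
def greedy_choices(options_per_step):
    """options_per_step: list of dicts mapping option name -> immediate payoff."""
    return [max(options, key=options.get) for options in options_per_step]

# Hypothetical payoffs: halting progress never pays off *right now*.
steps = [
    {"halt progress": 0, "build faster computers": 10},
    {"halt progress": 0, "build faster computers": 12},
    {"halt progress": 0, "build faster computers": 15},
]
print(greedy_choices(steps))
```

At every step the locally best option wins, so a greedy society never chooses to stop, which is exactly the worry.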

A non-human-like design, faceless and fixed in position, is the most likely form of a superintelligent computer. Conceptually, it is closer to what we’d expect an alien to look like.

Today’s computers are “dumb” compared with what a superintelligent computer will be; they depend on user input to provide the context and information they need to perform their tasks. Their successor will be human-level intelligent computers that can match human performance across a range of tasks. A superintelligent computer will outperform humans outright, and, since the original superintelligent computer will have been designed by humans, it could in theory design an improved version of itself.

To get an approximation of where computers are today, watch the video above. Google’s DeepMind developed an artificially intelligent system that learnt to play Atari’s Breakout by teaching itself the game. After 500 games, the system had learnt that the best way to play was to create a tunnel and have the ball hit bricks from the inside out.
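DeepMind’s system used deep reinforcement learning on raw screen pixels; none of that fits in a blog post, but the underlying trial-and-error idea can be sketched at toy scale with tabular Q-learning. The five-state “walk right to the goal” task below is my own illustrative stand-in, not anything resembling DeepMind’s actual code:

```python
import random

# Toy corridor: states 0..4, goal at state 4. Actions: 0 = left, 1 = right.
# Reward 1 only on reaching the goal; the agent learns purely by trial and error.
def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(5)]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != 4:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == 4 else 0.0
            # Standard Q-learning update toward reward plus discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# The learnt greedy policy: the best action in every non-goal state.
print([max((0, 1), key=lambda x: q[s][x]) for s in range(4)])
```

After enough self-played episodes, the greedy policy moves right in every state: the agent discovers the winning strategy without ever being told it, which is the same spirit as the Breakout tunnel.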

Superintelligent computers won’t just outperform us cognitively; how we use them will be completely different from how we use computers today.

Before the 1900s, and so before the Ford Model T or the Wright Flyer, a rider could improve his speed only by riding a faster horse or choosing a different animal. The mentality was literally binary.

You can imagine the psychological shock witnesses must have felt when they saw the Model T or the Wright Flyer in motion for the first time. They suddenly realised, there and then, that there was a different way.

Another example of how fixed the mentality was at the time: a few weeks before the Wright brothers successfully tested their plane, an accomplished engineer with federal government financing failed to launch his own. After a spectacular crash into the water during the final test, the New York Times wrote that “it might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in from one million to ten million years”.

So to believe that superintelligent computers will require the same type of interaction and behaviour from the user is ludicrous. From the command line to graphical user interfaces, and now to voice commands, the way we use computers is ever-evolving.

All we know is that a superintelligent computer will, by definition, outperform us; we have no idea how we will use it.

This leads us to an interesting question, one many of us have considered: “who’s in charge here?” For some, the answer leads to religion; others look to the laws of physics; and an ever-growing majority simply believe “no one, really”.

While I consider myself agnostic, I think it’s foolish to dismiss religion and its derivatives outright. Interesting perspectives can come from the consideration itself.

But what I find even more interesting is to think about how a superintelligent computer will view itself. Will it believe in a God, or believe it is God?

While some readers may think a superintelligent computer is centuries away, many experts believe a human-level intelligent computer will be developed within the coming decades, and a superintelligent one within 25 years after that.

In other words, it’s likely that within our lifetime a computer will ask: Is there a God or am I God?

And once we consider how a computer will reach that question, we should really ask: what does its answer mean for us?

I’m gonna bury you underground, Eli.

Written by

Electrical engineering/Neuroscience at University of Sydney. Aspiring neuro-trauma surgeon with a few software/hardware goals.
