Should We Fear Artificial Intelligence?

Recently, Elon Musk reiterated his warnings about the dangers posed by Artificial Intelligence (AI). Speaking at the South by Southwest (SXSW) festival in Austin, Texas, he reaffirmed his belief that the only way humanity can be sure of surviving is by making the bold move of colonising Mars, citing the impending danger of a third world war. He strongly believes that such a war, and the destruction that would follow, would be down not to nuclear weapons, but to AI.


"AI is far more dangerous than nukes," he said.

"I’m not normally an advocate of regulation and oversight," he added.

"This is a situation where you have a very serious danger to the public. There needs to be a public body that has insight and oversight so that everyone is delivering AI safely. This is extremely important.

"Nobody would suggest that we allow anyone to just build nuclear warheads if they want, that would be insane.

"My point was AI is far more dangerous than nukes. So why do we have no regulatory oversight? It’s insane."

Are Musk's concerns rational? Or is this just an overreaction to the recent boom in technology, and perhaps one too many sci-fi films?


What Exactly is AI?

When many people think of AI, their minds automatically jump to scenarios in which robots of extreme intelligence and ability have taken over the world. But this is not the case.

Broadly speaking, AI is merely the ability of a computer to understand what you are asking, search its vast database and memory banks, and return the most precise answer it can. Have you ever asked your phone for recommendations and received a clear, concise answer? That is AI at work. At the moment, only basic versions of AI are available to the majority of the public: think of Siri, Bixby or Alexa. These are still early, relatively simple forms of the technology, but new advances appear daily, and the next decade will prove to be an exciting time in the AI revolution.
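To make that description concrete, here is a deliberately simplified sketch of the "understand the question, search a database, return the best answer" loop. Real assistants such as Siri, Bixby and Alexa rely on large-scale speech recognition and machine-learned language models; the keyword matching and the tiny hand-written knowledge base below are illustrative assumptions, not how any of those products actually work.

```python
# A toy question-answering "assistant": it scores each stored question by
# how many words it shares with the user's query and returns the answer
# attached to the best match. Purely illustrative; real assistants use
# machine-learned language models, not simple word overlap.

KNOWLEDGE_BASE = {
    "what is the weather like today": "It looks sunny with a high of 22C.",
    "recommend a nearby restaurant": "Luigi's, two blocks away, is well rated.",
    "set a timer for ten minutes": "Timer set for ten minutes.",
}

def tokens(text: str) -> set:
    """Lower-case the text, strip basic punctuation, split into a word set."""
    cleaned = text.lower().replace("?", "").replace(".", "").replace(",", "")
    return set(cleaned.split())

def answer(query: str) -> str:
    """Return the answer for the stored question that best overlaps the query."""
    query_words = tokens(query)
    best = max(KNOWLEDGE_BASE, key=lambda q: len(tokens(q) & query_words))
    if not tokens(best) & query_words:  # no shared words at all
        return "Sorry, I don't know about that yet."
    return KNOWLEDGE_BASE[best]

print(answer("Can you recommend a restaurant?"))
# -> Luigi's, two blocks away, is well rated.
```

Even this toy makes the point: the program only "knows" what is explicitly in its table, and it cannot step outside the behaviour it was programmed for.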

Why We Shouldn’t Fear AI

As noted in Entrepreneur magazine, for AI to overthrow humanity, all four of these things would need to happen:

  1. An AI would have to develop a sense of self distinct from others and have the intellectual capacity to step outside the intended purpose of its programmed boundaries
  2. It would have to develop, out of the billions of possible feelings, a desire for something that it believes is incompatible with human existence
  3. It would have to choose a plan for dealing with its feelings (out of the billions of possible plans) that involved death, destruction and mayhem
  4. It would have to have the computing power / intelligence / resources to enact such a plan

For any of this to occur, AI would first need to develop what we understand as “consciousness”. Seeing as, for now, AI cannot understand rules beyond those it is specifically programmed with, the prospect of it achieving even the first of these four points is very unlikely, if not impossible.


See Similar: 

Tech Leaders VS Killer Robots

The Rise of The Internet of Things