12 Mar | AI

Elon Musk recently reiterated his warnings about the dangers posed by Artificial Intelligence (AI). Speaking at the South by South West (SXSW) festival in Austin, Texas, he reaffirmed his belief that the only way humanity can ensure its survival is by making the bold move of colonising Mars. He explained that this was due to the impending danger of the next world war, believing that the consequent destruction would come not from nuclear weapons, but from AI.

"This is a situation where you have a very serious danger to the public. There needs to be a public body that has insight and oversight so that everyone is delivering AI safely, this is extremely important. Nobody would suggest that we allow anyone to just build nuclear warheads if they want, that would be insane. My point is that AI is far more dangerous than nukes. So why do we have no regulatory oversight? It’s insane."

Are Musk's concerns rational, or is this just an overreaction to the recent boom in technology and one too many sci-fi films?

What Exactly is AI?

When many people think of AI, they automatically think of robots with extreme intelligence and ability. However, this is not always the case.

At its simplest, AI is the ability of a computer to interpret what you are asking and search its vast store of data to provide you with the most relevant answer. For example, AI could be as simple as asking your phone for restaurant recommendations and receiving a clear, concise answer. The most familiar examples of AI are the virtual assistants available to the public, such as Siri, Bixby and Alexa. AI has advanced significantly over the past few years, and the next decade promises to be an exciting time in the AI revolution.
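The "interpret the question, look up an answer" loop described above can be sketched in a few lines of Python. This is a toy illustration only, not how any real assistant works; the topics and answers in the knowledge base are invented for the example.

```python
# A toy question-answering loop: match the user's words against known
# topics and return the stored answer for the first topic that matches.
# The topics and responses below are hypothetical examples.

KNOWLEDGE_BASE = {
    "weather": "It looks sunny today.",
    "time": "It is half past ten.",
    "restaurant": "Try the pizzeria on Main Street.",
}


def answer(query: str) -> str:
    """Return the stored answer for the first known topic in the query."""
    words = query.lower().split()
    for topic, response in KNOWLEDGE_BASE.items():
        if topic in words:
            return response
    return "Sorry, I don't understand that yet."


print(answer("What is the weather like?"))  # matches the "weather" topic
```

Real assistants replace the word-matching step with statistical language models and the dictionary with web-scale search, but the overall shape of the loop is the same.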

Why We Shouldn’t Fear AI

As noted in Entrepreneur magazine, for AI to overthrow humanity, these four things would need to happen:

  1. AI would have to develop a sense of self, distinct from others, and have the intellectual capacity to step outside the intended purpose of its programmed boundaries.
  2. It would have to develop, out of the billions of possible feelings, a desire for something that it believes is incompatible with human existence.
  3. It would have to choose a plan for dealing with its feelings (out of the billions of possible plans) that involved death, destruction, and mayhem.
  4. It would have to have the computing power/intelligence/resources to enact such a plan.

For this to occur, AI would need to develop what we understand as “consciousness”. At the moment, AI cannot act on rules outside those it is specifically programmed to follow, so the likelihood of AI achieving any of the four points above is very low, if not zero.
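The point about programmed boundaries can be made concrete with a deliberately simple sketch. Assume a hypothetical "smart" thermostat: it applies exactly one rule chosen by its programmer and has no goals, desires or awareness beyond it.

```python
# A hedged sketch of a system acting strictly within programmed boundaries.
# The target temperature and rule are invented for the example.

TARGET_TEMP = 21.0  # degrees Celsius, fixed by the programmer


def control(current_temp: float) -> str:
    """Apply the single rule the system was given: hold TARGET_TEMP."""
    if current_temp < TARGET_TEMP:
        return "heat on"
    return "heat off"


print(control(18.5))  # below target, so the rule turns the heating on
print(control(23.0))  # above target, so the rule turns the heating off
```

The program cannot "choose" a plan outside this rule; asked anything beyond it, it simply has no behaviour at all, which is the gap between today's AI and the four conditions listed above.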