An Illustration of an AI Army. Depositphotos.

Scientific Paper Warns Artificial Intelligence Could Obliterate Humans

A paper published by Google Deepmind and Oxford University researchers warns that superintelligent AI is now "likely" to spell the end of humanity.

A new paper says artificial intelligence is “likely” to cause an existential disaster for humanity in the future. Interestingly, this has been stated on numerous occasions in the past, with the late Professor Hawking being one of the many experts who have said that we should pay close attention and not let AI get out of hand. In his book, Brief Answers to the Big Questions, Professor Hawking responds to the question, “Will artificial intelligence outsmart us?” by writing, “a super-intelligent AI will be extremely good at accomplishing goals and if those goals aren’t aligned with ours we’re in trouble.”

Now, research from Google Deepmind and Oxford University has concluded that superintelligent AI is now “likely” to spell the end of humanity – a grim scenario that is increasingly predicted by researchers.

In a recent paper published in AI Magazine, Oxford researcher Michael Cohen, DeepMind senior scientist Marcus Hutter, and DeepMind senior scientist Michael Osborne argue that machines will eventually be incentivized to break the rules set down by their creators in order to compete for scarce resources.

Cohen, Oxford University engineering student and co-author of the paper, tweeted earlier this month that an existential catastrophe is not just possible but likely, under the conditions they have identified. They suggest that humanity could be doomed by super-advanced “misaligned agents,” which perceive humanity as an obstacle to a reward.

In the paper, the researchers suggest that an agent can maintain long-term control over its reward by eliminating potential threats and securing its computer with all available energy. According to the researchers, losing the game would prove fatal. Unfortunately, we can’t do much about it, researchers argue.

In an interview with Motherboard, Cohen said he was unsure what would happen in a world with infinite resources. “In a world with finite resources, there’s unavoidable competition for these resources.” For humanity, this could be a bad sign. “You should not expect to win if you’re competing with something that can outfox you at every turn,” he emphasized.  It is only prudent and slow progress that humanity should make in its AI technologies to combat this threat.

Taking these assumptions into account, the paper warns that a sufficiently advanced artificial agent could interfere with goal-information provision, resulting in catastrophic consequences.


Join the discussion and participate in awesome giveaways in our mobile Telegram group. Join Curiosmos on Telegram Today. t.me/Curiosmos

Written by Ivan Petricevic

I've been writing passionately about ancient civilizations, history, alien life, and various other subjects for more than eight years. You may have seen me appear on Discovery Channel's What On Earth series, History Channel's Ancient Aliens, and Gaia's Ancient Civilizations among others.

Write for us

We’re always looking for new guest authors and we welcome individual bloggers to contribute high-quality guest posts.

Get In Touch