Will the advanced artificial intelligence of the future be self-aware? Credit: Enterprise Talk

Can Artificial Intelligence Be Self-Aware?

How far will artificial intelligence get in the near or distant future? Should we be worried about the potential risks of this technology or will experts find a way to successfully control it?


Can artificial intelligence be aware of its own existence?

In recent years, the internet has been replete with headlines like:

  • The neural network has learned to draw portraits!
  • The neural network has learned to come up with April Fools’ jokes!
  • Neural networks beat people in Go, chess, and StarCraft!
  • The neural network has learned to write music!

Looking at such headlines, the average person may get the impression that today or tomorrow SkyNet will send an army of biorobots to “kill all people”.

All of these developments, although of great practical importance, do not bring us much closer to creating a real AI with self-awareness and intelligence in the sense in which a human being possesses them.

There are two directions in the development of intelligent systems, known as weak and strong artificial intelligence.

Weak AI is an intelligent system designed to solve one narrowly defined problem, such as recognizing text in a photo or generating realistic portraits of people.

Strong AI, by contrast, is an artificial intelligence that possesses genuine intelligence and self-awareness.

About 99% of all the news in the press about AI successes concerns the area of weak AI. We have given examples above of what neural networks have learned to do, but it is important to understand that these are all different neural networks.

A neural network that draws pictures is useless for playing Go, and a neural network that plays Go perfectly will not write music. Moreover, a neural network often demonstrates a complete lack of understanding of the problem it is solving.

Take, for example, AlphaStar, created by DeepMind, which plays StarCraft II better than 99% of human players. It feels more or less confident in games that develop according to a standard scenario, but as soon as it runs into some kind of creative strategy, it immediately begins to make absurd decisions. There is no reason to expect that self-awareness will one day awaken in such a neural network.

Strong artificial intelligence systems are a completely different matter. Most researchers are inclined to believe that a strong artificial intelligence would be aware of its own existence and would be intelligent in the same sense that a human being is.

One of the most promising directions in this area is the creation of a computer model of the human brain. It is believed that in a sufficiently complex and accurate model of the human brain, self-awareness will inevitably arise.

Estimation of the computing power needed to model the human brain; the red line shows where we are now. Credit: Wikipedia

However, at the moment, our technical capabilities do not allow us to build a model of the human brain even at the level of simply simulating its neurons and synaptic connections. Modern computing power only makes it possible to simulate a neural network comparable in complexity to a cat’s brain.
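
To get a feel for the scale of the problem, here is a rough back-of-the-envelope sketch. The neuron and synapse counts below are commonly cited ballpark figures, while the simulation time step and the cost per synaptic update are purely illustrative assumptions, so the result should be read as an order-of-magnitude guess rather than an established estimate.

```python
# Rough, illustrative estimate of the compute needed to simulate a brain at
# the level of individual neurons and synapses. All inputs are assumptions:
# neuron and synapse counts are commonly cited ballpark figures; the update
# rate and FLOPs per synaptic event are simple placeholders.

def brain_simulation_flops(neurons, synapses_per_neuron,
                           updates_per_second, flops_per_synapse_update):
    """Floating-point operations per second for a naive simulation."""
    synapses = neurons * synapses_per_neuron
    return synapses * updates_per_second * flops_per_synapse_update

human = brain_simulation_flops(
    neurons=8.6e10,               # ~86 billion neurons (commonly cited)
    synapses_per_neuron=1e4,      # ~10,000 synapses per neuron (ballpark)
    updates_per_second=1e3,       # assume a 1 kHz simulation time step
    flops_per_synapse_update=10,  # assumed cost of one synaptic update
)

cat = brain_simulation_flops(
    neurons=7.6e8,                # ~760 million cortical neurons (ballpark)
    synapses_per_neuron=1e4,
    updates_per_second=1e3,
    flops_per_synapse_update=10,
)

print(f"human brain: ~{human:.1e} FLOPS")  # roughly 10^19 FLOPS
print(f"cat brain:   ~{cat:.1e} FLOPS")    # about two orders of magnitude less
```

Note that this calculation treats every synapse as a simple arithmetic update, which is the most optimistic possible reading of “simulating a brain”; even so, the human-brain figure lands in the exaFLOPS range.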

But the brain is not just a bunch of neurons randomly connected to each other. Its functional structure is also extremely important, so we are still very far even from creating a full-fledged model of a cat’s brain.
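
For a concrete sense of what neuron-level modeling involves, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest textbook neuron models; the parameter values are illustrative rather than taken from any real neuron. The hard part is not a single unit like this, but wiring tens of billions of them together with the brain’s actual functional structure.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential decays
# toward a resting value and emits a spike when it crosses a threshold.
# Parameter values are illustrative textbook-style numbers, not measurements.
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
    v = v_rest
    spike_times = []
    trace = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest plus the driven input term.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_threshold:        # threshold crossing -> record a spike
            spike_times.append(step * dt)
            v = v_reset             # reset the membrane potential
        trace.append(v)
    return np.array(trace), spike_times

# Drive the neuron with a constant input for one simulated second (1000 steps).
current = np.full(1000, 2.0)
trace, spikes = simulate_lif(current)
print(f"{len(spikes)} spikes in 1 s of simulated time")
```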

I think that when we learn to model the functional structure of the brains of mammals such as cats and dogs, we will understand much better how realistic it is to expect self-awareness to emerge from a computer model of the brain.

At the same time, this approach has obvious disadvantages. First, there is no guarantee that intelligence and self-awareness will actually arise in a sufficiently complex model of the brain. Second, some researchers believe that modeling the human brain is, in principle, the wrong way to create strong artificial intelligence.

It is roughly as if, needing a vehicle, we spent a long and tedious effort trying to create an ideal model of mechanical human legs instead of designing the wheel, the cart, and the car. It is quite possible that we should be thinking about a fundamentally different way to create artificial intelligence.

Before wrapping up, I have to mention the opinions of people like Stephen Hawking and Elon Musk, who have expressed worries that artificial intelligence could potentially endanger the future of mankind.

This does not mean that they stand against the technological advancement of AI, but they believe that we need to be well prepared before releasing such technology. As Stephen Hawking once said, there is no guarantee that AI will help us in the future rather than turn against us, and there is a real potential risk that AI could disrupt the world economy in a devastating way.

Elon Musk, on the other hand, has expressed the opinion that the global race for artificial intelligence could become the cause of World War III.

Even Russian President Vladimir Putin has addressed the current race for artificial intelligence. According to him, AI will give us many opportunities, but it could also lead to unpredictable problems.

We can only wonder what exactly will happen when modern technology achieves advanced artificial intelligence. Do you think that it will bring a positive change or quite the opposite – potential devastation?



Sources:


• Ann, K. (2019, October 15). Philosophical Aspects of Human Brain Emulation and Simulation. Retrieved December 01, 2020, from https://towardsdatascience.com/philosophical-aspects-of-human-brain-emulation-and-simulation-d64047b26640

• Browne, R. (2017, September 04). Elon Musk says global race for A.I. will be the most likely cause of World War III. Retrieved December 01, 2020, from https://www.cnbc.com/2017/09/04/elon-musk-says-global-race-for-ai-will-be-most-likely-cause-of-ww3.html

• Hildt, E. (2019, June 18). Artificial Intelligence: Does Consciousness Matter? Retrieved December 01, 2020, from https://www.frontiersin.org/articles/10.3389/fpsyg.2019.01535/full

• Kharpal, A. (2017, November 06). Stephen Hawking says A.I. could be ‘worst event in the history of our civilization’. Retrieved December 01, 2020, from https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html

• Koch, C. (2019, December 01). Will Machines Ever Become Conscious? Retrieved December 01, 2020, from https://www.scientificamerican.com/article/will-machines-ever-become-conscious/

• Weisberger, M. (2018, May 24). Will AI Ever Become Conscious? Retrieved December 01, 2020, from https://www.livescience.com/62656-when-will-ai-be-conscious.html

Written by Vladislav Tchakarov
