A leading Meta Artificial Intelligence (AI) researcher wants to "inject" common sense and intuition into AI.
Earlier this year, a Google engineer made headlines around the world when he claimed that Google’s most advanced Artificial Intelligence system, LaMDA, was conscious and aware of its own existence. The story went on for weeks; the engineer was put on administrative leave and eventually fired. Before he left Google, the story developed to the point where the AI was alleged to have hired an attorney. The attorney eventually backed down, and the story, like many others, faded away.
But in today’s world, news about Artificial Intelligence is more than common. And that is not necessarily a bad thing. Artificial Intelligence can greatly help scientists in numerous fields, as I have written in previous articles. Although experts such as the late Professor Stephen Hawking have warned that we must learn how to control AI, scientists these days are looking for ways to make so-called machine learning faster and more efficient. And unsurprisingly, some experts in the field say that injecting intuition and artificial common sense into AI could be one way to make these systems substantially more capable.
Facebook’s head of AI seems to be quietly constructing a roadmap to “autonomous” AI while the rest of the company pursues Mark Zuckerberg’s metaverse dreams. Yann LeCun, the head of Meta AI and a renowned computer scientist, presented a paper at Berkeley last week. In it, he describes how current AI systems lack “common sense” and outlines a path toward making machines learn “as efficiently as humans” as they become more autonomous. Common sense, as LeCun describes it, is a collection of “world models” that allow humans and animals to predict whether events are likely or unlikely, possible or impossible.
For a self-driving car to learn to avoid skidding by not going too fast into corners, thousands of trials may be required, the AI researcher explained. Humans, by contrast, can predict such outcomes and largely avoid fatal mistakes when learning new skills, thanks to their intimate understanding of intuitive physics. LeCun proposes reorganizing how algorithms are trained so that they build up the kind of common sense humans take for granted, bridging the gap between the many trial-and-error iterations required to train neural networks and the “intuitive” nature of organic knowledge.
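To make the contrast concrete, here is a purely illustrative toy in Python. It is not taken from LeCun’s paper; the speed heuristic, threshold, and function names are invented for the example. It shows how a hand-written “world model” can predict whether a cornering speed would cause a skid, so a learner can reject bad options without ever having to try them.

```python
# Toy illustration only (not LeCun's method): a hand-written "world model"
# predicts the outcome of an action before it is tried, so the learner can
# reject obviously bad choices instead of discovering them by trial and error.

def world_model(speed_kmh: float, corner_radius_m: float) -> str:
    """Crude intuitive-physics stand-in: lateral demand grows with v^2 / r."""
    lateral_demand = (speed_kmh / 3.6) ** 2 / corner_radius_m  # m/s^2
    return "skid" if lateral_demand > 7.0 else "grip"  # ~0.7 g grip limit, assumed

def choose_speed(candidate_speeds, corner_radius_m):
    """Pick the fastest speed the model predicts is safe, without trying any of them."""
    safe = [s for s in candidate_speeds if world_model(s, corner_radius_m) == "grip"]
    return max(safe) if safe else min(candidate_speeds)

if __name__ == "__main__":
    # The "driver" never has to actually skid to learn that 120 km/h is too fast here.
    print(choose_speed([40, 60, 80, 100, 120], corner_radius_m=30.0))
```

The point of the toy is the ordering of steps: prediction comes before action, so the cost of a mistake is paid inside the model rather than in the world.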
Even so, moving AI from its current state, which is beyond a doubt impressive, to something more like human intelligence will probably require some kind of intuition. In his late-September Berkeley talk, LeCun said: “It’s a practical problem because we really want machines with common sense.” “We want driverless cars, we want home robots, we want intelligent virtual assistants,” he added.
Meta AI’s chief proposes training next-generation systems from a number of moving parts: a module to replicate short-term memory, one that trains neural networks to critique their own predictions, and a “configurator” module that synthesizes all inputs into useful information. Combined, these components would let an artificial intelligence mimic processes of the human mind, which is at once fascinating and terrifying.
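Because the talk names these modules only at a conceptual level, the following Python sketch is purely structural and hypothetical: placeholder classes showing how a short-term memory, a world model, a critic, and a configurator might be wired together. None of the class names or interfaces come from Meta AI.

```python
# Structural sketch only: hypothetical placeholder modules, not Meta AI code.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ShortTermMemory:
    """Keeps a rolling buffer of recent observations for the other modules."""
    buffer: List[str] = field(default_factory=list)
    capacity: int = 5

    def store(self, observation: str) -> None:
        self.buffer.append(observation)
        self.buffer = self.buffer[-self.capacity:]

class WorldModel:
    """Predicts a plausible consequence of an observation (stubbed out here)."""
    def predict(self, observation: str) -> str:
        return f"predicted consequence of '{observation}'"

class Critic:
    """Scores a predicted outcome; a real system would learn this, we just stub it."""
    def evaluate(self, prediction: str) -> float:
        return 0.0 if "collision" in prediction else 1.0

class Configurator:
    """Wires the modules together for the task at hand and produces a decision score."""
    def __init__(self) -> None:
        self.memory = ShortTermMemory()
        self.world_model = WorldModel()
        self.critic = Critic()

    def step(self, observation: str) -> float:
        self.memory.store(observation)
        prediction = self.world_model.predict(observation)
        return self.critic.evaluate(prediction)

agent = Configurator()
print(agent.step("car ahead braking"))
```

The sketch only illustrates the division of labor the talk describes: memory holds recent context, the world model predicts, the critic scores, and the configurator coordinates the rest.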