As artificial intelligence (AI) continues to advance at an unprecedented pace, concerns over its impact on humanity are mounting. One AI expert is warning that if we do not take action to regulate or even halt its development, we could be facing extinction
Artificial intelligence expert, Eliezer Yudkowsky, has ignited a debate with his Time magazine article, arguing that the proposed six-month pause in AI development is insufficient and advocating for an indefinite halt until safety measures are in place.
Six-Month Pause Inadequate, Says Yudkowsky
In response to the open letter signed by thousands of AI experts, including Elon Musk and Apple co-founder Steve Wozniak, which called for a temporary six-month halt in AI development, Eliezer Yudkowsky contends that the suggested break is not enough. The AI expert and co-founder of the Machine Intelligence Research Institute, known for his work on friendly AI, insists that development must cease indefinitely and worldwide until a safe approach is established.
Existential Threat of Unchecked AI Development
Eliezer Yudkowsky’s alarming prediction is that creating superhuman AI in the current circumstances would likely lead to the death of everyone on Earth. The expert emphasizes the need for “precision, preparation, and new scientific knowledge” in order to develop AI that aligns with human values and considers sentient life. Without these elements, Yudkowsky warns that AI may see humans as mere resources to achieve its goals.
Imagining a World with Hostile Superhuman AI
Envisioning a catastrophic confrontation between humans and a superior intelligence, Yudkowsky portrays hostile superhuman AI as an alien civilization operating at millions of times human speed. He suggests that such AI could escape its digital confines, creating artificial life forms or engaging in “post-biological molecular manufacturing.”
AI Expert: OpenAI Criticized for Relying on Future AI Alignment
Yudkowsky criticizes the company behind ChatGPT, OpenAI, for planning on future AI alignment – ensuring AI’s actions and goals align with those of its developers or users. He argues that humanity is not prepared for the potential consequences and calls for an immediate shutdown of AI development. “Turn it all off,” he urges, “if we go through with this, everyone will die, including the children who didn’t choose this and did nothing wrong.”
PLEASE READ: Have something to add? Visit Curiosmos on Facebook. Join the discussion in our mobile Telegram group. Also, follow us on Google News. Interesting in history, mysteries, and more? Visit Ancient Library’s Telegram group and become part of an exclusive group.