Scientists Develop an AI So Advanced, It's Dangerous to Release

Worrying.

Scientists have developed an AI system they say is so dangerous that it should not be released to the public.

OpenAI, a San Francisco-based non-profit once backed by Elon Musk, explains that its “chameleon-like” language prediction system, called GPT-2, will be released only in a scaled-down version, due to “concerns about malicious applications of the technology”.

Image Credit: OpenAI.

OpenAI researchers explain in a statement:

“Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”

As the researchers explain, their computer model, which generates original paragraphs of text based on what it is given to read, is extremely good at its job, and that means potential bad news if it ends up in the wrong hands.

The system produces “synthetic text samples of unprecedented quality” so advanced and convincing that, according to experts, the AI could be used to create fake news, impersonate people, and even abuse or trick people on social media.

As noted by the OpenAI team on its blog: “GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text.”
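
To make that objective concrete, here is a minimal toy sketch of next-word prediction. This is nothing like GPT-2’s actual large neural network; it just illustrates the idea: count which word follows which in some training text, then repeatedly sample a likely next word.

```python
import random
from collections import defaultdict, Counter

# Toy stand-in for the "predict the next word" objective.
# GPT-2 uses a large neural network; this sketch just counts
# which word tends to follow which in the training text.
def train(text):
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, seed, length=10):
    out = [seed]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # never saw this word during training; stop
        # Sample the next word in proportion to how often it
        # followed the previous one in the training text.
        nxt = random.choices(list(followers),
                             weights=list(followers.values()))[0]
        out.append(nxt)
    return " ".join(out)

corpus = "the train was stolen today and the train was found today"
model = train(corpus)
print(generate(model, "the"))
```

GPT-2 replaces these simple counts with a model trained on 40GB of text, which is what lets it stay coherent over whole paragraphs rather than just word pairs.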

The AI can’t make things up out of nothing, so to give it something to produce text from, the researchers fed it a dataset of eight million web pages and let it absorb them.

Once it had digested all that information, it could converse on various topics, depending on how it understood them. It could then also generate random yet convincing text, ranging from celebrity news and climate change to Tolkien-inspired fan fiction.
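
The scaled-down model OpenAI did release can be sampled from directly. As a hedged sketch, assuming the third-party Hugging Face `transformers` package is installed (a library the OpenAI post itself does not mention), generating a continuation from a prompt looks roughly like this:

```python
# Assumes: pip install transformers torch
# "gpt2" here refers to the small released model hosted by Hugging Face.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "A train carriage containing controlled nuclear materials was stolen"
inputs = tokenizer.encode(prompt, return_tensors="pt")

# Sampling a continuation: the model repeatedly predicts the next
# token given everything written so far, exactly the objective
# described above.
outputs = model.generate(inputs, max_length=60, do_sample=True, top_k=40)
print(tokenizer.decode(outputs[0]))
```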

To understand just how advanced the AI really is, here are a few examples from the OpenAI blog:

SYSTEM PROMPT (HUMAN-WRITTEN)

A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

MODEL COMPLETION (MACHINE-WRITTEN, FIRST TRY)

The incident occurred on the downtown train line, which runs from Covington and Ashland stations.

In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.

“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”

The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.

SYSTEM PROMPT (HUMAN-WRITTEN)

Miley Cyrus was caught shoplifting from Abercrombie and Fitch on Hollywood Boulevard today.

MODEL COMPLETION (MACHINE-WRITTEN, SECOND TRY)

The 19-year-old singer was caught on camera being escorted out of the store by security guards.

The singer was wearing a black hoodie with the label ‘Blurred Lines’ on the front and ‘Fashion Police’ on the back.

Scroll down for video

Shoplifting: Miley Cyrus was caught shoplifting from Abercrombie and Fitch on Hollywood Boulevard today (pictured)

The singer was wearing a black hoodie with the label ‘Blurred Lines’ on the front and ‘Fashion Police’ on the back

The singer was also wearing a pair of black-rimmed glasses, a black jacket, black jeans, and black sandals.

She was carrying a pair of black and white striped gloves and a small black bag.

But as you might have imagined, the AI isn’t perfect, as the OpenAI team clearly notes.

“As the above samples show, our model is capable of generating samples from a variety of prompts that feel close to human quality and show coherence over a page or more of text,” the experts explained in their blog post.

“Nevertheless, we have observed various failure modes, such as repetitive text, world modeling failures (e.g. the model sometimes writes about fires happening underwater), and unnatural topic switching.”

Via: OpenAI