Nonprofit company OpenAI declines to release research publicly for fear of abuse

Elon Musk-Backed AI's New Fake Text Generator Is Too Dangerous to Release
We are all aware of the problem of fake news, online and elsewhere.

The Elon Musk-backed AI company claims it has built a text generator that is too dangerous to release.

What is it all about?

OpenAI has developed an AI system that can create impressively convincing fake news content. But the group is too afraid to release it publicly, fearing misuse.

They’re letting researchers see a small part of their work.

So we cannot say they are hiding it completely. But such fear from a research group is quite unusual.

The developers used 40GB of data pulled from 8 million web pages to train the GPT-2 software. That's ten times the amount of data they used for the first version of GPT.

This time they pulled the dataset together by trawling through Reddit, selecting links to articles that had received at least three upvotes. When the training process was complete, they found that the software needed only a small amount of text to continue writing on its own.
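To make that upvote filter concrete, here is a minimal sketch in Python of what such a selection step could look like. This is purely illustrative: the record layout, the `KARMA_THRESHOLD` name, and the sample links are assumptions, and OpenAI has not released its actual data pipeline.

```python
# Hypothetical sketch of karma-based link filtering, as described above.
# The records and threshold are illustrative; OpenAI's real pipeline
# (the "WebText" dataset) was not released.
KARMA_THRESHOLD = 3

reddit_posts = [
    {"url": "https://example.com/article-a", "karma": 12},
    {"url": "https://example.com/article-b", "karma": 1},
    {"url": "https://example.com/article-c", "karma": 5},
]

# Keep only outbound links whose Reddit posts met the karma threshold.
selected_urls = [p["url"] for p in reddit_posts if p["karma"] >= KARMA_THRESHOLD]
print(selected_urls)  # ['https://example.com/article-a', 'https://example.com/article-c']
```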

The software has trouble with "highly technical or esoteric types of content," but with a more conversational type of writing it generated "reasonable samples" 50 percent of the time.

“Our model, called GPT-2, was trained simply to predict the next word in 40GB of Internet text,” OpenAI writes in a new blog post. “Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”
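"Predicting the next word" is the entire training objective. As a toy illustration (not OpenAI's code), the sketch below uses a tiny bigram counter and a made-up corpus to stand in for the huge trained model: given a word, it guesses the most likely word to follow, based on what it has seen.

```python
# Toy illustration of the next-word-prediction objective GPT-2 is trained on.
# A tiny bigram counter stands in for the real model; the corpus is made up.
from collections import Counter, defaultdict

corpus = "the clocks were striking thirteen and the clocks kept striking".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))     # -> "clocks"
print(predict_next("clocks"))  # -> "were" ("kept" is tied; first seen wins)
```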

AI is good, but risky

OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT-2, is so good, and the risk of malicious use so high, that it is breaking from its normal practice of releasing the full research to the public. Instead, it wants to allow more time to discuss the ramifications of this technological discovery.

How does it work? Here is one example.

The software was supplied with this paragraph:

“In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.”

Based on those two sentences, it was able to continue writing a news story for another nine paragraphs in a fashion that could seemingly have been written by a human being.

Here are the next few sentences that were produced by the machine:

“The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.”
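Since OpenAI did release a much smaller model, anyone can reproduce this kind of prompt-and-continue experiment. Below is a rough sketch using the third-party Hugging Face transformers library (not mentioned in the article); the sampling settings are illustrative, not the ones OpenAI used for the unicorn demo.

```python
# Sketch of prompting the small, publicly released GPT-2 model to continue
# a passage. Requires: pip install transformers torch.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # the small released model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("In a shocking finding, scientist discovered a herd of unicorns "
          "living in a remote, previously unexplored valley, in the Andes Mountains.")
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation token by token; top-k sampling keeps the text varied.
output = model.generate(
    input_ids,
    max_length=150,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```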

The Guardian was able to take the software for a test run and tried out the first line of George Orwell’s Nineteen Eighty-Four: “It was a bright cold day in April, and the clocks were striking thirteen.”

The AI program picked up on the tone of the passage and proceeded with its own dystopian science fiction:

“I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”

What a story!

Can you imagine what such a system could do with, for example, a presidential candidate's biography?

Concerns like these are why OpenAI says it is publicly releasing only a very small portion of the GPT-2 sampling code. It is not releasing the dataset, the training code, or the GPT-2 model weights.

The OpenAI blog post explains:

“We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.”

Fake news is the obvious potential downside. Another is the AI's unfiltered nature: it is trained on the internet, so it is not hard to prompt it into generating biased text, conspiracy theories, and so on.

“We need to perform experimentation to find out what they can and can’t do,” said Jack Clark, the nonprofit company’s head of policy. “If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”

Yes, please keep this AI away from misuse for a bit more time.

The bottom line

AI, artificial intelligence, can be extremely useful, from everyday life to applications in the stock market. Many of the modern tools we use every day already contain some AI. It is a tech geek's dream to implement AI everywhere, but that isn't possible. Some things still have to be done by humans.
