
Elon Musk-Backed Software Can Churn Out Fake News Stories And Is “Too Dangerous To Release”

Courtesy of ZeroHedge.

The ability of technology to spread disinformation has been a favorite talking point of the left since the 2016 election, and now it appears that environmentally-conscious poster-boy Elon Musk is contributing to the problem. OpenAI, a company co-founded by Musk, has rolled out a piece of software that can produce real-looking fake news articles after being given just a few pieces of information to work with.

An example of this was recently reported by the technology website Stuff, which detailed a demonstration published last Thursday. The system was given the sample text: "A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown."

From there, the software was able to write a seven-paragraph news story, including quotes from government officials, with the only catch being that the story was 100% made up.

New York University computer scientist Sam Bowman said: "The texts that they are able to generate from prompts are fairly stunning. It's able to do things that are qualitatively much more sophisticated than anything we've seen before."

While OpenAI claims it is "aware of the concerns around fake news," its co-founder Musk has been vehemently outspoken about the quality of news coverage he and his portfolio of companies have received over the last few years. Back in May of 2018, Musk was so concerned with truth in news that he famously came up with the idea of creating a site where the public could "rate the core truth" of any article and track the credibility score of its author.

The software is trained on language modeling, which involves predicting the next word or piece of text based on knowledge of all the previous words, the same way auto-complete works on your phone, in your Gmail account, or in Skype. The software can also be used for translation and question answering. On the positive side, the software can help creative writers generate ideas or dialogue, and according to the company it can also be used to check for grammatical errors and hunt for bugs in software code.
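
To make the idea concrete, here is a toy sketch of the "predict the next word" objective described above. It uses a simple word-pair (bigram) lookup table, which is nothing like GPT-2's large neural network; the tiny corpus and every name in it are invented for illustration.

```python
# Toy language model: predict the next word from the one before it,
# the same objective (at a vastly smaller scale) that GPT-2 is trained on.
from collections import Counter, defaultdict

# A tiny invented corpus; GPT-2 used 40GB of web text instead.
corpus = (
    "the train carriage was stolen today . officials said the carriage "
    "contained nuclear materials . the train was found empty ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word, like a phone's auto-complete."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

def generate(seed, length=12):
    """Greedily continue a prompt one predicted word at a time."""
    words = [seed]
    for _ in range(length):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))
```

GPT-2 replaces the lookup table with a neural network that weighs the entire preceding passage, which is why it can sustain tone and topic for whole paragraphs instead of one word at a time.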

As Gizmodo notes, the researchers used 40GB of data pulled from 8 million web pages to train the GPT-2 software. That’s ten times the amount of data they used for the first iteration of GPT. The dataset was pulled together by trawling through Reddit and selecting links to articles that had more than three upvotes. When the training process was complete, they found that the software could be fed a small amount of text and convincingly continue writing at length based on the prompt. It has trouble with “highly technical or esoteric types of content”, but when it comes to more conversational writing it generated “reasonable samples” 50 percent of the time.
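
As a rough illustration of that curation step, the snippet below filters a hypothetical JSON-lines dump of Reddit submissions by upvote count. The file name and field names ("url", "score") are assumptions made for the example, not OpenAI's actual pipeline or data format.

```python
# Sketch: keep only outbound links whose Reddit submissions earned more
# than three upvotes, using the upvote count as a cheap proxy for quality.
import json

def collect_quality_links(path, min_upvotes=3):
    """Yield de-duplicated URLs from a JSON-lines dump of submissions."""
    seen = set()
    with open(path) as f:
        for line in f:
            post = json.loads(line)
            url = post.get("url", "")
            # Skip self-posts, duplicates, and low-scoring submissions.
            if not url.startswith("http") or url in seen:
                continue
            if post.get("score", 0) > min_upvotes:
                seen.add(url)
                yield url

# Hypothetical usage; the dump file is an assumption for illustration.
# links = list(collect_quality_links("reddit_submissions.jsonl"))
```

The appeal of the approach is that Reddit users have already done the quality filtering for free: a link with a few upvotes is at least readable, which matters when you are scraping 8 million pages.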

In one example, the software was fed this paragraph:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

Using those two sentences, the AI was able to continue writing this whimsical news story for another nine paragraphs in a fashion that could have believably been written by a human being. Here are the next few paragraphs the machine produced:

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

GPT-2 was also remarkably good at adapting to the style and content of the prompts it’s given. The Guardian took the software for a spin and tried out the first line of George Orwell’s Nineteen Eighty-Four: “It was a bright cold day in April, and the clocks were striking thirteen.” The program picked up on the tone of the selection and proceeded with some dystopian science fiction of its own:

I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.

The OpenAI researchers found that GPT-2 performed very well when it was given tasks that it wasn’t necessarily designed for, like translation and summarization. These excellent results have freaked the researchers out. One concern they have is that the technology would be used to turbo-charge fake news operations. The Guardian published a fake news article written by the software along with its coverage of the research. The article is readable and contains fake quotes that are on topic and realistic. The grammar is better than a lot of what you’d see from fake news content mills. And according to The Guardian’s Alex Hern, it only took 15 seconds for the bot to write the article.

The good news is that journalists, already threatened by the collapse of the ad-revenue-supported business model, aren't about to go extinct just yet.

OpenAI has decided not to publish or release sophisticated versions of its software as a precaution, but it has created a tool that lets people experiment with the algorithm and see what type of text it can generate. The company says that the system's abilities are not consistent enough to "pose an immediate threat".
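
OpenAI's experimentation tool isn't named here, but as one convenient way to try the smaller model that was publicly released, the third-party Hugging Face transformers library can load it under the model identifier "gpt2". This is a suggestion of ours, not the tool OpenAI built.

```python
# Trying the publicly released small GPT-2 model via the third-party
# Hugging Face "transformers" library (pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ("A train carriage containing controlled nuclear materials was "
          "stolen in Cincinnati today. Its whereabouts are unknown.")

# Sampling means each run continues the prompt with a different fake story.
result = generator(prompt, max_new_tokens=60, do_sample=True)
print(result[0]["generated_text"])
```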

Other potentially abusive uses the researchers listed included automating phishing emails, impersonating others online, and automatically generating harassment. But they also believe that there are plenty of beneficial applications to be discovered. For instance, it could be a powerful tool for developing better speech recognition programs or dialogue agents.

OpenAI plans to engage the AI community in a dialogue about its release strategy and hopes to explore potential ethical guidelines to direct this type of research in the future, although we are confident that various "deep state" organizations already possess a similar, if not far more advanced, version.

Musk helped kickstart the nonprofit research organization in late 2015 along with Sam Altman. The grants from Musk's foundation that helped start the company became a topic of controversy in a recent article we published about his charitable foundation.
