Company Builds AI Text Generator Too Dangerous for Public Release
As the age of automation draws nearer, the gatekeepers of this exciting, yet frightening, technology are being forced to make ethical decisions about whether their AI will better humanity or cause its downfall. And just recently OpenAI, the machine learning research group backed by Elon Musk, Sam Altman, and Reid Hoffman, decided one of its creations was too dangerous to release in full after realizing the AI was too good at generating “deepfakes for text.”
The nonprofit research firm’s GPT-2 text generator was trained on roughly eight million web pages shared on Reddit – about 40 GB of text – to produce a program that completes any input sentence into a full-length news article. A fake news article.
A demonstration of the technology can be seen in the video below, posted by the Guardian, which shows what happened when the first sentence of an article was fed to the bot. Within seconds, the tool generated a fabricated paragraph that reads in a journalistic tone and sounds like it could actually be reporting legitimate news.
Entering the opening lines of Jane Austen’s Pride and Prejudice and George Orwell’s 1984 had the same effect – without hesitation, the bot filled in the next paragraphs with sentences that read fluently and made perfect sense, though they weren’t the sentences from the books.
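For readers curious what this prompt-and-continue behavior looks like in code, here is a minimal sketch, assuming the much smaller GPT-2 model that OpenAI did release publicly and the open-source Hugging Face transformers library; the model size, prompt, and sampling settings below are illustrative choices, not details of the withheld full system:

```python
# Minimal sketch: completing a prompt with the publicly released small GPT-2.
# Assumes `pip install transformers torch`; all settings here are illustrative.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # the released small model
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Feed an opening line, much as the Guardian did with Pride and Prejudice.
prompt = "It is a truth universally acknowledged, that a single man"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; top-k sampling keeps the text fluent but varied.
output_ids = model.generate(
    **inputs,
    max_length=100,              # prompt plus continuation, in tokens
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # silences the missing-pad warning
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Even this scaled-down model will continue the prompt with fluent, entirely invented prose; by OpenAI’s account, the withheld full model simply does the same thing far more convincingly.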
In a world rife with fake news, ambiguity, and attempts to mislead through media, Musk and his colleagues must have immediately realized how GPT-2 could exacerbate these problems.
Just imagine if someone could take an entire article and replace it with a deepfake generated by an AI algorithm such as this.
It would be even worse if only small snippets of an article were replaced with the bot’s output: the rest of the document would look identical to the real thing, while the slight variations would be imperceptible to anyone who didn’t know to verify what they were reading.
But then there are also the really fun applications of this technology, which Musk and company certainly played around with for a while – like the fake news article they had the bot create outlining the unprecedented discovery of a unicorn, which the bot named Ovid’s Unicorn.
But while OpenAI had the prescience to withhold its full model from the public, it seems this level of technology could easily be recreated by another research group with fewer ethical reservations.
The journalists at the Guardian also decided to feed the first two paragraphs of their article on GPT-2 to the bot itself, curious what it might say. The results were entirely fabricated, and they weren’t the eerie display of sentience one might have expected from an intelligent bot, though it did say it hoped its creators would release a safe and useful version of it to the public.
Nice try, GPT-2, but we’re not falling for your devious tricks.
In addition to deciding its text-generating tool was too dangerous to be publicly released, OpenAI recently lost Musk, who left the group over what he said were disagreements with some of the decision making occurring there. It’s unclear whether this had anything to do with the conversations around GPT-2.
But amid all the fear surrounding the repercussions this technology could have for news and media, there is certainly one group eagerly awaiting what these bots make possible: students looking for an easy way out of writing that English essay. Their poor teachers…
For more on the rise of artificial intelligence, check out this episode of Deep Space:
Is The Universe One Big Interconnected Neural Network?
Is the universe an interconnected neural network? A new way of thinking is emerging about how different areas of physics could be connected into a single model, one that ties traditional scientific thought together with new ideas in quantum physics.
For years physicists have tried to unify classical and quantum physics. Classical physics goes back to the time of Sir Isaac Newton and is based on mechanical, physical equations: the idea that everything operates like clockwork, predictable and knowable.
Quantum physics, on the other hand, looks at microscopic, subatomic scales, where matter behaves as particles, waves, and force fields. But the fundamental laws of physics at this quantum level are the antithesis of behavior at the classical level. Instead of certainty, you have uncertainty. So how do we connect these two views with a so-called “Theory of Everything”?
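To make that uncertainty concrete, the textbook example is Heisenberg’s uncertainty principle (a standard result of quantum mechanics, included here for illustration rather than taken from the episode), which limits how precisely a particle’s position and momentum can be known at the same time:

$$ \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2} $$

The more sharply the position Δx is pinned down, the fuzzier the momentum Δp must become – exactly the kind of limit Newton’s clockwork picture never imposes.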
A recent paper by University of Minnesota Duluth physics professor Vitaly Vanchurin argues that this seeming paradox can be resolved if the universe is, at its most fundamental level, a neural network.