Company Builds AI Text Generator Too Dangerous for Public Release
As the age of automation draws nearer, the gatekeepers of this exciting yet frightening technology are being forced to make ethical decisions about whether their AI will better humanity or hasten its downfall. Recently, the machine learning research group OpenAI, backed by Elon Musk, Sam Altman, and Reid Hoffman, decided that one of its creations was too dangerous to release in full: its AI had become too good at generating "deepfakes for text."
The nonprofit research firm's GPT-2 text generator was trained on millions of web pages shared on Reddit, about 40 GB of text in all, producing a program that can continue any input sentence into what reads like a full-length news article, albeit an entirely fabricated one.
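GPT-2 itself is a large neural network, but the core idea behind this kind of text generator, predicting a plausible next word from the words seen so far, can be sketched with much simpler statistics. The toy Markov-chain model below is purely illustrative (the corpus, function names, and sampling scheme are all invented for this sketch, not taken from OpenAI's work):

```python
import random
from collections import defaultdict

def build_model(corpus):
    """Map each word to the list of words that follow it in the corpus.

    Repeats are kept deliberately: a word that follows "the" twice is
    twice as likely to be sampled, giving a crude probability model.
    """
    model = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, seed, length=10, rng=None):
    """Continue `seed` by repeatedly sampling a statistically likely next word."""
    rng = rng or random.Random(0)  # fixed seed so the sketch is reproducible
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the last word never appeared mid-corpus
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran to the door"
model = build_model(corpus)
print(generate(model, "the"))
```

The output is fluent-looking but meaningless recombination of the training text; GPT-2's leap was doing the same next-word prediction with a model expressive enough to stay coherent across whole paragraphs.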
A demonstration of the technology can be seen in the video below, posted by the Guardian, which shows what happened when the first sentence of an article was fed to the bot. Within seconds, the tool generated a fabricated paragraph written in a journalistic tone that sounded like it could be reporting legitimate news.
Entering the opening lines of Jane Austen's Pride and Prejudice and George Orwell's 1984 had the same effect: without hesitation, the bot filled in the next paragraphs with sentences that read fluently and made perfect sense, though they weren't the sentences from the books.
In a world rife with fake news, ambiguity, and attempts to mislead through media, Musk and his colleagues must have immediately realized how much GPT-2 could exacerbate these problems.
Just imagine if someone could take an entire article and replace it with a deepfake generated by an AI algorithm like this one.
It would be even worse if only small snippets of an article were replaced with the bot's fabricated output, creating slight variations imperceptible to readers who wouldn't know to verify what they were reading, while the rest of the document looked identical to the real thing.
But then there are also the genuinely fun applications of this technology, which Musk and company certainly played around with for a while. Like the fake news article the bot created outlining the unprecedented discovery of a unicorn it named Ovid's Unicorn.
But while OpenAI had the prescience to withhold its research from the public, technology at this level could easily be recreated by another research group with fewer ethical reservations.
Out of curiosity, the journalists at the Guardian also fed the first two paragraphs of their article on GPT-2 back to the bot to see what it would say. The results were entirely fake, and not quite the eerie display of sentience one might have expected from an intelligent bot, though it did say it hoped its creators would release a safe and useful version of it to the public.
Nice try, GPT-2, but we're not falling for your devious tricks.
In addition to deciding the text-generating tool was too dangerous for public release, Musk recently left OpenAI, citing disagreements with some of the decision-making there. It's unclear whether this had anything to do with the conversations around GPT-2.
But amid all the fear surrounding the repercussions this technology could have for news and media, there is certainly one group looking forward to what these bots promise: students looking for an easy way out of writing that English essay. Their poor teachers…
For more on the rise of artificial intelligence, check out this episode of Deep Space:
Cymatics Could Help Surgeons Identify Cancer Cells for Tumor Removal
The study of cymatics has fascinated researchers for years. Now, one scientist has found a practical way to use the phenomenon to enhance targeted cancer treatments.
The term cymatics, referring to the spontaneous geometric patterns produced when sound encounters water or particulate matter on a surface, was coined by Swiss researcher Hans Jenny in 1967. Jenny documented the patterns that appeared when he placed sand or fluid on a metal plate connected to a sonic frequency oscillator.
Today, acoustic-physics researcher John Stuart Reid has partnered with Dr. Sungchul Ji at Rutgers University to apply cymatic imaging to distinguishing cancer cells from healthy cells. The two hope to develop the technology to let surgeons target cancerous cells more precisely when removing tumors.
“So, what we do with the Cymascope instrument is to literally imprint sound onto the surface and indeed the sub-surface of pure, medical-grade water and thereby make it visible with specific lighting techniques. It’s actually quite difficult for a surgeon to remove a tumor in its entirety,” Reid said.
While this type of technology would aid any procedure requiring the surgical removal of a tumor, it would be particularly groundbreaking for brain surgery and operations in other highly sensitive areas, where healthy cells must be carefully navigated.