February 15th 2019
The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed "deepfakes for text" – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.
OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT-2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public, in order to allow more time to discuss the ramifications of the technological breakthrough.
[Video: How OpenAI writes convincing news stories and works of fiction]
At its core, GPT-2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output and the wide variety of potential uses.
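The core idea of predicting what comes next can be illustrated with a toy sketch. The snippet below is not GPT-2 (which uses a large transformer neural network trained on millions of web pages); it is a minimal bigram model, assumed here purely to show the "feed it text, sample the next word" loop the article describes. All function names and the sample corpus are illustrative inventions.

```python
import random

def train_bigrams(text):
    """Map each word to the list of words observed immediately after it."""
    words = text.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, seed, length=10, rng=None):
    """Extend `seed` by repeatedly sampling a plausible next word."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = seed.split()
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:  # no continuation ever observed for this word
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Train on a tiny corpus, then continue a prompt word by word.
corpus = "it was a bright cold day in april and the clocks were striking thirteen"
model = train_bigrams(corpus)
print(generate(model, "it was", length=5))
```

Where this toy model can only parrot word pairs it has literally seen, GPT-2's statistical model of language is rich enough to continue a prompt in a matching style and subject, which is what makes the quality described below so striking.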
When used simply to generate new text, GPT-2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.
Feed it the opening line of George Orwell's Nineteen Eighty-Four – "It was a bright cold day in April, and the clocks were striking thirteen" – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with:
“I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”
Source: The Guardian