The latest press release from OpenAI, a research lab co-founded by billionaire entrepreneur Elon Musk, reminds us that fake news is a phenomenon we all have to care about.
OpenAI has said it won't release its latest text-generating algorithm, fearing that it is good enough to be genuinely dangerous or harmful.
Artificial text at its finest
The name of this new natural language model is GPT-2, and it is trained to predict the next word in a passage of text, learning from a 40-gigabyte database of web pages. The researchers did an outstanding job: the model not only predicts the next word but has developed the ability to mimic the stylistic attributes of the text it is given.
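The core objective described above, predicting the next word from the words that came before, can be illustrated with a toy bigram model. This is a drastic simplification (GPT-2 is a large transformer network, not a word-count table, and the corpus below is invented for illustration), but the underlying training objective is the same idea:

```python
import random
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a tiny made-up corpus. GPT-2 learns this mapping with a
# large neural network over 40 GB of text, but the objective --
# predict the next token given the preceding context -- is analogous.
corpus = "the cat sat on the mat the cat chased the dog".split()

# Tally the successors of each word.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word after `word`, or None if unseen."""
    counts = successors.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- the most frequent successor of "the"
```

Chaining such predictions word by word is how a language model generates whole passages; the stylistic mimicry OpenAI reports comes from the model conditioning on much longer contexts than a single preceding word.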
All this means that a program from a lab known for open-sourcing its work has reached a level we have previously seen only in blockbuster sci-fi films. OpenAI is not publishing the full version of GPT-2 because the company worries about potential abuse.
The new language model seems to be too convincing; the artificial intelligence does the job so well that human-written and machine-written text become nearly indistinguishable.
Benefits and hazards
There is a lot at stake with this new technology. On the one hand, the use of AI in the realm of language provides numerous advantages.
It makes many things faster and easier for users: chatbots are increasingly capable of dialogue, and speech recognition is crucially important for people living with disabilities, in addition to how much faster and easier these tools make our daily online interactions.
The pros are easy to see, but the darker side of such technology is even easier to imagine. Developments like this can not only contribute to the production of fake news but generate more realistic false information on their own, feeding on the infinitely fertile textual soil of the world wide web.
This is the reason behind the company's decision to (at least) postpone the release of GPT-2 until further investigation. That attitude shows responsibility on the part of the engineers working on it. Jack Clark, OpenAI's policy director, called the decision "a very tough balancing act for us."
Many claim, however, that the decision runs against the firm's open-source ethos. It may be time for both sides to revisit old routines and devise a system that ensures safety while preserving freedom.
We at Interesting Engineering don't use AI to write articles, only well-read human intelligence, which can be detected in the coherent style of our respective authors. Or perhaps I am an even better contender than OpenAI's GPT-2, with a thirst for irony yet to be satiated.