OpenAI, the research lab co-founded by SpaceX and Tesla boss Elon Musk, has developed a generalized language AI designed to answer questions, summarize stories, and translate text. It didn't take long for the developers to realize that the program was also able to produce believable fake news stories that could be used for disinformation.

The scary thing is that OpenAI's tech is still in a developmental stage. Analysts suggest that the AI could write and deliver news that blurs the line between legitimate reporting and fake news. Put simply, OpenAI's program could output content so plausible-looking that thorough research would be needed just to disprove it.
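To make that concrete, here is a minimal sketch of how such a generalized language model continues a prompt. The article names neither the system nor the tooling, so this assumes the publicly released GPT-2 weights loaded through the Hugging Face transformers library; the prompt and sampling settings are purely illustrative.

```python
# Minimal sketch: a language model continuing a news-like prompt.
# Assumes the publicly released GPT-2 weights via Hugging Face
# "transformers"; the article itself names neither model nor library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation token by token. The model optimizes for
# plausibility, not truth, which is exactly the disinformation risk.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Each run yields a different, fluent continuation; nothing in the pipeline checks the output against reality.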

People might say that an AI that could write fake news is a far cry from an AI gone wild and hell-bent on world domination and conquest. That may be true, but the dangers of fake news are real and should be taken seriously. After all, human beings rely heavily on news, whether it comes from newsrooms, print, or social media.

People know the dangers that come from believing fake news stories. However, some people are gullible enough to believe everything they read or see on the Internet. Quote a reliable source, mix in some believable data, and some readers will label the content reliable information. These days, people just don't dig deep enough to uncover the truth.

And while social networking sites like Facebook have started cracking down on fake news sites, the fact remains that fake stories still propagate quickly. The sad part is that once an idea is planted, others follow suit. Whether it is for a more sinister master plan, for ad revenue from page hits, or simply to draw attention to themselves, some people will willingly post unverified content.

The good news is that there is still time to develop new algorithms that could prevent AIs from writing fake but convincing material. Developers believe they can write and enforce safeguards in their algorithms to keep their AIs from doing the unexpected.
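One commonly proposed safeguard is statistical detection of machine-generated text. As an illustration only, and not a safeguard the article attributes to OpenAI or anyone else, the sketch below scores a passage by its perplexity under GPT-2 itself; machine-generated text tends to look less "surprising" to the model than human writing does.

```python
# Hedged sketch of one proposed safeguard: flagging text that a language
# model finds suspiciously predictable. Illustrative heuristic only;
# not a safeguard the article attributes to any developer.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return its cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# Lower perplexity hints at (but never proves) machine authorship.
print(perplexity("The city council approved the new budget on Tuesday."))
```

Detectors like this are easy to evade and error-prone, which is part of why caution about such safeguards is warranted.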

The problem is that these developers are treading a relatively new field, a sub-branch of science that they don't fully understand. And, of course, as with evolution, there is no way to tell for sure whether these AIs will behave the way they are expected to, or whether they will evolve on their own.

From practical applications to science fiction, artificial intelligence (AI) has become a regular fixture in human consciousness. Artificial intelligence definitely has its advantages, but the dangers of developing an AI that could behave in unexpected ways are real.

There are literally thousands of works in literature that explore the possibilities of AI going bad. While these materials are purely speculative fiction, there is an inherent grain of truth in them.

While the AI will probably lack some critical human characteristics, such as creativity and instinct, the fact that it can output news, fake or real, could mean fewer wordsmiths in the future. And while that pales in comparison to a Cylon revolution or a Skynet takeover, the real-life danger of believable fake news is still quite real.