A program run by the U.S. Defense Advanced Research Projects Agency (DARPA) has announced the development of artificial intelligence (AI) tools that can automatically spot ultra-realistic fake videos called "Deepfakes."

An odd coinage that combines "deep learning" and "fake," Deepfake is an artificial intelligence-based human image synthesis technique: it combines and superimposes existing images and videos onto source images or videos.

An incredible Deepfake featuring former U.S. president Barack Obama appearing to speak words voiced by American actor and comedian Jordan Peele can be viewed here: https://www.youtube.com/watch?time_continue=54&v=cQ54GDm1eL0.

The first forensics tools that can spot Deepfakes in real time were developed by a DARPA program called Media Forensics (MediFor). It took researchers two years to crack the problem, and the result is a suite of AI tools that can automatically spot AI-created fakes, according to the MIT Technology Review.

MediFor's work on forensics tools actually predates the widely reported Deepfakes phenomenon: the program began tackling the broader problem of media manipulation about two years ago and turned its attention to AI-produced forgery only recently.

MediFor has brought researchers to the point of being able to spot subtle clues that Generative Adversarial Networks (GANs) leave in manipulated videos and images, which allows them to detect the presence of alterations, said Matthew Turek, who runs MediFor.

GANs are a class of AI algorithms used in unsupervised machine learning. They're implemented as a system of two neural networks, a generator and a discriminator, vying with each other in a zero-sum game framework. GANs can generate photographs that often look authentic to human observers at first glance.
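To make the zero-sum game concrete, here is a minimal one-dimensional sketch of the idea, not MediFor's or any production system's code: a generator (a scalar affine map) tries to produce samples resembling a Gaussian data distribution, while a logistic-regression discriminator tries to tell real samples from generated ones. All parameters and hyperparameters are illustrative assumptions, with gradients written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate (an assumed toy distribution).
REAL_MEAN, REAL_STD = 4.0, 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c);
# each is just a pair of scalars so the gradients stay readable.
a, b = 1.0, 0.0      # generator parameters
w, c = 0.0, 0.0      # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.standard_normal(batch)
    x_fake = a * z + b
    x_real = rng.normal(REAL_MEAN, REAL_STD, batch)

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) (the non-saturating GAN loss).
    d_fake = sigmoid(w * x_fake + c)
    dx = (1 - d_fake) * w          # gradient of log D(x_fake) w.r.t. x_fake
    a += lr * np.mean(dx * z)
    b += lr * np.mean(dx)

samples = a * rng.standard_normal(1000) + b
print(f"generator mean ~ {samples.mean():.2f} (real mean {REAL_MEAN})")
```

After training, the generator's output mean drifts toward the real data's mean: each network's gain is the other's loss, which is the adversarial dynamic that also leaves the statistical fingerprints forensics tools look for.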

GANs are relatively new but have stunned the machine-learning community. They have been used to manipulate images of actors, actresses and celebrities, and they can easily be used to change facial expressions such that a smile becomes a frown.

One clue that's proved helpful in detecting Deepfakes is a person's eyelids. Researchers discovered that faces made with Deepfake techniques rarely, if ever, blink, and when they do blink, it looks fake. This quirk is caused by Deepfakes being "trained" on photos rather than videos, and by the fact that still photos typically show people with their eyes wide open.
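The blink cue can be turned into a simple heuristic. Assuming a face-landmark tool has already produced a per-frame eye-openness score (such as an eye aspect ratio, where low values mean closed eyes), the hypothetical functions below count blinks as threshold crossings and flag a clip whose blink rate falls far below a normal human rate. The threshold and rate values are illustrative assumptions, not figures from MediFor.

```python
def count_blinks(ear_per_frame, threshold=0.2):
    """Count blinks as falling-edge crossings of the eye-openness threshold."""
    blinks = 0
    below = False
    for ear in ear_per_frame:
        if ear < threshold and not below:
            blinks += 1        # eye just closed: one new blink
            below = True
        elif ear >= threshold:
            below = False      # eye reopened: ready for the next blink
    return blinks

def flag_low_blink_rate(ear_per_frame, fps, min_blinks_per_minute=5.0):
    """Flag a clip as suspicious if it blinks far less often than a person would."""
    minutes = len(ear_per_frame) / fps / 60.0
    rate = count_blinks(ear_per_frame) / minutes
    return rate < min_blinks_per_minute
```

For example, a 60-second clip with no threshold crossings at all would be flagged, while a clip blinking at a typical human rate (roughly 15 to 20 blinks per minute) would pass.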

Other clues that a video might be a Deepfake include unnatural head movements or weird eye colors. These physiological flaws indicate that Deepfakes still have difficulty mimicking video, according to Hany Farid, a leading digital forensics expert at Dartmouth College.

Deepfakes have been used since 2017 to create fake sex videos of celebrities and politicians. In January 2018, Deepfakes reached a terrifying milestone with the launch of a desktop app called "FakeApp" that allows users to easily create and share videos in which faces have been swapped.

FakeApp uses an artificial neural network, a graphics processor and three to four gigabytes of storage space to generate fake videos. The developer of FakeApp said an even more user-friendly version is on the way.