What is Deepfake Technology?

Deepfake is an AI-supported technique for creating realistic-looking fake media content – typically a video – in which a person's face is swapped with someone else's, so that they appear to say and do things they never actually did.

One of its better-known examples is pornographic videos in which a popular celebrity's face is superimposed onto an adult performer's body.

Videos are not the only targeted media: ultra-realistic fake voice content can also be created through Deepfake techniques. Beyond these two, the technology has plenty of other applications as well.

How Does Deepfake Technology Work?

A machine learning (ML) technique known as a generative adversarial network (GAN) is used to make Deepfake videos. A GAN uses two ML models – a generator and a discriminator – that constantly compete against each other.

The generator tries to create a realistic image from a sample data set, while the discriminator attempts to determine whether it is a forgery. If the discriminator fails to detect the forgery – in other words, if the generator fools the discriminator – the discriminator uses the information gathered to become a better judge. Likewise, if the discriminator determines that the image created by the generator is a fake, the generator gets better at creating fake images. This unceasing cycle can continue until the content (image, video, or audio) is no longer noticeably fake to the human eye.
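The adversarial loop above can be sketched in a few lines of code. The following is a hypothetical, deliberately minimal one-dimensional GAN in Python with NumPy – a toy, not a video Deepfake model: a linear generator competes against a logistic-regression discriminator until the generated samples drift toward the real data's distribution. All names and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to imitate: samples near 4.0.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: g(z) = a*z + b, starting far from the real distribution.
a, b = 1.0, 0.0
# Discriminator: logistic classifier d(x) = sigmoid(w*x + c).
w, c = 0.0, 0.0

lr, n = 0.01, 64
for step in range(3000):
    # --- Discriminator update: push d(real) toward 1 and d(fake) toward 0 ---
    xr = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * np.mean(-(1 - dr) * xr + df * xf)
    c -= lr * np.mean(-(1 - dr) + df)

    # --- Generator update: push d(fake) toward 1, i.e. fool the discriminator ---
    z = rng.normal(0.0, 1.0, n)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    a -= lr * np.mean(-(1 - df) * w * z)
    b -= lr * np.mean(-(1 - df) * w)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(round(fake_mean, 1))  # the generated mean ends up near the real mean of 4.0
```

The same alternating structure – discriminator step, then generator step – is what large image and video GANs use, just with deep networks in place of these two-parameter models.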

The larger the training data set, the better the GAN works. That's why much of the early Deepfake footage tends to feature famous politicians and Hollywood celebrities: there are many videos of them that a GAN can use to create very realistic Deepfakes.

The Origins

As mentioned above, one of the better-known examples of Deepfake content is pornographic videos featuring the superimposed faces of popular celebrities; these were also among the first Deepfake videos. Nicolas Cage memes were also popular, among other fun inventions.

It was 2017 when the word Deepfake became synonymous with this technique, thanks to a Reddit user who went by the name "Deepfakes". The user was joined by others at the r/deepfakes subreddit, where they shared their fake videos (mostly starring politicians and actors) with the world. The site administrators eventually shut the subreddit down, but by then the technology had become well known and widely available.

GANs themselves were created by Ian Goodfellow, who, along with his colleagues at the University of Montreal, introduced the concept in 2014.

What are the Dangers of Deepfake Technology?

Content manipulation is not a new concept; it could be done before, but it used to require serious skills. There were two important things you needed to have:

  • a really powerful computer
  • a really good reason (or just too much free time) to make fake content

Deepfake creation software such as FakeApp doesn't cost a penny, is easy to find and access, and doesn't require a powerful computer to run. And because it does all of the work on its own, you don't need to be a skilled editor to make ultra-realistic Deepfake media.

This is why everyone – celebrities, ordinary people, and governments – is worried about the Deepfake movement. Deepfake creation can be used to falsify almost anything; people with malicious intentions could use it to impersonate others and exploit their friends, families, and colleagues. Careers and lives may be compromised, or even ruined outright, by malicious Deepfakes. Fake videos of world leaders could also be used to spark international incidents, or even wars.

Another reason it's worrying is that important personalities could deny past actions. Because Deepfake content seems so real, anyone could claim a real clip is a Deepfake.

Finding a solution

Although Deepfakes are very close to real, a trained eye can still spot them by paying close attention. A lack of blinking, other missing human nuances, and details that are off – such as wrongly angled shadows – are giveaways that are usually not that hard to spot, especially for a trained eye. But Deepfake technology is evolving; the concern is that at some point it will reach a level where we can no longer tell the difference between a fake and the real thing.
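One of the giveaways mentioned above – the lack of blinking – can even be checked mechanically. The sketch below is a toy illustration in Python with NumPy, using made-up per-frame "eye openness" values rather than a real facial-landmark detector: it counts blinks as brief dips below a threshold, and a clip with no blinks over many frames would be suspicious. The signals, threshold, and frame counts are all illustrative assumptions.

```python
import numpy as np

def blink_count(eye_openness, threshold=0.2):
    """Count blinks in a per-frame eye-openness signal (1.0 = fully open).

    A blink is counted where the signal crosses from open to closed.
    """
    below = np.asarray(eye_openness) < threshold
    # Crossings from open (False) to closed (True) mark the start of a blink.
    return int(np.sum(below[1:] & ~below[:-1]))

# Simulated signals: real footage blinks every ~100 frames; the crude fake never dips.
frames = 900
real = np.ones(frames)
real[50::100] = 0.05           # brief eye closures at regular intervals
fake = np.full(frames, 0.95)   # eyes stay open for the whole clip

print(blink_count(real), blink_count(fake))  # prints: 9 0
```

Real detectors work on the same principle but estimate eye openness from facial landmarks frame by frame, and modern Deepfakes increasingly learn to blink, which is why this heuristic alone is no longer enough.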

Many popular sites including Twitter, Pornhub, and Reddit have tried to get rid of such content but have not been successful so far. Just recently, several prominent US universities and companies including Facebook and Microsoft have formed a consortium behind the Deepfake Detection Challenge (DFDC). This initiative seeks to motivate researchers to develop technologies that can detect if AI has been used to alter a video.

On the more official side, DARPA (the Defense Advanced Research Projects Agency), an agency of the United States Department of Defense, is working with research institutions and the University of Colorado to create a way to spot Deepfakes. Work is ongoing on AI-based countermeasures, but as mentioned above, the technology continues to evolve, so these countermeasures need to keep pace too.

Until we have a solution – something that can help us spot irregularities in such videos – the best we can do is be more observant and less gullible. Don't assume anything until you have done your research on a video, image, or audio clip. This is something we should have been doing already, anyway.
