This Person Does Not Exist: Deep Fakes and Synthetic Truth

See all these people below?

You know what they all have in common?

None of them exist.

I don’t mean they’re dead. I mean, they’ve just never existed.

All of these faces are AI-generated images from

(Refresh the page and a completely new face will be generated.)

Let’s take a closer look.

(You can play the ‘real-or-fake face game’ at

While something may not look quite right in some of them, image synthesis techniques have come a long way.

An extension of this is the deep fake: AI-generated video and audio.


Very brief note on how the technology works

Artificial Intelligence (AI) has been around since the 60s. Since then, there have been alternating hot periods of development and cold periods of stagnation (AI winters). In contrast to rule-based programming, machine learning is a subset of AI in which a computer figures things out on its own from data. When people say ‘AI’ these days, they’re probably referring to machine learning. The three drivers behind AI’s rapid advancement and attention over the past decade are: (i) better algorithms, (ii) more data, and (iii) more and cheaper computing power.
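To make the rule-based vs. machine-learning contrast concrete, here is a minimal sketch in Python. The spam-filter framing and the word-counting heuristic are my own illustration, not anything from a real library: the first function has its rule hard-coded by a programmer, while the second infers a (crude) rule from labelled examples.

```python
from collections import Counter

# Rule-based programming: the programmer hard-codes the decision logic.
def rule_based_is_spam(msg):
    return "free money" in msg.lower()

# Machine learning (toy version): the program infers the rule from
# labelled examples by counting which words occur more often in spam
# than in normal messages.
def train(examples):  # examples: list of (text, is_spam) pairs
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    # A word is a "spam signal" if it appears more often in spam than ham.
    return {w for w in spam_words if spam_words[w] > ham_words[w]}

def learned_is_spam(signals, msg):
    words = msg.lower().split()
    # Flag the message if most of its words look spammy.
    return sum(w in signals for w in words) > len(words) / 2

signals = train([
    ("win free money now", True),
    ("free money offer", True),
    ("meeting at noon", False),
    ("lunch money for school", False),
])
```

Give the learned filter different training examples and it learns a different rule; the hard-coded one never changes unless a programmer rewrites it. Real machine learning replaces the word-counting with statistical models, but the division of labour is the same.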

Generative Adversarial Networks (GANs) are one such algorithmic advancement. This machine learning model has two components: a generator that creates fake images and a discriminator that tries to tell real images from fakes. The two are trained against each other in an adversarial, zero-sum game: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more realistic images. Part of what makes the method so powerful is that the generator supplies the discriminator with an endless stream of training examples, so GANs essentially create their own training data.
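The adversarial loop can be sketched in a few dozen lines. The toy below is my own illustration, not a real image GAN: the "data" is just numbers drawn from a bell curve around 4, the generator is a single learnable shift, the discriminator is a logistic classifier, and the gradients are derived by hand. The structure, though, is the same two-player game described above.

```python
import math
import random

random.seed(0)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Toy 1-D GAN: real data ~ Normal(mean=4), fake data = theta + noise.
# The generator's only parameter is theta; the discriminator is
# D(x) = sigmoid(w*x + b). Both use plain gradient ascent.
theta = 0.0          # generator parameter, starts far from the real mean (4)
w, b = 0.0, 0.0      # discriminator parameters
lr, batch = 0.05, 32

for step in range(2000):
    real = [random.gauss(4.0, 1.0) for _ in range(batch)]
    fake = [theta + random.gauss(0.0, 1.0) for _ in range(batch)]

    # Discriminator step: maximise log D(real) + log(1 - D(fake)),
    # i.e. learn to label real samples 1 and fakes 0.
    dw = db = 0.0
    for x in real:                       # push D(x) toward 1
        g = 1.0 - sigmoid(w * x + b)
        dw += g * x
        db += g
    for x in fake:                       # push D(x) toward 0
        g = -sigmoid(w * x + b)
        dw += g * x
        db += g
    w += lr * dw / (2 * batch)
    b += lr * db / (2 * batch)

    # Generator step: maximise log D(fake), i.e. fool the discriminator.
    # With x = theta + z, d/dtheta of log D(x) is (1 - D(x)) * w.
    dtheta = sum((1.0 - sigmoid(w * x + b)) * w for x in fake)
    theta += lr * dtheta / batch

print(f"learned mean: {theta:.2f}  (real mean: 4.0)")
```

Neither player ever sees an explicit "move toward 4" instruction; the generator drifts toward the real distribution purely because that is what fools the discriminator. Scaling this idea up, with neural networks in place of the scalar parameters, is what produces photorealistic faces.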


Some applications of content-generating AI

  • Impressions of product prototypes, properties before construction, your new face after plastic surgery, etc. could be generated much faster and with a much more photo-realistic finish. For example, images could be generated from an outline, such as this one developed by NVIDIA:
GauGAN Turns Doodles into Stunning, Photorealistic Landscapes
  • Image enhancement e.g. turn black-and-white images into colour ones. You can upload images here and try it yourself.
AI app transforms black white photos colour
  • Creating music

  • And, of course, in porn. Substituting someone’s face over a porn star’s.


Implications of new technology

As with any new technology, there are a number of questions to consider.

  • Applications: Intended and unintended. Obvious and subtle.
  • Downstream effects on: society? business? policy? other technologies?
    – will it amplify or shake up existing social structures?
    – which industries will be most affected? how will business models need to adapt?
    – etc
  • First order effects? Second order, third order etc…
  • What does this enable and disable?
  • Positive and negative consequences?
  • Biggest winners and losers?

That’s an awful lot to think about. But my high level conclusion is this:

AI-generated content means cheaper, better, and faster creation.

But its abuse with fake content warrants more truth-seeking than ever.


Fake news galore

Even before the era of deep fakes, the media was already polluted with misleading content.

Deep fakes will make synthetic truth even easier to fabricate, and make it much more convincing.


This applies to mainstream media as well as decentralised social media.

This Thai ad illustrates the point well. Don’t judge too quickly.

In another example of malicious editing, a couple of weeks ago a video circulated showing Chinese police officers getting out of a car with guns, shots being fired, and then a scene of dead civilians. It spread with titles like “Corrupt Chinese police officers shoot down suspected coronavirus-infected civilians.” It turns out the clip of armed police showed them shooting a rabies-infected dog that was attacking people, and the footage of dead civilians came from a traffic accident in another town.


What’s terrifying is: if so many suckers fell for these kinds of things, how many more will fall for misleading deep fake content?

This is the price we pay to live in an age of information abundance.

It’s easy to say “people are stupid, isn’t it obvious that’s fake?”

Yeah, well, AI-generated content will only become less obvious going forward.

It’s easy to say “people are stupid, they should cross-check the facts.”

Since it’s impractical to truth-check every piece of content we consume, it’s sensible to scale the checking with how significant the content is. Check its validity. Is it from a reputable source? Triangulate across sources. And for the less significant content, always keep in mind that it may not be true.

It’s not all doom and gloom, though. Deepfake detectors are also in rapid development to offset the dangerous deep fakes.

But we can’t rely on them being everywhere, every time.

Hopefully, what I’m saying is nothing new, just boringly obvious. The AI era simply means we have to do even more of it, and a much better job of it: filtering signal from noise, and truth from content.

The consequence of failure is falling into a sucker spiral, an echo chamber: a dangerous self-reinforcing loop of pre-existing biases, a personalised social media news feed, and increasing susceptibility to fake content.

It’s easy to say “I’m pretty smart, I wouldn’t fall in such a spiral.” But that’s what everyone stuck in the spiral says.

If you’re really smart, you’d acknowledge that you’re a human being and it’s highly probable that you’ll be a sucker every now and then.

That’s okay. You don’t have to take on the truth-seeking burden alone. Surround yourself with people who call you out on your bullshit, who challenge your views, and who are equally passionate about getting it right.

Care more about getting it right than being right.

That Obama deep fake PSA video you’ve probably already seen wraps all this up elegantly:

“This is a dangerous time... be more vigilant with what we trust from the internet….

It may sound basic but how we move forward in the age of information is going to be the difference between whether we survive or whether we become some kind of fucked up dystopia.

Thank you and stay woke bitches.”

Thanks for reading!

Whenever I’ve accumulated enough interesting things to share, I send out an email newsletter. Subscribe here:

Additional unorganised thoughts I’ll dump here for now

  • Imagine a hedge fund hacking into a rival fund’s video conference call, using a deep fake to pose as the managing partner and issuing a suicidal sell order, for example.
  • Could additional AI advancements in this area challenge the notion that machines are incapable of producing artistic masterpieces? Are the professions that we have to date considered most creative / un-automatable no longer safe from automation?
  • This tech could make games, simulations, and other virtual worlds that much more realistic. Combine it with AR/VR and everyone could go around creating their own worlds, like in the movie Inception.
  • But what if it becomes difficult to distinguish synthetic reality from natural reality? What if we end up preferring the synthetic reality?
  • On a brighter note, there are non-media content-creation applications of GANs too. The same adversarial training approach could be leveraged to identify molecules, chemicals, materials, shapes, etc. for all sorts of areas: medicine, agriculture, space travel, and more.