Cybersecurity

Deepfake: Can we still trust our eyes?

For a deepfake, a program uses AI to learn what a person looks like and how his or her facial expressions move. With a click of the mouse, that face can then be transferred onto any other person. “This makes it possible to create videos in which people appear to say or do things that they have never said or done,” explains physicist and moderator Harald Lesch on the YouTube channel “Terra X Lesch & Co.”

Prof. Harald Lesch, Copyright: ZDF/Johanna Brinckmann

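The idea behind classic deepfake tools can be sketched in a few lines: a single shared encoder learns a compact code for pose and expression from both people's footage, while each person gets their own decoder that renders that code back as their face. The toy below illustrates only this structure, with random vectors standing in for face images and simple linear algebra standing in for the deep networks real tools use; all names and data are illustrative assumptions, not any actual deepfake software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for face images: each "frame" is a flat vector of pixels.
D, K = 64, 8                            # pixels per frame, latent code size
frames_a = rng.normal(size=(200, D))    # person A's frames (random data)
frames_b = rng.normal(size=(200, D))    # person B's frames (random data)

# Shared encoder: one projection learned from BOTH people's frames,
# so the latent code tends to capture pose/expression rather than identity.
both = np.vstack([frames_a, frames_b])
_, _, vt = np.linalg.svd(both - both.mean(axis=0), full_matrices=False)
encode = lambda x: x @ vt[:K].T         # frame -> latent code

# Per-person decoders: each maps latent codes back to that person's face.
dec_a, *_ = np.linalg.lstsq(encode(frames_a), frames_a, rcond=None)
dec_b, *_ = np.linalg.lstsq(encode(frames_b), frames_b, rcond=None)

# The "swap": encode a frame of person A, then decode it with B's decoder,
# yielding B's face rendered with A's pose and expression.
fake = encode(frames_a[:1]) @ dec_b
print(fake.shape)                       # one swapped frame, same size as input
```

Real systems replace the linear maps with deep convolutional autoencoders trained on thousands of photos, which is why results have become so lifelike.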

Meanwhile, deepfake programs are even available as free downloads for everyone. While digital fakes are becoming ever more convincing and lifelike at a rapid pace, a parallel race is underway to expose them using artificial intelligence.

So how can we arm ourselves for a future in which we can no longer trust our eyes? And what does this new technology teach us about our perception? The following report attempts to find answers.

Text: Ingo Schenk

Deepfakes – der Manipulation ausgeliefert? (“Deepfakes – at the mercy of manipulation?”; in German only)