Deepfakes are digital media in which a person in an existing image or video is replaced with someone else’s likeness. While the act of faking content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content with a high potential to deceive.
It only takes a few steps to make a face-swap video.
First, thousands of face shots of the two people are fed through an AI algorithm called an encoder. The encoder finds and learns similarities between the two faces, reducing them to their common underlying features and compressing the images in the process.
A second AI algorithm called a decoder is then taught to recover the faces from the compressed images. Because the faces are different, one decoder is trained to recover the first person’s face, and another decoder to recover the second person’s face.
To perform the face swap, you simply feed encoded images into the “wrong” decoder. For example, a compressed image of person A’s face is fed into the decoder trained on person B. The decoder then reconstructs the face of person B with the expressions and orientation of face A. For a convincing video, this has to be done on every frame.
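The shared-encoder, two-decoder pipeline described above can be sketched in miniature. The code below is a toy illustration under stated assumptions, not a production deepfake system: linear matrices stand in for the deep convolutional networks real tools train, the "faces" are random vectors rather than images, and every name here (`LinearAutoencoderPair`, `train_step`, `swap`) is invented for this sketch. The structure, however, mirrors the text: one encoder shared by both people, one decoder per person, and a swap that routes person A's encoding through person B's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a "face image" is a flat vector; the latent code is smaller,
# so encoding compresses the image, as described in the article.
IMG_DIM, LATENT_DIM = 64, 8


class LinearAutoencoderPair:
    """One shared encoder and two person-specific decoders (linear, for illustration)."""

    def __init__(self):
        self.enc = rng.normal(scale=0.1, size=(LATENT_DIM, IMG_DIM))    # shared encoder
        self.dec_a = rng.normal(scale=0.1, size=(IMG_DIM, LATENT_DIM))  # decoder for person A
        self.dec_b = rng.normal(scale=0.1, size=(IMG_DIM, LATENT_DIM))  # decoder for person B

    def encode(self, img):
        # Compress the face to its latent features.
        return self.enc @ img

    def train_step(self, img, person, lr=0.01):
        """One gradient step: teach the chosen decoder (and the shared
        encoder) to reconstruct this person's face from its compressed code."""
        dec = self.dec_a if person == "A" else self.dec_b
        z = self.encode(img)
        recon = dec @ z
        err = recon - img                      # reconstruction error
        grad_dec = np.outer(err, z)            # d(loss)/d(decoder)
        grad_enc = np.outer(dec.T @ err, img)  # d(loss)/d(encoder)
        dec -= lr * grad_dec                   # in-place update of dec_a or dec_b
        self.enc -= lr * grad_enc
        return float(np.mean(err ** 2))

    def swap(self, img_a):
        """The face swap: feed person A's encoding into the 'wrong' decoder,
        so B's face is reconstructed with A's expression and orientation."""
        return self.dec_b @ self.encode(img_a)


# Demo: train each decoder on a toy "face" for its person, then swap.
# In a real system this loop would run over thousands of frames per person.
face_a = rng.normal(size=IMG_DIM)
face_b = rng.normal(size=IMG_DIM)
model = LinearAutoencoderPair()
first_loss = model.train_step(face_a, "A")
for _ in range(300):
    model.train_step(face_a, "A")
    model.train_step(face_b, "B")
last_loss = model.train_step(face_a, "A")
swapped = model.swap(face_a)  # one swapped frame; a video repeats this per frame
```

The design point the sketch makes concrete is why the encoder is shared: it forces both faces into the same latent space, which is what lets a code produced from person A be meaningful input to person B's decoder.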
The more widespread impact of deepfakes, along with other fake media and news, is to create a zero-trust society, where people cannot, or no longer bother to, distinguish truth from falsehood. Once trust is eroded in this way, it becomes easier to raise doubts about specific events.