Deepfake Detection: FaceForensics++

Dataset:

Face2Face

A facial reenactment method that transfers the expressions of a source video to a target video while preserving the identity of the target person.

FaceSwap

A graphics-based approach that transfers the face region from a source video to a target video. Based on sparsely detected facial landmarks, the method fits a 3D template model using blendshapes. This model is then back-projected to the target image by minimizing the difference between the projected shape and the localized landmarks, using the textures of the input image (a simplified Python sketch follows the step list below).

In short:

  1. Extract the facial area from the source image.
  2. Detect facial landmarks.
  3. Fit a 3D template model to the landmarks.
  4. Synthesize the target facial area by back-projecting the fitted 3D model, constrained by the shape and localized landmarks of the target image.
  5. Blend the result into the target and apply color correction.
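For intuition, the pipeline can be mocked up in a few lines of Python. The sketch below is not the FaceForensics++ implementation: it uses dlib landmarks and replaces the 3D blendshape fit and color correction with a simple 2D similarity warp plus OpenCV's Poisson blending. The landmark model path and function names are assumptions.

```python
import cv2
import dlib
import numpy as np

# dlib face detector and the standard 68-point landmark predictor
# (the .dat model file path is an assumption and must exist locally).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(img):
    """Return the 68 landmarks of the first detected face as a (68, 2) float array."""
    face = detector(img, 1)[0]
    shape = predictor(img, face)
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float64)

def swap_face(source, target):
    src_pts, dst_pts = landmarks(source), landmarks(target)
    # Least-squares similarity transform aligning source landmarks to target
    # landmarks (a 2D stand-in for minimizing the projected-shape error).
    M, _ = cv2.estimateAffinePartial2D(src_pts, dst_pts)
    warped = cv2.warpAffine(source, M, (target.shape[1], target.shape[0]))
    # Mask of the target face region from the convex hull of its landmarks.
    hull = cv2.convexHull(dst_pts.astype(np.int32))
    mask = np.zeros(target.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, hull, 255)
    # Poisson (seamless) blending of the warped source face into the target.
    x, y, w, h = cv2.boundingRect(hull)
    center = (x + w // 2, y + h // 2)
    return cv2.seamlessClone(warped, target, mask, center, cv2.NORMAL_CLONE)
```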

DeepFakes

A deep-learning-based method. It uses two autoencoders with a shared encoder, trained to reconstruct training images of the source face and the target face, respectively (a minimal architecture sketch follows the step list below).

The face in the target video is replaced by a face that has been observed in a source video or image.

In short:

  1. A face detector is used to crop and align the face images.
  2. The autoencoders are trained to reconstruct the source and target faces.
  3. The trained encoder and decoder of the source face are applied to the target face.
  4. The autoencoder output is then blended with the rest of the image using Poisson image editing.
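The shared-encoder / two-decoder idea can be sketched in PyTorch as below. Layer sizes, the 64x64 crop size, the L1 loss, and the learning rate are illustrative assumptions, not the actual Deepfakes implementation.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: maps a 64x64 aligned face crop to a 512-d latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),    # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),  # 32 -> 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.1), # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 512),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Identity-specific decoder: maps a latent code back to a face crop."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1), # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),  # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),          # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()                           # shared between both identities
decoder_src, decoder_tgt = Decoder(), Decoder()
loss_fn = nn.L1Loss()
params = (list(encoder.parameters())
          + list(decoder_src.parameters())
          + list(decoder_tgt.parameters()))
opt = torch.optim.Adam(params, lr=5e-5)

def train_step(src_batch, tgt_batch):
    """One reconstruction step: each decoder learns to rebuild its own identity."""
    opt.zero_grad()
    loss = (loss_fn(decoder_src(encoder(src_batch)), src_batch)
            + loss_fn(decoder_tgt(encoder(tgt_batch)), tgt_batch))
    loss.backward()
    opt.step()
    return loss.item()

# At inference time, the swap pairs the shared encoder with the *source*
# decoder applied to a *target* face crop: decoder_src(encoder(target_crop)),
# followed by blending the output back into the frame.
```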

NeuralTextures