
As far as I know,

  1. FaceNet requires a square image as an input.

  2. MTCNN can detect a face and crop it from the original image to a square, but distortion occurs.

Is it okay to feed the converted (now square) distorted image into FaceNet? Does it affect the accuracy of the similarity calculation (embedding)?

For similarity (classification of known faces), I am going to put some custom layers on top of FaceNet.

(If it is okay, is that because every other image would be distorted as well? In that case the comparison would not be between a normal image and a distorted one, but between two distorted images, which would be fair?)

Original issue: https://github.com/timesler/facenet-pytorch/issues/181.
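For context, this is roughly the pipeline being asked about, sketched with the facenet-pytorch API from the repository linked above; the specific options (`image_size=160`, the `vggface2` weights, the file name) are only assumptions for illustration:

```python
# Sketch of the MTCNN -> FaceNet pipeline from facenet-pytorch.
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN(image_size=160)                              # detects and crops faces to 160x160
resnet = InceptionResnetV1(pretrained='vggface2').eval()   # FaceNet-style embedder

img = Image.open('person.jpg')                             # hypothetical input image
face = mtcnn(img)                                          # aligned face tensor (3x160x160) or None
if face is not None:
    embedding = resnet(face.unsqueeze(0))                  # 1x512 embedding used for similarity
```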


1 Answer


As Neil Slater said, it depends on how the model was trained. If you look at the FaceNet implementation in TensorFlow on GitHub, you can see that the face-alignment step resizes without keeping the aspect ratio (the old `scipy.misc.imresize` does not preserve it). So, if you are using that implementation, the answer to your question is: it does not affect the accuracy.
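To illustrate what that means in practice (this is not the repository's actual code), directly resizing a rectangular face crop to the network's square input size stretches it, which is the same effect the old `scipy.misc.imresize` had when given a fixed target size:

```python
# Illustration only: resizing a rectangular crop straight to a square target
# changes its aspect ratio (the face gets stretched).
from PIL import Image

crop = Image.open('face_crop.jpg')   # hypothetical rectangular face crop
square = crop.resize((160, 160))     # 160x160 output, aspect ratio not preserved
```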

To elaborate, the golden rule is not to change the face's aspect ratio, because doing so distorts the input data used for the embedding computation.

In most face-recognition tasks, once you detect a bounding box, you resize the crop while keeping the aspect ratio; several libraries can do this.

I would recommend a resize that keeps the aspect ratio.
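For instance, here is a minimal sketch with Pillow that scales the longer side to the target size and pads the rest to obtain a square input; the target size of 160 and the black padding are arbitrary choices for illustration:

```python
# Sketch of an aspect-ratio-preserving resize: scale so the longer side fits
# the target, then pad the shorter side to make the image square.
from PIL import Image

def resize_keep_aspect(img: Image.Image, size: int = 160) -> Image.Image:
    w, h = img.size
    scale = size / max(w, h)
    resized = img.resize((max(1, int(w * scale)), max(1, int(h * scale))))
    canvas = Image.new('RGB', (size, size))                  # black padding by default
    canvas.paste(resized, ((size - resized.width) // 2,
                           (size - resized.height) // 2))    # center the face
    return canvas

face = resize_keep_aspect(Image.open('face_crop.jpg'))       # hypothetical crop
```

Padding keeps the face geometry intact at the cost of some uninformative border pixels; whether that actually helps depends, as noted above, on how the model you use was trained.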
