FPGAN-Control: A Controllable Fingerprint Generator for Training with Synthetic Data


Alon Shoshan   Nadav Bhonker   Emanuel Ben Baruch   Ori Nizan
Igor Kviatkovsky   Joshua Engelsma   Manoj Aggarwal   Gerard Medioni


Amazon

[WACV 2024 Paper]  [Code]

Figure 1: An animation of four fingerprints generated by FPGAN-Control with the following properties: (a) each of the four fingerprints belongs to a unique synthetic identity that is preserved throughout the animation; (b) at every moment, all four fingerprints share the same appearance; and (c) this shared appearance changes gradually over time.



Abstract

Training fingerprint recognition models using synthetic data has recently gained increased attention in the biometric community, as it alleviates the dependency on sensitive personal data. Existing approaches for fingerprint generation are limited in their ability to generate diverse impressions of the same finger, a key property for providing effective data for training recognition models. To address this gap, we present FPGAN-Control, an identity-preserving image generation framework that enables control over the appearance (e.g., fingerprint type, acquisition device, pressure level) of generated fingerprints. We introduce a novel appearance loss that encourages disentanglement between the fingerprint’s identity and appearance properties. In our experiments, we used the publicly available NIST SD302 (N2N) dataset to train the FPGAN-Control model. We demonstrate the merits of FPGAN-Control, both quantitatively and qualitatively, in terms of identity preservation, degree of appearance control, and low synthetic-to-real domain gap. Finally, training recognition models using only synthetic datasets generated by FPGAN-Control leads to recognition accuracies that are on par with or even surpass those of models trained using real data. To the best of our knowledge, this is the first work to demonstrate this.




In each training batch (a), both same-ID pairs and same-appearance pairs are generated. Same-ID pairs share the same ID latent vector, while same-appearance pairs share the same appearance latent vector. The color of the inner image border corresponds to the fingerprint ID and the color of the outer border corresponds to the fingerprint appearance. Each image in the batch is blurred and downsampled, effectively removing its biometric (ridge-level) features while still retaining many of its appearance features. Blurred images with different appearance latents are pushed away from one another (b), while blurred images with the same appearance latent are pulled towards each other (c).
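Below is a minimal PyTorch sketch of this push-pull appearance objective, assuming a batch labeled by which appearance latent produced each image; the blur_downsample operator, the margin value, and the function names are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn.functional as F

def blur_downsample(images: torch.Tensor, factor: int = 8) -> torch.Tensor:
    """Blur and downsample so ridge-level (biometric) detail is removed
    while global appearance (tone, contrast, background) is retained."""
    return F.avg_pool2d(images, kernel_size=factor, stride=factor)

def appearance_loss(images: torch.Tensor, app_labels: torch.Tensor,
                    margin: float = 1.0) -> torch.Tensor:
    """Contrastive loss on blurred images: pull together pairs generated
    with the same appearance latent, push apart pairs that were not.

    images:     (B, C, H, W) generated fingerprints.
    app_labels: (B,) index of the appearance latent used per image;
                the batch must contain same-appearance pairs.
    """
    blurred = blur_downsample(images).flatten(start_dim=1)   # (B, D)
    dists = torch.cdist(blurred, blurred)                    # pairwise L2, (B, B)
    same_app = app_labels.unsqueeze(0) == app_labels.unsqueeze(1)
    off_diag = ~torch.eye(len(images), dtype=torch.bool, device=images.device)

    pull = dists[same_app & off_diag].pow(2).mean()          # (c) pull together
    push = F.relu(margin - dists[~same_app]).pow(2).mean()   # (b) push apart
    return pull + push

A margin-based contrastive form is one reasonable reading of the figure; the key point is that the loss operates on the blurred images, so it constrains appearance without penalizing the identity-carrying ridge detail.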

Generation results of FPGAN-Control trained with different values of the appearance loss weight w_app

For a given FPGAN-Control model, each column shows images generated with the same ID latent vector and each row shows images generated with the same appearance latent vector. To visualize the appearance loss, the small images with green borders show the blurred representation of each fingerprint image used by the loss.
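As a usage illustration, the sketch below shows how such a grid could be produced, assuming a generator callable as G(z_id, z_app) with the ID/appearance latent split described above; the interface and latent sizes are assumptions, not the released code.

import torch

ID_DIM, APP_DIM = 512, 512  # placeholder latent sizes, not the paper's values

@torch.no_grad()
def generate_grid(G, n_ids: int = 4, n_apps: int = 4) -> torch.Tensor:
    """Columns share an ID latent; rows share an appearance latent."""
    z_ids = torch.randn(n_ids, ID_DIM)
    z_apps = torch.randn(n_apps, APP_DIM)
    rows = []
    for z_app in z_apps:  # one row per appearance latent
        row = [G(z_id.unsqueeze(0), z_app.unsqueeze(0)) for z_id in z_ids]
        rows.append(torch.cat(row, dim=0))   # (n_ids, C, H, W)
    return torch.stack(rows)                 # (n_apps, n_ids, C, H, W)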

Training with synthetic data

Accuracy vs. number of synthetic identities used during training: "Real data" corresponds to the model trained on the real dataset only, while all other models were trained purely on synthetic identities.
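One way to assemble such a synthetic training set, sketched under the same assumed G(z_id, z_app) interface as above: fix one ID latent per synthetic identity and sample a fresh appearance latent for every impression, so each identity receives diverse impressions of the same finger. The counts and latent sizes are placeholders.

import torch

ID_DIM, APP_DIM = 512, 512  # placeholder latent sizes

@torch.no_grad()
def synthetic_dataset(G, n_identities: int, impressions_per_id: int):
    """Yield (image, identity_label) pairs for recognition training."""
    for identity in range(n_identities):
        z_id = torch.randn(1, ID_DIM)           # fixed for this identity
        for _ in range(impressions_per_id):
            z_app = torch.randn(1, APP_DIM)     # varies per impression
            yield G(z_id, z_app), identity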

Citation

@misc{shoshan2023fpgancontrol,
      title={FPGAN-Control: A Controllable Fingerprint Generator for Training with Synthetic Data},
      author={Alon Shoshan and Nadav Bhonker and Emanuel Ben Baruch and Ori Nizan and Igor Kviatkovsky and Joshua Engelsma and Manoj Aggarwal and Gerard Medioni},
      year={2023},
      eprint={2310.19024},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}