2nd FRCSyn: Face Recognition Challenge in the Era of Synthetic Data at CVPR 2024 Forum


> Knowledge distillation using synthetic data

In most synthetic face recognition datasets, an off-the-shelf face recognition model pretrained on real data is used at some point while generating or cleaning the dataset. Since any method of synthetic data generation is allowed, I was wondering whether it is also allowed to apply knowledge distillation from the embeddings of an off-the-shelf pretrained face recognition model using synthetic data (while respecting the limits on the number of images in the definition of the subtasks)?

An example of this approach is a recently proposed method for knowledge distillation with synthetic data, called SynthDistill, which outperforms training on a synthetic dataset: https://arxiv.org/pdf/2308.14852.pdf
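To illustrate the idea being asked about, here is a minimal sketch of embedding-level knowledge distillation on synthetic images: a frozen pretrained teacher produces embeddings, and a student is trained to match them. The tiny linear models, sizes, and cosine-based loss below are all hypothetical placeholders for illustration, not the setup of SynthDistill or of any specific pretrained network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-ins for a frozen pretrained teacher and a trainable student
# (real face recognition backbones would replace these toy models).
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.SGD(student.parameters(), lr=0.1)

# Stand-in for a batch of generated (synthetic) face images.
synthetic_batch = torch.randn(16, 3, 32, 32)

losses = []
for _ in range(20):
    with torch.no_grad():
        t_emb = F.normalize(teacher(synthetic_batch), dim=1)
    s_emb = F.normalize(student(synthetic_batch), dim=1)
    # Distillation loss: push student embeddings toward the teacher's.
    loss = (1.0 - F.cosine_similarity(s_emb, t_emb, dim=1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Only the synthetic images and the teacher's embeddings are consumed during training, which is why the question of how such embedding access interacts with the dataset-size limits arises.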

Posted by: hatef @ March 14, 2024, 10:23 a.m.

Following our rules, as long as both models are trained within the limitations of each subtask (see https://codalab.lisn.upsaclay.fr/competitions/16970#learn_the_details-registration), how the models are trained is left to the participant's choice.

Posted by: FRCSyn @ March 19, 2024, 12:06 p.m.