3DFAW is intended to bring together computer vision and multimedia researchers whose work is related to 2D or 3D face alignment. We are soliciting original contributions that address a wide range of theoretical and application issues of 3D face alignment for computer vision and multimedia applications, including but not limited to:
For more details on 3DFAW challenge, visit the challenge website.
For queries regarding the Codalab competition, submissions, etc., please email rohithkp(at)andrew(dot)cmu(dot)edu
The 2nd 3DFAW Challenge evaluates 3D face reconstruction methods on a new large corpus of profile-to-profile face videos annotated with corresponding high-resolution 3D ground-truth meshes. The corpus includes profile-to-profile videos obtained under a range of conditions:
Submissions are evaluated using the following evaluation metric:
ARMSE is the evaluation metric: the average of the point-to-mesh RMSE computed from the ground truth to the prediction and vice versa. The Euclidean error is used as the distance between a point and a mesh, and is computed as:
E(Ai, B) = min( d(Ai, Bv), d(Ai, Be), d(Ai, Bf) )

where A and B are meshes, Ai is a vertex in mesh A, Bv is the closest vertex in B to Ai, Be is the closest edge in B to Ai, and Bf is the closest face in B to Ai. This error is the shortest distance from a vertex of A to the surface of mesh B. The closest vertices on mesh B are found using a nearest-neighbor search. We can then calculate ARMSE using the following equation:
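The nearest-neighbor search over the vertices of B can be sketched with a k-d tree. This is an illustrative sketch, not the official evaluation code; the function name `closest_vertices` is our own.

```python
import numpy as np
from scipy.spatial import cKDTree

def closest_vertices(A, B):
    """For every vertex of mesh A, find the nearest vertex of mesh B.

    A, B: (N, 3) and (M, 3) arrays of vertex coordinates.
    Returns (distances, indices), where indices index into B.
    """
    tree = cKDTree(B)           # spatial index over B's vertices
    dists, idx = tree.query(A)  # nearest-neighbor lookup per A vertex
    return dists, idx
```

Note that this only finds the closest vertex; the full point-to-mesh error above also considers the closest edge and face of B.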
ARMSE(X, Y) = ( sqrt( (1/Nx) Σi E(Xi, Y)² ) + sqrt( (1/Ny) Σj E(Yj, X)² ) ) / (2·I)

Here ARMSE is defined between two meshes X and Y, with Nx and Ny being their respective numbers of vertices. Using the error metric E(Ai, B) above, distances between the predicted and ground-truth meshes are computed in both directions, and the two RMSE scores (predicted-to-ground-truth and vice versa) are averaged to give the final ARMSE score. The two directions must be computed separately because the nearest-neighbor search is not symmetric between X and Y. Finally, I is the outer inter-ocular distance on the ground-truth mesh Y, i.e., the Euclidean distance between the 19th and 28th landmark points of the 51 dlib facial landmarks. For more information on the landmarks, please refer to the Submission tab.
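The two-directional averaging can be sketched as follows. This is a simplified illustration, not the official scorer: point-to-mesh distance is approximated here by nearest-vertex distance only, and the function name `armse` is our own.

```python
import numpy as np
from scipy.spatial import cKDTree

def armse(X, Y, inter_ocular):
    """Symmetrized RMSE between predicted mesh X and ground-truth mesh Y,
    normalized by the inter-ocular distance I of Y.

    X: (Nx, 3) predicted vertices; Y: (Ny, 3) ground-truth vertices.
    Approximation: E(Ai, B) is taken as the nearest-vertex distance;
    the official metric also considers closest edges and faces.
    """
    d_xy, _ = cKDTree(Y).query(X)  # E(Xi, Y) for every vertex of X
    d_yx, _ = cKDTree(X).query(Y)  # E(Yj, X) for every vertex of Y
    rmse_xy = np.sqrt(np.mean(d_xy ** 2))
    rmse_yx = np.sqrt(np.mean(d_yx ** 2))
    # average of the two RMSEs, normalized by the inter-ocular distance
    return (rmse_xy + rmse_yx) / (2.0 * inter_ocular)
```

For identical meshes the score is zero, and a rigid offset of the whole mesh shows up directly in the score.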
Good luck!
Official Rules
Common terms used in these rules:
These are the official rules that govern how the 3D Face Alignment in the Wild from Video (3DFAW-Video) Challenge will operate. This challenge will be simply referred to as the “contest” or the “challenge” throughout the rest of these rules and may be abbreviated on our website, in our documentation, and other publications as 3DFAW-Video.
In these rules, “organizers,” “we,” “our,” and “us” refer to the organizers of the 3DFAW Challenge; “Database” refers to all the distributed image and annotation data; and “participant,” “you,” and “yourself” refer to an eligible contest participant.
Contest Description
This is a skill-based contest and chance plays no part in the determination of the winner(s).
Data Description and Usage Terms
The datasets may be used for the 3DFAW-Video Challenge of ICCV 2019 only. The recipient of the datasets must be a full-time faculty member, researcher, or employee of an organization (not a student) and must agree to the following terms:
Laszlo A. Jeni, Huiyuan Yang, Rohith K. Pillai, Zheng Zhang, Jeffrey Cohn, and Lijun Yin, “3D Dense Face Reconstruction from Video (3DFAW-Video) Challenge”, 2nd Workshop and Challenge on 3D Face Alignment in the Wild – Dense Reconstruction from Video (3DFAW-Video) 2019, in conjunction with IEEE International Conference on Computer Vision (ICCV), 2019.
Eligibility criteria
This contest is void within the geographic area identified above and wherever else prohibited by law.
Entry
On-line notification
We will post changes in the rules or in the data, as well as the names of confirmed winners (after contest decisions are made by the judges), online at https://3dfaw.github.io.
Conditions.
By entering this contest you agree to all terms of use. You understand that any violation of these terms will be pursued.
Participants should submit their results as a single zip archive containing a prediction mesh for every subject as a Wavefront '.obj' file and the landmarks as a text file. The mesh and landmark files must be named and formatted as prescribed below. All files should be zipped together from within a directory containing only the mesh and vertex-landmark files. A sample submission file in the correct format can be downloaded here. Please note that every subject in the tested data fold must be present in your submission zip file.
Each mesh must be named 'predxxx.obj', where 'xxx' is replaced by the subject's number. Make sure that the predicted mesh is a valid Wavefront '.obj' file.
There must be a corresponding landmark file for each subject, containing the indices of the vertices on the predicted mesh that correspond to the 51 landmarks provided by the dlib library, as shown in the figure below. The indexing must be 0-based, and the file must be named 'VertexLandmarksxxx.txt', where 'xxx' is replaced by the subject's number. The file must have 51 lines, one index per line, with the line number (0-based) identifying the landmark it corresponds to.
Start: July 4, 2019, midnight
Start: Aug. 1, 2019, midnight
Start: Aug. 20, 2019, 11:59 p.m.
Aug. 20, 2019, 11:59 p.m.