2nd 3D Face Alignment in the Wild Challenge: Dense Reconstruction from Video

Organized by rohithkp


Description

3DFAW is intended to bring together computer vision and multimedia researchers whose work is related to 2D or 3D face alignment. We are soliciting original contributions that address a wide range of theoretical and application issues of 3D face alignment for computer vision and multimedia, including but not limited to:

  • 3D and 2D face alignment from 2D images
  • Model- and stereo-based 3D face reconstruction
  • Dense and sparse face tracking from 2D and 3D inputs
  • Applications of face alignment
  • Face alignment for embedded and mobile devices
  • Facial expression retargeting (avatar animation)
  • Face alignment-based user interfaces

For more details on the 3DFAW challenge, visit the challenge website.

For queries regarding the CodaLab competition, submissions, etc., please email rohithkp(at)andrew(dot)cmu(dot)edu

Challenge

The 2nd 3DFAW Challenge evaluates 3D face reconstruction methods on a new large corpus of profile-to-profile face videos annotated with corresponding high-resolution 3D ground-truth meshes. The corpus includes profile-to-profile videos obtained under a range of conditions:

  • high-definition in-the-lab video, and
  • unconstrained video from an iPhone device.

Visualization of a 3DFAW video and its corresponding mesh

Evaluation Criteria

Submissions are evaluated using the following metric:

  • Average Root Mean Square Error - ARMSE

 

ARMSE is an evaluation metric: the average of the point-to-mesh distances between the ground-truth and predicted meshes, computed in both directions. The Euclidean distance is used as the distance between a point and a mesh, and the per-vertex error is computed as:

    E(Ai, B) = min( d(Ai, Bv), d(Ai, Be), d(Ai, Bf) )

where A and B are meshes, Ai is a vertex in mesh A, Bv is the closest vertex on B to Ai, Be is the closest edge on B to Ai, and Bf is the closest face on B to Ai. This error is the shortest distance from a vertex of A to the surface of mesh B; the closest vertices on B are found using a nearest-neighbor search. We can then calculate ARMSE using the following equation:

    ARMSE(X, Y) = (1 / (2 * I)) * [ sqrt( (1/Nx) * sum_{i=1..Nx} E(Xi, Y)^2 ) + sqrt( (1/Ny) * sum_{i=1..Ny} E(Yi, X)^2 ) ]

Here, ARMSE is defined between two meshes X and Y, with Nx and Ny being the number of vertices in each, respectively. Using the error E(Ai, B) above, the distances between the predicted and ground-truth meshes are calculated in both directions, and the two RMSE scores (predicted to ground truth, and vice versa) are averaged to give the final ARMSE score. E(Ai, B) is evaluated twice, with each of X and Y taking the role of A, because the nearest-neighbor search for the closest vertices is not symmetric between the two meshes. Finally, I is the outer inter-ocular distance on the ground-truth mesh Y, i.e. the Euclidean distance between the 19th and 28th landmark points of the 51 dlib facial landmarks. For more information regarding the landmarks, please refer to the Submission tab.
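As a rough illustration, the two-directional averaging can be sketched in a few lines of NumPy. The sketch below approximates the point-to-mesh distance by the nearest-vertex distance only (the official metric also considers the closest edges and faces), and the function names are illustrative, not part of the official evaluation code:

```python
import numpy as np

def rmse_to_mesh(src, dst):
    # Approximate point-to-mesh distance by brute-force nearest-vertex search.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    nearest = np.sqrt(d2.min(axis=1))
    # RMSE over all source vertices.
    return float(np.sqrt((nearest ** 2).mean()))

def armse(pred, gt, interocular):
    # Average the two directional RMSEs and normalize by inter-ocular distance.
    return (rmse_to_mesh(pred, gt) + rmse_to_mesh(gt, pred)) / (2.0 * interocular)
```

Note that the two directions generally give different values, which is why both are computed and averaged.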

 

Good luck!

 

 

 

Official Rules

Common terms used in these rules:

These are the official rules that govern how the 3D Face Alignment in the Wild from Video (3DFAW-Video) Challenge will operate. This challenge will be simply referred to as the “contest” or the “challenge” throughout the rest of these rules and may be abbreviated on our website, in our documentation, and other publications as 3DFAW-Video.

In these rules, “organizers”, “we”, “our”, and “us” refer to the organizers of the 3DFAW Challenge; “Database” refers to all the distributed image and annotation data; “participant”, “you”, and “yourself” refer to an eligible contest participant.

Contest Description

This is a skill-based contest and chance plays no part in the determination of the winner(s).

  1. Focus of the contest: automatic 3D face reconstruction from 2D videos obtained under a range of conditions.
  2. All eligible entries received will be judged using the criteria described below to determine the winners.

Data Description and Usage Terms

  1. The “3DFAW-Video” Database was developed in the group of Dr. Lijun Yin of the State University of New York at Binghamton, Dr. Jeffrey Cohn of the University of Pittsburgh, and Dr. Laszlo Jeni of Carnegie Mellon University. The data include, per subject, a 3D face mesh model and 2D videos from both a 3D imaging system and a mobile phone. The project was funded by the United States National Science Foundation for human affective behavior research.
  2. The organizers make no warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose. The copyright of the images remains the property of their respective owners. By downloading and making use of the data, you accept full responsibility for using the data. You shall defend and indemnify the organizers, including their employees, Trustees, officers and agents, against any and all claims arising from your use of the data.
  3. Test data: The organizers will use test data to perform the final evaluation, hence the participants’ final entry will be based on test data.
  4. Training and validation data: The contest organizers will make available to the participants a training dataset with truth values, and a validation set with no truth values. The validation data will be used by the participants for practice purposes to validate their systems. It will be similar in composition to the test set (validation labels will be provided in the final test stage of the challenge).

The datasets may be used for the 3DFAW-Video Challenge of ICCV 2019 only. The recipient of the datasets must be a full-time faculty, researcher or employee of an organization (not a student) and must agree to the following terms:

  1. The participants receive a non-exclusive, non-transferable license to use the Database for internal research purposes. You may not sell, rent, lease, sublicense, lend, time-share or transfer, in whole or in part, or provide third parties access to the Database.
  2. The data will be used for not-for-profit research only. Any use of the Database in the development of a commercial product is prohibited.
  3. If this Database is used, in whole or in part, for any publishable work, the following papers must be referenced:

Laszlo A. Jeni, Huiyuan Yang, Rohith K. Pillai,  Zheng Zhang, Jeffrey Cohn, and Lijun Yin,  “3D Dense Face Reconstruction from Video (3DFAW-Video) Challenge”,  2nd Workshop and Challenge on 3D Face Alignment in the Wild – Dense Reconstruction from Video (3DFAW-Video) 2019, in conjunction with IEEE International Conference on Computer Vision (ICCV), 2019.

 

 

Eligibility criteria

  1. You are an individual or a team of people wishing to contribute to the tasks of the challenge and agreeing to follow its rules; and
  2. You are NOT a resident of any country constrained by US export regulations listed on the OFAC sanctions page http://www.treasury.gov/resource-center/sanctions/Programs/Pages/Programs.aspx. Residents of these countries / regions are not eligible to participate; and
  3. You are not involved in any part of the administration and execution of this contest; and
  4. You are not an immediate family (parent, sibling, spouse, or child) or household member of a person involved in any part of the administration and execution of this contest.

This contest is void within the geographic area identified above and wherever else prohibited by law.

Entry

  1. All members of the team must meet the eligibility criteria.
  2. To be considered in the competition, provide a description of your approach. To be considered for publication in the proceedings, submit a workshop paper. All participants are invited to submit a paper of at most 8 pages for the proceedings of the ICCV 2019 - 3D Face Alignment in the Wild from Video (3DFAW-Video) Challenge workshop.
  3. Workshop report: The organizers may write and publish a summary of the results.
  4. Submission: The entries of the participants will be submitted on-line via the CodaLab web platform. During the development period, the participants will receive immediate feedback on validation data released for practice purposes. For the final evaluation, the results will be computed automatically on test data submissions. The performances on test data will not be released until the challenge is over.
  5. Original work, permissions: In addition, by submitting your entry into this contest you confirm that, to the best of your knowledge:
    • Your entry is your own original work; and
    • Your entry only includes material that you own, or that you have permission from the copyright / trademark owner to use.

On-line notification

We will post changes in the rules or changes in the data, as well as the names of confirmed winners (after contest decisions are made by the judges), online at https://3dfaw.github.io.

Conditions

By entering this contest you agree to all terms of use. You understand that any violation of these terms will be pursued.


Submission

Participants should submit their results as a single zip archive containing a predicted mesh for every subject, in the Wavefront '.obj' file format, together with the landmarks as a text file. The mesh and landmark files must be named and formatted as prescribed below. All files should be zipped together from within a directory containing only the mesh and vertex-landmark files. A sample submission file in the correct format can be downloaded here. Please note that every subject in the dataset fold being tested must be present in your submission zip file.

Predicted Mesh

The mesh must be named 'predxxx.obj', where 'xxx' should be replaced by the subject's number. Make sure that the predicted mesh is a valid '.obj' (Wavefront) file.
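A valid mesh in minimal Wavefront '.obj' form can be emitted as sketched below; this helper is illustrative only (not part of the official tooling) and writes just vertex and face records:

```python
def mesh_to_obj(vertices, faces):
    # vertices: iterable of (x, y, z) floats.
    # faces: iterable of 3-tuples of 0-based vertex indices.
    lines = ["v {:.6f} {:.6f} {:.6f}".format(x, y, z) for x, y, z in vertices]
    # Note: '.obj' face statements use 1-based vertex indices.
    lines += ["f {} {} {}".format(a + 1, b + 1, c + 1) for a, b, c in faces]
    return "\n".join(lines) + "\n"
```

The 1-based face indexing is the detail most often gotten wrong when generating '.obj' files by hand.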

Landmarks Files

There must be a corresponding landmarks file for each subject, containing the indices of the vertices on the predicted mesh for the 51 landmarks provided by the dlib library, as shown in the figure below. The indexing must be 0-based, and the file must be named 'VertexLandmarksxxx.txt', where 'xxx' should be replaced by the subject's number. The file must have 51 lines, one index per line, corresponding to the landmark given by the line number (0-based numbering).


 

Validation

Start: July 4, 2019, midnight

Testing/Challenge

Start: Aug. 1, 2019, midnight

Post-Challenge

Start: Aug. 20, 2019, 11:59 p.m.

Competition Ends

Aug. 20, 2019, 11:59 p.m.
