Pick The Sneak Peek


Brought to you by the Orange team

According to IMDb, 60,234 movie and TV titles were released in 2000, 165,830 in 2010, and 190,275 in 2016. The number of releases keeps growing, and the databases aggregating this data need ever more information to keep up.

The idea behind this challenge is to facilitate the genre labeling of movies from their summaries and thus to help categorize movie databases.

Tagging a movie's genres is still a manual process that relies on collecting users' suggestions. Automatic genre classification of a movie based on its summary not only speeds up the process by providing a list of suggestions, but the result may potentially be more accurate than that of an untrained human. A minimal baseline sketch is given below.
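As an illustration of what such an automatic classifier could look like, here is a minimal multi-label baseline sketch using scikit-learn. The summaries, genre labels, and model choice are illustrative assumptions, not part of the challenge kit.

```python
# Minimal multi-label genre classification sketch (scikit-learn).
# The summaries, genres, and model choice below are illustrative
# assumptions, not the actual challenge data or baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

summaries = [
    "A detective hunts a serial killer through a rain-soaked city.",
    "Two friends road-trip across the country and fall in love.",
]
genres = [["Crime", "Thriller"], ["Comedy", "Romance"]]

# Turn the genre lists into a binary indicator matrix, one column per genre.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(genres)

# TF-IDF features on the summaries, one binary classifier per genre.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(summaries, y)

# Predicted genre suggestions for each summary.
print(mlb.inverse_transform(model.predict(summaries)))
```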

This challenge was generated using chalab.

 

Contact:

  • Gabriel BELLARD - gabriel.bellard@gmail.com
  • Asma KHOUFI DOUBEY - a.khoufi@gmail.com
  • Daro HENG - hengdaro1@gmail.com
  • Forcefidele KIEN - forcefidele@gmail.com
  • Guillaume LORRE - guillaume.lorre@telecom-paristech.fr
  • Xiyu ZHANG - xiyu.zhang@u-psud.fr

Pick The Sneak Peek: Evaluation

Submissions will be evaluated using the weighted F1-score, the scoring method provided by scikit-learn with a weighted average and commonly used in multi-label tasks: the F1 score is computed for each genre label separately, then averaged with each label weighted by its number of true instances (its support).

The F1 score, commonly used in information retrieval, measures accuracy using the statistics precision p and recall r. Precision is the ratio of true positives (TP) to all predicted positives (TP + FP); recall is the ratio of true positives to all actual positives (TP + FN). The F1 score is given by:

F1 = 2pr / (p + r)

The F1 metric weights recall and precision equally, and a good retrieval algorithm will maximize both precision and recall simultaneously. Thus, moderately good performance on both will be favored over extremely good performance on one and poor performance on the other.
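Concretely, the metric corresponds to scikit-learn's f1_score with average="weighted". A small sketch follows; the toy indicator matrices are assumptions for illustration, not the actual challenge labels.

```python
# Weighted F1 as computed by scikit-learn on multi-label indicator
# matrices of shape (n_samples, n_genres); the toy values are made up.
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([[1, 0, 1],
                   [0, 1, 1],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 0]])

# "weighted": per-genre F1 scores are averaged, each genre weighted by
# its number of true instances (its support).
print(f1_score(y_true, y_pred, average="weighted"))
```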

Submission details

For the submission, we provide a starting kit with a submission example in the "Participate" (Get Data) section. You can modify it, recompute the results, and compress it back into a .zip file to submit in the "Participate" section.
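For example, repackaging an edited kit with Python's standard library might look like the sketch below; the directory name "sample_submission" is a hypothetical placeholder, so check the starting kit for the actual layout and file names.

```python
# Repackage an edited starting kit into a .zip for upload.
# "sample_submission" is a hypothetical directory name; use the actual
# folder layout shipped with the starting kit.
import shutil

# Produces submission.zip containing the contents of sample_submission/.
shutil.make_archive("submission", "zip", root_dir="sample_submission")
```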

Terms and Conditions

This competition is organized solely for test purposes. No prizes will be awarded.

The authors decline responsibility for mistakes, incompleteness, or lack of quality in the information provided on the challenge website. The authors are not responsible for any content linked or referred to from the pages of this site that is external to this site. The authors have intended not to use any copyrighted material or, where that was not possible, to indicate the copyright of the respective object. Likewise, the authors have intended not to violate any patent rights or, where that was not possible, to indicate the patents of the respective objects. The payment of royalties or other fees for the use of methods that may be protected by patents remains the responsibility of the users.

ALL INFORMATION, SOFTWARE, DOCUMENTATION, AND DATA ARE PROVIDED "AS-IS". THE ORGANIZERS DISCLAIM ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL ISABELLE GUYON AND/OR OTHER ORGANIZERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF SOFTWARE, DOCUMENTS, MATERIALS, PUBLICATIONS, OR INFORMATION MADE AVAILABLE THROUGH THIS WEBSITE.

Participation in the organized challenge is non-binding and without obligation. Parts of the pages, or the complete publication and information, may be extended, changed, or partly or completely deleted by the authors without notice.

Development

Start: Nov. 25, 2016, 10:33 a.m.

Description: Development phase: create models and submit them, or directly submit results on the validation and/or test data; feedback is provided on the validation set only.

Final

Start: April 29, 2017, midnight

Description: Final phase: submissions from the previous phase are automatically cloned and used to compute the final score. The results on the test set will be revealed when the organizers make them available.

Competition Ends

Never
