This challenge addresses the problem of detecting fake paintings. Recently, artificially created paintings have raised many questions, as people wonder whether they can be considered "art" and how harmful they might be to authentic art. While artificial images open the way to stunning and increasingly precise results, it is hard to bestow any creative quality upon them, since they all follow a certain algorithmic pattern (even if some artificial creations have been valued at about $10,000). Recent work shows that it is becoming increasingly difficult for the naked eye to tell artificial paintings apart from real ones. Moreover, since fake artistic objects often circulate in greater numbers than authentic ones, this could pose a threat of art forgery and have a negative impact on some traditional arts (e.g., Aboriginal art, where 80% of the products sold are inauthentic).
Nowadays, as computer graphics techniques for image generation reach stunning levels of quality, it becomes more and more challenging to distinguish fake images from true, authentic ones. This challenge uses paintings from WikiArt (the Visual Art Encyclopedia). Half of the images in this data set are fake paintings generated by a Generative Adversarial Network (GAN). The task is binary classification: detect the fake paintings.
In this challenge, we give participants preprocessed images: each image of dimension 64*64*3 is reduced to a vector of dimension 1*200 by extracting its 200 principal components (PCA). These vectors form the data of the challenge, which is why we call it the PersoData challenge with preprocessed data.
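As an illustration of this kind of dimensionality reduction, here is a minimal sketch using scikit-learn's PCA. The input array is a random placeholder, and the exact preprocessing pipeline used to build the challenge data may differ:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: a batch of 64x64x3 paintings (random here, for illustration).
images = np.random.rand(1000, 64, 64, 3)

# Flatten each image into a single 64*64*3 = 12288-dimensional vector.
X = images.reshape(len(images), -1)

# Keep the 200 principal components, matching the challenge's feature dimension.
pca = PCA(n_components=200)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)  # (1000, 200)
```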
Here is an example of some fake paintings (top) and real paintings (bottom):
If you do not have your own working environment, you can use our Docker image; otherwise, you can work directly on your own computer.
Retrieve the Docker image:
docker pull mok0na/l2rpn:2.0
Retrieve the notebook README.ipynb to help you get started with this competition. It is available in the starting kit, which you can download from the Files section of the Participate tab.
Download the starting kit and public data, then unzip:
unzip starting_kit -d starting-kit
cp -r starting-kit ~/aux
Run the Jupyter notebook:
docker run --name persodata -it -p 5000:8888 -v ~/aux:/home/aux mok0na/l2rpn:2.0 jupyter notebook --ip 0.0.0.0 --notebook-dir=/home/aux --allow-root
This command prints a link to the notebook. Open the link, replacing port 8888 with 5000, e.g.: http://127.0.0.1:5000/?token=2b4e492be2f542b1ed5a645fa2cfbedcff9e67d50bb35380
If you need libraries that are not installed in the Docker image, run the command docker exec -it persodata bash to open a shell inside the container.
For example, to install the tqdm library, run pip install tqdm inside the container.
To reuse the container later, restart it and open the link http://127.0.0.1:5000 again:
docker start persodata
References and credits:
https://www.wikiart.org/.
Members of PersoData: Jiaxin Gao, Issa Hammoud, Valentin Carpentier, Hugues Ali Mehenni, Hugo Boulanger and Min Li
If you have any questions about this challenge, you can contact us at persodata@chalearn.org
The competition protocol was designed by Isabelle Guyon.
The starting kit was adapted from a Jupyter notebook designed by Balazs Kegl for the RAMP platform.
This challenge was generated using Chalab, a competition wizard designed by Laurent Senta.
The problem is a binary classification problem. Each sample (an image) is characterized by 200 features: its principal components. You must predict whether each image is fake (0) or real (1).
For training, you are given a data matrix X_train of dimension 65856 x 200 and an array y_train of 65856 labels. You must train a model that predicts the labels for two test matrices, X_valid and X_test.
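As a starting point, the following minimal sketch shows one way to fit a simple classifier on such data. It is only an illustration under assumed shapes: the data-loading step is replaced by random placeholder arrays, and the validation-set size is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder data with the shapes given above; in practice, load the
# matrices from the starting kit instead (see README.ipynb).
X_train = rng.normal(size=(65856, 200))
y_train = rng.integers(0, 2, size=65856)
X_valid = rng.normal(size=(10000, 200))  # hypothetical test-set size

# Simple linear baseline on the 200 PCA features.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Score for the positive class (real = 1).
y_valid_scores = clf.predict_proba(X_valid)[:, 1]
```

Submitting continuous scores rather than hard 0/1 labels generally pays off here, since the ranking-based AUC metric (described below) can exploit them.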
There are two phases: a development phase and a final phase (see the schedule below).
This sample competition allows you to submit either prediction results, a trained model, or an untrained model.
Submissions are evaluated using the AUC metric. To learn more about this metric, see https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html
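For reference, the score can be reproduced locally with scikit-learn (a toy example with made-up labels and scores):

```python
from sklearn.metrics import roc_auc_score

# Toy example: true labels (0 = fake, 1 = real) and predicted scores.
y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]

print(roc_auc_score(y_true, y_scores))  # 0.75
```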
Submissions must be made before the end of phase 1. You may make 5 submissions per day and 100 in total.
This challenge is governed by the general ChaLearn contest rules.
Phase 1 (Development)
Start: Nov. 15, 2018, midnight
Description: Development phase: tune your models and submit prediction results, a trained model, or an untrained model.

Phase 2 (Final)
Start: April 30, 2019, midnight
Description: Final phase (no submissions; your last submission from the previous phase is automatically forwarded).

End: Never
# | Username | Score
---|---|---
1 | Monet | 0.9514 |
2 | Kahlo | 0.9497 |
3 | mokakill | 0.9464 |