The data set contains 9,100 color images of size 128 x 128 pixels, covering 13 classes with 700 images each.
The classes are all landscapes or natural environments, specifically:
beach, chaparral, cloud, desert, forest, island, lake, meadow, mountain, river, sea_ice, snowberg, wetland.
The goal of this challenge is to classify each image and assign it the correct label.
It is worth knowing in advance how to approach this challenge, which is slightly more complicated than the one with the preprocessed data.
To guide you a little: from our experience with this challenge, an effective way to solve this classification task is Deep Learning, more precisely convolutional neural networks (CNNs).
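To make the CNN suggestion concrete, here is a minimal sketch of such a network in PyTorch (the framework shipped in the competition docker). The architecture (layer sizes, pooling) is purely illustrative and not part of the competition material; only the input size (3 x 128 x 128) and the number of classes (13) come from the challenge description.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Illustrative CNN for 128x128 RGB images and 13 classes."""

    def __init__(self, n_classes: int = 13):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),          # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),          # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> 1x1 per channel
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))
```

A batch of shape `(N, 3, 128, 128)` yields logits of shape `(N, 13)`; you would train it with `nn.CrossEntropyLoss` and an optimizer of your choice.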
Below are some sample images:
Presentation video :
If you want to use the competition's docker, you can use the commands below:
sudo docker run --name areal -it -v path/to/data:/home/aux -p 5000:8888 areal/codalab:pytorchv2
In the docker, run : jupyter-notebook --ip 0.0.0.0 --allow-root
After that, copy-paste the line containing the link printed at the end and open it in your own browser outside the docker, remembering to change 8888 to 5000 in the address after the colon, for example http://0.0.0.0:5000/?token=968e48d718ab5676d8c23c4a0d7bf3b088151d38fd3e1d86.
The notebook will be opened and you just have to navigate to find the README.ipynb (home -> aux -> starting_kit).
Once open, check that the kernel is a Python 3 kernel. If it is, you're good to go; otherwise, just change it to a Python 3 kernel.
If you want to quit the docker, just type exit in the terminal, and type docker start areal to restart it.
References and credits:
Gong Cheng, Junwei Han, and Xiaoqiang Lu, "Remote Sensing Image Scene Classification: Benchmark and State of the Art," Proceedings of the IEEE, 2017.
The competition protocol was designed by Isabelle Guyon.
The starting kit was adapted from a Jupyter notebook designed by Balazs Kegl for the RAMP platform.
This challenge was generated using Chalab, a competition wizard designed by Laurent Senta.
This challenge was created by the Areal team, composed of David Biard, Samuel Berrien, Théo Cornille, Robin Duraz, Hao Liu and Trung Vu-Thanh. You can contact them at areal@chalearn.org
The original data is a large-scale dataset, termed "NWPU-RESISC45", which is a publicly available benchmark for Remote Sensing Image Scene Classification (RESISC), created by Northwestern Polytechnical University (NWPU). This dataset contains 31,500 images, covering 45 scene classes with 700 images (256 x 256 pixels) per class.
The problem is a multiclass classification problem. Each sample (an image) is characterized by its 128 x 128 RGB pixels. You must predict which of the 13 classes each image belongs to.
For training, you are given a data matrix X_train of dimension 5200 samples x 49152 features (128 x 128 x 3) and an array y_train of labels of dimension 5200 samples. You must train a model that predicts the labels for two test matrices, X_valid and X_test, each having 1950 samples.
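Since each row of X_train is a flattened image, you will typically want to restore the spatial structure before feeding it to a CNN. A minimal sketch with NumPy, assuming the pixels are stored in height x width x channel order (this ordering is an assumption; verify it against the starting kit):

```python
import numpy as np

def rows_to_images(X: np.ndarray) -> np.ndarray:
    """Reshape flattened rows (n, 49152) into images (n, 128, 128, 3).

    Assumes each row stores pixels in height x width x channel order.
    """
    return X.reshape(-1, 128, 128, 3)
```

For PyTorch you would additionally transpose to channel-first order, e.g. `images.transpose(0, 3, 1, 2)` to get shape `(n, 3, 128, 128)`.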
There are 2 phases (detailed below).
This competition allows you to submit either prediction results, a trained model, or an untrained model.
Submissions are evaluated using the Accuracy metric. This metric measures classification quality and is computed by dividing the number of correctly classified samples by the total number of samples. It is both simple and informative for this classification task, since the class distribution is balanced.
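The accuracy computation described above is straightforward; a minimal NumPy sketch:

```python
import numpy as np

def accuracy(y_true, y_pred) -> float:
    """Fraction of samples whose predicted label matches the true label."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float((y_true == y_pred).mean())
```

For example, `accuracy([0, 1, 2, 1], [0, 1, 1, 1])` returns 0.75, since 3 of the 4 predictions are correct.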
Submissions must be made before the end of phase 1. You may submit 5 submissions every day and 100 in total.
This challenge is governed by the general ChaLearn contest rules.
Phase 1 — Start: Sept. 15, 2020, midnight
Description: Development phase: tune your models and submit prediction results, a trained model, or an untrained model.

Phase 2 — Start: Jan. 1, 2021, midnight
Description: Final phase (no submission; your last submission from the previous phase is automatically forwarded).

End: Never
| # | Username | Score |
|---|---|---|
| 1 | grp.Sputnik | 0.0785 |
| 2 | pavao | -1.0000 |
| 3 | persodata | -1.0000 |