Microscopy Challenge - Data Science Africa 2019

Organized by herilalaina


Challenge on the Microscopy dataset

Brought to you by the Data Science Africa committee

Source: http://air.ug/microscopy/

Presentation

Although microscopes are common in Uganda and other developing countries, a shortage of lab technicians to operate them means that access to quality diagnostic services is limited for much of the population. This leads to misdiagnosis of disease, which in turn causes life-threatening conditions to be treated incorrectly, drives drug resistance, and imposes the economic burden of buying unnecessary drugs. Even where health facilities have lab technicians, they are often oversubscribed and have difficulty spending enough time on each sample to give a confident diagnosis. Given that smartphones are widely owned across the developing world, there is a technological opportunity to address this problem: phones can be used to capture and process microscopy images. This project aims to produce a functioning point-of-care diagnosis system on this principle, capable of running on multiple microscope and phone combinations. Our work exploits recent technological advances in 3D printing and deep learning to produce effective hardware and software respectively.

Our challenge

The goal is to train machine learning methods to recognise different pathogen objects, and to make this accessible in the form of an Android application usable at the point of care. This work began with machine learning methods based on extracting statistical characterisations of the shapes in each image.
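As an illustration of that idea (not the project's actual feature set), here is a minimal scikit-image sketch that computes simple shape statistics, assuming a grayscale input and Otsu thresholding:

```python
# shape_features.py -- illustrative sketch of "statistical characterisations
# of the shapes" in an image; the Otsu threshold is an assumption here, not
# the project's actual segmentation method.
import numpy as np
from skimage import measure
from skimage.filters import threshold_otsu

def shape_features(gray_image):
    """Return simple per-image statistics over connected-component shapes."""
    mask = gray_image > threshold_otsu(gray_image)   # crude foreground mask
    props = measure.regionprops(measure.label(mask))
    if not props:
        return np.zeros(5)
    areas = np.array([p.area for p in props], dtype=float)
    eccs = np.array([p.eccentricity for p in props])
    return np.array([len(props), areas.mean(), areas.std(),
                     eccs.mean(), eccs.std()])
```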

How to participate

You need to download the starting kit.

Prerequisites:

Install Anaconda with Python 3.6.6, TensorFlow (2.0.0), opencv-python (4.0.1), and scikit-image (0.15.0)

or

run your code within the CodaLab Docker image (inside the container, Python 3.6 is invoked as python3):

  • `docker pull herilalaina/dsa:3.0`
  • `docker run -it -p 8888:8888 -v $(pwd):/home/aux herilalaina/dsa:3.0`
  • `DockerPrompt# cd /home/aux`
  • `DockerPrompt# python3 ingestion_program/ingestion.py sample_data sample_result_submission ingestion_program sample_code_submission`
  • `DockerPrompt# python3 scoring_program/score.py sample_data sample_result_submission scoring_output`
  • `DockerPrompt# exit`

Usage of the starting kit:

  • The two files sample_*_submission.zip are sample submissions ready to go!
  • The file README.ipynb contains step-by-step instructions on how to create a sample submission for the challenge.
  • At the prompt, type: `jupyter-notebook --ip=0.0.0.0 --allow-root`
  • Modify sample_code_submission to provide a better model (a minimal, hypothetical sketch of such a model follows this list).
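If you want a starting point for that model, here is a minimal sketch. The actual interface (file name, class name, method signatures) is defined by the starting kit, so verify everything here against README.ipynb before relying on it:

```python
# model.py -- illustrative sketch only; the real interface is defined by the
# starting kit (see README.ipynb), so the file/class names are assumptions.
from sklearn.ensemble import RandomForestClassifier

class model:
    """Hypothetical Step 1 patch classifier with a fit/predict interface."""

    def __init__(self):
        self.clf = RandomForestClassifier(n_estimators=100, random_state=0)

    def fit(self, X, y):
        # X: (n_patches, n_features) feature matrix, y: 0/1 patch labels.
        self.clf.fit(X, y)
        return self

    def predict(self, X):
        # Probability that each patch is positive.
        return self.clf.predict_proba(X)[:, 1]
```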

For submission, you can submit either code or prediction files.

  • Zip the contents of sample_code_submission (without the directory, but including the metadata file); see the Python sketch after this list for one way to build such an archive.
  • (or) download the public_data and run (double-check that you are running the correct version of Python):
    • `python ingestion_program/ingestion.py public_data sample_result_submission ingestion_program sample_code_submission`
    • then zip the contents of sample_result_submission (without the directory).
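Zipping "without the directory" means the files must sit at the root of the archive. A minimal Python sketch of that (the directory and output names are taken from the commands above; adapt as needed):

```python
# make_submission.py -- zip the contents of a submission directory so that
# files sit at the archive root (no enclosing folder), as required above.
import os
import zipfile

def zip_contents(src_dir, out_zip):
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                path = os.path.join(root, name)
                # arcname relative to src_dir drops the enclosing directory.
                zf.write(path, arcname=os.path.relpath(path, src_dir))

zip_contents("sample_result_submission", "submission.zip")
```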

Evaluation

The goal is to count the number of parasites in each image. The whole process is divided into a two-step pipeline (a rough sketch follows the list).

  • Step 1: a binary classification problem. You need to implement a machine learning model that classifies whether a patch is positive or negative.
  • Step 2: a regression problem. You need to predict the number of parasites in each image (using the model trained in Step 1).
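As a rough illustration of how the two steps can fit together (not the organizers' reference solution; extract_patches and featurize are hypothetical placeholders for your own patch extraction and feature code):

```python
# pipeline_sketch.py -- illustrative only; extract_patches and featurize are
# hypothetical placeholders, and the thresholding rule is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_step1(X_patches, y_patches):
    """Step 1: train a binary patch classifier (positive vs. negative)."""
    return RandomForestClassifier(n_estimators=100).fit(X_patches, y_patches)

def count_parasites(clf, images, extract_patches, featurize, threshold=0.5):
    """Step 2: per-image parasite count = number of patches the Step 1
    classifier scores above the threshold."""
    counts = []
    for img in images:
        patches = extract_patches(img)                 # hypothetical helper
        X = np.stack([featurize(p) for p in patches])  # hypothetical helper
        proba = clf.predict_proba(X)[:, 1]
        counts.append(int((proba >= threshold).sum()))
    return counts
```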


There are two phases:

  • Phase 1: development phase. We provide you with labeled training data and unlabeled validation and test data. Make predictions for both datasets. However, you will receive feedback on your performance on the validation set only. The performance of your LAST submission will be displayed on the leaderboard.
  • Phase 2: final phase. You do not need to do anything. Your last submission from Phase 1 will be automatically forwarded. Your performance on the test set will appear on the leaderboard when the organizers finish checking the submissions.

This competition allows you to submit any of the following:

  • Only prediction results (no code).
  • A pre-trained prediction model.
  • A prediction model that must be trained and tested.

Submissions are evaluated using the ROC AUC metric (Step 1) and mean squared error (Step 2).
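For local sanity checks before submitting, both metrics are available in scikit-learn; the arrays below are placeholder data, not challenge data:

```python
# local_scores.py -- compute the two challenge metrics locally with
# scikit-learn; the y_* arrays are placeholder examples.
import numpy as np
from sklearn.metrics import roc_auc_score, mean_squared_error

# Step 1: ROC AUC on patch-level positive-class probabilities.
y_true_patch = np.array([0, 1, 1, 0, 1])
y_score_patch = np.array([0.1, 0.8, 0.65, 0.3, 0.9])
print("ROC AUC:", roc_auc_score(y_true_patch, y_score_patch))

# Step 2: mean squared error on image-level parasite counts.
y_true_count = np.array([3, 0, 7, 2])
y_pred_count = np.array([2, 0, 8, 2])
print("MSE:", mean_squared_error(y_true_count, y_pred_count))
```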

Rules

Submissions must be made before the end of Phase 1. You may make up to 5 submissions per day and 100 in total.

This challenge is governed by the general ChaLearn contest rules.

Development Phase

Start: May 1, 2019, midnight

Description: Development phase: tune your models and submit prediction results, a trained model, or an untrained model.

Final Phase

Start: June 5, 2019, midnight

Description: Final phase

Competition Ends

Never

Leaderboard

  #  Username     Score
  1  Abebaw       5.0000
  2  ade_adebayo  4.5000
  3  fzapfack     7.5000