BBO (GINA+MLP+SGD)

Organized by guyon

Current phase: Post-challenge (started Jan. 24, 2021, midnight UTC)

Competition ends: Never

BBO challenge

NeurIPS 2020 BBO post-challenge: find the best black-box optimizer for machine learning

 

This is an exact clone of the NeurIPS 2020 BBO challenge, which ended on October 15, 2020, formatted as an everlasting benchmark for research purposes, with a subset of the original challenge tasks.

The purpose is to evaluate black-box optimization algorithms on real-world objective functions. The problems chosen come from hyper-parameter (hp) selection/tuning in Machine Learning (ML). The task given to the optimizer is:

        maximize R(hp)

        hp ∈ HP

where R(hp) = cross-validation-accuracy { dataset, ML-algorithm(hp) } and the HP space includes continuous and discrete variables.
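For concreteness, here is a hedged sketch of such an objective in Python, using scikit-learn: the cross-validation accuracy of an MLP trained with SGD, mirroring this benchmark's task. The hyper-parameter names ("hidden_units", "lr") and the stand-in data are illustrative assumptions, not the platform's actual code.

    # Hedged sketch of R(hp): cross-validation accuracy of an MLP trained
    # with SGD. Hyper-parameter names and the data are assumptions.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    def R(hp, X, y):
        model = MLPClassifier(
            hidden_layer_sizes=(hp["hidden_units"],),  # assumed hp name
            learning_rate_init=hp["lr"],               # assumed hp name
            solver="sgd",     # stochastic gradient descent, as in the task
            max_iter=200,
        )
        # Mean 5-fold cross-validation accuracy: the value the optimizer sees
        return cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

    # Stand-in data; the actual benchmark uses GINA, a subset of MNIST
    X, y = make_classification(n_samples=200, n_features=20, random_state=0)
    print(R({"hidden_units": 32, "lr": 0.01}, X, y))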

Participants must submit a Python class containing the optimizer, which treats R(hp) as a black box: the optimizer does NOT have access to the mathematical formula of R; all it can do is query values of R at given points (which is time-consuming).

The optimizer class can include custom data members (e.g., storing past values of R) and must include at least two methods (see the sketch after this list):

  1. hp = suggest(...) # called by the CodaLab platform to get the next point queried by the optimizer (the platform then computes R(hp) = cross-validation-accuracy { dataset, ML-algorithm(hp) })
  2. observe(hp, R)  # called by the CodaLab platform to give back to the optimizer object the objective function value R(hp) computed at hp
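A minimal sketch of this interface, implementing uniform random search in pure Python. The api_config layout describing the search space, the class name, and the n_suggestions argument are assumptions for illustration; the authoritative template lives in the starting kit.

    import random

    class RandomSearchOptimizer:
        """Minimal suggest/observe optimizer: uniform random search over a
        mixed continuous/categorical HP space (assumed api_config layout)."""

        def __init__(self, api_config):
            # e.g. {"lr": {"type": "real", "range": (1e-5, 1e-1)},
            #       "activation": {"type": "cat", "values": ["relu", "tanh"]}}
            self.api_config = api_config
            self.history = []  # past (hp, R) pairs; unused by random search

        def suggest(self, n_suggestions=1):
            # Called by the platform to get the next point(s) to evaluate.
            suggestions = []
            for _ in range(n_suggestions):
                hp = {}
                for name, conf in self.api_config.items():
                    if conf["type"] == "real":
                        lo, hi = conf["range"]
                        hp[name] = random.uniform(lo, hi)
                    else:  # categorical/discrete variable
                        hp[name] = random.choice(conf["values"])
                suggestions.append(hp)
            return suggestions

        def observe(self, X, y):
            # Called by the platform with evaluated points X and values y.
            self.history.extend(zip(X, y))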

Hence:

S = search space = HP space

A = moves in mixed categorical and continuous space (suggest)

R = cross-validation-accuracy { dataset, ML-algorithm(hp) }

I = values of R at given points only (observe)

We provide the starting kit of the original challenge, which contains all the information needed to use this benchmark:

  1. Download the starting kit by clicking Participate -> Files -> Starting Kit
  2. Unzip the downloaded starting kit
  3. Change into the starting kit directory
  4. Run the following command:
    1. ./prepare_upload.sh ./example_submissions/random-search
  5. The resulting upload_random-search.zip is ready to be submitted via Participate -> Submit / View results

That's it, you have made your first submission! Easy, right? However, this submission is only a baseline solution. To do better than the baseline, it is sufficient to modify optimizer.py. Optimizer submissions should follow the template, with its suggest-observe interface. Roughly speaking, you should modify the suggest and observe functions, the two key components of a black-box optimization algorithm (one possible direction is sketched below).
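As one hedged illustration of going beyond the baseline within the same interface, the sketch below spends a warm-up budget on random points, then suggests Gaussian perturbations of the best hyper-parameters observed so far. The class name, constructor arguments, and api_config layout are assumptions carried over from the sketch above, not the official template.

    import random

    class PerturbBestOptimizer:
        """Illustrative local-search variant: random warm-up, then small
        perturbations around the best observed point (assumed api_config)."""

        def __init__(self, api_config, warmup=8, scale=0.1):
            self.api_config = api_config
            self.warmup = warmup  # random suggestions before exploiting
            self.scale = scale    # relative perturbation width
            self.history = []     # (hp, R) pairs filled in by observe()

        def _random_hp(self):
            hp = {}
            for name, conf in self.api_config.items():
                if conf["type"] == "real":
                    lo, hi = conf["range"]
                    hp[name] = random.uniform(lo, hi)
                else:
                    hp[name] = random.choice(conf["values"])
            return hp

        def suggest(self, n_suggestions=1):
            if len(self.history) < self.warmup:
                return [self._random_hp() for _ in range(n_suggestions)]
            best_hp, _ = max(self.history, key=lambda t: t[1])  # maximize R
            out = []
            for _ in range(n_suggestions):
                hp = dict(best_hp)
                for name, conf in self.api_config.items():
                    if conf["type"] == "real":
                        lo, hi = conf["range"]
                        step = random.gauss(0.0, (hi - lo) * self.scale)
                        hp[name] = min(hi, max(lo, hp[name] + step))
                    elif random.random() < self.scale:
                        # occasionally re-draw a categorical value
                        hp[name] = random.choice(conf["values"])
                out.append(hp)
            return out

        def observe(self, X, y):
            self.history.extend(zip(X, y))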

 

BBO challenge: Evaluation

We re-open the NeurIPS 2020 BBO challenge to make it an everlasting benchmark. There is only one phase, the post-challenge phase, which allows black-box algorithms to be tested on this benchmark. This particular instance of the benchmark is limited to a single task (one dataset and one algorithm), specifically (GINA, MLP), where GINA is a subset of MNIST and MLP is a fully connected multi-layer Perceptron trained with stochastic gradient descent.

The score by which the optimizer is evaluated is the cross-validation accuracy.

BBO benchmark Rules

This benchmark has NO prizes and is just for educational purposes.

The original BBO challenge terms and conditions do NOT apply.

Download        Size (MB)   Phase
Starting Kit    0.548       #1 Post-challenge

Post-challenge

Start: Jan. 24, 2021, midnight UTC

Description: Only one phase, the post-challenge phase

Competition ends: Never

Leaderboard

#   Username   Score
1   guyon      81.60