Give Me Some Credit


Give Me Some Credit: Overview

Brought to you by Credit Team

In an economic and financial world full of risks and challenges, detecting potential threats has become urgent, especially in the banking sector. Machine learning is a powerful tool for analyzing customers' data and making accurate decisions, which is why it is increasingly used in this field.

People in the United States are increasingly interested in taking out bank loans. Figure [1] shows the total mortgage debt outstanding in the United States from 2001 to 2016; in 2016 it amounted to approximately 14.29 trillion U.S. dollars.

The goal of this challenge (inspired by the Give Me Some Credit Kaggle challenge [2]) is to use information about the borrower (expressed as 56 features) to predict whether or not we should grant them credit.

 

 

To which borrowers should we grant credit, and based on which criteria?

 
 
It is important to have an idea of the class distribution:

Class distribution (figure)

We can see that the two classes are unbalanced: most customers (93%) will normally not face financial distress in the next two years, so the loan would be granted to them.
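
As a quick sanity check, the class balance can be computed directly from the training labels. The snippet below is a minimal Python sketch; the file name credit_train.solution is an assumption, not necessarily the name used by the starting kit.

    import numpy as np

    # Hypothetical labels file; use the one shipped with the starting kit.
    y_train = np.loadtxt("credit_train.solution")

    # Count borrowers per class (0 = no financial distress, 1 = distress).
    values, counts = np.unique(y_train, return_counts=True)
    for value, count in zip(values, counts):
        print(f"class {int(value)}: {count} examples ({100 * count / len(y_train):.1f}%)")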

This project addresses a serious real-world risk-detection problem. Solving such problems helps improve security in the financial domain and builds confidence between customers and companies.

 

References and credits: 

[1] Statista.
[2] Give Me Some Credit, the closed Kaggle challenge from which the dataset comes.
* The competition protocol was designed by Isabelle Guyon. 
* This challenge was generated using ChaLab.

Credit team members from Paris-Saclay's M2 AIC (Machine learning, Information and Content):

Eden Belouadah
Taycir Yahmed
Ghiles Sidi Saïd
Adrien Pavão (Team coordinator)

Contact:

credit [at] chalearn [dot] org

Give Me Some Credit: Evaluation

 

Because we are handling an unbalanced binary classification problem, the AUC_Binary metric is the most suitable one for evaluating submissions. It is the area under the ROC curve (see below), re-scaled linearly so that 0 corresponds to a random guess and 1 to perfect predictions.

AUC metric

AUC stands for Area Under the Curve; here, it is the area under the ROC (Receiver Operating Characteristic) curve.
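
For reference, here is a minimal Python sketch of how the score could be reproduced offline, assuming scikit-learn is available. The rescaling 2*AUC - 1 is inferred from the description above (a random guess maps to 0, perfect predictions to 1); the exact formula used by the platform may differ.

    from sklearn.metrics import roc_auc_score

    def auc_binary(y_true, y_score):
        # Rescaled AUC: 0 for a random guess, 1 for perfect predictions.
        # Assumes the linear rescaling 2*AUC - 1 implied by the metric description.
        return 2 * roc_auc_score(y_true, y_score) - 1

    # Example: 0/1 labels and predicted scores for four borrowers.
    print(auc_binary([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # raw AUC 0.75 -> 0.5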

Give Me Some Credit: Rules

Submissions must be made before the end of phase 1. You may make up to 5 submissions per day and 100 in total.

This challenge is governed by the general ChaLearn contest rules.

 

This competition is organized solely for test purposes. No prizes will be awarded.

The authors decline responsibility for mistakes, incompleteness or lack of quality of the information provided in the challenge website. The authors are not responsible for any contents linked or referred to from the pages of this site, which are external to this site. The authors intended not to use any copyrighted material or, if not possible, to indicate the copyright of the respective object. The authors intended not to violate any patent rights or, if not possible, to indicate the patents of the respective objects. The payment of royalties or other fees for use of methods, which may be protected by patents, remains the responsibility of the users.

ALL INFORMATION, SOFTWARE, DOCUMENTATION, AND DATA ARE PROVIDED "AS-IS". THE ORGANIZERS DISCLAIM ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL ISABELLE GUYON AND/OR OTHER ORGANIZERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF SOFTWARE, DOCUMENTS, MATERIALS, PUBLICATIONS, OR INFORMATION MADE AVAILABLE THROUGH THIS WEBSITE.

Participation in the organized challenge is non-binding and without obligation. Parts of the pages or the complete publication and information might be extended, changed, or partly or completely deleted by the authors without notice.

Development

Start: Oct. 22, 2017, 6:53 p.m.

Description: Development phase: We provide you with labeled training data and unlabeled validation and test data. Make predictions for both datasets; however, you will receive feedback on your performance on the validation set only. The performance of your LAST submission will be displayed on the leaderboard. The easiest way to prepare your submission is with the starting kit. For more details, see the "Evaluation" tab.
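
As an illustration only, a development-phase submission amounts to fitting a model on the labeled training data and writing predicted scores for the validation and test sets. The sketch below assumes scikit-learn; the file names and exact prediction format are assumptions, so follow the starting kit for the real ones.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    # Hypothetical file names; the starting kit defines the actual data format.
    X_train = np.loadtxt("credit_train.data")
    y_train = np.loadtxt("credit_train.solution")
    X_valid = np.loadtxt("credit_valid.data")
    X_test = np.loadtxt("credit_test.data")

    # Fit a simple baseline classifier on the labeled training data.
    model = GradientBoostingClassifier()
    model.fit(X_train, y_train)

    # Write predicted probabilities for both unlabeled sets; only the
    # validation predictions give leaderboard feedback during this phase.
    np.savetxt("credit_valid.predict", model.predict_proba(X_valid)[:, 1])
    np.savetxt("credit_test.predict", model.predict_proba(X_test)[:, 1])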

Final

Start: April 30, 2018, 6:53 p.m.

Description: Final phase: You do not need to do anything. Your last submission from phase 1 will be automatically forwarded. Your performance on the test set will appear on the leaderboard once the organizers have finished checking the submissions.

Competition Ends

Never

Leaderboard:

#  Username    Score
1  reference3  0.7148
2  reference5  0.7130
3  reference2  0.6644