Compare a neural network with an XGBoost method on the Kaggle MoA (Mechanisms of Action) competition.
| Experiment | Log loss |
|---|---|
| Baseline | 0.02191 |
| Baseline with categories information | 0.02213 |
| Baseline with categories information (ema) | 0.02031 |
| XGBoost | 0.01671 |
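For reference, the log loss reported above is presumably the MoA competition metric: the binary log loss computed per target column and then averaged. The snippet below is a minimal sketch of that metric using scikit-learn; the function name and arguments are illustrative, not taken from this repo.

```python
import numpy as np
from sklearn.metrics import log_loss

def mean_columnwise_log_loss(y_true, y_pred, eps=1e-15):
    """Binary log loss computed independently per target column, then averaged.

    y_true: (n_samples, n_targets) array of 0/1 labels
    y_pred: (n_samples, n_targets) array of predicted probabilities
    """
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    losses = [
        log_loss(y_true[:, j], y_pred[:, j], labels=[0, 1])
        for j in range(y_true.shape[1])
    ]
    return float(np.mean(losses))
```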
Neural Network:
- Step 1. Set `ex_name` in `init.py`. `ex_name` can be `baseline`, `add_cate_x`, or `add_cate_x_ema`.
- Step 2. Adjust the `batch_size` in `init.py` according to your GPU memory; higher is preferred (a hypothetical sketch of these settings follows the steps below).
- Step 3. Run the neural network with `python main.py`.
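The actual contents of `init.py` are not reproduced here; the snippet below is only a hypothetical sketch of the configuration the steps above refer to. Only `ex_name` (with its three values) and `batch_size` come from this README; every other field is an assumption.

```python
# init.py (hypothetical sketch; only ex_name and batch_size are documented above)

# Which experiment to run: "baseline", "add_cate_x", or "add_cate_x_ema"
ex_name = "add_cate_x_ema"

# Lower this if your GPU runs out of memory; higher is preferred
batch_size = 1024

# Assumed additional training settings (not taken from the repo)
num_epochs = 30
learning_rate = 1e-3
num_folds = 5
seed = 42
```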
XGBoost:
- Step 1. Run the XGBoost model with `python XGBoost.py`.
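`XGBoost.py` itself is not shown here. As a rough illustration only, a common way to apply XGBoost to the multi-label MoA targets is to fit one binary classifier per target column, as sketched below; the column prefixes and hyperparameters are assumptions, not the repo's tuned settings.

```python
import numpy as np
import pandas as pd
from xgboost import XGBClassifier

# Illustration of one-classifier-per-target training; not the repo's tuned model.
train_x = pd.read_csv("data/train_features.csv")
train_y = pd.read_csv("data/train_targets_scored.csv")

# Assumed MoA column layout: gene ("g-") and cell ("c-") features, targets keyed by sig_id
feature_cols = [c for c in train_x.columns if c.startswith(("g-", "c-"))]
target_cols = [c for c in train_y.columns if c != "sig_id"]

X = train_x[feature_cols].values
preds = np.zeros((len(train_x), len(target_cols)))

for j, target in enumerate(target_cols):
    clf = XGBClassifier(
        n_estimators=200, max_depth=5, learning_rate=0.05,
        tree_method="hist", eval_metric="logloss",
    )
    clf.fit(X, train_y[target].values)
    # Probability of the positive class; in practice use out-of-fold predictions
    preds[:, j] = clf.predict_proba(X)[:, 1]
```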
```
├── data - the data can be downloaded from the Kaggle competition website
│ ├── sample_submission.csv
│ ├── test_features.csv
│ ├── train_features.csv
│ ├── train_targets_nonscored.csv
│ └── train_targets_scored.csv
├── dataset.py - the definition of the torch.utils.data.Dataset subclass (a sketch of such a class appears below the tree)
├── epoch_fun.py - train, validate and test functions
├── init.py - the training configuration of neural network
├── main.py - the main entry point of the framework
├── model.py - the definition of the model class
├── Report.md - the report of the experimental result
├── run_train.py - the implementation of cross-validation and the epoch loop
├── utils.py - utility functions for the project
└── XGBoost.py - the implementation of the XGBoost method
```
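As noted in the tree above, `dataset.py` defines the torch Dataset used for training. The class below is only a minimal sketch of such a tabular wrapper, assuming plain NumPy feature and target arrays; it is not necessarily the repo's actual implementation.

```python
import torch
from torch.utils.data import Dataset

class MoADataset(Dataset):
    """Minimal tabular Dataset sketch: features and multi-label targets as float tensors."""

    def __init__(self, features, targets=None):
        # features: (n_samples, n_features) NumPy array
        # targets:  (n_samples, n_targets) NumPy array, or None at test time
        self.features = torch.as_tensor(features, dtype=torch.float32)
        self.targets = None if targets is None else torch.as_tensor(targets, dtype=torch.float32)

    def __len__(self):
        return self.features.shape[0]

    def __getitem__(self, idx):
        if self.targets is None:
            return self.features[idx]
        return self.features[idx], self.targets[idx]
```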
We highly appreciate @YasufumiNakama for sharing his great Kaggle notebook; this repo is mainly based on it. Moreover, thanks to @fadel for his plug-and-play EMA module and to @FChmiel for his carefully tuned XGBoost model.
Any kind of enhancement or contribution is welcome.
The code is licensed under the MIT License.