# kaggle moa


A comparison of a neural network and an XGBoost model on the Kaggle MoA (Mechanisms of Action) competition.

## Main Result

| Experiment | Log loss |
| --- | --- |
| Baseline | 0.02191 |
| Baseline with categories information | 0.02213 |
| Baseline with categories information (EMA) | 0.02031 |
| XGBoost | 0.01671 |
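
The "(EMA)" variant keeps an exponential moving average of the network weights during training and evaluates with the averaged weights. A minimal sketch of the idea is below; the class name `EMA` and the attribute `shadow` are illustrative, not taken from this repo's EMA module:

```python
import torch

class EMA:
    """Exponential moving average of model parameters (illustrative sketch)."""
    def __init__(self, model: torch.nn.Module, decay: float = 0.999):
        self.decay = decay
        # Keep a detached copy of the weights as the running average.
        self.shadow = {k: v.detach().clone() for k, v in model.state_dict().items()}

    @torch.no_grad()
    def update(self, model: torch.nn.Module):
        # shadow <- decay * shadow + (1 - decay) * current weights
        for k, v in model.state_dict().items():
            if v.dtype.is_floating_point:
                self.shadow[k].mul_(self.decay).add_(v.detach(), alpha=1.0 - self.decay)
            else:
                self.shadow[k].copy_(v)

    def copy_to(self, model: torch.nn.Module):
        # Load the averaged weights for evaluation.
        model.load_state_dict(self.shadow)
```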

## How to use

Neural Network:

- Step 1. Set `ex_name` in `init.py`. `ex_name` can be `baseline`, `add_cate_x`, or `add_cate_x_ema`.
- Step 2. Adjust `batch_size` in `init.py` according to your GPU memory. Larger is preferred (see the sketch after this list).
- Step 3. Run `main.py` with `python main.py`.
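
For orientation, the relevant part of `init.py` presumably looks something like the following; only `ex_name` and `batch_size` are named in the steps above, and the exact values here are assumptions:

```python
# init.py (illustrative sketch; values are assumptions, not the repo's defaults)
ex_name = "add_cate_x_ema"  # one of: "baseline", "add_cate_x", "add_cate_x_ema"
batch_size = 256            # raise this as far as your GPU memory allows
```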

XGBoost:

- Step 1. Run `XGBoost.py` with `python XGBoost.py` (a sketch of the general approach follows).
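
Since MoA is a multi-label problem, one common approach is to train one binary XGBoost classifier per target column. The following is a minimal sketch of that idea under assumed hyperparameters; it is not guaranteed to match what `XGBoost.py` in this repo actually does:

```python
# Illustrative sketch: one binary XGBoost classifier per MoA target column.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier

train_x = pd.read_csv("data/train_features.csv").select_dtypes(include=np.number)
train_y = pd.read_csv("data/train_targets_scored.csv").drop(columns=["sig_id"])
test_x = pd.read_csv("data/test_features.csv").select_dtypes(include=np.number)

preds = np.zeros((len(test_x), train_y.shape[1]))
for i, target in enumerate(train_y.columns):
    if train_y[target].nunique() < 2:
        continue  # skip degenerate targets with a single class
    clf = XGBClassifier(n_estimators=100, max_depth=6, learning_rate=0.1)
    clf.fit(train_x, train_y[target])
    preds[:, i] = clf.predict_proba(test_x)[:, 1]
```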

## File Structure

```
├── data - the data can be downloaded from the Kaggle website
│   ├── sample_submission.csv
│   ├── test_features.csv
│   ├── train_features.csv
│   ├── train_targets_nonscored.csv
│   └── train_targets_scored.csv
├── dataset.py - definition of the torch.utils.data.Dataset class (see the sketch after this tree)
├── epoch_fun.py - train, validate, and test functions
├── init.py - training configuration of the neural network
├── main.py - the main class of the framework
├── model.py - definition of the model class
├── Report.md - report of the experimental results
├── run_train.py - implementation of cross-validation and the epoch loop
├── utils.py - project utility functions
└── XGBoost.py - implementation of the XGBoost method
```
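
As a rough illustration of what `dataset.py` might contain, a minimal `torch.utils.data.Dataset` for tabular features and multi-label targets could look like this; the class name and fields are assumptions, and the actual class in the repo may differ:

```python
import torch
from torch.utils.data import Dataset

class MoADataset(Dataset):
    """Illustrative sketch; the actual class in dataset.py may differ."""
    def __init__(self, features, targets=None):
        # features: (n_samples, n_features); targets: (n_samples, n_labels) or None
        self.features = torch.as_tensor(features, dtype=torch.float32)
        self.targets = None if targets is None else torch.as_tensor(targets, dtype=torch.float32)

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        if self.targets is None:
            return self.features[idx]  # test time: features only
        return self.features[idx], self.targets[idx]
```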

## Acknowledgment

We highly appreciate @YasufumiNakama for sharing his great Kaggle notebooks, on which this repo is mainly based. We also thank @fadel for his plug-and-play EMA module and @FChmiel for his carefully tuned XGBoost model.

## Contributing

Any kind of enhancement or contribution is welcome.

## License

The code is licensed under the MIT License.