Main libraries used: Hugging Face Transformers and PyTorch Lightning (the full list of dependencies is in requirements.txt).
Install the requirements (inside the project folder):
pip install -r requirements.txt
To train the regression model with the emotion classification auxiliary task:
python3 training_reg.py --gpus 4 --batch_size 64 --patience 10 --encoder_model roberta-base --max_epochs 20 --aux_task emotions --learning_rate 0.00003 --nr_frozen_epochs 0 --extra_dropout 0.05 --warmup_proportion 0.1 --loss_aux 0.95 --warmup_aux 8 --seed 1
To train the model without an auxiliary task:
python3 training.py --gpus 4 --batch_size 64 --patience 10 --encoder_model roberta-base --max_epochs 20 --aux_task None --learning_rate 0.00005 --nr_frozen_epochs 0 --extra_dropout 0.2 --warmup_proportion 0.1 --loss_aux 0.75 --warmup_aux 8 --seed 1
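The training scripts themselves are not shown here, but as a minimal, hypothetical sketch of how flags such as --gpus, --patience, --max_epochs and --seed could be wired into a PyTorch Lightning run (the framework behind the lightning-text-classification code this repository builds on): the function name build_trainer, the monitored metric "val_loss", and the pre-2.0 Trainer(gpus=...) argument are all assumptions, and the actual training_reg.py / training.py may differ.

import argparse

import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint


def build_trainer(args: argparse.Namespace) -> pl.Trainer:
    # Early stopping driven by --patience; checkpoint on validation loss
    # (the metric name "val_loss" is assumed, not taken from the repository).
    early_stop = EarlyStopping(monitor="val_loss", patience=args.patience, mode="min")
    checkpoint = ModelCheckpoint(monitor="val_loss", mode="min")
    return pl.Trainer(
        gpus=args.gpus,              # pre-2.0 Lightning API, matching the --gpus flag
        max_epochs=args.max_epochs,
        callbacks=[early_stop, checkpoint],
    )


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--gpus", type=int, default=1)
    parser.add_argument("--batch_size", type=int, default=64)
    parser.add_argument("--patience", type=int, default=10)
    parser.add_argument("--encoder_model", type=str, default="roberta-base")
    parser.add_argument("--max_epochs", type=int, default=20)
    parser.add_argument("--learning_rate", type=float, default=3e-5)
    parser.add_argument("--seed", type=int, default=1)
    args = parser.parse_args()

    pl.seed_everything(args.seed)    # --seed fixes the run for reproducibility
    trainer = build_trainer(args)
    # model = <LightningModule defined in training_reg.py / training.py>
    # trainer.fit(model)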
For the three-task model, use the code in the Three-task folder:
python3 training_reg.py --gpus 4 --batch_size 64 --patience 10 --encoder_model roberta-base --max_epochs 20 --aux_task emotions --learning_rate 0.00003 --nr_frozen_epochs 0 --extra_dropout 0.05 --warmup_proportion 0.1 --loss_aux 0.95 --warmup_aux 8 --seed 1
To evaluate a trained model, run the testing script with the path to a saved checkpoint:
python3 testing.py --checkpoint_path path_to_model_checkpoint
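As a hedged sketch of what testing.py presumably does with --checkpoint_path: PyTorch Lightning can restore a trained LightningModule from a checkpoint and run its test loop. The model class is passed in as a parameter below because the class defined in this repository's scripts is not shown here; the name Classifier in the usage comment is a placeholder.

from typing import Type

import pytorch_lightning as pl


def evaluate(model_cls: Type[pl.LightningModule], checkpoint_path: str) -> None:
    # Restore the weights and hyperparameters saved during training.
    model = model_cls.load_from_checkpoint(checkpoint_path)
    trainer = pl.Trainer(gpus=1)  # pre-2.0 Lightning API, as in the sketch above
    trainer.test(model)


# Usage (placeholder import and class name; use the model class defined in
# the repository's training scripts):
# from training_reg import Classifier
# evaluate(Classifier, "path_to_model_checkpoint")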
The code was based on Hugging Face Transformers and the lightning-text-classification repository.