# How Green is Continual Learning, Really? Analyzing the Energy Consumption in Continual Training of Vision Foundation Models (GreenFOMO @ ECCV 2024)
This repository contains the official code for the spotlight paper "How Green is Continual Learning, Really? Analyzing the Energy Consumption in Continual Training of Vision Foundation Models", presented at the GreenFOMO workshop at ECCV 2024. The paper analyzes the environmental impact of continually training vision foundation models, benchmarking their energy consumption and offering insights into their sustainability. The paper is available on [arXiv](https://arxiv.org/abs/2409.18664).
This project builds on PILOT, integrating energy tracking using CodeCarbon to measure carbon emissions and energy usage during model training.
- Clone the repository:

  ```bash
  git clone https://github.com/CodingTomo/how-green-continual-learning.git
  ```

- Install the dependencies following the instructions in the PILOT repository.
- To track energy consumption, install the CodeCarbon package:

  ```bash
  pip install codecarbon
  ```

  For troubleshooting CodeCarbon, refer to its official repository.
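To verify that CodeCarbon works on your machine before launching a full training run, a minimal sanity check can wrap any workload in an `EmissionsTracker`; the output directory and the dummy workload below are illustrative:

```python
from codecarbon import EmissionsTracker

# Write the emissions CSV to the current directory (output_dir is illustrative).
tracker = EmissionsTracker(output_dir=".")
tracker.start()
try:
    # Placeholder workload: any CPU/GPU computation to be measured.
    total = sum(i * i for i in range(10_000_000))
finally:
    # stop() returns the estimated emissions in kg CO2-eq and flushes the log.
    emissions_kg = tracker.stop()

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2-eq")
```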
To run training experiments with energy tracking:

```bash
python main.py --config exps/METHOD_NAME.yaml
```

Switch between continual learning methods by pointing `--config` at the corresponding configuration file in `exps/`.
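Per-task energy figures like those reported in the logs (see below) can be collected with CodeCarbon's task API, available in recent releases. The following is a minimal sketch, not the repository's actual implementation; `train_one_task` and the ten-task loop are hypothetical placeholders:

```python
from codecarbon import EmissionsTracker

def train_one_task(task_id: int) -> None:
    """Hypothetical stand-in for one incremental training step."""
    ...

tracker = EmissionsTracker(project_name="continual_training")
for task_id in range(10):  # e.g. a 10-task incremental sequence
    tracker.start_task(f"task_{task_id}")
    train_one_task(task_id)
    data = tracker.stop_task()  # EmissionsData for this task only
    print(task_id, data.energy_consumed, "kWh")  # energy_consumed is in kWh
```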
- ImageNet-R: follow the setup instructions in the PILOT repository.
- DomainNet (for incremental learning): follow the instructions in the DN4IL repository.

To replicate the paper's experiments on DN4IL, use the splits provided in the `dn_split` folder: place these files in the dataset directory before training.
After training, the following outputs are produced:

- The `logs` folder contains three CSV files reporting the training energy consumption of the CPU, GPU, and memory at three levels of granularity: per epoch, per task, and total. An additional CSV file reports the inference energy consumption for 10,000 requests.
- The `METHOD_NAME_gpu_inference_time.npy` file reports the inference time for each of the 10,000 requests issued after the final incremental training step.
- The `per_step_incremental_accuracy.txt` file reports the per-step accuracy after each task, while the `average_incremental_accuracy.txt` file reports the average accuracy over the incremental training process.
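A short sketch for inspecting these outputs with NumPy and pandas; the exact file names, time units, and CSV columns below are assumptions (the energy columns shown follow CodeCarbon's standard CSV schema), so check them against your own `logs` folder:

```python
import numpy as np
import pandas as pd

# Inference latency for the 10,000 post-training requests
# (METHOD_NAME is a placeholder; times are assumed to be in seconds).
times = np.load("logs/METHOD_NAME_gpu_inference_time.npy")
print(f"mean: {times.mean() * 1e3:.2f} ms, p95: {np.percentile(times, 95) * 1e3:.2f} ms")

# Per-task training energy (hypothetical file name); cpu_energy, gpu_energy, and
# ram_energy are standard CodeCarbon column names, reported in kWh.
df = pd.read_csv("logs/per_task_energy.csv")
print(df[["cpu_energy", "gpu_energy", "ram_energy"]].sum())
```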
*Figure: comparison in terms of training energy consumption (x-axis) and accuracy after the final incremental step (y-axis) across benchmarks and task sequence lengths.*
If you use this repository in your research, please cite the following:
```bibtex
@misc{trinci2024greencontinuallearningreally,
  title={How green is continual learning, really? Analyzing the energy consumption in continual training of vision foundation models},
  author={Tomaso Trinci and Simone Magistri and Roberto Verdecchia and Andrew D. Bagdanov},
  year={2024},
  eprint={2409.18664},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2409.18664},
}

@article{zhou2024continual,
  title={Continual learning with pre-trained models: A survey},
  author={Zhou, Da-Wei and Sun, Hai-Long and Ning, Jingyi and Ye, Han-Jia and Zhan, De-Chuan},
  journal={arXiv preprint arXiv:2401.16386},
  year={2024}
}
```
This project is licensed under the MIT License; see the `LICENSE` file for more information.
For any questions or issues, please open an issue in this repository.