
Renate: Automatic Neural Network Retraining and Continual Learning in Python

Renate is a Python package for the automatic retraining of neural network models. It uses advanced Continual Learning and Lifelong Learning algorithms to achieve this. The implementation is based on PyTorch and Lightning for deep learning, and on Syne Tune for hyperparameter optimization.

Who needs Renate?

In many applications, data becomes available over time, and retraining from scratch for every new batch of data is prohibitively expensive. In these cases, we would like to use each new batch of data to update the previous model at limited cost. Unfortunately, since data in different chunks is not sampled from the same distribution, simply fine-tuning the old model leads to problems such as catastrophic forgetting. The algorithms in Renate help mitigate the negative impact of forgetting and improve overall model performance.
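To make the idea concrete, here is a minimal, conceptual sketch of Experience Replay, the update strategy used in the experiments below: a small memory buffer retains examples from earlier data chunks and mixes them into the loss while training on a new chunk. This is plain PyTorch for illustration only, not Renate's actual implementation; the buffer capacity of 500 mirrors the setup in footnote [1].

```python
import random
import torch

class ReplayBuffer:
    """Reservoir-sampled memory of past (input, target) pairs."""

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling keeps a uniform sample of all examples seen so far.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_on_chunk(model, optimizer, loss_fn, loader, buffer):
    """One pass over a new data chunk, replaying old examples to counter forgetting."""
    model.train()
    for x, y in loader:
        loss = loss_fn(model(x), y)
        if buffer.data:
            # Mix in a batch of old examples so the model does not forget them.
            mem_x, mem_y = buffer.sample(x.size(0))
            loss = loss + loss_fn(model(mem_x), mem_y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        for xi, yi in zip(x, y):
            buffer.add(xi, yi)
```

Renate provides Experience Replay and more advanced update strategies out of the box, so this logic does not need to be written by hand.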

[Figure: Renate vs. model fine-tuning]

Renate's update mechanisms improve over naive fine-tuning approaches. [1]

Renate also offers hyperparameter optimization (HPO), which can heavily impact the performance of a model that is continuously updated. To do so, Renate employs Syne Tune under the hood and offers advanced HPO methods such as multi-fidelity algorithms (e.g., ASHA) and transfer-learning algorithms (useful for speeding up repeated retuning).
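The sketch below shows how an update job with HPO might be launched. The search-space helpers come from syne_tune.config_space; the run_training_job call and its argument names follow the pattern in Renate's documentation but should be treated as assumptions and verified against the current docs. The file renate_config.py is assumed to define the model and data module.

```python
# Sketch of a tuned model update; argument names follow Renate's
# documented pattern but should be verified before use.
from syne_tune.config_space import loguniform, uniform

from renate.training import run_training_job

# Hyperparameter search space explored by Syne Tune.
config_space = {
    "optimizer": "SGD",
    "learning_rate": loguniform(1e-4, 1e-1),
    "momentum": uniform(0.0, 0.9),
}

run_training_job(
    config_space=config_space,
    mode="max",                      # maximize the validation metric
    metric="val_accuracy",
    updater="ER",                    # Experience Replay update strategy
    max_epochs=50,
    chunk_id=0,                      # index of the new data chunk
    config_file="renate_config.py",  # assumed to define model and data functions
    output_state_url="./state/",     # where the updated model state is saved
    backend="local",                 # a cloud backend can be used instead
    scheduler="asha",                # multi-fidelity HPO
)
```

Since Syne Tune handles the tuning, switching to a different HPO method should mostly be a matter of changing the scheduler configuration.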

[Figure: Impact of HPO on Renate's updating algorithms]

Renate's updating algorithms benefit from hyperparameter tuning, compared to running them with default settings. [2]

Key features

  • Easy to scale and run in the cloud
  • Designed for real-world retraining pipelines
  • Advanced HPO functionalities available out-of-the-box
  • Open for experimentation

Resources

Cite Renate

@misc{renate2023,
  title           = {Renate: A Library for Real-World Continual Learning},
  author          = {Martin Wistuba and
                     Martin Ferianc and
                     Lukas Balles and
                     Cedric Archambeau and
                     Giovanni Zappella},
  year            = {2023},
  eprint          = {2304.12067},
  archivePrefix   = {arXiv},
  primaryClass    = {cs.LG}
}

What are you looking for?

If you did not find what you were looking for, open an issue and we will do our best to improve the documentation.

[1] To create this plot, we simulated class-incremental learning on CIFAR-10. The training data was divided into 5 partitions, and we trained on them sequentially. Fine-tuning refers to the strategy of learning on the first partition from scratch and then training on each subsequent partition for only a few epochs. We compare against Experience Replay with a memory size of 500. Both methods use the same number of epochs, and the best checkpoint is chosen using a validation set. Results are reported on the test set.
[2] The setup is the same as in the previous experiment, but this time we compare Experience Replay against a version of it whose hyperparameters were tuned.
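
For readers who want to reproduce a similar setup, the snippet below is a rough sketch of building class-incremental CIFAR-10 partitions with torchvision. The split into 5 groups of 2 classes each is an assumption consistent with footnote [1], not the exact benchmark code behind the plots.

```python
# Rough sketch: split CIFAR-10 into 5 class-incremental partitions
# of 2 classes each, to be visited sequentially during training.
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

train_data = datasets.CIFAR10(
    root="./data", train=True, download=True,
    transform=transforms.ToTensor(),
)

targets = torch.tensor(train_data.targets)
class_groups = [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]

# Each partition contains only the examples of its class group.
partitions = [
    Subset(
        train_data,
        torch.isin(targets, torch.tensor(group)).nonzero().squeeze(1).tolist(),
    )
    for group in class_groups
]
```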
