Incremental learning is a paradigm in which a deep architecture is required to learn continually from a stream of data.
In this work, we implement several state-of-the-art algorithms for incremental learning: Finetuning, Learning Without Forgetting and iCaRL. We then apply several modifications to the original iCaRL algorithm: specifically, we experiment with different combinations of distillation and classification losses and introduce new classifiers into the framework.
Furthermore, we propose three extensions of the original iCaRL algorithm and verify their effectiveness. We perform our tests on CIFAR-100, the benchmark used in the original iCaRL paper.
A fully detailed report of the project is available.
- PyTorch
- scikit-learn
`models.py`
contains a modular implementation of iCaRL and Learning without Forgetting within a single class, `FrankenCaRL`.
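For reference, the objective shared by iCaRL and LwF combines a sigmoid-based classification loss with a distillation term: the one-hot targets of the previously seen classes are replaced by the frozen old network's sigmoid outputs. A minimal PyTorch sketch (the function name and signature are illustrative, not the actual `FrankenCaRL` API):

```python
import torch
import torch.nn.functional as F

def icarl_loss(logits, targets, old_logits=None, n_old_classes=0):
    """Binary cross-entropy classification loss, with distillation on the
    old classes when a frozen previous model is available.

    logits:     [batch, n_classes] outputs of the current network
    targets:    [batch] integer class labels
    old_logits: [batch, n_old_classes] outputs of the previous network
    """
    # One-hot targets for the sigmoid-based classification term
    onehot = F.one_hot(targets, num_classes=logits.shape[1]).float()
    if old_logits is not None and n_old_classes > 0:
        # Distillation: the old network's sigmoid outputs replace the
        # one-hot targets on the columns of the previously seen classes
        onehot[:, :n_old_classes] = torch.sigmoid(old_logits)
    return F.binary_cross_entropy_with_logits(logits, onehot)
```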
`knn_icarl.py`
is a version of iCaRL which employs a normal kNN instead of a nearest-class-mean classifier.
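A minimal sketch of the idea, fitting scikit-learn's kNN on the L2-normalized features of the stored exemplars; the `features` method of the network is an assumed name for the feature extractor, not necessarily the one used in the code:

```python
import torch
from sklearn.neighbors import KNeighborsClassifier

def build_knn(net, exemplar_loader, k=5):
    """Fit a kNN classifier on the features of the stored exemplars."""
    net.eval()
    feats, labels = [], []
    with torch.no_grad():
        for images, targets in exemplar_loader:
            f = net.features(images)             # assumed feature extractor
            f = f / f.norm(dim=1, keepdim=True)  # L2-normalize, as in iCaRL
            feats.append(f.cpu())
            labels.append(targets)
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(torch.cat(feats).numpy(), torch.cat(labels).numpy())
    return knn
```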
`svmCaRL.py`
is an SVM-iCaRL hybrid, loosely inspired by SupportNet.
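In the spirit of SupportNet, one natural way to exploit an SVM here is to prefer exemplars that lie close to the decision boundary. A rough sketch of that selection step, assuming features and labels are already extracted; the helper below is illustrative, not the actual `svmCaRL.py` code:

```python
import numpy as np
from sklearn.svm import LinearSVC

def select_exemplars_by_svm(features, labels, m):
    """Keep up to m exemplars per class, preferring points near the SVM
    decision boundary (SupportNet-style support-vector exemplars).
    Assumes more than two classes, so decision_function is 2-D."""
    svm = LinearSVC(max_iter=5000).fit(features, labels)
    margins = svm.decision_function(features)  # [n_samples, n_classes]
    selected = []
    for col, c in enumerate(svm.classes_):
        idx = np.where(labels == c)[0]
        # Smallest distance to the class hyperplane first
        order = np.argsort(np.abs(margins[idx, col]))
        selected.extend(idx[order[:m]].tolist())
    return selected
```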
`specialist.py`
is a version of iCaRL which makes use of an ensemble of specialized models, one for each batch of classes.
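One common way to combine such specialists at test time is to let each model score only its own classes and take the overall argmax. A hedged sketch, with illustrative names rather than the file's actual API:

```python
import torch

def ensemble_predict(specialists, class_ranges, images):
    """Score each image with every specialist and pick the best class overall.

    specialists:  list of models, one per batch of classes
    class_ranges: list of (start, end) global label ranges, aligned with
                  the specialists list
    """
    n_classes = max(end for _, end in class_ranges)
    scores = torch.full((images.shape[0], n_classes), float('-inf'))
    with torch.no_grad():
        for model, (start, end) in zip(specialists, class_ranges):
            model.eval()
            # Each specialist fills in only the columns of its own classes
            scores[:, start:end] = torch.sigmoid(model(images))
    return scores.argmax(dim=1)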
`FamiliCaRL.py`
adopts a more convoluted "double distillation" mechanism to further reduce the imbalance between new and past classes.
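The report details the exact mechanism; purely as a generic illustration, a double distillation objective can pair the usual distillation from the frozen previous model with a second distillation term from a teacher trained on the new classes, so that neither side of the label space dominates. This sketch is not the actual `FamiliCaRL.py` loss:

```python
import torch
import torch.nn.functional as F

def double_distillation_loss(logits, targets, old_logits,
                             new_teacher_logits, n_old):
    """Classification loss plus two distillation terms: one toward the
    frozen previous model (old classes) and one toward a teacher trained
    only on the new classes."""
    onehot = F.one_hot(targets, num_classes=logits.shape[1]).float()
    cls = F.binary_cross_entropy_with_logits(logits, onehot)
    dist_old = F.binary_cross_entropy_with_logits(
        logits[:, :n_old], torch.sigmoid(old_logits))
    dist_new = F.binary_cross_entropy_with_logits(
        logits[:, n_old:], torch.sigmoid(new_teacher_logits))
    return cls + dist_old + dist_new
```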
`exemplars_generator.py`
contains various functions that attempt to synthesize new exemplars based on the ones already stored in memory.
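One of the simplest strategies such a generator can take is to interpolate between random pairs of stored exemplars of the same class; the actual functions in the file may differ. A minimal sketch:

```python
import torch

def interpolate_exemplars(exemplars, n_new, alpha_low=0.3, alpha_high=0.7):
    """Synthesize n_new exemplars for one class by convex combinations of
    random pairs of its stored exemplars.

    exemplars: [m, C, H, W] tensor of images stored for a single class
    """
    m = exemplars.shape[0]
    i = torch.randint(m, (n_new,))
    j = torch.randint(m, (n_new,))
    # Random mixing coefficient, broadcast over the image dimensions
    alpha = torch.empty(n_new, 1, 1, 1).uniform_(alpha_low, alpha_high)
    return alpha * exemplars[i] + (1 - alpha) * exemplars[j]
```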