iPRoBe-lab/Continuous_Learning_FE_DM

Task-conditioned Ensemble of Expert Models for Continuous Learning

Abstract:

One of the major challenges in machine learning is maintaining the accuracy of a deployed model (e.g., a CNN classifier) in a non-stationary environment. A non-stationary environment produces distribution shifts and, consequently, a degradation in performance. Continuous learning of the deployed model with new data could be one solution. However, the question arises of how to update the model with new training data so that it retains its accuracy on the old data while adapting to the new data. In this work, we propose a task-conditioned ensemble of models to maintain the performance of the existing model. The method involves an ensemble of expert models selected using task membership information. The in-domain models, which are based on the local outlier concept and are distinct from the expert models, dynamically provide task membership information for each probe sample at run time. To evaluate the proposed method, we experiment with three setups: the first represents distribution shift between tasks (LivDet-Iris-2017), the second represents distribution shift both between and within tasks (LivDet-Iris-2020), and the third represents disjoint distributions between tasks (Split MNIST). The experiments highlight the benefits of the proposed method.
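The routing idea described above — in-domain models score how well a probe fits each task's training distribution, and the best-fitting task's expert makes the prediction — can be sketched in minimal Python. This is an illustrative toy, not the paper's implementation: nearest-centroid classifiers stand in for the CNN experts, and a simplified mean-distance-to-k-nearest-neighbors score stands in for the local-outlier-based in-domain models; all class and variable names here are hypothetical.

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class TaskExpert:
    """Toy expert: nearest-centroid classifier trained on one task's data."""
    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            pts = [x for x, l in zip(X, y) if l == label]
            self.centroids[label] = [sum(c) / len(pts) for c in zip(*pts)]
        return self

    def predict(self, x):
        return min(self.centroids, key=lambda l: dist(x, self.centroids[l]))

class InDomainModel:
    """Toy in-domain detector: mean distance to the k nearest training
    samples of the task. A simplified stand-in for a local-outlier-style
    score; lower means the probe looks more in-domain for this task."""
    def __init__(self, k=3):
        self.k = k

    def fit(self, X):
        self.X = X
        return self

    def outlier_score(self, x):
        ds = sorted(dist(x, p) for p in self.X)
        return sum(ds[:self.k]) / self.k

class TaskConditionedEnsemble:
    """Route each probe, at run time, to the expert whose in-domain model
    reports the lowest outlier score; return (task index, prediction)."""
    def __init__(self):
        self.experts, self.detectors = [], []

    def add_task(self, X, y):
        self.experts.append(TaskExpert().fit(X, y))
        self.detectors.append(InDomainModel().fit(X))

    def predict(self, x):
        task = min(range(len(self.detectors)),
                   key=lambda t: self.detectors[t].outlier_score(x))
        return task, self.experts[task].predict(x)

# Two toy tasks occupying different regions of feature space.
ens = TaskConditionedEnsemble()
ens.add_task([(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (1.1, 0.9)], [0, 0, 1, 1])
ens.add_task([(5.0, 5.0), (5.2, 5.1), (6.0, 6.0), (6.1, 5.9)], [0, 0, 1, 1])

print(ens.predict((0.1, 0.1)))   # routed to task 0 → (0, 0)
print(ens.predict((6.0, 5.9)))   # routed to task 1 → (1, 1)
```

Note that the routing decision is made per probe sample, so no task label is needed at test time, which matches the dynamic run-time membership assignment described in the abstract.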
