GMM-IL: Image Classification Using Incrementally Learnt, Independent Probabilistic Models for Small Sample Sizes

When deep-learning classifiers learn new classes through supervised learning, they exhibit catastrophic forgetting. In this paper we propose the Gaussian Mixture Model - Incremental Learner (GMM-IL), a novel two-stage architecture that couples unsupervised visual feature learning with supervised probabilistic models to represent each class. The key novelty of GMM-IL is that each class is learnt independently of the other classes. New classes can be incrementally learnt using a small set of annotated images, with no requirement to relearn data from existing classes. This enables the incremental addition of classes to a model that can be indexed by visual features and reasoned over based on perception.
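The per-class independence described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: each class is represented by its own scikit-learn `GaussianMixture` fit only on that class's feature vectors, so a new class can be added without touching, or relearning, any existing model. The feature values, class names, and single-component mixtures here are invented for the example.

```python
# Minimal sketch of per-class GMM classification (illustrative, not the paper's code).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy "visual features": two classes, a few samples each (small-sample regime).
features = {
    "cat": rng.normal(loc=0.0, scale=1.0, size=(10, 4)),
    "dog": rng.normal(loc=3.0, scale=1.0, size=(10, 4)),
}

# One independent GMM per class; no shared parameters between classes.
models = {
    label: GaussianMixture(n_components=1, random_state=0).fit(x)
    for label, x in features.items()
}

def predict(models, sample):
    # Classify by the highest per-class log-likelihood.
    scores = {label: gmm.score_samples(sample[None, :])[0]
              for label, gmm in models.items()}
    return max(scores, key=scores.get)

# Incrementally add a third class: the "cat" and "dog" models are untouched.
features["bird"] = rng.normal(loc=-3.0, scale=1.0, size=(10, 4))
models["bird"] = GaussianMixture(n_components=1, random_state=0).fit(features["bird"])

print(predict(models, np.array([2.9, 3.1, 3.0, 2.8])))
```

Because each model only ever sees its own class's data, adding "bird" requires only a handful of annotated samples of that class, which is the incremental property the architecture relies on.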

Using Gaussian Mixture Models to represent the independent classes, we outperform a benchmark of an equivalent network with a Softmax head, obtaining increased accuracy for sample sizes smaller than 12 and an increased weighted F1 score for three imbalanced class profiles in that sample range. This novel method enables new classes to be added to a system with access to only a few annotated images of the new class.
