Incremental Zero-Shot Learning

IEEE Trans Cybern. 2022 Dec;52(12):13788-13799. doi: 10.1109/TCYB.2021.3110369. Epub 2022 Nov 18.

Abstract

The goal of zero-shot learning (ZSL) is to correctly recognize objects from unseen classes without corresponding training samples. Existing ZSL methods are trained on a set of predefined classes and lack the ability to learn from a stream of training data. However, in many real-world applications, training data are collected incrementally; this is one of the main reasons why ZSL methods cannot be applied in certain real-world situations. Accordingly, to handle practical learning tasks of this kind, we introduce a novel ZSL setting, referred to as incremental ZSL (IZSL), the goal of which is to accumulate historical knowledge and alleviate catastrophic forgetting so as to facilitate better recognition when incrementally trained on new classes. We further propose a novel method to realize IZSL, which employs a generative replay strategy to produce virtual samples of previously seen classes. Historical knowledge is then transferred from the former learning step to the current one through joint training on both real new and virtual old data. Subsequently, a knowledge distillation strategy is leveraged to distill knowledge from the former model into the current model, which regularizes the training of the current model. In addition, our method can be flexibly combined with most generative ZSL methods to tackle IZSL. Extensive experiments on three challenging benchmarks indicate that the proposed method effectively tackles the IZSL problem, while existing ZSL methods fail.
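The training objective described above (joint training on real new data plus generated replays of old classes, regularized by distillation from the frozen previous model) can be sketched as follows. This is an illustrative toy sketch, not the paper's implementation: the linear "generator," the helper names (`replay_old_classes`, `izsl_loss`), and all hyperparameters are assumptions introduced here for clarity.

```python
# Toy sketch (hypothetical, not the paper's code): generative replay
# plus knowledge distillation for an incremental ZSL training step.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, t=1.0):
    z = z / t
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true labels.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def replay_old_classes(gen_w, attributes, n_per_class):
    """Produce virtual features for previously seen classes from their
    class attribute vectors (stand-in for a conditional generator)."""
    feats, labels = [], []
    for c, a in enumerate(attributes):
        noise = rng.normal(size=(n_per_class, gen_w.shape[1]))
        feats.append(noise @ gen_w.T + a)  # toy linear "generator"
        labels.append(np.full(n_per_class, c))
    return np.vstack(feats), np.concatenate(labels)

def izsl_loss(w_cur, w_old, x_new, y_new, x_old, y_old, lam=1.0, T=2.0):
    """Joint loss: cross-entropy on real new + replayed old data,
    plus a distillation term (KL to the frozen old model) computed
    on the replayed samples over the old model's output space."""
    x = np.vstack([x_new, x_old])
    y = np.concatenate([y_new, y_old])
    p_cur = softmax(x @ w_cur.T)
    ce = cross_entropy(p_cur, y)
    n_old_cls = w_old.shape[0]
    p_teacher = softmax(x_old @ w_old.T, t=T)
    p_student = softmax(x_old @ w_cur[:n_old_cls].T, t=T)
    kd = np.mean(np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                                     - np.log(p_student + 1e-12)), axis=1))
    return ce + lam * kd
```

In a full pipeline, `izsl_loss` would be minimized over `w_cur` at each incremental step while `w_old` (the previous model) stays frozen, so the distillation term penalizes drift on old classes while the cross-entropy term fits the newly arrived classes.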