Abstract
This dissertation investigates the robustness and vulnerability of Continual Learning (CL), a critical paradigm in computer vision. CL models are trained on a continuous stream of data and are expected to assimilate new information without compromising previously learned knowledge. This dynamic setting presents significant longstanding challenges, most notably forgetting previously learned information when older data is unavailable at the current training stage, particularly in large-scale datasets with extensive class varieties. To address these limitations, we introduce an approach that enhances the model’s ability to manage new and old knowledge efficiently, ensuring scalability and robustness even on large datasets. Additionally, this dissertation advances Online Continual Learning (OCL), a scenario in which models are limited to a single pass over the data. This restriction requires that all learning occur in this one exposure, prohibiting any subsequent data revisits for hyperparameter tuning or memory updates. The proposed “Bias Robust Continual Learning” (BRCL) method exploits the inherent biases of intermediate networks to counteract forgetting, and incorporates a model consolidation technique that aligns similar feature representations, significantly reducing memory demands. Moreover, this dissertation explores the susceptibility of CL systems to adversarial poisoning attacks, presenting a novel method that analyzes and exploits divergences in feature representations. By maximizing dissimilarity in deep layers while preserving similarity in shallow layers, our approach can significantly impair CL systems, highlighting a new dimension of vulnerability in this framework. The methods and findings presented in this dissertation address several shortcomings in the field of continual learning and lay the groundwork for further research into both the performance and security aspects of CL.