Abstract
Network intrusion detection systems (NIDS) are essential for defending online assets against malicious actors. Machine learning (ML) has proven an effective tool for this task due to its ability to accurately detect complex patterns in network data. Maintaining high model performance, however, is a constant challenge: adversaries are non-static, and the distribution, frequency, and variety of attacks a network experiences change continually. Continual learning (CL) has emerged as a paradigm that allows ML-based NIDS to adapt to evolving threats and network environments, but it faces several key issues, most notably the catastrophic forgetting problem. This paper provides an empirical analysis of CL training strategies and model-agnostic meta-learning (MAML) for mitigating catastrophic forgetting in the network security domain. We show that regularization strategies are ineffective at preventing catastrophic forgetting in this setting, and that replay is the most effective way to ameliorate it. We also demonstrate that model repair with MAML, while effective for select classes, causes dramatic performance degradation on other classes. This work should prove useful for future research into the efficacy of traditional continual learning strategies in ML-based NIDS and their potential applications.