Abstract
Numerous studies demonstrate that the distribution of training data has a significant impact on the performance of Convolutional Neural Networks (CNNs). CNN classifiers trained on balanced datasets achieve highly accurate results compared to shallow classifiers. However, since CNNs were not originally designed to handle imbalanced datasets, class imbalance has a severely negative impact on their overall performance.
In this work we examine the effect of imbalanced datasets on the performance of various CNN-based classifiers. We also discuss the implementation of both traditional and recently proposed optimizers for Deep Learning (DL), along with different learning-rate adaptation strategies, including one based on cyclically increasing and decreasing the rate during training. We compare two ensembling methods for further improving classifier performance, and experimentally evaluate the proposed approaches on two imbalanced image datasets.
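The cyclic learning-rate adaptation mentioned above can be illustrated with a minimal sketch of a triangular schedule (in the spirit of Smith's cyclical learning rates); the bounds `base_lr`, `max_lr`, and `step_size` are illustrative assumptions, not values taken from this paper.

```python
def cyclic_lr(iteration, base_lr=0.001, max_lr=0.006, step_size=2000):
    """Return the learning rate for a given training iteration.

    The rate rises linearly from base_lr to max_lr over step_size
    iterations, then falls back to base_lr, repeating each cycle.
    All parameter values here are illustrative assumptions.
    """
    cycle = iteration // (2 * step_size)      # completed full cycles
    x = iteration / step_size - 2 * cycle     # position in cycle, in [0, 2)
    scale = 1 - abs(x - 1)                    # triangular wave, in [0, 1]
    return base_lr + (max_lr - base_lr) * scale
```

For example, the rate starts at `base_lr`, peaks at `max_lr` after `step_size` iterations, and returns to `base_lr` at the end of the cycle.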