Abstract
Graph Neural Networks (GNNs) are widely applied in domains such as social networks, recommendation systems, and cybersecurity. However, their susceptibility to adversarial attacks remains a prevalent problem: prior research shows that even minor perturbations of graph data can severely degrade model performance. Our research investigates whether a GNN's robustness to adversarial perturbations transfers to different GNN architectures through transfer learning. This method examines how adversarial information propagates across GNN models while exploring ways to strengthen model robustness. Our approach begins with a graph convolutional network (GCN) trained on an adversarial dataset, which exposes its specific vulnerabilities. We then build a graph attention network (GAT) model that is expected to be more resistant to adversarial perturbations. By training this model on the adversarial dataset, we assess its robustness while working toward improving its adversarial defenses. We also apply meta-learning to the target-side model to improve its adaptability and further strengthen its defenses. Our research aims to improve the adversarial robustness of GAT, contributing to more effective defenses for applications involving graph-based data protection.
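The vulnerability the abstract describes — a small structural perturbation shifting a GNN's outputs — can be illustrated with a minimal NumPy sketch. This is not code from the paper: the toy graph, random features, and single-edge flip are illustrative assumptions, and only a single GCN propagation step is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, X, W):
    # One GCN propagation step: ReLU(D^{-1/2} (A + I) D^{-1/2} X W)
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)

# Toy 4-node path graph, 3 input features, 2 output features (all hypothetical)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 2))

H_clean = gcn_layer(A, X, W)

# Adversarial structure perturbation: flip a single edge (connect nodes 0 and 3)
A_pert = A.copy()
A_pert[0, 3] = A_pert[3, 0] = 1.0
H_pert = gcn_layer(A_pert, X, W)

# The representation shift induced by one flipped edge
print("L2 shift in node embeddings:", np.linalg.norm(H_clean - H_pert))
```

Because GCN propagation mixes each node's features with its neighbors', a single flipped edge alters the normalized adjacency and therefore every affected node's embedding, which is the mechanism adversarial graph attacks exploit.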