Abstract
Driven by their high mobility and rich sensing capabilities, UAVs have been widely adopted across a spectrum of military, civilian, and commercial applications. Their adoption addresses many safety and efficiency challenges in these applications, especially in hard-to-reach places. Meanwhile, with the explosive development of AI techniques in recent years, the AI-assisted UAV paradigm, which applies AI techniques to enhance UAV autopilot systems and various UAV applications in terms of intelligence, effectiveness, and efficiency, has drawn increasing research attention from both academia and industry. Although existing research efforts have shown promising benefits of integrating AI and UAVs, limited attention has been paid to the corresponding security and safety concerns raised by this integration. Specifically, multiple studies have shown that AI models can be vulnerable to adversarial inputs, which can mislead them into producing wrong results. If such vulnerabilities also hold in the context of the AI-assisted UAV paradigm, their exploitation by malicious entities can cause severe security and safety consequences. This dissertation uncovers the underlying security vulnerabilities and their root causes in the integration of AI and UAVs, focusing on two major directions: AI-assisted UAV operations and AI-enabled UAV anomaly detection. For AI-assisted UAV operations, this dissertation analyzes the data sensing and processing pipeline of the major sensors involved and identifies vulnerabilities through a case study of AI-assisted UAV infrastructure inspection. These vulnerabilities are exploited by designing effective adversarial attacks. Based on the attacks' effectiveness, this dissertation also discusses defenses based on adversarial training to improve the robustness of AI-assisted UAV operations.
With regard to AI-enabled UAV anomaly detection, this dissertation explores the possibility of constructing adversarial inputs and confirms the existence of vulnerabilities in state-of-the-art AI-enabled UAV anomaly detection designs. On top of that, it proposes an effective construction of adversarial attacks that can significantly mislead AI-enabled UAV anomaly detection systems. Based on the understanding obtained from this study, the dissertation also discusses defenses to improve the robustness of AI-enabled UAV security solutions.