Abstract
In recent years, artificial intelligence (AI) has made significant strides in domains such as natural language processing, computer vision, and robotics. One area where AI has shown remarkable promise is code generation: models trained on millions of programs can now produce code in many languages and at considerable complexity. AI has become prevalent enough that teachers now introduce its safe use in the classroom, with a growing emphasis on how not to use it for plagiarism. But is the code that AI creates secure? Are there specific vulnerabilities that AI cannot avoid when generating code? By prompting AI in multiple ways, we answer these questions by comparing the code produced when security is the main goal. When prompts specifically target certain vulnerabilities rather than remaining generic, the generated code changes significantly and requires fewer rescans for vulnerabilities. Another important question is how well AI finds and repairs its own insecure code, and whether some AI scanners are better than others. The prompting language proves especially important in the results found: some programming languages are more susceptible to vulnerabilities at generation time, and certain AI tools give better suggestions for fixing code than others. This thesis explores the impact of AI-generated code on software development, examining its limits, vulnerabilities, and ethical implications with respect to effectiveness and security.