
Cybersecurity Risks of AI-Generated Code

1 Nov 2024

This report examines the cybersecurity risks associated with artificial intelligence (AI) code generation models, which have become increasingly adept at producing computer code. It identifies three primary categories of risk: models generating insecure code, the models themselves being vulnerable to attack, and the downstream cybersecurity implications of widespread AI code generation. An evaluation of code produced by five large language models (LLMs) found that nearly 50% of the generated snippets contained exploitable bugs. The report emphasizes the need for comprehensive security measures across stakeholders, including AI developers and the organizations that deploy these tools, to mitigate these risks. It also highlights the challenges of assessing the security of AI-generated code, advocating that existing cybersecurity frameworks be expanded to cover AI systems and to promote secure software development practices.
Tags: cybersecurity risk, software development, large language models (LLMs), AI-generated code
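The report's central finding is that a large share of LLM-generated snippets contain exploitable bugs. As a generic illustration (not an example drawn from the report itself), the sketch below shows one of the most common such flaws, SQL injection via string interpolation, alongside the parameterized-query fix; the table and function names are hypothetical.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Flawed pattern often seen in generated code: attacker-controlled
    # input is interpolated directly into the SQL string.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# A classic injection payload: turns the WHERE clause into a tautology.
payload = "x' OR '1'='1"
leaked = find_user_unsafe(conn, payload)   # returns every row in the table
safe = find_user_safe(conn, payload)       # returns no rows
```

Static analyzers flag the first pattern readily, which is one reason the report argues for routine security review of generated code rather than accepting it as-is.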

Authors: Jessica Ji, Jenny Jun, Maggie Wu, Rebecca Gelles
Pages: 41
Published in: United States of America
