Does AI assistance lead engineers to write more insecure code?

In this post we look at how developers accept AI-assisted code, and whether using it leads them to introduce more security vulnerabilities into their software.


AI coding assistants have exploded in popularity over the last two years, with many teams adopting them to help engineers write and test software faster. However, there has been controversy over whether the use of these tools leads engineers to write less secure code and introduce security vulnerabilities into software. This week we ask: do engineers write more insecure code when using an AI programming assistant?

The context

AI coding assistants surged in popularity following the launch of tools like GitHub Copilot, which was developed by GitHub in collaboration with OpenAI. Their rise is attributed to their ability to improve developer productivity by automating routine coding tasks, generating code suggestions, and identifying potential errors. Some studies show that AI coding assistants can improve individual productivity by up to 29%.

While great for productivity, AI coding assistants do pose risks for software security, such as generating code that introduces vulnerabilities if not properly reviewed. Over-reliance on these tools may lead teams to spend less effort reviewing and refining their code if they believe that AI-assisted code is “good enough”, which increases the odds of these risks materializing. This has many teams concerned, with some larger organizations banning the use of tools like Copilot until they can establish a better policy on security.
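As an illustration (not an example from the study), one vulnerability class that plausibly-looking generated code can introduce is SQL injection via string interpolation. The sketch below contrasts a vulnerable query with a parameterized one; the table and inputs are hypothetical:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so input like "x' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Safer: a parameterized query treats the input purely as data.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

# Demo with an in-memory database and made-up rows
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

malicious = "x' OR '1'='1"
print(len(find_user_unsafe(conn, malicious)))  # every row leaks: 2
print(len(find_user_safe(conn, malicious)))    # no match: 0
```

Both versions look correct at a glance for well-behaved input, which is exactly why code review (human or automated) still matters for accepted suggestions.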

The research

Researchers at Stanford studied 47 participants who were tasked with completing security-related programming tasks with and without the assistance of an AI code assistant. Participants were divided into groups to assess the impact of AI assistance on the security of the code they wrote, as well as their perception of the code's security. The researchers then analyzed the security of the produced code and surveyed participants to understand their confidence in the security of their solutions.

The results showed that:

  • Developers using AI code assistants produced less secure code compared to those not using these tools.
  • Participants overestimated the security of the code generated with the help of AI assistants, indicating a false sense of security when relying on automated coding solutions.
    • In free-response fields, multiple participants wrote that they often trust the machine to know more than they do when it comes to technologies they are less familiar with.
  • Specifying security requirements explicitly when interacting with AI assistants did not significantly improve the security outcomes of the generated code.

In reviewing which types of prompts were most likely to produce output that engineers accepted, researchers found that:

  • Engineers were more likely to trust output from prompts that contained function declarations or helper functions.
  • Long prompts were more likely to yield accepted output than short prompts.
  • Prompts that contained text generated from a prior output of an AI assistant were more likely to yield accepted output.

The application

AI assistance in code generation has major benefits for speed of development, but it is important to weigh those benefits against the effectiveness (and security) of the code generated. While it is enticing to accept AI-generated code that, at first glance, appears to provide a reasonable solution, efforts need to be made to ensure speed doesn’t come at the cost of security.

Informed by the research, here are some steps managers can take to introduce guard-rails into AI-assisted development:

  1. Integrate security practices in the development process: Researchers recommend the integration of specific security practices into the development workflow when using AI coding assistants, so that generated code adheres to security standards and guidelines. This can be done in either the build or review process.
  2. Lean on the code review process: Until it reaches code review, AI-assisted code has typically been used and reviewed by only a single developer. By fostering a culture of reviewing team code, teammates with a fresh set of eyes can get ahead of security vulnerabilities. (Note: it is much better to rely on a system than on a human to do this — so refer back to option one!)
  3. Educate developers: Given its benefits to individual productivity, AI assistance is only going to grow in popularity within teams. To stay ahead of the risks, managers can educate their team with examples of good and bad usage of AI-assisted tools.
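To make option one concrete, here is a minimal sketch of an automated build-time check, assuming a hypothetical team policy that forbids `eval` and `exec` in committed code. A real pipeline would use a dedicated static-analysis (SAST) tool rather than a hand-rolled script; this only illustrates the idea of a systematic gate:

```python
import ast

# Hypothetical policy: calls an AI assistant might generate but this
# team does not allow without explicit review.
FORBIDDEN_CALLS = {"eval", "exec"}

def flag_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line_number, call_name) for each forbidden call found."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in FORBIDDEN_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

# A made-up snippet a build job might scan before allowing a merge
snippet = "x = eval(user_input)\ny = len(user_input)\n"
print(flag_risky_calls(snippet))  # [(1, 'eval')]
```

Wiring a check like this (or an off-the-shelf scanner) into CI means every AI-assisted change passes through the same security gate, regardless of how confident the author feels about the generated code.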

To receive weekly editions of research-driven insights, subscribe to our newsletter, Research-Driven Engineering Leadership.