Developer security firm warns that Copilot and other AI-powered coding assistants may replicate security vulnerabilities already present in the user’s codebase.

GitHub’s AI-powered coding assistant, GitHub Copilot, may suggest insecure code when the user’s existing codebase contains security issues, according to developer security company Snyk. GitHub Copilot can replicate existing security issues in code, Snyk said in a blog post published February 22. “This means that existing security debt in a project can make insecure developers using Copilot even less secure,” the company said. Conversely, GitHub Copilot is less likely to suggest insecure code in projects without security issues, because it has less insecure context to draw from.

Generative AI coding assistants such as GitHub Copilot, Amazon CodeWhisperer, and ChatGPT offer a significant leap forward in productivity and code efficiency, Snyk said. But these tools do not understand code semantics and thus cannot judge the code they generate. GitHub Copilot produces code snippets based on patterns and structures it has learned from a vast repository of existing code. While this approach has advantages, it also has a glaring drawback in the context of security, Snyk said: Copilot’s suggestions may inadvertently replicate existing security vulnerabilities and bad practices present in neighboring files.

To mitigate duplication of existing security issues in code generated by AI assistants, Snyk advises the following steps:

- Developers should conduct manual reviews of code.
- Security teams should put a static application security testing (SAST) guardrail in place, including policies.
- Developers should adhere to secure coding guidelines.
- Security teams should provide training and awareness to development teams, and prioritize and triage the backlog of issues per team.
- Executive teams should mandate security guardrails.
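The kind of replication Snyk describes is easiest to see with SQL injection, one of the issue classes it cites. A minimal sketch (not from Snyk’s post; the function names and schema are hypothetical) shows the insecure string-built query an assistant might mirror from a neighboring file, alongside the parameterized form that secure coding guidelines call for:

```python
import sqlite3

def find_user_insecure(conn, username):
    # Vulnerable pattern: user input is interpolated into the SQL string,
    # so a payload like "x' OR '1'='1" rewrites the query (SQL injection).
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn, username):
    # Safe pattern: a parameterized query keeps user input as data,
    # never as SQL syntax.
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

If the insecure variant already exists in the project, a pattern-matching assistant has insecure context to draw from; a SAST guardrail in the pipeline is meant to catch exactly this class of suggestion before it merges.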
Snyk data shows that the average commercial software project contains 40 vulnerabilities in first-party code, and almost a third of those are high-severity issues. “This is the playground in which AI generation tools can duplicate code by using these vulnerabilities as their context,” Snyk said. The most common issues Snyk sees in commercial projects are cross-site scripting, path traversal, SQL injection, and hard-coded secrets and credentials.

GitHub could not be reached late Wednesday afternoon to respond to Snyk’s comments about GitHub Copilot.
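Path traversal, another class on Snyk’s list, follows the same shape: an existing unguarded file lookup gives an assistant an insecure template to copy. A minimal sketch (the directory and function names are hypothetical, not from Snyk’s post):

```python
import os

BASE_DIR = "/srv/app/uploads"

def upload_path_insecure(filename):
    # Vulnerable pattern: a filename such as "../../etc/passwd" joins into
    # a path that escapes the intended uploads directory.
    return os.path.join(BASE_DIR, filename)

def upload_path_secure(filename):
    # Safe pattern: resolve the full path, then verify it still lies
    # inside BASE_DIR before using it.
    path = os.path.realpath(os.path.join(BASE_DIR, filename))
    if not path.startswith(BASE_DIR + os.sep):
        raise ValueError("path traversal attempt blocked")
    return path
```

The check-after-resolve step is the part a manual review or SAST rule would look for; its absence in neighboring files is precisely the "security debt" that can leak into generated suggestions.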