It's a bad day for bugs. Earlier today, Sentry announced AI Autofix capabilities for debugging production code. Now, a few hours later, GitHub is releasing the first beta version of its code scanning autofix feature, which detects and fixes security vulnerabilities during the coding process. The new feature combines the real-time capabilities of GitHub's Copilot with CodeQL, the company's semantic code analysis engine. The company first previewed the feature last November.
GitHub promises that the new system will be able to remediate more than two-thirds of the vulnerabilities it discovers, in many cases without developers having to edit the code themselves. The company also promises that code scanning autofix will cover more than 90% of alert types in the languages it currently supports: JavaScript, TypeScript, Java, and Python.
This new feature is now available to all GitHub Advanced Security (GHAS) customers.
“Just as GitHub Copilot frees developers from tedious and repetitive tasks, code scanning autofix helps development teams reclaim time spent on remediation,” GitHub said in today's announcement. “Security teams will also benefit from a reduced volume of everyday vulnerabilities, allowing them to focus on strategies that protect the business while keeping up with the accelerated pace of development.”
Under the hood, the new feature uses CodeQL, GitHub's semantic analysis engine, to find vulnerabilities in code before it is ever executed. The company made the first generation of CodeQL publicly available in late 2019 after acquiring Semmle, the code analysis startup where CodeQL was developed. CodeQL has seen many improvements over the years, but one thing has never changed: it is only free for researchers and open source developers.
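For illustration only (this code is not from GitHub's announcement), here is the kind of vulnerability a semantic analysis engine like CodeQL is designed to flag before the code runs: user input flowing, untouched, into a SQL query string in a hypothetical Express handler.

```typescript
// Hypothetical Express + node-postgres handler, for illustration only.
import express from "express";
import { Pool } from "pg";

const app = express();
const pool = new Pool();

app.get("/orders", async (req, res) => {
  const customerId = req.query.customerId as string;
  // Vulnerable: untrusted input is concatenated straight into the query text,
  // the classic SQL-injection pattern that taint-tracking analysis flags.
  const result = await pool.query(
    `SELECT * FROM orders WHERE customer_id = '${customerId}'`
  );
  res.json(result.rows);
});
```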
CodeQL is at the heart of the new tool, but GitHub also says it uses “a combination of heuristics and the GitHub Copilot API” to suggest fixes. To generate the fixes and their descriptions, GitHub uses OpenAI's GPT-4 model. And while GitHub is clearly confident enough in the vast majority of autofix suggestions to put them in front of developers, the company notes that “a small percentage of suggested fixes will reflect a significant misunderstanding of the codebase or the vulnerability.”
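The announcement does not show concrete fix output, but a suggested remediation for the hypothetical snippet above would plausibly look like the small, targeted change below: replacing string concatenation with a parameterized query. Again, this is a sketch of the kind of fix such a tool might propose, not an actual autofix suggestion.

```typescript
// Plausible remediation of the handler above (reuses the same app and pool).
app.get("/orders", async (req, res) => {
  const customerId = req.query.customerId as string;
  // Fixed: the untrusted value is passed as a bound parameter ($1), so the
  // driver handles quoting and the input can no longer alter the query.
  const result = await pool.query(
    "SELECT * FROM orders WHERE customer_id = $1",
    [customerId]
  );
  res.json(result.rows);
});
```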