Code reviews, in which developers evaluate each other's code to improve its quality, are time-consuming. According to one source, 50% of companies spend two to five hours a week on them. Without enough staff, code reviews can become overwhelming and distract developers from other important work.
Harjot Gill, co-founder and CEO of CodeRabbit, believes that code reviews can be largely automated with artificial intelligence. His company uses AI models to analyze code and give developers feedback.
Prior to founding CodeRabbit, Gill was senior director of technology at datacenter software company Nutanix, joining the company when it acquired his startup, Netsil, in March 2018. CodeRabbit's other founder, Gur Singh, previously led the development team at Alegeus, a white-label healthcare payment platform.
According to Gill, CodeRabbit's platform uses “advanced AI reasoning” to automate code reviews, “understand the intent” behind the code and provide “actionable” and “human-like” feedback to developers.
“Traditional static analysis tools and linters are rule-based and often have high false positive rates, and peer review is time-consuming and subjective,” Gill told TechCrunch. “In contrast, CodeRabbit is an AI-first platform.”
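To make the false-positive claim concrete, here is a minimal sketch (ours, not CodeRabbit's) of a pattern that trips up rule-based tools: flake8/pyflakes reports F401 ("imported but unused") on the import below, because the rule only checks whether the name is referenced and cannot see the author's intent.

```python
# A sketch of the kind of false positive rule-based linters produce.
# pyflakes flags "import readline" as F401 ('readline' imported but
# unused), but the import is deliberate: merely loading the readline
# module enables line editing and input history for input() on
# Unix-like systems. A reviewer who understands the intent would
# leave it alone; a name-reference rule cannot tell it from a mistake.
import readline

user_input = input("> ")
print(f"You typed: {user_input}")
```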
These are bold, buzzword-heavy claims, and, unfortunately for CodeRabbit, anecdotal evidence suggests that AI-powered code reviews tend to perform worse than reviews with humans in the loop.
In a blog post, Graphite's Greg Foster described the company's internal experiments applying OpenAI's GPT-4 to code reviews. The model caught some useful things, like minor logic errors and spelling mistakes, but it also produced many false positives, and even attempts at fine-tuning couldn't significantly reduce them, Foster said.
These aren't new findings: A recent Stanford University study found that engineers who use code generation systems are more likely to introduce security vulnerabilities into the apps they develop. Copyright is also an ongoing concern.
Using AI for code review also has logistical drawbacks: As Foster points out, traditional code reviews let engineers learn through review sessions and conversations with peers, and offloading reviews to AI threatens that knowledge sharing.
Gill thinks differently: “CodeRabbit's AI-first approach improves code quality and significantly reduces the manual effort required for the code review process,” he says.
Some people are buying the sales pitch: Gill says that about 600 organizations are currently paying for CodeRabbit's services, and the company is in pilots with “several” Fortune 500 companies.
Investors are buying in, too: CodeRabbit today announced a $16 million Series A funding round led by CRV, with participation from Flex Capital and Engineering Capital, bringing the company's total raised to just under $20 million. The new funds will be used to grow CodeRabbit's 10-person sales and marketing team and expand its product offerings, with a focus on strengthening its security vulnerability analysis capabilities.
“We will invest in deeper integrations with platforms like Jira and Slack, AI-driven analytics and reporting tools,” Gill said, adding that Bay Area-based CodeRabbit is in the process of setting up a new office in Bangalore to roughly double its team size. “The platform will also introduce advanced AI automation for dependency management, code refactoring, unit test generation and documentation generation.”