Code reviews — peer reviews of code that help devs improve code quality — are time-consuming. According to one source, 50% of companies spend two to five hours a week on them. Without enough people, code reviews can be overwhelming and take devs away from other important work.
Harjot Gill thinks that code reviews can be largely automated using artificial intelligence. He’s the co-founder and CEO of CodeRabbit, which analyzes code using AI models to provide feedback.
Prior to starting CodeRabbit, Gill was the senior director of technology at datacenter software company Nutanix. He joined the company when Nutanix acquired his startup, Netsil, in March 2018. CodeRabbit’s other founder, Gur Singh, previously led dev teams at white-label healthcare payments platform Alegeus.
According to Gill, CodeRabbit’s platform automates code reviews using “advanced AI reasoning” to “understand the intent” behind code and deliver “actionable,” “human-like” feedback to devs.
“Traditional static analysis tools and linters are rule-based and often generate high false-positive rates, while peer reviews are time-consuming and subjective,” Gill told TechCrunch. “CodeRabbit, by contrast, is an AI-first platform.”
These are bold claims with a lot of buzzwords. Unfortunately for CodeRabbit, anecdotal evidence suggests that AI-powered code reviews tend to be inferior to human-in-the-loop ones.
In a blog post, Graphite’s Greg Foster describes internal experiments applying OpenAI’s GPT-4 to code reviews. While the model would catch some useful things — like minor logical errors and spelling mistakes — it generated lots of false positives. Even attempts at fine-tuning didn’t dramatically reduce these, according to Foster.
These aren’t revelations. A recent Stanford study found that engineers who use code-generating systems are more likely to introduce security vulnerabilities in the apps they develop. Copyright is an ongoing concern, as well.
There are also logistical drawbacks to using AI for code reviews. As Foster notes, traditional code reviews force engineers to learn through sessions and conversations with their developer peers. Offloading reviews to AI threatens this knowledge sharing.
Gill feels differently. “CodeRabbit’s AI-first approach improves code quality and significantly reduces the manual effort required in the code review process,” he said.
Some folks are buying the sales pitch. Around 600 organizations are paying for CodeRabbit’s services today, Gill claims, and CodeRabbit is in pilots with “several” Fortune 500 companies.
It also has investors: CodeRabbit today announced a $16 million Series A funding round led by CRV, with participation from Flex Capital and Engineering Capital. The round brings the company’s total raised to just under $20 million. The new cash will go toward expanding CodeRabbit’s 10-person sales and marketing functions and its product offerings, with a focus on enhancing its security vulnerability analysis capabilities.
“We’ll invest in deeper integrations with platforms like Jira and Slack, as well as AI-driven analytics and reporting tools,” Gill said, adding that Bay Area-based CodeRabbit is in the process of setting up a new office in Bangalore as it roughly doubles the size of the team. “The platform will also introduce advanced AI automation for dependency management, code refactoring, unit test generation and documentation generation.”