
Ibrahim Salami

Posted on • Originally published at aviator.co

How not to do code reviews

Traditionally, code reviews involved engineers scrutinizing a colleague’s code for errors and ensuring its readability, efficiency, and maintainability.

This approach results in bottlenecks, especially in large teams, because the right reviewers don’t always have the capacity to review changes when necessary. While solutions such as CODEOWNERS files try to fix these issues, they can make matters worse by creating knowledge silos and overloading domain experts.

All this leads to frustration and hinders progress—which, in turn, impacts release timelines and team morale.

The good news is that teams no longer have to rely on code reviews to identify bugs, as they did in the 1970s. These days, you’re much better off letting automated tests and static code analyzers find bugs. Modern code reviews can move beyond error finding and instead focus on growing a team that can maintain a healthy codebase in the long term.

The Traditional Approach: Error-Driven Code Reviews

Michael Fagan first described code reviews in his 1976 paper titled “Design and Code Inspections to Reduce Errors in Program Development.”

He focused on a formal inspection process that emphasized error-driven inspections of functional issues—issues that impact software’s runtime behavior, causing it to break or produce incorrect results. His proposed process focused exclusively on finding potential errors. It involved a group of developers manually running predefined inputs through the code and verifying that they produced the correct predefined outputs. If the code produced the incorrect output, it needed to be reworked and tested again using the same strategy.

This process was time-consuming, but such a manual, error-driven approach made sense back then. Automated testing tools were limited, so you needed humans to debug your code.

However, in practice, an error-driven approach tends to neglect evolvability issues: defects that affect the code’s maintainability, readability, and ease of future modification. These defects often lead to technical debt, which hinders agility and increases development costs in the long run. Error-driven inspections also leave little room for discussions about alternative implementations or code structure.

An error-driven approach also comes with other challenges. Deciding who should review a code change is difficult no matter how you run code reviews, but an error-driven approach exacerbates the problem because it relies so heavily on experienced engineers who are familiar with the codebase and its technologies. Without them, obscure errors can make it to production.

Even if you use a CODEOWNERS file to designate individuals or teams as code owners of portions of your codebase—an approach that has many benefits—an error-driven approach still means that code owners become bottlenecks. You only have so many engineers with enough experience to detect issues, which delays code reviews and slows down the development process.
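
For context, a CODEOWNERS file is simply a list of path patterns mapped to the people or teams who must review changes under those paths. A minimal, hypothetical example (the team names are placeholders) might look like this:

# Hypothetical CODEOWNERS entries: each path pattern maps files to a required reviewer team.
*.js       @your-org/frontend-team
/api/      @your-org/backend-team
/infra/    @your-org/platform-team

Every change touching /api/ then waits on the backend team, which is exactly how a handful of experts become the bottleneck.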

Most importantly, this approach keeps the most experienced members of a team trapped in a never-ending cycle of reviews and debugging. It creates knowledge silos among code owners and hinders knowledge sharing, which prevents the rest of the team from expanding their knowledge of the codebase.

The Modern Approach: Code Reviews as Knowledge Sharing

Over the past twenty years, modern tooling such as static analyzers, along with development practices like automated testing, has become dramatically better at finding errors, especially functional ones. These tools allow developers to focus on higher-level concerns, such as knowledge transfer, code architecture, and long-term code maintainability.

Instead of using code reviews for error finding and code fixes, teams can now focus them on the strategic aspects of code changes and on knowledge sharing.

The Role of Automated Tests

Automated unit and integration tests are far better at finding logical bugs in code than human reviewers.

When reviewers look for these logic issues, they often trace through the code line by line with different inputs to see whether any line produces the wrong output. This takes significantly longer than an automated test, which can execute the code instantly and verify that different inputs produce the correct outputs.

Automated tests can also consistently identify issues, whereas reviewers might miss them due to bias or human error.

Effective automated testing requires discipline to write proper tests, though. You need to take the time to identify different inputs and determine the correct output for each to develop comprehensive test cases. This includes identifying erroneous inputs and figuring out how the code should respond to them. Once you’ve identified different test cases, you need to write automated tests to check each case. Reviewers should also analyze automated tests and code changes to find any edge cases that might not be covered by existing tests.
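
As a minimal sketch of what this looks like in practice, here is a Jest-style test suite for a hypothetical parseAge function; the function, its validation rules, and the test names are illustrative assumptions rather than anything from a real codebase:

// Hypothetical function under test: converts user input into an integer age.
function parseAge(input) {
  const n = Number(input);
  if (!Number.isInteger(n) || n < 0 || n > 150) {
    throw new RangeError("Invalid age: " + input);
  }
  return n;
}

// Test cases cover valid inputs, boundary values, and erroneous inputs.
test("parses a valid numeric string", () => {
  expect(parseAge("42")).toBe(42);
});

test("accepts boundary values", () => {
  expect(parseAge("0")).toBe(0);
  expect(parseAge("150")).toBe(150);
});

test("rejects erroneous inputs", () => {
  expect(() => parseAge("-1")).toThrow(RangeError);
  expect(() => parseAge("abc")).toThrow(RangeError);
});

With a suite like this in place, reviewers can concentrate on spotting the edge cases it doesn’t cover instead of re-running inputs by hand.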

This means effective automated testing does add development time, since engineers need to write tests that cover each piece of new code.

However, this time is made up in the review process since reviewers can then rely on automated tests to find logical bugs rather than manually testing the code with different inputs.

The Role of Static Analyzers

While automated tests can pick up logical issues in code, they don’t identify vulnerabilities; they verify how the software behaves, not whether the code and the dependencies it’s built on are safe. Static code analyzers fill that gap.

Static code analyzers scan code and its dependencies for potential security flaws. If an analyzer finds a vulnerability, it alerts the code author to fix it by changing the affected lines of code or updating the dependencies. Without a static code analyzer, you’d need an experienced engineer to catch many of these vulnerabilities and keep them out of production.

For example, the JavaScript code sample below demonstrates an issue a static analyzer would detect that a developer might miss:

const n = NaN;

// Always false: NaN never compares equal to anything, including itself.
if (n === NaN) {
    console.log("Number is invalid");
}

JavaScript uses NaN, which stands for Not-A-Number, when you try to convert a non-numerical value to a number. Because NaN never compares equal to anything, including itself, n === NaN is always false; to check whether a variable is NaN, you should always use Number.isNaN(n). A developer could easily miss this small detail, but an analyzer would pick it up immediately.
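
A corrected version of the check looks like this:

// Number.isNaN is the reliable way to test for NaN, since NaN !== NaN.
if (Number.isNaN(n)) {
    console.log("Number is invalid");
}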

Static code analyzers can also enforce style guides. An analyzer configured with your style guide can run through changed code and flag any lines that violate it, such as incorrect naming or spacing issues. Static analyzers also often come preconfigured with coding best practices that allow them to find performance optimizations and maintainability issues, such as missing documentation and overly complicated code.
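
For example, a small ESLint configuration can encode a few of these conventions; the specific rules and thresholds below are illustrative choices, not recommendations:

{
  "rules": {
    "use-isnan": "error",
    "camelcase": "warn",
    "no-unused-vars": "error",
    "complexity": ["warn", 10]
  }
}

Run in CI, a configuration like this flags style and correctness violations automatically before a reviewer ever opens the change.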

Modern AI-powered static code analyzers can identify even more issues. Code analyzers that don’t use AI parse code and look for patterns that might cause bugs or create security vulnerabilities. While these patterns can identify some evolvability issues, such as code structure and style, they’re still limited.

However, AI analyzers can be trained on a codebase to understand the code architecture. When new changes are proposed, they can check if the changes align with the code’s architecture. They can also make sure the code fully meets the requirements and notify the author if any are not met. Because they’re better at understanding the bigger picture, AI-powered code analyzers can detect maintainability and code architecture issues that only human reviewers were able to catch before.

As with automated tests, using static analyzers doesn’t remove human reviewers from the process. Reviewers must still examine the results for false positives, but that takes a fraction of the time manual inspection did before these tools were available.

Code Review for Knowledge Sharing

With so much manual labor out of the way, code reviews now have a new purpose: building a healthier codebase in the long term and a more resilient, adaptable team.

Code reviews should now be run in a way that fosters collaboration and continuous learning. The feedback and discussion are no longer mainly about improving the current code; they’re about growing a team that can keep the codebase healthy in the long term.

Code reviews are also almost exclusively focused on evolvability defects, such as missing documentation, inefficient algorithms, and high cyclomatic complexity that makes code harder to test and maintain. The aim isn’t only to fix a given issue, though, but to help the team learn from it. Reviews might involve discussing these issues to expose less experienced engineers to higher-level programming concepts and help them see the big picture.

The aim is for all engineers, regardless of experience level, to contribute to and benefit from the review process.

One example of this modern approach to code reviews is peer programming, where multiple engineers work together on a single piece of functionality. The engineer writing the code takes the driver’s role while the others review the code as it’s written, offering suggestions and pointing out potential errors.

You can strategically pair more experienced domain experts with less experienced reviewers to accelerate learning and reduce knowledge silos. Less experienced engineers gain exposure to expert feedback and best practices, while seniors benefit from fresh perspectives and must clearly articulate their reasoning.

Peer programming isn’t always possible—a team might be too small or spread across different time zones. In these cases, you can use pull requests or even email threads and mailing lists to achieve the same aims.

The emphasis should still be on thoroughly discussing and explaining issues and concepts rather than just pointing out issues that need fixing. Other engineers not involved in the review process can also read through these discussions at a later stage to understand why certain changes were made and how they fit into the big picture.

Conclusion

Traditional code review processes focus solely on error hunting, which creates bottlenecks and hinders team growth. While some tools, such as CODEOWNERS files, try to improve the process, an error-driven approach still doesn’t accommodate knowledge sharing, and you don’t get the full benefit of code reviews.

But these days, automated tests and static analyzers can pick up defects faster and more accurately than human reviewers. This means code reviews should focus less on error finding and instead prioritize knowledge sharing to avoid knowledge silos and encourage team growth.

Aviator’s FlexReview is built to encourage knowledge sharing during code reviews. Like CODEOWNERS files, FlexReview reduces the effort of assigning reviewers to code changes, but with more flexibility: it considers reviewers’ workloads and availability, and you can configure it to pair less experienced reviewers with domain experts so that knowledge sharing becomes part of the review process. You can register for a free account to try it out.

