
How to Do a Great Code Review

By Cameron Pavey · 27 Oct 2023 · 11 min read


As codebases grow and developers join and leave the team, the ability for any single developer to have a total understanding of the codebase diminishes rapidly. With fragmented understanding spread between multiple developers working on different parts of the same codebase, it becomes easy for code quality to decline and issues to arise. Code reviews help mitigate this threat by getting more sets of eyes across changes waiting to be merged.

However, what constitutes a code review—let alone a _great one_—varies significantly between teams. How do you know if your code review process covers all the most important bases without leaving some benefits on the table?

This guide provides actionable practices for conducting code reviews that help optimize your cycle time and give developers on your team better opportunities for learning and growth.

Why Do Code Reviews Matter?

All code reviews aim to prevent undesirable changes from being merged. Most code review processes look something like this:

  1. The submitter branches from the main line and writes some code.
  2. The submitter creates a pull request (PR) with their new changes.
  3. Reviewer(s) read through the PR, leaving comments and requesting changes as necessary.
  4. The submitter responds to feedback by making the requested changes or discussing the proposed change if they disagree.
  5. Once the reviewer(s) are satisfied, they approve the PR.
  6. When any other thresholds, like continuous integration (CI), are satisfied, the PR is merged.

What distinguishes run-of-the-mill code reviews from great ones is how steps two and three are implemented. Typically, when a code review is done well, you'll enjoy several benefits:

  • Catch and address tech debt: If all code entering the codebase goes through review, so does all tech debt. Code reviews are a great opportunity to be proactive about tech debt—both accidental and deliberate. Reviewers can flag anything they think needs attention, whether it's addressed in the PR itself or captured as a backlog ticket to be handled soon.
  • Help developers learn and grow: Code reviews provide excellent learning opportunities for all parties involved. Submitters might discover new ways of doing things from the feedback they receive, and reviewers might learn new things from the code they review.
  • Shape your dev team's culture: When done well, code reviews are a frequent activity for every dev on your team; in larger or more active teams, you might submit or review several PRs per day. That makes them a crucial touchpoint for setting and shaping team culture. If your team culture has issues, they will likely surface in your code reviews, and improving your code review practices will, in turn, positively influence your team's culture.
  • Catch defects: _Catching defects isn't the primary purpose_ of a code review—that is the domain of quality assurance. However, when reviewers look at a PR, they will often spot issues. Those issues might have been caught during testing anyway, but an additional layer of protection is always welcome.

Practices for Doing a Great Code Review

With these benefits in mind, let's look at how to conduct code reviews that actually deliver them.

Keep PRs Small

The larger a PR is, the more cognitive load it places on the reviewer. Despite best intentions, this inevitably leads to lower-quality code reviews. Assume that the reviewer has a fixed amount of focus to spend on a code review, divided among all the lines in the PR. In a smaller PR, each chunk of code receives more of that focus, leading to a better review. Conversely, in a large PR the reviewer cannot keep the whole context in their head, and the level of focus they give to the review declines.

Ideally, you should aim for PRs of around 250 lines of code (LOC) and no more than 400. If you're not practiced at creating small PRs, you might wonder how it's possible to do anything meaningful in 250–400 LOC, especially when you're working on large, complex features. In these cases, it's best to break the work down into small, discrete chunks and review each one as its own PR. Various version control techniques, such as stacking, help devs split their work up this way.

If working this way is new for you, it can be beneficial to track your PR cycle time: how long PRs take to go from newly opened, through review and approval, to merged. Keeping track of this will help you determine whether efforts to decrease your PR sizes are actually speeding things up, and it will give you the data you need to tweak and experiment with your PR process.
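
If your code is hosted on GitHub, you can pull this data yourself with a short script. The following is a minimal sketch, assuming the third-party `requests` library and a `GITHUB_TOKEN` environment variable with read access; the owner and repository names are placeholders. It lists recently merged PRs along with how many hours each spent open, which is a rough proxy for PR cycle time.

```python
import os
from datetime import datetime

import requests  # third-party: pip install requests

OWNER = "your-org"    # placeholder: your GitHub organization or username
REPO = "your-repo"    # placeholder: the repository to inspect
TOKEN = os.environ["GITHUB_TOKEN"]  # a token with read access to the repo


def merged_pr_cycle_times(limit: int = 50) -> list[tuple[int, float]]:
    """Return (PR number, hours from opened to merged) for recently merged PRs."""
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls"
    params = {"state": "closed", "sort": "updated", "direction": "desc", "per_page": limit}
    headers = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"}
    response = requests.get(url, params=params, headers=headers, timeout=30)
    response.raise_for_status()

    cycle_times = []
    for pr in response.json():
        if not pr.get("merged_at"):
            continue  # closed without being merged
        opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
        merged = datetime.fromisoformat(pr["merged_at"].replace("Z", "+00:00"))
        cycle_times.append((pr["number"], (merged - opened).total_seconds() / 3600))
    return cycle_times


if __name__ == "__main__":
    for number, hours in merged_pr_cycle_times():
        print(f"PR #{number}: open for {hours:.1f} hours before merging")
```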

Automate the Boring Stuff

If reviewers have a limited amount of focus to spend on a given PR, it doesn't make sense to waste it on things that can easily be automated, such as enforcing the style guide, linting, and static analysis.

Many tools can add automated checks to your CI workflows to catch predictable issues and flag them automatically. This frees up reviewers' capacity to focus on important things that automated systems have a harder time understanding, like business logic, potential defects, and architectural decisions.
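
For example, many teams add a small gate script to CI that runs their formatters and analyzers and fails the build if any of them complain. The sketch below assumes a Python project and uses ruff, black, and mypy purely as examples; substitute whatever tools your stack already uses.

```python
import subprocess
import sys

# Each entry is a command that exits non-zero when it finds a problem.
CHECKS = [
    ["ruff", "check", "."],      # linting
    ["black", "--check", "."],   # formatting (check-only, no rewriting)
    ["mypy", "src"],             # static type analysis (assumes a src/ layout)
]


def main() -> int:
    exit_code = 0
    for command in CHECKS:
        print(f"$ {' '.join(command)}")
        if subprocess.run(command).returncode != 0:
            exit_code = 1  # keep going so every failure is reported in one run
    return exit_code


if __name__ == "__main__":
    sys.exit(main())
```

Once a check like this runs on every PR, style and typing nits never need to reach a human reviewer.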

Provide Sufficient Context

If reviewers don't have sufficient context for the work in a PR, they can't provide keen observations. Many of the improvements that could be made to a PR are not purely technical; they are functional changes to the application logic that require context and an understanding of the module being changed.

Keeping PRs small helps, as the smaller scope of change means the reviewer doesn't need to understand as much of the underlying domain context.

It also helps to put more effort into the title and description of the PR. Instead of using the defaults, which are usually based on your commit messages and branch name, spend some time helping the reviewer understand the context of the work in question before they dive into the code.

  • The title should be succinct yet descriptive of the (ideally small amount of) work included in the PR. If the reviewer is familiar with the codebase, they should know the _vague scope_ of the PR based only on the title.

  • The description should outline any details or information the reviewer needs to understand the piece of work but can't easily infer from the changed code. You might include links to the Jira ticket or relevant design documents, screenshots, or even a video recording demonstrating the end result of the work.

Giving the reviewer this context greatly increases the effectiveness of the code review.

Providing high-quality context for PRs can be taxing for PR submitters if the process is entirely unconstrained. To mitigate this, consider using PR templates that outline a solid starting point when new PRs are created. If you're not sure where to start, you can find excellent example templates online with varying levels of detail depending on your needs.

If you decide to implement a template, including a "preflight checklist" in the template is a great way to ensure that submitters have been mindful of their PR before submitting it. When used well, checklists can profoundly impact your team's consistency and performance.
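
You can even enforce parts of the template automatically. Below is a hypothetical check for a GitHub Actions pull_request workflow: it reads the event payload that the runner exposes via `GITHUB_EVENT_PATH` and fails if the PR description is missing the sections your template expects. The section headings are only examples.

```python
import json
import os
import sys

# Example headings only; match these to whatever your PR template requires.
REQUIRED_SECTIONS = ["## Summary", "## How to test", "## Checklist"]


def main() -> int:
    # GitHub Actions writes the triggering event's payload to this path.
    with open(os.environ["GITHUB_EVENT_PATH"], encoding="utf-8") as event_file:
        event = json.load(event_file)

    body = (event.get("pull_request") or {}).get("body") or ""
    missing = [section for section in REQUIRED_SECTIONS if section not in body]

    if missing:
        print(f"PR description is missing required sections: {', '.join(missing)}")
        return 1

    print("PR description includes all required sections.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```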

Keep PRs Focused

Just as well-designed functions should focus on doing one thing well, a well-formulated PR should focus on one discrete change. Batching unrelated code changes together in a single PR creates several issues:

  • It increases the scope of the PR, which requires the reviewer to have more context.
  • If one of the changes has a problem, it blocks the others from being released.
  • It increases the size of the PR for the sake of having fewer PRs, which is typically a false economy.

Instead, don't be afraid to split changes into as many PRs as you need to keep things logically focused. (If you're sticking to small PRs, you likely have this point covered already.)

Be Respectful

Some reviewers tend to take an adversarial stance when reviewing code, perhaps with the intent of finding all potential problems. However, even if this approach is well-intentioned, it is inherently flawed because it means you're losing out on the value that can come from collaboration.

Code reviews are one of the best opportunities for developer growth, especially when they're small and frequent, as they shorten the feedback cycle for developers and allow them to get near-immediate feedback on their coding and software design skills. Yet, when code reviews are done with an adversarial approach, their value is diminished as the focus shifts from helping developers grow to picking apart their code and stopping them from merging problematic code.

While an adversarial approach can help prevent issues from entering the codebase, it will wear away at developers, and code reviews will be seen as a painful necessity. 

Conversely, a more respectful approach uplifts developers, helps them grow, and brings the team closer together. When you do a code review, remember that your comments aren't just going into the void; they're received by a human—your coworker—on the other end. Be mindful of that person and treat them with the same respect you would want to be treated with.

Training your team to be respectful during code reviews is one of the most impactful changes you can make to your code review process. Instead of being a painful experience, code reviews can become an enjoyable activity that your team embraces and that produces the value you want from it.

Give Specific Feedback

Code reviews are a learning opportunity for all. Specific, rationalized feedback supports this learning.

Some reviewers' default might be to flag something as incorrect and expect the submitter to fix it themselves:

> This is wrong.

However, even if something appears blatantly out of place at first glance, consider that you may be missing some context. And even if you aren't, it still helps the submitter if you formulate your feedback constructively. Provide enough information to minimize the back-and-forth needed for clarification:

> Is this correct? I was under the impression that to call FooAPI, we need to provide all the headers outlined in the documentation here <link>. Is this not the case?

Clarify _why_ you believe something needs to change and supply enough information for the submitter to perform their investigation without needing to come back to you for clarification.

Give Praise, Not Only Correction

While code reviews are a great opportunity to catch issues before they ship, they're also a fantastic time to compliment teammates for good work.

If you learn something new when you review someone's code or see something that impresses you, consider leaving a comment to praise it. Small moments like this build over time to define your development team's culture.

If people go into code reviews expecting unfathomable PRs and adversarial comments, it becomes an unpleasant part of the daily feedback loop—which is not an ideal way to feel about one of the most important processes in shipping code.

Give Timely Feedback

If you're doing everything else on this list, this one should follow naturally.

By their nature, code reviews need to be completed before code can be merged and features can be released. If a backlog of PRs is waiting to be reviewed, your lead time for changes is quietly growing.

If your goal is to ship value often, handling open PRs should be a priority. Until you review them, you won't know whether they require changes or are ready to merge. Everyone on the team should see it as a priority that PRs do not sit around for longer than necessary.

How DevStats Can Help

It can be hard to improve your code reviews without an effective way to monitor them.

For example, how can you tell if your code reviews are being handled promptly? You could measure it manually, but you likely have more important things to spend your limited time on.

This is where DevStats can help. DevStats offers several helpful views that provide insight into planning accuracy, DORA metrics, and PR cycle time. Two reports that can help with measuring your code reviews are the Open PRs report and the Code Review report.

The Open PRs report shows which PRs are open for your projects, who is reviewing them, and how long they've been open. You can also see whether review times are being kept under twenty-four hours. It's a quick and easy way to see if reviews are being handled promptly.

[Image: Open PRs report]

The Code Review report shows more in-depth metrics about your code reviews, such as the following:

  • Reviews completed: the total count of reviews completed in a given time frame
  • Comments per review: the average number of inline comments per review, including discussions
  • PR size: the average size of PRs in LOC
  • Review depth: the average number of comments per PR
  • PRs merged without review: the number of PRs where no formal review took place
  • Total issues caught: the count of _change requests_ sent and accepted during the review

These two reports provide deep insight into how your team's code review process is faring and which areas should be improved.

Wrapping Up

At first, code reviews might seem secondary to the code and other software development practices. In reality, they're an important tool that high-performing teams know how to use correctly to improve their performance and team culture.

Code reviews are a valuable learning and growth opportunity for developers. When done well, they allow you to catch tech debt, share good coding practices, and increase the pool of domain knowledge shared amongst the developers on the team. Code reviews are also a fantastic opportunity to set the developer culture on your team and shift toward an approach where continuous learning and collaboration are emphasized.

This guide has offered several practices that you and your team can adopt so that your code reviews benefit your codebase and your team.