Code Review Flow
I have been a part of many retrospectives filled with discussions on how to improve the code review process. Many teams get laser-focused on code reviews when they are trying to improve cycle time because it is a form of waiting waste that is visible and felt by members of the team. The most common action item coming out of those retros is some agreement or SLA around when code will be reviewed, or an automation to notify people via some new channel. These rarely result in sustained improvement because they operate at the surface level, not the root cause.
Why Code Reviews Don't Flow
I start with the assumption that everyone on the team is a professional who shows up to work wanting to do a good job and be a good teammate. There are obviously exceptions, but they should be handled by other means. So why aren't these teams of engaged, capable engineers executing code reviews effectively (even by their own evaluation)?
Individual Ownership and Incentives
Many teams have a single person assigned to a ticket. At any given time, one person may have several tickets in progress. If the organization rewards people for getting their tickets across the line quickly and consistently, why would an engineer disrupt their flow of executing tickets to review a peer's? This is a case where engineers are reacting to their environment and what is rewarded, or at least their perception of it. When faced with the choice between reviewing someone else's ticket or picking up an additional ticket to move across the board and get rewarded, achievement-oriented folks will generally choose the new work. This is not a judgment on them; this is what the system encourages.
Knowledge Gaps and Silos
If the team has a wide domain, you may end up with areas of the system that only a few people know. In that case you can end up with bottlenecks, since most of the team is not comfortable reviewing. If the one knowledgeable person on the team who isn't the change author is busy in meetings or on vacation, the code review can sit. Even if a team member jumps in and does a review, the author may not feel comfortable merging until the expert reviews.
Bystander Effect
Most teams have a lead who knows every part of the system well enough and is always willing to set aside time for review. These folks are often more oriented around team success than individual success. The problem is that these engineers often have higher-leverage work available to them than reviewing everyone's code. Other engineers on the team learn a behavior: either they wait and see the lead pick up the review, or they try to grab a review and find the lead already posting comments. The junior engineer can tell themselves a very reasonable story that the lead will provide a better review anyway, so they should just grab the next ticket off the backlog rather than do a review.
So What Does Work?
There are a handful of practices that I have seen successfully drive better flow. Unfortunately, none of them are quick tool changes. They are all changes to how the team works, but they have the added benefit of building more resilient and effective teams.
Work In Progress (WIP) Limits
A WIP limit is the most effective single change to drive better flow in code reviews. Being able to bring more work into the queue acts as an escape hatch, but it doesn't help the team address the problems. A WIP limit closes that hatch and forces the team to address bottlenecks or knowledge gaps. There is a clear signal telling engineers not to bring in additional work; instead, they need to create space by helping to close out existing work in progress. I recommend setting a WIP limit less than the number of team members. That forces a level of collaboration and helps break down full, individual ownership.
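To make the limit concrete, a team could enforce it with a simple check in their tooling before anyone pulls new work. This is a minimal sketch, not a real board integration; the state names, ticket IDs, and function name here are all hypothetical:

```python
def wip_limit_exceeded(board, limit):
    """Return True when the count of in-flight items has reached the WIP limit."""
    active_states = {"in_progress", "in_review"}
    in_flight = sum(1 for item in board if item["state"] in active_states)
    return in_flight >= limit

# Example: a five-person team with a WIP limit of 4 (less than team size,
# so at least two people must collaborate to stay under it).
board = [
    {"ticket": "ENG-101", "state": "in_progress"},
    {"ticket": "ENG-102", "state": "in_review"},
    {"ticket": "ENG-103", "state": "in_review"},
    {"ticket": "ENG-104", "state": "in_progress"},
    {"ticket": "ENG-105", "state": "done"},
]

if wip_limit_exceeded(board, limit=4):
    print("WIP limit reached: help close existing work before pulling a new ticket")
```

The point is not the automation itself; the signal could just as easily live on a physical board. What matters is that the check blocks new work rather than merely reporting on it.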
Earlier Collaboration
This tactic is helpful for knowledge silos and the bystander effect. The extreme form of this is ensemble or pair programming, where multiple team members work together synchronously, building the software and their context. Code review is done in real time and continuously. Less extreme options can work as well. I have had teams that assigned core reviewers at the same time as the ticket owner. They would then discuss the problem and solution together, building context and alignment. When review time came, there was a shared fact base and understanding that led to more effective review. This has the happy side effect of driving cross-training and learning. It is effective whether the expert is the implementer or the reviewer.
Intentional Space
Another option is creating intentional space within the workday where the team has a shared expectation that the work will shift to reviews. This can lead to more synchronous code review discussions or pairing through specific details. Creating a cadence sets expectations and reinforces, from an organizational perspective, the value we place on getting changes through review.
No Silver Bullet
Software engineers are often tempted to reach for a tool to solve their problems. However, code review is just as much a social process as it is a technical one. There isn't a simple automation or trivial process change that will lead a team to better flow. Investing in flow throughout the system and working on the deeper motivational challenges will drive sustained improvement, both in code review effectiveness and in overall team performance.