During a peer code review, one or more team members examine a teammate’s changes to the source code before those changes are merged into the codebase. What can go wrong? Yet many programmers consider the task a burden, and team leads find it a nightmare to manage.
Whether your team is conducting its first or 1,000th code review, it can still learn new tricks. In this article, we share Onix’s code review practices that help us release software daily and sleep peacefully at night.
Code reviews are integral to software development. Still, they are often either overlooked or become so time-consuming and tedious that they hurt the process and the team’s morale.
Many resources write about code review best practices, but few mention things to avoid.
In this article, we will:
Software developers should conduct a code review for every pull request (also called merge request, changelist, change, or patch) before merging. The procedure comes with several benefits.
Code reviews alone can’t guarantee that your code is easy to read and maintain. However, if a reviewer can look at a piece of code and understand what it does, the code can at least be considered readable.
A fresh pair of eyes can discover mistakes in the code the author might never notice. A group of experts may catch a bug or error that automated code review tools overlooked.
Each identified and corrected error helps a programmer improve their coding skills and the quality of their output over time. Moreover, even a 1-in-3 chance of one’s code being called out for review is a sufficient incentive to write cleaner code and double-check one’s work. Knowledge sharing during code reviews also contributes to gradually improving code quality.
One of the goals of code reviews is to assess edge cases and make sure that nothing has been missed. Increasing security is a particular benefit resulting from such checks.
Code reviewers can identify inherent vulnerabilities in the code that compromise the application or surrounding code blocks and errors in the logic affecting the application’s security. Fixing these issues early on helps prevent data and intellectual property loss.
Read also: SaaS application security best practices
We at Onix regard code reviews as time for spreading best practices and sharing knowledge about the system among team members. It helps ensure that we all write in the same style and according to common standards.
Virtually everyone involved in peer code reviews can learn or teach others something new about a language, a framework, or general software design principles.
New developers can regard them as learning and mentorship opportunities. Junior programmers can learn from senior team members. Colleagues getting familiar with code written by teammates can learn from their mistakes and discover new tricks. The procedure helps even the most experienced programmers correct bad habits.
This way, code reviews help keep the codebase consistent, maintainable, and stable. Without them, slight code degradations, complexities, or tech debts would add up and eventually result in a product that is challenging to maintain and expand.
Still, teams often meet the topic with a collective groan. If it sounds familiar, your company probably needs to optimize code review methods and the whole software development process.
A company without established code review procedures most likely won’t know how effective its code reviews are or whether the developers even run them. Not to mention that unreviewed changes jeopardize the codebase and the team’s productivity.
Adopting and maintaining a formal set of procedures is the first best practice for code reviews. Before moving on to recommendations, let’s review the aspects those procedures should cover.
The code reviewer or reviewers should look at every line of code they have been assigned to review with an eye on the following:
Code reviewers should determine whether it is clear what the code does and whether the code has meaningful naming conventions and code comments. If the code is too hard to read, the reviewer should ask the author to make it simpler before submitting it for review again.
The programmer should have picked appropriate names for everything. A good name is long enough to explain clearly what the item is or does but not so long as to become hard to read.
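As a hypothetical illustration (all names invented for this example), here is the difference good naming makes to a reader:

```python
# Hard to review: the names reveal nothing about intent.
def proc(d, n):
    return [x for x in d if x > n]

# Clear: each name says what the item is or does,
# without becoming so long that it is hard to read.
def filter_scores_above_threshold(scores, threshold):
    return [score for score in scores if score > threshold]
```

Both functions behave identically; only the second one lets a reviewer confirm the intent at a glance.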
Besides checking a pull request against the requirements outlined in the appropriate style guide, code reviewers should also check whether it includes significant style changes combined with other changes. For example, if the author wants to reformat the entire file, they should send just the reformatting as one pull request and then another with their functional changes.
Ideally, most functionality-related questions should have been eliminated by unit tests. Looking through them, Onix’s programmers can usually tell whether the logic is implemented correctly.
If the reviewer has difficulty understanding a change’s impact, but it’s inconvenient to patch in the pull request and try it, they should request a demo of the functionality.
Code reviewers assess the complexity of the classes, functions, individual lines, and the entire pull request. Excessive complexity usually means that code readers can’t understand something quickly or that developers will likely introduce defects when they try to call or modify this code.
Reviewers should also check whether the developer has made the code more generic than it needs to be or added functionality that the system doesn’t need presently.
If the code reviewer has difficulty understanding the code, they may ask for a code walkthrough or explanation.
Unless it’s an emergency, developers should include unit, integration, or end-to-end tests with the same pull request as the production code. Code reviewers should check whether they are correct, sensible, and useful. The code of the tests also has to be maintained and shouldn’t be complex.
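As a hedged sketch (the function and its logic are invented for illustration), a production change and the tests shipped with it in the same pull request might look like:

```python
import unittest

# Production change submitted in the pull request (hypothetical example).
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Tests shipped in the same pull request: short, readable, and covering
# both the happy path and an edge case, so the reviewer can judge
# correctness and sensibleness together with the code.
class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```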
Code reviewers should:
- assess the code for security robustness
- look for possible issues that could be exploited, such as controller actions without protection against CSRF or SQL queries that can become vulnerable to injections
- check the code for how well it guards against external threats
- look for opportunities to improve security
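As an illustration using Python’s built-in sqlite3 module (the table and function names are invented for this example), here is the query pattern a reviewer should flag versus the parameterized form they should expect:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name):
    # Red flag for a reviewer: user input concatenated into SQL,
    # leaving the query open to injection.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver escapes the value for us.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()
```

A crafted input like `' OR '1'='1` makes the unsafe version return every row, while the parameterized version treats it as a literal string and returns nothing.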
Reviewers can assess this by determining whether the author followed coding best practices like loose coupling and high cohesion.
Code reviewers should check whether each comment is necessary and valuable and written in understandable English. Comments should primarily include information that the code itself can’t possibly contain. However, it may be helpful to explain what regular expressions and complex algorithms do.
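For instance, here is a hypothetical sketch (the pattern and names are invented for this example) of a comment carrying information the code itself can’t express:

```python
import re

# Comments should add what the code can't say itself. This pattern
# matches ISO 8601 dates (YYYY-MM-DD); without the comment, every
# reader would have to reverse-engineer the regex's intent.
ISO_DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def is_iso_date(value):
    return bool(ISO_DATE.match(value))
```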
Comments that were there before this pull request can also be reviewed: some might advise against this particular change, while old TODO comments should be deleted.
The documentation of classes, modules, or functions should explain the purpose of a piece of code, its use, and its behavior when used. If documentation is missing, code reviewers should ask for it.
If a pull request changes how users build, test, interact with, or release code, code reviewers should see that it updates the associated documentation: READMEs, g3doc pages, and any generated reference documents.
Ideally, the documentation should live as close as possible to the code it explains: on the same line, next to the code, in the function’s or method’s description, in the same repository, or in the issue referenced by the commit. Otherwise, it will most likely drift away from changes in the code, and searching for this information will be inconvenient and of little avail.
If the pull request deletes or deprecates code, code reviewers should also consider whether the related documentation should be deleted.
Other key things to assess may include the code’s structure, logic, reusability, etc.
An ideal code review is comprehensive yet short. To this end, our first code review tip will be…
The key to a short but efficient code review is thoughtful planning. If your code reviews are burdensome, it may be a sign that you need to revise your team’s workflow. For instance, concurrency models where race conditions or deadlocks are possible complicate code reviews.
The following recommendations may help you facilitate code reviews:
The management should help the developers understand the benefits of code reviews. Many programmers dread them because of possible critique from peers and managers and measuring defect density in their code. These factors can also put a strain on relationships within the team.
The number of defects found during peer code reviews should not be a criterion for employee evaluation, and code review metrics should never feed into performance evaluations. If these metrics become a basis for compensation or promotion, programmers may become hostile toward the process and focus on personal metrics rather than writing better code.
It’s essential to facilitate collaboration and knowledge sharing through peer reviews and to foster a positive attitude toward defect detection. Every bug caught before it went into production should be celebrated instead of blaming the author for the mistake.
The company should define requirements for code that is ready for review. For example, the author should submit every new chunk of code only after they have scanned it for common mistakes and verified that it meets the requirements set forth by relevant style guides.
Checklists have proven helpful for this purpose. Below, you can see a standard author’s checklist we use at Onix.
Proper comments and automated testing are also essential to the definition of “ready for review.”
Developers should annotate their source code before submitting it for review. Annotations should guide the reviewer through the changes, showing which files to look at first, explaining the intent of a code block or why the author made the decision or change, and providing deeper context.
The author needn’t comment on code style, formatting, or anything else that should be checked in the Continuous Integration (CI) framework using static code analyzers.
A pull request should be “ready for review” when it doesn’t require additional explanations.
Test-driven development requires writing unit tests before implementing the code. This approach serves as a preliminary check for specific code blocks. CI also implies an automatic review with a series of previously written tests before doing a manual review.
One way is to write unit tests for each code block. Another approach uses test suites that accelerate the validation of code sections performing one or more tasks the test suite prescribes. A failure can point the developer to a specific area they need to modify.
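Under the test-first approach described above, a minimal hypothetical sketch (the function name and behavior are invented for this example) would look like:

```python
import unittest

# Step 1 in test-driven development: the test is written first and
# initially fails, because slugify() doesn't exist yet.
class SlugifyTest(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2: the simplest implementation that makes the test pass.
# A failing assertion would point the developer straight at the
# area that needs modification.
def slugify(title):
    return title.strip().lower().replace(" ", "-")
```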
Authors should run automated code checkers. By checking code against coding rules, these tools not only catch overt formatting mistakes, such as errors in function names or spacing, but can also catch vulnerabilities before the code reaches review. This drastically reduces the number of errors that reach the peer review.
For example, GitHub indicates that the code is ready for manual review only if it passes tests successfully.
No reason can justify skipping the tests. If the team can’t deliver a feature after testing due to time constraints, it’s preferable to descope the deliverables than to skip tests.
Scanners or linters perform a “spellcheck” for the code to find errors quickly. Static code analyzers, also known as static application security testing (SAST) tools, analyze the software under development without running it. Some popular SAST tools are Embold, SonarQube, and Veracode Static Analysis.
For example, Onix uses SAST tools and linters to find most of the problems with the code style as part of the CI process. SonarQube automatically provides the scanning result to the code reviewer. It also covers the code complexity aspect.
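To illustrate the kind of defect such tools catch before a human reviewer ever sees the code, here is a classic Python pitfall that most linters flag (pylint, for example, reports it as W0102, dangerous-default-value):

```python
# A linter flags the mutable default argument: the same list object
# is shared across calls, which is a subtle, easy-to-miss bug.
def append_bad(item, items=[]):
    items.append(item)
    return items

# The conventional fix that passes the check: create a fresh list
# on each call when none is supplied.
def append_good(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
```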
A good code review strategy requires balancing a safe, collaborative environment and strictly documented processes. Highly regimented peer reviews can stifle productivity, whereas a laid-back approach is ineffective. Managers should find a middle ground.
Let’s proceed to the recommendations for managers and code reviewers regarding the code review process.
The developer should create pull requests with specific instructions for code reviewers, but reviewers should think bigger than that. To this end, the company can share general goals of code reviews so that programmers can brush them up at any time.
For example, Onix’s programmers always keep in mind that they should strive to:
Conducting reviews of product requirements, user stories, and design documents also helps ensure that all team members understand the goals.
Still, the goal of a particular review may depend on circumstances. For example, code reviewers may have to focus on the most important points to address due to time constraints.
Checklists streamline code reviews by setting expectations for code reviewers, helping them focus on the right things or specifying what to look for in the code.
For example, programmers often make the same ‘typical’ mistakes over and over. Omissions are a ‘typical’ defect but one of the hardest to find: detecting something that isn’t there can be tricky.
Code reviewers may have their own checklists, but giving them a team-approved checklist will ensure that the reviews are consistent, common mistakes are in check, and programmers apply the same criteria to each other’s code.
Below, you can see Onix’s standard template:
No matter how skilled and experienced a programmer is, everyone needs to review and be reviewed.
Anyone who can read the code can do it. The reviewer needn’t necessarily know all the project details. On the contrary, the less a person knows about the project, the more valuable their feedback may be: ideally, code and documentation should allow any developer to work with them in the shortest possible time.
In big companies, code reviews are usually assigned to employees with sufficient expertise in a specific field or project. A software engineer and a software architect are excellent options: they can spot different issues related to both the codebase and the product’s overall design.
If a reviewer doesn’t feel qualified to do some part of the review, especially anything related to privacy, security, concurrency, accessibility, or internationalization, they should involve someone more competent.
If several reviewers decide to divide the task, each reviewing only specific files or certain aspects of the pull request, each reviewer should note in a comment which parts they reviewed.
A heavyweight or formal inspection requires up to six reviewers, takes hours of meetings paging through code printouts, and averages nine hours per 200 lines of code.
A lightweight, tool-assisted process reportedly allows finding the same number of bugs in less than 20% of a formal review time.
A company’s management of code reviews may depend on its situation, but it’s generally recommended to limit pull requests to 400 lines of code. The optimal number is up to 250 lines. A review of 200-400 lines of code over 60-90 minutes should reportedly yield 70-90% defect discovery.
Writing smaller and more manageable chunks of code will facilitate the review thereof. For example, when developing a mobile app’s sign-in feature, the programmers may split it into small tasks: sign-in via social networks, sign-in via email and password, reset password, etc.
If that is not possible for some reason, developers can hold frequent code reviews before all the code is completed, even daily.
The following practices help prevent the formation of mega pull requests:
- automating code styling and formatting with a commit hook
- doing a refactoring story ahead of the main story
- avoiding stray changes in the main story
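As a minimal sketch of the first item, the style check a commit hook runs could look like this (the rules and limits here are invented for illustration; in practice, teams usually delegate this to an off-the-shelf formatter):

```python
def check_style(lines):
    """Flag trailing whitespace and overlong lines, as a commit hook might."""
    problems = []
    for number, line in enumerate(lines, start=1):
        stripped = line.rstrip("\n")
        if stripped != stripped.rstrip():
            problems.append(f"line {number}: trailing whitespace")
        if len(stripped) > 99:
            problems.append(f"line {number}: longer than 99 characters")
    return problems

# A pre-commit hook would run this over every staged file and reject
# the commit if any problems are reported, so style noise never
# reaches the pull request.
```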
Every team has its own optimal pull request size and code review session time; finding them is an optimization issue.
Teams may do it through experiments, measuring the changes in velocity. For example, if the current rule is to create pull requests that will take less than an hour to review, they may try a 45-minute limit for a couple of months, examine the metrics, set a new limit, and repeat the experiment until they find the optimal pull request size.
It’s crucial both to plan enough time for a quality peer review and to limit the amount of time to spend on it. The limitations should apply to each line of code and the total code review time.
Allocating at least 3 seconds of review time per line of code and limiting the total time to 60 minutes are good practices. If you expect a peer review to take longer, it’s best to split it into sessions and take 20-minute breaks between sessions to let the brain reset and the eyes rest.
An hour is a reasonable upper limit, but at Onix, code reviews usually take around 10-30 minutes. Shorter but frequent reviews are more effective.
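The arithmetic behind these limits is simple; a hypothetical helper (names and defaults are ours, based on the figures above) could estimate how to split a review into sessions:

```python
def review_sessions(lines_of_code, seconds_per_line=3, session_limit_min=60):
    """Estimate total review time and how many sessions it needs."""
    total_minutes = lines_of_code * seconds_per_line / 60
    # Ceiling division: any remainder requires one more session.
    sessions = max(1, -(-total_minutes // session_limit_min))
    return total_minutes, int(sessions)
```

For example, a 400-line pull request at 3 seconds per line needs about 20 minutes, comfortably inside one session, while 1,500 lines would call for splitting the review in two.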
It’s recommended that teams do a certain amount of code review every day, and it’s best to run a code review after a planned break for lunch, coffee, etc.
These tools streamline code reviews, optimize a team’s time spent on them, and increase their efficiency and accuracy.
Some of the most popular code review tools are BitBucket, Gerrit, GitHub, and GitLab, which Onix uses.
A collaborative code review tool that helps reviewers to log bugs, discuss them with the author, and approve changes in the code will help ensure that detected defects are fixed. Without an automated tool, detected bugs may fail to be logged in the team’s defect tracking system because they are found before the code is released to QA.
Teams should use standard commenting tools to record the discussion of changes in the repository. Constructive discussions will allow others to join and learn and preserve the context of changes in the code.
Some of the steps to take before a code review include:
The basic steps to protect the source code and enable a secure code review include:
Having the entire pull request reviewed and merged on the same day may not always be possible, but it’s important to give the author some feedback quickly. If the team is short on time or can’t allocate the resources for a complete code review, the reviewer can, at worst, check 20-30% of the written code and quickly point out things the author could look into.
Code reviewers should stay kind, polite, and respectful to authors while also being clear and helpful. They should make every effort to be constructive in their feedback rather than critical, always referring to the code and never to the author. Everybody should respect their teammates’ opinions and try to understand their motivation and goals.
Asking open-ended questions is preferable to statements or explicit suggestions. Such questions invite the author to think about their code critically and independently. Suggesting guidance, explanation, and tips regarding fixes is preferable to offering an entire solution.
If a comment suggests an alternative approach or flags something up, it’s helpful to explain how a suggestion can improve the code and provide an example based on the reviewer’s knowledge and experience.
When there are several options, and the author can provide relevant data or prove through solid engineering principles that their option is valid, the author’s choice should prevail.
If the comments are optional or minor, the reviewer should indicate so in review comments. The author should decide whether to address or ignore them.
Instead of focusing on mistakes or “imperfections,” reviewers should strive to offer encouragement and appreciate good practices. If they see something good, they should give the author a shout-out for good work.
Giving feedback in person, not to mention doing a review face-to-face, will promote communication with the right tone.
Programmers should share knowledge and experience wherever possible. Reviewers should feel free to write comments to tell the author that something could be improved or help them learn something new.
However, they should also explain the “why” behind their feedback. If they suggest an alternative line of code, they should explain why it might improve the original code.
If a suggestion isn’t critical to meeting the style guide and other standards, a prefix like “Nit:” should indicate that it’s just a point of polish and not mandatory to resolve in this pull request.
In the case of a conflict, the developer and reviewer should first try to come to a consensus following the style guide and coding standards.
A face-to-face meeting or a video conference between the reviewer and the author can be more helpful than exchanging comments. Afterward, they should record the discussion results as a comment on the pull request for future readers.
If they can’t resolve the conflict together, one way out is to have a Tech Lead weigh in. Another is to ask the maintainer of the code or an engineering manager to help out, or seek the advice of another expert in the area. Just don’t let a pull request sit around because the author and the reviewer can’t agree.
Quantifiable internal process metrics can help teams gauge the effectiveness and efficiency of their code review process.
It’s up to your team to decide how they will measure the effectiveness of peer reviews. For instance, it may be helpful to watch:
For instance, you may track the time your team spends on each code review and compare those times to the number of lines changed in your pull requests to establish a correlation.
Let us offer a few more code review tips, now regarding things that may impede the process or otherwise adversely impact the team and the software product development.
Read also: How to speed up software development
The overriding principle of code review is this: reviewers should approve a pull request once it reaches a state where it improves the maintainability, readability, and understandability of the system.
Peer code reviews are an integral part of web or mobile application development at Onix, but occasionally, the procedure plays a prominent role.
Many clients approach Onix with a request to assess their codebase. For example, when Jimmy Gambier, the founder/CEO of InnerVR, came up with the idea of a meditation and relaxation VR game, he began to develop it on his own. Unfortunately, the project didn’t go well. When performance and bugs became major issues, he contacted Onix.
After several code reviews, Onix’s developers understood that the problems were more severe than expected: optimizing what could be optimized would be harder than rebuilding the system from scratch.
The client agreed to the latter plan. The result, InnerVR Beta for Oculus Quest 2, was published on the Oculus App Lab store and enjoys excellent reviews and praise for its visual content.
Code reviews were also significant for establishing a trust-based relationship between Onix and Learning Pool. Instead of a usual client-contractor relationship, Learning Pool expected dedicated developers to quickly integrate into their team with its own processes, rules, and management that had been formalized over fifteen years.
It was crucial that Learning Pool’s software engineers should trust and approve all our programmers’ work. Reviewing all the code written, they saw that it was clear and written to a very high standard. After a month of work, Onix’s developers were entrusted with reviewing the code written by Learning Pool’s programmers.
Currently, Onix’s programmers are involved in software development, support, testing, analysis, and anything else Learning Pool may require. The collaboration continues to go smoothly, the internal and remote team members understand each other perfectly, all proposals are discussed, and all decisions are made collectively.
A secure code review with a combination of automated and manual peer review practices is the most efficient way to ship secure high-quality code quickly.
A good code review strategy benefits both the software product development and the developers and begins with a positive attitude toward the procedure.
Checking a reasonable quantity of code at a moderate pace for a limited amount of time results in the most effective code reviews. Qualified code reviewers should inspect every line but shouldn’t delay a pull request for days because it isn’t “perfect.” Instead, everybody should seek continuous improvement of the codebase.
We hope that the code review tips and tricks listed in this article will help improve the health of your software and your developers’ well-being. If you need help with any part of your software development process, please feel free to contact Onix!
Onix-Systems is an outsourcing agency catering to international businesses and startups across various industries. We provide discovery services, custom web and mobile app development, AR/VR development, UI/UX design, quality assurance, DevOps, support, consulting, and more.
Onix offers dedicated full-time teams and builds software on flexible employment and pricing terms, focusing on regulatory compliance and intellectual property. Some clients have worked with us on multiple projects for over ten years.
Software engineers conduct code reviews to ensure that the code at least:
The list of things to do may depend on the situation. Here is Onix’s rather detailed checklist that you can use as a reference:
1) Make sure the code is easy to read and understand.
2) Check that classes, functions, and variables are named clearly and logically, with comments where necessary.
3) Avoid code duplication.
4) Make sure that all edge cases are checked and handled in the code as described in the comments/documentation.
5) Check that the documentation and comments add the missing context: the code explains what it does, and the comments explain why.
6) Make sure the code doesn’t duplicate something already implemented in the project, the standard library, or the project’s dependencies.
7) Make sure the code/documentation explains what will be done and why.
8) Check the code for any security issues.
Some of the best practices for code review include: