- code review
- linters, static checkers, and compiler error messages
- test suites
- style checkers
Human code review is good at finding coding errors that escaped the compiler, but static checkers also find some of these errors. Static checkers have little up-front cost and negligible cost to re-run, but their coverage is limited. By contrast, human code review is expensive each time you do it, but it has as much coverage as you can afford.
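For instance, here's a sketch (in C++, with invented names) of the kind of error that compiles cleanly but that a static checker such as clang-tidy or cppcheck will typically flag:

```cpp
#include <cstdio>

// Hypothetical parser for illustration: the logic error on the 'if'
// line (|| where && was intended) compiles without complaint, but a
// static analyzer reports the possible null dereference of arg and
// the path on which 'value' is returned without being assigned.
int parse_flag(const char* arg) {
    int value;                              // may never be assigned
    if (arg != nullptr || arg[0] == '-') {  // || should be &&
        value = arg[1] - '0';
    }
    return value;                           // possibly uninitialized
}

int main() {
    std::printf("%d\n", parse_flag("-3"));
    return 0;
}
```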
As a human process, code review is not well suited to revealing defects like performance issues, which require both a specification of acceptable performance and an accurate measurement of a running system. Human processes are also expensive to use for checking conformance with style guidelines, a job for which fully automated tools are generally available. This doesn't mean you can't use code review to find these defects; it just means there may be more suitable tools.
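As a sketch of what that takes, consider this hedged C++ example. The budget `kMaxMillis` and the function `process_batch` are invented for illustration; the shape is the point: a written-down performance specification plus a measurement of a running system, neither of which a reviewer reading a diff has in front of them.

```cpp
#include <algorithm>
#include <cassert>
#include <chrono>
#include <cstddef>
#include <vector>

// Hypothetical function under test.
void process_batch(std::vector<int>& batch) {
    std::sort(batch.begin(), batch.end());
}

int main() {
    const double kMaxMillis = 50.0;  // the specification: an agreed budget

    // Build a worst-case (reverse-sorted) input.
    std::vector<int> batch(1'000'000);
    for (std::size_t i = 0; i < batch.size(); ++i)
        batch[i] = static_cast<int>(batch.size() - i);

    auto start = std::chrono::steady_clock::now();
    process_batch(batch);            // the measurement: a running system
    double elapsed = std::chrono::duration<double, std::milli>(
        std::chrono::steady_clock::now() - start).count();

    // The defect is only visible when both halves are present.
    assert(elapsed <= kMaxMillis);
    return 0;
}
```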
Human review of interfaces (for instance, C++ header files) is good at finding squishy issues that automated tools aren't any good at finding. There isn't yet an automated tool that can judge the quality of a class or library API. A bad API isn't *wrong* per se; it's just more confused or more difficult to use than it needs to be. Class-level APIs aren't captured as user requirements; they evolve out of design and use. Test cases developed early in the class design are also good at detecting API issues: the test writer goes to test a feature and finds they can't get at the feature in a useful way. The advantage of test cases is that they are inexpensive to re-run when the API implementation changes, while code review is expensive every time you use it.
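Here's a hedged sketch of that experience, with an invented `Report` class. Nothing in the API is *wrong*, but writing the test reveals that reaching the one value the caller cares about takes a three-step dance through hidden state:

```cpp
#include <cassert>
#include <string>

// Hypothetical class under review, invented for illustration.
class Report {
public:
    void set_source(const std::string& text) { text_ = text; }
    void parse() { total_ = static_cast<int>(text_.size()); parsed_ = true; }
    bool parsed() const { return parsed_; }
    int total() const { return parsed_ ? total_ : -1; }
private:
    std::string text_;
    bool parsed_ = false;
    int total_ = 0;
};

int main() {
    // The test writer just wants the total, but must remember to call
    // set_source() and parse() first; forgetting parse() doesn't fail,
    // it silently yields -1. A constructor taking the text would do.
    Report r;
    r.set_source("abcde");
    r.parse();
    assert(r.parsed());
    assert(r.total() == 5);
    return 0;
}
```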
Pair programming is the ultimate in code review. While one person types code, another reads it and makes suggestions. It's also the most expensive form of review, costing an hour of review for every hour spent coding. I think code review meetings have almost the same coverage, but at a far lower cost.
There are automated tools that seek to make code review efficient. You can get your source control system to hold off check-ins until people have signaled approval of the code. I find these tools less effective than a code review that involves an actual face-to-face meeting. When each developer reviews in isolation, it's too easy to get lazy and perform a perfunctory read of the changed lines before signing off on the review. I used a code review process that required face-to-face meetings among three or more participants, and I felt the cost was justified by the quality of the review.