Can two free tools cut your code review time in half without changing your stack, your habits, or your sanity? If you write Java, the duo is FindBugs and PMD, and they are sitting there waiting to clean up your code before a single human reads it.
With Oracle now holding the keys to Java and teams arguing about build scripts while the new tablet from Apple steals the headlines, the boring work still pays the rent. Static analysis is boring in the best way. Run FindBugs to scan bytecode for patterns that usually end in pain. Run PMD to catch sloppy constructs, empty blocks, dead locals, and copy-paste clones. Together they act like that senior reviewer who never gets tired. They do not replace review. They tee it up. They move noise out of the room so your reviewer can look at the design and the tricky bits, not at missing braces or a sketchy equals. You fix the cheap stuff fast, then you bring people into the loop for the parts that matter.
The pitch is simple: run both tools before any review and ship fewer smells for your teammates to comment on.
What does each tool bring? FindBugs flags null risks, dodgy equals and hashCode pairs, misuse of collections, bad synchronization, and a long list of bug patterns seen in the wild. It reads the compiled classes and spots things the compiler lets slide. PMD reads source and is great at rules you would add to a checklist. It warns on empty catch blocks, long methods, unused private fields, and those if conditions that always evaluate the same way. PMD also ships with CPD, which catches copy-paste code across your project, a favorite source of tiny forks and big outages.
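To make that concrete, here is a contrived class that trips both tools. The class, field, and method names are invented for illustration, not taken from a real codebase.

    import java.io.FileInputStream;
    import java.io.IOException;

    public class UserId {
        private final String raw;
        private int hits; // PMD: UnusedPrivateField, written nowhere, read nowhere

        public UserId(String raw) {
            this.raw = raw;
        }

        // FindBugs: HE_EQUALS_NO_HASHCODE, equals overridden without hashCode
        @Override
        public boolean equals(Object other) {
            return other instanceof UserId && raw.equals(((UserId) other).raw);
        }

        void touch(String path) {
            try {
                new FileInputStream(path).close();
            } catch (IOException e) {
                // PMD: EmptyCatchBlock, the failure vanishes silently
            }
        }
    }

Run the pair over a class like this and you get three findings no human reviewer ever has to type out.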
Speed matters and both tools are quick enough to run on your laptop before you push, or on Hudson after you push.
Wiring them in is not a science project. If you use Maven, add the FindBugs and PMD plugins and turn on their check goals so the build fails when the rules say stop. If you use Ant, drop in the tasks from each project and point them to your sources and classes. In Hudson, install the FindBugs and PMD reporters so every build publishes a tidy report and trend chart. Start with a build that only reports, then flip the switch so a new violation fails the build. That one change trains a team fast.
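For Maven, the wiring looks roughly like this. It is a sketch, not a canonical config: the versions shown are period examples, so check your repository for current ones.

    <!-- Sketch: bind both check goals so mvn verify fails on violations -->
    <build>
      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>findbugs-maven-plugin</artifactId>
          <version>2.3.1</version>
          <executions>
            <execution>
              <goals>
                <goal>check</goal> <!-- fail the build on FindBugs findings -->
              </goals>
            </execution>
          </executions>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-pmd-plugin</artifactId>
          <version>2.5</version>
          <executions>
            <execution>
              <goals>
                <goal>check</goal>     <!-- fail on PMD rule violations -->
                <goal>cpd-check</goal> <!-- fail on copy-paste clones -->
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>

Bind them once and every developer gets the same gate with no new habit required.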
Create a baseline once, then only allow the numbers to move down from there.
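One way to draw that line in FindBugs is an exclude filter checked into the repo and wired in through the plugin's excludeFilterFile setting. The file below is a sketch; the package, class, and pattern names are placeholders for your own legacy findings.

    <!-- findbugs-exclude.xml: baseline sketch with placeholder names -->
    <FindBugsFilter>
      <!-- Grandfather an entire legacy package until it is cleaned up -->
      <Match>
        <Package name="com.example.legacy" />
      </Match>
      <!-- Or silence one known pattern in one class -->
      <Match>
        <Class name="com.example.legacy.OldParser" />
        <Bug pattern="HE_EQUALS_NO_HASHCODE" />
      </Match>
    </FindBugsFilter>

Old findings stop counting, and anything new still turns the build red.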
The trap is to turn on everything and drown. Do not do that. Pick a small set. In FindBugs, start with the highest priority bug patterns. In PMD, begin with the basic, design, and unused code rulesets. Leave code style rules for your formatter and keep the analyzer on things that create bugs or cost time. If you must break a rule for a good reason, suppress it in place and write a short note. PMD supports the NOPMD comment. FindBugs understands suppression through its annotations package. Be explicit so the team learns from the exception instead of starting a silent rules war.
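In code, the two mechanisms look like this. The class, the suppressed pattern, and the justification are invented for the example, and the FindBugs annotation needs the FindBugs annotations jar on your classpath.

    public class Cache {
        private final Object[] table = new Object[16];

        // FindBugs: suppress one named pattern and record why
        @edu.umd.cs.findbugs.annotations.SuppressWarnings(
                value = "EI_EXPOSE_REP",
                justification = "callers are trusted internal code")
        public Object[] rawTable() {
            return table; // NOPMD - exposing the array here is deliberate
        }
    }

Both suppressions name the exception and the reason, which is exactly what the next reader needs.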
False positives happen, so treat suppressions as part of the review, not a hack.
Here is a sane routine before a review. Run PMD and FindBugs locally. Fix the red flags. Commit. Let Hudson publish the reports. If the build breaks on new issues, fix them right away. Share the links to the reports in your review request so humans can skip the lint and jump to the ideas. Over time, raise the bar by enabling more rules, but only the ones that catch real mistakes in your codebase. Keep a short document in the repo that lists which rules are on and why. That tiny bit of context keeps decisions from getting lost between sprints.
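Assuming the Maven wiring sketched earlier, that routine is two commands; goal names can vary a little between plugin versions.

    mvn clean verify                        # compile, test, and run the bound check goals
    mvn compile findbugs:findbugs pmd:pmd   # report-only runs while you clean up

The first is the gate, the second is the quiet loop you run while working the list down.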
This is all doable today with the tools you already have. Most teams are on Subversion or just trying out Git, and these analyzers do not care. In Eclipse you can install the FindBugs and PMD plugins and get feedback while you type. If your team uses Crucible or a lighter review flow, the same idea holds. Clean the basics with machines, then invite people to check the design, concurrency, and the tests. That is where the brain power belongs.
One more bonus: your junior devs learn faster. The tools point at patterns with names and short explanations, and that shared language makes review comments shorter and clearer.
Let the static analyzers do the nagging so your reviewers can do the thinking.