What to Do When Coverage Drops

Posted on November 28, 2010 by Luis Fernandez

The build light turned red at 1 in the morning. Hudson blinked like a small lighthouse in a foggy kitchen while I reheated pizza and hit refresh. Our latest merge went in clean, tests passed on my machine, life was fine. Then the email landed. Code coverage dropped by four points. The chat room filled with shrug emojis before emojis were a thing. “Must be Cobertura acting up.” “Maybe Hudson skipped a module.” Someone blamed the test seed we do not have. We stared at the graph on Sonar and the red line stared back.

I have seen this movie too many times. A sprint drives a big feature, we touch many files, then coverage dips and folks reach for any excuse that does not involve writing more tests. Tonight I did not want excuses. I wanted a simple plan for what to do when the number drops. If you ship software and you run unit tests, this is for you.

First, treat the number like a fire alarm

Coverage is not truth, but it is a strong signal. When it falls, you do not argue with the siren. You check the oven and the wiring. There are only a few root causes that move that percentage:

  • You added code without tests.
  • You removed tests or they stopped running.
  • You changed what the tool measures.
  • You wrote code that is harder to reach, like branches that never flip.

The good news is you can verify each one fast. No drama, just a checklist.

Quick technical triage when coverage drops

Step 1. Verify the measurement stayed the same. On Hudson or TeamCity, check the job that produces the report. Did someone tweak includes or excludes? Is generated code sneaking in? Are we skipping a module? If you are using Maven with Cobertura or Emma, make sure the same profiles ran. If you are in .NET with NCover or PartCover, confirm the same assemblies are listed. In Python, did coverage.py still run the same test runner? These changes move the denominator and make the graph lie.
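
If you want to eyeball the denominator instead of trusting the graph, count the instrumented lines per package and compare the counts against the last green build. A minimal sketch, assuming a Cobertura-style coverage.xml at the default Maven path; the path and element names are the only assumptions here:

```python
# denominator_check.py -- how many lines does the report actually measure?
import xml.etree.ElementTree as ET
from collections import Counter

tree = ET.parse("target/site/cobertura/coverage.xml")
measured = Counter()
for pkg in tree.iter("package"):
    # Count instrumented lines per package. A package that vanishes or
    # balloons between builds means the denominator moved, not the tests.
    lines = sum(len(cls.findall("lines/line")) for cls in pkg.iter("class"))
    measured[pkg.get("name")] = lines

for name, count in measured.most_common():
    print("%6d instrumented lines  %s" % (count, name))
```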

Step 2. Reproduce locally with the exact command the server runs. Build clean. Run tests with coverage. Open the HTML report and sort by the classes with the most uncovered lines. You are looking for a heat map. The hot files usually belong to the branch you just merged.
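
Here is a rough version of that sort, assuming the same Cobertura-style report; the path is a stand-in for wherever your build writes it:

```python
# hotspots.py -- the classes with the most uncovered lines, worst first.
import xml.etree.ElementTree as ET

tree = ET.parse("target/site/cobertura/coverage.xml")
hotspots = []
for cls in tree.iter("class"):
    uncovered = sum(1 for line in cls.findall("lines/line")
                    if line.get("hits") == "0")
    if uncovered:
        hotspots.append((uncovered, cls.get("filename")))

# The top of this list is where the merge left its holes.
for uncovered, filename in sorted(hotspots, reverse=True)[:10]:
    print("%4d uncovered  %s" % (uncovered, filename))
```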

Step 3. Compare the before and after reports. Even a quick glance helps. In Cobertura and Clover you can click a package and see red and green lines. Find the files that grew the most lines and check if tests touch the new branches. If your tool supports branch coverage, pay attention to that number. Line coverage can trick you when you add many ifs that stay on the happy path.
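
If clicking through two HTML reports gets tedious, a short script can diff them for you. A sketch, assuming you kept the XML report from the last green build; both file names are placeholders:

```python
# coverage_diff.py -- per-file coverage delta between two reports.
import xml.etree.ElementTree as ET

def rates(path):
    return {cls.get("filename"): float(cls.get("line-rate", 0))
            for cls in ET.parse(path).iter("class")}

before = rates("coverage-before.xml")
after = rates("coverage-after.xml")
for filename in sorted(set(before) | set(after)):
    delta = after.get(filename, 0.0) - before.get(filename, 0.0)
    if abs(delta) >= 0.01:  # ignore noise under one percent
        print("%+6.1f%%  %s" % (delta * 100, filename))
```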

Step 4. Confirm tests are actually running. Sounds silly, but I have been burned by a renamed test class that Surefire no longer picks up, or an @Ignore that someone forgot to remove, or a category filter left on from a quick run. In Python, a nose pattern can drop a whole folder. In Ruby, RSpec tags can sideline half the suite. A quiet test is the fastest way to tank coverage.
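
One cheap sanity check is to count the tests that actually executed and compare the total with the last green build. A sketch against the JUnit-style XML that Surefire and most runners write; the report directory is an assumption from the default Maven layout:

```python
# test_census.py -- how many tests really ran?
import glob
import xml.etree.ElementTree as ET

total = 0
for report in glob.glob("target/surefire-reports/TEST-*.xml"):
    suite = ET.parse(report).getroot()
    total += int(suite.get("tests", 0))

print("tests executed: %d" % total)
# If this number fell since the last build, a renamed class, a stray
# @Ignore, or a filter pattern has sidelined part of the suite.
```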

Step 5. Pull on the biggest thread. Sort by uncovered lines and pick the top file. Ask a simple question: what would a small test need to assert here? Controllers often hide complexity. Service classes gain early returns that nobody flips. Data mappers grow branches for null or empty. One or two smart tests here can buy back a point fast.

Common traps that make coverage sink

  • Generated or boilerplate code included in the report. Exclude DTOs, proxies, and codegen folders. You do not want to chase getters all day.
  • New integration code with no seams. If a method builds its own collaborators, you cannot stub them. Introduce a constructor and pass dependencies in; see the sketch after this list. Once you can swap a fake, tests become possible.
  • Big feature branches merged late. The more surface you push at once, the higher the chance you miss edges. Smaller merges keep coverage steady.
  • Branch coverage ignored. A file can hit lines but never flip the else. If your tool reports branches, watch that chart.
  • Multi-module projects misreporting totals. An aggregator that skips a child module will inflate or deflate the global number. Check the module list in your build logs.
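
Here is what that seam looks like, sketched in Python for brevity; the same move is a constructor argument in Java. ReportService, SmtpMailer, and FakeMailer are hypothetical names:

```python
class SmtpMailer:
    def send(self, to, body):
        pass  # talks to a real mail server in production

class ReportService:
    def __init__(self, mailer):
        # The seam: the collaborator comes in from outside.
        self.mailer = mailer

    def send_report(self, to, rows):
        if not rows:
            return False  # the branch nobody flips
        self.mailer.send(to, "\n".join(rows))
        return True

class FakeMailer:
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))

# Both branches are now one line each to test.
service = ReportService(FakeMailer())
assert service.send_report("a@example.com", []) is False
assert service.send_report("a@example.com", ["row 1"]) is True
```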

How to claw back coverage without theatrics

Start with the files that moved the needle. The report will tell you where the holes are. Pick the top two and add targeted unit tests. Focus on branches first. Flip the guard clauses. Test the unhappy path. You are not trying to paint the whole town green. You are fixing the story you just shipped.
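
For instance, a tiny mapper like the hypothetical parse_amount below keeps its guard red until someone feeds it bad input on purpose. Two asserts flip the guard both ways:

```python
def parse_amount(raw):
    # Guards like these stay uncovered on the happy path.
    if raw is None or raw.strip() == "":
        return 0
    return int(raw.strip())

def test_happy_path():
    assert parse_amount(" 42 ") == 42

def test_unhappy_paths():
    # These two asserts are the whole trick: the guard flips both ways.
    assert parse_amount(None) == 0
    assert parse_amount("   ") == 0
```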

Add seams where tests cannot reach. If a method both parses input and hits the database, tease those apart. Pull the parsing into a pure function. Keep the gateway thin. Once you can pass a fake gateway, the rest is easy to cover.
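
A small sketch of that split; parse_order and OrderGateway are hypothetical names:

```python
def parse_order(raw_line):
    # Pure function: no database, trivial to cover exhaustively.
    sku, qty = raw_line.split(",")
    return {"sku": sku.strip(), "qty": int(qty)}

class OrderGateway:
    # Thin gateway: one line of logic, left to a couple of system tests.
    def __init__(self, db):
        self.db = db

    def save(self, raw_line):
        self.db.insert("orders", parse_order(raw_line))

# The parsing edge cases no longer need a database to be tested.
assert parse_order("ABC-1, 3") == {"sku": "ABC-1", "qty": 3}
```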

Gate on changed code, not the whole world. A fair rule is simple. Do not let the change make things worse. Some tools let you fail the build when overall coverage drops. That is a blunt stick. A better habit is to inspect coverage on files you touched. If they go up or stay flat, you are fine. If you want to script it, you can compare the report against the diff from Git or Subversion and alert only on those files. It takes a few lines of glue and pays dividends.
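
The glue can be as small as this sketch: ask Git what the change touched, then check those files in a Cobertura-style report. The base branch, report path, matching heuristic, and the 80 percent bar are all assumptions to adjust:

```python
# touched_files_gate.py -- flag low coverage only on files this change touched.
import subprocess
import xml.etree.ElementTree as ET

diff = subprocess.check_output(
    ["git", "diff", "--name-only", "origin/master...HEAD"])
touched = set(diff.decode().splitlines())

low = []
for cls in ET.parse("coverage.xml").iter("class"):
    filename = cls.get("filename")
    # Cobertura paths are relative to the source root, Git paths to the
    # repo root, so match on suffixes.
    if any(t.endswith(filename) for t in touched):
        rate = float(cls.get("line-rate", 0))
        if rate < 0.80:  # pick a bar that fits your codebase
            low.append((rate, filename))

for rate, filename in sorted(low):
    print("LOW %5.1f%%  %s" % (rate * 100, filename))
raise SystemExit(1 if low else 0)
```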

Use a ratchet instead of a fixed bar. Set the threshold to the current value and only allow equal or higher. Each merge nudges the ratchet up. No one has to break rocks to reach a random target like ninety-five percent. The team moves forward together.
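
A ratchet fits in a dozen lines. This sketch keeps the floor in a plain file next to the build; the file name and report path are assumptions:

```python
# ratchet.py -- fail the build if total coverage slips below the best so far.
import xml.etree.ElementTree as ET

current = float(ET.parse("coverage.xml").getroot().get("line-rate", 0))

try:
    floor = float(open("coverage.floor").read())
except IOError:
    floor = 0.0  # first run: start the ratchet wherever we are

if current < floor:
    raise SystemExit("coverage %.1f%% fell below the ratchet at %.1f%%"
                     % (current * 100, floor * 100))

# Move the ratchet up, never down.
open("coverage.floor", "w").write("%.4f" % max(current, floor))
print("coverage %.1f%% (floor %.1f%%)" % (current * 100, floor * 100))
```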

Keep integration tests and unit tests in balance. If your rescue plan always reaches for slow end-to-end tests, your builds will crawl by Friday. Put logic behind small tests. Leave system paths for a few happy routes and a couple of nasty ones.

For managers who watch the trend

Coverage is a coaching tool, not a scoreboard. If you pay bounties for a number, the number will go up while quality goes down. People learn to test the easy stuff. The goal is not ninety-nine percent green. The goal is confidence that a change will not break users.

Watch the trend line and the gap stories. A sudden drop means a risky merge. Ask for a short retro. What made testing hard? Did we couple a class too much? Did deadline pressure push tests out? Fund the fixes that reduce friction. More fakes. Smaller services. Faster builds. And keep the report visible. A big screen with the last ten builds on Hudson makes this social in the best way.

Set a simple rule for merges. Do not merge if coverage on touched files goes down. It is clear, fair, and hard to game. Pair this with lightweight reviews that ask one question: where are the tests that lock out the bug we fear? That keeps the conversation on risk, not on vanity metrics.

Your turn: run a coverage drop drill

This Monday, try a thirty-minute drill. Grab a teammate and do the following:

  • Open your latest coverage report and list the five files with the most uncovered lines.
  • Pick one file you touched this week and add one test that flips an untested branch.
  • Measure before and after with the same command your server uses.
  • Write a short note in the commit message: what made the test easy or hard?
  • Put the report on a shared screen near the team table. Let the graph be part of the room.

If your toolchain makes any of that painful, capture the pain points. Maybe your Hudson job hides the command. Maybe your Cobertura plugin includes generated code. Fixing that plumbing pays for itself the next time the graph dips at 1 in the morning.

Final thought. When coverage drops, it is not a moral failure. It is a nudge. A reminder that some code slipped through without a safety net. Use the nudge. Add the small tests that save you from late night support. Then let the graph recover on its own as you do real work with real tests. The pizza will still be warm.
