JaCoCo and Friends: Wiring Coverage into CI

Posted on December 3, 2011 by Luis Fernandez

We keep shipping faster, and the distance between commit and prod keeps shrinking. Jenkins grew out of Hudson and is now everywhere on the floor. GitHub is where half of our code lives, and pull requests are the new code review. In that world, code coverage is one of the few numbers that helps a team keep its guardrails up without slowing down. The hot topic this week in my team chat was JaCoCo with Jenkins, Maven or Ant, and what to do with the coverage number when the build turns red. So here is a field note on setting up JaCoCo and friends for continuous integration, with a few scars and a simple rubric you can steal.


Definitions that keep the conversation sane


Before wiring anything, agree on words. People fight over numbers when they do not mean the same thing. Code coverage is the percentage of your code the tests touch at runtime. It does not prove correctness. It shows where tests do not even walk. That is valuable by itself.


JaCoCo is a modern Java coverage library from the team behind EclEmma. It plugs in as a Java agent at test runtime or instruments classes offline. JaCoCo reports these metrics:

  • Instruction coverage: the finest-grained measure, counted at the bytecode level. Good for a precise number but hard to map back to source when you read it.
  • Line coverage: the one most dashboards show. Easiest to read in HTML reports and in EclEmma inside Eclipse.
  • Branch coverage: did the tests take both sides of each if and every switch case? If you care about decision logic, chase this.
  • Complexity: a count based on branches that hints at how much decision logic a method has. It is not a badge. It is a smell radar.
  • Method and class coverage: coarse measures that make nice summary tiles in Jenkins.

There is also the topic of test scope. Unit tests run fast and live in memory with mocks. Integration tests touch real databases, containers, queues, or the network. Both can feed JaCoCo, but do not mix them in the same exec file if you care about clean numbers. Keep the scopes separate, then merge the reports when you want the full picture.


Last, a word about instrumentation modes. JaCoCo supports an on-the-fly agent that you pass to the JVM during tests, and an offline mode that rewrites class files before the tests run. The agent is simpler in CI and plays nicer with classloaders. Offline can be handy for special frameworks or when you need coverage for code you spawn in a different process.
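To make the agent mode concrete, this is roughly the flag you would pass by hand to a forked test JVM, shown here inside an Ant junit call. The jar location, exec path, and classpath refid are assumptions for the sketch, and the dedicated jacoco:coverage task shown further down computes this flag for you.

    <!-- Hand-rolled agent attachment; lib/jacocoagent.jar and
         build/jacoco.exec are assumed paths for this sketch. -->
    <junit fork="true" forkmode="once">
        <jvmarg value="-javaagent:lib/jacocoagent.jar=destfile=build/jacoco.exec,append=false"/>
        <classpath refid="test.classpath"/>
        <batchtest>
            <fileset dir="build/test-classes" includes="**/*Test.class"/>
        </batchtest>
    </junit>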


Examples from a team room that loves green builds


Let us start with Jenkins plus Maven. The path of least surprise is to hook JaCoCo into the Surefire run with the agent. The build runs tests, writes a jacoco.exec file, then generates HTML and XML reports. Jenkins can archive the HTML and publish it with a nice link on the job page. If you run Sonar on the side, point Sonar to the exec file and it will show the same metrics alongside your rules and duplications. The combo of Jenkins, JaCoCo, and Sonar gives you one set of numbers and one place to argue about them.
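Here is a minimal pom.xml sketch of that wiring, assuming the jacoco-maven-plugin. The prepare-agent goal sets the argLine property so Surefire forks the tests with the agent attached, and the report goal renders HTML and XML from target/jacoco.exec during verify. The version shown was current when I wrote this; check the JaCoCo site for newer.

    <!-- Minimal JaCoCo wiring for a Maven build. -->
    <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <version>0.5.4.201111111111</version>
        <executions>
            <!-- Attach the agent to the test JVM via the argLine property. -->
            <execution>
                <id>agent</id>
                <goals>
                    <goal>prepare-agent</goal>
                </goals>
            </execution>
            <!-- Render HTML and XML reports after the tests have run. -->
            <execution>
                <id>report</id>
                <phase>verify</phase>
                <goals>
                    <goal>report</goal>
                </goals>
            </execution>
        </executions>
    </plugin>

Jenkins then archives target/site/jacoco as build artifacts. For Sonar, pointing sonar.jacoco.reportPath at the same exec file is the usual hookup; verify the property name against your Sonar version.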


On Ant, the story is similar. Add a target that starts tests with the agent and a target that creates the report after tests. It is a tiny bit more typing but not that much. Archive the report folder in your Jenkins job. For teams that still sit on old scripts, this is a painless upgrade.
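A sketch of that Ant wiring, assuming jacocoant.jar sits in lib/ and the usual build paths; adjust to taste. The coverage task wraps a forked junit run and computes the agent flag for you; the report task renders HTML afterwards.

    <project name="demo" default="report" xmlns:jacoco="antlib:org.jacoco.ant">
        <taskdef uri="antlib:org.jacoco.ant"
                 resource="org/jacoco/ant/antlib.xml"
                 classpath="lib/jacocoant.jar"/>

        <!-- Run the tests under the JaCoCo agent; junit must fork. -->
        <target name="test">
            <jacoco:coverage destfile="build/jacoco.exec">
                <junit fork="true" forkmode="once">
                    <classpath refid="test.classpath"/>
                    <batchtest>
                        <fileset dir="build/test-classes" includes="**/*Test.class"/>
                    </batchtest>
                </junit>
            </jacoco:coverage>
        </target>

        <!-- Render the HTML report from the exec file. -->
        <target name="report" depends="test">
            <jacoco:report>
                <executiondata>
                    <file file="build/jacoco.exec"/>
                </executiondata>
                <structure name="Demo">
                    <classfiles>
                        <fileset dir="build/classes"/>
                    </classfiles>
                    <sourcefiles>
                        <fileset dir="src/main/java"/>
                    </sourcefiles>
                </structure>
                <html destdir="build/coverage"/>
            </jacoco:report>
        </target>
    </project>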


In Eclipse, the EclEmma plugin uses JaCoCo under the hood. You click Run with Coverage, and the editor paints lines in green, yellow, and red. That is the most direct feedback loop you can get. When someone says the coverage went down, you can open the report, click a red method, and see the exact statement that never runs. That conversation beats a vague number any day.


For multi-module projects, keep per-module reports and then produce an aggregate at the top. It helps because teams own modules, and you can set targets by module. Utility libraries can shoot for higher coverage than UI glue. The aggregate tells the story for the product. Both views matter.
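One way to produce that aggregate, sketched with the Ant report task since the Maven plugin will not roll modules up for you: feed it every module's exec file and group the class files by module. The module names and paths here are made up for illustration.

    <!-- Aggregate report across modules; names and paths are illustrative. -->
    <jacoco:report>
        <executiondata>
            <fileset dir="." includes="*/target/jacoco.exec"/>
        </executiondata>
        <structure name="Product">
            <group name="core">
                <classfiles><fileset dir="core/target/classes"/></classfiles>
                <sourcefiles><fileset dir="core/src/main/java"/></sourcefiles>
            </group>
            <group name="web">
                <classfiles><fileset dir="web/target/classes"/></classfiles>
                <sourcefiles><fileset dir="web/src/main/java"/></sourcefiles>
            </group>
        </structure>
        <html destdir="target/coverage-aggregate"/>
    </jacoco:report>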


On thresholds, a simple baseline works for most shops:

  • Line coverage not lower than 70 percent for the core. Libraries can aim for 80 to 90 percent. UI and wiring code can live with less.
  • Branch coverage not lower than 50 to 60 percent where business rules live.
  • No-drop policy: on any branch build, the number must not go down from the last successful build. Greenfield teams like to keep a ratchet.

You can wire a quality gate in Jenkins. If coverage is below the number, mark the build unstable or fail it outright. Some teams go soft fail at first. They alert and track deltas in a report. Once the habit sticks, they flip the switch to hard fail. The point is to make the build talk back when risk creeps in.
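If you would rather keep the gate in the build than in a Jenkins plugin, the Maven plugin also has a check goal that fails the build below a limit. The configuration keys have moved around between plugin versions, so treat this as a shape to aim for and confirm the syntax against the version you run.

    <!-- Illustrative coverage gate; verify the exact syntax
         for your jacoco-maven-plugin version. -->
    <execution>
        <id>coverage-gate</id>
        <goals>
            <goal>check</goal>
        </goals>
        <configuration>
            <rules>
                <rule>
                    <element>BUNDLE</element>
                    <limits>
                        <limit>
                            <counter>LINE</counter>
                            <value>COVEREDRATIO</value>
                            <minimum>0.70</minimum>
                        </limit>
                        <limit>
                            <counter>BRANCH</counter>
                            <value>COVEREDRATIO</value>
                            <minimum>0.50</minimum>
                        </limit>
                    </limits>
                </rule>
            </rules>
        </configuration>
    </execution>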


One more nice touch. When you run integration tests in a separate stage, use a second JaCoCo agent with its own exec file. Generate a second report. At night, merge unit and integration coverage and publish a full report on a nightly job. During the day, keep the fast unit coverage for quick feedback. Your dashboard stays snappy and your nightly gives the complete picture.
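The merge does not even need a separate step on the Ant side: the report task accepts several exec files and combines them on the fly. The file names here are whatever you chose for the two agents.

    <!-- Nightly full picture: one report over both exec files. -->
    <jacoco:report>
        <executiondata>
            <file file="build/jacoco-unit.exec"/>
            <file file="build/jacoco-it.exec"/>
        </executiondata>
        <structure name="Demo, unit plus integration">
            <classfiles><fileset dir="build/classes"/></classfiles>
            <sourcefiles><fileset dir="src/main/java"/></sourcefiles>
        </structure>
        <html destdir="build/coverage-full"/>
    </jacoco:report>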


Counterexamples that save you from chasing ghosts


Coverage is not a grade. It is a flashlight. A few common traps show up again and again.

  • Generated code: JAXB, protocol buffers, IDE wizards, JPA metamodel classes. Exclude those packages; you do not want to write tests for a machine. (See the exclusion sketch after this list.)
  • Boilerplate: equals, hashCode, toString. If your tool generates them, exclude them. If you hand write them, test the ones that matter. Do not pad numbers with trivia.
  • Framework glue: servlet filters, dependency injection modules, configuration classes. These often need a container to run. Keep expectations humble and lean on integration tests.
  • Legacy static singletons: a classic. You can get some coverage with PowerMock and friends, but be careful. It slows builds, and the number hides design debt. Put effort into refactors instead of forcing fake tests.
  • Bytecode agent party crashers: you can only attach so many agents before something trips. Profilers, AOP frameworks, and coverage at the same time can fight. Start simple: one agent in CI. Add more when you have a reason.
  • Magic classloaders: OSGi, application servers, and some plugin systems load classes in exotic ways. JaCoCo agent mode handles most cases. Offline mode can help when the agent does not see everything. Test on a mirror of prod if you depend on this.
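For those exclusions, a sketch of how it looks in the Maven plugin configuration. The patterns match class file paths, and these particular ones are only examples.

    <!-- Keep machine-written classes out of the numbers;
         the patterns below are illustrative. -->
    <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <configuration>
            <excludes>
                <exclude>com/example/generated/**</exclude>
                <exclude>**/*_.class</exclude> <!-- JPA metamodel classes -->
            </excludes>
        </configuration>
    </plugin>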

There is also the classic false sense of safety. You can hit 90 percent line coverage with sloppy assertions. A test that only calls methods without verifying outcomes boosts the number and gives nothing back. Better to have a lower number with tests that fail for real bugs. Teams that take pride in assertions write less code to get more safety.


Another trap is slow builds. Old coverage tools have a reputation for heavy runs on large projects. JaCoCo is fast enough for per commit runs when you keep the scope to unit tests. If your build drags, split steps. Unit with coverage on every push. Integration with coverage nightly. Keep the feedback loop tight.


Decision rubric you can print and stick by the coffee machine


Use this checklist when you wire coverage into your CI. It keeps the debate short and the build honest.

  • Pick your tool
    • New projects, or teams tired of slow reports, pick JaCoCo.
    • Legacy builds already on Cobertura that you cannot touch today keep it until the next refactor. Plan the switch.
    • Commercial tools like Clover can be fine if you already pay for them. Watch for agent clashes.
  • Pick the scope
    • Unit coverage on every push with the JaCoCo agent.
    • Integration coverage on a separate job, nightly or scheduled, with a second exec file.
    • Merge reports for a weekly dashboard if your audience likes one number.
  • Define thresholds by module
    • Core domain and libraries shoot for higher branch and line coverage.
    • Adapters and UI glue keep a modest target. Focus more on acceptance tests.
    • Set a no-drop rule across the board. The best keeper of quality is the ratchet.
  • Decide on gates
    • First month, run soft gates. The build stays green but Jenkins shouts and creates a task.
    • Once stable, flip to hard gates. Failing coverage breaks the build for that module.
  • Keep reports useful
    • Publish the HTML JaCoCo reports as artifacts. Link them from the build summary.
    • If you have Sonar, import JaCoCo and let people browse coverage next to rules.
    • Exclude generated code and unwanted packages so the number reflects real work.
  • Developer workflow
    • Use EclEmma locally. Run with coverage before pushing. Fix red lines while the code is fresh.
    • Write tests with clear assertions. Treat coverage as a guide, not a target.
    • When a diff reduces coverage for a good reason, note it in the review and move on.
  • Special cases
    • Complex app servers or OSGi need a spike branch to test agent versus offline modes.
    • If you see agent conflicts, start with only JaCoCo in CI. Add other agents later if needed.

If you follow that list, the build stays quick, the numbers mean something, and people trust the dashboard.


Lessons we keep relearning when the coffee wears off


Coverage is part numbers and part behavior change. These are the habits that stuck for teams I have worked with.

  • Chase branches where the risk lives. A long if with money on the line deserves both sides tested. Branch coverage shines a light there. If the line number looks decent but branches are weak, do not pat yourself on the back.
  • Make red lines actionable. A red method in a report should lead to a conversation about value. Why is this red? Is it brittle or just forgotten? Decide and act.
  • Celebrate small wins. Turning on EclEmma and getting instant feedback boosts the team more than a weekly coverage email. People fix small gaps right away when the editor shows them.
  • Exclude noise early. Generated files, configuration glue, wizard output. Cut them out on day one so your number starts real. It is hard to remove noise later without drama.
  • Keep one agent in CI. Fewer moving parts means fewer surprises. If you need a profiler, run it in a separate job. For day to day, keep JaCoCo as the only rider.
  • Structure tests by intent. Write focused unit tests that prove behavior, not just call paths. Keep integration tests to prove wiring. Your coverage will look balanced and your bugs will show up sooner.
  • Teach the ratchet. A no-drop rule turns the conversation from hitting a magic number to not getting worse. It is less political and it spreads across teams without big meetings.
  • Use reports to drive refactors. When a class shows high complexity and low branch coverage, that is a refactor target. Break it down, cover the pieces, and the number moves for the right reason.

There is a quiet joy in opening a fresh JaCoCo HTML report and seeing whole packages in green. It is not about the scoreboard. It is about knowing where the shadows are. The combo of Jenkins, JaCoCo, and either Maven or Ant is a straight path to that feeling. You can wire it in a day, share the link with the team, and start making better calls in code reviews next week.


If your build is noisy, start small. Pick one module, turn on the agent, publish the report, and set a modest threshold. Run that for a week. Let people use EclEmma. When the questions stop, roll it to the rest of the repo. No big bang. Just steady ground gained.


We write tests for the same reason we write logs: we want the system to talk back. Coverage is the answer to the question "Did we even go there?" It is a simple question, and it keeps shipping honest. Wire JaCoCo into your CI, watch the trend, and use the story it tells to guide where you invest your testing time.


TLDR for the busy builder
Turn on the JaCoCo agent in your test runs. Publish HTML reports in Jenkins. Set a no drop rule. Aim for higher branch coverage in core logic. Exclude generated code. Use EclEmma locally. Keep the pipeline fast by splitting unit and integration coverage. Let the number guide you, not own you.
