Test Pyramid with JUnit: Fast at the Bottom, Valuable at the Top

Posted on June 25, 2013 By Luis Fernandez

We just wrapped another late-night deployment. Jenkins is green again, but our UI checks took longer than the coffee line this morning. Teams keep asking the same questions: how many tests are enough, and where should they live? The Test Pyramid is still the best mental model I know for keeping builds quick and feedback sharp. With JUnit plus tools like Mockito, Selenium WebDriver, and Jenkins, we can make tests fast at the bottom and valuable at the top without wrecking our day. That is the plan I want to share here, from the point of view of someone who writes and fixes these tests all week.


Definitions


Picture a pyramid split into three layers. The base is lots of unit tests. They run in memory, use no real network, no disk, no app server. They are fast and precise. The middle layer is integration tests. These touch more than one piece at a time. Think database, file system, a REST call to a local stub, or an embedded servlet container. They are slower but they catch wiring mistakes and broken assumptions between parts. The top is end to end tests. A browser clicking through the app. A full request to a staging API with a real database behind it. These are slow, flaky if we get sloppy, and expensive to keep green, but they give strong confidence for a few critical flows.


Across all layers we use test doubles when a real dependency makes the test slow, flaky, or hard to set up. That includes stubs for HTTP calls, fakes for email senders, and mocks for collaborations where we want to verify a call was made. With JUnit as the runner, this gives us a repeatable suite that tells us what broke and where.
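To make that concrete, here is a minimal sketch with JUnit 4 and Mockito. The OrderService, PaymentGateway, and Mailer names are made up for illustration; the point is that the gateway boundary is stubbed and the mailer call is verified, so nothing real gets charged or emailed.

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class OrderServiceTest {

    // Hypothetical collaborators: one we stub, one we verify.
    public interface PaymentGateway { boolean charge(String account, long cents); }
    public interface Mailer { void sendReceipt(String account); }

    public static class OrderService {
        private final PaymentGateway gateway;
        private final Mailer mailer;
        public OrderService(PaymentGateway gateway, Mailer mailer) {
            this.gateway = gateway;
            this.mailer = mailer;
        }
        public boolean placeOrder(String account, long cents) {
            if (!gateway.charge(account, cents)) {
                return false;
            }
            mailer.sendReceipt(account);
            return true;
        }
    }

    @Test
    public void sendsReceiptWhenChargeSucceeds() {
        PaymentGateway gateway = mock(PaymentGateway.class); // stubbed boundary
        Mailer mailer = mock(Mailer.class);                  // mocked collaboration
        when(gateway.charge("acct-1", 500L)).thenReturn(true);

        boolean placed = new OrderService(gateway, mailer).placeOrder("acct-1", 500L);

        assertTrue(placed);
        verify(mailer).sendReceipt("acct-1"); // verify the call was made
    }
}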


Examples


Here are concrete shapes I keep seeing pay off with JUnit in day to day work.

  • Unit tests: A money converter that applies a rate and rounds; a parser that turns a CSV line into a domain object; a validator for business rules on a signup form. These run in milliseconds, hit only memory, and use Mockito to stand in for collaborators. Good names on tests make failures meaningful. On the build side, these run with Maven Surefire or the default Gradle test task.
  • Integration tests: A repository that talks to H2 in memory with the schema created at test setup; a REST client that hits a local Jetty instance started for the suite; a message publisher that writes to an embedded queue. These give real wiring and find broken configs early. With Maven they can be separated using the Failsafe plugin so they run after packaging. With JUnit we can label them using @Category and have the CI server pick which groups to run on each stage. A minimal sketch of the repository shape follows this list.
  • End to end tests: A Selenium WebDriver script that logs in, creates a user, and verifies the new user appears in a search. A full purchase flow that moves money and then rolls it back. We keep these few, focus on the business line that would hurt most if it broke, and run them on Jenkins slaves with a clean browser profile. They run on every push to a shared branch or at least on a scheduled job. If we see one get flaky, we fix it or throw it away.
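Here is a minimal sketch of the repository shape from the integration bullet, assuming H2 and JUnit 4 are on the test classpath. The table and the queries are invented for illustration; the class name ends in IT so the Failsafe plugin picks it up after packaging.

import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

// Middle-layer test: real JDBC wiring against an in-memory H2 database.
public class UserRepositoryIT {

    private Connection connection;

    @Before
    public void createSchema() throws Exception {
        // DB_CLOSE_DELAY=-1 keeps the in-memory database alive for the whole test.
        connection = DriverManager.getConnection("jdbc:h2:mem:users;DB_CLOSE_DELAY=-1");
        connection.createStatement().execute(
                "CREATE TABLE users (id IDENTITY PRIMARY KEY, email VARCHAR(255))");
    }

    @After
    public void dropSchema() throws Exception {
        connection.createStatement().execute("DROP ALL OBJECTS");
        connection.close();
    }

    @Test
    public void savesAndFindsUserByEmail() throws Exception {
        PreparedStatement insert =
                connection.prepareStatement("INSERT INTO users (email) VALUES (?)");
        insert.setString(1, "ana@example.com");
        insert.executeUpdate();

        PreparedStatement query =
                connection.prepareStatement("SELECT COUNT(*) FROM users WHERE email = ?");
        query.setString(1, "ana@example.com");
        ResultSet rs = query.executeQuery();
        rs.next();

        assertEquals(1, rs.getInt(1));
    }
}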

On the CI side, a nice shape is: unit tests on every commit, middle layer on commit to master, and UI checks nightly or before a release tag. The idea is cheap failures first and expensive checks later.
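One way to wire that staging with JUnit 4 is the Categories runner. This is a sketch under assumptions: the marker interface names and the nested test class are my own, and a real suite would list the actual middle-layer classes.

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

public class StageSuites {

    // Marker interfaces used purely as category labels.
    public interface IntegrationTests {}
    public interface EndToEndTests {}

    // A test opts into a stage by declaring its category.
    @Category(IntegrationTests.class)
    public static class UserRepositoryIT {
        @Test
        public void findsUsers() { /* real wiring lives here */ }
    }

    // Gate 2 on the CI server runs only the integration group.
    @RunWith(Categories.class)
    @IncludeCategory(IntegrationTests.class)
    @SuiteClasses({ UserRepositoryIT.class /* plus the other middle-layer tests */ })
    public static class IntegrationStage {}
}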


Counterexamples


Here are patterns that look fine at first and then turn into pain.

  • Ice cream cone testing: Tons of UI checks at the top, a little in the middle, almost no unit tests. Every change breaks three scripts and you spend your day updating selectors. Build times stretch, people start ignoring red builds, and quality drops.
  • Everything through the browser: Writing a test for each controller rule that clicks through the whole site. The browser is not the right place to check every small branch. Use service level tests for most rules, then keep a handful of full journeys for the happy path and the scariest money flows.
  • Over mocking: Mocking plain domain objects and stubbing their return values, then asserting on those mocks. The test passes even if the real object is broken. Mock only collaborations across boundaries. For logic inside one object, create the real thing.
  • Network in a unit test: A tiny test that pulls a feed from a real server. It fails when WiFi is spotty or the server rate limits. Keep unit tests free of network calls. If you need to test HTTP code, stick a local stub right next to the suite; a small sketch of that shape follows this list.
  • Huge fixtures: A test that loads a pile of XML or JSON just to check one field. Use small builders or factory methods to create the slice you need. If a test needs pages of setup, we are probably testing too much at once.
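For the network counterexample, a local stub can live right next to the suite with nothing more than the JDK's built-in com.sun.net.httpserver classes. FeedClientTest and the /feed path are invented for illustration; the point is that the test binds a throwaway server on a free port and never leaves the machine.

import static org.junit.Assert.assertEquals;

import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.Scanner;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

// The "feed" lives in a local stub server, so the test never touches the real network.
public class FeedClientTest {

    private HttpServer stub;

    @Before
    public void startStub() throws Exception {
        stub = HttpServer.create(new InetSocketAddress(0), 0); // port 0 = any free port
        stub.createContext("/feed", new HttpHandler() {
            @Override
            public void handle(HttpExchange exchange) throws IOException {
                byte[] body = "hello".getBytes("UTF-8");
                exchange.sendResponseHeaders(200, body.length);
                exchange.getResponseBody().write(body);
                exchange.close();
            }
        });
        stub.start();
    }

    @After
    public void stopStub() {
        stub.stop(0);
    }

    @Test
    public void readsFeedFromLocalStub() throws Exception {
        URL url = new URL("http://localhost:" + stub.getAddress().getPort() + "/feed");
        InputStream in = url.openStream();
        String body = new Scanner(in, "UTF-8").useDelimiter("\\A").next();
        assertEquals("hello", body);
    }
}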

Decision rubric


When a new feature lands or a bug shows up, here is how I decide where the check belongs and how to write it with JUnit.

  • Ask what can break: If the risk lives inside a single class or pure function, write a unit test. If the risk is about wiring, data mapping, or config, write a middle layer test. If the risk is about a journey across screens or services, write one end to end check.
  • Push speed down the stack: Prefer the fastest layer that can fail for the right reason. If a rule can be tested at unit level, do not push it up to UI. Keep the top tiny and meaningful.
  • One behavior, one reason to fail: For each behavior, write the smallest test that fails for that reason alone. That makes the failure message helpful and keeps the suite stable.
  • Score the test idea: Speed, signal, isolation, maintenance cost. If two ideas have the same signal, take the faster one. If a fast one hides real risk, add a slower but high value check as a backup.
  • Use JUnit features to shape your suite: Categories to group tests by layer. Assumptions for environment checks. Rules for temporary folders and timeouts. Suites to run common groups locally and in CI. This keeps the flow smooth for both laptop runs and Jenkins jobs; a small sketch follows this list.
  • Set target ratios: A healthy project often lands around two thirds to three quarters unit tests, the next chunk in the middle, and a small tip of end to end. Do not chase the number blindly. Use it as a smell. If the tip grows too large, move checks down. If the base is thin, write more small tests.
  • Stage the pipeline: Gate 1 runs unit tests and static checks in minutes. Gate 2 runs the middle layer with a local database and embedded servers. Gate 3 runs UI flows on clean machines. Keep reports visible and fast to read. Fail early and cheap.
  • Name and tag with intent: Prefix classes with Unit, Service, or E2E, or use JUnit categories. This makes it clear what to run in each stage and what to run before pushing.
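As a sketch of those JUnit features in one place, here is a test class that leans on Rules for a scratch folder and a timeout, plus an Assumption for an environment check. The report writer itself is imaginary; only the JUnit pieces matter here.

import static org.junit.Assert.assertTrue;
import static org.junit.Assume.assumeTrue;

import java.io.File;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
import org.junit.rules.Timeout;

public class ReportWriterTest {

    // Rule: each test gets a scratch directory that is deleted afterwards.
    @Rule
    public TemporaryFolder tmp = new TemporaryFolder();

    // Rule: fail any test in this class that runs longer than two seconds.
    @Rule
    public Timeout timeout = new Timeout(2000);

    @Test
    public void writesReportIntoScratchFolder() throws Exception {
        // Assumption: skip (rather than fail) on machines without a writable temp dir.
        assumeTrue(new File(System.getProperty("java.io.tmpdir")).canWrite());

        File report = tmp.newFile("report.txt");
        // ... production code would write the report here ...
        assertTrue(report.exists());
    }
}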

Lesson learned


The Test Pyramid is not a poster on a wall. It is a set of tradeoffs we make every day. JUnit gives us the rails to run on, and the rest of our toolchain fills the gaps. When the base is solid and fast, people run tests on each save and push more often. When the middle checks real wiring, we catch bad configs before staging. When the top stays small and focused, we can trust a green build before we ship.


Right now teams are leaning into continuous delivery and pull request checks. Git hosting and cloud CI make it easy to run tests for every change, but only if the suite is not a boat anchor. The pyramid steers us back to the essentials. Small fast checks near the code, a practical set of service level checks, and a tiny set of business journeys that reflect how people actually use the product.


If your builds are slow or flaky, do not start by buying more hardware. Look at the shape of your suite. Move checks down. Cut brittle UI flows. Add more unit tests around the hot spots. Tag everything so you can slice runs on a laptop and on Jenkins. Use JUnit rules and categories to make that easy.


Most of all, watch the feedback loop. The best suites make it natural to push code every hour and sleep well after a release. That comes from being fast at the bottom and valuable at the top. With JUnit, a bit of discipline, and a friendly CI server, the pyramid is not theory. It is how we keep shipping without fear.
