Load Testing with JMeter: First Principles

Posted on April 13, 2011 By Luis Fernandez

“Your site is fast until people actually use it.”

The itch that sends you to JMeter

If you work on the web, there is a moment when the happy path turns into a long night. Maybe your app got a mention on a busy forum, or your promo email landed all at once. Things feel fine at five users, a little warm at fifty, and then a login screen starts to crawl. That is when you reach for Apache JMeter. It is free, it speaks HTTP out of the box, and it does not ask for a sales call before letting you push your app a little harder.

Right now we have all sorts of shiny new tools. Heroku just joined a bigger company and EC2 keeps getting new instance types. Node is on every second blog post. Yet the bottlenecks remain classic: sessions, database calls, caches that never warm up, and a queue that quietly backs up without telling you why. The good news is that most of this can be seen with simple, repeatable tests. JMeter gives you that repeat button.

A night in the war room

Here is a story from last week. A small Rails app looked great on a laptop. We had a CDN in front for static files and a clean Nginx setup. Then we ran our first real push. Signups fell over at fifteen requests per second, the homepage was still fine, and the database looked bored. We grabbed JMeter 2.4, recorded a flow with the HTTP proxy, and built a simple plan with one Thread Group, a login, a page view, and an action that writes data.

The first run said everything was fine. Pretty charts, zero errors, green all around. Then we noticed the responses were cached because every test user carried the same token. Once we fixed correlation and fed real data into the plan, the truth appeared. Throughput was stuck, response times stacked up, and the 90 percent line jumped the fence. The culprit was not the CPU. It was a missing index plus a sticky session on the load balancer, the kind of bug that only waves at you when people show up together. JMeter did not solve it by itself. It just kept knocking on the same door while we changed one thing at a time. That steady tap is what you want.
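
If you have not used the recorder before, our setup was roughly this. Treat it as a sketch; the element sits in slightly different places across JMeter versions.

  • Add an HTTP Proxy Server under the WorkBench and leave the default port of 8080.
  • Point its Target Controller at a Recording Controller inside your Thread Group.
  • Set your browser's proxy to localhost:8080 and click through the flow once, slowly.
  • Stop the proxy, then prune the recorded samplers: drop analytics beacons and the static files your CDN already serves.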

Deep dive 1: Model the human, not a robot

Before you press Start, decide what a real user does. A constant drumbeat of clicks with no pauses is not a person on most apps. Home, login, view, write, think, scroll, bounce. Make that the plan.

  • Thread Group: Set a clear number of users, a Ramp-Up Period, and a Loop Count. The ramp-up spreads the starts, which avoids a synthetic pile-up in the first second. If you aim for 100 users, try a ramp-up of 100 seconds for a calm start, then tighten it if you need to test a traffic spike.
  • Think Time: Real people pause. Add a Constant Timer, plus a Gaussian Random Timer for variation, to sprinkle breathing room between steps. This keeps your test from turning into a click storm no real user produces.
  • HTTP Cookie Manager and HTTP Cache Manager: Users keep cookies and a local cache, so your test should too. Otherwise you are hitting cold pages every time, which can be useful for a first look but not for a day-to-day picture.
  • Parameterization: Feed in different usernames, emails, and ids. The CSV Data Set Config is the easiest way to do it; see the sketch after this list. One row per user avoids cache hits and duplicate keys, and unique data also reveals hidden locks and slow queries that never appear with a single account.
  • Correlation: Sites send tokens and ids that change on each page. Grab them with a post-processor such as the Regular Expression Extractor and pass them forward. Think CSRF tokens, session keys, next-page cursors. If you do not capture them, you will get green results for requests your app is ignoring or caching in odd ways.
  • Assertions: A test is not a test if it only counts response codes. Add Response Assertions for key text in the page, a JSON field, or a redirect location. Catching a silent error is pure gold.
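
To make the parameterization and correlation bullets concrete, here is a minimal sketch. The file name, columns, and token pattern are illustrative; the regex below matches a typical Rails form field, so adjust it to your own pages.

  users.csv, one row per virtual user:

    username,password,email
    alice,secret1,alice@example.com
    bob,secret2,bob@example.com

  CSV Data Set Config: point Filename at users.csv and set Variable Names to username,password,email. Samplers then reference ${username} and ${password}.

  Regular Expression Extractor, attached to the login page request:

    Reference Name:      csrf
    Regular Expression:  name="authenticity_token" value="(.+?)"
    Template:            $1$

The POST that follows sends ${csrf} back as a form parameter. If the extractor finds nothing, the variable never gets set, and an assertion on the next page should catch the fallout.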

A clean user model pays off. It makes your numbers closer to what a day of traffic feels like. It also makes results easier to explain to your team. This is how our users behave, so this is how the system reacts.

Deep dive 2: Make JMeter tell the truth

JMeter can chew up your laptop if you let it draw charts while firing requests. Keep the tool simple and the results will be honest.

  • Run in non-GUI mode: Use the command line to kick off tests; a command sketch follows this list. The GUI is great for building plans, not for running heavy tests. Non-GUI mode leaves more CPU and memory for the traffic.
  • Keep listeners light: During a run, stick to a Simple Data Writer that writes CSV. Open the pretty graphs after. Live charts are fun to watch but they steal cycles.
  • Separate load from app: Use a different machine for the generator, or a few small ones if you want to push hard. A single laptop will hit a ceiling long before your app does. An EC2 box is cheap and easy for this job. You can scale out a bit with distributed testing if your plan is correct.
  • Control the pace: If you care about requests per minute more than concurrent users, try the Constant Throughput Timer. It lets you ask for a target rate while the Thread Group manages users and think time.
  • Keep requests real: Set HTTP Request Defaults so you do not repeat servers and ports. Add the headers a browser would send. Use Follow Redirects only when it matches your user flow. And do not forget gzip: send Accept-Encoding the way a browser does.
  • Tidy data and logs: Save results to separate files with timestamps. Turn on Save Response Headers for a sample of requests to spot caching and compression issues. Keep a few server logs in sync so you can match an error spike to a code path.
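
As a concrete starting point, here is the non-GUI run from the first bullet. The file names are placeholders; pick your own.

  jmeter -n -t signup-flow.jmx -l results.csv -j run.log

Here -n runs without the GUI, -t names the plan, -l is where samples are written, and -j is the JMeter log. If your install still writes XML result files, set jmeter.save.saveservice.output_format=csv in jmeter.properties. To push from several generator boxes, start jmeter-server on each and add -R host1,host2 to the same command. And if you reach for the Constant Throughput Timer, remember its target is in samples per minute, not per second.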

One more thing. Warm up your app before you measure. Prime caches, open database connections, and let the runtime load its code. A two-minute warm-up smooths out the odd bumps of a cold start.
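
The warm-up does not need JMeter at all. A throwaway loop over the key pages does the job; the URLs here are placeholders for your own.

  # prime caches, connection pools, and the runtime before measuring
  for i in $(seq 1 50); do
    curl -s -o /dev/null http://staging.example.com/
    curl -s -o /dev/null http://staging.example.com/login
  done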

Deep dive 3: Read the numbers without lying to yourself

The Aggregate Report and Summary Report are your core views. They show average response time, min and max, standard deviation, error rate, throughput, and the 90 percent line. That last one is useful. Averages make slow calls look smaller than they feel to users. If your 90 percent line is double the average, you have a tail problem.

  • Throughput vs response time: Plot both. If you add users and throughput stops growing while response time climbs, you hit a wall. That is your current limit.
  • Errors matter: A low error rate can hide a bad bug if all the errors happen on one key action. Break down by sampler and look at the failing ones, not just totals.
  • Find the knee: Increase users in steps. 5, 10, 20, 40. Watch the point where response time bends up. That is where sizing decisions start to pay off. Cache, pool, or scale out.
  • Relate to capacity: If your clean plan shows 25 requests per second at a steady 300 ms, you can back into rough estimates; the worked numbers follow this list. With think time close to real use, that is your ballpark for one app node. If you need 5 times that for a campaign, you either add nodes or make each request cheaper.
  • Look at the end to end: Do a test through the same path your users take. CDN, SSL, load balancer, app, database. Then isolate parts. This helps you place wins in the right spot. If TLS is slow, you fix that in the front door, not in the database.
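
The worked numbers for that capacity bullet, as a back-of-the-envelope sketch rather than a sizing formula:

  measured:   25 req/s per node at a steady 300 ms
  campaign:   5 x 25 = 125 req/s needed
  nodes:      125 / 25 = 5 app nodes, plus headroom for the tail
  in flight:  25 req/s x 0.3 s = 7.5 concurrent requests per node (Little's Law)

The last line is worth knowing: throughput times response time gives the number of requests in flight, which tells you how big the worker and connection pools need to be before queuing starts.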

Numbers are not opinions. Still, they need a story. Tie your curves to code changes. Tag each run with the git commit and the config. That makes a graph in two months worth more than a late night hunch.
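
One cheap way to do the tagging, assuming the plan lives in the same repository as the code:

  # stamp the results file with the commit under test
  jmeter -n -t signup-flow.jmx -l results-$(git rev-parse --short HEAD)-$(date +%Y%m%d%H%M).csv

Now every CSV on disk says exactly which code produced it, and the graph you draw in two months has a commit to point at.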

What to expect from fixes

Most wins are boring. Add an index. Trim a render. Cut the third-party call that stalls every now and then. Use keep-alive. Tune the connection pool to match the worker count. Cache that expensive call for a minute. JMeter will make each win visible because the line comes down and the rate goes up. You will not need a meeting to notice it.

A few patterns show up again and again.

  • Session stickiness hurts scale: If your load balancer pins users to one node and your plan ramps up fast, you can overload a single server while others idle. Either share sessions or soften the ramp.
  • Chatty pages break easily: Ten small requests are slower than one combined one on a phone or a slow network. Combine where you can, and gzip everything that is text; a minimal Nginx sketch follows this list.
  • Queues hide pain: A background job can keep your pages fast until the queue fills. Watch the queue depth during a test. Aim for small jobs that finish quickly so the buffer does not become a black hole.
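
Since the story above had Nginx in front, here is a minimal gzip block as a sketch; the type list is a sane default, so trim it to what you actually serve. Note that text/html is compressed by default once gzip is on.

  # nginx.conf, inside the http block
  gzip on;
  gzip_types text/plain text/css application/json application/javascript text/xml;
  gzip_min_length 1024;  # skip tiny responses where the gzip overhead outweighs the savings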

A timeless checklist to start

  • Define the user path and create a simple flow.
  • Add think time so it feels human.
  • Parameterize and correlate tokens.
  • Run in non-GUI mode and save results to CSV.
  • Warm up the app, then test in steps.
  • Track the 90 percent line and error rate, not just averages.
  • Change one thing at a time, then rerun the same plan.

Closing thoughts

Load testing with JMeter is not about chasing big numbers. It is about turning guesses into facts. The tool is not flashy. It does not need to be. It records, it plays back, it lets you shape time and users. That is enough to answer the questions that wake you up at two in the morning.

If you are thinking about where to run it, a small EC2 box will do, and you can spin two or three if you want to push harder. If you prefer to stay local, close every shiny window and let your machine focus on the traffic. Keep your plan in version control, comment your changes, and attach the results to the same thread where you share code. Over time, that trail becomes your team memory.

There is a reason people still talk about caching and indexes while new frameworks roll through our feeds. The web is still requests over a network. People still click, wait, and decide if they trust you. JMeter, used with care, helps you respect that wait. It is not magic. It is a mirror. And it will keep you honest when the rush arrives.
