EHCache in a Layered Architecture

Posted on April 9, 2014 by Luis Fernandez

“Cache like a saint. Invalidate like a skeptic.”

Shop-floor wisdom from late-night deploys

Midnight pages and the cache that cried wolf

Two coffees in, the pager buzzed again. Slow product pages, the kind that make marketing ask if we changed something. We had not. The database was doing what databases do when a promotion lands in the inbox of half the mailing list. Reads were fine on paper, but the tail latencies told a different story. That is when EHCache earns its keep in a layered app. Not as a silver bullet, but as a steady hand that turns repeated work into a quick lookup without twisting your domain rules.

We flipped the obvious switches first. The indexes were good. The ORM was not doing anything silly. The fix lived somewhere else. We brought the hot paths into a service-layer cache, kept the cache small, and set a short time to live. The next deploy cooled the alerts. That night taught me something I carry into every project since: caching is a story about correctness first and speed second. If the cache lies, it breaks trust. If it tells the truth, it makes the system feel light.

Where EHCache fits in real projects

Today a lot of Java shops sit on Spring and Hibernate, and EHCache sits in that stack like a good neighbor. You can wire it through the Spring Cache abstraction for method-level caching at the service layer, or plug it in as the Hibernate second-level cache for entity reads. Both patterns have value, but they are not the same story. Service-layer caching gives you control over use cases; the second-level cache gives you control over entities. Pick the one that matches how your app is read.
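
To make that concrete, here is a rough sketch of the Spring side of the wiring. The cache name, size, and five minute lifetime are placeholders for this example, and it assumes Spring's cache abstraction in front of the EHCache 2.x API.

import net.sf.ehcache.config.CacheConfiguration;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.ehcache.EhCacheCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CachingConfig {

    // Native EHCache manager with one purpose-built cache.
    @Bean(destroyMethod = "shutdown")
    public net.sf.ehcache.CacheManager ehCacheManager() {
        net.sf.ehcache.config.Configuration config = new net.sf.ehcache.config.Configuration();
        config.addCache(new CacheConfiguration("productSummaries", 1000)
                .timeToLiveSeconds(300)); // a short, honest lifetime to start with
        return net.sf.ehcache.CacheManager.newInstance(config);
    }

    // Spring's @Cacheable and @CacheEvict route through this adapter.
    @Bean
    public CacheManager cacheManager() {
        return new EhCacheCacheManager(ehCacheManager());
    }
}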

There is also fresh news: the JCache spec (JSR 107) has just arrived, which means a common caching API for Java is finally on the table. EHCache plays well here, and that is good for everyone. You can keep your mental model and swap annotations or wiring without rewriting your world. The point stands: think in layers first, then pick the adapter that keeps your code honest.
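
For flavor, the same kind of read in JCache terms might look like the sketch below. ProductSummary and ProductRepository are placeholder types, and it assumes a JSR 107 provider plus something that processes the annotations (a CDI or Spring integration, for instance).

import javax.cache.annotation.CacheResult;

public class ProductLookup {

    private final ProductRepository repository;

    public ProductLookup(ProductRepository repository) {
        this.repository = repository;
    }

    // Same intent as Spring's @Cacheable: the slug is the key, the cache is named.
    @CacheResult(cacheName = "productSummaries")
    public ProductSummary findBySlug(String slug) {
        return repository.findBySlug(slug); // only runs on a cache miss
    }
}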

Deep dive 1: EHCache in a layered architecture

The classic layered stack looks like this. Controller or resource at the edge. Service in the middle. Repository or DAO near the database. Put EHCache at the service layer for most business reads. This keeps caching close to the language of your product. Product by slug. Cart by user id. Pricing by region. It also lets you compose caches. A service can call two cached methods and merge results without leaking cache details to the web layer.
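
A sketch of that composition, with hypothetical ProductService, PricingService, repository, and page-model types; the cache names and keys are only examples.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductPageService {

    private final ProductService products;
    private final PricingService pricing;

    @Autowired
    public ProductPageService(ProductService products, PricingService pricing) {
        this.products = products;
        this.pricing = pricing;
    }

    // The controller only sees a page model; it never touches a cache.
    public ProductPage page(String slug, String region) {
        return new ProductPage(products.bySlug(slug), pricing.byRegion(slug, region));
    }
}

@Service
class ProductService {

    private final ProductRepository repository;

    @Autowired
    ProductService(ProductRepository repository) {
        this.repository = repository;
    }

    @Cacheable(value = "productSummaries", key = "#slug")
    public ProductSummary bySlug(String slug) {
        return repository.findBySlug(slug); // database read only on a miss
    }
}

@Service
class PricingService {

    private final PriceRepository repository;

    @Autowired
    PricingService(PriceRepository repository) {
        this.repository = repository;
    }

    @Cacheable(value = "priceSnapshots", key = "#slug + ':' + #region")
    public Price byRegion(String slug, String region) {
        return repository.currentPrice(slug, region); // pricing rules stay in one place
    }
}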

Repository-level caching is useful when the data shape matches the entity and you mostly need raw records. This is the classic Hibernate second-level cache case. Keep in mind that service-level rules can change while entity data stays the same. If you cache only at the repository, you might miss the part where pricing depends on flags, dates, or campaigns. That is why cache-aside at the service layer is my default for reads that map to user-facing endpoints.
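
When the repository case does fit, the Hibernate side is a couple of annotations and properties. The entity and region name below are illustrative, and the setup assumes Hibernate 4 with the hibernate-ehcache module.

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Entity reads served from the second-level cache; writes go through as usual.
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "productEntities")
public class Product {

    @Id
    private Long id;

    private String slug;
    private String name;

    // getters and setters omitted for brevity
}

And the matching switches in the Hibernate properties:

hibernate.cache.use_second_level_cache=true
hibernate.cache.region.factory_class=org.hibernate.cache.ehcache.EhCacheRegionFactory
hibernate.cache.use_query_cache=false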

Edge caching at the controller sounds tempting. It can work for static or near-static pages. For anything dynamic, it often leaks request details into the cache key and grows brittle. Keep the edge thin and let the service decide what is safe to memoize.

Deep dive 2: Keys, TTL, and what to cache

The hardest part is not writing the annotation. It is picking clean keys and lifetimes. Strong keys are deterministic and explicit. If the result depends on user id and region, both must be part of the key. Hidden context makes caches lie. For lifetimes, use a simple ladder. Seconds for things that can drift without pain. Minutes for hot catalog or currency rates. Hours for reference tables. If you are afraid to set a longer time, it probably means you need an eviction signal, not a short timer.
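
In Spring terms, making the hidden context explicit is mostly a matter of naming everything in the key. The cache name, the method, and the Price type below are placeholders.

import org.springframework.cache.annotation.Cacheable;

public class PricingReads {

    // Every input that shapes the result is part of the key, nothing hidden.
    // Keying on userId alone would quietly serve one region's price to another.
    @Cacheable(value = "userPrices", key = "#userId + ':' + #region")
    public Price priceFor(long userId, String region) {
        return lookupPrice(userId, region);
    }

    private Price lookupPrice(long userId, String region) {
        // placeholder for the real repository or pricing call
        return new Price(userId, region);
    }

    public static class Price {
        final long userId;
        final String region;

        Price(long userId, String region) {
            this.userId = userId;
            this.region = region;
        }
    }
}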

Memory is not free. EHCache gives you on-heap storage by default and a path to off-heap storage with the commercial bits. Start on heap with modest sizes. Watch the hit rate and eviction count. A healthy cache has a high hit rate for the keys that matter and low eviction churn. Keep a few caches, each with a clear purpose: one for product summaries, one for user profile views, and one for price snapshots. The more generic the cache, the harder it is to tune.
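
An ehcache.xml along those lines might look like the sketch below. The names follow the three caches above, and the sizes and lifetimes are starting points to tune against hit rate and eviction counts, not recommendations.

<ehcache>
    <!-- Sensible fallback for anything not declared explicitly. -->
    <defaultCache maxEntriesLocalHeap="500" timeToLiveSeconds="60"/>

    <!-- Minutes for hot catalog reads. -->
    <cache name="productSummaries" maxEntriesLocalHeap="2000" timeToLiveSeconds="300"/>

    <!-- Seconds for data that can drift without pain. -->
    <cache name="userProfileViews" maxEntriesLocalHeap="5000" timeToLiveSeconds="30"/>

    <!-- Short and honest: prices move with campaigns. -->
    <cache name="priceSnapshots" maxEntriesLocalHeap="1000" timeToLiveSeconds="120"/>
</ehcache>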

Warmups can help right after deploy. Preload the top slugs or top categories you know will be hit first. Do not try to warm the world. It delays deploys and gives a false sense of safety. Warm the first screen, not the entire catalog.
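
One way to warm just the first screen, sketched against the hypothetical ProductService from earlier; the listener and the source of top slugs are illustrative.

import java.util.Arrays;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.stereotype.Component;

@Component
public class FirstScreenWarmup implements ApplicationListener<ContextRefreshedEvent> {

    private final ProductService products;

    @Autowired
    public FirstScreenWarmup(ProductService products) {
        this.products = products;
    }

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        // Calling the cached method is enough to populate the cache.
        for (String slug : topSlugs()) {
            products.bySlug(slug);
        }
    }

    private List<String> topSlugs() {
        // Placeholder: in practice pull these from analytics or configuration.
        return Arrays.asList("spring-sale-hero", "bestseller-one", "bestseller-two");
    }
}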

Deep dive 3: Invalidation, concurrency, and clusters

Reads are easy; writes make you earn your badge. When data changes, you can evict by key, evict by pattern, or bump a version inside the key so the next read misses. Evicting by key is safest. Pattern eviction wipes wide and can cause a stampede. Versioned keys are neat when you have a clear parent id that many reads depend on; a profile version number tucked into the cache key is a great trick.
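
A sketch of the two safest moves, eviction by key and a versioned key. Profile, ProfilePage, and the repository are placeholder types, and the cache names are examples.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProfileService {

    private final ProfileRepository repository;

    @Autowired
    public ProfileService(ProfileRepository repository) {
        this.repository = repository;
    }

    // Strategy 1: cache by id and evict exactly that key on write.
    @Cacheable(value = "profileViews", key = "#userId")
    public Profile view(long userId) {
        return repository.load(userId);
    }

    @CacheEvict(value = "profileViews", key = "#userId")
    public void updateDisplayName(long userId, String displayName) {
        repository.updateDisplayName(userId, displayName);
    }

    // Strategy 2: tuck a version into the key. Writes bump the stored version
    // instead of evicting, so the next read simply misses and rebuilds.
    @Cacheable(value = "profilePages", key = "#userId + ':v' + #profileVersion")
    public ProfilePage page(long userId, long profileVersion) {
        return repository.loadPage(userId);
    }
}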

In a cluster you face a choice: local caches with short TTLs on each node, or a clustered cache that shares state. Local caches are simple and fast, and a short timer smooths out stale reads. A clustered cache shares invalidation, which is great for long-lived entries like reference data. EHCache can work both ways. If you are not ready for shared state, a simple message on a queue to broadcast evictions goes a long way.
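
If shared state is off the table, a small handler per node is often enough. The transport is whatever queue or topic you already run, so the (cacheName, key) message shape below is an assumption for this sketch.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.stereotype.Component;

@Component
public class EvictionBroadcastHandler {

    private final CacheManager cacheManager;

    @Autowired
    public EvictionBroadcastHandler(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    // Wire this to your queue consumer; every node evicts its own local copy.
    public void onEvictionMessage(String cacheName, Object key) {
        Cache cache = cacheManager.getCache(cacheName);
        if (cache != null) {
            cache.evict(key);
        }
    }
}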

Mind the thundering herd. When a hot key expires, many requests can race to rebuild it. Put a small jitter on TTL or stagger keys that would otherwise expire together. Some teams keep a soft TTL plus a hard TTL. Serve the slightly stale value for a short grace period while one thread rebuilds. Your users see steady pages, your backend breathes.
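
Here is a plain Java sketch of the soft plus hard TTL idea, independent of any cache library; the class, the loader, and the timings are illustrative. EHCache's blocking and self-populating decorators cover the simpler case where one thread rebuilds and the rest wait.

import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Serve a slightly stale value during a grace window while one thread rebuilds.
// Hard expiry and error handling are left to the surrounding cache; this only
// shows the soft-TTL trick that avoids a rebuild stampede.
public class SoftTtlCache<K, V> {

    private static class Entry<V> {
        final V value;
        final long softExpiryMillis; // after this, serve stale but trigger one rebuild
        final AtomicBoolean refreshing = new AtomicBoolean(false);

        Entry(V value, long softExpiryMillis) {
            this.value = value;
            this.softExpiryMillis = softExpiryMillis;
        }
    }

    private final ConcurrentMap<K, Entry<V>> entries = new ConcurrentHashMap<K, Entry<V>>();
    private final long softTtlMillis;

    public SoftTtlCache(long softTtlMillis) {
        this.softTtlMillis = softTtlMillis;
    }

    public V get(K key, Callable<V> loader) throws Exception {
        long now = System.currentTimeMillis();
        Entry<V> entry = entries.get(key);

        if (entry == null) {
            // Cold miss: load synchronously.
            V value = loader.call();
            entries.put(key, new Entry<V>(value, now + softTtlMillis));
            return value;
        }

        if (now > entry.softExpiryMillis && entry.refreshing.compareAndSet(false, true)) {
            // Only the first caller past the soft TTL rebuilds; everyone else
            // keeps reading the slightly stale value.
            V fresh = loader.call();
            entries.put(key, new Entry<V>(fresh, now + softTtlMillis));
        }
        return entries.get(key).value;
    }
}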

Reflections from this week

This week everyone is talking about Heartbleed. We rolled keys and patched boxes. It felt like a fire drill, and it reminded me why I treat caching like I treat crypto. Trust first. A fast wrong answer is worse than a slow true one. When you design caches in a layered app, ask what the user would expect if they hit refresh after a write. Let that answer guide your strategy. Put EHCache where it reflects your business, keep keys honest, and make evictions boring.

Speed follows from clarity. The teams that treat caching as part of the design spend less time chasing ghosts at midnight. The lesson holds across stacks and tools. Make the data model clear. Choose the layer with intent. Expire with purpose. Then go get some sleep.
