Your database is sweating, the app servers are yawning, and the product owner wants pages to feel instant. That is the moment you stop fiddling with SQL and start thinking cache. With EHCache growing up fast and shipping solid features, two patterns keep popping up in reviews and late night chats: Cache Aside and Read Through. Both speed things up. Both can backfire if you pick without a plan. Here is the straight talk I wish I had the first time I added EHCache to a busy Java app.
Cache Aside in plain words
With Cache Aside the application stays in charge. You try the cache first. If you miss, you read from the source of truth and put the value in the cache. On writes, you update the source and then either evict or refresh the cached entry. That is it. No magic loader in the middle.
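The flow above can be sketched in a few lines. This is a hedged stand-in, not real EHCache API: a `ConcurrentHashMap` plays the cache (swap in EHCache's get and put in real code), and `loadUserFromDb` is a hypothetical source-of-truth call.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal Cache Aside sketch: the service owns every cache call.
// ConcurrentHashMap stands in for an EHCache instance; loadUserFromDb
// is a hypothetical hit on the source of truth.
public class CacheAsideDemo {
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    static String loadUserFromDb(String id) {
        // Pretend this queries the database.
        return "user-" + id + "-from-db";
    }

    // Read path: try the cache, fall back to the source, then populate.
    static String getUser(String id) {
        String cached = cache.get(id);
        if (cached != null) {
            return cached;                 // cache hit
        }
        String fresh = loadUserFromDb(id); // cache miss: read the source
        cache.put(id, fresh);              // populate for the next caller
        return fresh;
    }

    // Write path: update the source first, then evict the stale entry.
    static void updateUser(String id, String newValue) {
        // writeUserToDb(id, newValue) would go here.
        cache.remove(id);                  // evict so the next read reloads
    }
}
```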
This fits teams that already have service methods that know where data comes from and how it should be shaped. It is easy to read in code reviews and lets you apply business rules before caching. With EHCache this is a simple get-and-put flow with the usual time-to-live and time-to-idle knobs. It also plays nicely with Hibernate when you want to keep the second-level cache for entities and use Cache Aside for view models or mixed aggregates.
The catch is discipline. Every path must remember to refresh or evict. If a rare write forgets to do it, you serve stale data. Testing needs to hit both the cache miss and cache hit paths. You can still win big, but you own the ceremony.
Read Through when you want the cache to fetch
With Read Through you call the cache and let it load the data on a miss. EHCache supports cache loaders and a self-populating wrapper. The app code only talks to the cache. On a miss the loader pulls from the source and returns the value, which then gets stored.
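The shape of the pattern, as a hedged map-based sketch: callers only see `get`, and the loader lives inside the cache. In real EHCache this role belongs to a cache loader or the self-populating wrapper; the names below are simplified stand-ins.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Read Through sketch: the cache owns the loader; callers never
// touch the source directly. Illustrative only, not EHCache API.
public class ReadThroughCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // fetches from the source on a miss

    public ReadThroughCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        // computeIfAbsent runs the loader once for a missing key and
        // blocks other threads asking for the same key meanwhile,
        // which is the per-key locking that tames stampedes.
        return store.computeIfAbsent(key, loader);
    }
}
```

Using it looks like `new ReadThroughCache<String, String>(id -> fetchFromDb(id))`, with `fetchFromDb` being whatever hypothetical source call your loader wraps.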
This pattern shines when many call sites ask for the same data shape. You centralize loading rules, retries, and backoff. Your services get cleaner since they do not juggle cache calls. You can also reduce stampedes during traffic spikes because the cache can lock a key while one thread loads it.
The tradeoff is indirection. Debugging a slow page means checking cache config and loader code. If your loader talks to three services and a database, you can hide a lot of work under the word fetch. Keep your loader focused and make timeouts strict. The cache should not become a secret orchestrator.
What about writes and staleness
People love reads. Writes break hearts. With Cache Aside you normally write to the source and then evict. That keeps things simple and avoids race conditions. Some teams refresh instead of evict to keep hot keys warm. Be careful with values built from multiple tables. Evict is usually safer.
With Read Through the pair is Write Through or Write Around. Write Through asks the cache to perform the write and pass it to the source. Write Around skips the cache on write and lets the next read repopulate. For most apps a plain service write plus an explicit cache evict is clear and dependable. If you add async write behind, you just raised the bar for monitoring and failure handling. Only do it when you really need the extra speed on the write path.
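To make the two write pairings concrete, here is a toy contrast under stated assumptions: the maps and `writeToDb` are hypothetical stand-ins, not EHCache API.

```java
import java.util.HashMap;
import java.util.Map;

// Toy contrast of Write Through and Write Around. The cache and db
// maps and writeToDb are stand-ins for illustration only.
public class WritePathDemo {
    static final Map<String, String> cache = new HashMap<>();
    static final Map<String, String> db = new HashMap<>();

    static void writeToDb(String key, String value) {
        db.put(key, value); // pretend this is the source of truth
    }

    // Write Through: the write also lands in the cache, so the
    // cached copy is fresh immediately after a write.
    static void writeThrough(String key, String value) {
        writeToDb(key, value);
        cache.put(key, value);
    }

    // Write Around: skip the cache on the write; evict so the
    // next read repopulates from the source.
    static void writeAround(String key, String value) {
        writeToDb(key, value);
        cache.remove(key);
    }
}
```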
Either way, set good TTLs and size limits, and consider a version token in the key when the data shape changes. That avoids ghost bugs after a deploy.
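The version token is just a prefix baked into the key. A minimal sketch, where `SCHEMA_VERSION` is a hypothetical constant you bump on deploys that change the cached shape:

```java
// Versioned cache keys: bump SCHEMA_VERSION whenever the cached
// value's shape changes, so entries written by old code are
// simply never hit again and age out on their own.
public class CacheKeys {
    static final int SCHEMA_VERSION = 3; // hypothetical; bump per deploy

    static String userKey(String userId) {
        return "user:v" + SCHEMA_VERSION + ":" + userId;
    }
}
```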
Cache Aside versus Read Through at a glance
- Control: Cache Aside gives the app full control per call. Read Through centralizes fetch in the cache layer.
- Complex aggregations: Cache Aside fits composite reads where you need rules before caching. Read Through fits simple object fetch with a loader.
- Failure story: With Cache Aside you can fall back on a miss and decide what to do. With Read Through your loader must handle timeouts, partial errors, and backoff inside the cache path.
- Hot keys: Read Through with per key locking can cut stampedes. Cache Aside needs guards in the service or a soft lock pattern.
- Team habits: If your team forgets cache invalidation, Read Through reduces slip-ups. If your team likes explicit flows, Cache Aside stays clearer.
- Stack fit: Already using Hibernate's second-level cache for entities? Go Cache Aside for custom views. Building a shared catalog that many services read? Read Through earns its keep.
A quick checklist before you pick
- Map your hot reads: Which keys are hammered, and how fresh do they need to be?
- Define write rules: Who evicts on create, update, and delete, and how soon?
- Decide on staleness: Can users see data that is a few seconds old? If not, favor evict paths over refresh.
- Protect the source: Add per key locks or single flight to stop dogpile storms
- Set timeouts: Keep loader and data source timeouts tight so caches do not hide slow calls
- Watch metrics: Track hit rate, load time, eviction count, and exceptions from loaders
- Plan warmup: Preload critical keys after deploy or first traffic to cut cold start pain
- Keep keys boring: Include version when the shape changes and avoid surprises on rollout
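The per-key lock item above can be sketched as a single-flight guard: the first caller for a key installs a future and does the load, later callers for the same key wait on that future instead of hammering the source. Names here are illustrative, not EHCache API.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;
import java.util.function.Function;

// Single-flight guard: at most one in-flight load per key; other
// callers for the same key share the winner's result.
public class SingleFlight<K, V> {
    private final ConcurrentHashMap<K, FutureTask<V>> inFlight =
            new ConcurrentHashMap<>();

    public V load(K key, Function<K, V> loader) {
        FutureTask<V> task = new FutureTask<>(() -> loader.apply(key));
        FutureTask<V> winner = inFlight.putIfAbsent(key, task);
        if (winner == null) {
            winner = task;
            task.run();            // we won the race: do the load
            inFlight.remove(key);  // let future misses load fresh
        }
        try {
            return winner.get();   // everyone waits on the one load
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }
}
```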
Pick the pattern that matches your write path and your failure plan, and your cache will be your friend.