JVM Concurrency Models: Lessons Across Languages

Posted on August 12, 2015 By Luis Fernandez

We build on the JVM because it is everywhere, runs fast enough, and lets us pick a language that fits our brain. The hard part is not syntax. The hard part is concurrency. Threads, actors, futures, channels, streams. Each one promises less pain, more throughput, fewer late night pages.

Today the buzz around microservices, reactive apps, and containers is loud. Docker keeps climbing, Akka shows up in talks, and Java 8 gave us new toys. Let’s compare the JVM concurrency models we actually use, from a practitioner seat, and trade notes across languages.

What problem are we really solving?

We fight the trio of throughput, latency, and correctness. We want work to flow, responses to feel snappy, and we want to sleep at night. Shared mutability flips the board on us. Memory visibility rules bite. Locks make code simple to read and tricky to scale.

So we pick patterns that shrink shared state and push communication to messages or pure functions. The JVM gives us many ways to get there, from plain pools to actors to CSP to STM.

Threads and pools. Where to start?

Java gives you ExecutorService, ForkJoinPool, and, in Java 8, the very handy CompletableFuture. With these you can keep I/O-heavy or CPU-heavy tasks off the main thread and compose steps without nesting callbacks.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A pool we own, so this work never lands on the common ForkJoinPool.
ExecutorService pool = Executors.newFixedThreadPool(8);

// Kick off two independent lookups in parallel on our pool.
CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> loadFromDb("A"), pool);
CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> callService("B"), pool);

// Combine both results once they are ready, without blocking either task.
CompletableFuture<Integer> total = a.thenCombine(b, (x, y) -> x + y);

Integer result = total.join();
pool.shutdown();

This style is great when your units of work are clear and you want plain Java. Watch out for blocking calls inside the common ForkJoinPool. Create your own pool for I/O so CPU-bound tasks do not starve.
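
To make that pitfall concrete, here is a rough sketch, assuming a hypothetical slowJdbcQuery helper. Leave off the executor argument and supplyAsync runs on the common ForkJoinPool; pass a dedicated pool and the slow call stays out of the way of CPU work.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Dedicated pool for blocking I/O; size it for waiting threads, not for cores.
ExecutorService ioPool = Executors.newFixedThreadPool(32);

// Risky: no executor argument, so the common ForkJoinPool runs the blocking call.
CompletableFuture<String> risky =
    CompletableFuture.supplyAsync(() -> slowJdbcQuery("sku-42"));

// Safer: the blocking call runs on the I/O pool, CPU work keeps the common pool.
CompletableFuture<String> safer =
    CompletableFuture.supplyAsync(() -> slowJdbcQuery("sku-42"), ioPool);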

Futures or callbacks?

Callbacks spread state across many places. Futures let you compose, fan out, and gather. In Scala, futures feel natural and pair well with a good execution context.

import scala.concurrent._
import ExecutionContext.Implicits.global
import scala.concurrent.duration._

def price(id: String): Future[Int] = Future { fetchPrice(id) }
def tax(p: Int): Future[Int] = Future { calcTax(p) }

val total =
  for {
    p <- price("A42")
    t <- tax(p)
  } yield p + t

Await.result(total, 3.seconds)

Guava has ListenableFuture. Java 8 has CompletableFuture. Pick one and keep a consistent style to avoid a mishmash.

Actors. When chatty beats shared state

Actors shine when you have lots of small messages and state that must be guarded without locks. Akka brings mailboxes, supervision, and routing. You write single threaded logic inside each actor, and the runtime moves messages around.

import akka.actor._
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.duration._

case class Add(x: Int)
case object Get

// Each Counter handles one message at a time, so n needs no locks.
class Counter extends Actor {
  private var n = 0
  def receive = {
    case Add(x) => n += x
    case Get    => sender() ! n
  }
}

val system = ActorSystem("app")
val c = system.actorOf(Props[Counter], "counter")
c ! Add(3)
c ! Add(4)

// Ask returns a Future with the reply, no second actor needed.
implicit val timeout = Timeout(3.seconds)
val count = (c ? Get).mapTo[Int]

Actors reward clear message design. Keep payloads small. Avoid blocking inside an actor. For CPU work, offload to a pool and send the reply back.
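
To show that offload in code, here is a hedged sketch using Akka's classic Java API; Crunch and heavyWork are made-up names for the example. The detail that matters is capturing the sender before the work goes async, because sender() is only valid while the current message is being handled.

import akka.actor.ActorRef;
import akka.actor.UntypedActor;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical message type for this example.
class Crunch { final int payload; Crunch(int payload) { this.payload = payload; } }

class Cruncher extends UntypedActor {
  private final ExecutorService cpuPool = Executors.newFixedThreadPool(4);

  @Override
  public void onReceive(Object msg) {
    if (msg instanceof Crunch) {
      // Capture sender and self now; they can change once this handler returns.
      final ActorRef replyTo = getSender();
      final ActorRef self = getSelf();
      CompletableFuture
          .supplyAsync(() -> heavyWork((Crunch) msg), cpuPool) // CPU work off the dispatcher
          .thenAccept(result -> replyTo.tell(result, self));   // reply goes back as a message
    } else {
      unhandled(msg);
    }
  }

  private int heavyWork(Crunch c) { return c.payload * c.payload; } // stand-in for real work
}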

Clojure. STM, atoms, and now transducers

Clojure leans into immutability and managed refs. STM lets you coordinate changes without manual locks. Atoms handle independent state. For workflows, core.async gives channels and go blocks. And fresh off the press, transducers give you composable steps without building lazy chains.

(require '[clojure.core.async :as a])

(def ch (a/chan 10 (comp (map inc) (filter odd?))))

(a/go
  (dotimes [i 5]
    (a/>! ch i))
  (a/close! ch))

;; Consume until the channel closes; <! returns nil once it is closed and drained.
(a/go
  (loop []
    (when-some [v (a/<! ch)]
      (println v)
      (recur))))

Channels make back pressure and handoff feel natural. STM shines when you have a few pieces of shared state that must change together.

Groovy with GPars. Dataflow and CSP

Groovy teams reach for GPars when they want dataflow variables, actors, and fork join helpers without leaving Groovy. Dataflow keeps your code clean by waiting only when a value is needed.

import groovyx.gpars.dataflow.*
import static groovyx.gpars.dataflow.Dataflow.task

def x = new DataflowVariable()
def y = new DataflowVariable()
def sum = new DataflowVariable()

task { x << 20 }
task { y << 22 }
task { sum << x.val + y.val }

println sum.get()

This feels like futures but with tidier syntax and a rich set of helpers. Great for glue code and I/O-heavy jobs.

Rx on the JVM. Streams of events

RxJava treats everything as a stream. You compose operators, handle back pressure, and schedule work on the right pool. Perfect for UI bridges, HTTP pipelines, and any place where events flow like water.

import java.util.Arrays;
import rx.Observable;
import rx.schedulers.Schedulers;

Observable.from(Arrays.asList(1, 2, 3, 4))
  .map(x -> x * x)                        // square each value
  .filter(y -> y % 2 == 0)                // keep only the even squares
  .subscribeOn(Schedulers.computation())  // run the pipeline on the computation pool
  .observeOn(Schedulers.io())             // deliver results on the I/O pool
  .subscribe(v -> System.out.println(v));

The mental model is different from futures. You think in flows, not requests. Testing is friendly thanks to virtual time and test schedulers.
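
As a quick taste of that virtual time, here is a hedged sketch using RxJava 1.x test helpers. The TestScheduler is a clock the test advances by hand, so a three second interval finishes instantly.

import java.util.concurrent.TimeUnit;
import rx.Observable;
import rx.observers.TestSubscriber;
import rx.schedulers.TestScheduler;

// Virtual clock: nothing fires until the test advances it.
TestScheduler clock = new TestScheduler();
TestSubscriber<Long> probe = new TestSubscriber<>();

Observable.interval(1, TimeUnit.SECONDS, clock)
    .take(3)
    .subscribe(probe);

// Jump three "seconds" forward at once; all three ticks arrive immediately.
clock.advanceTimeBy(3, TimeUnit.SECONDS);
probe.assertValueCount(3);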

Fibers and queues. Curious about lightweight threads?

Projects like Quasar bring fibers to the JVM. They park cheaply and let you write code that looks blocking while staying friendly to resources. The LMAX Disruptor takes another angle, with a ring buffer built for low-latency queues.

These can shine in trading, telemetry, or fast pipelines. They ask for care and a strong grasp of how the JVM schedules work.

How do we pick a model?

Start with the shape of your work. Chatty state machines point to actors. Fan-out CPU tasks point to pools and futures. Event-heavy apps, whether push or pull, fit Rx well. Shared state that must change consistently within a process can lean on STM.

Also consider the team. The best model is the one your crew can read at 3am. Tooling, logs, and metrics matter as much as theory.

What about testing and debugging?

For Java futures, CompletableFuture chains are testable with explicit pools and timeouts. For Akka, use the TestKit to assert messages. For Clojure channels, shrink buffer sizes or fake the clock to surface deadlocks fast.

Keep thread dumps handy. Add request ids to logs that follow a task across async hops. Expose pool sizes, queue depths, and mailbox stats to your dashboard.

What should we avoid?

Do not block inside a flow that expects non-blocking steps. For example, do not run a slow JDBC query on the common fork-join pool. Route it to a separate pool built for I/O.

Watch thread locals. They vanish across async jumps unless you copy them by hand. Be careful with volatile and atomics. They fix visibility but not coordination bugs. When in doubt, move state behind a message boundary.
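
Here is a rough sketch of that hand copying, assuming SLF4J's MDC carries the request id and doWork stands in for the real task. Snapshot the context on the calling thread and restore it inside the async step so log lines keep their request id.

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.slf4j.MDC;

ExecutorService pool = Executors.newFixedThreadPool(8);

// Snapshot the MDC, which lives in a thread local, on the calling thread...
final Map<String, String> context = MDC.getCopyOfContextMap();

CompletableFuture.runAsync(() -> {
  // ...and restore it on the worker thread so the request id follows the task.
  if (context != null) {
    MDC.setContextMap(context);
  }
  try {
    doWork(); // hypothetical unit of work; its logs now carry the request id
  } finally {
    MDC.clear();
  }
}, pool);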

Can we mix and match?

Yes, but draw clear borders. An actor system can call out to Rx for streamy parts, then send a single message back. A Java service can wrap a legacy driver in a bounded pool and expose a clean future based API to the rest of the app.
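
As a hedged sketch of that wrapping, with a made-up LegacyClient whose lookup call blocks: a small bounded pool absorbs the blocking calls, and the rest of the app only ever sees a CompletableFuture.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class LegacyCatalog {
  // Bounded pool and queue so the blocking driver can never eat the whole app.
  private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
      4, 4, 0L, TimeUnit.MILLISECONDS,
      new ArrayBlockingQueue<>(100),
      new ThreadPoolExecutor.AbortPolicy()); // reject work once the queue fills

  private final LegacyClient client = new LegacyClient(); // hypothetical blocking driver

  // Callers compose on the future and never touch the driver directly.
  CompletableFuture<String> findSku(String id) {
    return CompletableFuture.supplyAsync(() -> client.lookup(id), pool);
  }
}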

Keep the crossing points small. That is where bugs like to hide. Measure at those edges first.

What stays true across languages?

Make shared state small. Prefer message passing or pure steps. Size your pools for the kind of work you run. Add back pressure so fast producers cannot drown slow consumers.

Write load tests that mimic real traffic. Fail fast when queues grow. Keep metrics near the code. Your future self will thank you.
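
One hedged sketch of that pairing: a bounded queue between producer and consumer, where offer with a short timeout gives you back pressure and a clear signal to shed load when the queue is full.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

class EventPipe {
  // Bounded hand-off: the capacity is the back-pressure policy.
  private final BlockingQueue<String> events = new ArrayBlockingQueue<>(1_000);

  // Producer side: wait briefly for room, then fail fast instead of piling work up.
  void publish(String event) throws InterruptedException {
    if (!events.offer(event, 50, TimeUnit.MILLISECONDS)) {
      throw new IllegalStateException("event queue full, shedding load");
    }
  }

  // Consumer side: take() blocks while the queue is empty. When the consumer
  // falls behind, the bounded queue fills and producers feel it immediately.
  String next() throws InterruptedException {
    return events.take();
  }
}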

Compact takeaways

Start simple with pools and futures. Move to actors for chatty state. Use Rx for event pipes. Pull in STM or dataflow when consistency or handoff clarity is the top concern.

Pick one primary model per service, document it, and keep edges tight. Measure the hot paths. Tune based on data, not vibes. The JVM gives us many roads. Choose the one your team can drive well and keep the pager quiet.
