Polling, Comet, Long-Running Requests

Posted on March 13, 2011 by Luis Fernandez

Polling, Comet, and long-running requests: AJAX from a practitioner's perspective, with timeless lessons.


The late-night ping that would not stop


Last night I had that classic support ping. The new activity widget we shipped was eating sockets like popcorn. Tabs open. Team chat running. Music streaming. Our app trying to look real time. The result was a slow burn on the server and a chorus of fans spinning on laptops. You can almost hear the browser whisper, "please stop asking me every second."


We want that magical feel of instant updates, like Gmail chat or Facebook notifications. But the browser still speaks plain old HTTP. WebSockets looks sweet on slides, yet it is not ready across browsers. Chrome is moving fast. Firefox 4 RC is around the corner. IE9 lands any minute. Still, you cannot bet on every user having new toys. So today I want to share what has worked for me in production when the goal is fresh data without setting the servers on fire.


Technical bits: polling, Comet, and long-running requests


There are three approaches I reach for when I want server-push vibes with current browsers.


1. Short polling


Set a timer. Ask the server. Repeat. It is simple and it works everywhere. The tradeoff is load and latency. If you poll every second you hit the server sixty times a minute per user. If you poll every 20 seconds the UI feels sleepy.

function startShortPolling() {
  var interval = 5000; // 5 seconds
  setInterval(function() {
    $.getJSON("/api/activity?since=" + window.lastSeen, function(data) {
      renderItems(data.items);
      if (data.lastSeen) window.lastSeen = data.lastSeen;
    });
  }, interval);
}

Use conditional requests. Send a since marker or an ETag, and have the server reply fast with 304 Not Modified when nothing changed. It is not pretty, but sometimes this is all you need.
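
To make that concrete, here is a minimal Node sketch of the conditional reply. currentEtag() and latestItems() are hypothetical helpers standing in for whatever cheap version marker and data store you have.

// Conditional-reply sketch. currentEtag() and latestItems() are
// hypothetical helpers: a cheap version marker and the data store.
var http = require("http");

http.createServer(function(req, res) {
  var etag = currentEtag(); // e.g. a counter bumped on every new item
  if (req.headers["if-none-match"] === etag) {
    res.writeHead(304); // nothing changed: fast and nearly free
    res.end();
    return;
  }
  res.writeHead(200, { "Content-Type": "application/json", "ETag": etag });
  res.end(JSON.stringify({ items: latestItems() }));
}).listen(3000);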


2. Long polling


This is the sweet spot for many apps. The browser asks once, and the server holds the request open until there is something to say or a timeout hits. When the response arrives, the client opens a new request right away. You get near-instant updates with only one request per cycle.

// jQuery 1.5 style long polling
(function poll() {
  $.ajax({
    url: "/events",
    dataType: "json",
    timeout: 30000, // let the server keep it open
    success: function (data) {
      if (data.items && data.items.length) {
        renderItems(data.items);
      }
      setTimeout(poll, 10); // reconnect quickly
    },
    error: function () {
      // network hiccup or 30s timeout
      setTimeout(poll, 2000); // back off a bit
    }
  });
})();

On the server side, you keep a queue per user and reply when you can. Here is a tiny PHP sketch that waits up to 25 seconds.

<?php
// /events
ignore_user_abort(true);
set_time_limit(0);
header("Content-Type: application/json");
header("Cache-Control: no-cache");

$user = getUserId();
$start = microtime(true);

while (true) {
  $items = dequeue_items_for($user);
  if (!empty($items)) {
    echo json_encode(array("items" => $items, "lastSeen" => time()));
    flush();
    break;
  }
  if ((microtime(true) - $start) > 25) {
    echo json_encode(array("items" => array()));
    flush();
    break;
  }
  usleep(200000); // 200ms
}

On Node.js this pattern feels natural since the event loop just waits. Keep the connection open, then write when there is news.

// Node 0.4 style
var http = require("http");
var clients = [];

http.createServer(function(req, res) {
  if (req.url === "/events") {
    req.socket.setTimeout(0);
    res.writeHead(200, {
      "Content-Type": "application/json",
      "Cache-Control": "no-cache",
      "Connection": "keep-alive"
    });
    clients.push(res);
    req.on("close", function() {
      var i = clients.indexOf(res);
      if (i > -1) clients.splice(i, 1);
    });
  } else if (req.url === "/publish" && req.method === "POST") {
    var body = "";
    req.on("data", function(c) { body += c; });
    req.on("end", function() {
      for (var i = 0; i < clients.length; i++) {
        clients[i].end(JSON.stringify({ items: [JSON.parse(body)] }));
      }
      clients = []; // every held response was just answered; start fresh
      res.writeHead(204); res.end();
    });
  } else { res.writeHead(404); res.end(); }
}).listen(3000);

Got a proxy in front? The default behavior of nginx and some CDNs is to buffer responses. That breaks streaming and can delay long-poll replies. Turn buffering off for that route.

# nginx
location /events {
  proxy_pass http://app;
  proxy_buffering off;
  proxy_read_timeout 3600;
}

3. Streaming with a forever iframe or chunked XHR


You open one connection and the server keeps writing pieces. The browser fires progress events as chunks arrive. You get a constant flow without reconnects. The tricky parts are proxies that buffer and browser limits on connections per host.


A classic forever iframe looks like this:

<script>
  // Define the handler first; the server then writes
  // <script>parent.onEvent({...});</script> chunks into the hidden iframe.
  window.onEvent = function(data) { renderItems(data.items); };
</script>
<iframe id="stream" src="/stream" style="display:none"></iframe>

For XHR streaming, send Transfer-Encoding: chunked and flush after each chunk. In PHP you need ob_flush() and flush(). In Node you just call res.write().
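
On the client side, here is a rough sketch of reading those chunks as they arrive, assuming the server writes one JSON object per line. readyState 3 progress does not fire everywhere, older IE in particular, so test it in your target browsers.

var xhr = new XMLHttpRequest();
var seen = 0;
var buffer = "";

xhr.open("GET", "/stream", true);
xhr.onreadystatechange = function() {
  if (xhr.readyState < 3) return; // 3 = loading, 4 = done
  buffer += xhr.responseText.substring(seen); // grab only the new bytes
  seen = xhr.responseText.length;
  var lines = buffer.split("\n");
  buffer = lines.pop(); // keep a trailing partial line for the next pass
  for (var i = 0; i < lines.length; i++) {
    if (lines[i]) renderItems(JSON.parse(lines[i]).items);
  }
};
xhr.send();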


What about Server-Sent Events or WebSockets?


EventSource is clean and simple, but support is not wide yet. WebSockets feels like the future, but a few browsers turned it off for now due to proxy issues. My take today: ship long polling, add streaming where you control the proxy, then layer in EventSource or WebSockets for modern clients later.


Important details that save you from pain


Connection limits per host. Older IE gave you two. Newer browsers raised it, but they still cap it. If you open one forever connection, you only have a few left for images, CSS, and other AJAX calls. Consider a cookie-free subdomain for images and another for events to spread the load.


Timeouts and backoff. Do not hammer on errors. Use backoff that grows on repeated failures and resets on success. Keep server timeouts slightly below client timeouts, like the 25-second server wait against the 30-second client timeout above, so the server replies cleanly before the browser gives up.
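
Here is a small sketch of that backoff, doubling on failure and resetting on success. The numbers are assumptions to tune against your own server window.

var delay = 1000; // base reconnect delay, reset on every success

function connect() {
  $.ajax({
    url: "/events",
    dataType: "json",
    timeout: 30000, // client window, above the 25s server wait from earlier
    success: function(data) {
      if (data.items && data.items.length) renderItems(data.items);
      delay = 1000;            // healthy again: reset the backoff
      setTimeout(connect, 10); // reconnect right away
    },
    error: function() {
      delay = Math.min(delay * 2, 60000); // grow, capped at a minute
      setTimeout(connect, delay);
    }
  });
}
connect();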


Compression and buffering. Gzip is great for bulk JSON, but for streaming it can add delay because the compressor buffers. If every chunk is tiny, you may want to turn gzip off just for the stream. Watch your proxy buffers too.
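
In nginx that can be one extra line in the events location from above; gzip off is the stock directive, the rest is the same block.

# nginx
location /events {
  proxy_pass http://app;
  proxy_buffering off;
  proxy_read_timeout 3600;
  gzip off; # tiny chunks gain little and the compressor can hold them back
}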


Fanout. If you push the same event to many users, keep the work out of the request path. Publish to a queue like Redis pub/sub or RabbitMQ, then let a worker feed the connected clients.
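
As one sketch of that shape, here is a Node subscriber using the node_redis client to feed the clients array held by the long-poll server above. The channel name and payload shape are assumptions.

// Fanout worker sketch: Redis pub/sub delivers the event once,
// and this subscriber fans it out to every held long-poll response.
var redis = require("redis");
var sub = redis.createClient();

sub.subscribe("activity");
sub.on("message", function(channel, message) {
  var payload = JSON.stringify({ items: [JSON.parse(message)] });
  // clients is the array of held responses from the long-poll handler
  while (clients.length) {
    clients.pop().end(payload); // answer and release each held response
  }
});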


Security. Use CSRF tokens on any publish endpoint. Set proper CORS headers if you must share across origins. For cookies, set HttpOnly and Secure where you can.


The manager view: speed, cost, and risk


If you lead a team and want that real-time feel, ask three questions.


How fresh is fresh enough? If the UI can live with 5 to 10 seconds, short polling is cheap to build and easy to run. If you need near-instant chat or trading-like feedback, long polling or streaming is worth it.


What is the cost curve? Short polling cost grows with the timer. Long polling cost grows with connected users and timeouts. Streaming holds sockets for a long time. This might change how you pick your web server: Apache with many threads can groan under a lot of idle sockets, while Node, nginx in front of an upstream, or event-loop servers like Tornado or Twisted shine here.


What is the escape hatch? Put a feature flag on your real-time feed. If the database gets slow or the queue fills up, fall back to short polling with a longer interval. Also ship a small health dashboard that shows connections, average wait time, and error rate. Your on-call will sleep better.


Budget tip: most of the cost is not bandwidth, it is processes and sockets. One busy loop per connection can melt a box. Pick a server model that parks while it waits.


Your turn: make something breathe


Pick one small feature in your app that would feel better with live updates. A counter. A notification dot. A message list. Then try this plan.


– Ship a long-polling endpoint that waits up to 25 or 30 seconds and returns JSON. Keep it dull and safe.
– Wire up a tiny client loop like the one above.
– Measure time to first byte and average reconnect time in your environment. Watch the Network tab in Chrome DevTools or Firebug.
– Put the route behind nginx with buffering off. Verify chunks land when you expect.
– Add a switch that drops to 10-second short polling if the error rate climbs, as sketched below.
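
Here is one sketch of that switch; the error threshold and intervals are assumptions to tune.

// Fallback sketch: after enough consecutive errors, park the client
// on 10-second short polling; any success flips it back to long polling.
var errors = 0;

function poll() {
  var degraded = errors >= 5; // threshold is a guess: tune it
  $.ajax({
    url: degraded ? "/api/activity?since=" + window.lastSeen : "/events",
    dataType: "json",
    timeout: 30000,
    success: function(data) {
      if (data.items && data.items.length) renderItems(data.items);
      if (data.lastSeen) window.lastSeen = data.lastSeen;
      errors = 0; // healthy again: next cycle goes back to long polling
    },
    error: function() { errors++; },
    complete: function() {
      // degraded mode asks every 10s; healthy long polling reconnects at
      // once; errors while still in long-poll mode back off two seconds
      setTimeout(poll, degraded ? 10000 : (errors > 0 ? 2000 : 10));
    }
  });
}
poll();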


Write down two numbers at the end: how many requests per minute per user you cut, and the average delay users see. Share those with your team. If the drop in chatter is big and the feel is snappy, keep going. If not, no harm done; you have a simple on-off knob and you learned where the bottleneck lives.


The web still speaks HTTP. With polling, Comet, and long-running requests, we can get close to push without waiting for every browser to catch up. Ship it, measure it, and make the app breathe.
