Cloud hosting is the new shared office for our apps. You bring your laptop, plug into power, and get to work. Git push to Heroku feels normal now. AWS is the data center you rent by the hour. Google App Engine, Cloud Foundry, AppFog, Azure Websites: they all promise less yak shaving and more shipping. In that world, the twelve factor app reads less like theory and more like street rules. I am writing this after a string of production weeks where the difference between a calm pager and a 3am fire was nothing more than whether we respected those factors. The goal here is simple. If you are building a hosted app or API, make choices that keep you portable, boring in the right places, and ready to scale when traffic shows up out of nowhere because a lucky tweet put your launch on the front page of Product Hunt.
Definitions that matter when your app is hosted
There is a lot of lore around the twelve factors. Let us pin them down with a hosted apps lens. Short and to the point.
- Codebase: One codebase tracked in version control. Many deploys. If you have a folder per customer, you do not have a codebase, you have a hobby with invoices.
- Dependencies: Declare them and pin them. The server should not guess. No magical system packages. If you need ImageMagick, say it out loud in your config or buildpack.
- Config: Put config in the environment. Keys, secrets, URLs, all of it. The repo is not a safe.
- Backing services: Treat databases, queues, caches, storage, email, and third party APIs as attached resources. Swapping one should take a config change, not a code change.
- Build release run: Separate these three. Build turns code into an artifact. Release combines artifact with config. Run executes it. Do not remix these steps on a whim.
- Processes: The app runs as stateless processes. Nothing important should live in memory between requests other than caches you can afford to lose.
- Port binding: The app speaks HTTP by opening a port. No injected web server you do not control. You bring your own server, whether it is Puma, Unicorn, Node HTTP, or Jetty.
- Concurrency: Scale by adding more processes of the right type. Web, worker, scheduler. Little armies that you can grow and shrink.
- Disposability: Processes start fast and shut down clean. If a dyno restarts, nobody notices. Draining makes queues happy.
- Dev prod parity: Keep differences between dev and prod small. Shorten the gap in time, people, and tools. If dev uses SQLite and prod uses Postgres, you are making a promise you cannot keep.
- Logs: Treat logs as event streams. Write to stdout and let the platform collect and ship them.
- Admin processes: Run one off tasks as the same code and config as the app. Schema migrations, data fixes, reports. Same release, same world.
These are not commandments carved in stone. They are tradeoffs that keep hosted apps honest. They make PaaS friendlier and IaaS less noisy.
Examples from the field
Config in environment: A Rails app on Heroku that reads Stripe keys from environment variables. When you spin up a staging app, it gets staging keys by default because the config lives with the app, not in the repo or in a wiki. You rotate a secret by changing a config var and restarting dynos. No commit. No new build.
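In Node terms, the same idea is a small config module that reads everything from the environment and fails fast at boot. A minimal sketch; the variable names are stand-ins for whatever your app actually needs:

```ts
// config.ts: pull everything from the environment and fail at boot,
// not mid-request, when something required is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`missing required environment variable: ${name}`);
  }
  return value;
}

export const config = {
  stripeSecretKey: requireEnv("STRIPE_SECRET_KEY"),
  databaseUrl: requireEnv("DATABASE_URL"),
  // Optional settings get an explicit default so the fallback is visible.
  logLevel: process.env.LOG_LEVEL ?? "info",
};
```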
Backing services as attached resources: Your Node API uses Redis for rate limiting. In dev, the app points at a local Redis. In staging and prod, it points at an add on. You outgrow the shared plan and flip to a bigger one. No code change. Same config key, new URL.
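Here is roughly what that looks like with the node-redis client, assuming the platform hands you a REDIS_URL config var; the limit of 100 per minute is made up:

```ts
import { createClient } from "redis";

// The only thing the code knows about the backing service is a URL.
// Swap plans by changing REDIS_URL; the code never changes.
const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect(); // top level await, assuming an ES module

// A crude fixed window rate limit: at most 100 requests per minute.
async function allowRequest(clientId: string): Promise<boolean> {
  const windowKey = `ratelimit:${clientId}:${Math.floor(Date.now() / 60_000)}`;
  const count = await redis.incr(windowKey);
  if (count === 1) {
    await redis.expire(windowKey, 60); // old windows clean themselves up
  }
  return count <= 100;
}
```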
Port binding: A JVM app that runs with Jetty embedded. The process listens on the port the platform gives. No system Apache or Nginx needed inside the app. On Heroku that means web dynos just run the process and the router takes it from there.
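The Node equivalent of embedded Jetty is a few lines. A minimal sketch that binds to whatever port the platform assigns:

```ts
import http from "node:http";

// Bind to the port the platform hands the process. On Heroku the
// router takes it from here; no Apache or Nginx inside the app.
const port = Number(process.env.PORT ?? 3000);

const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("ok\n");
});

server.listen(port, () => {
  console.log(`web process listening on ${port}`);
});
```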
Concurrency that fits the workload: A queue worker that pulls from SQS or RabbitMQ. You start with one worker process. Traffic grows. You scale workers to five. The web process count stays the same. Costs go up in a way that maps to load. You stop guessing.
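A sketch of that worker process type, assuming RabbitMQ through the amqplib client and an AMQP_URL config var; handleJob stands in for the real work:

```ts
import amqp from "amqplib";

async function main(): Promise<void> {
  const conn = await amqp.connect(process.env.AMQP_URL!);
  const channel = await conn.createChannel();
  await channel.assertQueue("jobs", { durable: true });
  // One unacked job per worker, so adding workers spreads load evenly.
  await channel.prefetch(1);

  await channel.consume("jobs", async (msg) => {
    if (msg === null) return;
    await handleJob(JSON.parse(msg.content.toString()));
    channel.ack(msg); // ack only after the work is actually done
  });
}

// Placeholder for the real job handler.
async function handleJob(job: unknown): Promise<void> {
  console.log("processed job", job);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Scaling workers to five means five copies of this process; the queue does the load balancing for you.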
Disposability done right: Your process traps a signal and stops taking requests. It finishes in flight work, commits, and exits. When the platform restarts a dyno, customers do not hit a half written response. Your queue is the buffer between you and chaos.
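In Node the shutdown dance is short. A sketch with illustrative timings; the grace period comment assumes Heroku's roughly thirty second window between SIGTERM and SIGKILL:

```ts
import http from "node:http";

const server = http.createServer((req, res) => {
  // Stand-in for real work that takes a moment to finish.
  setTimeout(() => res.end("done\n"), 100);
});

server.listen(Number(process.env.PORT ?? 3000));

process.on("SIGTERM", () => {
  console.log("SIGTERM received, draining");
  // close() stops accepting new connections and fires its callback
  // once in flight requests have completed.
  server.close(() => process.exit(0));
  // If draining outlasts the platform's grace period, give up before
  // the SIGKILL arrives.
  setTimeout(() => process.exit(1), 25_000).unref();
});
```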
Logs as streams: The app writes to stdout. A Heroku log drain ships the stream to Papertrail. You search by request id and jump across web and worker logs instantly. No ssh. No tail on a random server that may be gone tomorrow.
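The emitting side is tiny: one JSON event per line to stdout, and let the drain do the rest. A sketch with purely illustrative field names:

```ts
// One JSON event per line to stdout; the platform collects and ships.
function logEvent(fields: Record<string, unknown>): void {
  console.log(
    JSON.stringify({
      at: new Date().toISOString(),
      release: process.env.RELEASE_VERSION ?? "unknown",
      ...fields,
    })
  );
}

// Inside a request handler you might write:
logEvent({ requestId: "req-8f2c", path: "/api/users", status: 200, ms: 42 });
```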
Admin processes that look like the app: You run a schema migration by starting a one off dyno on the current release and running the migration command there. Same code. Same config. No secret admin branch that only one person can run.
Counterexamples that bite when hosted
Secrets in the repo: A PHP app with a config file in git that holds the production database password. A contractor gets read access to help on a feature. Now they also have prod access. You rotate creds and every deploy breaks because you forgot to update the hard coded value in a test helper and a cron job. That is not a break glass moment. That is a slow leak.
Writing to local disk: A Python app saves user uploads to the dyno file system. It works in dev. In prod, files vanish on restart or shuffle between dynos. Users complain that their avatars are gone. The fix is to push uploads to S3 or a similar service and cache locally for speed. Local disk is a scratch pad, not a vault.
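A sketch of the fix with the aws-sdk v2 S3 client; the bucket config var and key scheme are made up:

```ts
import S3 from "aws-sdk/clients/s3";

// Region and credentials come from the environment, not from code.
const s3 = new S3();

async function saveAvatar(userId: string, image: Buffer): Promise<string> {
  const key = `avatars/${userId}.png`;
  await s3
    .putObject({
      Bucket: process.env.UPLOADS_BUCKET!,
      Key: key,
      Body: image,
      ContentType: "image/png",
    })
    .promise();
  // Store the key, not a local path; any dyno can serve it later.
  return key;
}
```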
Sticky sessions: The web tier stores session state in memory. The load balancer pins traffic to one dyno to keep users logged in. Scale out and you create hot spots. A dyno restart logs users out. Use a shared store like Redis for sessions so any dyno can handle any request.
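A hand rolled sketch of that shared store, again with node-redis; in practice you would reach for your framework's session middleware, but the shape is the same:

```ts
import { createClient } from "redis";

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect(); // assuming an ES module, connect once at boot

const SESSION_TTL_SECONDS = 60 * 60 * 24; // one day

// Any web process can load any session, so the load balancer is free
// to send a request anywhere and a restart logs nobody out.
async function loadSession(id: string): Promise<Record<string, unknown>> {
  const raw = await redis.get(`session:${id}`);
  return raw ? JSON.parse(raw) : {};
}

async function saveSession(
  id: string,
  data: Record<string, unknown>
): Promise<void> {
  await redis.set(`session:${id}`, JSON.stringify(data), {
    EX: SESSION_TTL_SECONDS, // refresh the TTL on every write
  });
}
```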
One server to rule them all: An app runs on a single pet VPS with custom packages and manual tweaks. It is fast until it is not. You cannot add a second server without a week of hand holding. A minor Ubuntu update breaks ImageMagick. Nobody knows what flags were used. That is fragility wearing a pleasant mask.
Vendor lock by accident: You call a PaaS private API to do releases or manage scaling from inside the app. It feels handy. Months later you want to try AWS Elastic Beanstalk or Cloud Foundry. Now a plain build and config deploy turns into a rewrite because the app is tied to one set of platform calls. A thin platform contract keeps you free.
Split brain config: Half of your settings live in environment vars. The other half sit in a YAML file committed to the repo. Somebody updates one but not the other. Staging behaves one way and prod another. If your team says where is the truth and you cannot answer in one sentence, that is a smell.
Decision rubric for teams shipping hosted apps
Here is a simple checklist you can walk through before you hit deploy. Use it in grooming, in code review, and when you are tempted to cut a corner for speed. Call it a score if that helps. High scores map to fewer on call surprises.
- Config truth: Can I deploy from a fresh clone by setting only environment variables on the platform? If not, fix config.
- Stateless web: Can I kill any single web process without losing user state? If the answer is no, move sessions and long running work out of memory.
- Backing services swap: Can I replace my database or cache plan with a bigger one by changing only a URL? If not, inspect the code for vendor hooks.
- Build release run split: Can I point the same build at staging and prod without rebuilding? If not, you have mixed build with release.
- Start and stop speed: Does a new process start in seconds and exit cleanly on a signal? If not, find the slow inits and long shutdowns.
- Parity: Does dev use the same type of database, queue, and server as prod? If not, you are testing a different app than the one you ship.
- Logs: Can I get all the logs for a request or job without ssh? If you need shell access to debug, you own pets, not cattle.
- Admin path: Can I run a one off task with the same release and config as the web and worker processes? If you need a different env, expect drift.
- Concurrency plan: Do we know which process types we will scale first when traffic grows? If that is fuzzy, write it down now.
- No local writes: Do all writes go to stores outside the dyno or VM? If not, move them, or you will lose data on restart.
Give yourself one point per yes. Aim for eight or more. Any no becomes a ticket with a name and a date. Lower the unknowns before you chase new features. Your future self will thank you when an investor asks for a live demo over crowded coffee shop wifi and the app just works.
Lessons learned you can carry into any cloud
Make state someone else’s job. Use Postgres, Redis, S3, and friends for the durable parts. Keep your processes free of long lived memory that matters. It makes restarts boring and scaling a button press.
Put config on the platform. The right place for secrets and URLs is the environment. This pays off the minute you spin up a staging app for a customer walkthrough or when you rotate credentials during an incident without a new build.
Think in process types. Web serves traffic. Worker does jobs. Scheduler kicks off work. Draw that picture on a whiteboard and you will see where to put pressure when traffic spikes. It also helps when you reason about queues and retries.
Treat logs as a product. If you cannot answer why was this slow and when did it start in minutes, you will spend nights guessing. Forward everything to a service that can search and alert. You want request ids, job ids, and app version in every line.
Keep build and release boring. A build should be reproducible. A release should be repeatable. That keeps rollbacks quick and makes it safe to ship small changes often. Pair that with continuous delivery and you remove the fear around deploys.
Pick portability on purpose. Use the platform features that save you time, but do not tie your core to private APIs without a plan. A thin deployment contract gives you the freedom to try Heroku today and Beanstalk or Cloud Foundry later. It also makes local dev less weird.
If you only change three things this week: move your config to environment variables, push state out of your web processes, and stop writing to local disk. Those three steps buy you a lot of uptime and a calmer on call phone.
The cloud pitch is speed plus scale. The trap is silent complexity. The twelve factor ideas cut through that. They are not trendy. They are about clear contracts between your code and your host. That is why they travel well across AWS, Heroku, Google App Engine, Cloud Foundry, and the next platform that will pop up right when you have a big launch. If your app follows these rules, you can change hosts without rewriting core paths, scale without heat, and debug without guesswork.
Final note from the week: We shipped a small feature to a Node API running on dynos and discovered a noisy neighbor effect in a third party queue. Because our processes were disposable and our config lived in the environment, we moved to a new queue plan in minutes. No code change. No deploy. That is the quiet power of twelve factor thinking for hosted apps.