“Every shortcut has a toll. The only question is when you pay it.”
someone who shipped on Friday night
Vendor lock in: Accept, Avoid, or Embrace. Cloud from a practitioner's point of view, with lessons that last.
A story about speed and regret
My first brush with vendor lock in felt harmless. We wanted a queue. The team picked a cloud message service in five minutes because the demo was slick and the price looked fair. We shipped in two sprints, traffic spiked, and the service kept up. Six months later we needed features that did not exist and throughput that cost three times what we expected. We drew migration plans and realized our code was married to that API. We had to keep paying the toll while we wrote glue code and backfilled messages at three in the morning. We did not lose because the service was bad. We lost because we never asked how we would leave.
Today I pick my battles
Right now the cloud is overflowing with choices. AWS wants me to use Lambda and DynamoDB. Azure waves Functions and Cosmos DB. Google Cloud talks up Cloud Functions and Spanner. Kubernetes is everywhere and every provider has a managed version. Terraform and Ansible promise that my setup lives as code. The noise is real. Teams worry about cloud vendor lock in and reach for multi cloud as a shield. Other teams go all in on one provider and enjoy the ride. Both paths can work. The trick is to pick the parts where lock in is a gift and the parts where it bites.
Deep dive one: the portability ladder
Think of a ladder. At the bottom you rent raw compute and storage. Virtual machines, block storage, object storage. These travel fairly well. You can move an image and some data with pain, but it is doable. One rung up you have containers and Kubernetes. You write to a common API and let the cloud run the control plane. That gives you a decent exit story. Your app and YAML can run on another Kubernetes cluster with work but not a rewrite. One more rung up you have managed databases, queues, and caches like RDS, BigQuery, Pub/Sub, and Redis services. These save a lot of time and late night pages. They also set roots. Data is sticky. Protocols look familiar but features and limits vary. On the top rung you have serverless and heavy managed platforms like Lambda, Step Functions, API gateways, and vendor specific auth. This is the fastest path to shipping. It is also the highest toll when you change providers.
The lesson is simple. Move up the ladder where you create value, stay lower where you need freedom. If your edge is the app experience, not the database engine, then managed data can be a win. If your edge is a streaming engine or a custom datastore, stay portable there and spend your lock in budget elsewhere.
Deep dive two: data gravity and egress math
Data makes the rules. Small services are easy to rewire. Large datasets are not. Two numbers shape every decision. The first is egress cost. Pulling data out of a cloud to the public internet or to another provider costs money. The second is time to move. Even with fat pipes, terabytes and petabytes take days. That delay turns into risk and lost focus.
Before you commit to a data service, write down three things on a napkin. One, how big will the dataset be in a year if growth is good. Two, what would it cost to move that data out once. Three, what part of your app would stop while the move happens. If those numbers make you sweat, buy resilience up front. That can be cross region replicas, a backup that is cloud neutral, or a plan to mirror to another store. It is not fancy. It is insurance.
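The napkin math above can be sketched in a few lines. This is a rough estimate, not a quote: the egress price and sustained throughput below are assumptions I picked for illustration, so check your provider's current pricing page before trusting either number.

```python
# Napkin math for data gravity. Defaults are ASSUMED, not real quotes:
# ~$0.09/GB is a common list price for internet egress, and 1 Gbps is
# an optimistic sustained transfer rate. Adjust both for your case.

def egress_estimate(dataset_gb: float,
                    price_per_gb: float = 0.09,    # assumed USD per GB out
                    throughput_gbps: float = 1.0   # assumed sustained rate
                    ) -> tuple[float, float]:
    """Return (one_time_cost_usd, transfer_days) to move a dataset out."""
    cost = dataset_gb * price_per_gb
    # GB -> gigabits, divided by sustained gigabits/second, then to days
    seconds = dataset_gb * 8 / throughput_gbps
    days = seconds / 86_400
    return cost, days

# A 500 TB dataset at the assumed defaults:
cost, days = egress_estimate(500_000)
print(f"~${cost:,.0f} and ~{days:.1f} days on the wire")
```

If the printed numbers make you sweat, that is the signal to buy the insurance now: a cloud neutral backup, a mirror, or at least a written plan.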
Deep dive three: control planes and exits
Lock in is not only about APIs. It is how you run your cloud. If you click through a console by hand, you are tied to that console. If you write Terraform or CloudFormation, you have a script you can read and move. If you package your app with Docker and run it on Kubernetes, you have a standard way to start, stop, and scale. If you use the Serverless Framework or SAM, you have a map of functions, events, and resources. These tools are not magic walls. They are pressure valves. They turn panic moves into planned work.
Another trick is adapter layers. Wrap calls to cloud services behind tiny modules that your code owns. Keep the surface small. Log everything that crosses that line. If you must jump clouds, you swap the adapter, replay the logs, and keep going. It is not perfect but it is better than a search and replace across the codebase.
When to accept, avoid, or embrace
Accept lock in when speed beats optionality. Early stage product, a short project, or a team without deep ops skills. Pick managed services for the boring parts. Spend your time on the thing users touch. Write simple exit notes so future you does not curse present you.
Avoid lock in when the core of your business is the tech you are choosing. If data is your moat, do not tie it to one feature set you cannot leave. If you must run across providers for compliance or sales, design for that from day one and pay the cost with eyes open.
Embrace lock in when it gives you clear leverage. If managed streaming gets you sub second alerts that save real money, take it. If a serverless stack lets a small team ship weekly without pager pain, enjoy it. The cloud is full of sharp tools. Use them. Just keep a map of where the blades are.
A quiet close
The cloud debate today loves slogans. Multi cloud is the new hot take. Serverless is the new monolith joke. Under the noise, real teams ship real things. Vendor lock in is not a moral issue. It is a trade. You trade freedom for focus. You trade time now for time later. Do the math. Write it down. Pick your rungs on the ladder. Then build with care.
If you are still unsure, run a drill. Pick one service you rely on. Describe how you would replace it in a week. No heroics. Just steps, owners, and a small test. That little document will tell you whether to accept, avoid, or embrace. And it will save you one painful night when the bill or the feature list stops being your friend.