Your startup doesn’t need microservices

A freelance developer recently posted on r/SaaS something that made me laugh because of how real it is. When a non-technical founder asks for a simple MVP, the build takes two weeks and the quote is reasonable. But when a founder asks for microservices and Kubernetes “so we can scale to millions,” the quote doubles.

Instead of a simple session handler, now there’s a separate Auth Service. That’s 10 billable hours. Instead of one SQL database, now there are three syncing via events. That’s a week of work. Instead of a $5 VPS, there’s a complex AWS cluster to configure. The app does the exact same thing. It just costs $5,000 a month to maintain instead of $50.

I’ve seen this play out too many times. So here’s my take on why microservices are almost always the wrong choice at the early stage, and what to do instead.

Microservices solve a people problem, not a tech problem

Microservices were created at companies like Netflix, Amazon, and Google to solve a specific problem: hundreds of engineers working on the same product needed a way to deploy independently and scale individual components.

One commenter in that Reddit thread put it very well: “Microservices are for scaling people/teams, not applications. Microservices can easily make things less scalable compared to a modular monolith. Network calls aren’t cheap.”

When your team is two or three people, you don’t have this problem. You have a finding-customers problem.

Martin Fowler wrote about this back in 2014 in his essay Microservice Prerequisites: most teams considering microservices don’t have the operational maturity to run them. Without solid monitoring, CI/CD, and DevOps culture, microservices don’t reduce complexity. They multiply it.

The cost difference is real

Let me give you some numbers.

A monolithic application can run on a single server or a modest PaaS instance. We’re talking $20–$100/month. One founder in the Reddit thread described his entire SaaS: one database, one server, straightforward deployment, under $50/month. It handles everything he needs.

A minimal microservices setup with five services needs container orchestration (Kubernetes or ECS), an API gateway, a message broker, centralized logging, and separate databases per service. You can do the math yourself from AWS pricing: the EKS control plane alone is $73/month, three t3.medium worker nodes add ~$90/month, a managed PostgreSQL instance is another ~$50/month (per database, and in microservices you’ll likely have more than one), an Application Load Balancer runs ~$20/month, and a NAT Gateway for private subnets adds ~$33/month. Before you’ve written any application code, you’re already approaching $300/month for a bare-bones setup, and that’s without redundancy, a message broker, or proper monitoring. A production-ready microservices deployment easily reaches $500–$1,000+/month.
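A quick back-of-the-envelope check of those numbers (prices are approximate, region-dependent, and will drift; treat them as illustrative, not a quote from AWS):

```python
# Rough monthly cost of a bare-bones microservices setup on AWS.
# Figures are the approximate on-demand prices cited above, not live pricing.
costs = {
    "EKS control plane": 73,
    "3x t3.medium worker nodes": 90,
    "managed PostgreSQL (one instance)": 50,
    "Application Load Balancer": 20,
    "NAT Gateway": 33,
}

total = sum(costs.values())
print(f"~${total}/month before writing any application code")
```

And that total still excludes redundancy, a message broker, and monitoring, which is where the $500–$1,000+ figure comes from.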

Then add observability. Datadog monitoring for five services can easily cost $75–$110/month. For a monolith, you’re monitoring one process.

One engineer in the thread who works as a fractional CTO shared a story I’ve heard too many times: “I’ve cleaned up so many of these ‘microservices’ builds where someone paid 3x for a system that could have been a single Django app behind a load balancer. The real kicker is when they come to you 6 months later wondering why their 12-service Kubernetes cluster costs $2k/month to run with 50 users.”

$2,000 a month for 50 users. That’s $40 per user in infrastructure alone.

Developer speed suffers

Stripe’s Developer Coefficient report found that the average developer spends more than 17 hours a week dealing with maintenance issues, such as debugging and refactoring, rather than building new features. Microservices multiply this problem. Now your debugging spans multiple services, your refactoring requires coordinating API contracts, and your technical debt lives in a dozen repositories instead of one.

When your team is three engineers, that’s like losing one full-time person to infrastructure work.

Think about the difference in practice. In a monolith, a bug that touches authentication and billing is one code change, one test run, one deployment. In microservices, the same fix might span two repositories, require coordinating API changes between services, need integration testing across service boundaries, and involve deploying two services in the correct order. What takes 30 minutes in a monolith can take half a day.

One technical advisor from the Reddit thread described the mindset I think is correct: “What’s the smallest set of features we can have that people will pay for? Technology is an enabler: only configure it to enable as little as you need at first.”

Every success story started with a monolith

This is the part people tend to forget.

Shopify still runs one of the largest Rails monoliths in the world. Stack Overflow served hundreds of millions of page views per month with a single .NET monolith. GitHub was a Rails monolith for years. Etsy scaled to billions of dollars on a PHP monolith. Basecamp built a multi-million dollar business on a single Rails app.

Former Amazon engineers report that, in the mid‑2000s, most retail transactions went through a single large application (Obidos), which Amazon later migrated to a service‑oriented architecture.

And this one from the Reddit thread should end the debate: OpenAI serves hundreds of millions of users with a single PostgreSQL write database. If that’s good enough for ChatGPT, it’s good enough for your todo list app with zero users.

Then there’s Pieter Levels, who runs Nomad List, Remote OK, and a growing list of other products, all from a single VPS. His stack? Vanilla HTML, PHP, jQuery, and SQLite. No frameworks, no Kubernetes, no microservices. As he explained on the Lex Fridman podcast: “People are getting sick of frameworks. All the JavaScript frameworks are so unwieldy. It takes so much work to just maintain this code, and then it updates to a new version, you need to change everything. PHP just stays the same and works.” The man makes over a million dollars a year in revenue from products hosted on a single server. That’s not a hack: that’s proof that simplicity scales.

Another commenter in the Reddit thread mentioned scaling a site to a million unique visitors per day running CodeIgniter and MySQL. Others described companies running multi-state operations on cPanel with database backups. No per-user costs, no heavy spending.

The boring stack is the profitable stack.

I can confirm this from my own experience. I built Concorsone.it as a monolith, and it currently serves about 3,000 users a month. The entire thing runs on Scalingo and costs me €30 a month. One app, one database, one deployment. It works, it’s fast enough, and I spend my time improving the product instead of managing infrastructure. If I had set this up with microservices, I’d be paying 10x more for the same result and dealing with problems I simply don’t have.

The modular monolith

If you’re worried about code quality in a monolith, the answer isn’t microservices. It’s better internal architecture.

The idea is simple: define clear module boundaries within your codebase aligned with business domains (billing, users, notifications), enforce interfaces between modules, use database schemas to logically separate data per module, and keep a single deployable artifact with a single CI/CD pipeline.
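A minimal sketch of what such a boundary can look like in code, assuming Python; the module names and the `BillingService` interface are invented for illustration, not taken from any particular codebase:

```python
# One deployable app, but modules only talk through explicit interfaces.
from dataclasses import dataclass
from typing import Protocol


class BillingService(Protocol):
    """The only surface other modules are allowed to call."""
    def charge(self, user_id: int, cents: int) -> bool: ...


@dataclass
class StripeBilling:
    """Lives inside the billing module; its tables and internals stay private."""
    api_key: str

    def charge(self, user_id: int, cents: int) -> bool:
        # Real payment logic would go here; this stub just validates the amount.
        return cents > 0


def close_signup(billing: BillingService, user_id: int) -> bool:
    # The users module depends on the interface,
    # never on StripeBilling directly.
    return billing.charge(user_id, 999)
```

Swapping `StripeBilling` for another implementation, or later extracting billing into its own service, only touches code behind that interface. That is the decomposition-readiness the modular monolith buys you, without any network calls.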

Shopify formalized this with their concept of “components” within their Rails monolith, using custom tooling that prevents cross-component violations. The result is modular, maintainable at massive scale, and deployed as a single application. Over 2.8 million lines of Ruby code, 500,000 commits, and they chose a modular monolith over microservices.

As one commenter in the thread put it: “You can scale teams by splitting a monolith into packages and a decent architecture.” No network calls. No service discovery. No distributed transactions.

What to do when you actually need to scale

When scaling pressure actually arrives, scale vertically first if the pressure is marginal, and horizontally if it’s significant.

Another Reddit commenter: “Make that monolith stateless first so you can spin up a couple more instances dynamically and slap a load balancer in front of it. Don’t over-engineer until that stops being viable.”

If I had to write this as a list:

  1. Optimize queries. Most performance issues are N+1 queries, missing indexes, or unbounded queries. Free.
  2. Background jobs. Move heavy work (email sending, PDF generation) to a job queue. Still part of the same codebase.
  3. Add caching. Redis in front of the database. $15–$50/month.
  4. Horizontal scaling. Make the monolith stateless, run multiple instances behind a load balancer.
  5. Read replicas. One checkbox on AWS or GCP.
  6. Selective extraction. If a specific component genuinely needs independent scaling, extract it.

Microservices appear at step 6. Most startups never reach step 4.
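To make step 1 concrete, here’s a sketch of the classic N+1 fix. The tables and data are invented for illustration, and SQLite stands in for whatever database you actually run:

```python
# Step 1 in practice: replace an N+1 query pattern with a single JOIN.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total INTEGER);
    INSERT INTO users VALUES (1, 'Ada'), (2, 'Linus');
    INSERT INTO orders VALUES (1, 1, 50), (2, 1, 30), (3, 2, 20);
""")

# N+1: one query for the users, then one more query per user for their orders.
slow = {
    user_id: conn.execute(
        "SELECT total FROM orders WHERE user_id = ?", (user_id,)
    ).fetchall()
    for (user_id,) in conn.execute("SELECT id FROM users")
}

# One JOIN returns the same data in a single round trip.
fast = conn.execute("""
    SELECT u.id, o.total
    FROM users u JOIN orders o ON o.user_id = u.id
""").fetchall()
```

With 2 users that’s 3 queries versus 1; with 10,000 users it’s 10,001 versus 1, which is why this is usually the cheapest performance win available.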

The distributed systems tax

Once you split your application into services, you inherit every problem in the distributed systems textbook. Network calls fail. Services go down. Messages get lost or arrive out of order. You now need service discovery, circuit breakers, retry logic with idempotency, distributed transactions (or eventual consistency with sagas), API versioning, and cross-service data consistency.

Each of these is a solved problem in theory. But each solution adds complexity and introduces its own failure modes. For a startup team without a dedicated SRE function, this is a lot to deal with on top of actually building the product.
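To show what just one item from that list costs you, here is a stripped-down sketch of retry logic with idempotency. The in-memory set stands in for what would be a database table or Redis in production, and all names are invented:

```python
# Idempotent request handling: replaying the same request after a network
# timeout must never charge the customer twice.
processed: set[str] = set()
calls = {"charges": 0}


def charge_card(key: str) -> str:
    """Process a charge at most once per idempotency key."""
    if key in processed:
        return "already charged"
    calls["charges"] += 1  # the side effect we must not repeat
    processed.add(key)
    return "charged"


def retry(fn, *args, attempts=3):
    """Retry on network failure; safe only because fn is idempotent."""
    last = None
    for _ in range(attempts):
        try:
            return fn(*args)
        except ConnectionError as exc:
            last = exc
    raise last
```

In a monolith this is a function call and none of it exists. In microservices, every cross-service write needs some version of it, plus the durable key store, plus tests for the replay paths.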

Testing also becomes harder. In a monolith, your integration test starts the application and hits endpoints. In microservices, you need to either run all services locally, maintain a shared staging environment, or build contract testing pipelines. All of these add overhead.

The freelancer incentive problem

The Reddit post made something visible that doesn’t get discussed enough: the incentive structure between freelancers and non-technical founders is misaligned.

A freelancer billing by the hour has every reason to agree with overcomplicated architecture. The most upvoted comment in the thread had the antidote: the biggest tell that a dev is going to overcharge you is when they agree with your architecture choices instead of pushing back. A good technical partner will tell you it’s overkill and suggest starting simple.

If you’re a non-technical founder: never lead with architecture decisions when hiring. Describe the problem you’re solving and the users you’re serving. Let the technical partner recommend the architecture. If their recommendation involves Kubernetes for a pre-launch MVP, find someone else.

This is why several commenters in the thread recommended getting a fractional CTO or technical advisor early. Someone whose incentives are aligned with the product, not the invoice.

What about AI making microservices easier?

One dissenting voice in the Reddit thread argued that with AI tools like Codex and Claude, the setup overhead of microservices has collapsed. Using these tools alongside Docker and .NET Aspire, they can set up microservices as quickly as a monolith.

That’s fair. AI-assisted development has made scaffolding faster. But setup cost was never the main problem. The ongoing costs remain: operational complexity, debugging distributed failures, maintaining consistency across services, and paying for infrastructure you don’t need. AI can help you build a 12-service architecture in a day. It can’t make the AWS bill any smaller.

A well-structured modular monolith isn’t cutting corners until you can afford the “real” architecture; it is building it right. Clean boundaries, good abstractions, and a codebase ready to be decomposed when the need actually arises. That’s not technical debt. That’s pragmatic engineering.

When microservices actually make sense

Microservices aren’t bad. They’re a tool. They make sense in specific contexts:

  • Your engineering team exceeds 40–50 people and coordination within a single codebase is measurably slow.
  • You have genuinely different scaling requirements, like a GPU-intensive pipeline alongside a CRUD API.
  • You’ve achieved product-market fit with mature CI/CD, monitoring, and on-call practices.
  • You have dedicated platform engineers who can own the operational complexity.
  • You are well past $1M ARR and the scaling problems are real.

If your team is fewer than 15–20 people, you’re pre-product-market-fit, and your main constraint is speed of iteration, use a monolith. That describes 90%+ of SaaS startups.

Conclusion

As one of the best comments in the Reddit thread said: “Premature scalability is just expensive architecture with zero ROI.”

Build for the users you have, not the users you imagine having. If you’re lucky enough to have scaling problems later, that’s a great problem to solve with revenue.

The real skill isn’t knowing how to set up Kubernetes. It’s knowing when not to.


This article was inspired by a discussion on r/SaaS about the realities of microservices in early-stage startups.

Speaking of opinions, thank you for taking the time to read mine! You can reach me on LinkedIn to comment. I would love to hear from you 🌞