I run multiple $10K MRR companies on a $20/month tech stack

tradertef 855 points 469 comments April 12, 2026
stevehanov.ca · View on Hacker News

Discussion Highlights (20 comments)

tradertef

Not my website. I found this interesting.

komat

Cool but missing the Claude Code or Coding Agent part imo

hackingonempty

> If you need a little breathing room, just use a swapfile. You should always use a swap file/partition, even if you don't want any swapping. That's because there are always cold pages and if you have no swap space that memory cannot be used for apps or buffers, it's just wasted.
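The swap advice above is a one-time setup. A minimal sketch for a Linux box (the 2 GiB size and `vm.swappiness` value are illustrative, not from the article):

```shell
# Create and enable a 2 GiB swapfile (size is illustrative).
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile        # swap must not be world-readable
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it survive reboots.
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Optional: bias the kernel toward swapping only cold pages.
sudo sysctl vm.swappiness=10
```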

codemog

A lot of this advice is good or at least interesting. A lot of it is questionable. Python is completely fine for the backend. And using SQLite for your prod database is a bad idea, just use Postgres or similar.

p4bl0

Just in case there are others like me who were wondering what "MRR" means: it seems to be "monthly recurring revenue".

vxsz

I learned nothing. Most of this seems like common, basic advice wrapped up in AI-written paragraphs... From the title, I initially thought it would be about brainstorming and launching a successful idea, that sort of thing.

brador

You already have and had everything you need to scale the business to max and it hasn’t happened so more money won’t help. What do you want VC to do? You didn’t bring a plan.

dnnddidiej

Is infra where investors' money is going? I imagined salaries would be the bulk of it. Marketing costs, maybe.

gobdovan

Nice list! I'd say SQLite with WAL is the biggest money saver mentioned. One note: you can absolutely use Python or Node just as well as Go.

Hetzner offers machines with 2 vCPUs, 4GB RAM, and 10TB of network traffic (then $1/TB egress) for $5.

Two disclaimers for a VPS. First, if you're using a dedicated server instead of a cloud server, don't forget to back up the DB to a Storage Box often ($3/mo for 1TB; use rsync). That's good practice either way, but cloud instances seem more resilient to hardware faults. Also avoid their object store.

Second, you are responsible for security. I've seen good devs skip basic SSH hardening and get infected by bots in under an hour. My go-to move when spinning up servers is a two-stage Terraform setup: first allow SSH from my IP only, then set up Tailscale and shut down the public SSH entrypoint completely.

Take care and have fun!
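The WAL tip above is a one-line change. A minimal sketch using Python's stdlib `sqlite3` (the `app.db` file name and 5-second busy timeout are illustrative):

```python
import sqlite3

# Open (or create) the database file. WAL requires a file-backed
# database; an in-memory database cannot use it.
conn = sqlite3.connect("app.db")

# Switch to write-ahead logging: readers no longer block the writer,
# which is what makes a single SQLite file viable under web traffic.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # → wal

# A busy timeout avoids spurious "database is locked" errors when
# two requests contend for the single writer.
conn.execute("PRAGMA busy_timeout=5000")
conn.close()
```

The journal mode is persistent: once set, the database stays in WAL mode across connections until explicitly changed back.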

firefoxd

I was writing about this recently [0]. In the 2000s, we bragged about how cheap our services were and how they kept getting cheaper. Today, a graduate with an idea is racking up $200 AWS bills even after the student discounts. They break the bank and go broke before they've tested the idea. Programming is literally free today. [0]: https://idiallo.com/blog/programming-tools-are-free

thibaultmol

Pretty sure this is just written by AI... Why else would someone call "Claude 3.5 Sonnet and GPT-4o" high-end models?

aleda145

Great stack! I'm taking a similar approach for my latest project (kavla.dev), but using fly.io and their suspend feature. Scaling to zero with database persistence via Litestream has cut my bill down to $0.10 per month for my backend + database. Granted, I still don't have that many users, and they get 200ms of extra latency if the backend needs to wake up. But it's nice to never have to worry about accidental costs!

globalnode

Nice article, validates some of the things I already thought. Although I'm sure things like AWS and dedicated database servers etc. are still useful for big companies.

trick-or-treat

LMFAO at Linode / Digital Ocean as lean servers. Hetzner / Contabo maybe. Cloudflare workers definitely. This guy is not at my level and multiple $10k MRR is possible but unlikely.

sailingcode

AI has solved the "code problem", but it hasn't solved the "marketing problem"…

hackingonempty

> The enterprise mindset dictates that you need an out-of-process database server. But the truth is, a local SQLite file communicating over the C-interface or memory is orders of magnitude faster than making a TCP network hop to a remote Postgres server.

I don't want to diss SQLite, because it is awesome and more than adequate for many or most web apps, but you can connect to Postgres (or any DB, really) on localhost over a Unix domain socket and avoid nearly all of that overhead. It's not much harder to use than SQLite, you get all of the Postgres features, it's easier to run reports on the live DB from a different box, and it's much easier if the time comes to set up a read replica, HA, or move the DB to a different box from the app.

I don't think running Postgres on the same box as your app is the same class of optimistic over-provisioning as setting up a Kubernetes cluster.
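The socket approach above needs no code changes beyond the connection target. A minimal sketch, assuming the default Debian/Ubuntu socket directory and an illustrative database name `app`:

```shell
# Connect to local Postgres over its Unix domain socket instead of TCP.
# libpq treats a host starting with "/" as a socket directory;
# /var/run/postgresql is the default on Debian/Ubuntu.
psql -h /var/run/postgresql -d app

# Omitting -h entirely has the same effect: psql defaults to the
# compiled-in socket directory rather than a TCP connection.
psql -d app

# Equivalent libpq connection URI for application code:
#   postgresql:///app?host=/var/run/postgresql
```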

raincole

So what's the $10K MRR product, exactly? The lede is buried into nonexistence. Is it this one: https://www.websequencediagrams.com/ ...?

> Here is the trick that you might have missed: somehow, Microsoft is able to charge per request, not per token. And a "request" is simply what I type into the chat box. Even if the agent spends the next 30 minutes chewing through my entire codebase, mapping dependencies, and changing hundreds of files, I still pay roughly $0.04.

Really? Lol. If it's true, why would you publish it? To ensure Microsoft patches it up and fucks up your workflow?

jstanley

The most interesting thing in here is https://github.com/smhanov/laconic, the author's "agentic research orchestrator for Go that is optimized to use free search & low-cost limited context window llms". I have been doing this kind of thing with Cursor and Codex subscriptions, but they have annoying rate limits, and Cursor on the Auto model seems to perform poorly if you ask it to do too much work, so I am keen to try out laconic on my local GPU.

EDIT: Having tried it out, this may be a false economy.

The way it works is that it has a bunch of different prompts for the LLMs (Planner, Synthesizer, Finalizer). The Planner is given your input question and a "scratchpad" and has to come up with DuckDuckGo search terms. The harness then runs the DuckDuckGo search and gives the question, results, and scratchpad to the Synthesizer, which updates the scratchpad with the new information learnt. This continues in a loop, with the Planner coming up with new search queries and the Synthesizer updating the scratchpad, until eventually the Planner decides to give a final answer, at which point the Finalizer summarises the information in a user-friendly final answer.

That is a pretty clever design! It allows you to do relatively complex research with only a very small amount of context window. So I love that.

However, I have found that the Synthesizer step is extremely slow on my RTX 3060, and I think it would cost me about £1/day extra to run the RTX 3060 flat out vs idle. For the amount of work laconic can do in a day (not a lot!), I am better off just sending the money to OpenAI and getting the results more quickly.

But I still love the design; this is a very creative way to use a very small context window, and it has the obvious privacy and freedom advantages over depending on OpenAI.

ianpurton

When he switches from Kubernetes in the cloud to Nginx -> App Binary -> SQLite, he trades operational functionality for cost. But you can actually run Kubernetes, Postgres, etc. on a VPS. See https://stack-cli.com/, where you can specify a Supabase-style infra on a low-cost VPS on top of K3s.

cagz

Nice tech read, but without information about which companies, doing what, it just feels way too click-baity.
