Cal.com is going closed source
Benjamin_Dobell
269 points
193 comments
April 15, 2026
Related Discussions
Found 5 related stories in 43.4ms across 4,686 title embeddings via pgvector HNSW
- Open Source Isn't Dead bearsyankees · 318 pts · April 15, 2026 · 54% similar
- California's Digital Age Assurance Act, and FOSS todsacerdoti · 71 pts · March 04, 2026 · 47% similar
- Digg.com Closing Due to Spam napolux · 35 pts · March 14, 2026 · 45% similar
- Euro-Office Wants to Replace Google Docs and Microsoft Office rapnie · 64 pts · March 30, 2026 · 44% similar
- I'm losing the SEO battle for my own open source project devinitely · 463 pts · March 03, 2026 · 43% similar
Discussion Highlights (20 comments)
andsoitis
> Today, we are making the very difficult decision to move to closed source, and there’s one simple reason: security.

It seems like an easy decision, not a difficult one.
gouthamve
This is a weird knee-jerk reaction. This feels more like a business decision than a security decision. With AI, reliably self-hosting software is becoming easier, so the incentive to pay for a hosted service of an OSS project is shrinking.
doytch
I get the mentality but it feels very much like security through obscurity. When did we decide that that was the correct model?
ButlerianJihad
This seems kind of crazy. If LLMs are so stunningly good at finding vulnerabilities in code, then shouldn't the solution be to run an LLM against your code after you commit, and before you release it? Then you basically have pentesting harnesses all to yourself before going public. If an LLM can't find any flaws, then you are good to release that code. A few years ago, I invoked Linus's Law in a classroom, and I was roundly debunked. Isn't it a shame that it's basically been fulfilled now with LLMs? https://en.wikipedia.org/wiki/Linus%27s_law
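The "pentest yourself before you publish" idea above can be sketched as a simple release gate. Everything here is illustrative: `run_llm_audit` is a hypothetical stand-in (with a toy heuristic in place of a real model), not any actual tool's API.

```python
# Sketch of the workflow ButlerianJihad describes: audit each commit
# with an LLM-based scanner, and only publish once it comes back clean.
# `run_llm_audit` is hypothetical -- substitute your real tooling.

def run_llm_audit(diff: str) -> list[str]:
    """Placeholder for sending a commit diff to an LLM security
    reviewer; returns a list of suspected vulnerabilities."""
    findings = []
    if "strcpy(" in diff:  # toy heuristic standing in for the model
        findings.append("possible buffer overflow: unbounded strcpy")
    return findings

def safe_to_release(diff: str) -> bool:
    """Release gate: publish only if the audit finds nothing."""
    return not run_llm_audit(diff)
```

In this framing, the maintainer runs the same class of tool the attacker would, but gets first access to its findings.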
rvz
You know what? Great move. Open-source supporters don't have a sustainable answer to the fact that AI models can find N-day vulnerabilities extremely quickly and swamp maintainers with issues and bug reports left hanging for days. Unfortunately, this is where things are going, and open-source supporters did not foresee the downsides of open source maintenance in the age of AI, especially for businesses with "open-core" products. Might as well close-source them to slow down attackers (with LLMs). Even SQLite has closed-sourced its tests, which is another good idea.
simonw
Drew Breunig published a very relevant piece yesterday that came to the opposite conclusion: https://www.dbreunig.com/2026/04/14/cybersecurity-is-proof-o... Since security exploits can now be found by spending tokens, open source is MORE valuable, because open source libraries can share that auditing budget, while closed source software has to find all its exploits itself in private.

> If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
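The quoted "brutally simple equation" is just a budget comparison, and the shared-auditing point follows from pooling. A toy model (function names and numbers are illustrative, not from the post):

```python
# Toy model of the token-budget framing quoted above.

def system_stays_hard(defender_tokens: float, attacker_tokens: float) -> bool:
    """The quoted equation: a system is hardened when defenders spend
    more tokens discovering exploits than attackers spend finding them."""
    return defender_tokens > attacker_tokens

def pooled_audit_budget(per_user_spend: float, n_users: int) -> float:
    """simonw's point: N users of one shared open source library can
    pool their audit spend, while each closed vendor audits alone."""
    return per_user_spend * n_users
```

Under this framing, an open source library with many downstream users can out-spend an attacker collectively even when no single user could.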
woodruffw
Today, it's easy to (publicly) evaluate the ability of LLMs to find bugs in open source codebases, because you don't need to ask permission. But that doesn't establish the negative claim: that an LLM won't just as effectively find bugs in closed codebases, including through black-box testing, reverse engineering, etc. If the null hypothesis is that LLMs are good at finding bugs, full stop, then it's unclear to me that going closed actually does much to stop your adversary (particularly as a service operator).
zb3
This has to be the most bullshit reason I've seen... if AI can be pointed at your code to find vulnerabilities, then do it yourself before publishing the code.
bearsyankees
Think this is a bad, bad move... https://news.ycombinator.com/item?id=47780712
creatonez
This is some truly exceptional clownish attention-seeking nonsense. The rationale here is complete nonsense; they just wanted to put "because AI" after announcing their completely self-serving decision. If AI cyber offense is such a concern, recognize your role as a company handling truckloads of highly sensitive information and actually fix your security culture instead of just obscuring it.
nativeit
I guess why fix vulnerabilities when you can just obscure them?
asdev
Who even uses their open source product?
_pdp_
The real threat is not security but bad actors copying your code and calling it theirs. IMHO, open source will continue to exist and be successful, but the existence of AI is a deterrent for most. Let's be honest: in recent times, the only reason startups went open source first was to build a community and an organic growth engine powered by early adopters. Now that is no longer viable; in fact, it simply helps competitors. So why do it? The only open source that will remain will be the real open source projects that are true to the ethos.
popalchemist
Seems like it's just being used as a convenient pretense to back out of open-source.
barelysapient
I hate how this sounds... but this reads to me as: "we lack confidence in our code's security, so we're closing the source to conceal whatever vulnerabilities may exist."
iancarroll
I know plenty of security researchers who exclusively use Claude Code and other tools for blackbox testing against sites they don’t have the source code for. It seems like shutting down the entire product is the only safe decision here!
adamtaylor_13
Could you not simply point AI at your open source codebase and use it to red-team your own codebase? This post's argument seems circular to me.
hmokiguess
Risk tolerance and emotional capacity differ from one individual to another; while I may disagree with the decision, I can respect it. That said, I think it's important to try to see things from multiple angles rather than bucket them from your filter bubble alone. Fear sells, and we need to stop buying into it.
tudorg
It's funny that this news showed up just as we (Xata) have gone the other direction, also citing changes due to AI: https://xata.io/blog/open-source-postgres-branching-copy-on-... We did consider arguments in both directions (e.g. it's easier to recreate code now; agents can better understand how it works), but I honestly think the security argument favors open source: OSS projects get more scrutiny faster, which means bugs won't linger. Time will tell; I'm in the open source camp, though.
tokai
Security through obscurity has been known to be a faulty approach for nearly 200 years. Yet here we are.