Ask HN: Why are there no actual studies showing AI is more productive?

make_it_sure 39 points 84 comments March 08, 2026

I know there are companies that are highly productive with AI including ours. However, AI skeptics ask for real studies and all of them available now show no real gains. Many won't care unless you show them an actual study. So my question is, are there any actual studies about the companies that actually make it work with AI?

Discussion Highlights (20 comments)

anovikov

If AI makes people so much more productive, why aren't there many more apps on the App Store? Mobile apps involve a lot of dirty, boring scaffolding work, which AI easily automated first thing, two years ago. It should have been the very first place where a productivity boost was evident, a year ago at least. But it's just not there. Why not?

flawn

AI can build systems based on static assumptions that the orchestrator (you) gives it. But proper engineering (which matters much more economically) is the process of handling the system's assumptions and requirements as they change over time, so that you have a reliable and consistent service - and that's not something AI excels at (yet).

austin-cheney

Some people prefer evidence before investing large amounts of money and labor. That is not an indication of irrational behavior, even if it challenges your emotionally invested opinion or result.

Lionga

There are a few studies that show perceived increases in productivity (all of them show negative or almost no real increase, but I don't think that is relevant to snake oil salesmen).

vjk800

We've had the AI tools for maybe two years, and they have only gotten really good in the past half a year or so. For fuck's sake, adopting electricity took like 50 years - why would you expect to see any kind of effect from AI so quickly? The tools are still developing - rapidly - and people are still figuring out the best usage patterns for them.

chrisjj

> Why are there no actual studies showing AI is more productive?

Beats me. With "AI" being so good at faking stuff, there should be a ton of such studies by now :)

IshKebab

These sorts of things are really hard to study. Combine that with the fact that the AI landscape is so varied and fast-moving, and it's easy to see why there aren't many studies on it. There is a mountain of things that we reasonably know to be true but haven't done studies on. Is it beneficial for programming languages to support comments? Are regexes error-prone? Does static typing improve productivity on large projects? Is distributed version control better than centralised (lock-based)? Etc.

Also, you can't just say "AI improves productivity". What kind of AI? What are you using it for? If you're making static landing pages... yeah, obviously it's going to help. Writing device drivers in Ada? Not so much.

AugustoCAS

DORA released a report last year: https://dora.dev/research/2025/dora-report/ The gains are a ~17% increase in individual effectiveness, but also ~9% extra instability. In my experience using AI-assisted coding for a bit more than two years, the benefit is close to what DORA reported (maybe a bit higher, around 25%). Nothing close to an average of 2x, 5x, 10x. There's a 10x on some very specific tasks, but also a negative factor on others, as seemingly trivial but high-impact bugs get to production that would normally have been caught very early in development or in code review. Obviously it depends what one does: using AI to build a UI for sharing cat pictures has a different risk appetite than building a payments backend.

Nevermark

I think most major efficiency improvements involve more adaptation costs than expected. Those who can "see" the potential push through the adaptation period, even when it runs longer than expected. Depending on how forward-looking a group is, the adaptation costs are a problem, a dilemma, or a completely obvious win. Yet external measurements don't distinguish between accumulating, accelerating, flat, or fading intermediate value.

Avoidance of necessary adaptation, even with no immediate impact, becomes the dual: technical, strategic, or capability debt. Does that hidden anti-productivity ever get accounted for, when maladaptive firms take their anti-productivity into a hole as they fade and demise? A company can operate with high margins while its sales fall off a cliff. Is that just "decreasing quantities" of uniformly "high productivity"?

heraldgeezer

So... you want a study to prove your ready-made hypothesis?

Stronz

It might also depend on how the tools are used. In practice a lot of value seems to come from reducing small bits of friction rather than dramatically increasing output.

danr4

because you can just look at the commit log

otabdeveloper4

Just trust the vibe, bro. One trillion market cap cannot be wrong.

blitzar

Ask HN: Why are there no actual studies that show the sky is green and the earth is at the centre of the universe? I would have included the flatness of earth, but the flat earthers have some excellent studies (reviewed by their flat earth peers) on the subject.

lysecret

Because we are incapable of measuring developer productivity.

rienbdj

GitHub has their own study using Copilot but given the obvious conflict of interest I would discount it.

chrysoprace

Self-reported productivity does not equate to actual productivity. People have all sorts of biases that make such assessments fairly pointless. They only gauge how you feel about your productivity, which is not necessarily a bad thing, but it doesn't mean you're actually more productive.

bawolff

> Many won't care unless you show them an actual study

Why are the pro-AI people so obsessed with proving the AI skeptics wrong? Is AI working for you? Great. Go make great things. Isn't that the point, after all? Who cares who believes you if the results speak for themselves?

smackeyacky

The code was never the bottleneck. It’s always the org around it.

charcircuit

Because the data is private, and such studies often aren't measuring solely the part that AI makes more productive. Measuring productivity in general is a very hard problem, so the results of any such study are often meaningless in practice. Pair this with studies today still being based on ancient models like GPT-4o, and it's even more meaningless. If you are familiar with AI, it's obvious how it increases productivity: when bugs get fixed with zero human time, it's plain as day that it was more productive than a human making the fix.
