Code Review for Claude Code

adocomplete 67 points 39 comments March 09, 2026
claude.com · View on Hacker News

Discussion Highlights (9 comments)

CharlesW

Interesting: "Reviews are billed on token usage and generally average $15–25, scaling with PR size and complexity."

simianwords

nice but why is this not a system prompt? what's the value add here?

Bnjoroge

what are the implications for the dozens of code-review platforms that have recently raised at sky-high valuations?

cpncrunch

Does AI review of AI generated code even make sense?

xlii

> We've been running Code Review internally for months: on large PRs (over 1,000 lines changed), 84% get findings, averaging 7.5 issues. On small PRs under 50 lines, that drops to 31%, averaging 0.5 issues. Engineers largely agree with what it surfaces: less than 1% of findings are marked incorrect.

So the takeaway would be that 84% of heavily Claude-driven PRs are riddled with ~7.5 review-worthy issues each. Not a great ad for agent-based development quality.

lowsong

> Reviews are billed on token usage and generally average $15–25, scaling with PR size and complexity.

You've got to be completely insane to use AI coding tools at this point. This is the subsidised cost to get users to use it; it could trivially end up ten times this amount. Plus, you've got the ultimate perverse incentive: the company that is selling you the model time to create the PRs is also selling you the review of the same PR.

raflueder

Or just spin up your own review workflow. I've been doing this for the past couple of months after experimenting with Greptile, and it works pretty well; example setup here: https://gist.github.com/rlueder/a3e7b1eb40d90c29f587a4a8cb7c...

An average of about $0.04/review (200+ PRs with roughly two rounds each), for a total of $19.50 using Opus 4.6 over February. It fills the gap of working on a solo project without another set of eyes on the changes.
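For readers who want the flavor of such a DIY workflow without opening the gist, here is a minimal sketch of the idea: diff the branch, wrap the diff in review instructions, and send it to the model. All names here are mine, not raflueder's actual setup, and the API call assumes the `anthropic` Python SDK with an `ANTHROPIC_API_KEY` in the environment.

```python
import subprocess

def build_review_prompt(diff: str, max_chars: int = 100_000) -> str:
    """Wrap a unified diff in review instructions, truncating huge diffs."""
    if len(diff) > max_chars:
        diff = diff[:max_chars] + "\n[diff truncated]"
    return (
        "Review the following diff. For each finding, cite the file and "
        "line, classify it (bug / security / style), and keep it brief.\n\n"
        f"```diff\n{diff}\n```"
    )

def review_current_branch(model: str, base: str = "main") -> str:
    """Diff the current branch against `base` and ask the model to review it.

    Requires `pip install anthropic`; pass whatever model id you actually use.
    """
    diff = subprocess.run(
        ["git", "diff", f"{base}...HEAD"], capture_output=True, text=True
    ).stdout
    prompt = build_review_prompt(diff)
    import anthropic  # imported lazily so prompt-building works without the SDK
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model=model,
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text
```

Wiring this into CI (e.g. a job that runs on `pull_request` and posts the output as a comment) is the remaining glue; the per-review cost raflueder quotes comes from paying only for the tokens in the diff and the response, with no platform markup.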

nemo44x

So their business model is to deliver me buggy code and then charge me to fix it?

nolanl

The concept of "AI will review AI-authored PRs" seems completely wrong to me. Why didn't the AI write the correct code in the first place? If it takes 17 rounds of review from 5 different models/harnesses – I don't care. Just spit out the right code the first time. Otherwise I'm wasting my time clicking "review this" over and over until the PR is worth actually having a human look at.
