Affirm Retooled for Agentic Software Development in One Week
brd529
32 points
22 comments
April 24, 2026
Related Discussions
Found 5 related stories in 77.3ms across 5,498 title embeddings via pgvector HNSW
- Agentic Engineering Patterns r4um · 505 pts · March 04, 2026 · 54% similar
- Levels of Agentic Engineering bombastic311 · 135 pts · March 10, 2026 · 54% similar
- What is agentic engineering? lumpa · 118 pts · March 16, 2026 · 54% similar
- 6 Practices that turned AI from prototyper to workhorse (106 PRs in 14 days) waleedk · 15 pts · March 01, 2026 · 51% similar
- My fireside chat about agentic engineering at the Pragmatic Summit lumpa · 12 pts · March 14, 2026 · 50% similar
Discussion Highlights (12 comments)
scientaster2
Any bets on how long it'll take for a security breach, now that every attacker knows Affirm is vibe coding 60% of PRs? I feel like these top-down mandates miss the forest for the trees -- in isolation Claude Code is a speedup, like how sometimes WD-40 is the right tool for the job. But apply it to everything and you end up with a sticky mess.
dotdi
> We move money, so mistakes are costly and quality is contractually non-negotiable. We build on a twelve-year-old monorepo with structural bottlenecks: bloated test suites, manual code review, unstable CI, and deploy infrastructure not made for the pace we need.

In my experience, every single item on this list is already a major hurdle for AI agents. The unholy union of all of them together is something I couldn't personally be responsible for - period. Working on that codebase is, I'm sure, already difficult and often frustrating. Having a horde of short-term-memory-only agents without any real institutional knowledge is a recipe for disaster. I'm sure the rollout looks great on paper, and the long-term effects are - conveniently - not in the scope of this article.
ziml77
Headline soon: Affirm lays off 799 software developers
Headline later: Affirm data breach exposes personal details and bank information of millions of users
deadbabe
Affirm is on its way out anyway, so really this is one last Hail Mary to try to prop up the company; they don't have much left to lose.
postexitus
Do I have to read the article before calling it bs?
ookblah
having tried to wrangle this on my own over months and still seeing gaps every day, i have to raise severe skepticism on this lol. you mandate and "solve" this in a few weeks over 1:N channels and measure with a metric that nobody even fully understands yet = someone getting paid to bullshit some agentic-productivity metrics to executives. i agree with other posters: 12 months until a dumpster fire reveals itself. FWIW, i think it is the future, but not in the way that's described here.
sharadov
Are these guys public? Can I short them? Oh, even easier, maybe just wager on Kalshi?
maxothex
Having integrated LLMs into middleware systems handling financial data, I think the skepticism here is warranted but the direction is right. The real challenge isn't the agents writing code; it's the context window around financial logic, compliance boundaries, and legacy system quirks that live in engineers' heads, not documentation.

What works: starting with isolated internal tools where mistakes are recoverable, not customer-facing payment flows. Agents excel at boilerplate and test generation but need human guardrails for business logic.

Affirm's one-week timeline sounds more like executive theater than genuine transformation. The 12-month check-in will be more telling than the announcement.
jmount
Good thing that didn't require two weeks, as that is about 14 attention spans.
giancarlostoro
Nice, this reminds me of how I do this in my spare time. My current employer is still figuring out how they want to do AI coding.
keybored
> The window to retool is now open, while the models are capable and the costs are low. That window will not stay open forever.

Or the window to get hooked is now open? Or do they have an open-model backup plan?

> We believe the companies that leap will stay ahead, the ones that wait will be leapt over.

FOMO is the same as last week.
zug_zug
> This post covers how we got to a place where over 60% of our pull requests (PRs) are agent-assisted

Well, I mean, you could (and should) turn on 100% agent code review, and that's a type of assistance.

The hard part is that most orgs never made disposable environments or any meaningful local testing, so the ability to validate that code doesn't break something indirectly (e.g. leak memory, hammer the prod DB, cache values with the wrong key) isn't there. In my experience AI code has several subtle bugs and is deceptively dangerous (because it can look so competent in other ways).