Agentic Engineering Patterns
r4um
505 points
284 comments
March 04, 2026
Related Discussions
Found 5 related stories in 38.1ms across 3,471 title embeddings via pgvector HNSW
- What is agentic engineering? lumpa · 118 pts · March 16, 2026 · 90% similar
- Levels of Agentic Engineering bombastic311 · 135 pts · March 10, 2026 · 76% similar
- My fireside chat about agentic engineering at the Pragmatic Summit lumpa · 12 pts · March 14, 2026 · 62% similar
- Searching for the Agentic IDE bigwheels · 30 pts · March 11, 2026 · 61% similar
- Prompt Engineering for Humans mooreds · 14 pts · March 31, 2026 · 53% similar
Discussion Highlights (20 comments)
ukuina
I find StrongDM's Dark Factory principles more immediately actionable (sorry, Simon!): https://factory.strongdm.ai/principles
mohsen1
I've experimented with agentic coding/engineering a lot recently. My observation is that software that is easily tested is perfect for this sort of agentic loop. In one of my experiments I had the simple goal of "making Linux binaries smaller to download using better compression" [1]. Compression is perfect for this: it is easily validated (binary -> compress -> decompress -> binary), so each iteration either makes a dent or the attempt is thrown out. Lessons I learned from my attempts:
- Do not micro-manage. AI is probably good at coming up with ideas and does not need your input too much.
- The test harness is everything. If you don't have a way of validating the work, the loop will go astray.
- Let the iterations experiment. Let AI explore ideas and break things in its experiments. The iteration might take longer, but those experiments are valuable for the next iteration.
- Keep some .md files as a scratch pad between sessions so each iteration in the loop can learn from previous experiments and attempts.
[1] https://github.com/mohsen1/fesh
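The validate-then-keep loop described above can be sketched roughly like this. The `compress`/`decompress` callables are placeholders for whatever the agent produced in a given iteration; gzip stands in here, and none of this comes from the fesh repo itself:

```python
import gzip

def roundtrip_ok(binary: bytes, compress, decompress) -> bool:
    """An attempt only counts if decompress(compress(x)) == x."""
    return decompress(compress(binary)) == binary

def evaluate_attempt(binary: bytes, compress, decompress, best_size: int):
    """Keep an attempt only if it round-trips AND beats the best size so far;
    otherwise it is thrown out, as the comment above describes."""
    compressed = compress(binary)
    if decompress(compressed) != binary:
        return None          # broken attempt: discard
    if len(compressed) >= best_size:
        return None          # no improvement: discard
    return len(compressed)   # new best size, carried into the next iteration

# Example with gzip standing in for the agent's attempt:
payload = b"\x7fELF" + b"\x00" * 4096   # fake binary-ish payload
new_best = evaluate_attempt(payload, gzip.compress, gzip.decompress,
                            best_size=len(payload))
```

The harness is deliberately dumb: it knows nothing about compression, only about the round-trip invariant, which is what keeps the loop from going astray.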
kubb
Is there a market for this like OOP patterns that used to sell in the 90s?
chillfox
Isn’t this pretty much how everyone uses agents? It feels like a lot of words to say what amounts to: make the agent do the steps we know work well for building software.
benrutter
I use AI in my workflow mostly for simple boilerplate, or to troubleshoot issues/docs. I've dipped into agentic work now and again, but I've never been very impressed with the output (well, that there is any functioning output is insanely impressive, but it isn't code I want to be on the hook for). I hear a lot of people saying the same, but similarly a bunch of people I respect say they barely write code anymore. It feels a little tricky to square these up sometimes. Anyway, I'm really looking forward to trying some of these patterns as the book develops to see if that makes a difference. Understanding how other people really use these tools is a big gap for me.
tr888
For web apps, explicitly asking the agent to build in sensible checkpoints and validate at each checkpoint using Playwright has been very successful for me so far. It prevents the agent from straying off course and struggling to find its way back. That, and always using plan mode first, and reviewing the plan for evidence of sensible checkpoints. /opusplan to save tokens!
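The checkpoint idea above can be sketched as a list of named validations run in order, where the first failure is reported back so the agent can be pointed straight at it. In practice each validator would be a Playwright script driving the real UI; the stub validators and names here are purely illustrative, to keep the sketch self-contained:

```python
from typing import Callable

# Hypothetical checkpoint harness: each checkpoint pairs a human-readable
# description with a validation callable (in practice, a Playwright check).
Checkpoint = tuple[str, Callable[[], bool]]

def run_checkpoints(checkpoints: list[Checkpoint]):
    """Run checkpoints in order; return the name of the first failure so the
    agent gets a precise target to fix, or None if everything passed."""
    for name, validate in checkpoints:
        if not validate():
            return name
    return None

# Stub validators standing in for real browser-driven checks:
checkpoints = [
    ("login page renders", lambda: True),
    ("form submits and redirects", lambda: True),
]
first_failure = run_checkpoints(checkpoints)  # None when all pass
```

Stopping at the first failure matters: handing the agent one concrete broken checkpoint keeps it from wandering, which is the whole point of the pattern.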
nishantjani10
I primarily use AI for understanding codebases myself. My prompt is: "deeply understand this codebase, clearly noting async/sync nature, entry points and external integrations. Once understood, prepare for follow-up questions from me in a rapid-fire pattern. Your goal is to keep responses concise and always cite code snippets to ensure responses are factual and not hallucinated. With every response, ask me if this particular piece of knowledge should be persisted into codebase.md." Both the concise and structured nature of the responses (code snippets) help me gain knowledge of the entire codebase, as I progressively ask more complex questions about it.
wokwokwok
I really like the idea of agent coding patterns. This feels like it could be expanded easily with more content though. Off the top of my head:
- Tell the agent to write a plan, review the plan, tell the agent to implement the plan.
- Allow the agent to “self discover” the test harness (e.g. “Validate this C compiler against gcc”).
- Queue a bunch of tasks with // todo … and yolo “fix all the todo tasks”.
- Validate against a known output (“translate this to Rust and ensure it emits byte-for-byte identical output as you go”).
- Pick a suitable language for the task (“Go is best for this task because I tried several languages and it did the best for this domain in Go”).
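The "validate against a known output" pattern in the list above is essentially differential testing: run the trusted original and the agent's rewrite over the same corpus and flag any divergence. A minimal sketch, where both checksum functions are invented stand-ins (the reference would be the original code, the candidate the translation):

```python
def reference_csum(data: bytes) -> int:
    """Trusted original implementation (illustrative stand-in)."""
    return sum(data) % 65521

def candidate_csum(data: bytes) -> int:
    """Agent-produced rewrite under validation (illustrative stand-in)."""
    return sum(data) % 65521

def differential_test(inputs: list[bytes]) -> list[bytes]:
    """Return every input where the rewrite diverges from the reference;
    an empty list means the two agree byte for byte on this corpus."""
    return [x for x in inputs if reference_csum(x) != candidate_csum(x)]

corpus = [b"", b"a", b"hello world", bytes(range(256))]
mismatches = differential_test(corpus)  # [] when the rewrite agrees
```

Returning the diverging inputs, rather than just pass/fail, gives the agent concrete counterexamples to iterate on.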
sdevonoes
Is there anything about reviewing the generated code? Not by the author, but by another human being. Colleagues don’t usually like to review AI-generated code. If they use AI to review code, then that misses the point of doing the review. If they do the review manually (the old way), it becomes a bottleneck (we are faster at producing code now than we are at reviewing it).
pts_
I really hate smelly statements like "this or that is cheap now". They reek of carelessness.
yoaviram
Yesterday I wrote a post about exactly this. Software development, as the act of manually producing code, is dying. A new discipline is being born. It is much closer to proper engineering. Like an engineer overseeing the construction of a bridge, the job is not to lay bricks. It is to ensure the structure does not collapse. The marginal cost of code is collapsing. That single fact changes everything. https://nonstructured.com/zen-of-ai-coding/
jkhdigital
Today I gave a lecture to my undergraduate data structures students about the evolution of CPU and GPU architectures since the late 1970s. The main themes:
- Through the last two decades of the 20th century, Moore’s Law held and ensured that more transistors could be packed into next year’s chips, which could run at faster and faster clock speeds. Software floated on a rising tide of hardware performance, so writing fast code wasn’t always worth the effort.
- Power consumption doesn’t vary with transistor density but varies with the cube of clock frequency, so by the early 2000s Intel hit a wall and couldn’t push the clock above ~4GHz with normal heat dissipation methods. Multi-core processors were the only way to keep performance increasing year after year.
- Up to this point the CPU could squeeze out performance increases by parallelizing sequential code through clever scheduling tricks (and compilers could provide an assist by unrolling loops), but with multiple cores software developers could no longer pretend that concurrent programming was only something that academics and HPC clusters cared about.
CS curricula are mostly still stuck in the early 2000s, or at least it feels that way. We teach big-O and use it to show that mergesort or quicksort will beat the pants off bubble sort, but topics like Amdahl’s Law are buried in an upper-level elective when in fact it is much more directly relevant to the performance of real code, on real present-day workloads, than a typical big-O analysis. In any case, I used all this as justification for teaching bitonic sort to 2nd and 3rd year undergrads.
My point here is that Simon’s assertion that “code is cheap” feels a lot like the kind of paradigm shift that comes from realizing that in a world with easily accessible massively parallel compute hardware, the things that matter for writing performant software have completely shifted: minimizing branching and data dependencies produces code that looks profoundly different than what most developers are used to. e.g. running 5 linear passes over a column might actually be faster than a single merged pass if those 5 passes touch different memory and the merged pass has to wait to shuffle all that data in and out of the cache because it doesn’t fit. What all this means for the software development process I can’t say, but the payoff will be tremendous (10-100x, just like with properly parallelized code) for those who can see the new paradigm first and exploit it.
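The Amdahl's Law point above is easy to make concrete: however many cores you throw at a program, the serial fraction caps the speedup. A quick illustration (the 95% parallel fraction is just an example figure, not from the comment):

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Amdahl's Law: overall speedup when a fraction p of the work
    parallelizes perfectly across n cores and the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even if 95% of the program parallelizes perfectly, 64 cores give
# nowhere near 64x, and infinite cores cap out at 1/0.05 = 20x.
s64 = amdahl_speedup(0.95, 64)   # ≈ 15.4x
cap = 1.0 / (1.0 - 0.95)         # 20x asymptotic limit
```

This is the sense in which Amdahl's Law bears more directly on present-day workloads than a big-O comparison between two already-reasonable algorithms.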
winwang
Linear walkthrough: I ask my agents to give me a numbered tree. Controlling tree size specifies granularity. Numbering means it's simple to refer to points for discussion. Other things that I feel are useful:
- Very strict typing/static analysis
- Denying tool usage with a hook telling the agent why + what they should do (instead of simple denial, or dangerously accepting everything)
- Using different models for code review
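The deny-with-explanation hook above might look something like the following. This assumes a Claude Code-style pre-tool-use hook that receives the tool call as JSON on stdin and blocks it via a nonzero exit code with the reason on stderr; that convention, the field names, and the policy rules are all assumptions for illustration, not taken from the comment:

```python
import json

def check_tool(call: dict) -> tuple[bool, str]:
    """Decide whether to allow a tool call; on denial, return a message that
    tells the agent why and what to do instead (not just 'no')."""
    if call.get("tool_name") == "Bash":
        cmd = call.get("tool_input", {}).get("command", "")
        if "rm -rf" in cmd:
            return False, "Destructive delete blocked. Move files to .trash/ instead."
        if cmd.startswith("pip install"):
            return False, "Direct installs blocked. Add the package to requirements.txt and ask me."
    return True, ""

def hook_main(raw_stdin: str) -> tuple[int, str]:
    """Hook entry point: parse the JSON tool call; return (exit_code, stderr).
    Assumed convention: nonzero exit blocks the call and stderr reaches the agent."""
    allowed, reason = check_tool(json.loads(raw_stdin))
    return (0, "") if allowed else (2, reason)
```

The point the comment makes is in the message text: "do X instead" gives the agent a recovery path, where a bare denial tends to produce retries of the same blocked call.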
fud101
Any word on patterns for security and deployment to prod?
sd9
I've recently got into red/green TDD with Claude Code, and I have to agree that it seems like the right way to go. As my projects were growing in complexity and scope, I found myself worrying that we were building things that would subtly break other parts of the application. Because of the limited context windows, it was clear that after a certain size, Claude kind of stops understanding how the work you're doing interacts with the rest of the system. Tests help protect against that. Red/green TDD specifically ensures that the current work is quite focused on the thing that you're actually trying to accomplish, in that you can observe a concrete change in behaviour as a result of the change, with the added benefit of growing the test suite over time. It's also easier than ever to create comprehensive integration test suites - my most valuable tests are ones that exercise entire user-facing workflows through UI elements alone, against a real backend.
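The red/green cycle referred to above, in miniature: first write a test that fails for the right reason, then the smallest change that makes it pass. The `slugify` function and its behaviour are invented for illustration:

```python
# Red: specify the desired behaviour before implementing it. Run the test
# against a stub (e.g. `return ""`) first and watch it fail, which confirms
# the test can actually catch the missing behaviour.
def slugify(title: str) -> str:
    # Green: the smallest implementation that satisfies the test below.
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Agentic Engineering Patterns") == "agentic-engineering-patterns"
    assert slugify("  extra   spaces  ") == "extra-spaces"

test_slugify()
```

With an agent in the loop, the red step doubles as a spec: the observable red-to-green transition is the "concrete change in behaviour" the comment describes, and the test stays behind as a regression guard.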
yieldcrv
I don't currently have confidence in TDD. A broken test doesn't make the agentic coding tool go "ooooh, I made a bad assumption" any more than a type error or linter does. All a broken test does is prompt me to prompt back "fix tests". I have no clue which one broke or why or what was missed, and it doesn't matter. Actual regressions are different and not dependent on these tests, and I follow along from type errors and LLM observability.
gaigalas
The most important thing you need to understand about working with agents for coding is that you are now designing a production line. And that has (mostly) nothing to do with designing or orchestrating agents. Take a guitar, for example. You don't industrialize the manufacture of guitars by speeding up the same practices that artisans used to build them. You don't create machines that resemble individual artisans in their previous roles (like everyone seems to be trying to do with AI and software). You become Leo Fender, and you design a new kind of guitar that is made to be manufactured at another order of magnitude of scale. You need to be Leo Fender though (not a talented guitarist, but definitely a technical master). To me, it sounds too early to describe patterns, since we haven't met the Ford/Fender/etc. equivalent of this yet. I do appreciate the attempt though.
Madmallard
Patterns that may help increase the subjective perception of reliability from non-deterministic text generators trained on the theft of millions of developers' work over the past 25 years.
ben30
I contribute to an open source spec-based project management tool. I spend about a day iterating back and forth on a spec, using AI to refine the spec itself, sometimes feeding it in and out of Claude/Gemini and telling each where the feedback has come from. The spec is the value. Using the AI PM tool I break it down into n tasks, sub-tasks and dependencies. I then trigger Claude in teams mode to accomplish the project. It can be left alone overnight. I wake up in the morning with n PRs merged.
sidcool
PSA: This is sponsored by Augment code.