Lat.md: Agent Lattice: a knowledge graph for your codebase, written in Markdown
doppp
83 points
56 comments
March 29, 2026
Related Discussions
Found 5 related stories in 83.7ms across 3,471 title embeddings via pgvector HNSW
- Your Agents.md is probably too long jlong · 14 pts · March 18, 2026 · 52% similar
- Show HN: Agent Kernel – Three Markdown files that make any AI agent stateful obilgic · 40 pts · March 23, 2026 · 51% similar
- Parallel coding agents with tmux and Markdown specs schipperai · 139 pts · March 02, 2026 · 51% similar
- Show HN: Altimate Code – Open-Source Agentic Data Engineering Harness aaur0 · 18 pts · March 19, 2026 · 51% similar
- Leanstral: Open-source agent for trustworthy coding and formal proof engineering Poudlardo · 407 pts · March 16, 2026 · 50% similar
Discussion Highlights (17 comments)
nimonian
I have a vitepress package in most of my repos. It is a knowledge graph that also just happens to produce neat-looking docs for humans when served over HTTP. Agents are very happy to read the raw .md.
iddan
So we are reinventing the docs/*/*.md directory? /s I think this is a good idea, I just don't really get why you would need a tool around it.
Yokohiii
> "chalk": "^5.6.2", security.md ist missing apparently.
jatins
tl;dr: one file is bad (it gets too big for context), so give your agent a whole Obsidian vault instead? I am skeptical of how that helps. Can't agents just grep within one big file if reading the entire file is the problem?
reactordev
I found that having smaller, structured markdown files in each folder, explaining the space and the classes within, keeps Claude and Codex grounded even in a 10M+ LOC C/C++ codebase.
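The per-folder pattern described above can be sketched as a small helper that collects every folder-level doc that applies to a given source file, walking from the repo root down to the file's own directory. This is a minimal sketch, not any particular tool's implementation; the `AGENTS.md` filename is just an assumed convention.

```python
from pathlib import Path

def folder_docs(source_file: str, repo_root: str, name: str = "AGENTS.md") -> list[str]:
    """Collect the per-folder doc files that apply to a source file,
    ordered from the repo root down to the file's own directory."""
    root = Path(repo_root).resolve()
    current = Path(source_file).resolve().parent
    chain = [current, *current.parents]
    # Keep only directories inside (or equal to) the repo root,
    # then walk them top-down so broader docs come first.
    docs = []
    for folder in reversed([p for p in chain if p == root or root in p.parents]):
        doc = folder / name
        if doc.is_file():
            docs.append(str(doc))
    return docs
```

An agent harness could prepend these files, broadest first, to a task prompt so the most specific folder doc ends up closest to the code being edited.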
mmastrac
I definitely agree with the need for this. There's just too much to put into the agents file to keep from killing your context window right off the bat. Knowledge compression is going to be key. I saw this a couple of days ago and I've been working on figuring out what the right workflows will be with it. It's a useful idea: the agents.md torrent of info gets replaced with a thinner shim that tells the agent how to get more data about the system, as well as how to update it. I suspect there are ways to shrink that context even more.
touristtam
At that point why not have an obsidian vault in your repo and get the Agent to write to it?
robertclaus
We've been doing this with simple mkdocs for ages. My experience is that rendering the markdown to feel like public docs is important for getting humans to review and take it seriously. Otherwise it goes stale as soon as one dev on the project doesn't care.
eliottre
The staleness problem mentioned here is real. For agentic systems, a markdown-based DAG of your codebase is more practical than a traditional graph because agents work within context windows. You can selectively load relevant parts without needing a complex query engine. The key is making updates low-friction -- maybe a pre-commit hook or CI job that refreshes stale nodes.
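The selective loading this comment describes can be sketched as a breadth-first traversal of the markdown graph: start from one relevant node and follow its links until a context budget is hit. This is a hypothetical illustration, not lat.md's actual loader; the link regex assumes standard `[text](path.md)` relative links.

```python
import re
from pathlib import Path

# Matches relative markdown links like [caching](caching.md); ignores anchors.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+\.md)\)")

def load_context(root: str, max_nodes: int = 5) -> dict[str, str]:
    """Breadth-first load of a markdown node and its linked neighbours,
    stopping once max_nodes files are in the context."""
    context: dict[str, str] = {}
    queue = [Path(root)]
    while queue and len(context) < max_nodes:
        node = queue.pop(0)
        if str(node) in context or not node.exists():
            continue
        text = node.read_text(encoding="utf-8")
        context[str(node)] = text
        # Enqueue neighbouring nodes reachable via relative links.
        for target in LINK_RE.findall(text):
            queue.append(node.parent / target)
    return context
```

The `max_nodes` cap is what makes this context-window friendly: the agent loads a relevant subgraph rather than the whole vault.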
bisonbear
Managing agents.md is important, especially at scale. However, I wonder how much of a measurable difference something like this makes. In theory it's cool, but can you show me that it actually performs better compared to a large agents.md, nested agents.md files, or skills? The more general point being that we need to be methodical about the way we manage agent context. If lat.md showed a 10% broad improvement in agent performance in my repo, then I would certainly push for adoption. Until then, vibes aren't enough.
adrq
So the graph is human-maintained, agents consume it, and `lat check` is supposed to catch broken links and code-spec drift. How do you manage this in a multi-agent setup? Is it still a manual merge-and-fix-conflicts situation? That's where I keep seeing the biggest issues with multi-agent setups.
caijia
I've been doing similar work since Claude Code introduced slash commands (later merged into skills): first 3-4 long docs, gradually split into modular groups, designed so docs load based on what the agent is actually doing. The maintenance part is honestly not that hard; I created some CI jobs that diff the docs against the codebase and flag drift, which handles most of it. The pattern works. But I keep catching myself spending more time on how to organize context than on what the agent is actually supposed to accomplish. Feels like the whole space has that problem right now.
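A cheap first pass at the drift-flagging CI job described above is a timestamp comparison: flag any doc whose file is older than the newest file in the source directory it describes. The `docs/<name>.md` ↔ `src/<name>/` mapping here is an assumed convention for illustration; a real job would more likely compare git commit times or content hashes.

```python
from pathlib import Path

def stale_docs(repo: str) -> list[str]:
    """Flag docs/<name>.md files whose mtime predates the newest file
    in the matching src/<name>/ directory (assumed layout convention)."""
    root = Path(repo)
    stale = []
    for doc in (root / "docs").glob("*.md"):
        src_dir = root / "src" / doc.stem
        if not src_dir.is_dir():
            continue  # no matching source directory to compare against
        newest_src = max(
            (f.stat().st_mtime for f in src_dir.rglob("*") if f.is_file()),
            default=0.0,
        )
        if newest_src > doc.stat().st_mtime:
            stale.append(doc.name)
    return stale
```

A CI job could fail (or open an issue) when this returns a non-empty list, prompting a human or an agent to refresh the stale nodes.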
1st1
Creator of lat.md here. There are two videos with me talking about lat in more detail [1] and less detail [2]. But I'm also working on a blog post exploring lat and its potential, stay tuned. AMA :) [1] https://x.com/mitsuhiko/status/2037649308086902989?s=20 [2] https://www.youtube.com/watch?v=gIOtYnI-8_c
inerte
So... does it work? Good description of what it does, but, does it actually make agents better, or use less tokens? What's the benchmark?
cowlby
This is one of the things that GitHub Spec Kit solves for me. The specify.plan step launches code-exploration agents and builds itself the latest data model, migrations, etc. It really reduces the need to document stuff when the agent self-discovers what the codebase needs. Give Claude a sqlite/supabase MCP, the GitHub CLI, the Linear CLI, Chrome, or launch.json, and it can solve this quite autonomously.
midnight_eclair
I've had a related question recently, but it didn't get much traction: https://news.ycombinator.com/item?id=47543324 What's the point of markdown? There's nothing useful you can do with it other than handing it over to an LLM and getting some probabilistic response.
helixfelix
Can you offer any insights into how this compares to building an AST or RAG index over your codebase? Several projects do that, and it auto-updates on changes too. The agent does a wide sweep using AST/RAG search, followed by drill-down using an LSP. This sped up my search phase by 50%. How will this project help me?