Show HN: Hippo, biologically inspired memory for AI agents
kitfunso
72 points
17 comments
April 06, 2026
Related Discussions
Found 5 related stories in 59.7ms across 3,752 title embeddings via pgvector HNSW
- Show HN: A plain-text cognitive architecture for Claude Code marciopuga · 65 pts · March 25, 2026 · 59% similar
- Show HN: NERDs – Entity-centered long-term memory for LLM agents tdaltonc · 13 pts · March 06, 2026 · 58% similar
- Show HN: Agent Kernel – Three Markdown files that make any AI agent stateful obilgic · 40 pts · March 23, 2026 · 57% similar
- Show HN: Antfly: Distributed, Multimodal Search and Memory and Graphs in Go kingcauchy · 85 pts · March 17, 2026 · 57% similar
- I built a better, human like memory, for Agents emson · 11 pts · March 29, 2026 · 55% similar
Discussion Highlights (11 comments)
cyanydeez
No open code plugin? This seems like something that should just run in the background. It's well documented that it should just be a skill agents can use when they get into various fruitless states.

The "biological" memory strength shouldn't just be a time thing, and even then, the agent's time should be conformed to the agent's lifetime, not the actual clock. Look up the difference between a wall clock and a monotonic clock (https://stackoverflow.com/questions/3523442/difference-betwe...). If you want decay, it should be tied not to the actual clock but to work time.

But memory is more about triggers than about anything else, so you should absolutely have memory triggers based on location, something like a path hash. Wherever an agent is working, the things it remembers should be tightly compacted to that location; only when a "compaction" happens should those memories become more and more generalized across locations. The most prominent kinds of memory work this way, whether it's sports or GUIs: physical location triggers far more than conscious recall does. Focus on triggering recall based on project paths, filenames in the path, and so on.
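The work-clock decay plus path-scoped triggers the comment describes could be sketched roughly like this (the class, the prefix-depth trigger, and the scoring rule are all hypothetical illustrations, not anything from Hippo):

```python
from collections import defaultdict
from os.path import commonprefix

class WorkClockMemory:
    """Toy sketch: memories decay against the agent's own work time
    (a monotonic counter the agent advances), never the wall clock,
    and recall is triggered by path proximity."""

    def __init__(self, half_life=100.0):
        self.half_life = half_life        # measured in work-time units
        self.work_time = 0.0              # advances only while the agent works
        self.by_path = defaultdict(list)  # path -> [(stored_at, note)]

    def tick(self, units=1.0):
        # Wall-clock idle time never ages memories; only work does.
        self.work_time += units

    def remember(self, path, note):
        self.by_path[path].append((self.work_time, note))

    def recall(self, path):
        """Rank notes by shared-path depth times work-time decay."""
        scored = []
        for stored_path, notes in self.by_path.items():
            # Crude locality trigger: depth of the shared path prefix.
            depth = commonprefix([stored_path, path]).count("/") + 1
            for stored_at, note in notes:
                age = self.work_time - stored_at
                scored.append((depth * 0.5 ** (age / self.half_life), note))
        return [note for _, note in sorted(scored, reverse=True)]
```

With a short work gap, a note stored near the queried path outranks a fresher note stored in a distant part of the tree, which is the locality behavior the comment is asking for.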
nberkman
Cool project. I like the neuroscience analogy with decay and consolidation. I've been working on a related problem from the other direction: Claude Code and Codex already persist full session transcripts, but there's no good way to search across them. So I built ccrider ( https://github.com/neilberkman/ccrider ). It indexes existing sessions into SQLite FTS5 and exposes an MCP server so agents can query their own conversation history without a separate memory layer. Basically treating it as a retrieval problem rather than a storage problem.
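As a rough sketch of that retrieval-first idea (the schema here is invented for illustration, not ccrider's actual one, and it assumes your Python build ships SQLite with FTS5 enabled):

```python
import sqlite3

# Index session transcripts into an FTS5 table, then search them
# with MATCH instead of maintaining a separate memory store.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE VIRTUAL TABLE sessions USING fts5(session_id, role, content)"
)
conn.executemany(
    "INSERT INTO sessions VALUES (?, ?, ?)",
    [
        ("s1", "user", "why does the migration fail on postgres 15"),
        ("s1", "assistant", "the migration fails because ..."),
        ("s2", "user", "add retry logic to the http client"),
    ],
)

# Full-text query across all past sessions, best BM25 matches first.
rows = conn.execute(
    "SELECT session_id, content FROM sessions "
    "WHERE sessions MATCH ? ORDER BY rank",
    ("migration postgres",),
).fetchall()
```

FTS5 treats the space-separated terms as an implicit AND, so only rows mentioning both "migration" and "postgres" come back, ranked by relevance.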
the_arun
Don't tools like Claude already store context by project in the file system? Also, any reason to use "capture" instead of "export" (the obvious opposite of import)?
gfody
yegge has a cool solution for this in gastown: the current agent is able to hold a seance with the previous one
kami23
Cool to see others on this thread. Here's a post I wrote about how we can start to mimic these mechanisms: https://n0tls.com/2026-03-14-musings.html Would love to compare notes. I'm also looking at linguistic phenomena through an LLM lens: https://n0tls.com/2026-03-19-more-musings.html Hoping to wrap up some of the Kaggle eval work and move back to researching more neuropsych.
swyx
Hmm, the repo doesn't mention this at all, but the name and problem domain bring up HippoRAG (https://arxiv.org/abs/2405.14831). Any relation? It seems odd to leave out such a similarly named paper with related techniques.
esafak
How does it select what to forget? Say I land a PR that introduces a sharp change, migrating from one thing to another. An exponential decay won't catch this. Biological learning makes sense when we observe similar things repeatedly in order to learn patterns; I'm skeptical that it applies to learning the commits of one code base.
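To make the objection concrete, here's a toy frequency-times-recency score (the formula and numbers are invented for illustration, not Hippo's actual mechanism): a convention reinforced across many old commits can still outrank yesterday's one-off migration PR that reversed it.

```python
def decay_score(hits: int, age_days: float, half_life: float = 14.0) -> float:
    """Naive memory strength: repetition boosts the score,
    and age halves it every `half_life` days."""
    return hits * 0.5 ** (age_days / half_life)

# A convention seen in 30 commits, last touched 60 days ago...
old_convention = decay_score(hits=30, age_days=60.0)

# ...versus a single sharp migration PR landed yesterday.
migration_pr = decay_score(hits=1, age_days=1.0)

# The stale convention still scores higher, so a purely decay-based
# memory would keep surfacing the pre-migration pattern.
stale_wins = old_convention > migration_pr
```

A one-time but decisive event has low frequency by definition, so any score built only from repetition and recency will underweight it; catching it needs some signal of importance, not just of exposure.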
matt765
cool project mate, gj
AndyNemmity
The biggest issue I have with these systems is that I don't want a blanket memory. I want everything embedded in skills and progressively discovered when required. I've been playing around with doing that with a cron job for a "dream" sequence. I really want to get memories out of the main context ASAP and into skills, where they belong. https://github.com/notque/claude-code-toolkit
suradethchaipin
Interesting approach storing raw samples as JSONB alongside typed summary columns. I work on a mobile app that tracks similar time-series health data locally, and the query pattern split you describe — aggregates on scalar columns, detail lookups on JSONB — mirrors what works well in SQLite too. Curious whether Terra's normalization has notable gaps between providers. For example, does sleep staging data come through consistently from Garmin vs Polar, or do you end up papering over differences in your own layer?
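The query-pattern split the comment describes can be sketched in plain SQLite (table and field names here are invented, not Terra's or the app's): aggregates run over typed scalar columns, while detail lookups dig into the raw JSON payload.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE sleep_samples (
        night     TEXT,
        total_min INTEGER,  -- typed summary column for cheap aggregates
        raw       TEXT      -- raw provider payload kept as JSON
    )
    """
)
conn.executemany(
    "INSERT INTO sleep_samples VALUES (?, ?, ?)",
    [
        ("2026-04-01", 412, json.dumps({"stages": {"deep": 71, "rem": 95}})),
        ("2026-04-02", 388, json.dumps({"stages": {"deep": 64, "rem": 88}})),
    ],
)

# Aggregate over the scalar column, never touching the JSON...
avg_total = conn.execute(
    "SELECT avg(total_min) FROM sleep_samples"
).fetchone()[0]

# ...while a detail lookup extracts a field from the raw payload.
deep = conn.execute(
    "SELECT json_extract(raw, '$.stages.deep') "
    "FROM sleep_samples WHERE night = ?",
    ("2026-04-01",),
).fetchone()[0]
```

This assumes the SQLite build has the JSON functions enabled (they are built in by default in recent versions); the same shape works with Postgres JSONB by swapping `json_extract` for the `->>` operator.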
extr
I think explicit post-training is going to be needed to make this kind of approach effective. As this repo notes, "The secret to good memory isn't remembering more. It's knowing what to forget." But knowing what is likely to be important in the future implies a working model of the future and your place in it. It's an AGI-complete problem: "Given my current state and goals, what am I going to find important, conditioned on the likelihood of any particular future?" Anyone working with these agents knows they are hopelessly bad at modeling their own capabilities, much less projecting that forward.