CC leak: skills are better than I thought
KaseKun
39 points
12 comments
April 02, 2026
Related Discussions
Found 5 related stories in 35.0ms across 3,471 title embeddings via pgvector HNSW
- Skillfile, the declarative skill manager, now with search for 110K+ skills _juli_ · 12 pts · March 16, 2026 · 52% similar
- Agent Skills – Open Security Database 4ppsec · 33 pts · March 16, 2026 · 49% similar
- RAG vs. Skill vs. MCP vs. RLM weltview · 24 pts · March 02, 2026 · 47% similar
- The Claude Code Source Leak: fake tools, frustration regexes, undercover mode alex000kim · 1057 pts · March 31, 2026 · 46% similar
- The Claude Code Leak mergesort · 79 pts · April 02, 2026 · 43% similar
Discussion Highlights (4 comments)
KaseKun
A technical breakdown of how agent skills are parsed, rendered, injected, and refreshed in your Claude Code working session.
SyneRyder
I thought this was worth the quick read. Just as the article says at the start, I thought skills were essentially the same as pasting a long Markdown prompt document into the Claude Code window, or having Claude read the prompt file. But it seems that if you invoke the skill, CC handles it quite differently, e.g. it's special-cased for how it survives compaction. Changed my mental model of using Skills a bit, anyway.
kaelyx
"You aren't simply giving the agent instructions, you are changing how it operates." Why is the AI-generated "mic drop" everywhere now?
EnPissant
> 4. persisting context across compactions
>
> LLMs forget things as their context grows. When a conversation gets long, the context window fills up, and Claude Code starts compacting older messages. To prevent the agent from forgetting the skill’s instructions during a long thread, Claude Code registers the invoked skill in a dedicated session state.
>
> When the conversation history undergoes compaction, Claude Code references this registry and explicitly re-injects the skill’s instructions: you never lose the skill guardrails to context bloat.

If true, this means that over time a session can grow to contain all or most skills, negating the benefit of progressive disclosure. I would expect it would be better to let compaction do its thing, with the possibility of an agent re-fetching a skill if needed. I don't trust the article, though. It looks like someone just pointed an LLM at the codebase and asked it to write an article.
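The mechanism the quote describes (and the growth problem EnPissant points out) can be sketched in a few lines. This is a hypothetical illustration, not Claude Code's actual API: the names `Session`, `invoke_skill`, and `compact` are invented for the sketch. Note how the registry keeps accumulating, so every invoked skill is re-injected after every compaction.

```python
class Session:
    """Toy model of a session with skill re-injection across compaction."""

    def __init__(self, window_limit: int):
        self.window_limit = window_limit  # max messages kept after compaction
        self.messages: list[str] = []     # conversation history
        self.skill_registry: dict[str, str] = {}  # survives compaction

    def invoke_skill(self, name: str, instructions: str) -> None:
        # Registering in dedicated state is what makes the skill "sticky".
        self.skill_registry[name] = instructions
        self.messages.append(f"[skill:{name}] {instructions}")

    def add(self, msg: str) -> None:
        self.messages.append(msg)

    def compact(self) -> None:
        # Drop older messages (a real implementation would summarize them)...
        self.messages = self.messages[-self.window_limit:]
        # ...then explicitly re-inject every registered skill, so the
        # guardrails are never lost to context bloat. Because the registry
        # only grows, a long session re-injects all skills ever invoked.
        for name, instructions in self.skill_registry.items():
            tag = f"[skill:{name}] {instructions}"
            if tag not in self.messages:
                self.messages.append(tag)


s = Session(window_limit=3)
s.invoke_skill("commit-style", "Use conventional commits.")
for i in range(10):
    s.add(f"msg {i}")
s.compact()
# Early messages are gone, but the skill instructions are back in context.
```

The alternative EnPissant suggests would amount to letting `compact` drop the skill tag and having the agent call `invoke_skill` again only when it notices it needs the instructions.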