Show HN: How LLMs Work – Interactive visual guide based on Karpathy's lecture
All content is based on Andrej Karpathy's "Intro to Large Language Models" lecture (youtube.com/watch?v=7xTGNNLPyMI). I downloaded the transcript and used Claude Code to generate the entire interactive site from it — a single HTML file. I find it useful to revisit this content from time to time.
Discussion Highlights (20 comments)
learningToFly33
I’ve had a look, and it’s very well explained! If you ever want to expand it, you could also add how embedded data is fed at the very final step for specific tasks, and how it can affect prediction results.
lukeholder
The page keeps annoyingly scroll-jumping a few pixels in iOS Safari.
gushogg-blake
I haven't found an explanation yet that answers a couple of seemingly basic questions about LLMs: What does the input side of the neural network look like? Is it enough bits to represent N tokens where N is the context size? How does it handle inputs that are shorter than the context size?

I think embedding is one of the more interesting concepts behind LLMs, but most pages treat it as a side note. How does embedding treat tokens that can have vastly different meanings in different contexts - if the word "bank" were a single token, for example, how does embedding account for the fact that it can mean river bank or money bank? Do the elements of the vector point in both directions?

And how exactly does embedding interact with the training and inference processes - does inference generate updated embeddings at any point, or are they fixed at training time? (Training vs. inference time is another thing explanations are usually frustratingly vague on.)
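(For readers with the same questions, here is a minimal sketch of how the input side typically works in a decoder-only GPT-style model; the token IDs, table sizes, and shapes below are made up for illustration and are not taken from the site or the lecture.)

```python
import numpy as np

# Stand-ins for tables that a real model learns during training and then
# freezes; dimensions are kept tiny for the sketch (real models use 768+).
vocab_size, d_model, context_size = 100_000, 64, 2048
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(vocab_size, d_model))
positional_table = rng.normal(size=(context_size, d_model))

def embed(token_ids):
    """One static vector per token ID, plus a position vector.
    Sequences shorter than the context window just use fewer positions;
    a decoder-only model needs no padding for that."""
    n = len(token_ids)
    assert n <= context_size
    return embedding_table[token_ids] + positional_table[:n]

# "bank" (hypothetical ID 2051) gets the *same* input vector in both
# sentences; river vs. money is only disambiguated in later layers,
# where attention mixes in the surrounding tokens.
river = embed([1012, 4077, 2051])   # e.g. "the river bank"
money = embed([1012, 9903, 2051])   # e.g. "the money bank"
print(np.allclose(river[-1], money[-1]))  # True: identical input embedding for "bank"
```

In this standard setup the input is a sequence of integer token IDs rather than raw bits, the embedding table is learned during training and fixed at inference, and contextual meaning (river vs. money) comes from the attention layers stacked on top of the static embeddings.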
Barbing
Left-hand labels (like "Introduction") can overlap the main text content on the right in the central panel - you may be able to trigger it by reducing the window width.
PetitPrince
Have you reread what was produced by Claude Code before publishing? This bit in one of the first paragraphs jumps out:

> you end up with about 44 terabytes — roughly what fits on a single hard drive

No normal person would think that 44 TB is a usual hard drive size (I don't think it even exists? 32 TB seems to be the max at my retailer of choice). I don't think it's wrong per se to use an LLM to produce a cool visualization, but this lack of proofreading doesn't inspire confidence (especially since the 44 TB is displayed prominently in a different color).
lateral_cloud
This is completely AI-generated... don't bother reading.
PeakScripter
Currently working on somewhat the same thing myself.
arcza
Another low-effort, dark-mode slopsite. You lost me at "44 terabytes" before I even got to the em dash in that sentence. @dang, when is the 'flag as slop' button coming?
endymion-light
I really dislike the default AI-slop CSS. If you're going to do this, please have a design language and taste ideas worked out beforehand; it can help so much in refining the look. Genuine piece of feedback: as soon as I see those gradients and quirks, my perception immediately becomes "you put no effort into finding your own style, therefore you will not have put effort into creating this website."
5asHajh
"Retrieved chunks are prepended to the prompt before the LLM sees the question. The model generates from injected facts rather than relying on memorized training data — dramatically reducing hallucination on knowledge-intensive tasks." So plagiarism is even explicit now. A stolen database relying on cosine similarity to parse the prompts. Why doesn't The Pirate Bay have a $1 trillion valuation?
hansmayer
> and used Claude Code to generate the entire interactive site from it

Hard pass on AI slop. First, on principle: it brings no real value; anyone can iterate over some prompts to generate a version of this. Second, more specifically: don't you know that LLMs are particularly prone to making mistakes when summarising, where they make subtle changes in wording that have much wider impact on the context? If you insist on being the human part of a centaur, then at least do your human-slave part: inspect the excremented "content", fix inconsistencies, etc.
gslepak
Just want to give appreciation for the proper attribution. I feel like some people will still say "Here's something I made" when the reality is "Here's something I asked my AI to make."
weego
Putting text in colored boxes around the page isn't really interactive or visual in the way I'd hoped, but it looks pretty.
ynarwal__
Update: The "single hard drive" claim was wrong and I've corrected it to "roughly 10 consumer hard drives" (44 TB ÷ ~4 TB ≈ 11). Attribution to Karpathy is now a direct link. Added a caveat under the stats noting these are representative 2024-era figures — the exact numbers shift with every release, and that's somewhat the point. Also did a few iterations on a visual redesign (linked in the header as v2) with a proper top navigation bar, after a few people found the dot nav hard to use and the UI was jumping. Also, I have not fact-checked everything, but I have read it and it seems aligned with what is described in the lecture.
thesz
The page does a very poor job tokenizing the phrase "Noinceolik fiyulnabmed fyvaproldge" into "Noinceolik fiyulnabm ed fyvaproldge", factoring out only the "ed" suffix. As if made-up words such as "noinceolik" are so common they are part of the 100K-token vocabulary. Actually applying the GPT-5 tokenizer at [1] to my made-up phrase results in 14 tokens, only two of them four characters long, and there are tokens containing spaces.

[1] https://gpt-tokenizer.dev/

I will read along, though.
vova_hn2
I think the BPE visualization is slightly misleading, because it seems to imply that the "old" (smaller) tokens are thrown away and replaced with longer tokens, which is not the case. In fact, it is a purely additive process: we iteratively add the most frequent pairs to the set until we reach the desired total number of tokens. But we never remove tokens; we keep everything, including the initial 256 tokens representing bytes. This ensures that the model is capable of producing every possible Unicode sequence (in fact, I think it is capable of producing every possible byte sequence, but bytes that are not valid Unicode are filtered during sampling).

Edit #1: also, this page entirely skips the attention mechanism, which is, in my opinion, both the most interesting part and the part that is hardest to understand (I can't say that I fully understand it; to me it is just some linear-algebra matrix-multiplication magic).
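(A minimal byte-level BPE training sketch in that additive spirit; the corpus and merge count are made up, and real tokenizers additionally pre-split text with a regex before merging.)

```python
from collections import Counter

# Toy byte-level BPE training: merges are *added* to the vocabulary;
# the 256 base byte tokens are never removed.
corpus = b"low lower lowest low low"
tokens = list(corpus)                       # start from raw bytes (IDs 0-255)
vocab = {i: bytes([i]) for i in range(256)}
num_merges = 5

for new_id in range(256, 256 + num_merges):
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        break
    (a, b), _ = pairs.most_common(1)[0]     # most frequent adjacent pair
    vocab[new_id] = vocab[a] + vocab[b]     # new token joins the vocab; a and b stay
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
            merged.append(new_id)
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    tokens = merged

print([vocab[t] for t in tokens])           # e.g. b'low' now appears as a single token
```

Note how `vocab` only ever grows: each merge adds one entry on top of the 256 byte tokens, which is exactly the additive behavior described above.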
dylkil
Was this created with Claude design? It looks very similar to something it mocked up for me last week.
ynarwal__
I disagree with some comments saying it's not worth reading since it was generated by an LLM, even though I made it clear that I downloaded the transcript. LLMs are exceptionally good at generating accurate information when the information is loaded directly into the context window.
jasonjmcghee
Highly recommend instead reading the human-created "The Illustrated GPT-2" by Jay Alammar (https://jalammar.github.io/illustrated-gpt2/) and his similar work. He also has a free course on how LLMs work.
siva7
> WITHOUT RAG
> "I don't have reliable information about a colony called Ares Base. As of my training cutoff, no such Mars colony has been established..."

Oh, we must have lived in a parallel universe then, if this is a "without RAG" textbook example.