The Training Example Lie Bracket
pb1729
26 points
13 comments
April 09, 2026
Related Discussions
Found 5 related stories in 51.3ms across 4,075 title embeddings via pgvector HNSW
- Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training xlayn · 90 pts · March 18, 2026 · 45% similar
- Claude Code LSP LexSiga · 75 pts · March 02, 2026 · 41% similar
- The Claude Code Leak mergesort · 79 pts · April 02, 2026 · 41% similar
- How I write software with LLMs indigodaddy · 69 pts · March 16, 2026 · 40% similar
- The Claude Code Source Leak: fake tools, frustration regexes, undercover mode alex000kim · 1057 pts · March 31, 2026 · 39% similar
Discussion Highlights (6 comments)
E-Reverance
Could this be used for batch filtering?
measurablefunc
Eventually ML folks will discover fiber bundles.
willrshansen
Was hoping for a tournament bracket of best lies found in training data :(
thaumasiotes
> An ideal machine learning model would not care what order training examples appeared in its training process. From a Bayesian perspective, the training dataset is unordered data and all updates based on seeing one additional example should commute with each other.

One of Andrew Gelman's favorite points to make about science "as practiced" is that researchers fail to behave this way. There's a gigantic bias in favor of whatever information is published first.
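The order-invariance the quote appeals to is exact for Bayesian conjugate updates: each observation's posterior update commutes with every other. A minimal sketch using a Beta-Bernoulli coin model (the model choice is illustrative, not from the post):

```python
def update(params, observation):
    """Exact Bayesian update of a Beta(a, b) prior on a coin's bias
    after one Bernoulli observation (1 = heads, 0 = tails)."""
    a, b = params
    return (a + observation, b + 1 - observation)

prior = (1.0, 1.0)  # uniform Beta(1, 1)

# Apply the same two observations in both orders.
post_01 = update(update(prior, 0), 1)
post_10 = update(update(prior, 1), 0)

assert post_01 == post_10  # exact Bayesian updates commute
```

SGD lacks this property because each step moves the parameters, changing the gradient every later example sees, which is exactly what the post's Lie bracket measures.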
Majromax
Wait a second, they define the induced vector field (and consequently the Lie bracket) in terms of batch-size-1 SGD:

> In particular, if x is a training example and L(x) is the per-example loss for the training example x, then this vector field is: v^(x)(θ) = -∇_θ L(x). In other words, for a specific training example, the arrows of the resulting vector field point in the direction that the parameters should be updated.

but for the MXResNet example:

> The optimizer is Adam, with the following parameters: lr = 5e-3, betas = (0.8, 0.999)

This changes the direction of the updates, so I'm not completely sure the intuitive equivalence holds. If it were just SGD with momentum, the measured update directions would be a combination of the momentum vector M and v1/v2, so [M + v1, M + v2] = [v1, M] + [M, v2] + [v1, v2]. The Lie bracket is then no longer "just" a function of the model parameters and the training examples; it's inherently path-dependent.

For Adam, the parameter-wise normalization by the second moment will also slightly change the directions of the updates in a nonlinear way (thanks to the β2 term). The interpretation is even more strained with fancier optimizers like Muon, which uses both momentum and (approximate) SVD normalization, so I'm really not sure what to expect.
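For the plain batch-size-1 SGD definition the comment quotes, the bracket does have a closed form: expanding [v, w]^i = v^j ∂_j w^i - w^j ∂_j v^i for v_k = -∇L_k gives [v1, v2] = H2 g1 - H1 g2, and this is exactly the leading-order commutator of two SGD steps taken in opposite orders. A minimal numpy sketch with a hypothetical quadratic per-example loss (names and losses are illustrative, not from the post):

```python
import numpy as np

# Hypothetical per-example losses L_k(theta) = 0.5 theta^T A_k theta + b_k^T theta,
# so grad = A_k theta + b_k and the Hessian is A_k in closed form.
rng = np.random.default_rng(0)
d = 4

def make_example(rng, d):
    M = rng.normal(size=(d, d))
    return M @ M.T, rng.normal(size=d)  # symmetric PSD Hessian A, linear term b

def grad(A, b, theta):
    return A @ theta + b

def lie_bracket(A1, b1, A2, b2, theta):
    """[v1, v2] for the fields v_k(theta) = -grad L_k(theta).

    With Dv_k = -A_k, the coordinate formula reduces to
    [v1, v2] = A2 g1 - A1 g2 (Hessian-vector products of each
    loss against the other loss's gradient)."""
    g1, g2 = grad(A1, b1, theta), grad(A2, b2, theta)
    return A2 @ g1 - A1 @ g2

theta = rng.normal(size=d)
A1, b1 = make_example(rng, d)
A2, b2 = make_example(rng, d)

b12 = lie_bracket(A1, b1, A2, b2, theta)
b21 = lie_bracket(A2, b2, A1, b1, theta)
assert np.allclose(b12, -b21)  # the bracket is antisymmetric

# The bracket is also the commutator of two plain SGD steps:
# stepping on example 1 then 2 vs. 2 then 1 differs by eta^2 [v1, v2]
# (exactly so for quadratic losses, to leading order in general).
eta = 1e-2
def sgd_step(A, b, theta, eta):
    return theta - eta * grad(A, b, theta)

t12 = sgd_step(A2, b2, sgd_step(A1, b1, theta, eta), eta)
t21 = sgd_step(A1, b1, sgd_step(A2, b2, theta, eta), eta)
assert np.allclose((t12 - t21) / eta**2, b12)
```

Once momentum or Adam's second-moment state enters the update rule, the measured step directions stop being functions of θ alone, which is the path-dependence the comment is pointing at.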
eden-u4
I don't understand the RMS table: shouldn't it be non-commutative? E.g., shouldn't "example 0 vs 1"'s RMS != "example 1 vs 0"'s RMS? That doesn't seem to be the case for the checkpoints I checked.
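Worth noting why a symmetric RMS table is actually the expected outcome: swapping the two examples negates the bracket ([v2, v1] = -[v1, v2]), and RMS discards the sign. A tiny sketch with a stand-in bracket vector (the values are synthetic, not from the post):

```python
import numpy as np

rng = np.random.default_rng(1)
bracket_12 = rng.normal(size=1000)  # stand-in for the bracket field [v1, v2]
bracket_21 = -bracket_12            # antisymmetry: [v2, v1] = -[v1, v2]

rms = lambda v: np.sqrt(np.mean(v ** 2))
assert np.isclose(rms(bracket_12), rms(bracket_21))
```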