Understanding the Kalman filter with a simple radar example

alex_be 296 points 38 comments April 08, 2026
kalmanfilter.net · View on Hacker News

Discussion Highlights (7 comments)

alex_be

Author here. I recently updated the homepage of my Kalman Filter tutorial with a new example based on a simple radar tracking problem. The goal was to make the Kalman Filter understandable to anyone with basic knowledge of statistics and linear algebra, without requiring advanced mathematics. The example starts with a radar measuring the distance to a moving object and gradually builds intuition around noisy measurements, prediction using a motion model, and how the Kalman Filter combines both. I also tried to keep the math minimal while still showing where the equations come from. I would really appreciate feedback on clarity. Which parts are intuitive? Which parts are confusing? Is the math level appropriate? If you have used Kalman Filters in practice, I would also be interested to hear whether this explanation aligns with your intuition.
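For concreteness, the kind of radar tracking the tutorial describes can be sketched as a small simulation. Everything below is an illustrative assumption, not code from the tutorial: a constant-velocity motion model, noisy range-only measurements, and the predict/update cycle that combines them.

```python
import numpy as np

def kalman_radar_demo(n_steps=50, dt=1.0, true_velocity=2.0,
                      meas_noise_std=5.0, seed=0):
    """Track the range to an object moving at constant velocity from
    noisy radar range measurements, using a 2-state (position, velocity)
    Kalman filter. Returns (mean raw error, mean filtered error)."""
    rng = np.random.default_rng(seed)

    # Constant-velocity motion model over state x = [position, velocity]
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])           # state transition
    H = np.array([[1.0, 0.0]])           # radar measures position only
    Q = np.array([[0.01, 0.0],
                  [0.0, 0.01]])          # process noise covariance (assumed)
    R = np.array([[meas_noise_std**2]])  # measurement noise covariance

    x = np.array([0.0, 0.0])             # initial state estimate
    P = np.eye(2) * 100.0                # initial uncertainty (large)

    errors_raw, errors_filtered = [], []
    true_pos = 0.0
    for _ in range(n_steps):
        true_pos += true_velocity * dt
        z = true_pos + rng.normal(0.0, meas_noise_std)  # noisy measurement

        # Predict: propagate estimate through the motion model
        x = F @ x
        P = F @ P @ F.T + Q

        # Update: blend the prediction with the new measurement
        y = z - H @ x                    # innovation
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P

        errors_raw.append(abs(z - true_pos))
        errors_filtered.append(abs(x[0] - true_pos))

    return float(np.mean(errors_raw)), float(np.mean(errors_filtered))
```

Running this, the filtered position error should come out well below the raw measurement error once the filter has converged, which is the whole point of combining the motion model with the measurements.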

smokel

This seems to be an ad for a fairly expensive book on a topic that is described in detail in many (free) resources. See for example: https://rlabbe.github.io/Kalman-and-Bayesian-Filters-in-Pyth... Is there something in this particular resource that makes it worth buying?

joshu

i liked how https://www.bzarg.com/p/how-a-kalman-filter-works-in-picture... uses color visualization to explain

palata

I really loved this one: https://www.bzarg.com/p/how-a-kalman-filter-works-in-picture...

lelandbatey

Kalman filters are very cool, but when applying them you've got to know that they're not magic. I struggled to apply Kalman filters in a toy project about ten years ago because I hadn't internalized that they excel at offsetting low-quality data when you sample at a higher rate. You can "retroactively" apply a Kalman filter to an existing dataset and see some improvement, but you only get amazing results by sampling your very noisy data at a much higher rate than a "good enough" sensor would need. The higher your sample rate, the better your results will be. In that way, a Kalman filter is something you want to design around, not a "fix all" for data you already have.
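The sampling-rate point can be seen in a toy setting (my own illustrative sketch, not from this comment): a scalar filter estimating a constant with no process noise. Each measurement update shrinks the posterior variance, so with measurement variance R the variance after N samples falls toward roughly R/N — more samples of a noisy sensor buy you a tighter estimate.

```python
def posterior_std(meas_std, n_samples, prior_var=1e6):
    """Posterior standard deviation of a scalar Kalman filter estimating
    a constant, after n_samples measurement updates (no process noise)."""
    var = prior_var
    r = meas_std ** 2
    for _ in range(n_samples):
        k = var / (var + r)    # Kalman gain
        var = (1.0 - k) * var  # covariance update: 1/var accumulates 1/r
    return var ** 0.5
```

For example, with a sensor whose noise std is 10, the posterior std after 100 updates is about 1 — a tenfold improvement purely from sampling more.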

roger_

Here's my (hopefully) intuitive guide:

1. Understand weighted least squares and how you can update an initial estimate (prior mean and variance) with a new measurement and its uncertainty (i.e. inverse-variance weighted least squares).
2. This works because the true mean hasn't changed between measurements. What if it did?
3. The KF uses a model of how the mean changes to predict what it should be now based on the past, including an inflation factor on the uncertainty, since predictions aren't perfect.
4. After the prediction, it becomes the same problem as (1), except you use the predicted values as the initial estimate.

There are some details about the measurement matrix (when your measurement is a linear combination of the true value, i.e. the state) and the Kalman gain, but these all come from the least squares formulation. Least squares is the key, and you can prove it's optimal under certain assumptions (e.g. Bayesian MMSE).
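The scalar version of those steps can be sketched as two tiny functions (all numeric values below are illustrative assumptions):

```python
def fuse(prior_mean, prior_var, z, meas_var):
    """Step 1: inverse-variance weighted least squares update of a prior
    (mean, variance) with a measurement z of variance meas_var."""
    k = prior_var / (prior_var + meas_var)  # scalar Kalman gain
    mean = prior_mean + k * (z - prior_mean)
    var = (1.0 - k) * prior_var
    return mean, var

def predict(mean, var, velocity, dt, process_var):
    """Step 3: propagate the estimate through a constant-velocity model,
    inflating the variance because the prediction isn't perfect."""
    return mean + velocity * dt, var + process_var

# One full cycle: predict, then step 4 reduces to step 1 with the
# predicted values as the prior.
m, v = 10.0, 4.0                                             # prior position
m, v = predict(m, v, velocity=2.0, dt=1.0, process_var=1.0)  # -> 12.0, 5.0
m, v = fuse(m, v, z=13.0, meas_var=5.0)                      # -> 12.5, 2.5
```

With equal predicted and measurement variances (5.0 each), the gain is 0.5 and the update lands halfway between prediction and measurement, with the combined variance smaller than either input — exactly the inverse-variance weighting described in step 1.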

anamax

There's also a .com - https://thekalmanfilter.com/kalman-filter-explained-simply/
