Comparing C/C++ unity build with regular build on a large codebase (2024)
PaulHoule
11 points
2 comments
April 01, 2026
Related Discussions
- SWE-CI: Evaluating Agent Capabilities in Maintaining Codebases via CI mpweiher · 114 pts · March 08, 2026 · 47% similar
- Show HN: Claude's Code – tracking the 19M+ commits generated by Claude on GitHub phantomCupcake · 13 pts · March 24, 2026 · 43% similar
- A new C++ back end for ocamlc glittershark · 146 pts · April 01, 2026 · 43% similar
- Show HN: GDSL – 800 line kernel: Lisp subset in 500, C subset in 1300 FirTheMouse · 62 pts · March 15, 2026 · 42% similar
- The Claude Code Source Leak: fake tools, frustration regexes, undercover mode alex000kim · 1057 pts · March 31, 2026 · 42% similar
Discussion Highlights (2 comments)
pstomi
Reading your initial script, I see that there was absolutely no parallelisation in the initial build. Was that a deliberate choice because you wanted to compare only single-core performance?
jjmarr
If you're doing single-core builds, you will get impressive speedups from unity builds. This is because C++ compilers spend a lot of time redundantly parsing the same headers included in different .cpp files. Normally, the gains from compiling each .cpp file in parallel outweigh the redundant parsing, but if you're artificially limited in parallelism, unity builds can pay for themselves very quickly, as they did in the article. C++20 modules try to split the difference by compiling each header once into a precompiled module, allowing that work to be reused across different .cpp files. Unfortunately, this means C++ compilation is no longer embarrassingly parallel, which is one reason build-system adoption of modules has been slow.