Async Programming Is Just Inject Time

marvinborner 36 points 13 comments March 06, 2026
willhbr.net · View on Hacker News

Discussion Highlights (6 comments)

gpderetta

> Your CPU doesn’t know or care what functions are [...]

Well, most architectures do not use plain jumps with implicit arguments for function calls, but have explicit call/ret or branch-with-link instructions. Even those that used jumps had branch hints. The reason is that the microarchitecture must know about call/ret pairs to be able to predict the return path, as the generic dynamic predictor is just not good enough for such a common path. Reduced prediction performance compared to normal calls is actually a concern for some coroutine and async code.

> in C we don’t have any dynamic lookup inside functions—every dynamic jump comes from an explicit conditional statement or function pointers.

rdevilla

> To start with, you need to remember that functions don’t exist. They’re made up. They’re a social construct.

https://www.felixcloutier.com/x86/call

Sufficiently large distances of abstraction from the concrete, underlying mechanics are indistinguishable from religion and superstitious belief. Expect LLMs to widen this gap in understanding. We are not far away from the tech priests of the Adeptus Mechanicus.

hackingonempty

> If you want to read more, I’d recommend starting with the Effekt and Koka language tours

Instead of exploring a research language that nobody uses, you could try a mature effects system for a semi-popular language. I think Zio is great and runs on the JVM and ScalaJS. https://zio.dev/

noelwelsh

This article would benefit from an introduction that lays out the structure of what is to come. I'm expecting an article on effect systems, but it jumps straight into a chunky section on the implementation of function calls. I'm immediately wondering why this is here, and what it has to do with effect systems.

Also, this is a very operational description: how it works. It's also possible to give a denotational description: what it means. Having both is very useful. I find that people tend to start with the operational and then move to the denotational.

jcranmer

> Your CPU doesn’t know or care what functions are

This has already been commented on by a couple of people, but yes, your CPU absolutely does care a lot about functions. At the very least, call/ret matching is important for branch prediction, but the big architectures nowadays have shadow stacks and CFI checks that require you to use call/ret for regular function calls. x86 has a more thoroughly built-in notion of functions, since it has a (now mostly defunct) infrastructure for doing task switching via regular-ish call instructions.

> The toString method that gets called depends on the type of the receiver object. This isn’t determined at compile time, but instead a lookup that happens at runtime. The compiler effectively generates a switch statement that looks at the result of getClass and then calls the right method. It’s smarter than that for performance I’m sure, but conceptually that’s what it’s doing.

No, it's conceptually doing the exact opposite. Class objects have a vtable pointer, a pointer to a list of functions, and the compiler is reading the vtable and calling the n'th function via function pointer. The difference is quite important: vtables are an inherently open system (anyone can define their own vtable, if they're sufficiently crazy), but switches are inherently closed (the complete set of possible targets has to be known at compile time). Not that I've written it up anywhere, but I've come to think of the closed nature of switch statements as fundamentally anathema to the ideals of object-oriented programming.

jiehong

> having to add IO effects to all functions

Sounds like Haskell, and its monads. I think it does end up being very similar in the end. Would you compare effects and monads?
