That's called 4K aliasing.
4K aliasing occurs when you store to one memory location and then load from another memory location that is offset from the original by a multiple of 4KB, i.e., the two addresses share the same low 12 bits.
(and if it is not apparent to some readers, most modern x86-based systems use 64-byte cache lines, which are sort of analogous to disk blocks - quite a few memory operations tend to happen in 64-byte chunks under the covers - the ones that don't are "special")
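To make it concrete, here is a minimal sketch of the access pattern (buffer layout, loop count, and names are mine, not the author's benchmark):

    #include <cstdint>
    #include <vector>

    int main() {
        std::vector<std::uint8_t> buf(2 * 4096 + 64);
        std::uint8_t* a = buf.data();
        std::uint8_t* b = a + 4096;  // 4KB from `a`: same low 12 address bits
        std::uint64_t sum = 0;
        for (std::uint32_t i = 0; i < 100000000; ++i) {
            a[i & 63] = static_cast<std::uint8_t>(i);  // store...
            sum += b[i & 63];  // ...then a load 4KB away; the early
                               // store-to-load conflict check compares only
                               // the low address bits, falsely suspects a
                               // dependence, and stalls the load
        }
        return static_cast<int>(sum & 0xff);
    }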
I wonder what the page size is on his system (and what effective alignment his pointers have). If it's 4K, the sizes look really close to 0x1000 and 0x2000 - maybe crossing page boundaries?
I would ignore this benchmark because it’s not going to predict anything for real world code.
In real world code, your caches and the CPU’s pipeline are influenced by some complex combination of what happens at the call site and what else the program is doing. So, a particular kind of call will perform better or worse than another kind of call depending on what else is happening.
The version of this benchmark that would have had predictive power is if you compared different kinds of call across a sufficiently diverse sampling of large programs that used those calls and also did other interesting things.
> Passing structs up to size 256 is very cheap, and uses SIMD registers.
Presumably this means for all arguments combined? If for example you pass four pointers each pointing to a 256-byte struct, you probably don’t want to pass all four structs (or even just one or two of the four?) by value instead.
If I understand correctly, he is passing a single struct whose size he varies. This struct is being memcpy'd to the stack (except for very small struct sizes, which can be passed directly in registers), so basically we are looking at a curve of memcpy performance by size.
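If that reading is right, the shape under test is roughly the following (a sketch; the struct, its size, and the attribute are my own illustration):

    struct Payload {
        unsigned char bytes[1024];  // this size is what the benchmark varies
    };

    // For a by-value argument this large, common ABIs (e.g. x86-64 SysV)
    // pass it in memory, so every call effectively memcpy's the struct
    // onto the stack; only small structs travel in registers.
    __attribute__((noinline))  // GCC/Clang: keep the copy from being optimized out
    unsigned sum_ends(Payload p) {
        return p.bytes[0] + p.bytes[sizeof(p.bytes) - 1];
    }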
There is no pass-by-value overhead. There are only implementation decisions.
Pass by value describes the semantics of a function call, not implementation. Passing a const reference in C++ is pass-by-value. If the user opts to pass "a copy" instead, nothing requires the compiler to actually copy the data. The compiler is required only to supply the actual parameter as if it was copied.
This might be true in the abstract but it's not true of actual compilers dealing with real world calling conventions. Absent inlining or whole program optimization, calling conventions across translation units don't leave much room for flexibility.
The semantics of pass by const reference are also not exactly the same as pass by value in C++. The compiler can't in general assume a const reference doesn't alias other arguments or global variables and so has to be more conservative with certain optimizations than with pass by value.
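A small illustration of that conservatism (made-up names; both accesses have type int, so the compiler must assume they can overlap):

    struct Config { int limit; };

    int g;

    int by_ref(const Config& c) {
        g = 0;           // might store through an alias of c.limit...
        return c.limit;  // ...so c.limit must be re-read after the store
    }

    int by_value(Config c) {
        g = 0;           // cannot touch the callee's private copy
        return c.limit;  // can stay cached in a register across the store
    }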
Unfortunately "the compiler is required to supply the actual parameter as if it was copied" is leaky with respect to the ABI and linker. In C and C++ you cannot fully abstract it.
I usually use ChatGPT for such microbenchmarks (of course I design it myself and use the LLM only as a dumb code generator, so I don't need to remember how to measure time with nanosecond precision; I still have to add workarounds to prevent the compiler from over-optimizing the code). It's amazing that when you get curious (for example, what is the fastest way to find an int in a small sorted array: linear search, binary search, or a branchless full scan?) you can get the answer in a couple of minutes instead of spending 20-30 minutes writing the code manually.
By the way, the fastest way was branchless linear scan up to 32-64 elements, as far as I remember.
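From memory it looked something like this (a reconstruction, not the actual generated code): scan the whole array with no early exit, so there is no data-dependent branch to mispredict:

    #include <cstddef>

    // Index of the first element >= key (what std::lower_bound returns),
    // computed by counting the elements that are smaller.
    std::size_t branchless_lower_bound(const int* a, std::size_t n, int key) {
        std::size_t idx = 0;
        for (std::size_t i = 0; i < n; ++i)
            idx += static_cast<std::size_t>(a[i] < key);  // add, never branch
        return idx;
    }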
In C++, I’ve noticed that ChatGPT is fixated on unordered_map. No matter the situation, when I ask what container would be wise to use, it’s always unordered_map. Even when you tell it the container will have at most a few hundred elements (a size that would allow you to iterate through a vector to find what you are looking for before the unordered_map even has its morning coffee) it pushes the map… with enough prodding, it will eventually concede that a vector pretty much beats everything for small .size()’s.
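The comparison in question is roughly the following (a sketch with made-up names): one contiguous sweep versus a hash computation plus a bucket pointer chase:

    #include <algorithm>
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    using KV = std::pair<std::string, int>;

    // Linear scan of a flat array: cache-friendly and trivially prefetched.
    const int* find_flat(const std::vector<KV>& v, const std::string& key) {
        auto it = std::find_if(v.begin(), v.end(),
                               [&](const KV& kv) { return kv.first == key; });
        return it == v.end() ? nullptr : &it->second;
    }

    // Hash lookup: pays up front for hashing the key and chasing a bucket
    // pointer, which only wins once the scan above gets long enough.
    const int* find_hashed(const std::unordered_map<std::string, int>& m,
                           const std::string& key) {
        auto it = m.find(key);
        return it == m.end() ? nullptr : &it->second;
    }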
What a fascinating CPU bug. I am quite curious as to how that came to pass.
I wonder if it has to do with a non-ideal implementation of virtual address resolution for the next page.
It would be great to repeat the author’s tests on other CPU models.
> The semantics of pass by const reference are also not exactly the same as pass by value in C++.
I can cast the const away. The implementation does not hide this detail. The semantics therefore must be understood by the programmer.
> By the way, the fastest way was branchless linear scan up to 32-64 elements, as far as I remember.
I don't know about this; whenever I've benchmarked it on my use cases, unordered_map started to become faster than vector at well below 100 elements.
Yet remembering how to measure time with nanosecond precision is the burden?
> By the way, the fastest way was branchless linear scan up to 32-64 elements, as far as I remember.
The analysis presented in the article is far more interesting, qualified, and useful than what you've produced here.