TillE · 4 months ago
> VisualC++ doesn’t have its source code available

Got all the way here and had to look back up to see this post was from 2019. The MSVC standard library has been open source for several years now. https://github.com/microsoft/STL

Though to be perfectly honest, setting a breakpoint and looking at the disassembly is probably easier than reading standard library code.

snfernandez · 4 months ago
I was working at MS at the time and actually had access to the source code (my project involved devdiv). I don't remember the exact details, but I opted not to add any of my "private" knowledge to the post.

I agree with you that I prefer looking at optimized assembly with symbols rather than following code through files (which are usually filled with #ifdefs and macros).

tialaramex · 4 months ago
As STL (nominative determinism at work) points out in the r/cpp thread about this, even when that git repo didn't exist you could have gone to see how this template works: C++ has to monomorphize generics somehow, which means that when you write shared_ptr<goose> your C++ compiler needs to compile the source code for shared_ptr with T replaced by goose.
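
A minimal sketch of what that means in practice (goose is just a stand-in type): instantiating shared_ptr<goose> forces the compiler to compile the library's template source right there, which is also why you can step into it with a debugger.

    #include <memory>

    struct goose {};

    int main() {
        // The compiler must instantiate shared_ptr<goose> (and make_shared's
        // control block machinery) from the vendor's <memory> header here;
        // build with -g and step into this line to land in that header's source.
        std::shared_ptr<goose> g = std::make_shared<goose>();
        return g.use_count() == 1 ? 0 : 1;
    }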

But you're correct, while I can read https://doc.rust-lang.org/src/alloc/sync.rs.html (where Rust's Arc is defined) ...

... good luck to me in https://github.com/microsoft/STL/blob/main/stl/inc/memory

There are tricks to cope with C++ macros not being hygienic, layered on top of tricks to cope with the fact C++ doesn't have ZSTs, tricks to reduce redundancy in writing all this out for related types, and hacks to improve compiler diagnostics when you do something especially stupid. Do its maintainers learn to read like this? I guess so, as it's Open Source.

chuckadams · 4 months ago
It also helps that the Rust version is lavishly documented with examples, and the C++ version has barely any comments at all.
manwe150 · 4 months ago
> In conclusion, I’ll assume this is not a typical scenario and it is mostly safe.

Ughh, this brings back bad memories of the days I spent trying to diagnose why glibc would often give wrong answers for some users and not others (they’ve since mitigated this problem slightly by combining pthreads and libdl into the same library). I wish they would get rid of this, since even the comment on it notes that the optimization is unsound (the ability to make syscalls directly, as used by Go and others, makes this optimization potentially dangerous). It also upsets static analysis tools, since they see that glibc doesn’t appear to have the synchronization the library promises.
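
For anyone who hasn't seen it, the dispatch being described looks roughly like this (a hedged paraphrase, not the verbatim libstdc++ code):

    // If the runtime believes no second thread can exist, the shared_ptr
    // refcount update skips the LOCK-prefixed instruction entirely.
    inline void atomic_add_dispatch_sketch(int* counter, int delta, bool threads_active) {
        if (threads_active)
            __atomic_fetch_add(counter, delta, __ATOMIC_ACQ_REL);  // LOCK ADD on x86
        else
            *counter += delta;  // non-atomic fast path; unsound if a thread was
                                // created behind the runtime's back (e.g. raw clone())
    }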

TuxSH · 4 months ago
In particular, std::thread's constructor (the non-default one) has a workaround against LTO optimizing the pthread call away: https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-...

> Parallelism without pthread

To get __atomic_add_dispatch to work, it looks like one is expected to ensure pthread_create is referenced. One way to do that without creating a pthread or a std::thread is to reference it outside LTO'd files, or like they did above.
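
Something along these lines should do it (a sketch assuming GCC/Clang; whether it actually flips the gthread "threads are active" check also depends on how you link against pthreads):

    #include <pthread.h>

    void keep_pthread_dependency() {
        // Take the address and launder it through an empty asm statement so
        // neither the compiler nor LTO can drop the reference -- same spirit
        // as the workaround in the std::thread constructor linked above.
        void (*dep)(void) = reinterpret_cast<void (*)(void)>(&pthread_create);
        asm volatile("" : : "r"(dep));
    }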

> > It is possible to create threads by using the OS syscalls bypassing completely the requirement of pthread

As the other person said, it is impractical to do so, and it's easier to just reimplement the gthread and pthread functions as hooks (some toolchains do this).

mackman · 4 months ago
> It is possible to create threads by using the OS syscalls bypassing completely the requirement of pthread. (Un)fortunately, I couldn’t find any popular libraries that implement the functionality by using the syscall interface instead of relying on pthread.

I have tried and failed to do this for a C++ program because the amount of C++ runtime static init/shutdown stuff you would need to deal with isn't practical to implement yourself.

ptspts · 4 months ago
The correct spelling is in lowercase: shared_ptr<T> . The title of the article is correct, the title of the HN post is incorrect.
tialaramex · 4 months ago
In hindsight this convention seems weird to me by the way. I didn't question it for the decades I was paid money to write C, but after adopting Rust it jumped out more that it's weird how monocase the C and C++ standard libraries are.

Maybe there's a reason I'd never run into, but this seems like a missed opportunity. Even if I have no idea what Goose is, I can see it's a type; that seems like a win.

recursivecaveat · 4 months ago
Yeah I don't know why so many C programmers ended up on a convention where case is entirely unused. I wonder if it's some ancient compatibility thing that they have just been keeping self-consistent with forever. To me not using case is like designing a road map in black-and-white just because you didn't have any ideas for what colors should represent.
Waterluvian · 4 months ago
I believe HN does that automatically.
sesuximo · 4 months ago
Why is the atomic version slower? Is it slower on modern x86?
loeg · 4 months ago
Agner's instruction manual says "A LOCK prefix typically costs more than a hundred clock cycles," which might be dated but is directionally correct. (The atomic version is LOCK ADD.)

If you go to the CPU-specific tables, LOCK ADD is like 10-50 (Zen 3: 8, Zen 2: 20, Bulldozer: 55, lol) cycles latency vs the expected 1 cycle for regular ADD. And about 10 cycles on Intel CPUs.

So it can be starkly slower on some older AMD platforms, and merely ~10x slower on modern x86 platforms.
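
Easy to sanity-check with a toy loop like this (a rough single-threaded sketch, not from the article; the plain counter is volatile only so the loop doesn't get folded away):

    #include <atomic>
    #include <chrono>
    #include <cstdio>

    int main() {
        constexpr long N = 100000000;

        volatile long plain = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (long i = 0; i < N; ++i)
            plain = plain + 1;                                    // plain ADD
        auto t1 = std::chrono::steady_clock::now();

        std::atomic<long> counter{0};
        for (long i = 0; i < N; ++i)
            counter.fetch_add(1, std::memory_order_relaxed);      // LOCK ADD on x86
        auto t2 = std::chrono::steady_clock::now();

        using ms = std::chrono::milliseconds;
        std::printf("plain:  %lld ms\n", (long long)std::chrono::duration_cast<ms>(t1 - t0).count());
        std::printf("atomic: %lld ms\n", (long long)std::chrono::duration_cast<ms>(t2 - t1).count());
    }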

Tuna-Fish · 4 months ago
On modern CPUs atomic adds are now reasonably fast, but only when they are uncontended. If the cache line the value is on has to bounce between CPUs, that is usually +100ns (not cycles) or so.

Writing performant parallel code always means absolutely minimizing communication between threads.
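
A quick way to see that in practice (sketch, not from the article): have two threads bump one shared atomic, then give each thread its own cache-line-aligned counter and sum at the end. On most x86 boxes the first version is dramatically slower even though the instruction count is identical.

    #include <atomic>
    #include <thread>

    struct alignas(64) PaddedCounter {     // 64 bytes = typical x86 cache line
        std::atomic<long> value{0};
    };

    int main() {
        constexpr long N = 10000000;

        std::atomic<long> contended{0};    // both threads hammer this one cache line
        PaddedCounter per_thread[2];       // each thread owns its own cache line

        auto bump_shared = [&] { for (long i = 0; i < N; ++i) contended.fetch_add(1, std::memory_order_relaxed); };
        std::thread a(bump_shared), b(bump_shared);
        a.join(); b.join();

        auto bump_own = [&](int idx) { for (long i = 0; i < N; ++i) per_thread[idx].value.fetch_add(1, std::memory_order_relaxed); };
        std::thread c(bump_own, 0), d(bump_own, 1);
        c.join(); d.join();

        long total = per_thread[0].value + per_thread[1].value;
        return (contended.load() == 2 * N && total == 2 * N) ? 0 : 1;
    }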

eptcyka · 4 months ago
Atomic write operations force a cache line flush and can wait until the memory is updated. Atomic reads have to be read from memory or a shared cache. Atomics are slow because memory is slow.
nly · 4 months ago
This isn't true.

Atomic operations work inside the confines of the cache coherence protocol. Nothing has to be flushed to main memory, or even to a lower cache.

An atomic operation does something more along the lines of emitting an invalidation, putting the cache line into an exclusive state, and then ignoring snoop and invalidation requests from other cores while it operates.

Krssst · 4 months ago
I don't think an atomic operation necessarily demands a cache flush. L1 cache lines can move across cores as needed in my understanding (maybe not on multi-socket machines?). Barriers are required if further memory ordering guarantees are needed.
namibj · 4 months ago
Sure, atomic reads need to at least check against a shared cache; but atomic writes only have to ensure the write is visible to any other atomic read by the time the atomic write finishes. And some less-strict memory orderings/consistency models have somewhat weaker prescriptions, like needing an explicit write fence (a writeback-cache-atomicity flush) to ensure those writes hit the globally visible (shared) cache.

Essentially nothing but DMA/PCIe accesses will skip looking at the shared/global cache in hopes of a read hit before going to the underlying memory, at least on any system (more specifically, CPU) you'd want to run modern Linux on.

There are non-temporal memory accesses where reads don't leave a trace in cache and writes only use a limited amount of cache for some modest writeback "early (reported) completion/throughput-smoothing" effects, as well as some special-purpose memory access types.

For example, on x86, write combining: it's a special mode set in the page table entry responsible for that virtual address, where writes go through a small store-combining buffer (typically a single-digit number of cache lines) local to the core, used as a writeback cache. That way, small writes from a loop (like translating a CPU-side pixel buffer to a GPU pixel encoding while writing through a PCIe mapping into VRAM) can accumulate into full cache lines, eliminating any need for read-before-write transfers of those cache lines and generally making the writeback transfers more efficient for cases where you go through PCIe/InfiniBand/RoCE (which can benefit from typically up to 64 cache lines being bundled together to reduce packet/header overhead).
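
For completeness, the non-temporal stores mentioned a couple of paragraphs up look something like this on x86 (a sketch using the SSE2 intrinsics; assumes dst is 16-byte aligned and count is a multiple of 4):

    #include <immintrin.h>
    #include <cstddef>
    #include <cstdint>

    void fill_nontemporal(std::uint32_t* dst, std::uint32_t value, std::size_t count) {
        __m128i v = _mm_set1_epi32(static_cast<int>(value));
        for (std::size_t i = 0; i < count; i += 4)
            _mm_stream_si128(reinterpret_cast<__m128i*>(dst + i), v);  // bypasses the cache,
                                                                       // goes via the WC buffers
        _mm_sfence();  // order the streamed stores before subsequent normal stores
    }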

What is slow, though, at least on some contemporary relevant architectures like Zen 3 (named only because I had checked that one in some detail), are single-thread-originated random reads that break the L2 cache's prefetcher (especially if they don't hit any DRAM page twice). The L1D cache has a fairly limited number of asynchronous cache-miss handlers (for Zen 1 [0] and Zen 2 [1] I could find mention of 22), and random DRAM read latency is around 50~100 ns (assuming you use 1G "giant" pages and stay within the 32 GB of DRAM that the 32 entries of the L1 TLB can therefore cover, and especially once some concurrency causes minor congestion at the DDR4 interface). That drops request inverse throughput to around 5 ns/cacheline, i.e. 12.8 GB/s.

That's a fraction of the 51.2 GB/s per-CCD limit on streaming reads (e.g. with spec-conformant DDR4-3200 on a mainstream Zen 3 desktop processor like a "Ryzen 9 5900"; a 5950X has two compute chiplets plus a northbridge, and it's the connection to the northbridge that's limiting here). Technically it'd be around 2% lower, because you'd have to either 100% fill the DDR4 data interface (not quite possible in practice) or add some reads through PCIe (attached to the northbridge's central data hub, which doesn't seem to have any throughput limits other than those of the access ports themselves).

[0]: https://www.7-cpu.com/cpu/Zen.html [1]: https://www.7-cpu.com/cpu/Zen2.html