LOL
https://www.researchgate.net/publication/228824849_Memory_Ba...
I can't give you the final feedback at the moment; I've only briefly looked through the articles for now.
The first ones are very accessible (given my prior knowledge of Lamport clocks and happens-before as in the Java memory model); about the later ones I'm currently not sure they're very clear.
But they are easier than the docs I used when I first approached this topic in the past, like Documentation/memory-barriers.txt and Doug Lea's texts.
(The OP says one-time codes are worse than passwords. In the case of phishing, passwords fail the same way as one-time codes.)
I was also being sarcastic/provocative in the previous comment when I said the GOOD site always includes a warning with the code, making the attack impossible. A variation of the attack is very widely used by phone scammers: "Hello, we are updating the intercom in your apartment block. Please tell us your name and phone number. OK, you will receive a code now; tell it to us." Yet many online services and banks still send one-time codes without a warning to never share them!
The phishing point can also be used in defence of one-time codes: if the GOOD service used passwords instead of one-time codes, the BAD one could just initiate a phishing attack, redirecting the user to a fake login page. People today are used to the "Login with" flow.
I’m an avid reader. But there are limits to what I can process, and our world has become so full of noise that it has become a coping strategy for brains to selectively ignore stuff if they feel it’s not important at the moment. That effect becomes even more pronounced as the brain deteriorates with age.
And more so if you receive them constantly.
But of course, you are entitled to your opinion, even if it's wrong.
It was that “[t]hey only read what they need to finish what they are currently trying to do.”
Those are two different claims.
Do not share the code 3456
and will read the words, because they read left to right. The code should be in the same font as the rest of the text.
In a multi-threaded context, memory reads and writes can be reordered by hardware. It gets more complicated with shared cache. Imagine that you have core 1 writing to some address at (nearly) the same time that core 2 reads from it. Does core 2 read the old value or the new one? Especially if they don't share the same cache -- core 1 might "write" to a given address, but the write only lands in core 1's cache and is then "scheduled" to be written out to main memory. Meanwhile, when core 2 later tries to read that address, it's not in its cache, so it pulls from main memory before core 1's cache has flushed. As far as core 2 is concerned, the write happened after its read, even though physically the write finished in core 1 before core 2's read instruction may even have started.
A memory barrier tells the hardware to ensure that a read really is ordered "happens-before" (or after) a given write to the same address. It's often (but not always) implemented as a cache and memory synchronization across cores.
I found Fedor Pikus's CppCon 2017 presentation [1] to be informative, and Michael Wong's 2015 presentation [0] filled in some of the gaps.
C++, being a generic language targeting many hardware implementations, provides much more fine-grained memory-ordering concepts [2], which matters on hardware with more granular barrier types than what most people are used to from x86-derived memory models.
[0]: https://www.youtube.com/watch?v=DS2m7T6NKZQ
[1]: https://www.youtube.com/watch?v=ZQFzMfHIxng
[2]: https://en.cppreference.com/w/cpp/atomic/memory_order.html