https://github.com/llvm/llvm-project/tree/main/llvm/lib/Targ...
https://github.com/llvm/llvm-project/tree/main/llvm/lib/Targ...
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p29... aims to address that, but that won't be included until C++26 (which also includes _ as a sibling commenter mentions).
This is incorrect; the correct strategy is mostly to check the most probable match (the exception being when the people in that match have fewer possible pairings remaining than the next most probable match).
The value of confirming a match, and thus eliminating all other pairings involving those two from the search space, is much higher than a 50/50 chance of getting a no-match and excluding only that single pairing.
Do you have any hard evidence, or are you just basing this on vibes? Because your proposed strategy is emphatically not how you maximize information gain.
Scaling up the problem to larger sizes, is it worth explicitly spending an action to confirm a match that has 99% probability? Is it worth it to (most likely) eliminate 1% of the space of outcomes (by probability)? Or would you rather halve your space?
This isn't purely hypothetical, either. The match-ups skew your probabilities such that your individual outcomes cease to be equally probable, so just looking at raw cardinalities is insufficient.
If you have a single match out of 10 pairings, and you've ruled out 8 of them directly, then if you target one of the two remaining pairs, you nominally have a 50/50 chance of getting a match (or no match!).
Meanwhile, you could have another match-up where you got 6 out of 10 pairings, and you've ruled out 2 of them (thus you have 8 remaining pairs to check, 6 of which are definitely matches). Do you spend your truth booth on the 50/50 shot (which actually will always reveal a match), or the 75/25 shot?
(I can construct examples where you have a 50/50 shot but without the guarantee on whether you reveal a match. Your information gain will still be the same.)
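For concreteness: a yes/no truth booth yields, in expectation, the binary entropy of the match probability. A quick Python sketch, using the probabilities from the examples above:

    from math import log2

    def booth_bits(p):
        """Expected information, in bits, from a yes/no query whose answer is yes with probability p."""
        return -sum(x * log2(x) for x in (p, 1 - p) if x > 0)

    for p in (0.99, 0.75, 0.50):
        print(f"p = {p:.2f}: {booth_bits(p):.2f} bits")
    # p = 0.99: 0.08 bits
    # p = 0.75: 0.81 bits
    # p = 0.50: 1.00 bits

The 50/50 booth yields a full bit no matter which way it comes out; the 99% "sure thing" yields almost nothing.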
It's interesting how close 22.5 is to the 21.8 bits of entropy for 10!, and that has me wondering how often you would win if you followed this strategy with 18 truth booths followed by one match-up (to maintain the same total number of queries).
Simulation suggests about 24% chance of winning with that strategy, with 100k samples. (I simplified each run to "shuffle [0..n), find index of 0".)
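The comment doesn't spell out the simulation, so here is one plausible Monte Carlo reading of "18 truth booths, then one all-or-nothing match-up" in Python. The greedy per-person booth search and the random-but-consistent final guess are my assumptions, not the commenter's code, and the win rate it prints won't necessarily reproduce the quoted ~24%:

    import random

    N = 10            # people per side; 10! possible perfect matchings
    BOOTHS = 18       # truth-booth budget before the single final match-up
    TRIALS = 100_000

    def run_once():
        hidden = list(range(N))
        random.shuffle(hidden)            # hidden[i] = true partner of person i

        budget = BOOTHS
        known = {}                        # person -> confirmed partner
        ruled_out = set()                 # (person, partner) pairs a booth said no to

        # Greedy: pin down one person at a time; if only one candidate remains,
        # infer it without spending a booth.
        for person in range(N):
            candidates = [q for q in range(N) if q not in known.values()]
            matched = False
            while budget > 0 and len(candidates) > 1 and not matched:
                cand = candidates[0]
                budget -= 1
                if hidden[person] == cand:
                    known[person] = cand
                    matched = True
                else:
                    candidates.remove(cand)
                    ruled_out.add((person, cand))
            if not matched and len(candidates) == 1:
                known[person] = candidates[0]     # free inference

        # Final match-up: keep confirmed pairs, fill the rest with a random
        # assignment consistent with every booth "no" seen so far.
        unknown_people = [p for p in range(N) if p not in known]
        unknown_partners = [q for q in range(N) if q not in known.values()]
        while True:
            random.shuffle(unknown_partners)
            guess = dict(zip(unknown_people, unknown_partners))
            if all((p, q) not in ruled_out for p, q in guess.items()):
                break
        guess.update(known)
        return all(guess[p] == hidden[p] for p in range(N))

    wins = sum(run_once() for _ in range(TRIALS))
    print(f"win rate: {wins / TRIALS:.1%}")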
I noticed that for any match-up score of X, the following match-up would keep exactly X pairs in common. So if they scored 4/10 one week, they would change 6 couples before the next one. Employing that approach alone performed worse than the contestants did in real life, so I didn't think it was worth mentioning!
> Employing that approach alone performed worse than the contestants did in real life, so I didn't think it was worth mentioning!
Yeah, this alone should not be sufficient. At the extreme of getting a score of 0, you also need the constraint that you're not repeating known-bad pairs. The same applies to pairs ruled out (or in!) by truth booths.
Further, if your score goes down, you need to use that as a signal that one (or more) of the pairs you swapped out was actually correct, and you need to cycle those back in.
I don't know what a human approximation of the entropy-minimization approach looks like in full. Good luck!
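For anyone curious what the start of that entropy-minimization approach looks like in code, here is a hedged sketch: keep the set of permutations consistent with every truth booth and match-up score seen so far, and rate a candidate truth-booth query by its expected information gain. The small N, example constraints, and names are mine, not from the thread; the show's 10! = 3,628,800 candidates are still enumerable, just slower.

    from itertools import permutations
    from math import log2

    N = 6   # small example instance; the show uses N = 10

    def consistent(perm, booths, ceremonies):
        """booths: {(person, partner): bool} truth-booth answers.
        ceremonies: [(guessed_perm, score)] past match-up results."""
        for (p, q), is_match in booths.items():
            if (perm[p] == q) != is_match:
                return False
        for guess, score in ceremonies:
            if sum(perm[i] == guess[i] for i in range(N)) != score:
                return False
        return True

    def booth_info_gain(candidates, p, q):
        """Expected bits learned by asking the truth booth about pair (p, q)."""
        n = len(candidates)
        k = sum(1 for perm in candidates if perm[p] == q)
        return -sum(x * log2(x) for x in (k / n, 1 - k / n) if x > 0)

    # Hypothetical history: one ceremony scored 2/6, one booth said (0, 0) is not a match.
    booths = {(0, 0): False}
    ceremonies = [((0, 1, 2, 3, 4, 5), 2)]
    candidates = [perm for perm in permutations(range(N))
                  if consistent(perm, booths, ceremonies)]

    best = max(((p, q) for p in range(N) for q in range(N)),
               key=lambda pq: booth_info_gain(candidates, *pq))
    print(len(candidates), "consistent matchings; most informative booth:", best)

A full solver would score whole match-up ceremonies the same way, with one branch per possible score.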
Today I can have ~8 people streaming from my Jellyfin instance, a server that consumes about 35 W measured at the wall. That's ~5 Wh per hour of content, and that's without me even trying.
My understanding is that Netflix can stream 100 Gbps from a 100W server footprint (slide 17 of [0]). Even if you assume every stream is 4k and uses 25 Mbps, that's still thousands of streams. I would guess that the bulk of the power consumption from streaming video is probably from the end-user devices -- a backbone router might consume a couple of kilowatts of power, but it's also moving terabits of traffic.
[0] https://people.freebsd.org/~gallatin/talks/OpenFest2023.pdf
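Running the numbers already quoted in these two comments (no new measurements, just the arithmetic):

    # Per-stream power, using the figures quoted above.
    jellyfin_w, jellyfin_streams = 35, 8
    netflix_w, netflix_gbps, stream_mbps = 100, 100, 25

    netflix_streams = netflix_gbps * 1000 / stream_mbps    # 4000 concurrent 25 Mbps streams
    print(f"Jellyfin: {jellyfin_w / jellyfin_streams:.1f} W per stream")        # ~4.4 W
    print(f"Netflix:  {netflix_w / netflix_streams * 1000:.0f} mW per stream")  # ~25 mW

A watt of continuous draw is a watt-hour per hour, so those are also the server-side energy figures per stream-hour; the end-user devices and the network in between are on top of that.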
From https://blog.cr.yp.to/20251004-weakened.html#standards, linked in TFA:

IETF consensus does not require that all participants agree although this is, of course, preferred. In general, the dominant view of the working group shall prevail. (However, it must be noted that "dominance" is not to be determined on the basis of volume or persistence, but rather a more general sense of agreement.) Consensus can be determined by a show of hands, humming, or any other means on which the WG agrees (by rough consensus, of course). Note that 51% of the working group does not qualify as "rough consensus" and 99% is better than rough. It is up to the Chair to determine if rough consensus has been reached.
The goal has never been 100%, but it is not enough to merely have a majority opinion.

The UK should pound sand.
Is the strategy really just "get new federal laws passed so the UK can't shove these regulations down our throats"? Is that going to happen on a timeline that makes sense for this specific case?
Oversubscription is expected to a certain degree (it's fundamentally the same concept as "statistical multiplexing"). But even oversubscription in itself is not guaranteed to result in bufferbloat -- appropriate traffic shaping (especially shaping that "encourages" congestion control algorithms to back off sooner) can mitigate a lot of those issues. And it can be hard to differentiate between bufferbloat at the last mile and bufferbloat within the ISP's backbone.
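To put a rough number on the bufferbloat half of this, a toy back-of-the-envelope in Python (the link rate and buffer sizes are made-up examples, not from this comment): once a standing queue builds at an oversubscribed bottleneck, every packet waits behind it, so the added latency is simply queue depth divided by link rate.

    # Added latency from a standing queue at a bottleneck (illustrative numbers only).
    LINK_BYTES_PER_SEC = 50e6 / 8    # a 50 Mbit/s bottleneck link

    for queue_bytes in (64 * 1024, 1024**2, 32 * 1024**2):    # 64 KiB, 1 MiB, 32 MiB
        delay_ms = queue_bytes / LINK_BYTES_PER_SEC * 1000
        print(f"{queue_bytes >> 10:>6} KiB standing queue -> ~{delay_ms:.0f} ms added latency")
    # ~10 ms, ~168 ms, ~5369 ms respectively

Shaping egress slightly below the bottleneck rate (or using an AQM like fq_codel or CAKE) keeps that standing queue, and therefore the added delay, close to zero.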