vitus commented on We have ipinfo at home or how to geolocate IPs in your CLI using latency   blog.globalping.io/we-hav... · Posted by u/jimaek
altairprime · 10 days ago
Courtesy of Xfinity and Charter overprovisioning most neighborhoods' circuits, we already have that today for a significant subset of U.S. Internet users due to the resulting bufferbloat (up to 2500ms on a 1000/30 connection!)
vitus · 10 days ago
You probably meant to say oversubscribing, not overprovisioning.

Oversubscription is expected to a certain degree (this is fundamentally the same concept as "statistical multiplexing"). But even oversubscription in itself is not guaranteed to result in bufferbloat -- appropriate traffic shaping (especially to "encourage" congestion control algorithms to back off sooner) can mitigate a lot of those issues. And, it can be hard to differentiate between bufferbloat at the last mile vs within the ISP's backbone.
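
For a sense of scale (rough numbers of my own, assuming the parent's 2500ms queue sits entirely on the 30Mbps uplink of that 1000/30 connection):

    # Queueing delay = standing queue / drain rate, so 2500 ms of
    # bufferbloat on a 30 Mbps uplink implies ~9 MB of buffered data.
    uplink_bps = 30e6   # 30 Mbps uplink
    delay_s = 2.5       # 2500 ms of observed queueing delay
    buffered_bytes = uplink_bps / 8 * delay_s
    print(f"~{buffered_bytes / 1e6:.1f} MB standing queue")  # ~9.4 MB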

vitus commented on Rex is a safe kernel extension framework that allows Rust in the place of eBPF   github.com/rex-rs/rex... · Posted by u/zdw
amelius · a month ago
For the sake of safety, can't we simply have a back-end that emits eBPF?
vitus · a month ago
We do; most people don't just write eBPF by hand.

https://github.com/llvm/llvm-project/tree/main/llvm/lib/Targ...

vitus commented on map::operator[] should be nodiscard   quuxplusone.github.io/blo... · Posted by u/jandeboevrie
on_the_train · 2 months ago
There is in C++, too (std::ignore). Not sure why the author decided to go with the ancient void cast.
vitus · 2 months ago
std::ignore's behavior outside of use with std::tie is not specified in any finalized standard.

https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p29... aims to address that, but that won't be included until C++26 (which also includes _ as a sibling commenter mentions).

vitus commented on “Are you the one?” is free money   blog.owenlacey.dev/posts/... · Posted by u/samwho
CmdDot · 2 months ago
«As such, you want to pick a pair where the odds are as close to 50/50 as possible.»

This is incorrect; the correct strategy is mostly to check the most probable match (the exception being if the people in that match have fewer possible pairings remaining than the next most probable match).

The value of confirming a match, and thus eliminating all other pairings involving those two from the search space, is much higher than a 50/50 chance of getting a no-match and only excluding that single pairing.

vitus · 2 months ago
> This is incorrect; the correct strategy is mostly to check the most probable match (the exception being if the people in that match have fewer possible pairings remaining than the next most probable match).

Do you have any hard evidence, or are you just basing this on vibes? Because your proposed strategy is emphatically not how you maximize information gain.

Scaling up the problem to larger sizes, is it worth explicitly spending an action to confirm a match that has 99% probability? Is it worth it to (most likely) eliminate 1% of the space of outcomes (by probability)? Or would you rather halve your space?

This isn't purely hypothetical, either. The match-ups skew your probabilities such that your individual outcomes cease to be equally probable, so just looking at raw cardinalities is insufficient.

If you have a single match out of 10 pairings, and you've ruled out 8 of them directly, then if you target one of the two remaining pairs, you nominally have a 50/50 chance of getting a match (or no match!).

Meanwhile, you could have another match-up where you got 6 out of 10 pairings, and you've ruled out 2 of them (thus you have 8 remaining pairs to check, 6 of which are definitely matches). Do you spend your truth booth on the 50/50 shot (which actually will always reveal a match), or the 75/25 shot?

(I can construct examples where you have a 50/50 shot but without the guarantee on whether you reveal a match. Your information gain will still be the same.)
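
To put numbers on this (a quick sketch, using the binary entropy H(p), i.e. the expected information from a yes/no question whose answer is "yes" with probability p):

    import math

    def binary_entropy(p):
        # Expected bits learned from a yes/no question answered
        # "yes" with probability p; maximized at p = 0.5.
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    for p in (0.50, 0.75, 0.99):
        print(f"p = {p:.2f}: {binary_entropy(p):.3f} bits")
    # p = 0.50: 1.000 bits
    # p = 0.75: 0.811 bits
    # p = 0.99: 0.081 bits

Confirming the 99% match buys you about a twelfth of the information the 50/50 question does.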

vitus commented on “Are you the one?” is free money   blog.owenlacey.dev/posts/... · Posted by u/samwho
jncfhnb · 2 months ago
If you can only check pairings one at a time, I'm not sure it's possible to do better than greedily solving one person at a time.
vitus · 2 months ago
So, for 10 pairs, 45 guesses (9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1) in the worst case, and roughly half that on average?

It's interesting how close 22.5 is to the 21.8 bits of entropy for 10!, and that has me wondering how often you would win if you followed this strategy with 18 truth booths followed by one match-up (to maintain the same total number of queries).

Simulation suggests about 24% chance of winning with that strategy, with 100k samples. (I simplified each run to "shuffle [0..n), find index of 0".)
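
A literal reading of that simplification (the per-person cost is taken as the 0-based index of the true match in a shuffled list of the remaining candidates) reproduces the figure:

    import math
    import random

    def greedy_win_rate(n_pairs=10, booths=18, trials=100_000):
        wins = 0
        for _ in range(trials):
            spent = 0
            # Pin down one person at a time: with k candidates left,
            # "shuffle [0..k), find index of 0" counts the guesses
            # spent before that person's match is known.
            for k in range(n_pairs, 1, -1):
                order = list(range(k))
                random.shuffle(order)
                spent += order.index(0)
            if spent <= booths:
                wins += 1
        return wins / trials

    print(math.log2(math.factorial(10)))  # ~21.79 bits in 10!
    print(greedy_win_rate())              # ~0.24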

vitus commented on “Are you the one?” is free money   blog.owenlacey.dev/posts/... · Posted by u/samwho
owenlacey · 2 months ago
Thank you! This is consistent with feedback I got from The Pudding, and is ultimately the reason they didn't go ahead with the post. I tried reverse-engineering the information-theory approach to see what sort of decisions it made.

I noticed that for any match-up score of X, the following match-up would keep exactly X pairs in common. So if they scored 4/10 one week, they would change 6 couples before the next one. Employing that approach alone performed worse than the contestants did in real life, so I didn't think it was worth mentioning!

vitus · 2 months ago
It should be easier to understand the optimal truth booth strategy. Since this is a yes/no type of question, the maximum entropy is 1 bit, as noted by yourself and others. As such, you want to pick a pair where the odds are as close to 50/50 as possible.

> Employing that approach alone performed worse than the contestants did in real life, so didn't think it was worth mentioning!

Yeah, this alone should not be sufficient. At the extreme of getting a score of 0, you also need the constraint that you're not repeating known-bad pairs. The same applies for pairs ruled out (or in!) from truth booths.

Further, if your score goes down, you need to use that as a signal that one (or more) of the pairs you swapped out was actually correct, and you need to cycle those back in.

I don't know what a human approximation of the entropy-minimization approach looks like in full. Good luck!

vitus commented on Ecosia: The greenest AI is here   blog.ecosia.org/ecosia-ai... · Posted by u/doener
belval · 2 months ago
Netflix spending 240Wh for 1h of content just does not pass the smell test for me.

Today I can have ~8 people streaming from my Jellyfin instance, a server that consumes about 35W measured at the wall. That's ~5Wh per hour of content from me not even trying.

vitus · 2 months ago
It's way more lopsided than your example would suggest.

My understanding is that Netflix can stream 100 Gbps from a 100W server footprint (slide 17 of [0]). Even if you assume every stream is 4k and uses 25 Mbps, that's still thousands of streams. I would guess that the bulk of the power consumption from streaming video is probably from the end-user devices -- a backbone router might consume a couple of kilowatts of power, but it's also moving terabits of traffic.

[0] https://people.freebsd.org/~gallatin/talks/OpenFest2023.pdf
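
Rough arithmetic with those figures (the 25Mbps-per-stream assumption is deliberately pessimistic):

    # Server-side energy per stream-hour: ~100 Gbps of video from a
    # ~100 W serving footprint (slide 17 of [0]).
    server_watts = 100
    server_gbps = 100
    stream_mbps = 25   # assume every stream is 25 Mbps 4K

    streams = server_gbps * 1000 / stream_mbps             # 4000 streams
    print(f"{server_watts / streams:.3f} Wh/stream-hour")  # 0.025 Wh
    # vs. ~4.4 Wh/stream-hour for the parent's 35 W, 8-stream Jellyfin
    # box, and the 240 Wh per stream-hour figure under discussion.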

vitus commented on NSA and IETF, part 3: Dodging the issues at hand   blog.cr.yp.to/20251123-do... · Posted by u/upofadown
stavros · 3 months ago
> That OMB rule, in turn, defines "consensus" as follows: "general agreement, but not necessarily unanimity, and includes a process for attempting to resolve objections by interested parties, as long as all comments have been fairly considered, each objector is advised of the disposition of his or her objection(s) and the reasons why, and the consensus body members are given an opportunity to change their votes after reviewing the comments".

From https://blog.cr.yp.to/20251004-weakened.html#standards, linked in TFA.

vitus · 3 months ago
To add to this: rough consensus is defined in BCP 25 / RFC 2418 (https://datatracker.ietf.org/doc/html/rfc2418#section-3.3):

   IETF consensus does not require that all participants agree although
   this is, of course, preferred.  In general, the dominant view of the
   working group shall prevail.  (However, it must be noted that
   "dominance" is not to be determined on the basis of volume or
   persistence, but rather a more general sense of agreement.) Consensus
   can be determined by a show of hands, humming, or any other means on
   which the WG agrees (by rough consensus, of course).  Note that 51%
   of the working group does not qualify as "rough consensus" and 99% is
   better than rough.  It is up to the Chair to determine if rough
   consensus has been reached.
The goal has never been 100%, but it is not enough to merely have a majority opinion.

vitus commented on IP blocking the UK is not enough to comply with the Online Safety Act   prestonbyrne.com/2025/11/... · Posted by u/pinkahd
holbrad · 3 months ago
I think this is exactly how you should respond to outrageous demands.

The UK should pound sand.

vitus · 3 months ago
I get that it's satisfying to tell them to go away because they're being unreasonable. But what's the legal strategy here? Piss off the regulators such that they really won't drop this case, and give them fodder to be able to paint the lawyer and his client as uncooperative?

Is the strategy really just "get new federal laws passed so UK can't shove these regulations down our throats"? Is that going to happen on a timeline that makes sense for this specific case?
