Readit News
domenicd commented on The privacy nightmare of browser fingerprinting   kevinboone.me/fingerprint... · Posted by u/ingve
tzury · 22 days ago
The OP argues that fingerprinting is a "privacy nightmare," but we need to look at why it exists.

From a pragmatic perspective, we are forcing two very different networks to run on the same protocols:

The Business Internet: Banking, SaaS, and VC-funded content (Meta/Google).

The Fun Internet: Hobby blogs, Lego fan sites, and the "GeoCities" spirit.

You cannot have a functioning "Business Internet" without identity verification. If you try to perform a transaction (or even just use a subsidized "free" tool like Gmail) while hiding behind a generic, non-unique fingerprint, you look indistinguishable from a bot or a fraudster.

Fingerprinting is often just the immune system of the commercial web trying to verify you are human.

The friction arises because we expect the "Fun Internet" to play by different rules. A Lego fan site shouldn't need to know who I am. But because we access both the Lego site and our Bank using the same browser, the same IP, and the same free tools (Chrome/Search), the "Fun Internet" becomes collateral damage of the "Business Internet's" need for security and monetization.

We can't have it both ways. We accepted the SLA for the "Business Internet" in exchange for free, billion-dollar tools. If you want 100% anonymity, you are effectively asking to use the commercial web's infrastructure without providing the identity signal it runs on.

As the OP notes, mitigation is hard. But that’s not just because advertisers are "evil"—it's because on the modern web, anonymity looks exactly like a security threat.

domenicd · 22 days ago
This is an excellent insight.

I think there is still some hope that technical solutions could be developed so that only the "Business Internet" gets access to verified identity, with the user somehow understanding this, while the "Fun Internet" doesn't have such capabilities. This is the idea behind, e.g., Google's proposed WEI [1], which got huge backlash, or Apple's Private Access Tokens [2], which are essentially the same thing but quietly slipped under the community radar.

Other proposals are Google's in-limbo Private State Tokens [3], or the various digital-wallet/age verification proposals (I think Apple and Google both have stuff in that space).

But even basic stuff, like IP protection, can really throw off the anti-fraud and anti-botnet mechanisms. Your Lego fan site wants to be behind a CDN for speed and protection from DDoS? Well, people using VPNs or Incognito mode might end up inconvenienced, because the CDN thinks it's dealing with bots. Rough stuff.

[1]: https://en.wikipedia.org/wiki/Web_Environment_Integrity

[2]: https://developer.apple.com/news/?id=huqjyh7k

[3]: https://privacysandbox.google.com/protections/private-state-...

domenicd commented on The privacy nightmare of browser fingerprinting   kevinboone.me/fingerprint... · Posted by u/ingve
domenicd · 22 days ago
As someone who used to work on Chrome, I can confirm that browser fingerprinting is indeed a nightmare.

Back in the early days of Privacy Sandbox, before the whole effort crashed and burned when the UK CMA wouldn't even let Google remove third-party cookie support [0], there was a lot of optimism about how we were going to completely solve cross-site tracking, even in the face of determined adversaries. This had several ingredients; the biggest ones I can remember are:

1. Remove third-party cookie support
2. Remove unpartitioned storage support
3. IP protection at scale
4. Solving fingerprinting
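To illustrate what ingredient 2 means in practice, here's a toy model of storage partitioning (the class and naming are mine, purely illustrative): instead of keying an embedded frame's storage by its origin alone, key it by the pair (top-level site, frame origin), so the same tracker embedded on two different sites sees two independent stores.

```javascript
// Toy sketch of partitioned storage. Real browsers key cookies, localStorage,
// caches, etc. this way; here a nested Map stands in for all of them.
class PartitionedStorage {
  constructor() {
    this.stores = new Map(); // "(topLevelSite, frameOrigin)" -> Map
  }
  store(topLevelSite, frameOrigin) {
    const key = `${topLevelSite}^${frameOrigin}`;
    if (!this.stores.has(key)) this.stores.set(key, new Map());
    return this.stores.get(key);
  }
}

const s = new PartitionedStorage();
// tracker.example embedded on news.example writes an ID...
s.store("news.example", "tracker.example").set("id", "abc123");
// ...but the same tracker embedded on shop.example gets a fresh store,
// so it can no longer join your identity across the two sites:
s.store("shop.example", "tracker.example").get("id"); // undefined
```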

In the end, well... at least we got 2, which has some security benefits, even if Chrome gave up on 1, 3, and 4, and thus on privacy. Anyway, everyone could tell that 4 was going to be the hardest.

The closest I saw to an overarching plan was the "privacy budget" proposal [1], which would catalogue all the APIs that could be used for fingerprinting, and start breaking them (or hiding them behind a permission prompt, maybe?) if a site used too many of them in a row. I think most people were pretty skeptical of this, and the main person driving it moved off of Chrome in 2022. Mozilla has an analysis suggesting it's impractical at [2]. Some code seems to still exist! [3]
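The core mechanic of the proposal can be sketched in a few lines (the surface names, bit costs, and class below are my own illustration, not anything from [1]): each fingerprinting-capable surface costs some number of identifying bits, and once a site's total would exceed its budget, further reads are refused.

```javascript
// Toy privacy-budget enforcer. A real browser might return noised values
// or show a permission prompt instead of flatly denying the read.
class PrivacyBudget {
  constructor(limitBits = 10) {
    this.limitBits = limitBits;
    this.spent = new Map(); // surface -> bits; re-reading a surface is free
  }
  request(surface, bits) {
    if (!this.spent.has(surface)) {
      const total = [...this.spent.values()].reduce((a, b) => a + b, 0);
      if (total + bits > this.limitBits) return false; // over budget: deny
      this.spent.set(surface, bits);
    }
    return true;
  }
}

const budget = new PrivacyBudget(10);
budget.request("screen.width", 4);       // true (4 of 10 bits spent)
budget.request("navigator.language", 3); // true (7 of 10 bits spent)
budget.request("canvas-hash", 8);        // false: 15 bits would exceed 10
```

The hard part, of course, is the cataloguing: assigning honest bit costs to every surface, which is exactly where Mozilla's analysis [2] found the idea impractical.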

A key prerequisite of the privacy budget proposal was trying to remove passive fingerprinting surfaces in favor of active ones. That involved removing data that is sent to the server automatically, or freezing APIs like `navigator.userAgent` which are assumed infallible, and then trying to replace them with flows like client hints where the server needed to request data, or promise-based APIs which could more clearly fail or even generate a permissions prompt. This was quite an uphill battle, as web developers (both in ad tech and outside) would fight us every step of the way, because it made various APIs less convenient. Elsewhere people have cited one example, of reducing Accept-Language [4]. The other big one was the user agent client hints headers/API [5], which generated whole new genres of trolls on the W3C forums.
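The passive-to-active shift is easiest to see with the client hints flow [5]: servers must opt in via a response header (`Accept-CH: Sec-CH-UA-Platform-Version`), and scripts must go through a promise-based API, giving the browser a natural choke point to deny, noise, or prompt. A small sketch (the `requestUAHints` helper name is mine, not a platform API):

```javascript
// Passive fingerprinting: the full User-Agent string rides along on every
// request automatically. Active: high-entropy fields must be asked for,
// via the promise-based client hints API.
function requestUAHints(fields) {
  const nav = typeof navigator !== "undefined" ? navigator : undefined;
  // navigator.userAgentData exists in Chromium; Firefox, Safari, and Node
  // don't implement it, so callers must handle the null case.
  if (!nav || !nav.userAgentData) return null;
  return nav.userAgentData.getHighEntropyValues(fields);
}

// In a Chromium browser, resolves with only the fields explicitly requested:
// requestUAHints(["platformVersion", "model"])?.then(console.log);
```

Every call site that has to be rewritten from `navigator.userAgent` string-parsing into this request-and-await shape is one instance of the "less convenient" friction web developers pushed back on.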

As Privacy Sandbox slumped more and more towards its current defeated state, people backed off from the original vision of a brilliant technical solution that worked even in the face of determined adversaries. Instead they retreated to stances like "if we just make it hard enough to fingerprint, it'll be obvious that fingerprinting scripts are doing something wrong, and we can block those scripts"; see e.g. [6]. Maybe that would have worked, I don't know, but it becomes much more of a cat-and-mouse game, e.g. needing to detect bundled or obfuscated scripts.

And now of course it's all over; the ad tech industry, backed by the UK CMA, has won and forced Google to keep third-party cookies forever, and with those in place, there's not really any point in funding the anti-fingerprinting work, so it's getting wound down [7]. The individual engineers and teams are probably still passionate about launching opt-in or Incognito-only privacy protections, but I doubt that aligns with product plans. I'm sure Google doesn't mind the end result all that much either, as migrating the world to privacy-preserving ad tech was going to be a big lift. Now all that eng power can focus on AI instead of privacy.

[0]: https://privacysandbox.com/news/privacy-sandbox-next-steps/

[1]: https://github.com/mikewest/privacy-budget

[2]: https://mozilla.github.io/ppa-docs/privacy-budget.pdf

[3]: https://chromium.googlesource.com/chromium/src/+/36dc3642bee...

[4]: https://github.com/explainers-by-googlers/reduce-accept-lang...

[5]: https://developer.mozilla.org/en-US/docs/Web/API/User-Agent_...

[6]: https://privacysandbox.google.com/protections/script-blockin...

[7]: https://privacysandbox.com/news/update-on-plans-for-privacy-...

domenicd commented on Metabolic and cellular differences between sedentary and active individuals   howardluksmd.substack.com... · Posted by u/rzk
domenicd · a month ago
My friend's main study that he cites is https://www.sciencedirect.com/science/article/pii/S209525462... . The interesting thing is that there is no real cutoff. The benefit kind of tapers off logarithmically, but all-cause mortality just gets lower and lower the more steps/day you take. So his 12k is somewhat arbitrary.
domenicd commented on Metabolic and cellular differences between sedentary and active individuals   howardluksmd.substack.com... · Posted by u/rzk
kace91 · a month ago
Anaerobic training’s returns increase ridiculously with days/week until about 3, with large diminishing returns after that.

Just saying, once you’re willing to lift weights once a week with all the upfront cost (gym membership, leaving your comfort zone, learning the ropes, etc) it’s a really good bang for your buck adding one or two more.

domenicd · a month ago
For sure. My friend's program is longevity-focused, not strength focused.

I usually do 2/week strength training + 1/week bouldering, but have dropped to 1/week strength training + 1/week bouldering while I worked to incorporate the 12k steps into my routine. I'm also currently doing a cut so am less motivated to lift. After I hit 10% body fat I plan to start bulking and go back to 2/week + bouldering or maybe even 3/week + bouldering.

Regarding diminishing returns, at least for longevity,

> Training once or twice a week for less than an hour can reduce the chance of death from any cause by 35%. But, if the time is increased to over an hour in a week or more than three sessions, then the longevity benefit disappears to zero compared with people who never put their hands on a weight.

from https://www.unaging.com/exercise/weight-lifting-for-life/ which cites https://pmc.ncbi.nlm.nih.gov/articles/PMC7385554/ . Pretty interesting.

domenicd commented on Metabolic and cellular differences between sedentary and active individuals   howardluksmd.substack.com... · Posted by u/rzk
Blackthorn · a month ago
People must have very different definitions of HIIT because there's no way someone is sustaining a 2 minute absolute max-effort sprint.
domenicd · a month ago
I just aim for zone 5. Usually it takes 45-60 seconds to get into zone 5, then I spend the remaining 60-75 seconds there.
domenicd commented on Metabolic and cellular differences between sedentary and active individuals   howardluksmd.substack.com... · Posted by u/rzk
dooglius · a month ago
Isn't 12k steps like 6 miles? I could plausibly jog that much, but to walk it every day seems like a huge time commitment.
domenicd · a month ago
12k steps is about 2 hours. It helps a lot to have a walking pad (basically a mini-treadmill), and possibly a standing desk.

I do 45 minutes of Anki per day on the walking pad, and then if walking around the city hasn't gotten the other 1.25 hours, I can fill the rest with watching TV on the walking pad.
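For what it's worth, both the 2-hour figure here and the parent's 6-mile estimate check out at a casual pace (the cadence and stride length below are my assumptions, not measurements):

```javascript
// Back-of-envelope: ~100 steps/minute casual cadence, ~0.8 m per step.
const steps = 12000;
const minutes = steps / 100;        // 120 minutes = 2 hours
const miles = (steps * 0.8) / 1609; // ~5.97, i.e. roughly 6 miles
```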

domenicd commented on Metabolic and cellular differences between sedentary and active individuals   howardluksmd.substack.com... · Posted by u/rzk
domenicd · a month ago
A lot of people in the comments are expressing curiosity about "ideal" amounts of exercise to avoid these sorts of problems.

I have a real-life friend whose hobby is studying this stuff. His recommendations boil down to:

- 1/week 20 minutes HIIT: 5 minutes warmup, 3x(2 minutes high intensity + 3 minutes low intensity) blocks.

- 1/week strength training focused on large muscle groups.

- 12,000 steps per day walking (HIIT excluded).

According to his reading of the literature, this gives you the best bang for your buck in terms of all-cause mortality avoidance. Most of the studies in this area are correlational, not randomized controlled trials, so it's hard to be sure. But I can vouch for his diligence in trying to get to the bottom of this. I've been following his program since January with reasonably good results over my already-active baseline.

His website is https://www.unaging.com/, and honestly it's a bit hard to recommend because he's definitely playing the SEO game: the articles are often repetitive of each other and full of filler. And the CMS seems janky. (I would tell you to find his older articles before he started optimizing for SEO, but it seems like the CMS reset all article dates to today.) But, if you have patience, it might be worthwhile.

domenicd commented on Element: setHTML() method   developer.mozilla.org/en-... · Posted by u/todsacerdoti
jfengel · 2 months ago
I like React's dangerouslySetInnerHTML. The name so clearly conveys "you can do this but you really, really, really shouldn't".
domenicd · 2 months ago
Indeed, the web platform now has setHTML() and setHTMLUnsafe() to replace the innerHTML setter.

There's also getHTML() (which has extra capabilities over the innerHTML getter).
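A quick sketch of how the pair splits the old innerHTML use cases (the `setMarkup` wrapper and its `trusted` flag are my own illustration, not a platform API, and browser support for these methods still varies):

```javascript
// setHTML() sanitizes its argument (dropping <script> elements, event
// handler attributes, and similar XSS vectors), so it's the safe default
// for untrusted input. setHTMLUnsafe() skips sanitization, but unlike the
// innerHTML setter it can parse declarative shadow DOM
// (<template shadowrootmode="open">).
function setMarkup(el, html, { trusted = false } = {}) {
  if (trusted) {
    el.setHTMLUnsafe(html); // raw parse, author-controlled markup only
  } else {
    el.setHTML(html); // sanitized
  }
}

// In a browser:
//   setMarkup(document.querySelector("#out"), userComment);
// And getHTML()'s extra capability over the innerHTML getter:
//   el.getHTML({ serializableShadowRoots: true }); // includes shadow roots
```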

domenicd commented on What to do with C++ modules?   nibblestew.blogspot.com/2... · Posted by u/ingve
domenicd · 3 months ago
The standardization process here feels similar to what happened with JavaScript modules. Introduced in ES2015, but the language standard only had syntax and some invariants. It had no notion of how to load modules, or how they might be delivered, or how a program that had a module at its root might be started. But there was a similar urgency, of "we must have this in ES2015".

I made it one of my first projects after joining the Chrome team to fix that gap, which we documented at [1]. (This reminds me of the article's "The only real way to get those done is to have a product owner...".)

You could even stretch the analogy to talk about how standard JS modules compete against the hacked-together solutions of AMD or CommonJS modules, similar to C++ modules competing against precompiled headers.

That said, the C++ modules problem seems worse than the JavaScript one. The technical design seems harder; JS's host/language separation seems cleaner than C++'s spec/compiler split. Perhaps most importantly, organizationally there was a clear place (the WHATWG) where all the browsers were willing to get together to work on a standard for JS module loading. Whereas it doesn't seem like there's as much of a framework for collaboration between C++ compiler writers.

[1]: https://blog.whatwg.org/js-modules

u/domenicd

Karma: 1702 · Cake day: October 22, 2012
About
Domenic Denicola

https://domenic.me https://x.com/domenic https://github.com/domenic
