Fingerprinting enables terrible big-tech data collection, and at the same time it's excruciatingly hard to protect against bots, spammers, fraudsters, etc. without it.
Few people seem to try to reconcile this, since neither side cares about the other.
I personally think that discussion about fingerprinting as raw tech, without mentioning the size of the company collecting the data or the purpose, is meaningless, and only leads to a few tech-savvy users having less data collected on them.
Most people want to use JavaScript, use the default settings, and not be afraid of clicking on links. I can't really see a good solution without a coordination of regulation and tech standards, so I'm hopeful at least for decent solutions.
You don't need to precisely identify users across sessions without their consent to detect bots. Advanced anti-bot systems make heavy use of behavioral biometrics and don't rely too heavily on fingerprinting, mostly because fingerprints are easy to spoof in general, whereas generating human-like mouse data is a bigger challenge.
Sure, but on the other hand, a lot of anti-fingerprinting efforts strive to reduce the info available, including things like mouse movement data.
Mouse movement data is a fairly potent fingerprinting vector. Bucketing the average mouse speed and acceleration rates could provide useful information. This may imply specific OS speed settings, or physical mouse DPI. A machine learning system would likely be able to distinguish a traditional mouse vs. a trackpoint vs. a touchpad vs. a trackball, etc.
Also, it is not just bots that have non-human-like mouse movement. Many assistive technologies would have no mouse movement, or would auto-snap the mouse to the relevant spot. That is actually quite powerful for fingerprinting, since assistive technology users are a pretty small subset of internet users, so only a relatively small amount of additional data is needed to uniquely fingerprint that user/machine.
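To make the bucketing idea concrete, here is a minimal browser-side sketch (purely illustrative; the thresholds and bucket labels are invented, not any vendor's model) of averaging pointer speed into a coarse class:

```typescript
// Sketch: bucket average pointer speed into a coarse label.
// Purely illustrative; thresholds and bucket names are made up.
let last: { x: number; y: number; t: number } | null = null;
const speeds: number[] = [];

window.addEventListener("pointermove", (e) => {
  const now = performance.now();
  if (last) {
    const dt = now - last.t;
    if (dt > 0) {
      const dist = Math.hypot(e.clientX - last.x, e.clientY - last.y);
      speeds.push(dist / dt); // px per ms
    }
  }
  last = { x: e.clientX, y: e.clientY, t: now };
});

function speedBucket(): string {
  if (speeds.length < 50) return "insufficient-data";
  const avg = speeds.reduce((a, b) => a + b, 0) / speeds.length;
  if (avg < 0.3) return "slow";   // e.g. touchpad or trackpoint (assumption)
  if (avg < 1.0) return "medium"; // e.g. typical low-DPI mouse (assumption)
  return "fast";                  // e.g. high-DPI mouse or OS acceleration
}
```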
Disabling JavaScript does not stop fingerprinting either. HTTP headers are sufficient to construct unique user identifiers. Passing that data via API to a FaaS provider would enable cross site tracking that's invisible to the visitor.
Edit: The required FaaS implementation is trivial too. I could launch an endpoint that performs exactly this function in 30-60 minutes.
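For a sense of how trivial that is, here is a minimal Node.js sketch of such an endpoint (illustrative only; the header list and hashing scheme are my assumptions, not any particular provider's implementation):

```typescript
// Minimal sketch of a header-based fingerprint endpoint (illustrative only).
import { createHash } from "node:crypto";
import { createServer, IncomingMessage } from "node:http";

// Headers that are fairly stable per browser install and leak a surprising
// amount of entropy even with JavaScript disabled.
const FINGERPRINT_HEADERS = [
  "user-agent",
  "accept",
  "accept-language",
  "accept-encoding",
  "sec-ch-ua",
  "sec-ch-ua-platform",
];

function headerFingerprint(req: IncomingMessage): string {
  const parts = FINGERPRINT_HEADERS.map((h) => `${h}=${req.headers[h] ?? ""}`);
  return createHash("sha256").update(parts.join("|")).digest("hex");
}

createServer((req, res) => {
  // A real tracker would likely also mix in the client IP range and TLS parameters.
  res.setHeader("content-type", "text/plain");
  res.end(headerFingerprint(req));
}).listen(8080);
```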
> I can't really see a good solution without a coordination of regulation...
Totally agree that this is perfectly within the government's purview, and they should be doing something about it. But, as with anything else in the US, until a Fortune 100, some few 1%-ers, or the deep state MIC wants it, we're not going to be getting it.
Until everyday people realize they’re being stalked, I don’t know what will change. I am seriously thinking about trying to go through the proposition process in my state to forbid selling of data (this should already run afoul of wiretapping laws, imho).
I thought having an ad campaign that targeted subgroups very specifically and boldly might be enough to drum up public interest. Something like: “Hello $name from $city. How did $recent_embarrassing_purchase work out? I hope you enjoy your birthday in $birth_month.” And then a link to the proposed policy.
Unfortunately, marketers have neither scruples nor the ability to control themselves and have captured an asymmetric advantage. Technologists do what they do, preoccupied with whether or not they could, not stopping to think if they should. It seems like legislation may be the only remaining option.
Pretty much what Signal did a few years ago [1], but on a bigger scale. Sadly, Facebook banned their ads account so they couldn't take it further; it would be interesting if someone tried the same.
[1] https://signal.org/blog/the-instagram-ads-you-will-never-see...
This has been tried by a guy who placed Facebook ads like these. FB blocked his account in a few hours.
So good in theory, won't work in practice.
People are such dumb fucking cattle that they'll lash out at you rather than the data brokers or the software vendors who ratted them out, though.
People realise they're being stalked; they just don't know what that means.
Techie people are convinced non-techie people don't know they're being tracked. They do! Ask your smart non-techie friends what they think about online privacy. I guarantee you they'll say something like "yeah, I know it's probably tracking me, but whachya gonna do".
Thanks to this disconnect, we have so many privacy campaigns with a message like "Did you know you can be uniquely identified on the web?", but so few (none?) that actually proceed to explain why that's bad, and what someone could do with that information. That's the missing piece. Give average people an actual reason to dislike or fear tracking, not just the mere curio that it exists.
I will admit that it always confused me why the browser has access to detailed hardware information. I can understand OS. I can understand resolution. I can rationalize GPU. I don't understand, though, why it should be able to access... well, everything about the machine.
edit: It is still impressive. Even with the Firefox settings on, the website was able to identify me. I am not entirely certain how I want to approach this.
> I can understand OS. I can understand resolution. I can rationalize GPU.
None of these should be available to websites by default. The first two come from simpler times when people were not as concerned with privacy implications. The third has been and continues to be pushed by advertising companies (Google, Apple, Microsoft).
Edit/update from the original post, since I can't edit anymore.
So, a quick update, since I am mildly obsessive.
I was sure it was either the GPU, CPU or addons that were giving me away (I do have a mildly unique setup).
I ran a few tests in a VM, and the moment I dropped GPU passthrough (but left CPU passthrough), I was no longer (based on that website, anyway) tracked across sessions.
In other words, the cat and mouse game continues.
I know what to think about this… I fucking hate it.
Because the browser has become a vendor neutral, architecture neutral app engine and people want to do things like play MIDI instruments, use serial ports, use proprietary USB check scanners for accounting/ERP apps that work on the web and don't need SCCM to manage, etc.
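Those capabilities are real (if Chromium-centric) web APIs; here is a hypothetical sketch of the kind of hardware access such an ERP or music app relies on (the USB vendor ID and element ID below are placeholders):

```typescript
// Sketch of the kind of hardware access a web-based ERP or music app uses.
// All of these require a user gesture and a permission prompt; Web Serial and
// WebUSB are currently Chromium-only and are not in the default TypeScript DOM
// typings, hence the `any` cast. The USB vendor id is a placeholder.
async function connectHardware(): Promise<void> {
  // MIDI instruments (Web MIDI API)
  const midi = await navigator.requestMIDIAccess();
  console.log("MIDI inputs:", midi.inputs.size);

  const nav = navigator as any;

  // A proprietary USB check scanner (WebUSB; hypothetical vendor id)
  const device = await nav.usb.requestDevice({ filters: [{ vendorId: 0x1234 }] });
  await device.open();

  // A serial-port device (Web Serial)
  const port = await nav.serial.requestPort();
  await port.open({ baudRate: 9600 });
}

// Must be called from a click handler to satisfy the user-gesture requirement.
document.querySelector("#connect")?.addEventListener("click", () => {
  connectHardware().catch(console.error);
});
```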
I would assume that for more advanced browser features, like 4K video playback, hardware information could tell the player whether your machine is capable of playing back 4K video without stuttering.
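That kind of capability query exists today via the Media Capabilities API; for example (the codec string and bitrate below are just sample values):

```typescript
// Ask the browser whether 4K VP9 playback would be supported and smooth.
// The exact codec string, bitrate and framerate are illustrative.
async function canPlay4kSmoothly(): Promise<boolean> {
  const info = await navigator.mediaCapabilities.decodingInfo({
    type: "file",
    video: {
      contentType: 'video/webm; codecs="vp09.00.10.08"',
      width: 3840,
      height: 2160,
      bitrate: 20_000_000,
      framerate: 60,
    },
  });
  return info.supported && info.smooth;
}

canPlay4kSmoothly().then((ok) => console.log("4K OK:", ok));
```

Of course, every answer such an API gives is also one more bit of fingerprintable surface, which is the tension the thread is circling.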
> Until everyday people realize they’re being stalked, I don’t know what will change.
FTFY: People already know; nothing will change.
Many of the things that are happening (at least in the US) are deeply, deeply unpopular, but are not changing, and show no signs that they are even susceptible to change. Fortune-sized companies, the 1%, and the deep state are calling the shots, despite how much can be seen in real time, through things like Twitter and TikTok. I've actually had to pull back from Twitter because of all the things that are obviously beyond the pale, yet will never change. (Snowden, Assange, et al.)
FTFY?
That’s why I, unfortunately, think legislation is necessary. My state allows citizen proposals with 250k signatures to get on the ballot and >50% support to become law that cannot be overturned by the legislature (that has its own issues, but in this case it would be binding).
Not only that, but they might have a legal case against you. I've been slowly working through Seek and Hide: The Tangled History of the Right to Privacy, and my main takeaways have been:
(1) The constitutional right to free speech and a free press is not as broad as most people probably think.
(2) Truth is not necessarily an air-tight defense in a case of libel, as courts at various times and places have decided against publishers for true but embarrassing things intended to humiliate or harm.
Ha! I followed the instructions and went to fingerprint.com and it all 'crashed' because I had JavaScript turned off—that's my normal default setting.
I have five different browsers on my smartphone and three on the PC all sans JS and none of them are Chrome. Also, normal operation is to automatically delete all cookies at session's end.
My smartphone and PCs are de-googleized and firewalled and I never see ads in my browsers nor in apps. The apps are mainly from F-Droid and sans ads and the few Playstore ones I use are via Aurora Store and are firewalled from the internet when in use. Honestly, I cannot remember when I last saw an app display an ad, it has to be years back.
In the past I used to go to more extensive measures to stop the spying but I found it was unnecessary as the spy leakage was essentially negligible with much less stringent efforts.
It's pretty easy to render one's online personal data essentially worthless if one wants to. On the other hand, if you insist on using JS, Gmail, Google search, Facebook etc. then you're fair game and you only have yourself to blame if your personal data is stolen.
Before you get all jubilant, note that they have fingerprinting techniques which don't use JS[0]. It was able to identify me. Contrary to popular opinion, disabling JS doesn't protect you from fingerprinting.
They describe their approach[1]. They use HTTP headers and conditional requests triggered by CSS media queries to gather data. Something like @media(...) {background: url(/tracking/$clientid)}. But in principle, they could also try and fingerprint the TCP/IP stack or the TLS implementation. I'm not sure it would get them more data than OS+Browser, though.
[0] https://noscriptfingerprint.com/
[1] https://fingerprint.com/blog/disabling-javascript-wont-stop-...
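A sketch of the server side of that CSS trick (illustrative; the endpoint paths and probe list are my assumptions, not the code described in [1]): serve each visitor markup plus a stylesheet where each media query, if it matches, pulls a distinct background URL, so the server learns screen width, color scheme, pointer type, etc. from which requests arrive.

```typescript
// Sketch of no-JS, CSS-only probing. Each probe is a hidden <div> plus a
// media-query rule; the server learns which properties match by seeing which
// /t/<clientId>/<probe> URLs are requested. Probe list is illustrative.
const PROBES: Array<[id: string, query: string]> = [
  ["w1920", "(min-width: 1920px)"],
  ["w2560", "(min-width: 2560px)"],
  ["dark", "(prefers-color-scheme: dark)"],
  ["coarse", "(pointer: coarse)"],
  ["hidpi", "(min-resolution: 2dppx)"],
];

function probeHtml(): string {
  return PROBES.map(([id]) => `<div id="p-${id}"></div>`).join("");
}

function probeCss(clientId: string): string {
  return PROBES.map(
    ([id, query]) =>
      `@media ${query} { #p-${id} { background-image: url(/t/${clientId}/${id}); } }`
  ).join("\n");
}

console.log(probeHtml());
console.log(probeCss("abc123"));
```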
"Before you get all jubilant, note that they have fingerprinting techniques which don't use JS[0]. It was able to identity me."
I didn't detail every protection I've put in place or the post would have been too long. However, I'd suggest that spreading my browsing over at least eight browsers (and I actually use more than two machines and do so at different locations and with different ISPs) effectively reduces my profile across the net.
I also use randomized browser user agents and clean links, occasionally I'll even cut-and-paste links between multiple browsers in a single session. I often do this on HN not to hide from HN but for convenience when multitasking. (Having worked in surveillance professionally, this modus operandi just comes naturally, it's now second nature for me to work this way.)
Working with multiple browsers and multiple machines also solves the problem when on rare occasions I have to use JS. That said, I never watch YouTube with a JS-enabled browser, instead I'll use NewPipe or similar. There are other measures I could list but you get the idea. Oh, and I never use the internet on a smartphone with a SIM enabled, instead the SIM resides in a separate portable router and my 'real' phone is a dumb feature phone, it's only capable of making phone calls.
I really don't care if some stuff leaks but I've satisfied myself it's pretty trivial, as frankly, I've not had one indication over the past 20 or so years that I've been targeted as a result of fingerprinting. It's not necessary to make things completely watertight, I'm not trying to hide from the NSA or GCHQ, etc. (and it'd be unsuccessful and a complete waste of time to bother trying).
Moreover, even if something were to leak, I'm simply not a revenue-making target—that means I never respond to any targeted marketing because I simply never receive any.
FF mobile gives me different IDs each time I run a new private session on both the JS and non-JS demos (I usually run without JS AND have enabled the resistFingerprinting setting).
I also notice that the no-JS hash changes when I move the window to a different monitor.
To me this seems extremely elitist. Non-technical people deserve to have their personal data stolen because they don't know about javascript for example?
Nobody said that. "My defenses work" != "my defenses should be necessary".
Technical defenses are never perfect. In a sense they provide security through obscurity, as evinced by the comments above regarding Stallman's use of wget. If everyone applied technical defenses equally then workarounds would quickly be found, and everyone would be equally vulnerable. So privacy is a scale, and being in the minority provides its own defense. If in aggregate each individual is equally valuable, then the value of breaching a minority's technical defenses is some inverse multiplier of the minority's size. Personally my threat model is to put in just enough work to never be the juiciest target.
I run a similar setup to the OP when browsing the modern web, but I think it is in a way our responsibility as professionals to help the less tech-inclined navigate the sea of monsters the modern web has become.
For example: I have set up the systems of family members, for whom I am some sort of digital janitor, with a nice collection of Firefox plugins to get rid of the worst offenders.
If you continue to willingly use socials like FB, TikTok, et al., your complaints about stolen personal data fall on deaf ears. Show me that you don't have those apps installed and do not visit their websites, and then we can talk seriously about deserving not to have your data stolen.
Right, it probably is. But the issue of stolen personal data has been around for so long that nontechnical people have had years to develop political lobbying and to swing elections to put a stop to it.
The fact is that most people don't give a damn about such matters, if most did then the problems would be behind us by now.
Thus, unfortunately, with the internet it's every man and woman for him or herself. QED!
Have you ever tried to talk to "non-technical" people about this subject? They treat you like you're one of those tinfoil hat crazies.
At this point I'm 100% OK with us being the only ones able to protect ourselves. We warned them and they didn't care. Allow them to remain uncaring. We don't have to help everyone. People must want to be helped.
Yeah? If they don't know how to operate a computer then they shouldn't be operating one. I would feel the same if someone without a licence crashed their car.
Examples include the back button, uploading photos on some websites uploading random data instead of the photo, etc.
Surely there could be valid reasons for doing so?
I imagine for example that:
1. It ensures the selected file is a valid image before uploading it
2. It strips metadata like GPS position from the image before uploading it
3. It could reduce the size of the image, by either scaling it down, or compressing it more, or both, before uploading it
Or it might not be strictly necessary, but Instagram does it anyway.
If it breaks uploading a photo, it’s because the page unnecessarily copies the image into a <canvas> and then tries to upload the data from the <canvas> instead of the original image.
There are many perfectly valid reasons to do that. It’s a lot more scalable to resize images client side rather than server side and using a canvas is one of the simplest ways to achieve that.
No, this is how most pre-upload image editors work. Why upload a 5MB avatar photo when you're going to have the user crop and scale it on the client side down to a few hundred KB first?
Using canvas for this is much more friendly to their bandwidth, no nefarious intent needed.
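For reference, the resize pattern being described is roughly the following generic sketch (target dimensions and JPEG quality are arbitrary example values, not any particular site's code):

```typescript
// Downscale an image file client-side before upload, the pattern described above.
// Target size and quality are arbitrary example values.
async function downscaleForUpload(file: File, maxDim = 512): Promise<Blob> {
  const bitmap = await createImageBitmap(file);
  const scale = Math.min(1, maxDim / Math.max(bitmap.width, bitmap.height));
  const canvas = document.createElement("canvas");
  canvas.width = Math.round(bitmap.width * scale);
  canvas.height = Math.round(bitmap.height * scale);
  canvas.getContext("2d")!.drawImage(bitmap, 0, 0, canvas.width, canvas.height);
  return new Promise((resolve, reject) =>
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error("encode failed"))),
      "image/jpeg",
      0.85
    )
  );
}
```

As a side effect it also re-encodes the image and drops EXIF metadata, which is exactly why canvas randomization by anti-fingerprinting settings can corrupt these uploads.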
It also breaks page zoom. The user's preferred zoom level for a domain isn't preserved between new-tab page loads, but resets itself every time.
(I'm guessing it was too much implementation work to separate out this feature: to preserve normal, expected UI behavior client-side, while presenting a fake pagezoom value to scripts. That would degrade only a handful of (poorly-designed, script-layout) websites, rather than the whole accessible browser experience).
Yeah, I enabled the option yesterday after learning about it; today I disabled it again since, nope, without site-specific zoom settings retained the web is too inconsistent for me.
I just tried putting it on with the idea of trying it out for one workday to see if it breaks something. It immediately broke favicons on my GitLab tabs (turning them into random vertical stripes of pixels), which is both odd and a pretty bad start.
I really like the idea behind this feature, but it seems the Web API might have become too complex to counteract bad actors like this. It's particularly scary that it can correlate your activity in private mode with your identity in normal mode.
RFP randomizes Canvas data extraction by default, which might have something to do with it. The GitLab favicon seems normal to me when I navigate there (RFP on).
Another method for web fingerprinting is called GPU fingerprinting [0], codenamed 'DrawnApart'. It relies on WebGL to count the number and speed of the execution units in the GPU, measure the time needed to complete vertex renders, handle stall functions, and more.
[0] https://www.bleepingcomputer.com/news/security/researchers-u...
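DrawnApart itself profiles individual execution units; as a much simpler illustration of the kind of WebGL-derived signal involved (not the paper's method), a page can read the unmasked renderer string and coarsely time a trivial draw:

```typescript
// Simplified illustration of WebGL-derived signals (NOT the DrawnApart technique):
// the unmasked GPU renderer string plus coarse timing of a trivial draw loop.
function webglSignals(): { renderer: string; drawMs: number } | null {
  const canvas = document.createElement("canvas");
  const gl = canvas.getContext("webgl");
  if (!gl) return null;

  const dbg = gl.getExtension("WEBGL_debug_renderer_info");
  const renderer = dbg
    ? String(gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL))
    : String(gl.getParameter(gl.RENDERER));

  const t0 = performance.now();
  for (let i = 0; i < 1000; i++) {
    gl.clearColor(Math.random(), 0, 0, 1);
    gl.clear(gl.COLOR_BUFFER_BIT);
  }
  gl.finish(); // force the GPU work to complete before reading the clock
  const drawMs = performance.now() - t0;

  return { renderer, drawMs };
}

console.log(webglSignals());
```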
As the years pass, I keep thinking back and realize that Richard Stallman was right all along:
> For personal reasons, I do not browse the web from my computer. (I also have not net connection much of the time.) To look at page I send mail to a demon which runs wget and mails the page back to me. It is very efficient use of my time, but it is slow in real time.
I think Stallman just shot himself in the foot by even revealing that much. Unless a lot of people do the same thing, it's very easy to conclude that it was Richard Stallman who sent that wget request, given a few variables. The difficult part is perhaps tracking it back to its actual source, but I don't think Stallman is that hard to find. All this is of course extremely chilling. I'm sure a profile could be built up around wget requests, and then some "likelihood machine" could be employed on it to make educated guesses as to how likely it is that the wget request was actually from Richard Stallman. I think we've just stumbled upon a new and "fun" Where's Wally game here!
I actually did exactly that a while ago. Where I worked, we didn't have internet access but we had email access, so as a workaround, I made an email server on my home machine that fetched web pages for me. A coworker took it even further and made a proxy server that automated the process so you could actually browse the web, although very slowly. Just to say that Stallman is not the only one with this idea.
It was in the early 2000s, and smartphones weren't a thing. It also was a time when companies were paranoid about letting employees access the internet, but at the same time had abysmal security. By that I mean viruses ran free on shared folders, undetected because their antivirus software was years out of date. Very different times...
"Mail for you, sir!"
Stallman shot himself in the foot by having a text-only blog that was easily searchable when it came time for the wolves to cancel him. A crappy proprietary blog or thousands of hours of ranting via YouTube videos ironically would have slowed down the haters and maybe even caused them to miss things to cancel him with. It's hilarious in an ironic way. Bonus points if the cancelers were running GNU software. :D
Are we losing the point, here?
“Does not browse”
When interacting with webmail, he also does not browse directly, preferring CLI scripts to act as an intermediary.
Does wget execute .js or .css, or run anything it reads beyond following a URL redirect?
Is wget a huge attack surface like a browser?
Why is this being fought with technical measures (which are ineffective and cripple the web as a platform) instead of legal consumer law where you can easily fine and punish companies that do the fingerprinting?
EDIT: Note that you can do BOTH - but one without the other is just a game of whack-a-mole.
Granted, the enforcement should be stepped up.
Example, please.
Because some browser-makers (Firefox at least) believe that the identity of those browsing the web should be protected. Legislators do not believe that. (At least, a majority of legislators do not.)
What kills me is the cookie consent stuff. They should have enforced that Do Not Track is honored, with fees that make sites ensure compliance or be sued for not honoring DNT, which IIRC was sent as an HTTP header. It would have actually been a meaningful solve.
Would you consider the entire European Union a minority of the legislators? Because that's what the GDPR is designed to do: make identifying customers well controlled and expensive, whatever the method.
A law needs a justification and needs to apply equally to everyone. Writing that about fingerprinting would not be trivial. Some site operators can make a believable argument that they use it in ways that are good for society.
The short answer, which should be obvious: regulation doesn't work, and the legal route doesn't currently work.
The burden of proof is on the claimant, and with proper information control you can't ever meet that burden of proof. It becomes an ant versus a gorilla instead of David vs. Goliath.
Tell me, how do you differentiate a simple random alphanumeric string from another random string that may have been generated as a fingerprint?
Mathematically, do you think there's any way to actually prove it one way or the other? If not, how would that bias the system if the person is adversarial and lies?
The only way to prevent this is to make sure the information is nonsensical.
Preventing collection would identify you in a way that they can prevent access. Even though websites are public, you see this happening with any captcha service.
Can you provide any proof that "regulatory doesn't work"?
Might be my European outlook, but consumer law has been stupidly effective at curbing abuses from companies here and was much more effective than playing the technology race the USA is trying to fight. There's always the next side-step, the next abuse a company can invent - and you keep trying to push the responsibility of avoiding it onto users (by adding more and more onerous technology) instead of punishing the abusers.
Because bad actors have an easy time on an actually global network. It's disturbingly hard to hold bad actors accountable, particularly if they have zero legal presence (e.g. a corporation's subsidiary) in one's jurisdiction.
Is it really that hard? I haven't seen anyone from the US actually attempt any accountability - zero punishments for spam callers, zero punishments for data collectors, not even a semblance of an attempt to punish data traffickers.
But we’re talking here about major corporations who would (largely) follow the law if there were a law with teeth commensurate with the potential rewards from abuse of privacy.
Look, forget about threat models. It's relatively trivial these days to avoid fingerprinting attacks if you want to (as a private, web browsing individual).
I use fingerprinting actively in enterprise apps as a form of silent 3FA. It's a useful backstop. If I have a user who forgot their password but retrieves it via email, I'll usually let them pass if their fingerprint matches one of their priors; otherwise my software shoots off an email to their immediate superior to make that manager validate that the machine the employee is using is one they can vouch for.
I've always viewed browser fingerprinting as something that can be leveraged as a security feature. It's far more useful for that than for some sort of distributed tracking. I'd never want to live in a world (ahem ... China) where submitting to such fingerprinting actively was mandatory, or politically punishable if you didn't. No society should be run like an employer/employee organization with that sort of lack of trust. No sane free person would allow their own browser to transmit a fingerprint. But for employer/employee systems management? It's a great tool in the box.
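A rough sketch of that kind of backstop logic (hypothetical names and storage; the commenter's actual system isn't shown):

```typescript
// Hypothetical sketch of "silent 3FA": after an email-based password reset,
// allow the login silently only if the device fingerprint matches a prior one;
// otherwise ask the user's manager to vouch for the machine.
interface ResetAttempt {
  userId: string;
  fingerprintHash: string; // e.g. a hash of canvas/WebGL/header signals
}

const knownFingerprints = new Map<string, Set<string>>(); // userId -> prior hashes

async function notifyManager(userId: string, fp: string): Promise<void> {
  // Placeholder: send a "please vouch for this machine" email to the manager.
  console.log(`escalate: user=${userId} unrecognized device ${fp.slice(0, 8)}`);
}

async function handleReset(attempt: ResetAttempt): Promise<"allow" | "escalate"> {
  const priors = knownFingerprints.get(attempt.userId);
  if (priors?.has(attempt.fingerprintHash)) return "allow";
  await notifyManager(attempt.userId, attempt.fingerprintHash);
  return "escalate";
}
```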
Really? It takes a minute to set up a VPN and do your web browsing through a virtual machine. I guess it's not "trivial" for the average American, but it definitely is for the average terrorist or child pornographer, so it's easy compared to surmounting most other threat models faced by people intending to evade detection. Therefore, "trivial".
[edit] also, the less trivial it is, the better for corporate security.
I'm afraid your view is how the journey to the "world where submitting to such fingerprinting actively was mandatory" starts. Something with frogs in very warm water.
I upvoted this because it's the only smart comment to my post here. This is the ultimate concern.
That said, fingerprinting is only useful as a third security measure because most people don't understand its mechanics. The mechanics of avoiding being tracked are pretty basic. If our country required browsers or computers to transmit their fingerprint, people would find ways around it and it would stop being useful as a security metric.
Put another way, the moment this becomes a feature of an oppressive regime, it's one of the easiest things to work around. The obscurity is what makes it remain somewhat useful.
(1) Users should not receive passwords via e-mail. (2) How very enterprisey of you to even be able to send passwords, which one also should not be able to do. (3) Users can change or modify their browser, either to another browser entirely or through installation of addons. The fingerprint is not guaranteed at all to stay the same or similar.
This is an uncharitable reading of the comment. "Retrieve via email" can just as well be understood as reset using an email flow, as is common on most websites. And the comment does not claim they rely on fingerprints never changing, they say that if you do have a matching fingerprint, you can use that instead of another procedure.
(1) There is nothing wrong with sending a password via email. Even if you send a reset link instead, an email provider could steal that too.
(2) The server gets sent your password every time you log in. You shouldn't rely on a server operator not knowing your password.
(3) You can tune how sensitive the system is in response to changes in the fingerprint. Even if there's a failure to match, that just means authentication will be extra strict.
You seem to have a conflict of interest here. How can you accept this for employee/employer but at the same time say it's not ok for a person to submit fingerprinting? Employees are also persons.
Because the software is only allowed to be used on company computers and a few personal devices which have to be approved by upper management. It isn't fingerprinting the person or the public. It's checking that the software is running on a known/approved machine.