I think the developer community needs to start ostracising people working for these companies. Don't hire former employees, don't hang out with people who work for these companies at conferences.
Don't supply services to these companies (build their website, network...).
I believe that by letting people off the hook for participating in this (similar things can be said for e.g. the NSA) we are essentially endorsing the behaviour. If you work at e.g. NSO Group, you are personally responsible for governments suppressing and even killing (just look at SA) critics.
Ostracising someone from society based solely on where they work, without looking at their actual actions, is guilt by association, a tactic often used by authoritarians. Everyone in a civilised society has the right to a fair trial without the presumption of guilt.
Finally, somebody's brave enough to speak the truth!
I've been helping with some work for a small local gang - we do the usual (murder-for-hire, "debt collection", extortion, etc). Although I only do administrative work - keeping records and such. Pays great. But you know what? My wife - my wife of five years - left me when she found out.
Can you believe that? What a fucking fascist. I didn't do anything wrong. I never killed anybody. And, sure, I did also help machine firearms for folks, and I did help with some supply chain issues to make sure we have a reliable supply of bullets, but I never shot anyone. Not one person.
How dare anybody discriminate against me?
If an organisation is criminal, then being a member of that organisation means that, yes, you are a criminal too.
But in this case it is not a criminal organisation by law; it is a company selling "software weapons" to a government that western governments view as legitimate, and therefore OK to sell weapons to, even though it is not at all democratic.
So the company is (probably) acting within the law, but we can probably agree that it is not moral to do so. If we agree on that, then it is also not OK to support a company that is doing wrong. So I agree with avoiding people who do unethical work. But I don't know enough about the companies in question to make that final judgement, so I judge case by case, as always.
> Everyone in a civilised society has the right to a fair trial without the presumption of guilt.
In the case of criminal and civil proceedings, sure, but a boycott on my part is an application of my own moral compass, not of the law. I don't owe anyone a "fair trial" for the judgement that guides my own free actions.
If, for any reason other than absolute necessity, you work for an organization that serves authoritarian, anti-democratic regimes with tools designed specifically to implement policies to that end, I will think poorly of you for that reason alone. I won't trust that you are able to make decent moral choices. I will base my own conduct on that judgement. My conduct, insofar as it's clearly legal, should not be the subject of a fair trial.
I did not say ostracising from society, just from our community. Regarding a fair trial, this is not a legal matter, it is a moral judgement. And working for a company that is actively helping authoritarian governments to prosecute and kill dissidents is making a choice. That is not guilt by association; you _are_ helping with your actions.
Your argumentation is exactly how totalitarian governments commit atrocities: divide the responsibilities up enough so that every little cog can justify to themselves that what they are doing is not morally wrong. I know I'm coming close to invoking Godwin's law, but Oskar Gröning had a moral (and even legal) responsibility for his actions, even if he did not kill anyone himself.
> Everyone in a civilised society has the right to a fair trial without the presumption of guilt.
This is absurd. A "right to a fair trial" is the standard for criminal trials, which are associated with criminal punishments, particularly but not always imprisonment and execution.
The right to a fair trial has never been a standard we as individuals are obliged to follow in other contexts; for example: it would be ridiculous to think "a trial" is needed before we decide whether we should continue doing business with a company that dismisses its employees for being gay or trans.
Similarly, there is a long history of consumer boycott movements to pressure both companies and nations into acting more ethically; from Apartheid South Africa, to confectionery and fruit companies, to oil companies, to which eggs we might choose to buy. In none of those circumstances is a "right to a fair trial" a relevant concern.
A better question might be: How unethical does a company have to be before the act of just working for them should be considered immoral enough to merit public rebuke and repudiation? I don't think there's a lot of companies that reach that threshold, but I'm adamant that some should: Blackwater is the most obvious choice here.
This is the epitome of the ignorant, tiresome, bad-faith "innocent before proven guilty" argument. The childish "A tactic often used by authoritarians" is just icing on the cake.
It’s not guilt by association, it’s guilt by action. The action of deciding to work for the company with such a mission / clientele. Especially in a role requiring enough technical knowledge to know what’s going on.
>Everyone in a civilised society has the right to a fair trial without the presumption of guilt.
Fair trial is about the government enforcing laws, not about social groups enforcing morals and ethics. No one has the right to a trial when you act like an asshole and no one wants to be your friend because of it.
Seriously: boycott, divest, sanction NSO Group and similar businesses.
I was recently offered a job by NSO, didn't take it due to their terrible reputation. I won't be surprised if some countries start denying entry to NSO employees. Even Facebook suspended accounts of NSO employees after NSO hacked Whatsapp - https://www.vice.com/en_us/article/7x5nnz/nso-employees-take... .
On the other hand, their product is just a tool which can be used for good (stopping terrorists) or evil (spying on human rights activists). Just like a kitchen knife can be used for good (cooking a meal) or evil (stabbing people). So I find it hard to find the moral justification for the actions you suggest. The problem is not the tool or the tool's manufacturer, it's how it gets used.
I’ll play the opposite side of this argument, for the sake of discussion. You point to knives having a good use: cooking. It’s by far the dominant use of knives, and no doubt it makes cooking substantially easier.
But hacking tools: to what extent are they actually being used for good? Stuxnet is the clearest example I know of these tools almost certainly decreasing a threat to US citizens (at least for the time before it was found out). But beyond that, there’s very little publicly accessible information demonstrating that these tools are actually effective at stopping or decreasing terrorism. Moreover, even if they turn out to be effective at that, their use in this manner comes with other questionable effects on law and personal rights. I don’t think the knife is a good analogy because while everyone agrees that a knife can be put to either good or bad effect, there’s not consensus on whether hacking tools can even be used for any good.
> On the other hand, their product is just a tool which can be used for good (stopping terrorists) or evil (spying on human rights activists).
That applies to lots of technology though. With NSO Group specifically, wouldn't their tech need salespeople who actively court potential customers and sell it to them?
> Just like a kitchen knife can be used for good (cooking a meal) or evil (stabbing people).
NSO knowingly sells tools to repressive regimes that use them to violate human rights. If you sell a knife to someone you know is going to use it for murder then you're culpable and your behavior is immoral.
I'm sure there are dozens of companies like NSO that you just don't know about.
It's more like a self-guiding missile. It's meant to hurt, so that makes NSO pretty dodgy.
This won't work; as long as there is a market for hacking phones, there will be those willing to sell their expertise.
We should focus on making things more secure. While security is a tough problem, it's also somewhat surprising that properly sandboxing a browser is so difficult.
Maybe, but it'll make them way, way more expensive.
The mobile OS strategy failed everywhere. Now we have bad security (seriously, this is an OS and browser error) and bad lock-in. I doubt it would have been as easy to do with a decently updated, conventional desktop PC, even if you could redirect its network access as was done here with his phone.
Even with a mitm attack on your browser, this shouldn't have happened.
Couple that with compliance being driven by ethics and non-compliance being driven by money - it will never work.
We should focus on increasing security.
I agree that people supporting this are guilty, but I don't agree with blacklists of developers for political reasons. Blacklists are already established in the industry and speak of incompetence in leadership as it is. That doesn't mean this behavior should be endorsed, but that is a case for legislation.
I really don't like NSO, to the point I never go to their parties and meetups even when invited and they have good parties.
However it's worth mentioning they really don't see it that way. A lot of people working for NSO (or the NSA) see themselves as making a personal sacrifice for public safety.
Also, NSO doesn't operate said technology it just sells it - so it's a bit more like going after people making anti DRM software or p2p sharing software. The only big difference is that NSO is making money.
That ‘only’ difference is a very big one, and they are completely aware that their software will be misused and are happy to make a profit with it.
To say, ‘we will only sell our software to countries who promise not to use it to violate human rights, and if we catch them doing it, we will suspend it’ is just hand waving. The software is designed to be undetected. That’s the whole point.
An actual policy would be that ‘we do not sell our software to countries who have a bad human rights track record, as defined by <independent group>’ ... but that would cut into sales.
It's more like going after people who make waterboarding kits and run logistics for kidnappings. Anti-DRM and p2p software aren't usually associated with aiding & abetting torture and murder of dissidents and journalists. Framing the two as equivalent elides what NSO group's employees are actually complicit in.
Respectfully, I think knowingly aiding covert surveillance of dissidents is a lot worse than merely helping pursuit of copyright violations, even if I don't like the latter much.
> I really don't like NSO, to the point I never go to their parties and meetups even when invited and they have good parties.
> However it's worth mentioning they really don't see it that way. A lot of people working for NSO (or the NSA) see themselves as making a personal sacrifice for public safety.
Yes, we as humans are very good at justifying our own actions to ourselves. It also doesn't help if it's in your employer's interest to reinforce this perception, creating a culture of "we are what stands against evil". This makes it even more important that outsiders tell them that we hold a different moral judgement.
> Also, NSO doesn't operate said technology it just sells it - so it's a bit more like going after people making anti DRM software or p2p sharing software. The only big difference is that NSO is making money.
Apart from the fact that people don't die or get tortured because of p2p software, the question is also whether someone working on e.g. biological weapons should be able to absolve themselves by saying "I did not throw the bomb". Yes, they did not throw the bomb, but they made a tool designed for one purpose only, to be put into that bomb, and they were fully aware of its purpose. They hold as much responsibility as the person using it.
If you work for a company like NSO you are willingly complicit in violations of human rights. That's not the kind of person I want to work with.
I do agree that individuals should be held accountable for their work but it's the degree of the work that is problematic. Is it direct contribution or is it indirect contribution?
If I am working on an open source project used by NSA to hack you, am I responsible? No. That type of moral policing would be bad.
If someone is writing software directly for hacking you, then yes, they are responsible, but then you must consider all the actions of the org where they used that tool. People might work on these tools because of terrorism or because they believe in the security of the state. That's by no means bad, but how the org goes about it can be bad and infringe rights. The workers don't have control over that. Now, if they don't quit over the bad use of their tool and are not constrained by anything (a person working for the NSA is likely to get another job without problem), then I think there's something to be said about their personal responsibility.
Verifying the degree of contribution from outside is very hard to do, as most details of what happens inside these orgs remain secret. What their employees are told can be wildly different from what they end up doing.
That said, I don't believe targeting individuals will have much effect. It's actively bad because there's an easier road here: hold the org accountable. If we go down the path of wasting energy on excommunicating individuals, orgs may get a free pass. It's not hard to replace people in a big org, especially a monopoly. Go for the low-hanging fruit. Boycott the org.
I don't think the inclusion of the word "mob" is very helpful. The connotations are both sinister, and organised.
What we have is the logical extension of the social justice, or SJW, movement, which even 2 years ago, in my recollection, would have been met with utter disdain. Somehow we've arrived at a time when social justice has a new-found legitimacy and few detractors still speaking out about it.
To me this is scarier than a mob, who usually have a figurehead around whom they rally. The SJWs have been building their seat of power on the shoulders of social media celebrities.
This is Huxleyan populism. People 'follow' others from their sofa, they 'like' things without critical assessment, bolstering support for an ill-defined cause based on memetic catchphrases and sound-bite signals.
Twitter mobs help get people fired. NSO helps people get murdered. I can't claim to be a big fan of either, but as long as both exist I know which I'd like to see prevail.
I think the developer community needs to start refusing to use the cellphone. It cannot be trusted. It's tainted by non-free software on top of a non-free OS on top of non-free firmware, with a separate processor whose behaviour we cannot observe from the main processor. It also relies on a centralised wireless network run by only a handful of providers. An easy, single, vulnerable point of attack.
I do refuse to own a cellphone. What about you? Since you're suggesting the boycott, can you?
If the phone wasn't proprietary, would it have made any difference?
The answer is obvious.
I think the problem can be solved by separating the "phone" experience from the "mobile" experience.
Phones are these devices powered by a philosophy (and to an extent, a technology) from 3-4 decades ago, and day after day we see them ruining the experience of having internet access in your hands. We need to move from a mobile-phone era to a mobile-internet era.
The Citizen Lab reports (one linked from this article) about the Israeli NSO Group's Pegasus spyware have been really scary for a few years now already.
Here's a category of articles on the citizenlab.ca web site described as "Investigations into the prevalence and impact of digital espionage operations against civil society groups": https://citizenlab.ca/category/research/targeted-threats/
This is a frightening 8-part series about the abuse of "Pegasus" in Mexico 2017-2019: https://citizenlab.ca/2017/02/bittersweet-nso-mexico-spyware...
"NSO Group Technologies (NSO standing for Niv, Shalev and Omri, names of company's founders) is an Israeli technology firm whose spyware called Pegasus enables the remote surveillance of smartphones. It was founded in 2010 by Niv Carmi, Omri Lavie, and Shalev Hulio. It employed almost 500 people as of 2017, and is based in Herzliya, near Tel Aviv."
--Wikipedia (https://en.m.wikipedia.org/wiki/NSO_Group)
I assume it's called Israeli because it was founded by three Israeli citizens, in Israel, and the HQ and staff almost all work near Tel Aviv.
And while Novalpina Capital provided funding, that was only in a partnership with two of the original founders, as a buy-out.
I saw this discussed on reddit, and I was surprised that there was so much confusion about how this happened. It wasn't just "network injection" - quite clearly (unfortunately very poorly described in the article) there was a vulnerability in iOS/Safari that allowed remote code execution; network injection alone wouldn't have been enough. Does anyone know what the CVE was that allowed this?
A code execution vulnerability isn't enough. To work on truly any website, they need:
- A remote code execution vulnerability. There are almost certainly multiple vulnerabilities at play here, since long gone are the days where a single vuln gave arbitrary code execution.
- A way to bypass the encryption/https, unless the remote code execution was on a layer before encryption (which seems unlikely). EDIT: Apparently the hack only works on non-encrypted websites.
- Once remote code execution is achieved, they most certainly need a way to elevate privileges in order to make the hack more persistent and tap into other apps.
There are most likely several CVEs at play here. The amount of effort that went into this hack is, frankly, terrifying.
From my understanding, it is easy enough to bypass HTTPS encryption if you need to intercept traffic for an attack like this. You only need to intercept and modify the traffic for some website the target visits, not for one specific website.
There are still websites that don't use HTTPS.
For websites that do use HTTPS, if they haven't configured something like HSTS, HPKP or Expect-CT, typing example.com into a web browser will make it send an unencrypted HTTP request to http://example.com. If the website's content is served only over HTTPS, the server will most likely respond with something that redirects the web browser to the HTTPS version of the website (most likely an HTTP 301 or 302 status code). The initial unencrypted HTTP request can be intercepted and modified.
And if a government did it, it would not be much harder for them to do it to everyone in the world at the same time, before any exploits get fixed...
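To see that unencrypted first hop for yourself, here's a minimal sketch in Python (assuming the third-party requests library is installed; example.com is just a placeholder and may or may not actually redirect):

    import requests

    # The first request below travels the network as cleartext HTTP; anything
    # on the path (router, ISP middlebox, rogue base station) can answer or
    # rewrite it before the server's redirect ever upgrades you to HTTPS.
    resp = requests.get("http://example.com/", allow_redirects=True)

    for hop in resp.history:             # any redirect responses, in order
        print(hop.status_code, hop.url)  # e.g. "301 http://example.com/" if the site redirects
    print(resp.status_code, resp.url)    # final URL; https only if upgraded

The attack window exists only because of that plain-http first hop, which is why the non-HSTS qualifier matters.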
Glad you fixed your comment with your edit, but this really isn't that hard to imagine at all: a privilege escalation in Safari is exactly how one of the original jailbreaks worked: https://en.wikipedia.org/wiki/JailbreakMe.
All you then have to do is network-inject on a user who visits a non-HSTS site by entering it in their address bar.
No need to bypass encryption.
But "any website" here seems to be "any non-https website", so this is more likely: a) router or baseband-processor hack plus b) malicious JS injection into unencrypted HTML plus c) browser vulnerability via JS.
> "There are almost certainly multiple vulnerabilities at play here, since long gone are the days where a single vuln gave arbitrary code execution"
Could you go into this in a little more detail?
I'm inferring that chains of vulnerabilities are needed to go from some starting point to arbitrary code execution. Is that correct?
Have efforts to secure computer systems over the past ~2 decades succeeded, at least in that much more effort needs to be invested in order to get to the point of arbitrary code execution?
Hmm, might make me think twice now about going to http://neverssl.com/ in a dubious location.
Too bad I can't run a different browser engine on iOS. In a monoculture everyone is exposed to the same vulnerabilities. If we had 3 or 4 browser engines running on iOS then the odds of a specific vulnerability affecting a single user go down.
Further, there is competition for having the most secure browser. It's not controversial to say that 12 years ago IE, Firefox and Safari were pretty bad at security, and Chrome in 2008 pushed them all to up their game.
Apple's stance on browser engines is at best claiming security by obscurity. Either apps are sandboxed or they aren't. If they are then it would be safe to run any browser engine. If they aren't then having only one means users have no choice when that one fails.
It doesn’t protect you from vulnerabilities in things like, say, the code in the system API which paints video frames to the screen, which is where a lot of these vulns seem to be. It wouldn’t have helped in 2008 either; didn’t Safari and Chrome mostly share WebKit for quite a long time anyway?
Also, does iOS have something similar to SELinux? I know it's not perfect, and there have been RCEs in Android as well. But I'm surprised there are still things out there like the original tiff jailbreak exploit, which allowed full root access to a person's device from just visiting a webpage.
The iOS equivalent would be the app sandboxing mechanism, which very heavily restricts kernel access from most of Safari (most importantly the process that is JITing javascript). It’s structured differently than SELinux and has some complications like entitlements that have led to vulnerabilities in the past, but it largely allows iOS to apply the same sort of app-level access control.
iOS has a number of features that provide similar functionality; getting kernel level privileges on iOS requires multiple vulnerabilities to be chained together. It's not like RCE in the web browser process instantly compromises the entire system.
>Does anyone know what the CVE was that allowed this?
>The malicious code even wipes crash logs, making it impossible to determine exactly what weaknesses were exploited to take over the phone, said Claudio Guarnieri, head of Amnesty International’s Security Lab, in an interview.
Thanks for clarifying. I honestly wondered how the browser was able to install spyware that "allows remote access to everything on the phone" (per the article), as the browser is supposed to be a sandboxed environment. I'm relieved it was "just" a vulnerability in iOS.
I want to post this every time someone claims Apple is best for security. We need logic to fight marketing.
You can still be the best and have security vulnerabilities. That proves absolutely nothing. I don’t know what kind of logic you are using. Are you implying that the best at security should never have had security vulnerabilities? If yes, what platform would that be?
On HN I've seen a lot of unencrypted sites lately. I don't personally feel comfortable browsing on them, so I avoid them. Near the end of the article here, it mentions that this is only possible on an unencrypted website. Is there a reason why so many people are not encrypting their websites? Even browsers seem to have picked up on the insecure nature of http. Please correct me if I'm wrong here, I just find it very strange how many links I've inspected only to see a lack of TLS/SSL.
If your browser can be hijacked by visiting a webpage, the threat vector is not substantially different whether the website is HTTPS or not. It changes the attack path from a MITM attack to a watering hole attack but that doesn't overly raise the difficulty level.
The real threat in this case was that sending the right string of data to a browser let a malicious actor execute an RCE and install malware. Either you trust the browser to be secure against such attacks or you can't trust much of anything.
If I'm targeting an individual/organization a watering hole attack requires that I can own a site the target visits regularly.
This seems a lot more complicated than just going to any unencrypted website via network redirection of some sort. Do most people routinely visit encrypted sites that are easily hacked to target an RCE on an individual?
I'll admit up-front that I don't have a solid source that I can cite for this, and there's a good chance that it's outdated by now. That being said:
I've heard several times that this is largely driven by the crappier flavors of "media platform" and bottom-of-the-barrel ad networks breaking in spectacular fashion due to CORS and mixed-content problems when the main site tries to switch to HTTPS.
That was indeed a thing several years ago as ad networks were being forced to support HTTPS, but all of the major ad networks and ad servers have supported HTTPS for years at this point.
I don't doubt there are some bespoke ad servers or other dark corners of ad infrastructure where HTTPS support is still lacking, but that should be rare at this point.
For my simple self-publishing needs... because I didn't make a self-signed cert for my Debian-based server, have not purchased a commercial cert, and do not want the short-term expiry on LetsEncrypt.
edit- I do not have javascript-driven pages, they are PDFs or simple content
You're not worried about a middleman injecting their content into yours? It's such common practice that Comcast even documented how they do it [0].
There are easily found examples of malicious content being injected into HTML -- malvertisements for example. I can only imagine what might get injected into a PDF which can run javascript [1]. PDF readers aren't exactly known for their security.
Frankly, I'd much rather be able to talk to you about something downloaded from your site and get you to fix it instead of allowing a third party to infect me and point fingers at you.
[0] https://tools.ietf.org/html/rfc6108
[1] https://stackoverflow.com/q/9219807/1111557
>and do not want the short-term expiry on LetsEncrypt.
certbot is too hard to set up?
>edit- I do not have javascript-driven pages, they are PDFs or simple content
The issue is that if the http protocol can be tampered with, even if all you serve is plain text, the attacker can change your response to contain javascript. Anyone visiting using a browser (with scripts enabled) will be vulnerable.
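To make that concrete, here's a toy sketch of the attack (purely illustrative Python, standard library only; the payload and addresses are made up). It's a plain HTTP forward proxy that rewrites HTML in flight - a real attacker does the same thing at the router or base-station level:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    INJECT = b"<script>/* attacker-controlled code would run here */</script>"

    class RewritingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # When a browser uses us as its HTTP proxy, self.path is the
            # absolute URL, e.g. "http://example.com/".
            with urlopen(self.path) as upstream:
                body = upstream.read()
                ctype = upstream.headers.get("Content-Type", "")
            if "text/html" in ctype:
                # Splice a script tag into an otherwise script-free page.
                body = body.replace(b"</body>", INJECT + b"</body>")
            self.send_response(200)
            self.send_header("Content-Type", ctype or "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Point a browser's HTTP proxy setting at 127.0.0.1:8080 to see it.
        HTTPServer(("127.0.0.1", 8080), RewritingProxy).serve_forever()

With HTTPS the middleman only sees ciphertext (or triggers a certificate error if it tries this), which is the whole point.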
I also avoid them most of the time, although I don't think there are necessarily more of them now than previously. Sometimes the reason they are not encrypted is that they are fairly old (hopefully the server software was updated at least, although that should be much less effort than setting up https). Sometimes the sites do support https but don't redirect http to https and the http link was submitted.
I did a quick manual count of yesterday's HN front page articles according to hckrnews.com and found 8 non-https links (vs. 109 total non-dead links). 2 of these have a working https version.
I use the HTTPS Everywhere extension set to the new "Encrypt All Sites Eligible" option. Instead of using a list like previous versions of HTTPS Everywhere, this tries to access every website via https and pops up a warning if that doesn't work (most of the time; one of the six non-https-supporting sites was misconfigured in a way that didn't trigger the popup). Since I want to know any time I access an http site, I choose the "open insecure page for this session only" option when I want to look at an http page, so that it tries the https site again in the future. There are simpler extensions that just do that, but unfortunately they are not Firefox Recommended extensions monitored by Mozilla. Hopefully it won't be too long before browsers do this themselves.
The main root cause for missing https in HN submissions is old GitHub Pages sites from before 2016.
GitHub rolled out HTTPS but didn't enable it by default for older sites. Gotta go to settings and tick https.
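For what it's worth, the front-page count above is easy to automate against the official HN Firebase API (the endpoints below are real; the 30-story cutoff is just my approximation of one front page):

    import json
    from urllib.request import urlopen

    BASE = "https://hacker-news.firebaseio.com/v0"

    def fetch(path):
        with urlopen(f"{BASE}/{path}.json") as r:
            return json.load(r)

    counts = {"https": 0, "http": 0, "none": 0}
    for story_id in fetch("topstories")[:30]:      # ~one front page
        url = (fetch(f"item/{story_id}") or {}).get("url", "")
        if url.startswith("https://"):
            counts["https"] += 1
        elif url.startswith("http://"):
            counts["http"] += 1
        else:
            counts["none"] += 1                    # Ask HN etc., no URL
    print(counts)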
Obviously, HTTPS adds complexity many people find unnecessary, plus it puts you in the position of depending on a third party - a certification authority.
In fact you can program almost any device (including very old and simple ones) to be a plain old HTTP client or server, but this is not the case with modern HTTPS.
You should see the list of root certificates shipped with most major browsers.
As part of dealing with this I wrote a simple Firefox add-on to highlight insecure links (https://addons.mozilla.org/en-US/firefox/addon/insecure-link...). Basically it gives you a big red border around any HTTP, FTP or dynamic link (that last one can be turned off as it makes sneaky places like Google light up like a holiday decoration).
According to the article, Amnesty International assumes that the journalist in question was targeted by an http MITM attack. This assumption fits nicely into the popular "http is bad, https is good" narrative, but it is just a guess (and probably far from the truth). Modern browsers support multiple network code paths, several HTTP versions, dozens of TLS versions and a boatload of ciphers. All of that code has RCE bugs.
Besides, delivering vulnerability payload via advertising network is far more reliable — with http-only exploit chain police would have to wait and hope that Omar will someday visit an http-only site. I would expect a pricey exploit toolkit, used by governments, to be more robust than that.
I've seen this improve a lot in recent years with Let's Encrypt, so that's been a great trend.
LE is still tedious as heck to set up on your own, though, so I guess people who haven't migrated to modern hosting yet are still being left behind. Most hosting-for-devs platforms these days give you HTTPS by default, and I don't think some would even let you host a website without it.
Here's what I recently did when I deployed a new site [0]:
Certbot went through the registration process in the terminal window. Enter an email address and read over the terms of service, and then it goes and does its thing and finally spits out a success message telling me where the certificate and private key are on the filesystem.
Then just point an nginx configuration file to the two [1] and tell nginx to test and reload its configuration.
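For anyone wondering what "point an nginx configuration file to the two" looks like, a typical pairing is something like this (the live/ paths are certbot's defaults; example.com is a placeholder):

    # in the server block of /etc/nginx/conf.d/example.com.conf (or similar)
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # then verify the config and reload
    sudo nginx -t && sudo systemctl reload nginx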
Then, LetsEncrypt will send an email to me notifying me that one or more certificates are about to expire (20 days, 10 days, 1 day ...). I even decided to test that and make sure it works (on a different site a couple of years ago) [2]. The certificate can be updated using the certbot-renew service:
systemctl start certbot-renew.service
Google searches show several examples which put the renewal service on a timer.
That's it! I'm not sure what you think is tedious about that process. Would you care to elaborate?
[0] https://systemd.software/index.html
[1] https://github.com/inetknght/systemd.software/blob/44c584c68...
[2] https://knightoftheinter.net/img/LetsEncrypt_Expiration_Warn...
January 31 of this year I got an email telling me that my LE client used the older ACMEv1 protocol, not the newer ACMEv2 protocol. They gave me 4 months notice to update my LE client to something compliant. I burnt the time and did the work.
On March 3 myself and many others[0] got an email demanding that we manually re-issue our certificates because of a vulnerability discovered in the LE service. They gave us one day to comply, after that they would revoke the certificates and our users would receive security errors. I begrudgingly went through all my servers and issued the command to forcibly renew certificates. Not a huge burden for me, but likely a bigger burden for larger operations.
As the feature set grows (new challenge types, wildcard support, etc.) and the service gets even more popular, it's going to be an even bigger target and the effects of a monoculture will really be felt. I'm starting to see the value in paying for certificates, and more specifically, using providers that don't provide a public certificate issuance API (or at least stick it behind a paywall.)
How many times would LE have to accidentally issue gstatic.com or fbcdn.net before they get the Symantec treatment[1]? Too big to fail: it's not just for investment banks. And that should give anyone seeking a decentralized internet pause.
[0]: https://www.zdnet.com/article/lets-encrypt-to-revoke-3-milli...
[1]: https://www.zdnet.com/article/mozilla-warns-it-plans-to-dist...
Because 3 lines of nodejs can make a cool web demo for HN, but making that same demo https (in a way which isn't going to require manual action every 3 months) involves many more lines of code.
Or my other 1-liner cron job that runs certbot for other demos.
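That cron job is roughly this shape (an illustrative crontab line, assuming root's crontab and an nginx front end; certbot only replaces certificates that are close to expiry, so running it daily is safe):

    # attempt renewal daily at 03:00; the post-hook reloads nginx after renewal runs
    0 3 * * * certbot renew --quiet --post-hook "systemctl reload nginx"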
What does https have to do with getting exploited?
If the web server is compromised, then it'll inject the malicious JavaScript code into the HTML and transmit that to you. SSL is irrelevant in this regard.
Unless, when not using SSL, the HTML is getting intercepted in flight and malicious JavaScript code is injected into it.
Is that more of what we are seeing these days? The routers are compromised, and the HTML is getting compromised too.
This is a legitimate question.
Granted, I'm fully in support of SSL. Nobody should be seeing what you are browsing. This leaves too many digital breadcrumbs lying around.
>What does https have to do with getting exploited?
It significantly helps to prevent MITM [1] (man-in-the-middle) attacks - at least ones that don't trigger scary certificate warnings.
>If the web server is compromised, then it'll inject the malicious JavaScript code into the HTML and transmit that to you. SSL is irrelevant in this regard.
The web server isn't compromised in this case, presumably the network is compromised.
>Unless, when not using SSL, the HTML is getting intercepted in flight and malicious JavaScript code is injected into it.
Yes.
>Is that more of what we are seeing these days? The routers are compromised, and the HTML is getting compromised too.
"Stingray" devices [2] spoof mobile towers so cellphones are tricked into believing they're connecting to "Just another cell phone tower" and at that point traffic can be captured/modified.
>Granted, I’m fully in support of SSL. Nobody should be seeing what you are browsing.
It's more than people just knowing what you're browsing (or issues such as leaking passwords/private info); it's that if someone can MITM you, they can also transparently modify unencrypted data (including adding exploits).
1. https://en.wikipedia.org/wiki/Man-in-the-middle_attack
2. https://en.wikipedia.org/wiki/Stingray_phone_tracker
Ok, so I want to make something clear to the (smart but mostly not "in-the-know" about NSO) HN crowd.
Let's say you're a Mexican drug lord or Saudi prince. You know this tech exists and the US/Israeli/European governments use it.
Then, you see this article, and see all the comments in the comment section about how competent, scary and balance-changing the technology is.
Basically: I think these pieces are bought and paid for by NSO through a PR firm, but you are not the target. When we leave comments like "NSO's tech is so good it has to be regulated!" or "NSO's tech is dangerous!" we are playing directly into the PR firm's clever hands.
It's like an article about how good the AR-15 or the F-35 are. Obviously to me (and most of the readers) it's mostly "why are we focusing on technology of death" but we are not the target.
Of course, NSO and other players in that field can do much, much, much more than advertised to the media.
Remember, the vast majority of people working for NSO worked for Israeli and US intelligence bodies. They serve in the 8200 unit doing malware analysis trailed by the NSA and then go work for NSO on the same sort of technology.
(If you want to get an idea of how much, I recommend "Permanent Record" but if you don't like Snowden then check out how far ahead intelligence bodies were _historically_ compared to public knowledge - WW2 crypto being a good analogy)
This lets the US government (and the Israeli government in turn) make money off the technology without going through the same international regulatory systems.
The US government (or Israeli government) could stop companies like NSO with a single decision, but they are not doing so, since it is making them money.
It's up to us (the citizens) to pressure them to do so and to promote security best practices and work on better tools to make it harder to breach peoples' privacy.
I'm not sure this particular article is paid for by NSO but there is a fierce competition in this space and NSO are just one player. As far as I know (I really don't, just rumors) NSO's tool is the best one but also the priciest. So arguably if I am a target audience (like a national security agency) an article like this outlines the competence of NSO / Pegasus.
Why aren't cell phone tower communications secured? Why aren't cell towers secured with certificates verified by the network? Why aren't stingray devices considered an attack on the cell network?
If stingray devices work by tricking your phone to connect with older protocols like 3G, why aren't those protocols deprecated just like we deprecate older encryption methods that are no longer secure?
Oversimplified answer: because people want their cellphone to work outside of major US cities.
For example, the laws on what is allowed to be encrypted and what is not differ significantly from country to country. There are also often older installations that only provide 3G support.
Basically, it's complicated and there are a lot of different reasons but it mostly comes down to the world being a big place with lots of different laws and requirements, yet people want a phone that works everywhere.
I think it has to be 2G GSM specifically; 3G UMTS does use a cipher that kind of holds. Also, a lot of phones aren't dynamically updatable, or aren't updated smartphones.
GSM downgrade attacks, as well as USB SDR gear, came out in the late 3G era. I kind of trust the 3GPP guys for protection from LTE onwards, but if GSM downgrade attacks are your primary concern in your life you can move to Japan and get a contract on au by KDDI, as KDDI flat out ignored CSFB to CDMA2000 and went all VoLTE.
>if GSM downgrade attacks are your primary concern in your life you can move to Japan and get a contract on au by KDDI, as KDDI flat out ignored CSFB to CDMA2000 and went all VoLTE
Wouldn't it be easier (at least on android) to go to mobile network settings and change it to "lte only" or "3g only"? As for using au by KDDI, I'm not even sure whether using their SIM cards will prevent a downgrade attack. It's possible that they still support 2g for roaming use, for instance.
> Why aren't cell phone tower communications secured? Why aren't cell towers secured with certificates verified by the network?
They can be, but that adds cost to running a cell phone network. Since very few people ask for their cell phone communications to be secured, companies just don't do it. It's like how GPS is completely unsecured - anyone with a couple hundred bucks can interfere with GPS signals. Maybe route a cruise ship into an island, or a random driver to a sketchy area of town.
> Why aren't stingray devices considered an attack on the cell network?
LEOs use them very frequently to conduct investigations and gather evidence. Just look at the current debate around full-device encryption. US law enforcement really likes access to information. Telecommunications (and a lot of the internet: TCP/IP, DNS, etc) were initially set up to be open - security was an afterthought. And they just never bothered to add security later on.
Also deprecating old things is hard. People still expect to be able to pick up a charged Nokia phone from 2001 and call 911 on it.
Both eventually wind up at the same place, but the first redirects to the second.
That can be a link, it can be an old bookmark, etc.
Worse, if it was a targeted ad, the https:// link could be just a redirect back to an http:// link, something the browser probably has no trouble doing.
> the https:// link could be just a redirect back to an http:// link, something the browser probably has no trouble doing.
Doesn't HSTS prevent exactly this? Sure, not every website implements it, but the most visited ones overwhelmingly do - it's certainly misleading to say "any website" in that context.
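For reference, opting in to HSTS is a single response header; as an nginx directive it looks like this (the max-age value is a common choice, not a requirement):

    # after the first HTTPS visit, the browser refuses plain-HTTP connections
    # to this host for max-age seconds, closing the typed-address window
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;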
>That can be a link, it can be an old bookmark, etc.
Doesn't even have to be a link. If you type addresses like most people do (i.e. without https://), the browser is going to attempt http first. So any manually typed address will be vulnerable as well.
Don't supply services to these companies (build their website, network...).
I believe by letting people of the hook for participating in this (similar things can be said for e.g. the NSA) we are essentially endorsing the behaviour. If you work on at e.g. NSO group, you are personally responsible for governments surpressing and even killing (just look at SA) critics
I've been helping with some work for a small local gang- we do the usual (murder-for-hire, "debt collection", extortion, etc). Although I only do administrative work - keeping records and such. Pays great. But you know what? My wife- my wife of five years- left me when she found out.
Can you believe that? What a fucking fascist. I didn't do anything wrong. I never killed anybody. And, sure, I did also help machine firearms for folks, and I did help with some supply chain issues to make sure we have a reliable supply of bullets, but I never shot anyone. Not one person.
How dare anybody discriminate against me?
But in this case it is not a criminal organisation, by law, it is a company selling "software weapons" to a government the western governments view as legitimate and therefore ok to sell weapons to, even though it is not at all democratic.
So the company is acting within the law (probably), but we probably agree, that it is not moral to do so. If we agree on that, than it is also not ok, to support a company who is doing wrong. So I agree, to avoid people, who do no ethical work. But I don't know enough of the companies in question to make that final judgement and judge case by case, like always.
In the case of criminal and civil proceedings, sure, but a boycott on my part is an application of my own moral compass, not of the law. I don't owe anyone a "fair trial" for the judgement that guides my own free actions.
If for any other reason than absolute necessity you work for an organization that serves authoritarian, anti-democratic regimes with tools designed specifically to implement policies to that end, I will think poorly of you for no other reason. I won't trust that you are able to make decent moral choices. I will base my own conduct on that judgement. My conduct insofar that it's clearly legal should not be the subject of a fair trial.
Your argumentation is exactly how how totalitarian governments commit atrocities, divide the responsibilities up enough so that every little cog can justify to themselves that what they are doing is not morally wrong. I know I'm coming close to invoking Godwins law, but Oskar Gröning had a moral (and even legal) responsibility for his actions, even if he did not kill anyone himself.
This is absurd. A "right to a fair trial" is the standard for criminal trials, which are associated with criminal punishments -particularly but not always- imprisonment and execution. The right to a fair trial has never been a standard we as individuals are obliged to follow in other contexts; for example: it would be ridiculous to think "a trial" is needed before we decide whether we should continue doing business with a company that dismisses its employees for being gay or trans.
Similarly, there is a long history of consumer boycott movements to pressure both companies and nations into acting more ethically; from Apartheid South Africa, to confectionary and fruit companies, to oil companies, to which eggs we might choose to buy. In none of those circumstances is a "right to a fair trial" a relevant concern.
A better question might be: How unethical does a company have to be before the act of just working for them should be considered immoral enough to merit public rebuke and repudiation? I don't think there's a lot of companies that reach that threshold, but I'm adamant that some should: Blackwater is the most obvious choice here.
Fair trial is about the government enforcing laws, not about social groups enforcing morals and ethics. No one has the right to a trial when you act like an asshole and no one wants to be your friend because of it.
Seriously: boycott, divest, sanction NSO Group and similar businesses.
Dead Comment
On the other hand, their product is just a tool which can be used for good (stopping terrorists) or evil (spying on human rights activists). Just like a kitchen knife can be used for good (cooking a meal) or evil (stabbing people). So I find it hard to find the moral justification for the actions you suggest. The problem is not the tool or the tool's manufacturer, it's how it gets used.
But hacking tools: to what extent are they actually being used for good? Stuxnet is the clearest example I know of these tools almost certainly decreasing a threat to US citizens (at least for the time before it was found out). But beyond that, there’s very little publicly accessible information demonstrating that these tools are actually effective at stopping or decreasing terrorism. Moreover, even if they turn out to be effective at that, their use in this manner comes with other questionable effects on law and personal rights. I don’t think the knife is a good analogy because while everyone agrees that a knife can be put to either good or bad effect, there’s not consensus on whether hacking tools can even be used for any good.
That applies to lots of technology things though. With the NSO group specifically though, wouldn't their tech have Sales people that need to actively court and sell it to potential customers?
NSO knowingly sells tools to repressive regimes that use them to violate human rights. If you sell a knife to someone you know is going use it for murder then you're culpable and your behavior is immoral.
I'm sure there are dozens of companies like NSO that you just don't know about.
It's more like a self guiding missile. It's meant to hurt, so that makes NSO pretty dodgy.
We should focus on making things more secure. While security is a tough problem, it's also somewhat surprising that properly sandboxing a browser is so difficult.
Maybe, but it'll make them way way more expensive.
Even with a mitm attack on your browser, this shouldn't have happened.
Couple that with compliance being driven by ethics and non-compliance being driven by money - it will never work.
We should focus on increasing security
However it's worth mentioning they really don't see it that way. A lot of people working for NSO (or the NSA) see themselves as making a personal sacrifice for public safety.
Also, NSO doesn't operate said technology it just sells it - so it's a bit more like going after people making anti DRM software or p2p sharing software. The only big difference is that NSO is making money.
To say, ‘we will only sell our software to countries who promise not to use it to violate human rights, and if we catch them doing it, we will suspend it’ is just hand waving. The software is designed to be undetected. That’s the whole point.
A actual policy would be that ‘we do not sell our software to countries who have a bad human rights track record, as defined by <independent group>’ ... but that would cut into sales.
> However it's worth mentioning they really don't see it that way. A lot of people working for NSO (or the NSA) see themselves as making a personal sacrifice for public safety.
Yes we as humans are very good in justifying our own actions to ourselves. It also doesn't help if it's in your employers interest to reinforce this perception, creating a culture of "we are what stands against evil". This makes it even more important that outsiders will tell them that we hold a different moral judgement.
> Also, NSO doesn't operate said technology it just sells it - so it's a bit more like going after people making anti DRM software or p2p sharing software. The only big difference is that NSO is making money.
Apart from the fact that people don't die or get tortured because of p2p software, the question is also should someone working on e.g. biological weapons be able to absolve themselves by saying "I did not throw the bomb?". Yes, they did not throw the bomb, but they made a tool designed for one purpose only, to be put into that bomb, and they were fully aware of its purpose. They hold as much responsibility as the person using it.
Dead Comment
Dead Comment
If you work for a company like NSO you are willingly complicit in violations of human rights. That's not the kind of person I want to work with.
I do agree that individuals should be held accountable for their work but it's the degree of the work that is problematic. Is it direct contribution or is it indirect contribution?
If I am working on an open source project used by NSA to hack you, am I responsible? No. That type of moral policing would be bad.
If someone is writing software directly for hacking you, then yes they are responsible but then you must consider all the actions of the org where they used that tool. People might work on these tools because of terrorism or believe in security of the state. That's by no means bad but how the org go about that can be bad and infringe rights. They don't have control over it. Now if they don't quit over the bad reuse of their tool and are not constraint by something (a person working for NSA is likely to get another job without problem), then I think there's something to be said about the personal responsibility.
Verifying the degree of contribution from outside is very hard to do as most details of what happens inside the orgs remains a secret. What their employees are told is wildly different than what they end up doing.
That said, I don't believe targeting individuals will have much effect. It's actively bad because there's an easy road here. Hold the org accountable. If we go down the path of wasting energy on ex-communicating individuals, orgs may get a free pass. It's not hard to replace people in a big org especially a monopoly. Go for the low hanging fruits. Boycott the org.
What we have is the logical extension of the social justice, or SJW, movement. Which even 2 years ago, in my recollections, would have been met with utter disdain. Somehow we've arrived at a time when social justice has a new-found legitimacy and few detractors still speaking out about it.
To me this is scarier than a mob, who usually have a figurehead around whom they rally. The SJWs have been building their seat of power on the shoulders of social media celebrities.
This is Huxleyan populism. People 'follow' others from their sofa, they 'like' things without critical assessment, bolstering support for an ill-defined cause based on memetic catchphrases and sound-bite signals.
I do refuse to own a cellphone. What about you. Since you're suggesting the boycott, can you?
If the phone wasn't proprietary, would it have made any difference?
The answer is obvious.
Deleted Comment
Deleted Comment
This is a frightening 8-part series about the abuse of "Pegasus" in Mexico 2017-2019: https://citizenlab.ca/2017/02/bittersweet-nso-mexico-spyware...
Here's a category of articles on the citizenlab.ca web site described as "Investigations into the prevalence and impact of digital espionage operations against civil society groups": https://citizenlab.ca/category/research/targeted-threats/
https://en.m.wikipedia.org/wiki/NSO_Group
I assume because it was founded by three Israeli citizens, in Israel, and the HQ and staff almost all work near Tel Aviv.
And while Novalpina Capital provided funding, that was only in a partnership with two of the original founders, as a buy-out.
Deleted Comment
- A remote code execution vulnerability. There are almost certainly multiple vulnerabilities at play here, since long gone are the days where a single vuln gave arbitrary code execution.
- a way to bypass the encryption/https, unless the remote code execution was on a layer before encryption (which seems unlikely). EDIT: Apparently the hack only works on non-encrypted websites.
- Once remote code is achieved, they most certainly need a way to elevate privileges in order to make the hack more persistent and tap into other apps.
There are most likely several CVEs at play here. The amount of effort that went into this hack is, frankly, terrifying.
There are still websites that don't use HTTPS.
For websites that do use HTTPS, if they haven't configured something like HSTS, HPKP or Expect-CT, typing example.com into a web browser will make it will send an unencrypted HTTP request to http://example.com. If the website's content is served only HTTPS, the server will most likely respond with something that redirects the web browser to the HTTPS version of the website (most likely a HTTP 301 or 302 status code). The initial unencrypted HTTP request can be intercepted and modified.
and if a government did it, it would not be much harder for them to do it to everyone in the world at the same time before any exploits get fixed...
All you then have to do is network-inject on a user who visits a non-HSTS site by entering it in their address bar.
No need to bypass encryption.
Could you go into this in a little more detail?
I'm inferring that chains of vulnerabilities are needed to go from some starting point to arbitrary code execution. Is that correct?
Have efforts to secure computer systems over the past ~2 decades succeeded, at least in that much more effort needs to be invested in order to get to the point of arbitrary code execution?
Hmm, might make me think twice now about going to http://neverssl.com/ in a dubious location.
Further, there is competition for having the most secure browser. it's not controversial to say the 12 years ago IE, Firefox and Safari were pretty bad at security and Chrome in 2008 pushed them all to up their game.
Apple's stance on browser engines is at best claiming security by obscurity. Either apps are sandboxed or they aren't. If they are then it would be safe to run any browser engine. If they aren't then having only one means users have no choice when that one fails.
>The malicious code even wipes crash logs, making it impossible to determine exactly what weaknesses were exploited to take over the phone, said Claudio Guarnieri, head of Amnesty International’s Security Lab, in an interview.
I want to post this Everytime someone claims Apple is best for security. We need logic to fight marketing.
The real threat in this case was that sending the right string of data to a browser let a malicious actor execute a RCE and install malware. Either you trust the browser to be secure against such attacks or you can't trust much of anything.
This seems a lot more complicated than just going to any unencrypted website via network redirection of some sort. Do most people routinely visit encrypted sites that are easily hacked to target an RCE on an individual?
I've heard several times that this is largely driven by the crappier flavors of "media platform" and bottom-of-the-barrel ad networks breaking in spectacular fashion due to CORS and mixed-content problems when the main site tries to switch to HTTPS.
I don't doubt there are some bespoke ad servers or other dark corners of ad infrastructure where HTTPS support is still lacking, but that should be rare at this point.
edit- I do not have javascript-driven pages, they are PDFs or simple content
There are easily found examples of malicious content being injected into HTML -- malvertisements for example. I can only imagine what might get injected into a PDF which can run javascript [1]. PDF readers aren't exactly known for their security.
Frankly, I'd much rather be able to talk to you about something downloaded from your site and get you to fix it instead of allowing a third party to infect me and point fingers at you.
[0] https://tools.ietf.org/html/rfc6108
[1] https://stackoverflow.com/q/9219807/1111557
certbot is too hard to set up?
>edit- I do not have javascript-driven pages, they are PDFs or simple content
The issue is that if the http protocol can be tampered with, even if all you serve is plain text, the attacker can change your response to contain javascript. Anyone visiting using a browser (with scripts enabled) will be vulnerable.
You should see the list of root certificate shipped with most major browsers.
I did a quick manual count of yesterday's HN front page articles according to hckrnews.com and found 8 non-https links (vs. 109 total non-dead links). 2 of these have a working https version.
I use the HTTPS Everywhere extension set to the new "Encrypt All Sites Eligible" option. Instead of using a list as previous HTTPS Everywhere, this tries to access all websites via https and pops up a warning if it doesn't work (most of the time; one of the six non-https supporting sites was misconfigured in a way that didn't get the popup). Since I want to know anytime I access an http site, I choose the "open insecure page for this session only" option if I want to look at an http page to make sure that it tries the https site again in the future and that I know any time I am visiting an http site. There are simpler extension that just do that, but unfortunately they are not Firefox Recommended extensions that are monitored by Mozilla. Hopefully it won't be too long before browsers do this themselves.
The main root cause for missing https in HN submissions is old github pages before 2016.
GitHub rolled HTTPS but didn't enable it by default for older sites. Gotta go to settings and tick https.
In fact you can program almost any device (including very old and simple) to be a plain old HTTP client or a server but this is not the case with modern HTTPS.
Besides, delivering vulnerability payload via advertising network is far more reliable — with http-only exploit chain police would have to wait and hope that Omar will someday visit an http-only site. I would expect a pricey exploit toolkit, used by governments, to be more robust than that.
LE is still tedious as heck to set up on your own, though, so I guess people who haven't migrated to modern hosting yet are still being left behind. Most hosting-for-devs platforms these days give you HTTPS by default and don't think would even let you host a website without.
Here's what I recently did when I deployed a new site [0]:
Certbot went through the registration process in the terminal window. Enter in an email address and read over the terms of service and then it goes and does its thing and finally spits out a success message telling me where the certificate and private key are on the filesystem.Then just point an nginx configuration file to the two [1] and tell nginx to test and reload its configuration.
Then, LetsEncrypt will send an email to me notifying me that one or more certificates are about to expire (20 days, 10 days, 1 day ...). I even decided to test that and make sure that works (on a different site a couple years ago) [2]. The certificate can be updated using the certbot-renew service: Google searches show several examples which put the renewal service on a timer.That's it! I'm not sure what you think is tedious about that process. Would you care to elaborate?
[0] https://systemd.software/index.html
[1] https://github.com/inetknght/systemd.software/blob/44c584c68...
[2] https://knightoftheinter.net/img/LetsEncrypt_Expiration_Warn...
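To make those steps concrete, here's a minimal sketch of the flow (not the literal commands from [1]; example.com and the paths are placeholders, and the exact flags depend on which certbot plugin you use):

    # Obtain a certificate; certbot prints where it stored the files,
    # typically under /etc/letsencrypt/live/example.com/
    sudo certbot certonly --webroot -w /var/www/example.com -d example.com

    # Point nginx at the certificate and key:
    #   ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    #   ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Test and reload the configuration:
    sudo nginx -t && sudo systemctl reload nginx

    # Renewal can run on a timer; on distros that package it:
    sudo systemctl enable --now certbot-renew.timer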
January 31 of this year I got an email telling me that my LE client used the older ACMEv1 protocol, not the newer ACMEv2 protocol. They gave me 4 months notice to update my LE client to something compliant. I burnt the time and did the work.
On March 3, I and many others [0] got an email demanding that we manually re-issue our certificates because of a vulnerability discovered in the LE service. They gave us one day to comply; after that they would revoke the certificates and our users would receive security errors. I begrudgingly went through all my servers and issued the command to forcibly renew certificates. Not a huge burden for me, but likely a bigger burden for larger operations.
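For anyone curious, with certbot the manual re-issue boils down to a one-liner per machine (other ACME clients have equivalents):

    # Force re-issuance even though the certificates aren't near expiry:
    sudo certbot renew --force-renewal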
As the feature set grows (new challenge types, wildcard support, etc.) and the service gets even more popular, it's going to be an even bigger target, and the effects of a monoculture will really be felt. I'm starting to see the value in paying for certificates, and more specifically, using providers that don't offer a public certificate issuance API (or at least stick it behind a paywall).
How many times would LE have to accidentally issue certificates for gstatic.com or fbcdn.net before they get the Symantec treatment [1]? Too big to fail: it's not just for investment banks. And that should give anyone seeking a decentralized internet pause.
[0]: https://www.zdnet.com/article/lets-encrypt-to-revoke-3-milli...
[1]: https://www.zdnet.com/article/mozilla-warns-it-plans-to-dist...
Or my other one-liner cron job that runs certbot for other demos.
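A cron entry of roughly this shape (illustrative; exact schedule and hooks vary):

    # Attempt renewal weekly; certbot only replaces certs that are near expiry.
    0 3 * * 0 certbot renew --quiet --post-hook "systemctl reload nginx"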
If the web server is compromised, then it’ll inject the malicious JavaScript code into the HTML and transmit that to you. SSL is irrelevant in this regard.
Unless, when not using SSL, the HTML is being intercepted in flight and malicious JavaScript is injected into it.
Is that what we are seeing more of these days: routers being compromised, and the HTML being tampered with in transit too?
This is a legitimate question.
Granted, I’m fully in support of SSL. Nobody should be seeing what you are browsing. That leaves too many digital breadcrumbs lying around.
It significantly helps to prevent MITM (man-in-the-middle) attacks [1], at least ones that don't trigger scary certificate warnings.
>If the web server is compromised, then it’ll inject the malicious JavaScript code into the HTML, and transmit that to you. SSL is irrelevant in this regards.
The web server isn't compromised in this case; presumably the network is.
>Unless when not using SSL, then the HTML is getting intercepted in flight, and a malicious JavaScript code is injected into the HTML.
Yes.
>Is that more of what we are now seeing these days? The routers are compromised, and the HTML is getting compromised too.
"Stingray" devices [2] spoof mobile towers so cellphones are tricked into believing they're connecting to "Just another cell phone tower" and at that point traffic can be captured/modified.
>Granted, I’m fully in support of SSL. Nobody should be seeing what you are browsing.
It's more than people just knowing what you're browsing (or issues such as leaking passwords/private info), it's that if someone can MITM you, they can also transparently modify unencrypted data (including adding exploits).
[1] https://en.wikipedia.org/wiki/Man-in-the-middle_attack
[2] https://en.wikipedia.org/wiki/Stingray_phone_tracker
Let's say you're a Mexican drug lord or Saudi prince. You know this tech exists and the US/Israeli/European governments use it.
Then, you see this article, and see all the comments in the comment section about how competent, scary and balance-changing the technology is.
Basically: I think these pieces are bought and paid for by NSO through a PR firm, but you are not the target. When we leave comments like "NSO's tech is so good it has to be regulated!" or "NSO's tech is dangerous!" we are playing directly into the PR firm's clever hands.
It's like an article about how good the AR-15 or the F-35 are. Obviously to me (and most of the readers) it's mostly "why are we focusing on technology of death" but we are not the target.
Remember, the vast majority of people working for NSO worked for Israeli and US intelligence bodies. They serve in Unit 8200 doing malware analysis, trained by the NSA, and then go work for NSO on the same sort of technology.
(If you want to get an idea of how much, I recommend "Permanent Record" but if you don't like Snowden then check out how far ahead intelligence bodies were _historically_ compared to public knowledge - WW2 crypto being a good analogy)
This lets the US government (and the Israeli government in turn) make money off the technology without going through the same international regulatory systems.
The US government (or Israeli government) could stop companies like NSO with a single decision, but they don't, since it makes them money.
It's up to us (the citizens) to pressure them to do so and to promote security best practices and work on better tools to make it harder to breach peoples' privacy.
If stingray devices work by tricking your phone into connecting with older protocols like 3G, why aren't those protocols deprecated, just like we deprecate older encryption methods that are no longer secure?
For example, the rules on what is and isn't allowed to use encryption differ significantly from country to country. There are also often older installations that only provide 3G support.
Basically, it's complicated and there are a lot of different reasons but it mostly comes down to the world being a big place with lots of different laws and requirements, yet people want a phone that works everywhere.
GSM downgrade attacks, as well as cheap USB SDR gear, came out in the late 3G era. I kind of trust the 3GPP folks on protection from LTE onwards, but if GSM downgrade attacks are your primary concern in life, you can move to Japan and get a contract with au by KDDI: KDDI flat out ignored CSFB to CDMA2000 and went all in on VoLTE.
Wouldn't it be easier (at least on Android) to go to the mobile network settings and change it to "LTE only" or "3G only"? As for using au by KDDI, I'm not even sure whether using their SIM cards will prevent a downgrade attack. It's possible that they still support 2G for roaming use, for instance.
They can be, but that adds cost to running a cell phone network. Since very few people ask for their cell phone communications to be secured, companies just don't do it. It's like how GPS is completely unsecured: anyone with a couple hundred bucks can interfere with GPS signals, and maybe route a cruise ship into an island, or a random driver to a sketchy area of town.
> Why aren't stingray devices considered an attack on the cell network?
LEOs use them very frequently to conduct investigations and gather evidence. Just look at the current debate around full-device encryption: US law enforcement really likes access to information. Telecommunications (and a lot of the internet: TCP/IP, DNS, etc.) were initially set up to be open, with security as an afterthought. And they just never bothered to add security later on.
Also deprecating old things is hard. People still expect to be able to pick up a charged Nokia phone from 2001 and call 911 on it.
The author directly contradicts the headline used here:
> The website must use “clear text” which means the URL starts with “http” not “https.”
http://google.com
https://google.com
Both eventually wind up at the same place, but the first redirects to the second.
That can be a link, it can be an old bookmark, etc.
Worse, if it were a targeted ad, the https:// link could just be a redirect back to an http:// link, something the browser probably has no trouble following.
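For illustration, a hypothetical site that upgrades http to https answers a plain request like this (placeholder domain; real headers vary):

    $ curl -sI http://example.com/
    HTTP/1.1 301 Moved Permanently
    Location: https://example.com/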
Doesn't HSTS prevent exactly this? Sure, not every website implements it, but the most visited ones overwhelmingly do - it's certainly misleading to say "any website" in that context.
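For reference, HSTS is just a response header sent over https; once a browser has seen it, the browser upgrades future requests to that host itself, so the insecure first hop disappears. A typical nginx directive (values illustrative):

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;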
Doesn't even have to be a link. If you type addresses like most people do (i.e. without https://), the browser is going to attempt http first. So any manually typed address will be vulnerable as well.
https://rietta.com/blog/comcast-insecure-injection/
https://news.ycombinator.com/item?id=21389657