If companies practiced data minimisation, and end-to-end encrypted their customers' data that they don't need to see, fewer of these breaches would happen because there would be no incentive to break in. But intelligence agencies insist on having access to innocent citizens' conversations.
> But intelligence agencies insist on having access to innocent citizens' conversations.
That's part of the problem. But companies also are unwilling to pay to do any of the things that you've described. There is no punishment or fine that is actually punitive. Protecting (short term) profit is more important than protecting users' data --- it's even more important than protecting the (long term) profit potential of a company with a good reputation.
Until the data breaches lead to serious $$$ impact for the company, the impact of these breaches will simply be waved off and pushed down to users. ("Sorry, we didn't protect your stuff at all. But, here's some credit monitoring!") Even in the profession of software development and engineering, very few people actually take data security seriously. There's lots of talk in the industry, but also lots of pisspoor practices when it comes to actually implementing the tech in a business.
Companies already pay for cyber insurance because they don't want to take on this risk themselves.
In principle the insurance company then dictates security requirements back to the company in order to keep the premiums manageable.
However, in practice the insurance company has no deep understanding of the company and so the security requirements are blunt and ineffective at preventing breaches. They are very effective at covering the asses of the decision makers though... "we tried: look we implemented all these policies and bought this security software and installed it on our machines! Nobody could possibly have prevented such an advanced attack that bypassed all these precautions!"
Another problem is that often the IT at large enterprises is functionally incompetent. Even when the individual people are smart and incentivised (which is no guarantee) the entire department is steeped in legacy ways of doing things and caught between petty power struggles of executives. You can't fix that with financial incentives because most of these companies would go bankrupt before figuring out how to change.
I don't see things improving unless someone spoon-feeds these companies solutions to these problems in a low risk (ie. nobody's going to get fired over implementing them) way.
There's another side to it, which you allude to with the give away of credit monitoring services that data breaches result in. The whole reason the data is valuable is for account takeover and identity theft because identity verification uses publicly available information (largely publicly available, or at least discoverable, even without breaches). But no one wants to put in the effort to do appropriate identity verification, and consumers don't want to be bothered to jump through stricter identity verification process hoops and delays---they'll just go to a competitor who isn't as strict.
So we could make the PII less valuable by not using it for things that attract fraudsters.
Hell, in this instance, just replacing non-EOL equipment that had known vulnerabilities would have gone a long way. We're talking routing infrastructure with implants designed years ago, still vulnerable and shuffling data internally.
Another issue is lack of education/training/awareness among developers.
A BS in CS has maybe one class on security, and then maybe employees have a yearly hour-long seminar on security to remind them to think about security. That isn't enough. And the security team and engineers that put the effort into learning more about security and privacy often aren't enough to guard against every possible problem.
Apple seems to be willing to spend money on this kinda stuff. But the reason why they do this is because it allows them to differentiate their offering from the others, with privacy being part of the "luxury package", so to speak. That is - their incentive to do so is tied to it not being the norm.
I work in internal tools development, aka platform engineering, and this is interesting:
> That's part of the problem. But companies also are unwilling to pay to do any of the things that you've described. There is no punishment or fine that is actually punitive. Protecting (short term) profit is more important than protecting users' data --- it's even more important than protecting the (long term) profit potential of a company with a good reputation.
Frankly, any company that says they're a technology or software business should be building these kinds of systems. They can grab FOSS implementations and build on top or hire people who build these kinds of systems from the ground up. There's plenty of people in platform engineering in the US who could use those jobs. There's zero excuse other than that they don't want to spend the money to protect their customers data.
After Apple argued for years that a mandatory encryption-bypassing, privacy-bypassing backdoor for the government could be used by malicious entities, and the government insisted it was all fine, don't worry, we're now seeing those mandatory encryption-bypassing, privacy-bypassing backdoors for government being used by malicious entities, and suddenly the FBI is suggesting everyone use end-to-end encrypted apps because of the fiasco that they caused.
But don't worry, as soon as this catastrophe is over we'll be back to encryption is bad, security is bad, give us an easy way to get all your data or the bad guys win.
The story is a little longer than this. A bunch of folks from academia and industry have been fighting the inclusion of wiretapping mandates within encrypted communications systems. The fight goes back to the Clipper chip. These folks made the argument that something like Salt Typhoon was inevitable if key escrow systems were mandated. It was a very difficult claim to make at the time, because there wasn’t much precedent for it - electronic espionage was barely in its infancy at the time, and the idea that our information systems might be systematically picked open by sophisticated foreign actors was just some crazy idea that ivory tower eggheads cooked up.
I have to admire those pioneers for seeing this and being right about it. I also admire them for influencing companies like Apple (in some cases by working there and designing things like iMessage, which is basically PGP for texts.) It doesn’t fix a damn thing when it comes to the traditional telecom providers, but it does mean we now have backup systems that aren’t immediately owned.
That's not exactly true. FCC 911 rules and other laws require the telcos to have access to location data and to record calls/texts for warrants. The problem is both regulatory and commercial. It is unrealistic to expect either the general public or the government to go with real privacy for mobile phones. People want law enforcement and firefighters to respond when they call 911. Most people want organized crime and other egregious crimes to be caught and prosecuted, etc.
Nonsense. I kindly informed my teenage niece of the fact that all her communications on her phone should be considered public, explained the nature of Lawful Interception, and described the tradeoffs she was opted into for the sake of Law Enforcement's convenience.
She was not amused or empathetic to their plight in the slightest. Population of at least 2 I guess.
If law enforcement actually did their jobs, this would be more understandable. I don’t know about you or others’ experiences, but when I’ve called the police to report a crime (e.g. someone casually smashing car windows at 3p in the afternoon and stealing anything that isn’t bolted down), they never show up and usually just tell me to file a police report which of course never gets actioned. Seems pretty obvious to me that weakening encryption/opsec to “let the good guys in” is total nonsense and that there are blatant ulterior motives at play. To be clear, I’m a strong proponent of good security practices and end-to-end encryption.
There's not nearly enough public information to discern whether or not this had anything to do with stored PII or lawful interception. All we know is that they geolocated subscribers.
The SS7 protocol provides the ability to determine which RNC/MMC a phone is paired with at any given time: it's fundamental to the nature of the functioning of the network. A sufficiently sophisticated adversary, with sufficient access to telephony hardware, could simply issue those protocol instructions to determine the location.
Somewhat of a tangent: does anyone have any resources on designing/implementing E2E encryption for an app where users have shared "team" data? I understand the basics of how it works when there's just one user involved, but I'm hoping to learn more about how shared data scenarios (e.g. shared E2E group chats like Facebook Messsenger) are implemented.
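Not a full answer, but the common pattern is a single per-group symmetric key, individually wrapped for each member, so the server only ever relays ciphertext. Here's a toy Python sketch of that shape. The XOR keystream is a deliberately fake placeholder, not real crypto; production systems use asymmetric key agreement (e.g. X25519) plus an AEAD, and Signal-style groups layer sender-key ratchets on top. All names are mine.

```python
import hashlib
import os

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Placeholder cipher (SHA-256 counter keystream XOR). A real system
    # would use an AEAD like AES-GCM keyed via X25519 key agreement.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def share_group_key(member_keys: dict) -> tuple:
    # One random symmetric group key, wrapped separately per member.
    # The server stores/relays only the wrapped (encrypted) copies.
    group_key = os.urandom(32)
    wrapped = {m: _keystream_xor(k, group_key) for m, k in member_keys.items()}
    return group_key, wrapped

def unwrap_group_key(member_key: bytes, wrapped_key: bytes) -> bytes:
    # Each member recovers the same group key with their own key.
    return _keystream_xor(member_key, wrapped_key)
```

Membership changes are the hard part: removing a member means generating and re-wrapping a fresh group key for everyone who remains.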
Intelligence agencies may use that data, but there are plenty of financial incentives to keep that data regardless. Mining user data is a big business.
All of these claims for serious fines, yet no indication of where the fine is to be paid. Fines means the gov't is getting the money, yet the person whose data was lost still gets nothing. Why does the person that was actually harmed get nothing while the gov't who did nothing gets everything?
This exactly. Data ought to be viewed as fissile material. That is, potentially very powerful, but extremely risky to store for long periods. Imposing severe penalties is the only way to attain this, as the current slap on the wrist and offer of ID-theft/credit monitoring is an absurd slap in the face to consumers, who are inundated with new and better scams from better-equipped scammers every day.
The current state is clearly broken and unsustainable, but good luck getting any significant penalties through legislation with a far-right government.
Corporations' motivations rarely coincide with deep, consistent systems strategy, and largely operate reactively and in a manner where individuals get favorable performance reviews for adding profitable features or saving costs.
They are appropriately motivated in this case, carriers would surely rather have no idea whatsoever about the data they are carrying. The default incentive is they'd really rather avoid being part of any compliance regimes or law enforcement actions because that sort of thing is expensive, fiddly and carries a high risk of public outcry.
If they had the option, the telecommunication companies would love to encrypt traffic and obscure it so much that they have no plausible way of figuring out what is going on. Then they can take customer money and throw their hands up in honest confusion when anyone wants them to moderate their customers' behaviour.
They don't because that would be super-illegal. The police and intelligence services demand that they snoop, log and avoid data-minimisation techniques. It is entirely a question of regulatory demand and time that these sort of breaches happen; if the US government demands the data then sooner or later the Chinese government will get a copy too. I assume that is a trade off the US government is happy to make.
> But intelligence agencies insist on having access to innocent citizens' conversations.
Intelligence agencies also stockpile software vulnerabilities that they don't report to the vendor because they want to exploit the security flaw themselves.
We'll never have a secure internet when it's being constantly and systematically undermined.
Yes, but spies are going to spy, so we should focus on getting software built to have security by design and not just keep out-sourcing to the cheapest programmers that don't even know what a sql injection is.
Currently, with proprietary software, there's an incentive for companies to not even acknowledge bugs and it costs them money to fix issues, so they often rely on security through obscurity which is not much of a solution.
Meanwhile US banks, Venmo, PayPal, etc all insist on using "real" phone numbers as verification.
Funny that Venmo won't let me use a voip number, but I signed up for Tello, activated an eSIM while abroad and was immediately able to receive an SMS and sign-up. For the high barrier cost of $5. Wow, such security. Bravo folks.
These stem from a requirement to know you as a person in some verifiable way. These are legal and regulatory requirements but the laws and requirements are there to ensure finserv can meaningfully contain criminal activity - fraud, theft, money laundering, black market, terrorism financing, etc. It turns out by far the most effective measure is simply knowing who the principals are in any transaction.
Some companies have much lower thresholds for their KYC, but end up being facilitators of crime and draw scrutiny over time by both their more regulated partners and their governments.
I’d note that the US is relatively lax in these requirements compared to Singapore, Canada, Japan, and increasingly the EU. In many jurisdictions you need to prove liveness, do photo verification, sometimes video interviews with an agent showing your documents.
> know you as a person in some verifiable way .. the laws and requirements are there to ensure .. knowing who the principals are in any transaction.
Except that person you’re responding to explains succinctly how this is security theater that accomplishes little and ultimately is just a thinly veiled tactic for harassing users / coercive data collection. And the person above that is commenting that unnecessary data collection is just an incentive for hackers.
Comments like this just feel like apologism for bad policies, at best. Does anyone really think that people need to be scrutinized because most money laundering is small transactions from individuals, or, is it still huge transactions from huge customers that banks want to protect?
A phone number is not an identity document, and you can rent a number cheaply on a black market. Also, there should be no verification for small amounts of money. We can use cash anonymously, so why can't we transfer money anonymously?
> In many jurisdictions you need to prove liveness, do photo verification, sometimes video interviews with an agent showing your documents.
When vtuber-esque deepfakes become trivial for the average person, I wonder what the next stage in this cat-and-mouse becomes. DNA-verification-USB-dongles?
You can, at the same time, verify a person's identity upon opening the account, as you mentioned with documents, and use a TOTP MFA instead of SIM-based authentication. If regulators require SIM-based authn, then it's just bad policy, which should come to no one's surprise when it comes to government regulation. Finally, KYC is for the IRS. The illusion of safety makes a good selling point, though.
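For what it's worth, the TOTP alternative mentioned above is tiny to implement: RFC 6238 is just HMAC-SHA1 over a time-step counter, with dynamic truncation from RFC 4226. A stdlib-only Python sketch (function names are my own):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 the 8-byte counter, then dynamically truncate
    # to a 31-bit integer and take the last `digits` decimal digits.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HOTP where the counter is the number of 30s steps
    # elapsed since the Unix epoch.
    t = int(for_time if for_time is not None else time.time())
    return hotp(secret, t // step, digits)
```

Unlike SMS, the shared secret never transits the phone network, so there is nothing for a SIM-swap attacker to intercept.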
My google voice number is unlikely to be stolen from me, but instead I have to use a 'real' phone number that could be compromised by handing cash to an employee at a store.
One time a company retroactively blocked VOIP numbers, which was really stupid.
This is why I like Google Fi. It is much harder to do an account takeover of a Google Fi number compared to most telcos. The attacker would have to take over the Google account, which seems to be harder to do.
A PROCESS for verifying the number isn't used for fraud, and then allowing its use. I don't know, maybe the fact that I've been a customer for YEARS, use that number, and have successfully done thousands of dollars in transactions over a platform without any abnormal issue?
Does Tello require KYC, that is, is the eSIM linked to an actual identity ?
At least in Europe (PSD2), that's the key to accepting a phone number as a 2FA method.
I bought a Tello eSIM to use for my Rabbit R1. I'm in the USA, was not required to provide any KYC, and received a (213) LA area code number. I recommend Tello so far.
Another cool thing that some companies do: refuse to deal with me because the family business account is in my dad's name, despite me knowing all the correct information to pretend to be my dad.
Like, the only reason I don't answer the phone and say "this is <Dad's name>", is because I'm honest. You'll never keep a bad guy out that already knows all the information that you ask for - he'll just lie and claim to be the business/account owner.
> For the high barrier cost of $5. Wow, such security. Bravo folks.
$5 is at least 5x the cost of a voip number. I'm not a bank, but if I'm spending money to verify you control a number, I feel better when you (or someone else) has spent $5 on the number than if it was $1 or less.
"... but if I'm spending money to verify you control a number, I feel better when you (or someone else) has spent $5 ..."
This is exactly it.
All of these auth mechanisms that tie back to "real" phone numbers and other aspects of "real identity" are not for you - they are not for your security.
These companies have a brutal, unrelenting scam/spam problem that they have no idea how to solve and so the best they can do is just throw sand in the gears.
So, when twilio (for instance) refuses to let you 2FA with anything other than tracing back to a real mobile SIM[1] (how ironic ...) it is not to help you - it is designed to slow down abusers.
[1] The "authy" workflow is still backstopped by a mobile SIM.
Also, that is clearly a workaround that took some research to do. Aka you’re probably in the top 1% of the population from a ‘figuring out workarounds’ perspective.
VoIP is so well known (and automated), even at $0.10 per number, that it would be an order of magnitude easier to abuse.
Banks are always slow and behind the times, because they are risk averse. That has pros and cons.
I still have about $15 of international calling credit on a GV number I hardly use anymore with no option of transferring or using that balance on a different platform like Google's Play store.
The problem is that VOIP numbers, from companies like Bandwidth, are frequently used to perform various frauds. So many financial services ban them because the KYC for real numbers is much better.
I have more bank and credit accounts than the average person, probably. 5 bank accounts, and 8 credits accounts I can remember as active off the top of my head.
Every single one works with GVoice, except Venmo. Chase, Cap1, Fidelity, etc. Not small players.
So while I think you make a fair enough argument for sure, it doesn't seem to be the case when nobody else does it, and makes Venmo seem like a pain in the arse.
In practice, these companies get a phone number I possess for 1-3 months on a travel SIM rather than the VOIP number I’ve steadily maintained for two decades and by which the US feds know me (because they don’t care).
Don't all financial institutions need some real identification with physical address to sign up? Phone numbers / email addresses should be for communication, not tracking.
The same level of security that shitter's checkmark introduced. All checkmark accounts are fake, and the ones without are real people, I guess?
The idea that scammers don't have digital money laying around just waiting on being spent on something is so absurdly out of touch on how everything in cyber works.
I work in security and this surprised me to see. Not that these companies got hacked, but the scope of the attack being simultaneous. Coordinated. Popping multiple companies at the same time says something about the goals the PRC has.
It risks a lot of "noise" to do it this way. Why not just bribe employees to listen in on high profile targets? Why try to hit them all and create a top level response at the Presidential level?
This feels optics-driven and political. I'm not sure what it means, but it's interesting to ponder on. Attacking infrastructure is definitely the modern "cold war" of our era.
This is a total yawn, and the norm. It looks coordinated because the team who focuses specifically on telecoms had their tools burned. Pick pretty much any sector of interest and the intelligence services of the top 50 countries all have a team dedicated to hacking it. The majority of them are successful.
Sadly even most people in security are woefully unaware of the scope and scale of these operations, even within the networks they are responsible for.
The "noise" here was not from the attacker. They don't want to get caught. But sometimes mistakes happen.
Interestingly, some of those teams dedicated to hacking are either private sector or a branch that nobody has heard of. I once interviewed for a company whose pitch to me was basically "we get indemnity to hack foreign telcos" and "we develop ways to spy that nobody has thought of". That was 20 years ago.
It probably wasn't a simultaneous attack, they probably penetrated over a long period of time. The defenders just found them all simultaneously (you find one, you go looking for the others)
> Why not just bribe employees to listen in on high profile targets?
Developing assets is complicated and difficult, attacking SS7 remotely is trivial, especially if you have multiple targets to surveil
Given the noise about Huawei and spy cranes, it would be interesting to know if the "attacks" were against any and all telecom equipment, or just Chinese gear, not that I think it would make any difference.

The daylight (heh heh!) trawling for telecom and power cables is most definitely a (ha ha!) signal aimed at western politicians.

Another one: while there are claims of North Korea taking crypto, no identifiable victim has stood up.

Western politicians are attempting to redirect the whole world's economy based on saving us from the very things that are happening just now. So it does seem more than coincidental.
I think this is the perfect time to do something like this, in the midst of a presidential transition. Regardless of the outgoing and incoming politics, things will be more chaotic. While it won't be unnoticed, it's going to be down the lists of things to deal with probably, and possibly forgotten.
Incompetence is just one dimension on odds of being caught.
You could be an incredibly competent and highly motivated crook and bad luck in the form of an intern looking at logs or a cleaning lady spotting you entering a building could take you down.
I can't confirm it because the descriptions of the hack are unclear but if more network operators say they've been hacked it is more and more likely the Chinese got in by attacking lawful intercept. This could happen in various ways: bribe or blackmail someone in law enforcement with access to a lawful intercept management system (LIMS), a supply chain attack on an LIMS vendor, hacking the authentication between networks and LIMS, etc.
If it is an LI attack the answer to which networks are compromised is: All of them that support automated LI.
That's a nasty attack because LI is designed to not be easily detectable because of worries about network operators knowing who is being tapped.
More likely they got access and then snooped any of the many insecure protocols used to manage network devices.
Anyone who has ever worked in networking will understand what I mean.
The networking industry is comically bad. They use SSH but never, ever verify host keys; they use agent forwarding; they use protocols like RADIUS or SNMP, which are completely insecure once you pop a single box, with an almost-always-global shared secret. Likewise the other protocols.
Do they use secure boot in a meaningful way? So they verify the file system? I have news for you if you think yes.
It’s kind of a joke how bad the situation is.
Twenty years ago someone discovered you could inject forged TCP resets to blow up BGP connections. What did the networking industry do? Did they institute BGP over TLS? They did not. Instead they added TCP MD5 hashing (RFC 2385, from 1999: https://datatracker.ietf.org/doc/html/rfc2385) using a shared secret, because no one in networking could dream of using PKI. Still true today, if deployed at all, which it usually isn't. And the replacement didn't even arrive until 2010!
If you want to understand the networking industry, consider only this: instead of acknowledging how dumb the situation is and just using TLS, we got this, https://datatracker.ietf.org/doc/html/rfc5925, which is almost as dumb as 2385 and just as bad in actual deployment, because they keep using the same deployment model (the shared tuple). Not all vendors that "support" 5925 even support the whole RFC.
As an aside this situation is well known. People have talked about it for literal decades. The vendors have shown little to no interest in making security better except point fixes for the kind of dumb shit they get caught on. Very few security researchers look at networking gear or only look at low end junk that doesn’t really matter.
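To make the RFC 2385 point concrete: the whole mechanism is one MD5 pass over the segment plus a static shared key. A simplified Python sketch (options excluded and checksum assumed already zeroed in the header bytes passed in; function name is mine):

```python
import hashlib
import socket
import struct

def tcp_md5_digest(src_ip: str, dst_ip: str, tcp_header: bytes,
                   payload: bytes, key: bytes) -> bytes:
    # RFC 2385: MD5 over the IPv4 pseudo-header, the TCP header
    # (checksum zeroed, options excluded in this sketch), the segment
    # data, and finally the connection's shared key appended at the end.
    seg_len = len(tcp_header) + len(payload)
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 6, seg_len))  # zero, proto=6 (TCP), len
    return hashlib.md5(pseudo + tcp_header + payload + key).digest()
```

Note what's absent: no per-connection key derivation, no rotation, no way to tell which box leaked the key. Everything hangs off that one shared secret.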
They aren't saying that more have been hacked, they are saying that more have been discovered related to that hack. Any adversary at this level would be monitoring the news, and would take appropriate actions (for gain) or roll up the network rather than allow reverse engineering of IOCs.
More than likely this was not an LI based attack, but rather they don't know for sure how they got in. Nearly all of the guidance is standard cybersecurity best practices for monitoring and visibility, and lowering attack surface with few exceptions (in the CISA guidance).
The major changes appear to be the requirements to no longer use TFTP, and the referral to the manufacturer for source of truth hashes (which have not necessarily been provided in the past). A firmware based attack for egress/ingress seems very likely.
For reference, TFTP servers send out the ISP configuration for endpoints in their network, the modems (customers), and that includes firmware images (which have no AAA). Additionally, as far as I know, the hardware involved lacks the ability to properly audit changes to these devices (by design), and TR-47 is rarely used appropriately; the related encryption is also required by law to be backward compatible with known-broken encryption. There was a good conference talk on this a few years ago, at Cyphercon 6.
The particular emphasis on TLS 1.3 (while now standard practice) suggests that connections may be being downgraded, and that the hardware/firmware at the CPE bridge may be transparently MITMing connections to public sites over earlier protocol versions (a common capability in such equipment, if this is what's happening).
The emphasis on using specific DH groups, may point to breaks in the key exchange of groups not known to be broken (but are broken), which may or may not be a factor as well.
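The downgrade concern at least has a cheap client-side mitigation: pin the protocol floor at TLS 1.3 so a transparent middlebox can't quietly negotiate an older version. A Python sketch (function name is mine):

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    # Refuse anything below TLS 1.3: a middlebox attempting a
    # transparent downgrade causes a loud handshake failure instead
    # of silently landing the session on an older protocol version.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    return ctx
```

TLS 1.3 also removed the weak static key-exchange groups entirely, which addresses the DH-group worry for any connection that actually negotiates it.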
If the adversary can control, and insert malicious code into traffic on-the-fly targeting sensitive individuals who have access already, they can easily use information that passes through to break into highly sensitive systems.
The alternative theory, while fringe, is that maybe they've come up with a way to break Feistel networks (in terms of cryptographic breaks).

A while back the NSA said they had a breakthrough in cryptography. If that breakthrough was related to attacks on Feistel network structures (which a good deal of deployed cryptography is built on), that might explain another way in (although this is arguably wild speculation at this point). Nearly every computer has a backdoor co-processor built in, in the form of TrustZone, the Management Engine, or AMD's PSP. It's largely only secured by crypto, without proper audit trails.

It presents low-hanging, concentrated fruit in almost every computation platform on earth, and, by design, it's largely not auditable or visible. Food for thought.
A quantum computer breaking a single signing key for said systems would act like a golden-key backdoor to everything. All the eggs in one basket. Not out of the realm of possibility at the nation-state level. No visibility means no perception, no ability to react, and no way to isolate the issues except indirectly.
You don’t need to bring up quantum computers. Almost all protocols in the networking industry are basically running with a shared secret that is service global. Pop any box at all and you have the world for any traffic you can capture.
The problem with the shared secret model isn’t that it can be stolen, it’s that it is globally shared within a provider network. You can’t root it in a hardware device. You can’t do forensics to see from what node it was stolen.
We are talking about an industry where they still connect console servers, often to serial terminal aggregators that are on the internal network alongside the management Ethernet ports, which have dumb guessable passwords, often the same one on every box, that all their bottom tier overseas contractors know.
Wasn’t it only a couple of years ago that the intelligence community was arguing for backdoor mandates, and now the FBI recommends Signal for safe chats? Such a farce. Hopefully the new admin goes through their emails and text messages over the last 4 years. Privacy for me, not for thee, I suppose…
"...implies that the attack wasn't against the broadband providers directly, but against one of the intermediary companies that sit between the government CALEA requests and the broadband providers"
Yup. The attack hit the CALEA backdoor via a wiretapping outsourcing company.
Which one?
Thank you for posting this. The search term "calea solutions"[1] also brings up some relevant material, such as networking companies advising how to set up interception, and an old note from the DoJ[2] grumbling about low adoption in 2004 and interesting tidbits about how the government sponsored the costs for its implementation.
Where does "...implies that the attack wasn't against the broadband providers directly, but against one of the intermediary companies that sit between the government CALEA requests and the broadband providers" come from? From Schneier?
Because if you go to the actual reporting, in the WSJ for example, it doesn't imply that the attack was against TTP providers. Also, TTP providers are optional.
Nothing contradictory (in philosophy), really: they said American law enforcement should be able to break encryption when they have warrants and they now say Chinese spies should not be able to.
This is obviously technically impossible, but the desire for that end state makes a ton of sense from the IC’s perspective.
The FBI has a weird mandate in that it's both counter-espionage and counter-crime, and those are two quite different missions. Unsurprising to know that counter-espionage want great encryption, and counter-crime want backdoorable encryption.
You want the new anti democratic/authoritarian administration to look through the FBIs emails to find something to frame them for? You sure that's wise? Even if they don't respect privacy like they should?
It seems like every few years law enforcement puts out statements about how good encryption is for criminals, and then they have to walk it back as data breaches happen.
It doesn't take much to read between the lines on those two statements. Feds have access to Signal if they want it, but are using it as filter paper against most attacks against the public etc.
The "feds" do not have access to Signal, except by CNE attacks against individual phones. Signal's security does not rely on you trusting the Signal organization.
While the statements are contradictory, I wouldn't take it as a sign of some vast conspiracy. I would just take it as a sign that they are stuck needing to give out some kind of guidance to prevent foreign access. While they are a domestic police service, they are also a counterintelligence service and thus need to provide some guidance there.
The US military has, at least privately, switched away from any Signal usage within the past few months – it’s undoubtedly compromised in some way. If the FBI is recommending it, it’s for exploitative purposes and a false premise of safety.
This is why we need device-to-device encryption on top of all the security that a telco has. There is no excuse for any connection I make being unencrypted at any point except at the receiver.
While you aren't wrong about needing end-to-end encryption, it would not have helped here. What China was after was metadata (who is communicating with whom), which is a completely different problem to solve.
Well, obviously there is a good excuse: users do not want to, and generally cannot, deal with key management. Even dealing with phone numbers is a hassle, and now you want to add a public key on top? One which cannot easily be written down, and is presumably tied to the handset, so if you lose and replace your phone you stop being able to receive all phone calls until you somehow manually distribute your new key to everyone else?
End to end encryption has proven to be unworkable in every context it's been tried. There are no end-to-end encrypted systems in the world today that have any use, and in fact the term has been repurposed by the tech industry to mean pseudo encrypted, where the encryption is done using software that is also controlled by the adversary, making it meaningless. But as nobody was doing real end-to-end encryption anyway, the engineers behind that decision can perhaps be forgiven for it.
> pseudo encrypted, where the encryption is done using software that is also controlled by the adversary
I'd say there's a very real use for this, though, which is that with mobile applications it's more complicated to compromise a software deployment chain than it is to compromise a server-side system. If you're a state-level attacker and you want to coordinate a deployment of listening capabilities on Signal, say, you need to persistently compromise Signal's software supply chain and/or build systems, and do so in advance of other attacks you might want to coordinate with, because you need to wait for an entire App Store review cycle for your code to propagate to devices. The moment someone notices (say, a security researcher MITM'ing themselves) that traffic doesn't match the Signal protocol, your existence has been revealed. Whereas for the telcos in question, it seems it was possible to just compromise a server-side system to gain persistent listening capabilities, which could happen silently.
Now, this can and should be a lot better, if, say, the Signal app was built not by Signal but by Apple and Google themselves, on build servers that provably create and release reproducible builds straight from a GitHub commit. It would remove the ability for Signal to be compromised in a non-community-auditable way. But even without this, it's a nontrivial amount of defense-in-depth.
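The "community-auditable" part boils down to byte-identical artifacts: anyone can rebuild from the tagged commit and compare hashes against what the store shipped. A minimal sketch under that assumption (file paths are placeholders, and real reproducible-build verification also has to normalize signing metadata, which this ignores):

```python
import hashlib

def artifact_digest(path: str) -> str:
    # Stream the file so arbitrarily large artifacts hash in constant memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_reproducible(local_build_path: str, published_build_path: str) -> bool:
    # A build counts as "reproducible" here only if the two artifacts
    # are byte-identical; any silent code injection changes the digest.
    return artifact_digest(local_build_path) == artifact_digest(published_build_path)
```

The value is that verification is cheap and independent: any third party who can rebuild from source can catch a tampered release, without trusting the vendor's build servers.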
The US Treasury just announced they had an incursion by Chinese threat actors. Their "cyber security vendor" had a remote access key compromised, enabling the attackers access to endpoints within Treasury.
That's part of the problem. But companies also are unwilling to pay to do any of the things that you've described. There is no punishment or fine that is actually punitive. Protecting (short term) profit is more important than protecting users' data --- it's even more important than protecting the (long term) profit potential of a company with a good reputation.
Until the data breaches lead to serious $$$ impact for the company, the impact of these breaches will simply be waved off and pushed down to users. ("Sorry, we didn't protect your stuff at all. But, here's some credit monitoring!") Even in the profession of software development and engineering, very few people actually take data security seriously. There's lots of talk in the industry, but also lots of pisspoor practices when it comes to actually implementing the tech in a business.
In principle the insurance company then dictates security requirements back to the company in order to keep the premiums manageable.
However, in practice the insurance company has no deep understanding of the company and so the security requirements are blunt and ineffective at preventing breaches. They are very effective at covering the asses of the decision makers though... "we tried: look we implemented all these policies and bought this security software and installed it on our machines! Nobody could possibly have prevented such an advanced attack that bypassed all these precautions!"
Another problem is that often the IT at large enterprises is functionally incompetent. Even when the individual people are smart and incentivised (which is no guarantee) the entire department is steeped in legacy ways of doing things and caught between petty power struggles of executives. You can't fix that with financial incentives because most of these companies would go bankrupt before figuring out how to change.
I don't see things improving unless someone spoon-feeds these companies solutions to these problems in a low risk (ie. nobody's going to get fired over implementing them) way.
So we could make the PII less valuable by not using it for things that attract fraudsters.
A BS in CS has maybe one class on security, and then maybe employees have a yearly hour-long seminar on security to remind them to think about security. That isn't enough. And the security team and engineers that put the effort into learning more about security and privacy often aren't enough to guard against every possible problem.
> That's part of the problem. But companies also are unwilling to pay to do any of the things that you've described. There is no punishment or fine that is actually punitive. Protecting (short term) profit is more important than protecting users' data --- it's even more important than protecting the (long term) profit potential of a company with a good reputation.
Frankly, any company that says they're a technology or software business should be building these kinds of systems. They can grab FOSS implementations and build on top, or hire people who build these kinds of systems from the ground up. There are plenty of people in platform engineering in the US who could use those jobs. There's zero excuse other than that they don't want to spend the money to protect their customers' data.
But don't worry, as soon as this catastrophe is over we'll be back to encryption is bad, security is bad, give us an easy way to get all your data or the bad guys win.
I have to admire those pioneers for seeing this and being right about it. I also admire them for influencing companies like Apple (in some cases by working there and designing things like iMessage, which is basically PGP for texts.) It doesn’t fix a damn thing when it comes to the traditional telecom providers, but it does mean we now have backup systems that aren’t immediately owned.
She was not amused or empathetic to their plight in the slightest. Population of at least 2 I guess.
The SS7 protocol provides the ability to determine which RNC/MSC a phone is paired with at any given time: it's fundamental to how the network functions. A sufficiently sophisticated adversary, with sufficient access to telephony hardware, could simply issue those protocol instructions to determine the location.
Somewhat of a tangent: does anyone have any resources on designing/implementing E2E encryption for an app where users have shared "team" data? I understand the basics of how it works when there's just one user involved, but I'm hoping to learn more about how shared data scenarios (e.g. shared E2E group chats like Facebook Messsenger) are implemented.
It should give you some ideas on how it's done.
[1] https://nfil.dev/coding/encryption/python/double-ratchet-exa...
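For the group case, the common pattern (used by Signal's group messaging) is "sender keys": each member generates a symmetric fan-out key, distributes it once to every other member over an existing pairwise E2E channel, and then encrypts each group message only once instead of once per recipient. A toy Python sketch of that structure; the stream cipher here is deliberately fake stand-in code, and a real implementation would use a proper AEAD plus ratcheting:

```python
import hashlib, secrets

def toy_stream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # TOY cipher (SHA-256 in counter mode) purely to show the structure;
    # a real app would use AES-GCM or XChaCha20-Poly1305, plus a ratchet
    # for forward secrecy. XOR makes encrypt and decrypt the same function.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, stream))

class GroupMember:
    def __init__(self, name: str):
        self.name = name
        self.sender_key = secrets.token_bytes(32)  # this member's fan-out key
        self.known_keys = {}                       # peer name -> peer's sender key

    def share_key_with(self, other: "GroupMember") -> None:
        # Stand-in for delivering the sender key over an existing pairwise
        # E2E channel (X3DH + Double Ratchet in real protocols). This costs
        # one delivery per member, once, instead of one per message.
        other.known_keys[self.name] = self.sender_key

    def encrypt(self, plaintext: bytes):
        nonce = secrets.token_bytes(12)
        return self.name, nonce, toy_stream_xor(self.sender_key, nonce, plaintext)

    def decrypt(self, sender: str, nonce: bytes, ciphertext: bytes) -> bytes:
        return toy_stream_xor(self.known_keys[sender], nonce, ciphertext)
```

The trade-off: fan-out cost drops from O(N) encryptions per message to O(1), but removing a member requires every remaining member to rotate and redistribute their sender key.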
Leak or lose a customer's location tracking data? That'll be $10,000 per data point per customer please.
It would convert this stuff from an asset into a liability.
The current state is clearly broken and unsustainable, but good luck getting any significant penalties through legislation with a far-right government.
Same principle as fines for hard-to-localize pollution.
If they had the option, the telecommunication companies would love to encrypt traffic and obscure it so much that they have no plausible way of figuring out what is going on. Then they can take customer money and throw their hands up in honest confusion when anyone wants them to moderate their customers' behaviour.
They don't because that would be super-illegal. The police and intelligence services demand that they snoop, log, and avoid data-minimisation techniques. Given that regulatory demand, it is entirely a question of time before these sorts of breaches happen; if the US government demands the data, then sooner or later the Chinese government will get a copy too. I assume that is a trade-off the US government is happy to make.
Who put the backdoor there? The US government did.
Intelligence agencies also stockpile software vulnerabilities that they don't report to the vendor because they want to exploit the security flaw themselves.
We'll never have a secure internet when it's being constantly and systematically undermined.
Currently, with proprietary software, there's an incentive for companies to not even acknowledge bugs, and it costs them money to fix issues, so they often rely on security through obscurity, which is not much of a solution.
Funny that Venmo won't let me use a VoIP number, but I signed up for Tello, activated an eSIM while abroad, and was immediately able to receive an SMS and sign up. For the high barrier cost of $5. Wow, such security. Bravo, folks.
Some companies have much lower thresholds for their KYC, but end up being facilitators of crime and draw scrutiny over time by both their more regulated partners and their governments.
I'd note that the US is relatively lax in these requirements compared to Singapore, Canada, Japan, and increasingly the EU. In many jurisdictions you need to prove liveness and do photo verification, sometimes even video interviews with an agent while showing your documents.
Except that the person you're responding to explains succinctly how this is security theater that accomplishes little and is ultimately just a thinly veiled tactic for harassing users and coercing data collection. And the person above that is commenting that unnecessary data collection is just an incentive for hackers.
Comments like this just feel like apologism for bad policies, at best. Does anyone really think that people need to be scrutinized because most money laundering is small transactions from individuals, or, is it still huge transactions from huge customers that banks want to protect?
When vtuber-esque deepfakes become trivial for the average person, I wonder what the next stage in this cat-and-mouse becomes. DNA-verification USB dongles?
One time a company retroactively blocked VOIP numbers, which was really stupid.
I'd say that with Google, chances are that they just stop offering the service.
A PROCESS for verifying the number isn't used for fraud, and then allowing its use. I don't know, maybe the fact that I've been a customer for YEARS, use that number, and have successfully done thousands of dollars in transactions over the platform without any abnormal issue?
All of my 2FA Mules[1] are USMobile SIMs attached to pseudonyms which were created out of thin air.
It helps a lot to run your own mail servers and have a few pseudonym domains that are used for only these purposes.
[1] https://kozubik.com/items/2famule/
Like, the only reason I don't answer the phone and say "this is <Dad's name>", is because I'm honest. You'll never keep a bad guy out that already knows all the information that you ask for - he'll just lie and claim to be the business/account owner.
> he'll just lie and claim to be the business/account owner.
He can lie, but he doesn't have another person's passport to prove his lies.
$5 is at least 5x the cost of a voip number. I'm not a bank, but if I'm spending money to verify you control a number, I feel better when you (or someone else) has spent $5 on the number than if it was $1 or less.
This is exactly it.
All of these auth mechanisms that tie back to "real" phone numbers and other aspects of "real identity" are not for you - they are not for your security.
These companies have a brutal, unrelenting scam/spam problem that they have no idea how to solve and so the best they can do is just throw sand in the gears.
So, when twilio (for instance) refuses to let you 2FA with anything other than tracing back to a real mobile SIM[1] (how ironic ...) it is not to help you - it is designed to slow down abusers.
[1] The "authy" workflow is still backstopped by a mobile SIM.
VoIP numbers are so well known (and so easy to automate) that, even at $0.10 apiece, abuse would be an order of magnitude easier.
Banks are always slow and behind the times because they are risk averse. That has pros and cons.
Every single one works with GVoice, except Venmo. Chase, Cap1, Fidelity, etc. Not small players.
So while I think you make a fair enough argument for sure, it doesn't seem to be the case when nobody else does it, and makes Venmo seem like a pain in the arse.
The idea that scammers don't have digital money lying around just waiting to be spent on something is so absurdly out of touch with how everything in cyber works.
Corporations "eat" money.
Entities that can feed a corporation, are treated as peers, i.e. "people".
Thus, on shitter, if you can pay, you are a person (and get a blue checkmark).
It risks a lot of "noise" to do it this way. Why not just bribe employees to listen in on high-profile targets? Why try to hit them all and trigger a top-level response at the Presidential level?
This feels optics-driven and political. I'm not sure what it means, but it's interesting to ponder on. Attacking infrastructure is definitely the modern "cold war" of our era.
Sadly even most people in security are woefully unaware of the scope and scale of these operations, even within the networks they are responsible for.
The "noise" here was not from the attacker. They don't want to get caught. But sometimes mistakes happen.
> Why not just bribe employees to listen in on high profile targets?
Developing assets is complicated and difficult, attacking SS7 remotely is trivial, especially if you have multiple targets to surveil
There's a huge selection bias factored into what attacks make the news.
You could be an incredibly competent and highly motivated crook and bad luck in the form of an intern looking at logs or a cleaning lady spotting you entering a building could take you down.
If it is an LI (lawful intercept) attack, the answer to which networks are compromised is: all of them that support automated LI.
That's a nasty attack, because LI is designed to not be easily detectable, out of worry that network operators would otherwise know who is being tapped.
Anyone who has ever worked in networking will understand what I mean.
The networking industry is comically bad. They use ssh but never verify host keys, use agent forwarding, and run protocols like RADIUS or SNMP, which are completely insecure once you pop a single box and grab the almost-always-global shared secret. Likewise the other protocols.
Do they use secure boot in a meaningful way? Do they verify the file system? I have news for you if you think yes.
It’s kind of a joke how bad the situation is.
Twenty years ago someone discovered you could inject forged TCP resets to blow up BGP sessions. What did the networking industry do? Did they institute BGP over TLS? They did not. Instead they fell back on TCP MD5 hashing (RFC 2385, from 1998: https://datatracker.ietf.org/doc/html/rfc2385) using a shared secret, because no one in networking could dream of using PKI. Still true today, if deployed at all, which it usually isn't. And the replacement didn't even arrive until 2010!
If you want to understand the networking industry, consider only this: instead of acknowledging how dumb the situation is and just using TLS, we got this - https://datatracker.ietf.org/doc/html/rfc5925 - which is almost as dumb as 2385 and just as bad in actual deployment, because everyone keeps using the same deployment model (the shared tuple). Not all vendors that "support" 5925 support the whole RFC.
As an aside this situation is well known. People have talked about it for literal decades. The vendors have shown little to no interest in making security better except point fixes for the kind of dumb shit they get caught on. Very few security researchers look at networking gear or only look at low end junk that doesn’t really matter.
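To make the RFC 2385 complaint concrete: the option is just an MD5 over the segment with the cleartext shared password appended at the end. No negotiation, no rotation, no per-node identity. A sketch of the digest computation, assuming IPv4 and a 20-byte TCP header with no options (the real RFC also covers options):

```python
import hashlib, socket, struct

def tcp_md5_digest(src_ip: str, dst_ip: str, tcp_header: bytes,
                   payload: bytes, password: bytes) -> bytes:
    # RFC 2385 section 2.0: MD5 over (1) the TCP pseudo-header,
    # (2) the TCP header with the checksum zeroed (options excluded),
    # (3) the segment data, and (4) the connection's cleartext password.
    seg_len = len(tcp_header) + len(payload)
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack(">BBH", 0, 6, seg_len))   # zero, proto=TCP, length
    zeroed = tcp_header[:16] + b"\x00\x00" + tcp_header[18:20]  # blank checksum
    return hashlib.md5(pseudo + zeroed + payload + password).digest()
```

Note what's absent: anything binding the digest to a specific device or time. Anyone holding the one provider-wide secret can forge valid segments for any session, which is exactly the complaint about the shared-secret model above.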
They aren't saying that more have been hacked, they are saying that more have been discovered related to that hack. Any adversary at this level would be monitoring the news, and would take appropriate actions (for gain) or roll up the network rather than allow reverse engineering of IOCs.
More than likely this was not an LI-based attack; rather, they don't know for sure how the attackers got in. Nearly all of the guidance is standard cybersecurity best practice for monitoring and visibility, and for lowering attack surface, with few exceptions (in the CISA guidance).
The major changes appear to be the requirement to no longer use TFTP, and the referral to the manufacturer for source-of-truth hashes (which have not necessarily been provided in the past). A firmware-based attack for egress/ingress seems very likely.
For reference, TFTP servers are what send out the ISP configuration for endpoints in their network, i.e. the customers' modems, and that includes firmware images (which have no AAA). Additionally, as far as I know, the hardware involved lacks the ability to properly audit changes to these devices (by design), TR-47 is rarely used appropriately, and the related encryption is required by law to be backward compatible with known-broken encryption. There was a good conference talk on this a few years ago, at Cyphercon 6.
https://www.youtube.com/watch?v=_hk2DsCWGXs
The particular emphasis on TLS 1.3 (while now standard practice) suggests that connections may be being downgraded, and that the hardware/firmware at the CPE bridge may be transparently MITM'ing connections to public sites over earlier protocol versions, if that is the case (it's a commonly needed capability).
The emphasis on using specific DH groups may point to breaks in key exchanges using groups not publicly known to be broken (but which are), which may or may not be a factor as well.
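On the client side at least, the downgrade concern is addressable in a few lines: pin the minimum protocol version so an on-path box can't quietly negotiate you down. A sketch using Python's stdlib, assuming an OpenSSL 1.1.1+ build underneath (required for TLS 1.3):

```python
import ssl

def strict_tls13_context() -> ssl.SSLContext:
    # Pin the floor of the negotiation: a middlebox can tamper with the
    # handshake, but the connection then fails loudly instead of silently
    # falling back to TLS 1.2 or earlier with weaker parameters.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    return ctx
```

`create_default_context()` also keeps certificate verification and hostname checking on, so the context refuses both downgrades and untrusted certificates.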
If the adversary can control, and insert malicious code into traffic on-the-fly targeting sensitive individuals who have access already, they can easily use information that passes through to break into highly sensitive systems.
The alternative theory, while fringe, is that maybe they've come up with a way to break Feistel networks (in terms of cryptographic breaks).
A while back the NSA said they had a breakthrough in cryptography. If that breakthrough was related to attacks on Feistel network structures (which many widely deployed block ciphers are built on), that might explain another way in (although this is arguably wild speculation at this point). Nearly every computer has a backdoor co-processor built in, in the form of TrustZone, Management Engine, or AMD's PSP, and it's largely only secured by crypto, without proper audit trails.
It presents a low-hanging, concentrated fruit in almost every computation platform on earth, and by design it's largely not auditable or visible. Food for thought.
A quantum computer breaking a single signing key for those systems would act like a golden-key backdoor to everything. All the eggs in one basket. Not out of the realm of possibility at the nation-state level. No visibility means no perception, no ability to react, and no way to isolate the issues except indirectly.
The problem with the shared secret model isn’t that it can be stolen, it’s that it is globally shared within a provider network. You can’t root it in a hardware device. You can’t do forensics to see from what node it was stolen.
We are talking about an industry where they still connect console servers, often to serial terminal aggregators that are on the internal network alongside the management Ethernet ports, which have dumb guessable passwords, often the same one on every box, that all their bottom tier overseas contractors know.
It’s just sad.
PRC Targeting of Commercial Telecommunications Infrastructure
https://news.ycombinator.com/item?id=42132014
AT&T, Verizon reportedly hacked to target US govt wiretapping platform
https://news.ycombinator.com/item?id=41766610
Yup. The attack hit the CALEA backdoor via a wiretapping outsourcing company. Which one?
* NEX-TECH: https://www.nex-tech.com/carrier/calea/
* Subsentio: https://www.subsentio.com/solutions/platforms-technologies/
* Sy-Tech: https://www.sytechcorp.com/calea-lawful-intercept
Who else is in that business? There aren't that many wiretapping outsourcing companies.
Verisign used to be in this business but apparently no longer is.
[1] https://www.google.com/search?client=firefox-b-d&q=calea+sol...
[2] https://oig.justice.gov/reports/FBI/a0419/findings.htm
Is it a great idea to give all that info to India as well?
> This is obviously technically impossible, but the desire for that end state makes a ton of sense from the IC’s perspective.
Secrets fail unsafe. Maybe an alternative doesn't.