We need a big budget cut in the "homeland security" area. All this interception is not paying off. The biggest "terrorist event" in the US since 2001 was the guy who shot up a gay nightclub in Orlando, FL in 2016. That was a solo nutcase; there was no planning chatter to intercept. The Boston Marathon bombing was two brothers. The San Bernardino shooting was a husband and wife.
What's discouraging terrorism is the US's overreaction outside the US. It's become very clear to terrorist organizations that if they attack the US, the US is going to hit back, even if it's insanely expensive and causes collateral damage. The people in charge, and many people around them, end up dead.
Remember ISIS, the Islamic State? ISIS is down to 1.5 square miles, surrounded, and everybody but the most fanatical fighters is surrendering. The holdouts have days to live.
I completely agree with you, but the counter argument is the only incidents that are getting through are the ones that are solo because the more complicated plots are getting intercepted and disrupted.
> I completely agree with you, but the counter argument is the only incidents that are getting through are the ones that are solo because the more complicated plots are getting intercepted and disrupted.
The problem with this argument is that it's unfalsifiable, so it can't justify continued spending. We need to spend $450B/year on bear-repelling rocks because we currently pay for the rocks and there are no bears. And if any bears do appear then we obviously didn't have enough bear-repelling rocks and we need to start spending $900B/year.
If there is a real question as to whether the ~0 bears is a result of the rocks, it's time to cut the bear-repelling rock budget in half and see how many bears there are next year. If it's still ~0 then it didn't need to be as high as it was and it may still be too high.
That's because people have hands and throats, weapons and weaknesses. You simply can't stop a killer operating at an animal level unless the attack is interrupted while in progress. Stopping organized killing beforehand is, however, quite feasible, and society does have some responsibilities there.
So what would things look like if that homeland security spending was useful and needed? How do we know it's okay to cut? Cut the programs and see if people start getting blown up?
Funny how the word 'Enterprise' picks up more and more negative connotations in the modern software world. These days, 'enterprise' means an outdated, inflexible, and intentionally flawed monster of a technology.
Sure, ~20 years ago Sun Microsystems used to sell some "Ultra Enterprise" servers which offered nice reliability features like redundant power supplies and a backplane and slot setup where you could install several CPU/memory boards or I/O boards.
In comparison to some of their other hardware, these servers were more suited to organizations with more demanding needs like minimizing downtime or having lots of compute power or configuration flexibility.
But of course people quickly realized that a key characteristic of actual enterprise computing is large budgets, so it almost immediately turned into a game of labeling things with the word "enterprise" in hopes of vacuuming up as much of that money as possible.
I think it’s an unfair characterisation. Let’s go with early-2000s “Enterprise” stuff: CORBA and SOAP specifically.
There are these large corporations with a significant investment in their existing infrastructure and systems - and now they all need to make them interop. The mindset is “how do we make our CORBA ERP communicate with their Java CRM without needing to make any changes to either of them?”. Hence SOAP: it packages existing method-call semantics into an HTTP message that will cross a firewall - not even the IT dept needs to get involved to change firewall rules. And they hammered out a working spec within a couple of years. That’s impressive considering the slow-moving nature of large, risk-averse enterprises. We now know that REST-is-Best, but it took the industry around 10 years to figure that out, and another 5 years for the tooling and ecosystem to catch up. SOAP was a quick-fix that was needed immediately.
So I’d recharacterise “Enterprise software” as “fits into your existing system and does what you need it to, right now” - and its M.C. Escher-inspired architecture is a consequence of needing to support and fit into whatever systems were prevalent when the project was started.
It’s not Enterprise software I have the most problems with for being rigid and inflexible - it’s cutting-edge software. I was working with Neo4j in 2016 and had security issues because it didn’t get any built-in security support until last year. I had to change what I was doing to accommodate it, instead of vice versa.
Don't forget the enterprise processes (in software, for example, that would be Agile/Scrum/Lean/Six Sigma/etc.) and the enterprise people deformed by them. Archaeologically speaking, it is a whole cultural layer :)
> The various suggestions for creating fixed/static Diffie Hellman keys raise interesting possibilities. We would like to understand these ideas better at a technical level and are initiating research into this potential solution.
The core argument made by BITS is that they need a way to log TLS traffic such that it can be decrypted later, in order to provide data retention in line with regulations. While this could be done by logging all ephemeral keys generated by the servers, BITS argues that this isn’t practical due to their use of dedicated packet logging hardware that is key-ignorant. Instead they want to use non-forward-secret TLS so they can decrypt past messages easily. Their beef with TLS 1.3 is that it removes all non-FS key exchange methods, and further that explicitly obsoleting TLS 1.2 as a standard pushes them to adopt 1.3 in an enterprise environment (or risk current/future regulatory scrutiny over their use of an obsoleted standard). Hence why they want to develop a competing, active standard with non-FS key exchange.
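For anyone unclear on why a static key exchange enables exactly this kind of after-the-fact decryption, here is a toy sketch in Python. It uses deliberately tiny finite-field Diffie-Hellman parameters (nothing like real TLS group sizes, purely illustrative):

```python
import secrets

# Toy finite-field Diffie-Hellman, purely to illustrate why a *static*
# server key defeats forward secrecy. Real TLS uses 2048-bit+ groups or
# elliptic curves; the tiny prime here is for illustration only.
P = 0xFFFFFFFB  # 2**32 - 5, a toy prime
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Non-forward-secret deployment: the server uses one long-lived key.
static_priv, static_pub = keypair()

# Two independent client sessions against that server.
c1_priv, c1_pub = keypair()
c2_priv, c2_pub = keypair()
session1 = pow(static_pub, c1_priv, P)  # secret as client 1 computes it
session2 = pow(static_pub, c2_priv, P)  # secret as client 2 computes it

# A passive middlebox archives only ciphertext plus the public handshake
# values (c1_pub, c2_pub). Years later, holding the one escrowed static
# private key, it recovers every session secret:
assert pow(c1_pub, static_priv, P) == session1
assert pow(c2_pub, static_priv, P) == session2
```

With ephemeral keys - a fresh `keypair()` per session, private half deleted afterwards - there is nothing to escrow, and the archived traffic stays opaque. That is the whole dispute in miniature.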
I've read Andrew Kennedy's email. This line hits the point for me. His argument is reasonable: sometimes it's impractical, hard, costly, or all of the above to upgrade all the systems to meet both regulatory compliance and the newer, stricter, safer security standards.
> It is vital to financial institutions and to their customers and regulators that these institutions be able to maintain both security and regulatory compliance during and after the transition from TLS 1.2 to TLS 1.3.
One example of that is NIST's recommendations on password policies. Most of the time the regulatory mandates are outdated and hard to bring up to speed; in the meantime, as a financial institution you simply cannot have your IT systems out of compliance, even if that means a less secure practice.
> as a financial institution you simply cannot have your IT system incompliant
This just isn’t true, or rather “compliance” tends to be quite fuzzy.
Regulators generally expect you to follow recommendations from places like NIST. But it’s not a hard requirement; you just need to explain why deviating is better.
Unfortunately most financial institutions trip up at the “explain why it’s better” bit. Either because they aren’t competent enough, or (more likely) can’t be bothered.
I'm not sure what you are getting at with the NIST example - their recommendations for passwords are pretty reasonable. Maybe their older ones weren't, but their newer guidelines recommend against outdated ideas such as expiring passwords. (https://pages.nist.gov/800-63-FAQ/#q-b5)
Forward secrecy is always useful for two endpoints that want to have a secure exchange of messages. It's a core component of secure transport these days.
It's not "useful" if your goal is to intercept and decrypt messages that are supposed to be secure, which is what both regulated entities and baddies want to do.
If you don't require forward secrecy you introduce a weakness. The protocol won't distinguish between whether that weakness is being exploited by regulated entities or baddies.
You don't need to weaken TLS in order to do what the regulated entities want to do - you just need to do the retention on the endpoints. The issue isn't that they can't do that, it's that they don't want to do that, probably for cost or convenience reasons. Those aren't reasons to weaken TLS for everyone who actually wants secure comms.
I disagree; it still protects the information they have in flight.
If they're actually recording the entirety of every TCP stream that comes into the datacenter, how many sets of credentials do you think are stored in that system? And right now, they're all encrypted with a single key or a small number of keys, which must be available to the system that is storing and parsing this data.
Also, given the breaches that have happened, I keep waiting for there to be a set of regulations from the other side requiring adequate protection and deletion of data. He seems entirely unconcerned with that aspect.
This is a remarkable story. Fortunately, this ETSI-backed "ETS" standard appears to have just about zero uptake or internet presence, let alone vendor acceptance. So although this is fairly outrageous based on the EFF article, it doesn't look like something that's a big threat to TLS at this point.
Either client or server can break secrecy. Server compromise isn't a threat model the client can defend against. For example, the server could simply forward a copy of the whole communication in cleartext to someone, and the client can't know this.
In this case, the server is using a predictable number instead of a random one for part of the protocol. Possibly a client could detect this by doing multiple transactions and seeing if a number gets reused, but that seems outside the scope of TLS.
The expectation is that the encrypted link is not decryptable by a third party. If that isn't always true in the face of an adversary then claims of forward secrecy for TLS 1.3 are false.
Then this is the only way you can have outbound HTTPS connections. And for e.g. a bank, certain legal firms, or any company that has a lot of sensitive data they either don't want to be leaked, or at least want the option of detecting when it is leaked, that is a somewhat reasonable stance.
In the case of banks, this is needed for regulatory compliance regarding insider trading. For legal companies, I imagine this is about ensuring certain confidentiality. I could see the same thing for companies dealing with trade-secrets.
The statement 'Just log it on the end-points' presumes complete access to those end-points and all software running on them.
This method is considered better than terminating TLS early at a proxy and setting up a separate tunnel to the clients because breaking PFS is passive, rather than active. Thus it is a lot less resource intensive, a lot less vulnerable (no internet facing box that, if broken, has all communication in plaintext), and introduces no extra latency.
It is essentially a 'better' way to do an authorized MitM on everything on your network, and some companies want this authorized MitM. Like any authorized MitM, it introduces a third party who can compromise security, which is not generally desirable, but some companies don't mind being that third party to their own employees.
Why not have the endpoints ship their session keys OoB to a centralized place / whatever needs to look at the traffic? Sure, there's more of them, but that shouldn't be a huge volume? (It is insignificant compared to the captured traffic.)
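This mechanism more or less already exists: the NSS key log format (the SSLKEYLOGFILE convention honored by browsers and, since 3.8, by Python's `ssl` module). A minimal sketch - the temp-file path is just an example, and it needs OpenSSL 1.1.1+ under the hood:

```python
import os
import ssl
import tempfile

# Pick a file to receive per-session TLS secrets in the NSS key log
# format, which tools like Wireshark can use to decrypt a packet capture
# after the fact.
fd, keylog_path = tempfile.mkstemp(suffix=".keylog")
os.close(fd)

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog_path  # secrets appended as handshakes complete

# Every connection made through `ctx` now logs its session secrets; an
# endpoint agent could ship this file to a central retention store,
# leaving TLS 1.3's forward secrecy on the wire intact.
```

Note that `ssl.create_default_context()` also honors the `SSLKEYLOGFILE` environment variable directly, so for many applications this requires no code changes at all - which is roughly the "ship keys out-of-band" proposal in practice.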
> The statement 'Just log it on the end-points' presumes complete access to those end-points and all software running on them.
There still has to be some control over the endpoints. Otherwise, what prevents them from negotiating an algorithm in TLS 1.2 that has PFS?
And I am not sure if you're attempting to address this, but instead of terminating at a more edge-ish node, why not just decrypt and re-encrypt there? (So, it is still encrypted internally, but the node can inspect the data in an authorized manner.)
(You seem to address it, but I'm not sure what you mean: yeah, having a centralized box decrypting your traffic means that an attacker that gets access to that can see a lot. But what were you doing in TLS 1.2 w/ a non-PFS ciphersuite that didn't involve a machine w/ the ability to decrypt everything?)
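On the "what prevents them from negotiating a PFS algorithm in TLS 1.2" point: if you do control the endpoints, you can pin the client configuration to non-PFS suites. A sketch using Python's `ssl` module and standard OpenSSL cipher-string names for the RSA-key-exchange suites (whether a given OpenSSL build still ships those suites is another matter):

```python
import ssl

# Restrict a client to TLS 1.2 with RSA-key-exchange (non-PFS) suites.
# "AES256-GCM-SHA384" / "AES128-GCM-SHA256" are OpenSSL's names for the
# TLS_RSA_WITH_* suites; no ECDHE/DHE suites are offered, so every
# negotiated session can later be decrypted with the server's RSA key.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.maximum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("AES256-GCM-SHA384:AES128-GCM-SHA256")
```

This is exactly the kind of endpoint control the parent is pointing at: if an org can push this config, it already has enough control over the endpoints to do retention there instead.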
It's not just companies that want to MitM their traffic: consumers want this too. Otherwise all the IoT devices in a house will no longer be trustable: Alexa can start uploading everything it hears, even if you never said "hey alexa!". And because of forward secrecy, you can't verify what it did or did not send: it all looks the same.
This expands to any IoT device with proprietary software on it, which by 2023 will be quite a lot of things.
There's a whole IT market segment around TLS decryption for corporate LANs. Basically corporate MITM that will decrypt TLS at the gateway/firewall and, with currently used TLS standards, will then re-encrypt the traffic back to the client so the browser thinks it has a legit connection. It's used to scan packets for intrusion detection, for malware, and to track data loss like the article talks about.
But you don't NEED to kill forward secrecy to do that. TLS 1.3 doesn't seem to be a problem for the anti-malware, IPS, or even DLP use cases. You just need to decrypt, inspect, and re-encrypt traffic at the firewall, using a CA cert trusted by your clients. The problem is lazy organizations that just want to passively collect all of the encrypted traffic and then decrypt it later at their leisure, which smells much more like surveillance than security.
All of those require your computer to trust a new Certificate Authority or you will get warnings all over the place. If there is a company that claims to be able to do it without trusting the CA or producing warnings I would love to see it. (seriously, I actually would love to see that).
And if you are in a corporate environment using a company computer you forfeit your privacy anyway. You can always go somewhere else or do your banking and Facebook on a different machine / not on company time.
The original purpose was governments spying on their citizens, which is why a lot of software uses certificate pinning to block this intrusion. These MITM solutions just let the big players’ traffic through so you don’t get too much of a fuss while still retaining the ability to ‘check for malware’.
Breaking the security of HTTPS for surveillance and monitoring purposes. The BITS group is formally opposed to secure communications because they want to make it easier to MITM attack the secure communication. They want to make it easier to decrypt communication.
Corporate environments where "endpoint solutions" snoop through all traffic to detect malware activity.
While I understand the use case, I would not support it. What I find unacceptable is, assuming the article is correct, that ETSI is asking NIST to recommend their crippled TLS in their new guidelines rather than TLS 1.3.
Disabling PFS and thus enabling the decryption of all TLS sessions should be a conscious decision rather than something that was there 'by default' (and could easily be abused).
The other side of the argument is frequently discounted and as an IT security person myself I understand that. However, there is a real challenge for companies who deal with large amounts of very sensitive data. To be able to effectively monitor for data loss it makes a lot of sense to be able to monitor the connection points between your protected network and outside networks. The move to all traffic being encrypted and uninspectable breaks this paradigm.
You can cover some of the same concern by implementing an agent on every connected computing device but this brings much greater complexity as you are monitoring potentially hundreds to thousands more places and still have to worry if you have complete coverage.
Consider an analogy of going through international customs. Do you employ customs officials at the border who are allowed to sample and inspect private belongings to verify laws are being followed? Or do you employ an official to help pack the belongings of each individual who you think may eventually cross the border? The second example is a bit stretched but hopefully illustrates the scale problem.
Banks are required by regulation to monitor & audit pretty much everything. Previously they did this for internet usage by using MITM proxies. TLS 1.3 makes that approach hard/impossible.
I'm not sure I understand: why can't they record the decrypted traffic instead? (I assume they have it in plain text at some point.) Of course they could encrypt it again before sending it to their audit server.
How does ETS break MITM for corporate LANs that are trusted CAs on work devices? Why can't a proxy still MITM a connection by terminating the client side, establishing the server side, and that be that?
Also, banks seeing their own corporate traffic is ethical and moral. Whether they need to simply find another way to read all data leaving their network is another piece of the story.
We don't need more Big Brother.
Also, cutting the government's budget would not impact the cost of compliance to corporations.
Many who throw jabs at J2EE (written that way on purpose) never had the joys of trying out xBaseEE, CEE, C++EE (CORBA, DCOM/MTS),...
It turned me grey, bald, and cynical - a.k.a. experienced in every possible way to fuck something up. That turned out to be quite valuable!
PS. I can't even get ETSI's website to load! https://www.etsi.org/
If a TLS 1.3 client will happily connect to an ETS server that isn't playing by the rules, doesn't that indicate a flaw in 1.3?
There is a way to detect this. Record the last ephemeral public key that server used with you. If it uses the same one again, refuse to connect.
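That pinning idea can be sketched in a few lines (all names hypothetical). Note the obvious evasion: a server rotating between two static keys would pass this check, which is part of why the parent suggests it's outside TLS's scope:

```python
# Client-side sketch of "pin the last ephemeral key": refuse to proceed
# if a server presents the same (supposedly ephemeral) key share twice.
class EphemeralKeyReuseDetector:
    def __init__(self):
        self._last_seen = {}  # hostname -> last key-share bytes observed

    def check(self, hostname: str, key_share: bytes) -> bool:
        """Return True if the key share looks fresh; False on suspected reuse."""
        if self._last_seen.get(hostname) == key_share:
            return False  # same "ephemeral" key twice: refuse to connect
        self._last_seen[hostname] = key_share
        return True

detector = EphemeralKeyReuseDetector()
assert detector.check("bank.example", b"\x01\x02")       # first sight: fresh
assert detector.check("bank.example", b"\x03\x04")       # rotated: fresh
assert not detector.check("bank.example", b"\x03\x04")   # repeated: reject
```

A real client would hook this in where it parses the server's key_share extension, and would need per-host state that survives restarts to be of any use.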
No opaque data leaves my network
If your stance is ‘no opaque data leaves my network’ your only option is an air gap.
> Consider an analogy of going through international customs. Do you employ customs officials at the border who are allowed to sample and inspect private belongings to verify laws are being followed? Or do you employ an official to help pack the belongings of each individual who you think may eventually cross the border?
Without telling the person whose things were packed that they were packed by the official.