Readit News
mholt · 9 years ago
I really believe this is an important study. To help expose MITM, I implemented HTTPS interception detection features into Caddy, based on the heuristics described in the paper: https://github.com/mholt/caddy/pull/1430

The web server is in a unique position to be able to detect interception where the browser can't, and then choose how to handle it (warn the client, log the event, whatever). If you want to test this feature, I welcome your bug reports!

For example:

    {{if .IsMITM}}
    <b>We have reason to believe your
    connection is not private, even if
    your browser thinks it is.</b>
    {{end}}
Or:

    redir {
       if {mitm} is likely
       /http-451-censorship.html
    }
The researchers won't be releasing the fingerprints they collected until after NDSS '17 (March), but I'll look at taking those into account when they are available.

javajosh · 9 years ago
You're doing excellent work with Caddy, Matt. This solution of yours, which detects inconsistent headers on a single connection, is a good one. What will you do if and when MITM attackers do the extra work to duplicate headers?
mholt · 9 years ago
Thanks Josh, I appreciate it. Their method works by comparing the User-Agent HTTP header to the characteristics of the TLS handshake of the underlying connection.

There are some exceptions, but TLS proxies generally don't touch the User-Agent HTTP header. Doing so runs the risk of breaking things at the application layer. TLS proxies probably don't care if they break things (hence the research) but a proxy that wants to hide (malware, censorship, etc.) would not want to risk breaking HTTP.

This method, for the time being, should effectively force TLS proxies (who want to hide) to preserve the qualities of the original TLS connection. Then if the connection is weak, the browser can at least warn the user. I'm not certain this is a permanent solution, but given the eternal turnaround time of corporate products, I suspect it will be useful for years to come.
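As a rough illustration of the heuristic (the fingerprint values below are made-up placeholders, not the paper's or Caddy's actual data):

```python
# Heuristic sketch: a browser's User-Agent implies what its TLS
# ClientHello normally looks like. If the observed handshake is missing
# features that browser always sends, the TLS session was likely
# re-originated by an interception proxy. All fingerprint values here
# are hypothetical placeholders.

EXPECTED = {
    "Firefox": {"ciphers": {0xC02B, 0xC02F}, "extensions": {"ALPN", "session_ticket"}},
    "Chrome":  {"ciphers": {0xC02B, 0xCCA9}, "extensions": {"ALPN", "GREASE"}},
}

def likely_mitm(user_agent, ciphers, extensions):
    """True if the handshake lacks features the claimed browser sends."""
    for browser, exp in EXPECTED.items():
        if browser in user_agent:
            # A proxy that re-originates the connection typically drops or
            # reorders browser-specific ciphers and extensions.
            return not (exp["ciphers"] <= ciphers and exp["extensions"] <= extensions)
    return False  # unknown client: nothing to compare against

# Firefox UA, but the handshake is missing session tickets:
print(likely_mitm("Mozilla/5.0 Firefox/51.0", {0xC02B, 0xC02F}, {"ALPN"}))  # True
```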

zaphar · 9 years ago
Sadly, for companies in some industries (e.g., defense or healthcare) there are regulatory compliance issues that force them into running something that can intercept TLS connections. These companies are often in a position of either weakening security or failing an audit. Until the regulations catch up, they will be stuck between a rock and a hard place.
Jonnax · 9 years ago
Well, part of the argument that Google/Mozilla are putting forward is that some software performs its man-in-the-middle interception in bad ways that compromise user security.

The picture in the article states:

Avast 11.7: Advertises DES as a cipher (It's been vulnerable for more than a decade)

AVG and Bit Defender: Vulnerable to Logjam and POODLE

Kaspersky: Vulnerable to CRIME

Net Nanny, KinderGate, CYBERsitter, NOD32, Kaspersky Internet Security (Mac): Don't validate certificates.

zaphar · 9 years ago
Sure, but for most of these companies, conducting a security audit of their AV solution is very much not something they are going to be competent at. The typical auditing companies they might hire for their compliance reasons are also not going to be competent at it. As a result, due to a lack of knowledge and a need to check boxes, they will be choosing AV solutions that lower their security. It's a perfect storm, especially since many of the products on the list are considered best in market for these buyers.
Neil44 · 9 years ago
I can only speak to Eset as it's the only one I am sure about, but it definitely does validate certs. I have screen grabs of validation errors because they were interesting.
tony101 · 9 years ago
TLS interception is not the problem. Improper implementation of the protocol is. AV / Appliance vendors need to configure their MITM proxies correctly so that they utilize the same protocols and verifications that the browsers have.
KingMob · 9 years ago
But these devices are rarely updated, so even one that's safe now has no guarantee that it can be kept safe in the future.

As a more general issue, TLS interception is a problem. Widespread acceptance of TLS proxying violates the TLS contract: that all communications between my browser and a website are safely encrypted end to end.

deburo · 9 years ago
The article is also pointing out that ..

"The researchers urge antivirus vendors to stop intercepting HTTPS altogether, since the products already have access to the local filesystem, browser memory, and content loaded over HTTPS"

I wonder why they are resorting to TLS interception, then. Is it just easier to intercept TLS than to inspect memory? Is it just a lack of perspective?

mirimir · 9 years ago
> there are regulatory compliance issues that force them into running something that can intercept TLS connections.

Sure. But they also can't be sharing sensitive data with third parties. And yet many AV/security products can upload samples for analysis.[0] Including Word documents.

0) https://www.av-comparatives.org/wp-content/uploads/2014/04/a...

jcrawfordor · 9 years ago
Vendor NDAs and other compliance measures are in place. This is a standard part of negotiating enterprise licensing in security-sensitive sectors.
hehheh · 9 years ago
Should people in those industries have access to the outside web on the computers that have even a smidge of access to data that must be kept private by law? I'd say "no" -- I can't think of a scenario in which a healthcare device needs access to Google or hacker news or whatever.

They could just use a whitelist and replace all CAs on the computer with a (set of?) private CA(s) that allow the user to do work on information that requires such security.

jcrawfordor · 9 years ago
Telling people that they cannot 1) have their records management system and 2) have internet access on their computer is simply unrealistic. Basic measures like blocking webmail providers are extremely unpopular with employees and produce huge executive pushback, a whitelist approach to internet access like you propose would just be a total non-starter. Imagine if you yourself worked in that environment, where your computer could only access a few select websites because you have access to restricted information (which you almost certainly do) - I mean, most tech workers I know are deeply upset about not having local admin on their machines. What you're proposing is about a thousand steps more restrictive.
zaphar · 9 years ago
You would have to lock such machines down so that they have no direct connection to the internet and no way to get data off of them via portable disk storage. In practice this is enough of an impediment to getting actual work done that it's unrealistic. You are basically asking companies to create SCIFs. For a defense contractor working in intelligence, it is often the case that they work in SCIFs provided by the government. But in healthcare it's probably unreasonable to expect it.
paulddraper · 9 years ago
There are internet-based patient management systems.

Yes, it is possible to explicitly list all certificates you need, or you could simply use the same PK infrastructure as the rest of the world.

user5994461 · 9 years ago
> Should people in those industries have access to the outside web ...

They don't have it.

fulafel · 9 years ago
They should just live without HTTPS and not break it for everyone else.
lolc · 9 years ago
They only break HTTPS for themselves. And not using HTTPS is not an option anymore.
rkeene2 · 9 years ago
My main complaint with attempts to MITM TLS is that it is a failure -- you cannot actually MITM TLS without breaking TLS. Specifically, TLS client certificates are almost always broken by attempts to MITM TLS, and we use TLS client certificates for almost all HTTPS connections.
majewsky · 9 years ago
This was actually a huge problem when, at $work, we recently locked down our production Kubernetes clusters to only allow access via an SSH jump server. The approach we've settled on is to have a forward HTTP (without S) proxy on the jump server (listening only on localhost) whose port is forwarded to the local machine by SSH, so we can do `export https_proxy=https://localhost:$forwarded_port` to make kubectl work. sshuttle was also evaluated, but it clashes with the VPN client that the remote colleagues need.
majewsky · 9 years ago
Cannot edit anymore, but that should be "http://" instead of "https://".
paulddraper · 9 years ago
I see a lot of hate for TLS interception of any kind, but I did it just the other day for my CI servers. This isn't what Chrome and Mozilla are upset about, but it's an example of IMO valid TLS MITM.

Our multi-language build process downloads from Bintray, Maven, npm, GitHub, CloudFront, S3 using curl, Maven, SBT, npm, apt, etc. To improve build times and insulate against upstream downtime, I MITM the CI servers with a caching proxy.

Two environment variables (http_proxy, https_proxy), and everything is cached, fast, and reliable.
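For what it's worth, the mechanics are just the standard proxy environment variables that most HTTP tooling (curl, pip, npm, Python's stdlib, ...) honors. A minimal sketch, with a made-up internal proxy host:

```python
import os
import urllib.request

# Hypothetical internal caching proxy; most build tooling reads these
# environment variables without any per-tool configuration.
os.environ["http_proxy"] = "http://build-cache.internal:3128"
os.environ["https_proxy"] = "http://build-cache.internal:3128"

# Python's stdlib picks them up automatically:
proxies = urllib.request.getproxies()
print(proxies["https"])  # http://build-cache.internal:3128
```

To actually cache HTTPS responses (rather than just tunnel them), the proxy additionally has to terminate TLS with a certificate the CI machines trust, which is the MITM part.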

rkeene2 · 9 years ago
There is no valid TLS MITM. All attempts at TLS MITM break TLS in some way -- very commonly with TLS client certificates.

Edit: You're only proxying the encrypted data and not trying to do a MITM, so this doesn't break TLS, but it doesn't do a MITM. I added this complaint as a more general statement at the top-level of comments.

wang_li · 9 years ago
It's my network with my assets and my data. Only I decide what is valid with regard to TLS on my network. The number of applications that purport to serve a particular purpose but then proceed to exfiltrate substantial amounts of data that is not even tenuously related to that purpose has destroyed any goodwill on my part.

On my network there are an order of magnitude more valid TLS MITMs happening than there are valid non-MITMed TLS connections.

znep · 9 years ago
Just setting https_proxy isn't going to give you any caching benefits for https requests and isn't going to MITM TLS in any way. The client will make a CONNECT request through to the destination and the encryption is end to end and the response not cacheable by the proxy.
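A sketch of what that looks like on the wire: the client asks the proxy to open an opaque tunnel, and everything after the proxy's 200 reply is end-to-end TLS that the proxy cannot read or cache.

```python
# What an HTTPS-capable client sends to a plain forward proxy: a CONNECT
# request opening a raw byte tunnel. After the proxy answers
# "HTTP/1.1 200 Connection Established", the TLS handshake and all
# ciphertext flow through unchanged -- the proxy sees only opaque bytes.
host, port = "example.com", 443
connect_request = (
    f"CONNECT {host}:{port} HTTP/1.1\r\n"
    f"Host: {host}:{port}\r\n"
    "\r\n"
).encode("ascii")
print(connect_request.decode("ascii"))
```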
paulddraper · 9 years ago
Point https_proxy to a CONNECT-aware HTTPS MITM proxy, and it does.
TheDong · 9 years ago
What you did doesn't sound like MITM/TLS interception.

Assuming you meant you set those environment variables for your applications, then that wasn't mitm. It was application-level supported proxying.

Those are totally different things.

paulddraper · 9 years ago
I apparently was unclear.

I set up a caching MITM TLS proxy (with a trusted cert on my CI server).

user5994461 · 9 years ago
It's not MITM. It's regular HTTPS tunneling through a regular proxy, which is a feature supported by most proxies. Both the client and the proxy must be configured and aware of each other.
mschuster91 · 9 years ago
Have you tried Sonatype Nexus OSS yet? It's free (actually, open-source-free) and supported by docker, npm and maven. No need for strange SSL interception any more.

(Not affiliated, just an extremely happy user)

paulddraper · 9 years ago
Yep, I've used it.

I have to make sure each of the tools is set up to use it, and I have to move the source repos from code to Nexus config, and it doesn't help if anyone does something non-standard, e.g. last I checked, installing Angular involved an ad-hoc GitHub download.

HTTPS_PROXY gets virtually everything in one go.

tyingq · 9 years ago
HPKP[1] everywhere? Are any of the antivirus or corporate proxy products able to defeat it?

[1] https://en.m.wikipedia.org/wiki/HTTP_Public_Key_Pinning
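For reference, an HPKP pin is the base64-encoded SHA-256 of a certificate's SubjectPublicKeyInfo. A sketch of how a pin and the response header are built (the SPKI bytes below are dummy stand-ins for a real DER structure):

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """pin-sha256 value: base64(SHA-256(SubjectPublicKeyInfo DER))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Dummy bytes stand in for a real SPKI structure extracted from a cert.
pin = spki_pin(b"\x30\x82\x01\x22dummy-spki")
header = f'Public-Key-Pins: pin-sha256="{pin}"; max-age=5184000; includeSubDomains'
print(header)
```

A site serving this header tells conforming browsers to reject future chains whose keys don't match the pins -- which is exactly the mechanism the Chrome exception for private trust anchors bypasses.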

mrbabbage · 9 years ago
> Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. “Data loss prevention” appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning.

https://www.chromium.org/Home/chromium-security/security-faq...

I can't remember what Firefox does in this situation.

tyingq · 9 years ago
Wow. That pretty much neuters the entire purpose.
jpalomaki · 9 years ago
Corporations can defeat all browser based features by forcing their users to use another browser.
tyingq · 9 years ago
Well, I suppose they could alter and hand compile one as well, but there is a point where the work would exceed the value provided. It's too bad, from my perspective, that HPKP won't help with this issue.
j_s · 9 years ago
You can request a second opinion attempting to detect MitM via JavaScript using snuck.me:

https://jlospinoso.github.io/node/javascript/security/crypto...

The usual client-side JavaScript crypto caveats apply.

jpalomaki · 9 years ago
The reason why corporations are doing this is that they are afraid of what kind of (malicious) content is coming down to their network and what users are possibly sending out.

Could we fix this by isolating the browser more efficiently from the local workstation environment and thereby removing the need for this kind of security?

What if you executed the browser in an environment where you would not need to care so much about security? Like running the actual browser process in a completely separate environment, maybe located outside your intranet firewall, and just streaming the UI to the desktop via some simple-enough-to-be-secure mechanism.

j_s · 9 years ago
A client-side SSL interception mechanism exists: the SSLKEYLOGFILE environment variable. It isn't any help for security appliances, though.

Also, it is only implemented in Firefox and Chrome; Microsoft doesn't support it in their browser or at the OS level.

I'm surprised it still exists; it seems like a juicy malware target, just like these poorly implemented SSL MitMs.
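For the curious, the same key-log mechanism the browsers expose through SSLKEYLOGFILE is also available in Python's ssl module (3.8+, assuming an OpenSSL 1.1.1 build); the file path below is arbitrary:

```python
import os
import ssl
import tempfile

# NSS-format key log: Firefox and Chrome honor the SSLKEYLOGFILE
# environment variable, and Python exposes the same mechanism. Tools
# like Wireshark can then decrypt captured TLS traffic for the sessions
# whose secrets were logged.
keylog = os.path.join(tempfile.gettempdir(), "tls-keys.log")

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog  # per-session secrets are appended here
print(ctx.keylog_filename)
```

This is a client-side, opt-in inspection path: it decrypts only your own connections, without touching certificate validation the way an interception proxy does.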