Should they have hardened in all the other ways for defense in depth, e.g. requiring authentication from localhost? Sure. Should the attacker not have carried out the attack? Without a doubt. But should docker not have completely undone a security control?
Yes, and the fact that docker seems to have persisted with the current state is the topic of discussion.
This same one got me after years of using Docker; I only discovered it after using the combination of Ubuntu Server (and its ufw) on a DMZed device.
I was running what I thought was an internal FTP instance for almost a week.
Luckily it was about as hardened as regular ftp can be, but I noticed the problem when my service wasn't able to log in as the (very low) connection limit was filled by someone attempting passwords.
I've been using https://github.com/shinebayar-g/ufw-docker-automated to make docker compliant with ufw, and defining firewall rules as labels for the containers.
It needs some work still, namely the service should be hosted in its own container for easier updating, but it works reasonably well.
* Run a SaaS that costs money
* Do not perform the migration in a prod replica setup with test data, and do not monitor for, observe, and fix any oddities
* Cause a 100% data breach by not following a sound change management procedure
Just because an individual is being blamed here and not a Ltd. or Corp. doesn't mean this is how a quality SaaS should be run. I know that it's-just-a-side-gig SaaS operations wing it more often than not, but this is exactly what happens when you haven't got good ops. This isn't a random error; it's the consequence of a systematic one.
The machine with the DB had a public interface. Firewalls or not, this is just bad. The DB machine should be in a private subnet, preferably with no internet access at all, not even via NAT.
Proof that it had a public interface, from TFA: "Docker helpfully inserted an allow rule into iptables, opening up MongoDB to the world"
> The machine with the DB had a public interface. Firewalls or not, this is just bad. The DB machine should be in a private subnet, preferably with no internet access at all, not even via NAT.
This is the type of thinking that led to the attack in the first place. No, you should not pretend that your database is secure because it is hidden deep behind dozens of layers of firewalls.
You should assume that your database is always reachable from the internet and make sure that internet access to your database doesn't matter.
With MongoDB you are supposed to use TLS encryption and authentication even for communication within the same host, because a low-privilege hack or mistake could forward a database port or make it public.
In this scenario you not only need to bypass all the network security theater, you also need the password and a valid client certificate signed by a private CA.
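For a concrete picture, a minimal sketch of that kind of client setup with pymongo; the host and file names are placeholders, and exact option names can vary between driver versions:

```python
from pymongo import MongoClient

# Connect over TLS and authenticate with an X.509 client certificate.
# The certificate must be signed by the CA that mongod is configured to trust.
client = MongoClient(
    "mongodb://db.internal:27017/",      # placeholder host
    tls=True,
    tlsCAFile="ca.pem",                  # private CA that signed the server cert
    tlsCertificateKeyFile="client.pem",  # client cert + key, signed by the same CA
    authMechanism="MONGODB-X509",
)

# Even a local connection now needs valid key material, so a forwarded or
# accidentally published port is useless to an attacker without it.
print(client.admin.command("ping"))
```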
Even Google Cloud SQL is just a managed instance with a public IP (that you don't seem to be able to make private).
Firewalls always help, and that's what Cloud SQL does: whitelist your IP when using the CLI.
> This creates a firewall rule which maps a container port to a port on the Docker host to the outside world.
Source: https://docs.docker.com/config/containers/container-networki...
Requiring authentication from localhost does not seem relevant to me, given that the creds would be stored somewhere anyway, either in memory or in a file, but exposing a port is not "binding on localhost".
However, testing your firewall after publishing a Docker port seems like common sense.
Indeed, that's how I found out Docker was using the DOCKER-USER iptables chain that you can customize:
https://docs.docker.com/network/iptables/
And that's how I made a simple firewall that works:
https://yourlabs.io/oss/yourlabs.docker/-/blob/master/tasks/...
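To make that concrete, a minimal sketch (not the linked playbook itself) of restricting the DOCKER-USER chain so published ports only answer to a trusted subnet; the interface name and subnet are placeholders, and it has to run as root:

```python
import subprocess

TRUSTED_SUBNET = "10.0.0.0/8"   # placeholder: your VPN or LAN range
EXTERNAL_IFACE = "eth0"         # placeholder: the internet-facing interface

def run(cmd):
    # Print and execute a single iptables command, failing loudly on error.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Rules in DOCKER-USER are evaluated before the rules Docker manages itself,
# so a DROP here applies to published container ports as well.
run([
    "iptables", "-I", "DOCKER-USER",
    "-i", EXTERNAL_IFACE,
    "!", "-s", TRUSTED_SUBNET,
    "-j", "DROP",
])
```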
Another thing: instead of publishing ports like that, the easiest is to use Docker Compose, so that the containers of a stack have their own private shared network; then you won't have to publish ports to make your services communicate. Otherwise, just create a private network yourself and containerize your stuff in it.
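A rough sketch of that pattern using the Docker SDK for Python; the image names are illustrative, and the point is simply that nothing gets published on the host:

```python
import docker

client = docker.from_env()

# A user-defined bridge network shared by the stack. (Passing internal=True
# would additionally cut any routing to the outside world.)
client.networks.create("app-internal", driver="bridge")

# The database is reachable as "db" from other containers on the same
# network, but no -p/--publish mapping ever touches iptables.
client.containers.run("mongo:4.4", name="db",
                      network="app-internal", detach=True)
client.containers.run("myorg/myapp:latest",   # placeholder application image
                      name="app", network="app-internal", detach=True,
                      environment={"MONGO_URL": "mongodb://db:27017/"})
```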
So for me that's two newbie mistakes which led to falling for this untargeted, script-kiddie attack that has been going on since 2017.
But yeah, go ahead and publish your ports instead of using Docker networks as you should, "believe" in your firewall while you're at it, and then blame the "docker footgun".
> Another thing: instead of publishing ports like that, the easiest is to use Docker Compose, so that the containers of a stack have their own private shared network; then you won't have to publish ports to make your services communicate.
That only works with local communication unless you use docker swarm. So byebye high availability.
If you're going to make suggestions, at least think about them from a production point of view.
> Yes, and the fact that docker seems to have persisted with the current state is the topic of discussion.
It's clearly written in the docs:
To expose a container’s internal port, an operator can start the container with the -P or -p flag. The exposed port is accessible on the host and the ports are available to any client that can reach the host.
(from https://docs.docker.com/engine/reference/run/#expose-incomin...)
It should be common knowledge by now to either create a virtual interface / network on the host to publish ports on for usage by services that aren't running in Docker, or (if you are in an environment where all services are in Docker) use --link between containers.
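For what it's worth, a hedged sketch of the more conservative publishing variant: binding the published port to a specific host address (loopback here) instead of 0.0.0.0, so the rule Docker adds is scoped to that address. Image and port are only examples:

```python
import docker

client = docker.from_env()

# Equivalent to `docker run -p 127.0.0.1:27017:27017 mongo:4.4`:
# the port is reachable from the host itself, but Docker's iptables rule
# covers 127.0.0.1 rather than every interface.
client.containers.run(
    "mongo:4.4",
    detach=True,
    name="mongo-local",
    ports={"27017/tcp": ("127.0.0.1", 27017)},
)
```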
I don't think that's clear at all. If I set "bind_ip = *" in some application then it's also "available to any client that can reach the host", but the firewall is in front of that. I certainly wouldn't expect an application to frob with my firewall.
And as I understand it, this is very much an unintentional side-effect of ufw and Docker interacting – it's not Docker's intention at all to override any iptables rules, just an unfortunate side-effect.
"It should be common knowledge by now" is very hand-wavy. I never used Docker much, and I could have been bitten by this. Where do I get this "common knowledge" from? Not everyone is a full-time sysadmin; some people just want to run a small service, read the basic documentation on various things etc., and set things up. Not everyone is super-invested in Docker (or any other tool for that matter) to have all the "common knowledge" that all the experts have. This is why defaults matter.
Everyone wants to be treated as a (software) engineer here, but the engineer's perspective in this situation would be the opposite: The victims are the customers, and the perpetrator, acting in negligence, was Newsblur. Risk management is a core part of the engineer's job. https://www.sebokwiki.org/wiki/Risk_Management
This is completely wrong. By this logic, my killing a parent turns the parent into a perpetrator because the parent didn't learn proper self-defense.
No, the parent is a victim and the children are dependents of the victim. Thus hurting the parent also hurts the children. It's the same thing with Newsblur. Hurting Newsblur hurts their customers.
> Everyone wants to be treated as a (software) engineer here, but the engineer's perspective in this situation would be the opposite: The victims are the customers, and the perpetrator, acting in negligence, was Newsblur.
There can be multiple victims, multiple causes/threat actors, and overlap between two categories.
But what's the end user a victim of in this case? Other than a brief lack of availability?
Network services really should have auth by default even if they only bind to localhost. There’s so many ways that a localhost service can become accessible to attackers. Any process with network access, no matter what user it’s running as, can access a localhost service - UNIX sockets on the other hand can be restricted by the usual user/group permissions. A localhost service can be exposed by e.g. an SSRF bug from a web server on the same machine. Or, accessed by a browser on the same machine browsing untrusted sites (an attack model for desktop services which open local ports), and so on. (In case you believe that the HTTP protocol restriction in the latter two cases protects you - protocol smuggling over HTTP is frighteningly effective!)
Of course, as seen here, a network service binding to more than localhost is strictly more exposed - and auth should be required in those cases (even when the service should ostensibly be LAN-only).
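As a toy illustration of auth-by-default even on localhost, a minimal handler that rejects any request without a shared token; the token handling is deliberately simplistic and only meant to show the shape of the check:

```python
import hmac
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# In practice this would come from a secrets manager or a tightly
# permissioned file; the env var is just for the sketch.
TOKEN = os.environ.get("SERVICE_TOKEN", "change-me")

class AuthedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        supplied = self.headers.get("Authorization", "")
        # Constant-time comparison; "Bearer <token>" is the expected form.
        if not hmac.compare_digest(supplied, f"Bearer {TOKEN}"):
            self.send_response(401)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    # Bound to loopback, but requests are still authenticated: an SSRF or a
    # hostile local process gets a 401, not your data.
    HTTPServer(("127.0.0.1", 8080), AuthedHandler).serve_forever()
```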
It’s an unpopular opinion but it’s 100% a good idea. If at all possible, bind local services to UNIX sockets instead of localhost ports. At least that way you’ll get some measure of access control effectively for free.
If you’re writing software that deals with network connections (as a client or server), write the code for UNIX sockets first. It’s usually trivial to bolt network connection code on top, and being able to deal with sockets directly can make automated testing significantly easier.
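A small sketch of the UNIX-socket-first idea in Python; the socket path is a placeholder, and the permission bits are where the near-free access control comes from:

```python
import os
import socket

SOCK_PATH = "/run/myapp/api.sock"  # placeholder path; the directory must exist

# Remove a stale socket file from a previous run, if any.
try:
    os.unlink(SOCK_PATH)
except FileNotFoundError:
    pass

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)

# Only the owning user and group can connect; there is no TCP port for a
# remote attacker or a curious local process to find.
os.chmod(SOCK_PATH, 0o660)

server.listen()
while True:
    conn, _ = server.accept()
    conn.sendall(b"hello over a unix socket\n")
    conn.close()
```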
A similar but more flexible solution I like is to create a wireguard network between all nodes and developer workstations, which creates a separate interface called wg0. Bind every private service to this and publish what you want through a reverse proxy.
It has the added benefit of not needing vendor-specific TLS configuration, and firewall rules are easy to configure since everything goes through a single VPN port.
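Sketching that: bind the service to the node's WireGuard address only, so it never listens on the public interface. The 10.8.0.1 address and the port are placeholders for whatever your wg0 network uses:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

WG0_ADDR = "10.8.0.1"  # placeholder: this node's address on the wg0 interface

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"only reachable over the VPN\n")

# Binding to the wg0 address (rather than 0.0.0.0) means the service simply
# is not listening on the public interface; a reverse proxy on the VPN can
# then publish whatever actually needs to be public.
HTTPServer((WG0_ADDR, 8000), Handler).serve_forever()
```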
Secure by default is best for those people who do not know every little config detail. Secure by default helps exactly those you try to make life easier for, e.g. when you open a port automatically because you assume your users don't know how to... having security by default is the best option for those people. And by "those people" I mean everyone, including myself.
Making something secure "less secure" (according to your needs) is usually easy. Making something insecure more secure can be complicated, easy to do wrong, easy to forget, or just something people don't know about.
I don't see how NewsBlur is getting a pass on this and Docker is taking all of the blame. Would they still get sympathy if they had "password" as their DB password and were hacked that way?
I would blame MongoDB for its default-insecure configuration. There's no excuse. It's been like that for at least a decade (when I last used it) and it was a bad choice even then. At a _minimum_ when they upgraded the engine to integrate WiredTiger they should have pushed that through as part of the breaking change.
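To make the contrast concrete, a rough sketch of the manual lock-down step the default never forces on you: creating an admin user on a fresh local instance and reconnecting with credentials (this assumes authorization is then enabled in mongod's config; the names and password are placeholders):

```python
from pymongo import MongoClient

# Against a brand-new, auth-less local instance: create the first admin user.
# This is exactly the step that never happens when the insecure default
# "just works".
client = MongoClient("mongodb://127.0.0.1:27017/")
client.admin.command(
    "createUser",
    "admin",                              # placeholder username
    pwd="use-a-real-secret-here",         # placeholder password
    roles=[{"role": "userAdminAnyDatabase", "db": "admin"}],
)

# After setting `security.authorization: enabled` in mongod.conf and
# restarting, unauthenticated connections can no longer read or write.
authed = MongoClient("mongodb://admin:use-a-real-secret-here@127.0.0.1:27017/admin")
print(authed.admin.command("ping"))
```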
'easy to start developing against' falls flat on its face once you deploy to production and realize you have to bolt on authn and authz to your code and data connections instead of properly designing it in from the get go. (it might work if you never deploy your products though. god knows how many products don't see the light of day.)
Crap like this is why I don't run Docker. I'm glad Red Hat took a principled stance on it and dropped it. Podman doesn't punch holes you didn't ask for in your firewall. What a ludicrous anti-feature.
MySQL/MariaDB have a completely open root account as well... although default firewall rules should prevent public access, unless Docker likes to punch that hole open too.
Yes, root account password and access permissions should be changed upon a fresh install, but the real issue here is Docker's "helpfulness" by opening ports without explicit permission. That's absurd, and has no reasonable excuse.
From 3.6 the default binding is localhost. That would be ok, except that if you use the official Docker image, you listen to all interfaces. This will happen with Redis as well, and it is a problem with both Docker and its Dockerfile.
I had the 'opposite' thought: if the whole system were in a VPC, Docker wouldn't have had the authority to expose a port to the outside world.
Looks like the author is ahead of both of us. Later in the post he mentions both a planned transition to a VPC and plans to beef up the security of MongoDB itself.
This isn't a footgun. A footgun is when something that should have been expected happens, but wasn't expected, out of negligence or ignorance. The example that everyone seems to love is pointer arithmetic: if you make a basic error in your math, invalid memory access may occur, and then likely more bad stuff.
Docker altering firewall rules without explicit instructions to do so is either a feature, if you're looking to defend Docker, or a security defect, if you're feeling rational. There's an argument to be made about the root cause of the issue: the Docker container manifest, or the application itself. But pretending that this is what Docker should be doing and that users should know better just seems wrong.
1. Docker doing this iptables change out of the box, and
2. MongoDB not having a password set out of the box
The life of a developer (and solo dev) means you often have limited time in which to navigate a project and try to do your best to understand, deploy, and use it -- this is just one of many tasks on your TODO today to get you closer to operating your product.
I really wish these kinds of things were more secure in a few ways:
1. Defensive defaults (passwords, not opening holes in firewalls), and
2. Not making the security hard to use
If it's painful to work with the security feature and "get it working", someone with limited time may just undo the defensive defaults to get things working again (doh).
But I get it, sometimes a security piece on by default is so cryptically painful to understand that you get so frustrated with it you just turn it off.
One of the main issues with security is that there's no visible difference between a well-secured system and one that's open to any script kiddie. And you might install something or make a mistake in a config file, and suddenly your system is completely insecure, and again there will be nothing obvious about it.
I think what's missing is an easy to use tool, installed by default - you run it and see a clear overview of your system security - what ports are opened, how is SSH accessible, how secure are the running services, etc. with plain ticks/crosses to show what's good or not. The kind of tool that a developer can install on their server and then check that at least the basics are right.
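A very rough sketch of the ticks-and-crosses idea, using psutil (a third-party package) to list listening sockets and flag anything not bound to loopback; a real tool would check far more than this:

```python
import psutil

def check_listeners():
    # Walk every listening TCP/UDP socket on the machine and flag the ones
    # that are reachable from more than just localhost.
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_LISTEN:
            continue
        ip, port = conn.laddr.ip, conn.laddr.port
        local_only = ip in ("127.0.0.1", "::1")
        mark = "OK  " if local_only else "WARN"
        print(f"[{mark}] port {port} bound to {ip}")

if __name__ == "__main__":
    check_listeners()
```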
Understanding security is no longer optional if you run a business on the internet. People like to talk about how they want to focus on the product and building features, but if you don't have security you don't have a functional product. No, security is not easy, but it's not an optional skill set anymore. Learn it or fail.
The enterprise space is rife with such tools, but unfortunately they tend to tell you everything, which makes it hard to identify what actually matters.
The biggest issue with docker is the false sense of security it seems to give a lot of engineers into thinking they know infra when they really don't. I stay away from this because I don't understand it fundamentally, and now this proves it's better to not think these new technologies are your friend unless you actually know what you're doing (which apparently most don't).
> the false sense of security it seems to give a lot of engineers into thinking they know infra when they really don't.
I don’t say this lightly: this is a big problem in our industry right now. DevOps means (to some) that developers now handle operations. The reality is that it’s difficult to juggle an operations mindset with a feature driven one.
Even if you focus on infra problems full time there is so much ground left uncovered, it would be impossible for anyone to juggle all of dev and all of infra at once, and this is compounded by the fact that ops is reductionist and dev is additive, which is an incredibly difficult problem for a person to reconcile and give full attention to one side. That’s why devops was supposed to be a division of labour; not just a dude/dudette who can configure nginx and write code.
Staying away from it is still not the best strategy - at least learn and play with it to understand its strengths and weaknesses.
> it seems to give a lot of engineers into thinking they know infra when they really don't
Maybe, but technology changes over time - I don't see many new projects choosing VMware over Docker/OCI for new infrastructure deployment, since you usually don't need a full VM for applications that just need isolation and easy static deployments.
> Staying away from it is still not the best strategy
There's a whole generation of sysadmins that use docker so that they can stay away from foundational knowledge. We interview experienced devops who do not know/understand how to build basic packages from source (e.g. they don't understand the ./configure, make, make install chain) and who only have basic knowledge of the underlying operating system.
I suppose my premise is that engineers often delude themselves into thinking they know it well enough to implement Docker when they don't. I'd much rather the regular developers focus on the coding and leave those decisions to actual DevOps folks who can focus on getting this stuff right. But the simplicity of a Dockerfile lulls many into thinking that if they run something with it, it's production ready.
I can't help but be reminded that zero trust architecture for security has been a thing for at least a decade, and that the 2004 Jericho Forum concluded that perimeter security was illusory, more akin to a picket fence than a wall.
Everyone embracing immutable server configuration would help too. We give so many tools like docker complete trust to touch configuration and do the right thing. Sometimes it bites us though. In a perfect world you'd see docker try to change iptables and it fail, then investigate what's up and understand that a specific change has to be allowed and all the implications of that change.
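As a crude sketch of at least noticing such changes after the fact, one could snapshot the ruleset and diff it on a schedule; the baseline path is a placeholder and this needs root:

```python
import difflib
import pathlib
import subprocess

BASELINE = pathlib.Path("/var/lib/fw-baseline/iptables.rules")  # placeholder path

def current_rules():
    # `iptables-save` dumps the whole ruleset in a stable, diffable format.
    out = subprocess.run(["iptables-save"], check=True,
                         capture_output=True, text=True).stdout
    # Drop the timestamp comment lines so they don't create diff noise.
    return [line for line in out.splitlines() if not line.startswith("#")]

def main():
    rules = current_rules()
    if not BASELINE.exists():
        BASELINE.parent.mkdir(parents=True, exist_ok=True)
        BASELINE.write_text("\n".join(rules) + "\n")
        print("baseline recorded")
        return
    baseline = BASELINE.read_text().splitlines()
    diff = list(difflib.unified_diff(baseline, rules,
                                     "baseline", "current", lineterm=""))
    if diff:
        # Docker inserting DOCKER/DOCKER-USER/NAT rules would show up here.
        print("\n".join(diff))
    else:
        print("no firewall drift")

if __name__ == "__main__":
    main()
```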
Perimeter security would have been just fine here. The breach occurred because the host was exposed directly to the internet, rather than e.g. sharing a private network with a load balancer.
Believing that a zero trust architecture is a replacement for perimeter security is just as illusory and dangerous. Defense in depth ensures that a vulnerability or temporary configuration mistake in one facet of security does not lead to total compromise.
> If a rogue database user starts deleting stories, it would get noticed a whole lot faster than a database being dropped all at once.
This feels like an odd statement. Surely a database being dropped all at once is about the loudest possible thing that could happen to a database-reliant application?
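Either way, catching the quiet failure mode as well as the loud one comes down to monitoring; a toy sketch of a check that alerts on a sudden drop in document counts (database and collection names and the threshold are made up):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://127.0.0.1:27017/")
db = client["newsblur"]  # placeholder database name

# Previous counts would normally come from the last run (a file, a metrics
# store, ...); hard-coded here to keep the sketch short.
previous_counts = {"stories": 1_000_000, "feeds": 50_000}

for name, before in previous_counts.items():
    now = db[name].estimated_document_count()
    # A whole-database drop shows up as ~100% loss; a rogue user slowly
    # deleting rows shows up as a smaller but still suspicious dip.
    if before and (before - now) / before > 0.10:
        print(f"ALERT: {name} shrank from {before} to {now}")
```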
Previously: https://news.ycombinator.com/item?id=27613217
Totally unexpected outcome.
Luckily I discovered it in testing, so didn't make it to production. But annoying that those issues still remain for so long.
Only on IPv4...
Secure-by-default is the best kind of security.
Reminds me of this: https://www.weforum.org/agenda/2021/04/brains-prefer-adding-...
Docker breaches the localhost vs. external port security model (and others).