floatboth · 9 years ago
Jails are actually very similar to Linux namespaces / unshare. Much more similar than most people in this thread think.

There's one difference though:

In namespaces, you start with no isolation, from zero, and you add whatever you want — mount, PID, network, hostname, user, IPC namespaces.

In jails, you start with a reasonably secure baseline — processes, users, POSIX IPC and mounts are always isolated. But! You can isolate the filesystem root — or not (by specifying /). You can keep the host networking or restrict IP addresses or create a virtual interface. You can isolate SysV IPC (yay postgres!) — or keep the host IPC namespace, or ban IPC outright. See? The interesting parts are still flexible! Okay, not as flexible as "sharing PIDs with one jail and IPC with another", but still.

So unlike namespaces, where user isolation is done with weird UID mapping ("uid 1 in the container is uid 1000001 outside") and PID isolation I don't even know how, jails are at their core just one more column in the process table. PID, UID, and now JID (Jail ID). (The host is JID 0.) No need for weird mappings, the system just takes JID into account when answering system calls.
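Roughly, the two approaches look like this (a sketch; the jail parameters are illustrative, and the unshare invocation needs root or a user namespace):

  # Linux: opt in to isolation one namespace at a time
  unshare --mount --uts --ipc --net --pid --fork /bin/sh

  # FreeBSD: one jail(8) call gives the isolated baseline; note the new JID
  jail -c name=demo path=/ host.hostname=demo ip4.addr=127.0.0.2 command=/bin/sh
  jls   # lists JID, IP, hostname and path for every running jail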

By the way, you definitely can run X11 apps in a jail :) Even with hardware accelerated graphics (just allow /dev/dri in your devfs ruleset).
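For the curious, a devfs.rules sketch (the ruleset name and number are made up; see devfs.rules(5), and $devfsrules_jail is the stock jail ruleset):

  [devfsrules_jail_gpu=25]
  add include $devfsrules_jail
  add path 'dri*' unhide
  add path 'dri/*' unhide

Point your jail at it with devfs_ruleset=25 and /dev/dri shows up inside.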

P.S. one area where Linux did something years before FreeBSD is resource accounting and limits (cgroups). FreeBSD's answer is simple and pleasant to use though: https://www.freebsd.org/cgi/man.cgi?rctl
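A taste of rctl (rule syntax per rctl(8); the jail name is a placeholder):

  # cap a jail's memory and CPU share, then inspect current usage
  rctl -a jail:demo:memoryuse:deny=512m
  rctl -a jail:demo:pcpu:deny=50
  rctl -u jail:demo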

deathanatos · 9 years ago
While I'm not sure I agree entirely with the "Complexity == Bugs" section, the main point, that containers aren't first-class citizens but rather a (useful) combination of independent mechanisms, is spot-on. This has real repercussions: most people I've spoken to don't know these things exist. They know containers do, they have a very vague idea what containers are, but they have no fundamental understanding of the underlying concepts. (And who can blame them? Really, it was marketed that way.)

For example, pid_namespaces and subreapers are awesome features¹, and are extremely handy if you have a daemon that needs to keep track of a set of child jobs that may or may not be well behaved. pid_namespaces ensure that if something bad happens to the parent, the children are terminated; they don't ignorantly continue executing after being reparented to init. Subreapers (if a parent dies, reparent the children to this process, not init) solve the problem of grandchildren getting orphaned to init if the parent dies. Both are excellent features for managing subtrees of processes, which is why they're useful for containers. Just not only containers.
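You can see the pid_namespace behavior from a shell (a sketch; needs root, or unshare's -r flag to pair it with a user namespace):

  # when the namespace's PID 1 exits, the kernel SIGKILLs everything
  # left inside -- nothing gets reparented to the host's init
  sudo unshare --pid --fork --mount-proc sh -c 'sleep 1000 & exec sleep 5'
  # after 5 seconds the background sleep is gone along with the namespace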

But of course, developers aren't going to take advantage of syscalls they don't even know exist.

¹although I wish someone could tell me why pid_namespaces are root-only: what's the security risk of allowing unprivileged users to create pid_namespaces?

chatmasta · 9 years ago
This is definitely true, but only as long as docker (or $container_runtime) remains lightweight enough that you can still use those independent parts on their own, compatibly with docker. The risk is that docker grows in complexity such that it creates new dependencies between these independent parts and therefore handicaps their power when used individually.

As an example, it's easy to create network namespaces and add routing rules, interfaces, packet forwarding logic, etc., all by using `ip netns exec`. But there is no easy way to launch a docker container into an existing netns. You need to use docker's own network tooling or build your own network driver, which may be more complex than what you need. This strikes me as a code smell in docker.
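The manual plumbing really is pleasantly composable (a sketch; names and addresses are made up):

  ip netns add demo
  ip link add veth-host type veth peer name veth-demo
  ip link set veth-demo netns demo
  ip addr add 10.0.0.1/24 dev veth-host && ip link set veth-host up
  ip netns exec demo ip addr add 10.0.0.2/24 dev veth-demo
  ip netns exec demo ip link set veth-demo up
  ip netns exec demo ping -c 1 10.0.0.1   # the namespace reaches the host side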

josteink · 9 years ago
> This is definitely true, but only as long as docker (or $container_runtime) remains lightweight enough that you can still use those independent parts on their own, compatibly with docker.

As someone who has exclusively used LXC containers (which Docker is/was initially built on), none of this applies to me.

Your issue is with Docker, the implementation, not containers as a concept.

Sometimes I feel HN needs to get its head out of Docker's butt and see that there's a world out here too. How many people here even know there are other container types at all? I'm often inclined to think none.

No really. Just once, try real, raw containers without all that docker wrapping bloat. How everything works and is tied together is clear as day, obvious at once. It's all simple and super refreshing.

julie1 · 9 years ago
Docker is to containers what OAuth 2.0 is to cryptography: a roll-your-own solution with sprawling complexity.

Whereas jails/zones/VMs have complexity that is mutualized across all their users, Docker has the advantage of being more flexible, which comes at the price that you may introduce more escape scenarios.

As a result, like in cryptography, Docker is kind of a roll-your-own crypto solution, secured by obfuscation, that may become your own poison if you don't have a lot of knowledge of the topic.

From this article you can derive 2 conclusions:

- docker is good for a big business that has enough knowledge to devote a specialized team to the topic, because FEATURES

- jails/zones are better adapted to securing a small business

nailer · 9 years ago
Wasn't OAuth 1 more of a roll-your-own crypto solution? OAuth 2 was created precisely to reuse more of what browsers already provide.
XorNot · 9 years ago
I think the biggest problem is most namespace functionality is root-only.

If I could create pid namespaces for my user-space apps, then every program I write forever would, as its first step, launch into a pid namespace.

cyphar · 9 years ago
You can do that by creating an unprivileged user namespace. To be fair, this does break some things, but this is the key feature that makes rootless containers (in runC) possible.
amorphid · 9 years ago
I don't know if this addresses your question but...

Check out 'runc'. It is the tool Docker uses to start a Docker container. In that program, there is a '-u' option to start a container as whatever user ID (not username) you choose. Meaning you can start a container as a non-root user, although I don't know if that bubbles up as an option in the Docker public API, or can be set in a Dockerfile.
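For what it's worth, the uid option does surface in the Docker CLI as --user (a minimal check; the uid:gid pair is arbitrary):

  docker run --rm --user 1000:1000 alpine id
  # uid=1000 gid=1000 (no passwd entry needed)

Note this only changes the uid the container's process runs as; the Docker daemon itself still runs as root, which is the gap rootless runC addresses.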

mingodad · 9 years ago
Is it possible to create pid_namespaces for unprivileged users by wrapping pid_namespace creation in a suid shell script that will take care of loading everything as the current unprivileged user?
mrunalp · 9 years ago
If you enable user namespaces as well, then you don't need any of that. For example:

  [mrunal@local rootfs]$ id
  uid=1000(mrunal) gid=1000(mrunal) groups=1000(mrunal),10(wheel) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
  [mrunal@local rootfs]$ unshare -m -u -n -i -p -f --mount-proc -r sh
  sh-4.4# ps -ef
  UID        PID  PPID  C STIME TTY          TIME CMD
  root         1     0  0 09:42 pts/12   00:00:00 sh
  root         2     1  0 09:42 pts/12   00:00:00 ps -ef
  sh-4.4#

yjftsjthsd-h · 9 years ago
Shell scripts can't be suid. But a binary wrapper could work.
wlamartin · 9 years ago
You can create a pid namespace if you also create a user namespace at the same time.
dreamcompiler · 9 years ago
Ignorance admission time: I still have no idea what problem containers are supposed to solve. I understand VMs. I understand chroot. I understand SELinux. Hell, I even understand monads a little bit. But I have no idea what containers do or why I should care. And I've tried.
sarnowski · 9 years ago
Containers are just advanced chroots. They do the same with the network interface, process list and your local user list as chroot does with your filesystem. In addition, containers often throttle resource consumption of CPU, memory, block I/O and network I/O of the running application, to provide some QoS for other colocated applications on the same machine.

It is the spot between chroot and VM. It looks like a VM from the inside, provides some degree of resource-usage QoS, and does not require you to run a full operating system the way a VM does.
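The throttling piece is just the cgroup filesystem underneath (a cgroup-v1 sketch; the "demo" group name is arbitrary, and this needs root):

  # cap memory for the current shell and everything it spawns
  mkdir /sys/fs/cgroup/memory/demo
  echo 256M > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
  echo $$ > /sys/fs/cgroup/memory/demo/tasks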

Another concept that is now often automatically connected to containers is the distribution mechanism that Docker brought. While provisioning is a topic orthogonal to runtime, it is nice that these two operational concerns are solved at the same time in a convenient way.

rkt did some nice work to allow you to choose the runtime isolation level while sticking to the same provisioning mechanism:

https://coreos.com/rkt/docs/latest/devel/architecture.html#s...

AstralStorm · 9 years ago
Unfortunately, containers provide about the same security as chroots too: nothing even close to a true virtual machine, and at not much lower cost.
friendzis · 9 years ago
Is there any fundamental difference between containers and shared-kernel virtualization (OpenVZ) that I am missing?
dreamcompiler · 9 years ago
Very helpful. Thanks.
eddieroger · 9 years ago
I'm with you, but I've found a single use case that I'm running with, and potentially a second that I'm becoming sold on. So far, the most useful thing for me is being able to take a small application I've written and package it as a container, in a manner where I know it will run identically on multiple remote machines that I will not have proximity to manage should something go wrong. I can also make a Big Red Button to blow the whole thing away and redownload the container if need be, since I was (correctly) forced to externalize storage and database. I can also push application updates just by having a second Big Red Button marked "update" which performs a docker pull and redeploy. So now, what was a small, single-purpose Rails app can be pushed to a dozen or so remote Mac minis with a very simple GUI to orchestrate docker commands, and less-than-tech-savvy field workers can manage this app pretty simply.

I'm also becoming more sold on the Kubernetes model, which relies on containers. Build your small service, let the system scale it for you. I don't have as much hands-on here yet, but so far it seems pretty great.

Neither of those are the same problems that VMs or chroot are solving, as I see it, but a completely different problem that gets much less press.

voidfunc · 9 years ago
Everyone says containers help resource utilization, but I think their killer raison d'être is that they are a common static-binary packaging mechanism. I can ship Java, Go, Python, or whatever, and the download-and-run mechanism is all abstracted away.
dreamcompiler · 9 years ago
Does this mean we're admitting defeat with shared libraries and we're going back to static libraries again?
ori_b · 9 years ago
This is why something like 25% of containers in the docker registry ship with known vulnerabilities.
wmf · 9 years ago
Or you could have done the same thing years earlier with AMIs?
vosper · 9 years ago
I'm very new to containers, but I think I'm starting to get the hype a bit. Recently I was working on a couple of personal projects; for one I wanted a Postgres server, and for the other PhantomJS so that I could do some web scraping. Since I try to keep my projects self-contained, I try to avoid installing software onto my Mac. So my usual workflow would be to use Vagrant (sometimes with Ansible) to configure a VM. I do this infrequently enough that I can never remember the syntax, and there's a relatively long feedback loop when trying to debug install commands, permissions, etc. I gave Docker a try out of frustration, but was simply delighted when I discovered that I could just download and start Postgres in a self-contained way. And reset it or remove it trivially. I know there's a lot more to containers than this, but it was an eye-opener for me.
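For reference, that whole workflow is roughly (the image is the official postgres one; the password is a placeholder):

  docker run -d --name pg -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres
  psql -h localhost -U postgres   # use it like any local server
  docker rm -f pg                 # reset or remove it trivially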
cookiecaper · 9 years ago
You can do this with Vagrant already. Before Vagrant, people distributed slimmed-down VM images for import and execution. Why is this ascribed as a unique benefit of containers?
jbattle · 9 years ago
Yeah this fits my experience exactly. I suppose I use docker a lot like a package manager (easy to install software and when I remove something I know it will be cleaned up).

Nearly every time I install actual software on my mac (beyond editors & a few other things) I feel like I end up tripping over it later, when I find half my work wants version N and the other half wants version M.

elvinyung · 9 years ago
Am also a huge newcomer to this.

Yeah, I think a lot of it is better resource utilization compared to VMs. At the same time, though, I don't think containers are the thing, but just a thing that paves the way for something very powerful: datacenter-level operating systems.

In 2011, Zaharia et al. presented [1], which basically made the argument that the increasing scale of deployments and variety of distributed applications mean that we need better deployment primitives than just the machine level. On the topic of virtualization, it observed:

> The largest datacenter operators, including Google, Microsoft, and Yahoo!, do not appear to use virtualization due to concerns about overhead. However, as virtualization overhead goes down, it is natural to ask whether virtualization could simplify scheduling.

But what they didn't know was that Google had already been using containers for a long time. [2] They're deployed with Borg, an internal cluster scheduler (probably better known as the predecessor to the open-source Kubernetes), which serves exactly as the operating system for datacenters that Zaharia et al. described. When you think about it that way, a container is better thought of not as a thinner VM, but as a thicker process.

> Because well-designed containers and container images are scoped to a single application, managing containers means managing applications rather than machines.

In the open-source world, we now have projects like Kubernetes and Mesos. They're not mature enough yet, but they're on the way.

[1] https://cs.stanford.edu/~matei/papers/2011/hotcloud_datacent...

[2] http://queue.acm.org/detail.cfm?id=2898444

candiodari · 9 years ago
The big missing "virtualization" technology is the Apache/CGI model. You essentially upload individual script-language (or compiled-on-the-spot) functions that are then executed directly on the server, in the context of the host process.

This exploits the fact that one webserver only differs from another by the contents of its response method; other differences are actually unwanted. You can make this a lot more efficient by simply having everything except the contents of the response method be shared between different customers.

This meant that all the Apache mod_x modules (famously mod_php and mod_perl) could manage websites on behalf of large numbers of customers on extremely limited hardware.

It does provide for a challenging security environment. That can be improved when starting from scratch though.

AstralStorm · 9 years ago
You can share resources between VMs (frontswap etc. and deduplication, using network file systems like V9FS instead of partitions) but it complicates security.

It is still safer than containers, as one kernel local-root bug does not break a VM, but does break a container. Hardware virtualization support also allows drivers and hardware to be compartmentalized.

mping · 9 years ago
I will show you some use cases:

- have different versions of libs/apps on the same OS (or run different OS's)

- tinker with the linux kernel, etc. without breaking your box (remember the 90's?)

- build immutable images packed with dependencies, ready for deploy

- test distributed software without VMs (because containers are faster to run)

- if you have a big box (say 64gb, eight core or whateva) or multiple big boxes, you can manage the box resources through containerization, which can be useful if you need to run different software. Say every team builds a container image, then you can deploy any image, do HA, load balancing, etc. Ofc this use case is highly debatable

dreamcompiler · 9 years ago
These comments are helpful. Thanks. Sounds like for a given piece of hardware you might be able to fit 2 or 3 VMs on it, or a lot more containers. But without the security barriers of VMs.

That being the case, why not just use the OS? And processes and shared libraries?

mrbrowning · 9 years ago
The article touches on the technical details of this briefly, but the underlying point here is that containers effectively do use the OS, and processes. Like Frazelle says in the article: "a 'container' is just a term people use to describe a combination of Linux namespaces and cgroups." If that's nonsense to you, check out some of her talks, they treat those topics in a friendly way. At the most basic level, though, a container is just a process (or process tree) running in an isolated context.

Sharing library code between processes running in containers is more complicated, since it depends on whether and how you've set up filesystem isolation for those processes, but it's possible to do.
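One hedged illustration: if two containers bind-mount the same host directory read-only, the dynamic linker in each maps the same files, so the usual page-cache sharing still applies (image names here are made up):

  docker run -d -v /opt/shared-libs:/usr/local/lib:ro app-one
  docker run -d -v /opt/shared-libs:/usr/local/lib:ro app-two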

ianburrell · 9 years ago
The isolation means you don't have to worry about containers interfering with each other. It is more about separating and hiding processes than about protecting against hostile attacks.

The other big advantage is containers provide a way to distribute and run applications with all their dependencies except for the kernel. This means not having to worry about incompatible libraries or installing the right software on each machine.

toast0 · 9 years ago
It can be easier to run a jail (or container) and assign it an IP and run standard applications with standard configs than to run a second instance of something in a weird directory listening in a special way.

The other big difference between this and a VM is that timekeeping just works.

You're not necessarily restricted to friendly-only tenants, either. Depending on how you configure it, there can be pretty good isolation between the inside and the outside and the other insides. You lose a layer of isolation, but it's not impossible to escape a virtual machine either.

mwpmaybe · 9 years ago
> That being the case, why not just use the OS? And processes and shared libraries?

That's essentially what a Linux container is: a process (that can fork) with its own shlib. If you have lots of processes that don't need to be isolated from each other and can share the same shlib, then no, you don't need this mechanism.

raesene9 · 9 years ago
I think it's important to make the distinction that containers do provide a level of security isolation, but in most cases it's not as much protection as is provided by VM isolation.

There are companies doing multi-tenant Container setups, with untrusted customers, so it's not an unknown concept for sure.

What I'd say is that the attack surface is much larger than a VM hypervisor's, so there's likely more risk of a container breakout than a VM one.

unixhero · 9 years ago
I don't understand Docker to be honest. It was a big pain to have unexplainable race conditions when I tried to use it for production apps.

Ended with a spectacular data loss, of my own company's financial data. Luckily I had 7-day old SQL exports.

colordrops · 9 years ago
In my experience it does two things that VMs don't do as well:

1. More efficient use of hardware (including spin-up time)

2. Better mechanisms for tying together and sharing resources across boundaries.

But in the end they don't really do anything you couldn't do with a VM. It's just that people realized that VMs are overkill for many use cases.

deckiedan · 9 years ago
They make shared folders and individual files a lot easier than VMs, also process monitoring from the "host".
betaby · 9 years ago
Increase server utilization by packing multiple non-hostile tenants on it, quickly create test environments, have a volatile env. You can have all of those with VMs although at much higher CPU, RAM usage cost.
boris · 9 years ago
With one big limitation: they must all run the same os kernel (so you cannot run say a Windows or FreeBSD container on a Linux host).

In fact, nobody guarantees that say Fedora will run on an Ubuntu-built kernel. Or even on a kernel from a different version of Fedora. So, IMO, anything other than running the exact same OS on host and in container is a hack.

AstralStorm · 9 years ago
Measure the "much higher" before deciding. Especially after you apply solutions to reduce the memory and disk cost.

I'd say the "much higher" is nowadays a relic of the past.

_pmf_ · 9 years ago
Same with me. This plays right into the complexity issue.

Even if you understand them, you have to understand the specific configuration (unlike VMs, where you have a very limited set of configurable options, and the isolation guarantees are pretty much clear).

mbesto · 9 years ago
Eliminates the redundancy of maintaining an OS across more than 1 service.
RantyDave · 9 years ago
They're VMs, but much more efficient and faster to start. There's a clever but shockingly naive build system involved. That's pretty much it.

Going beyond this you get orchestration (which you can certainly do with VMs, but it's slow) and various hangovers from SOA, rebadged and called microservices.

But they're really, really efficient compared to VMs.

kerny · 9 years ago
> They're VMs

They are definitely not VMs.

> But they're really, really efficient compared to VMs.

I think that the virtualisation CPU overhead is below 1%. Layered file systems are possible with virtual machines as well so disk space usage could be comparable.

What do you mean that they are "really, really efficient" ?

nisa · 9 years ago
As a lowly user¹: Linux containers are more like gaffer tape around namespaces and cgroups than something like Lego. You want real memory usage in your cgroup? Let's mount some FUSE filesystem: https://github.com/lxc/lxcfs - https://www.cvedetails.com/vulnerability-list/vendor_id-4781...

We have to gaffer tape with AppArmor and SELinux to fix all the holes the kernel doesn't care about: https://github.com/lxc/lxc/blob/master/config/apparmor/conta...

Solaris Zones are more deliberately designed, and an evolution of FreeBSD Jails. Okay, the military likely paid for that: https://blogs.oracle.com/darren/entry/overview_of_solaris_ke...

Maybe it's Death Star vs. Lego. But I assume you can survive a lot longer in vacuum in a Death Star than in your Lego spaceship hardened with gaffa tape.

1: I have the utmost respect for anyone working on this stuff. No offense, but as a user, the occasional lack of design and implementation of bigger concepts (not as in more code, but better design, more security) in the Linux world is sad. It's probably the only way to move forward, but you could read on @grsecurity's Twitter years ago that this idea was going to be a fun ride full of security bugs. There might be a better way?

djsumdog · 9 years ago
I really wish this post went into more detail. It feels too high level to be useful.

I ran into the memory issue recently. In DC/OS when you use the marathon scheduler, if you go above the allocated memory limit, the scheduler kills your task and restarts it.

The trouble is, if you run top inside your container on a DC/OS node with 32GB of memory, top reports all 32GB of memory. So runtimes that do garbage collection (like the JVM) will just continue to eat memory if you don't specify the right limits/parameters. The OS will even let them allocate past the container limit, but then kills the container afterwards.

Now the container limit is available somewhere under /sys/fs/cgroup, but that means runtimes need to check whether they're running in a container and adjust everything accordingly.
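For example, from inside the container (cgroup-v1 paths; the heap size and jar name are placeholders):

  # the actual ceiling, whatever top claims
  cat /sys/fs/cgroup/memory/memory.limit_in_bytes
  # and size the JVM against it explicitly
  java -Xmx256m -jar app.jar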

Of course you could always tell your scheduler not to hard kill something when it goes over a memory limit, which is why we never ran into that when we were running things on CoreOS since we didn't configure hard limits per container.

chatmasta · 9 years ago
This is an interesting observation. It seems the simple fix would be tricking the containerized application into reading the "total system memory" from a syscall hooked by the container runtime to return the configured memory limit of the container making the syscall. I'm surprised this is not already done; is there some inherent limit that prohibits this?

It seems an unintended consequence of containerization is that the responsibility for garbage collection effectively moves from the containerized application (e.g. the interpreter) to the container runtime, which "collects garbage" by terminating containers at their memory limit, just like a process-level garbage collector terminates (or prunes) functions or data structures at their memory limit.

I'm not sure this is a bad thing. Moving garbage collection up one level in the "stack" of abstraction seems in line with the idea that containers are the building blocks of a "data center operating system."

Naturally then, shouldn't garbage collection happen at the level of the container runtime? Otherwise you're wasting compute cycles by collecting garbage at two levels.

When garbage collection moves to the container runtime, it should mean that the application no longer has to worry about garbage collection, since the container runtime will terminate the container when it reaches its memory limit. Therefore, the application (e.g. the java interpreter) only needs to make sure it can handle frequent restarts. In practice this means coding stateless applications with fast startup times.

Applications like the java interpreter were designed in an era dominated by long running, stateful processes. Now we are seeing a move to stateless applications with fast boot times (i.e. "serverless" shudder). Stateless applications are a prerequisite to turning the data center into an "operating system" because they essentially take the role of function calls in a traditional operating system. Both containers and function calls are the "building blocks" of their respective levels of abstraction. In a traditional OS, you wouldn't expect a single function call to run forever and do its own garbage collection, so why would you expect the same from a containerized application in a datacenter OS?

Xylakant · 9 years ago
> top reports all 32GB of memory.

The same applies to CPU and the number of cores. A common pitfall, for example, for ElasticSearch, which bases its default threadpools on the number of visible CPUs. The isolation layer is indeed very thin and leaky in places.
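For example (cgroup-v1 paths; the values shown are what a one-CPU quota looks like):

  nproc                                      # still reports the host's cores
  cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us    # e.g. 100000
  cat /sys/fs/cgroup/cpu/cpu.cfs_period_us   # e.g. 100000, i.e. 1 CPU's worth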

felipelemos · 9 years ago
Your Java container must have a limit too, regardless of where it is running. You must be in control of the application.
zuzun · 9 years ago
Yeah, Linux security features work like: throw ... against a wall and see what sticks. I find it amusing when people say: "We should write a new kernel" and their only proposed security feature is using that memory safe language (TM)... they'd have my attention if they said "We should write a new kernel and design all the permissions/isolations/resource limits from the ground up".
dom0 · 9 years ago
I.e. an enterprise operating system.
erikb · 9 years ago
Yes, and as long as your life isn't threatened and you live in a world full of other people, problems, and opportunities, the lego ship with gaffa tape is way more useful.
lloydde · 9 years ago
It feels like Ms Frazelle's essay ends abruptly. I was looking forward to the other use cases of non-Linux containers.

I think most people are considering these OS-level virtualization systems for the same or very similar use cases: familiar, scalable, performant and maintainable general-purpose computing. Linux containers win because Linux won. Linux didn't have to be designed for OS virt. People have been patient as long as they've continued to see progress -- and been able to rely on hardware virt. Containers are a great example of where, even with all of the diverse stakeholders of Linux, the community continues to be adaptive and create a better and better system at a consistent pace in and around the kernel.

That my $job - 2, Joyent, rebooted lx-branded zones to make Linux applications run on illumos (a descendant of OpenSolaris) is more than a "can't beat them, join them" strategy, as it allows their Triton (OSS) users full access not only to Linux APIs and toolchains, but to the Docker APIs and image ecosystem, and it has been an environment for their own continued participation in microservices evolution.

Although Joyent adds an additional flavor, it targets the same scalable, performant and maintainable cloud/IaaS/PaaS-ish use case. In hindsight, it's crazy that I worked at three companies in a row in this space, Piston Cloud, Joyent, Apcera, and each time I didn't think I'd be competing against my former company, but each time the business models as a result of the ecosystems shifted. Thankfully with $job I'm now a consumer of all of the awesome innovations in this space.

dom0 · 9 years ago
I think an interesting bit here is that e.g. Solaris first had Zones (i.e. "containers") and added virtualisation later (sun4v), while the story is exactly the other way around for Linux.
erikb · 9 years ago
I also felt that way. It's like the beginning of an awesome blog post. Maybe she'll continue it later after thinking more about it.
nikcub · 9 years ago
It's probably a good time to stop using "containers" to mean LXC, considering the new OCI runtime spec covers containers on Solaris using Zones and on Windows using Hyper-V:

https://github.com/opencontainers/runtime-spec/blob/master/s...

ffk · 9 years ago
I don't think anyone in the container dev community thinks that "containers" means LXC only. Even back in 2013, docker's front-end API was designed to support other runtimes such as VMs and chroot. Perhaps this is a marketing story gone awry?
erikb · 9 years ago
Afaik even the docker noob tutorial already points out that it is not LXC only.
lgas · 9 years ago
It's probably a good time to stop thinking that anyone cares what Solaris calls anything.
jo909 · 9 years ago
I think it's important to realize that the reduced isolation of containers can also have pretty significant upsides.

For example, monitoring the host, all running containers, and all future containers only means running one extra (privileged) container on each host. I don't need to modify the host itself, or any of the other containers, and no matter who builds the containers, my monitoring will always work the same.

The same goes for logging. Mainly there is an agreed-upon standard that containers should just log to stdout/stderr, which makes it very flexible to process the logs however you want on the host. But also if your application uses a log file somewhere inside the container, I can start another container (often called "sidecar") with my tools that can have access to that file and pipe it into my logging infrastructure.

If I want, multiple containers can share the same network namespace. So I listen on "localhost:8080" in one container and connect to "localhost:8080" from another, and that just works without any overhead. I can share socket files just the same.
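In docker terms that's the container: network mode (a sketch; image names are placeholders):

  docker run -d --name app my-app
  docker run -d --network container:app my-sidecar
  # inside my-sidecar, localhost:8080 is the app's listener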

I can run one (privileged) container on each host that starts more containers and bootstraps, for example, a whole Kubernetes cluster with many more components.

You can save yourself a lot of "infrastructure" stuff with containers, because the host provides it or it is handled in a conceptually different way: for example ntp, ssh, cron, syslog, monitoring, configuration management, security updates, dhcp/dns, network access to internal or external services like package repositories.

My main point is that by embracing what containers are and using that to your advantage, you gain much more than by just viewing them as lightweight virtualisation with lower overhead and a nicer image distribution.

Edit: I want to add that not all of that is necessarily exclusive to containers or mandatory. For example throwing away the whole VM and booting a new one for rolling updates is done a lot, but with containers it became a very integral and universally accepted standard workflow and way of thinking, and you will get looked at funny if you DON'T do it that way.

cperciva · 9 years ago
The meme image ("Can't have 0days or bugs... if I don't write any code") is incorrect.

You can't have bugs if you don't have any code, but not writing code just means that your bugs are guaranteed to be someone else's bugs. Now, this may be a good thing -- other people's code has probably been reviewed more closely than yours, for one thing -- but using other people's code doesn't make you invulnerable, and other people's code doesn't necessarily match your precise requirements.

If you have a choice between writing 10 lines of code or reusing 100,000 lines of someone else's code, unless you're a truly awful coder you'll end up with fewer bugs if you take the "10 lines of code" option.

geofft · 9 years ago
There's probably no good way to pick up this context from the article, but the meaning of that particular meme is that the caption is supposed to be a shortsighted analysis. See http://knowyourmeme.com/memes/roll-safe , which lists examples like "You can't be broke if you don't check your bank account" or "If you're already late.. Take your time.. You can't be late twice."
rileymat2 · 9 years ago
> If you have a choice between writing 10 lines of code or reusing 100,000 lines of someone else's code, unless you're a truly awful coder you'll end up with fewer bugs if you take the "10 lines of code" option.

I disagree; this is only true if you understand why the other code has 100k lines (although this example is a bit extreme).

A good example that could send a junior developer astray is date handling. Or most likely date mishandling if they are coding it themselves.

cperciva · 9 years ago
Sure. I'm talking about the case where the 100k lines of code provides a large set of features you're not intending to use.
jjn2009 · 9 years ago
These container and container-like solutions are not 10 lines of code; no implementation will be 10 lines. Therefore solutions that have had time to stabilize will be better, since 10 lines of code isn't even a valid solution. New code causes new issues and increased complexity; that's the only point made by the meme.