falcolas · 10 years ago
You know, standardization is a great thing. But, fuck, another major release with huge changes... Whatever happened to stable architectures with hardening and burn-in periods? What new bugs are going to creep in due to these huge changes? How is this going to break our workarounds for previous versions?

It's a great time to be in operations, and containers are a huge step forward, but how are we supposed to be confident in a technology with so many major changes every other month? Even Node.js has LTS releases.

And yes. Get off my lawn, too.

tveita · 10 years ago
If you want something stable I'd stay away from containers and use well-tested technologies like VMs and configuration management.

In my experience many of the tools in the container ecosystem are in the early-adopter phase, promising a lot of convenience for some use cases, but too buggy and feature-incomplete to really deliver yet. Not to mention the breaking changes and occasional migration to a new tool.

vacri · 10 years ago
We were using Docker up until about a year ago, and ripped it out for just this reason: we didn't have the time available to keep coding the workarounds. Great if you have the time, not so great if you don't.
tachion · 10 years ago
Or simply use container technology that's been around for a decade and is battle-tested and stable, like FreeBSD jails - https://www.freebsd.org/doc/handbook/jails.html

Linux is not the only OS around, and Docker is not the only container technology available; that's an important thing to remember.

musha68k · 10 years ago
You are mostly right, but devops people should still play around with the newest tools to form their own opinions.

It's complicated.

I wouldn't stop considering containers per se (SmartOS zones/FreeBSD jails are fine!), but the whole tooling surrounding the management of Linux containers and corresponding images is still in a "Cambrian" phase. Security issues and the general over-selling of the technology (Docker and CoreOS especially are almost too good at marketing their products..) shouldn't discourage you from playing around with "those toys", though. Even for certain production scenarios there are already setups that work fine (Red Hat's OpenShift platform comes to mind).

mhotchen · 10 years ago
There's also these:

- https://runc.io/

- https://linuxcontainers.org/

And I'm not sure if they're the same thing, compatible, or what. I'm going to let someone else pick the winner for me, and come back to containers in a year or so to see what the lowdown is.

solarengineer · 10 years ago
Plus, there's FreeBSD Jails, Illumos Zones, and SmartOS' support for Linux apps (Dockerized or otherwise). SmartOS is based on Illumos.

I was reading the recent HN discussion and article on network namespaces, and was struck by how far ahead networking on Illumos is.

sams99 · 10 years ago
This is a huge upgrade that is super welcome.

Recently (on 1.9) we have seen quite a few cases where we had "zombie" containers: containers that can no longer be started or stopped due to cgroup misconfiguration or something along those lines.

The new architecture means that for weird cases like this, all we need to do is kill off runc, without forcing every container on the box to restart (by restarting the Docker service, which is what we do now).

It is also really nice that you can now launch apps directly on runc without needing a Docker intermediary.

cpuguy83 · 10 years ago
Sounds likely to be https://github.com/docker/docker/issues/18180

If so, it's a bug with a particular (newer) version of AUFS that has been fixed in most distros.

rodionos · 10 years ago
We hit 18180 in all our images with Java applications. The workaround was to upgrade the kernel.
sams99 · 10 years ago
Most likely that bug; we've been tracking it. But I think this goes beyond that specific issue: it makes Docker a lot more robust when bugs like this pop up in the future.
facorreia · 10 years ago
I particularly like that in addition to refactoring the architecture, they worked on stabilization.

> With the containerd integration comes an impressive cleanup of the Docker codebase and a number of historical bugs being fixed. In general, splitting Docker up into focused independent tools mean more focused maintainers, and ultimately better quality software.

esseti · 10 years ago
> DNS round robin load balancing: It’s now possible to load balance between containers with Docker’s networking. If you give multiple containers the same alias, Docker’s service discovery will return the addresses of all of the containers for round-robin DNS.

Wait, wait, wait. I'm kind of new to the Docker world, and so far I've been struggling to understand how to replicate a container in order to scale. For example, I want to run the same Django project three times as web1, web2, web3. If I do that now, I have to expose three different ports, one for each. Plus, to make it work, I need load balancing, so I have to use HAProxy and balance across each of the containers. Any time I add a new machine (e.g., web4), I have to change the HAProxy config and restart it, which brings the system down for a moment. (Is this the right approach, btw?)

Going back to the quoted paragraph: now I can create many containers and Docker automatically does the routing for me? Am I right, or did I misunderstand the meaning of the cited point?

Do I just need to create all of them with --alias web?

e12e · 10 years ago
> Any time I add a new machine (e.g., web4), I have to change the HAProxy config and restart it, which brings the system down for a moment. (Is this the right approach, btw?)

You might find this story from a year ago interesting:

"True Zero Downtime HAProxy Reloads" http://engineeringblog.yelp.com/2015/04/true-zero-downtime-h...

HN discussion (with some answers from the post author): https://news.ycombinator.com/item?id=9369051

I'm curious how much of an actual issue you experience, though. Barring an error that prevents HAProxy from starting, a reload should be pretty quick? Maybe not quick enough for streaming media/realtime audio-visual communication, though.

m_mueller · 10 years ago
> (is this the right approach btw?)

This is why I use Hipache for load balancing / routing: it is the only solution I've found where you can change the routing or add new backends in a live system without any downtime. Here is the main problem, though: its load balancing isn't exactly smart. For example, it won't keep the same client IP on the same replica, which creates problems when a client writes something on one replica, then gets switched to another that hasn't gotten the update yet.

I'd love a better solution for this, btw. Is no one working on Hipache anymore? Other than this problem I find it very elegant.
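One common fix for exactly this stickiness problem is source-IP affinity: hash the client's IP onto a fixed backend list so the same client keeps landing on the same replica. A minimal Python sketch of the idea (the backend names are made up for illustration; this is not how Hipache itself works):

```python
import hashlib

def pick_backend(client_ip, backends):
    """Map a client IP to a backend deterministically (source-IP affinity).

    The same IP always lands on the same backend as long as the backend
    list is unchanged; changing the list reshuffles many clients, which
    is why production balancers prefer consistent hashing.
    """
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(backends)
    return backends[index]

backends = ["web1:8000", "web2:8000", "web3:8000"]
print(pick_backend("203.0.113.7", backends))
```

The naive modulo here remaps roughly (n-1)/n of all clients when a backend is added or removed; a consistent-hashing ring limits that churn to roughly 1/n.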

code_research · 10 years ago
You might want to take a look at fabio:

https://github.com/eBay/fabio

sanimej · 10 years ago
Yes. For example:

    docker run -d --name web1 --net prod --net-alias web <webapp>
    docker run -d --name web2 --net prod --net-alias web <webapp>

Resolution of 'web' will then return the IPs of both containers. You might still have to watch out for DNS caching at the application level.
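The application-level caching caveat matters because a resolver call returns the whole record set at once, and naive clients grab the first address and keep it. A hedged sketch, in Python, of cycling through all the addresses an alias resolves to (the alias "web" is the Docker-side assumption from the example above):

```python
import itertools
import socket

def resolve_all(name, port=80):
    # Collect every IPv4 address the resolver returns for the alias.
    infos = socket.getaddrinfo(name, port, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

def round_robin(name, port=80):
    # Cycle through the resolved addresses; long-lived clients should
    # re-resolve periodically so added or removed containers are noticed.
    return itertools.cycle(resolve_all(name, port))

# Inside the "prod" network, each next() would yield the next container's IP:
# backend = next(round_robin("web"))
```

The re-resolution point is the catch: a JVM, for instance, caches successful lookups aggressively by default, so round-robin DNS degrades to "always the first container" unless the cache TTL is tuned down.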

moondev · 10 years ago
Why load balance three copies of a container on the same server? Just set your CPU and memory constraints to what you need for one.
esseti · 10 years ago
You've got a point, indeed. My idea was to handle more things in parallel for services that are not heavy on resources, such as a service that checks a user's auth from an ID/key. Anyway, with Swarm I could put the containers on various machines.
thresh · 10 years ago
You can use nginx to get zero-downtime redeployments without the gross hacks HAProxy requires.
falcolas · 10 years ago
I would also recommend nginx. For pure proxying it's rather convenient, and it offers some nice high-level caching if you want it.
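For the Docker scenario discussed above, the nginx side boils down to an `upstream` block; a minimal sketch (container names and ports are hypothetical):

```nginx
upstream web {
    # One entry per container; nginx round-robins across them by default.
    server web1:8000;
    server web2:8000;
    server web3:8000;
}

server {
    listen 80;
    location / {
        proxy_pass http://web;
    }
}
```

After editing the backend list, `nginx -s reload` spawns fresh workers with the new config while old workers finish their in-flight requests, which is the zero-downtime property being claimed.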
codehusker · 10 years ago
The Open Container Initiative is an important step towards some form of standardization for the future of containers, and I think the Linux Foundation will steward the project well.
sengork · 10 years ago
Standardisation at the format level will help with portability across different platform implementations.
wmf · 10 years ago
This "standardization" is pretty kludgey if you have to docker pull then docker export.
alex1 · 10 years ago
I think that's because OCI currently only has a specification for the runtime, not the distributable image. But it seems like as of a few weeks ago, work is underway to standardize the distributable image as well: https://github.com/opencontainers/image-spec
ibotty · 10 years ago
There are other tools that can pull data from a Docker v2 registry.
jamescun · 10 years ago
It is standardisation of the file describing what parameters were used to build a container. Ultimately, every "implementation" has been a tarball of a Linux filesystem, which is usable with any Linux container system (and possibly soon Windows).
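For a sense of what's being standardized: an OCI runtime bundle is essentially that filesystem tarball unpacked into a directory, plus a JSON config describing how to run it. A heavily stripped-down sketch (fields abbreviated, version string illustrative — not a complete spec-valid file):

```json
{
  "ociVersion": "1.0.0",
  "process": {
    "args": ["/bin/sh"],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs"
  }
}
```

Any compliant runtime (runc being the reference implementation) can take a bundle like this and start the container, which is what makes the format portable across engines.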
ageofwant · 10 years ago
"As an example, 1.11 is a huge step toward allowing Engine restarts/upgrades without restarting the containers, improving the availability of containers."

This is great.

emdd · 10 years ago
ELI5 the OCI & why it's important? I follow Docker from a distance and find it fascinating.
shykes · 10 years ago
Disclaimer: I work at Docker.

Docker has become a de facto standard for executing programs in a portable sandbox (aka "container"). There were increasing demands for making it a "proper" standard, so last year we donated a spec and reference implementation for a universal intermediary format - a "PDF of containers" if you will, and partnered with the Linux Foundation to manage it. The majority of the industry followed.

Now that the Docker container engine supports this intermediary format, and other providers will soon follow suit, the risk of depending on a single provider is reduced. If you want to switch away from Docker, you can run your containers elsewhere.

Now everyone can focus on building better tools, instead of trying to make "their" format win. The result is better tools.

patrickg_zill · 10 years ago
I'm sorry if this comes across as mean-spirited, but I think it is quite disingenuous to claim that Docker is a "de facto standard".

OpenVZ, for instance, has well over 77K host nodes running over 840K containers.

These are not ephemeral setups on someone's laptop but servers that companies are spending money on each month to have in a datacenter.

And IMHO the real numbers are 2 to 10 times higher, because the reporting method requires running a small, optional stats program that many don't run.

Source: https://stats.openvz.org/

simula67 · 10 years ago
Am I correct in assuming that containers specified by OCI can be exported and managed by other tools like lmctfy? I take it you see your competitive advantage as being in Docker itself.
skj · 10 years ago
As ELI5s go, this seems not well-targeted at 5-year-olds :)
ysh7 · 10 years ago
Your disclaimer didn't say you're the CEO.

I hope this guy does this again here, https://news.ycombinator.com/item?id=11379475

robszumski · 10 years ago
As an industry, we hadn't put down on paper what it means to build, execute, or discover a container.

If you wanted to sit down and write software to build or execute a container, there wasn't a spec to follow or a checklist of the features you needed to support. Basically, you had no idea whether what you wrote today would continue to work in the future, or whether you could run a container someone else had constructed.

(CoreOS employee)

HeadlessChild · 10 years ago
Garbage collection for the registry is a long-overdue feature that is quite critical. I'm glad to see that it's now possible.