LXD is actually a cool technology. It's not like Docker/k8s, where each "node" usually runs only one thing; it's more along the lines of a "VPS in a box", where you can launch virtual servers using a simple command-line interface.
I use it to run all of the virtual server machines in my home LAN (mostly virtual desktops) with each machine exposed on the real LAN's DHCP server (so I can move them to a different box if needed), and to launch ephemeral "server boxes" to run some tests or builds or whatever on without polluting my dev environment (think virtualenv, but for anything).
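That throwaway-instance workflow is only a few commands. A minimal sketch, assuming an initialized LXD and the public image server (instance and image names here are illustrative):

```shell
# Launch an ephemeral container: it is deleted automatically on stop
lxc launch images:ubuntu/focal build-box --ephemeral

# Run a build inside it without polluting the host
lxc exec build-box -- apt-get update
lxc exec build-box -- apt-get install -y build-essential

# Stopping an ephemeral instance destroys it, like tearing down a virtualenv
lxc stop build-box
```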
I also used it as the base for my virtual system builder scripts ("container" VMs with LXD, "VM" VMs with libvirt since LXD didn't support VMs back then): https://github.com/kstenerud/virtual-builders
LXD is indeed nice, but it still isn't very widely adopted. I believe this comes down to 3 main issues:
1) It's strongly connected to Canonical and Ubuntu. This is mostly a matter of perception and it is an actual community project. However, I can understand people not feeling comfortable with "snap install lxd".
2) It sits somewhere in between k8s and docker engine. Over time, it will probably get more k8s-like features but still it is a weird position to be in.
3) It lacks a rich ecosystem of tools supporting it, and a web UI. This makes it hard for newcomers to adopt. We're working on a web UI ourselves as part of our open source cloud management platform (https://github.com/mistio/mist-ce) and would love to hear your thoughts.
Indeed, we have major issues with snap (doesn't work with $HOME on nfs, and auto updates with very few controls are a terrible idea on servers) so avoid anything dependent on it. Otherwise, I really like the concept.
Since it requires Docker, it's probably difficult for us to try. Do you have any instructions for installing it on bare metal? We can convert Docker images into LXD container images to run, and can reverse-engineer a Dockerfile to create an LXD container, but we try to avoid environment variables for configuring and running services, as they are something of a security liability.
I wonder how Mist compares with OpenNebula, which seems to be in the same space. From a quick glance it looks more complex. (OpenNebula supports LXD directly.)
> 1) It's strongly connected to Canonical and Ubuntu. This is mostly a matter of perception and it is an actual community project. However, I can understand people not feeling comfortable with "snap install lxd".
Last time I tried it on Fedora it did not work (less than 6 months ago).
Also it offers nothing I want over podman with --rootfs
It is a pretty nifty idea but like all things made by Canonical it is basically digitized garbage.
I tried to get LXC running on Fedora some months ago, wondered why there are no official packages, and then very quickly noticed why. LXD uses the old Ubuntu trick of calling sysv-style init scripts from systemd unit files without so much as a "|| exit 1" [1][2] - and I don't have time for that bullshit.
If you want vastly superior quality try podman with --rootfs or virt-install. At least those are not made by people with a disdain for error checking.
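For reference, the podman --rootfs mode mentioned here runs a container straight from an exploded root filesystem on disk, with no daemon or image store involved. A sketch, assuming you have a rootfs tarball handy (paths and tarball name are illustrative):

```shell
# Unpack any root filesystem tarball into a plain directory
mkdir -p /tmp/alpine-rootfs
tar -xzf alpine-minirootfs.tar.gz -C /tmp/alpine-rootfs

# Run it directly, rootless, as a normal user
podman run --rm --rootfs /tmp/alpine-rootfs /bin/sh -c 'cat /etc/os-release'
```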
> It is a pretty nifty idea but like all things made by Canonical it is basically digitized garbage.
> At least those are not made by people with a disdain for error checking.
Every single time there's a topic about Ubuntu or Canonical you seem to go straight into offensive mode. I can tell you use Fedora, but that's not typical of Fedora users and developers, and it's no excuse.
I'm an open source developer and believer, but this kind of behavior slowly burns my soul, even more when it seems accepted. I've worked on enough projects in my life that it's absolutely certain that you use my code regularly. It's inside Go, Python, APT, and RPM by the way (hello Jeff Johnson, wherever you are), and in key libraries that you surely depend on as well. And I never heard you complaining about any of that with such anger here.
I'm also Canonical's CTO, and I was one of the key designers and developers that started snaps, juju, and other key projects from Canonical. I usually hear such blind hate in silence, but sometimes it's just too much. I don't understand why we do this to ourselves, as a Linux community. Why is it okay to openly offend unknown people whom we almost certainly depend upon? What is it that we came here to do, again?
LXC is legacy and is mostly left as-is for existing users who are happy [and don't want the LXD paradigm shift]. Think of it as 1.0. You're complaining about stuff that is effectively a maintenance branch. At least your links point to LXC, not LXD.
LXD is the new thing. Think of it as 2.0 and onwards.
Thanks! That's good to know, because I've been having trouble with Ubuntu 20 and have been considering switching back to redhat.
Then the only remaining part would be figuring out how to do zfs on root in redhat. I do it with a chroot bash script for Ubuntu server atm, but hopefully there's a debootstrap kind of thing on redhat that I can use as a base.
> Now with virtual machines being supported by LXD, we found ourselves needing to support attaching both our traditional filesystem backed volumes to virtual machines (which has been possible for a while and uses 9p) as well as allowing for additional raw disks to be attached to virtual machines.
Didn't know virtual machine support was offered by LXD -- I haven't had much time to actually play with it, but I sure hope it gets more coverage in the future. It's like the alternate, more stable (and in some respects more featureful, since it has had rootless containers longer) kubernetes that no one's ever heard of, even though it's been around just as long if not longer.
A short primer -- LXD is the "kubernetes" part, LXC is the "containerd"/"docker" part. LXD is far more "batteries-included" than LXC and runs "system containers" (user namespace-mapped containers that offer better security) easily.
"and feature-ful in some rights since they've had rootless containers longer"
Does LXD actually allow you to run and manage containers as a normal user like podman does? All the official instructions involve adding one's user account to the "lxd" group, which is equivalent to granting oneself root privileges without a password. [1]
lxd does not run your containers -- it uses lxc to do so, and yes, lxc supports user namespaces (this is how you get rootless containers). In the end all these tools get there by doing the same thing: uid/gid mapping[0 - search "UID MAPPINGS"][1] plus support from some filesystem module (whether kernel or user space). Why that is necessary comes down to how the kernel works, and how containers work in general, which I will leave as an exercise for the reader (hint: a "container" struct does not exist in the linux kernel; containers are a combination of processes and isolation via namespaces, cgroups, and other kernel features).
What lxd provides is a layer above lxc to coordinate the containers being run (by lxc) and relevant resources on the machines you want.
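The uid/gid mapping described above is ordinary container configuration. In LXC it looks like this (the ranges are illustrative and must exist in the host's /etc/subuid and /etc/subgid):

```
# /var/lib/lxc/mycontainer/config (excerpt)
# Map container uids/gids 0-65535 onto host ids 100000-165535,
# so "root" inside the container is an unprivileged id outside
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
```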
"The remote API uses either TLS client certificates or Candid based authentication. Canonical RBAC support can be used combined with Candid based authentication to limit what an API client may do on LXD"
So it's not configured out of the box, but possible.
In that sense docker is also "rootless". It won't be surprising if Canonical doesn't understand the words they use.
EDIT: actually I can find no claims that LXD/LXC supports rootless containers. I think the person claiming it does just doesn't understand it; I would like to see a citation for it.
I’ve only ever used the low level lxc-* tools to manage IPv6 only containers on one host. What are some things that would be easier with LXD? I’ve always wanted to try it but never found the motivation.
Many things are easier in LXD, like building a highly available, fault-tolerant cluster of container and VM instances.
You can deploy on the cloud of your choice or directly on bare-metal.
You can use the REST API, Python API, Go API, PHP API, Puppet, Chef, Ansible, Dockerfile, Fabric, Capistrano, or plain shell scripts (whatever you are familiar with) to build container or VM images and to manage running instances. You are not limited to the Dockerfile format, with its mixture of shell scripts, just to build images. It's a much easier way to handle containers than learning the complexity of Dockerfiles, Kubernetes, Helm Charts, and the various YAML formats (designed for Google-scale operations serving billions of users).
It's more secure by default than an equivalent Docker or Kubernetes based system, as it has run VMs and containers in user namespaces for a very long time, since the LXC 1.0 release. Docker itself started its life on LXC, moved away from it, and became popular with marketing and venture capital money.
The multi-machine case is probably where you'd feel the benefit the most -- managing which container runs where, network, storage pooling, etc, that is what lxd is for. LXD sits on top of lxc to get things done, so you can imagine it like just a scripting layer to access lxc.
If you're well served by ssh/scripts/ansible to set up the machines and managing them by manipulating lxc on each one then there's absolutely no need to change! But what lxd offers is a dynamic system that manages that (and other things) for you on the fly.
Stolen from their features page:
> Secure by design (unprivileged containers, resource restrictions and much more)
> Scalable (from containers on your laptop to thousands of compute nodes)
> Intuitive (simple, clear API and crisp command line experience)
> Image based (with a wide variety of Linux distributions published daily)
> Support for Cross-host container and image transfer (including live migration with CRIU)
> Advanced resource control (cpu, memory, network I/O, block I/O, disk usage and kernel resources)
> Device passthrough (USB, GPU, unix character and block devices, NICs, disks and paths)
> Network management (bridge creation and configuration, cross-host tunnels, ...)
> Storage management (support for multiple storage backends, storage pools and storage volumes)
If you don't need any of these things, then no biggie, but if you're trying to orchestrate those containers on multiple machines it might be worth looking into a management solution. The cross-host container/image transfer and live migration are pretty cool, but not very necessary if you can just... let it die and restart somewhere else.
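For a concrete feel of the cross-host items above, here is a hedged sketch of pairing two LXD hosts and moving an instance (host names, the trust password, and instance names are illustrative):

```shell
# On the target host: expose the LXD API and set a trust password
lxc config set core.https_address '[::]:8443'
lxc config set core.trust_password 'some-secret'

# On the source host: register the remote; a "box2:" prefix then
# addresses it exactly like the local daemon
lxc remote add box2 box2.example.com
lxc list box2:

# Transfer a stopped local instance to the other host
lxc stop web
lxc move web box2:web
```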
LXD is nothing like Kubernetes which is why it never took off. While it does some things impressively, there is little need for it at this point with KubeVirt and other virtualization-on-Kubernetes solutions.
> LXD is nothing like Kubernetes which is why it never took off
I assume this is hyperbole, but for anyone who is unfamiliar with either system this statement isn't right -- Kubernetes does a lot of things, but first and foremost it is a container orchestration solution. LXD is also that. Literally from the kubernetes website:
> Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications
And from LXD:
> LXD is a next generation system container manager. It offers a user experience similar to virtual machines but using Linux containers instead.
> The core of LXD is a privileged daemon which exposes a REST API over a local unix socket as well as over the network (if enabled).
> Clients, such as the command line tool provided with LXD itself then do everything through that REST API. It means that whether you're talking to your local host or a remote server, everything works the same way.
If you strip kubernetes down to its essentials, it is a group of one or more machines running kubelet, with one or more running kube-apiserver. The workloads you run on kubernetes are containerized 99% of the time, with RuntimeClass[0] (and, before that, untrusted-runtime support) existing as an option to facilitate VMs and other runtimes that can run a container-ish process.
> While it does some things impressively, there is little need for it at this point with KubeVirt and other virtualization-on-Kubernetes solutions.
This is also not true -- KubeVirt is just one way to do virtualization on Kubernetes, and there are situations where it may not be optimal. Competition is also a good thing -- If you waited for Kubernetes to get easy/proper support for runtimes with user namespacing, you would have been waiting a long time, whereas the LXD ecosystem has had this for a long time.
The Pokemon Go team actually ran kubernetes on LXD for this reason[1], and gained value from it.
I wonder why systemd-nspawn isn't mentioned/used more. I've been using it for years for full system containerization and I'm really happy with how lightweight and functional it is.
No need for additional LXC/LXD layers.
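For comparison, a minimal systemd-nspawn workflow looks like this (the distro suite and machine name are illustrative; debootstrap builds the root tree):

```shell
# Build a minimal Debian tree under the standard machines directory
debootstrap stable /var/lib/machines/deb

# Boot it as a full system container (-b boots its init as PID 1)
systemd-nspawn -D /var/lib/machines/deb -b

# Or manage it through machinectl once it lives in /var/lib/machines
machinectl start deb
machinectl shell deb
```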
I've been using LXD for over a year now; it's quite nice if you can get over the snap/ubuntu/canonical link. I think a lot of people are confused about its purpose: we still use K8s and docker everywhere for new applications, but we've ported all of our old VM-based applications to LXD. It gives you containerization at a system level (you legitimately can't tell you are in a container if you log in) with a ton of extra niceness like easy snapshotting, transfers to other LXD hosts, clean networking, and fast spin-ups.
I highly recommend checking it out if you have a bunch of legacy VMs that are a pain to manage.
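The snapshotting and transfer niceties mentioned above are one-liners. A sketch with illustrative instance and remote names:

```shell
lxc snapshot legacy-app before-upgrade     # point-in-time snapshot
lxc restore legacy-app before-upgrade      # roll the instance back
lxc copy legacy-app otherhost:legacy-app   # copy to another LXD host
lxc info legacy-app                        # shows snapshots, IPs, usage
```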
I use LXD for ad-hoc clean throw-away Linux instances a lot and it is pretty good. One thing that is confusing is that the command-line tool it provides is called lxc, not lxd. Even more so as LXC is already a pretty overloaded term.
I've been running the same containers for many years, changing hardware every few years. I started on OpenVZ, moved to linux-vserver, then to LXC, then to LXD ... then on my most recent hardware switch, back to LXD.
LXD's CLI-tool-driven configurations stored in sqlite might be nice if you're starting from scratch, but I gave up trying to determine the incantations required to port my stuff to it. It seemed like I couldn't use the CLI to get from here to there, but I was able to do so by directly modifying the sqlite database. When a CLI doesn't make all valid changes possible, that's frustrating at best. Going back to LXC took an afternoon and I have complete visibility over my configuration, because it's all simple text files.
LXD feels like it's at a point between LXC and K8s, but I'm not sure how many real-world systems want to be there.
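For contrast, the plain-text LXC configuration being described is a single readable file per container; an illustrative excerpt (values are made up):

```
# /var/lib/lxc/web/config (excerpt; LXC 3.x key names)
lxc.uts.name = web
lxc.rootfs.path = dir:/var/lib/lxc/web/rootfs
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.ipv4.address = 192.168.1.50/24
```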
One of my main complaints about LXD is the documentation. It's scattered and generally not all that good.
Logging is also a bit of an issue for me, but that might just be me not configuring it correctly. Basically I don't think LXD provides enough logging to correctly and easily pinpoint problems.
I know it's a Canonical project, but at times it seems like Stéphane Graber is the only person actively involved in the project. Everything seems to point back to him and his GitHub account in some way.
[1]: https://github.com/lxc/lxc/blob/master/config/init/systemd/l... [2]: https://github.com/lxc/lxc/blob/master/config/init/common/lx...
About six months ago I tried running SystemTap on Ubuntu...
For those who want more introduction to the ecosystem -- https://linuxcontainers.org
[1] https://linuxcontainers.org/lxd/docs/master/security
[0]: https://linuxcontainers.org/lxc/manpages/man5/lxc.container....
[1]: http://docs.podman.io/en/latest/markdown/podman-create.1.htm...
Just check https://www.youtube.com/watch?v=RnBu7t2wD4U and see how easy it is to set up a 3-node, fault-tolerant cluster. Now with 4.3 it's mature and very resilient.
[0]: https://kubernetes.io/blog/2018/10/10/kubernetes-v1.12-intro...
[1]: https://www.youtube.com/watch?v=kQslklE5dKs&t=56s
No, LXD is not 'the "kubernetes" part'. There is no ' the "kubernetes" part' in the steaming pile of shit that LXD is.
EDIT: Also, please cite the claim that LXD/LXC supports rootless containers. Can't find anything substantial to back it up.
I would suggest giving podman with --rootfs a try.