This has been possible for many years; I was doing it 10 years ago, when I got my first motherboard with UEFI. But is it useful? It saves minimal time in the boot sequence, but at what cost?
The bootloader (be it GRUB, or something simpler like systemd-boot) is useful to me for a couple of reasons:
- it allows me to dual-boot with Windows easily: the motherboard boot menu is often not easy to access (you need to perform some key combination in a short window); also, modern bootloaders save the last boot option so that if Windows reboots for an update, Linux does not start
- it allows editing the kernel cmdline to recover a system that does not boot, e.g. to start in single-user mode. That can really save your day if you don't have a USB stick and another PC to flash it on hand
- it allows you to choose between multiple kernels and initrd images easily, again for recovery purposes
- it has a voice for entering the UEFI setup menu: on most modern systems, again, entering the UEFI with a keyboard combination is unnecessarily difficult and the timeout is too short
- it allows you to boot any other EFI application, such as memtest or the EFI shell. Most UEFI firmware doesn't have a menu to do so.
If I'm understanding correctly, it might help to point out that in spite of the title they are proposing a bootloader, which can still let you modify the cmdline, boot to other OSs, etc. It's just that the bootloader is itself using the Linux kernel so it can do things like read all Linux filesystems for "free" without having to rewrite filesystem drivers.
you seem to be saying that they are using two separate kernels, one for the bootloader and one for the final boot target
The title text says 'Loaded by the EFI stub on UEFI, and packed into a unified kernel image (UKI), the kernel, initramfs, and kernel command line, contain everything they need to reach the final boot target', which sounds like they're talking about only one single kernel, not two separate ones (one for the bootloader and one for the final boot target). Possibly that is not the case, because the actual information is hidden in a video I haven't watched.
You can have command line parameters baked into the EFISTUB.
I also have two kernels, so there are two UKIs on /efi, and I have both added as separate boot options in the BIOS.
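For reference, registering a UKI (or a bare EFI-stub kernel) as a firmware boot entry can be done with efibootmgr; the disk, partition, and paths below are assumptions for a typical ESP layout:

```shell
# Register a UKI on the ESP (assumed /dev/nvme0n1, partition 1) as a
# firmware boot entry. A UKI already embeds the kernel, initramfs, and
# command line, so no extra arguments are needed.
efibootmgr --create \
    --disk /dev/nvme0n1 --part 1 \
    --label "Linux UKI (stable)" \
    --loader '\EFI\Linux\linux-stable.efi'
```

Repeat with a second `--label`/`--loader` pair for the second kernel, and both show up in the firmware's F11-style boot menu.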
Just because the boot loader is using Linux, it doesn’t prevent an alternative OS from being booted into, so there is nothing fundamentally stopping all of grub’s features from working in this new scheme.
It is a bit more complex, though. Quoting "nmbl: we don’t need a bootloader" from last month[1]:
> - Possibility to chainload from Linux while using Secure / Trusted boot: Dual-booting, although not supported on RHEL, is important for Fedora. While there are attempts to kexec any PE binary, our plan is to set BootNext and then reset, which will preserve the chain of trust that originates in firmware, while not interfering with other bootloaders.
It could be seen as an advantage to do chainloading by setting BootNext and resetting. I think Windows even does this now. However, it certainly is a different approach with more moving parts (e.g. the firmware has to not interfere or do anything stupid, harder than you'd hope) and it's definitely slower. It'd be ideal if both options were on the table (being able to `kexec` arbitrary UEFI PE binaries) but I can't imagine kexec'ing random UEFI binaries will ever be ideal. It took long enough to really feel like kexec'ing other Linux kernels was somewhat reliable.
If you embed an x86 system somewhere, you might find yourself not wanting to use GRUB because you don't want to display any boot options other than the Linux kernel. The EFI stub is really handy for this use case. And on platforms where U-Boot is common, U-Boot supports EFI, which makes GRUB superfluous in those cases.
Many of the Linux systems I support don't have displays, and EFI is supported through U-Boot. In those cases you're using a character-based console of some sort, like RS232.
A lot of those GRUB options could also be solved by embedding a simple pre-boot system in an initial ramdisk to display options, which maintains all of the advantages of not using GRUB and also gives you the ability to make your boot selection. The only thing GRUB is doing here is allowing you to select which kernel to chain-load, and you can probably do the same thing in initramfs too through some kind of kernel API that is disabled after pivot root.
I just have two kernels with two boot options in the BIOS. I just hit F11 at boot time and choose a BIOS boot option for either kernel. Of course, you need to add the entries in UEFI, either from the UEFI shell or with some tool (efibootmgr).
This scheme also supports secure booting and silent booting. The stubs are signed after being generated.
Does Windows not ensure that the UEFI boots back into Windows when it does an auto-reboot for updates? There's a UEFI variable called BootNext which Windows already knows how to use since the advanced startup options must be setting it to allow rebooting directly to the UEFI settings.
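For what it's worth, BootNext can be set from Linux too; a minimal sketch with efibootmgr (the entry number 0003 is a placeholder):

```shell
# One-shot boot: start entry 0003 on the next reboot only, without
# touching the permanent BootOrder. The firmware clears BootNext
# automatically after using it.
efibootmgr --bootnext 0003
reboot
```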
Given that Windows tries to restore open windows to make it look like it didn't even reboot, I'm surprised they wouldn't make sure that the reboot actually goes back into Windows.
No, it doesn't. Even a sysprepped image of Windows (which thus runs Setup to install drivers and finalize the installation) doesn't change the boot order on UEFI machines. I think just the installer does this when you first install Windows.
Not in my experience. For my typical dual boot situation where Grub is installed as the bootloader, I have to update the Grub settings like so to allow Windows updates to go smoothly:
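Most likely this refers to the usual saved-default pair in /etc/default/grub (a sketch; the exact settings the parent meant are an assumption), which makes GRUB remember the last-selected entry so Windows's update reboots land back in Windows:

```shell
# /etc/default/grub: boot the saved (last-selected) entry by default,
# and save whichever entry was selected at each boot.
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true

# Then regenerate the config, e.g.:
#   grub-mkconfig -o /boot/grub/grub.cfg
```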
What kind of machines are people using where entering the UEFI boot menu is difficult? On all three of mine I just press F10 during the first 5 or so seconds the vendor logo shows, and I end up in a nice menu where I can select Windows, other kernels, memtest, or the EFI shell or setup.
One easy way to meet Microsoft's boot time requirements is to skip input device enumeration, so there's a lot of machines meeting the Windows sticker requirements where entering the firmware either requires a bunch of failed boots or getting far enough into the boot process that you can be offered an opportunity to reboot into the setup menu.
I was working on my Dad's Dell laptop this weekend, and no matter how quickly I spammed the correct key (F12 in this case) it would miss it and continue to a full boot about 3 times out of 4. I never figured out whether it was just picky about timing, or whether it had different types of reboots where entering the BIOS wasn't even an option.
On my last two UEFI boards, if I press F12 or F8 too soon after power-on, it either stalls the boot or restarts. When the latter happens, I'm always too careful in pressing it, causing me to miss the window of opportunity and boot right into the OS. Entering the BIOS or choosing the boot drive regularly takes me 3 tries. (Gigabyte with Intel and Asus with AMD.)
> - it allows me to dual-boot with Windows easily: the motherboard boot menu is often not easy to access (you need to perform some key combination in a short window); also, modern bootloaders save the last boot option so that if Windows reboots for an update, Linux does not start
Do people really dual boot a lot in 2024? It was a good use case when virtualization was slow, but decades after CPUs started shipping with virtualization extensions, there is virtually zero overhead in using a VM nowadays, and it is much more convenient than rebooting and losing all your open applications just to start one on another OS.
> - it allows you to boot any other EFI application, such as memtest or the EFI shell. Most UEFI firmware doesn't have a menu to do so.
How many times in a decade are you running memtest?
Getting to the UEFI firmware or booting another OS/drive is just a matter of holding one key on my ThinkPad. I would simply not buy bad hardware that doesn't allow me to do that. Vote with your wallet, dammit.
I would also argue that you can perfectly well have GRUB sitting alongside a direct boot to the kernel in a UEFI setup. There are many bootloaders other than GRUB, and users are still free to use them instead of what the distro is shipping. UEFI basically allows you to have as many bootloaders as you have space for on that small FAT partition.
> there is virtually zero overhead in using a VM nowadays
Not for real-time audio production. Linux support for audio plugins from vendors like EastWest, Spitfire, Native Instruments, and iZotope is abysmal, and even Wine does not run them nowadays.
Even with a virtual machine that has pinned cores and USB pass-through of a dedicated audio interface, it practically locks you to one sample rate: any change causes crackles, and loading more than one plugin causes crackles too. There is plenty of overhead.
The state of GPU virtualisation, for example, is a spectrum from doesn't exist/sucks to only affordable for enterprise customers.
So unless you have a second graphics card to do passthrough with, if you want to use your GPU under both OSes you almost always have to dual boot (yes, there are other options like running Linux headless, but they're not even remotely easier to set up than dual boot).
One of the big problems is with graphics cards, because the vendors block a driver feature (SR-IOV) on consumer GPUs that would allow single-GPU passthrough for VMs.
The alternative is to leave the system headless (a reboot is needed, and the VM needs to run as root), or to use two graphics cards (wasting power, hardware resources, etc.), in which case you also need an extra layer inside the VM to re-send the graphics back to the screen (adding delay), or to connect two cable inputs to the monitor.
> there is virtually zero overhead in using a VM nowadays
It might be more accurate to say that if you have a fast computer with lots of resources, running a basic desktop feels perceptibly native. This makes it a great answer for running Windows software that neither needs a discrete GPU nor direct access to hardware, on the minority of machines capable enough for this to be comfortable.
In actuality, laptops are more common than desktops, and the majority of computers have 8GB of RAM or less (60% across all form factors, 66% of laptops). That just isn't enough to comfortably run both.
Furthermore, while most Linux users are comfortable installing and running Windows and Linux, they may or may not be familiar with virtualization.
Also, the number one reason someone might dual boot is probably still gaming, which, although light years ahead of prior years, still doesn't have 100% compatibility with Windows. In theory GPU passthrough is an option, but in reality it is a complicated niche configuration unsuitable for the majority of use cases. Anyone who isn't happy with Steam/Proton/Wine is more apt to dual boot than to virtualize.
Yes, people dual boot. Particularly people who are contemplating a move from Windows. I'd hate to see Linux take the "my way or the highway" attitude of Windows.
I dual-boot on my personal desktop. I mostly use Debian, but there's a Windows partition for games and a few other Windows-specific things. The GPU in it was way too expensive to justify buying two, and I use it under Linux for ML, hash-cracking, etc.
My original plan was to do everything in a Windows VM, but there was too much of a performance hit for some of my purposes, and VMWare doesn't allow attaching physical disks or non-encrypted VMDKs to a Windows 11 VM, so it's actually easier to have a data drive that's accessible from both OSes with dual boot than it would be with a VM.[1] I'm still disappointed about that.
[1] Using HGFS to map a host path through to the VM is not an option because of how slow that is, especially when accessing large numbers of files.
As much as I generally detest indirection, for me a bootloader is a necessity; I need the flexibility to boot different OS kernels. AFAIK, UEFI offers no such flexibility. NetBSD's bootloader is best for me. UEFI seems like an OS unto itself. A command line, some utilities, and network connectivity (a UNIX-like textmode environment) is, with few exceptions, 100% of what I need from a computer. To me, UEFI seems potentially quite useful. But not as a replacement for a bootloader.
>I need the flexibility to boot different OS kernels. AFAIK, UEFI offers no such flexibility.
Yes it does; I use it with two kernels, with a different entry for each stub in UEFI. Whenever I want to boot the non-default kernel I just hit F11 (the BIOS boot menu key on my motherboard) and choose the boot option. You just need to add the boot options in UEFI, pointing to the corresponding EFI files. They also have the kernel command line parameters baked into them, and you can set your desired ones (silent boot, whatever).
You left out the most important reason I went back to using GRUB: some motherboards have dodgy UEFI support, and having an extra layer of indirection sometimes seems to be more robust, for some reason.
I dual boot Win/Arch easily with an EFISTUB setup. It's also super quick to boot to a USB stick of Arch if I need to edit anything in the configuration in an "emergency" situation. https://wiki.archlinux.org/title/EFISTUB
I've used gummiboot before systemd ate it; and I've used rEFInd. Mainly, I just followed the excellent documentation @ https://www.rodsbooks.com/; that's also how I first familiarized myself with UEFI (Thanks Rod!).
My brain has leaked all the information I understood (unfortunately). Is rEFInd still active? Is there a gummiboot fork (besides systemd)?
Personally, I kind of hate Red Hat calling itself that now; it's IBM. You can tell because all of the online knowledge from the community on their websites is now pay-walled. RIP Red Hat (CentOS), I'll miss you.
> it allows me to dual-boot with Windows easily: the motherboard boot menu is often not easy to access (you need to perform some key combination in a short window)
Hardly a problem in my experience - just hold down the key while booting.
And dual booting is rarely needed anyway and generally just a pita. Just always boot into your preferred OS and virtualize the other one when you really need it.
> also, modern bootloaders save the last boot option so that if Windows reboots for an update, Linux does not start
You can change the EFI boot entries, including their priority, from the OS, e.g. via efibootmgr under Linux. It should be easy to set up each OS to make itself the default on boot, if that's really what you want.
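For instance (the entry numbers below are placeholders; check your own with a bare `efibootmgr` first):

```shell
# List the current boot entries and their order.
efibootmgr

# Make entry 0001 (say, Linux) the first choice permanently,
# with 0000 (say, Windows Boot Manager) as the fallback.
efibootmgr --bootorder 0001,0000
```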
> it allows editing the kernel cmdline to recover a system that does not boot, e.g. to start in single-user mode. That can really save your day if you don't have a USB stick and another PC to flash it on hand
All motherboards I have used had an EFI shell that you can use to run EFI programs such as the Linux kernel with efistub with whatever command-line options you want.
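At the EFI shell prompt this looks roughly like the following (the filesystem mapping and file names are assumptions; an EFI-stub kernel is just an EFI application):

```shell
# EFI shell transcript, not POSIX shell: select the ESP, then run the
# kernel directly with an ad-hoc command line, e.g. single-user mode.
FS0:
\vmlinuz-linux initrd=\initramfs-linux.img root=/dev/sda2 single
```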
> it allows you to choose between multiple kernels and initrd images easily, again for recovery purposes
EFI can have many boot entries too.
> it has a voice for entering the UEFI setup menu
What does "a voice" here mean? Or you meant "a choice"? Either way, same as with the boot menu you can just hold down the key while booting IME.
> it allows you to boot any other EFI application, such as memtest or the EFI shell. Most UEFI firmware doesn't have a menu to do so.
In my experience the EFI shell has always been accessible without a bootloader.
> And dual booting is rarely needed anyway and generally just a pita. Just always boot into your preferred OS and virtualize the other one when you really need it.
I've been dual-booting Linux since the kernel 2.2.x era, and being able to do it was a major driver of my migration away from Windows. It is super important for onboarding new users who can't yet get rid of Windows fully, mostly because of gaming (yes, Proton is nice, but anything competitive that uses anti-cheat won't work, and that is the majority share of gaming). And that is the reason I still boot into Windows on my dual-boot machine: gaming. For me that Windows is just a glorified bootloader into GOG or Steam, yet desperately needed, and virtualization won't solve anything here.
I have experience with a few different laptops:
1. Dell enterprise laptops generally have a robust EFI system which allows for all kinds of `.efi` files to boot on `vfat` partitions. Dell laptops also have a good firmware setup for stuff like mokutils to work so that people can use measured boot with their own version of linux. They also work extremely well with self-encrypting nvme drives.
2. HP consumer laptops, which are the worst of the lot and essentially prevent you from doing anything apart from stock configurations, almost as if on purpose.
3. All other laptops, which have various levels of incompetence but seem pretty harmless.
For all laptops apart from Dell, Grub is the bootloader that EFI could never be.
> - it allows editing the kernel cmdline to recover a system that does not boot, e.g. to start in single-user mode. That can really save your day if you don't have a USB stick and another PC to flash it on hand
You can use the UEFI shell for this. It's kind of a replacement for the old MS-DOS command line.
It is bold of Red Hat to claim this is 'their solution'. UEFI has already been used for years to boot without GRUB. Some examples: macOS, HP-UX, or systemd-boot via UEFI.
> it allows editing the kernel cmdline to recover a system
Except they've made it increasingly harder to do this over the years. Nowadays you have to guess when it is on the magic 1 second of "GRUB time" before it starts loading and then smack all the F keys and ESC key and DEL key at the same time with both hands and both feet because there is nothing on the screen that tells you which key it actually is.
All while your monitor blanks out for 3 seconds trying to figure out what HDMI mode it is using, hoping that after those 3 seconds are over that you smacked the right key at the right time.
And then you accidentally get into the BIOS instead of the GRUB.
It used to be a nice long 10 seconds with a selection menu and clearly indicated keyboard shortcuts at the bottom, and you could press ENTER to skip the 10 second delay. That was a much better experience. If you're in front of the computer and care about boot time, you hit enter. If you're not in front of the computer, the 10 seconds don't matter.
I know you can add the delay back, I just wish the defaults were better.
> - it allows editing the kernel cmdline to recover a system that does not boot, e.g. to start in single-user mode. That can really save your day if you don't have a USB stick and another PC to flash it on hand
This is an indication of bad admin choice. The kernel defaults should not corrupt the boot process and if you add further experimental flags for testing you ought to have a recovery mechanism in place beforehand.
Windows Boot Manager can chainload into any arbitrary bit of code if you point it where it needs to hand off.
It's a feature that goes back to Windows NT (NTLDR) supporting dual boot for Windows 9x, but it can be repurposed to boot anything you would like so long as it can execute on its own merit.
eg: Boot into Windows Boot Manager and, instead of booting Windows, it can hand off control to GRUB or systemd-boot to boot Linux.
>Windows Boot Manager can chainload into any arbitrary bit of code if you point it where it needs to hand off.
With the NT6 bootloader this appears to be limited to operating only in BIOS mode using bootmgr.exe. The traditional chainloading is still possible by pointing to a binary file which is a copy of a valid partition bootsector, whether it is a Microsoft bootsector or not.
The equivalent BCD for UEFI mode uses bootmgr.efi (instead of bootmgr.exe), and does not seem to be capable of chainloading even when there is an equivalent BOOTSECTOR boot entry on the NT6 multiboot menu.
It would be good to see an example of the NT6 bootloader successfully handling UEFI multibooting which includes starting Linux from the EXTx partition it is installed on. Still works perfectly in BIOS since early NT, but in UEFI not so much.
It allows you to enter your passphrase to unlock your Linux LUKS partition before you even get a menu to chainload Windows.
At least this is what an Arch Linux derivative (Artix) system of mine does, amusingly. It sort of gives an observer the impression that it's an encrypted Windows system on boot.
I'd have preferred CoreBoot or OpenFirmware, but the PC industry was too slow to move and let Intel -- still smarting from Microsoft forcing it to adopt AMD's 64-bit x86 extensions -- take control of the firmware.
I personally think they're moving in the wrong direction. I'd rather have "NMIRFS" (no more initramfs). Eg, a smarter bootloader that understands all bootable filesystems and cooperates with the kernel to pre-load modules needed for boot and obviates the need for initramfs.
FreeBSD's loader does this, and it's so much easier to deal with. E.g., it understands ZFS, and can pre-load storage driver modules and zfs.ko for the kernel, so that the kernel has everything it needs to boot up. It also understands module dependencies, and will preload all modules needed by a module you specify (similar to modprobe).
As other sibling comments have explained, an initramfs is usually optional for booting Linux.
If you build the drivers for your storage media and filesystem into the kernel (not as a module), and the filesystem is visible to the kernel without any userland setup required beforehand (e.g. the filesystem is not in an LVM volume, not on an MD-RAID array, not encrypted), it is fully capable of mounting the real root filesystem and booting init directly from it.
The only point of consideration is that it doesn't understand filesystem UUIDs or labels (this is part of libuuid which is used by userland tools like mount and blkid), so you have to specify partition UUIDs or labels instead (if you want to use UUIDs or labels). For GPT disks, this is natively available (e.g. root=PARTUUID=4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709 or root=PARTLABEL=Root). For MS-DOS disks, this is emulated for UUIDs only by using the disk ID and partition number (e.g. root=PARTUUID=11223344-02).
You can also specify the device name directly (e.g. root=/dev/sda2) or the major:minor directly (e.g. root=08:02), but this is prone to enumeration order upset. If you can guarantee that this is the only disk it will ever see, or that it will always see that disk first, this is often the most simple approach, but these days I use GPT partition UUIDs.
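As a sketch, the "everything built in" approach mainly means a handful of kernel config options set to `y` rather than `m`; which ones depends entirely on your hardware (the selection below is an assumed NVMe-plus-ext4 machine):

```shell
# Kernel .config fragment: drivers needed to mount root are built in
# (=y), so no initramfs is required to reach the real rootfs.
CONFIG_BLK_DEV_NVME=y    # NVMe storage driver
CONFIG_EXT4_FS=y         # root filesystem
CONFIG_VFAT_FS=y         # ESP, if mounted early

# Then boot with, e.g.:
#   root=PARTUUID=4f68bce3-e8cd-4db1-96e7-fbcaf984b709 rw
```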
Yes, I think he realizes it's optional for booting Linux.
In practice, we have generic kernels which require a lot of stuff in modules for real user systems running on distributions. Instead, though, we could have a loader which doesn't require this big relatively-opaque blob and instead loads the modules necessary at boot time (and does any necessary selection of critical boot devices). i.e. like FreeBSD does.
There are advantages each way. You can do fancier things with an initramfs than you ever could in the loader. On the other hand, you can change what happens during boot (e.g. loading different drivers) without a lot of ancillary tooling to recover a system.
Not entirely; it is possible for simple boot setups, but enabling LUKS and various other rootfs configurations usually requires some userspace probing, authentication, etc. I'm not so sure it would be easy to convince the kernel maintainers to add a user prompt for a PIN code to the kernel just to avoid having an initrd.
The Linux kernel does not require an initramfs. You can build a kernel with everything compiled in; with no modules needed at all. Initramfs is used for generic kernels where you don't know beforehand which features will be required. This allows you to avoid wasting RAM on features you don't use. But it is optional.
I realize that. But every distro I've used uses an initramfs, so unless you want to build your own kernels, you're stuck with it, and the painfully slow initramfs updates when you update packages, and dkms (or similar) updates the initramfs with the newer version of your out-of-tree modules.
Is anyone really wanting to get back into the business of building their own kernels? I started using Linux heavily in '92, and I've built a lot of kernels, and am quite happy to not be building them anymore.
> Initramfs is used for generic kernels where you don't know beforehand which features will be required.
And also for e.g. cases where you've got some custom stack of block devices that you need to set up before the root FS and other devices can be mounted. It's not just about loading kernel modules.
This requires a bunch of additional logic in the bootloader (eg, providing disk encryption keys), and since you're not doing this in a fully-featured OS environment (as you are in the initramfs case) it's going to be hard providing a consistent experience through the entire boot process. Having the pre-boot environment be a (slightly) cut-down version of the actual OS means all your tooling can be shared between the two environments.
This is a step in that direction. What they are proposing is not so much "no bootloader" as using a small Linux as the bootloader. I've been using a similar setup for some time, and it gives some of these advantages. In particular, you get support for all relevant filesystems (you can support everything Linux supports, because it is Linux), it can dynamically build a minimal initramfs with only the needed drivers if you want, it understands module dependencies (e.g. it can just dump the list of modules it uses itself), and it is generally much more flexible.
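The handover from the small bootloader Linux to the target kernel is just kexec; a minimal sketch using kexec-tools (the mount point and file names are assumptions):

```shell
# From the small "bootloader" Linux: mount the target system's /boot,
# load its kernel and initramfs into memory, then jump straight into
# it without going back through the firmware.
mount /dev/sda2 /mnt
kexec --load /mnt/boot/vmlinuz-linux \
      --initrd=/mnt/boot/initramfs-linux.img \
      --command-line="root=/dev/sda2 rw quiet"
kexec --exec
```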
Linux has multiple choices for filesystems for root, even if you only count the most popular ones. And on top of that they could be encrypted by LUKS. Duplicating all that into the bootloader is what GRUB does, and poorly. Putting the kernel into the ESP is much better in that regard.
Does FreeBSD's loader share code with the kernel? It does seem like a lot of duplication of systems to make it work in comparison to just using the same code.
A lot of the commentary here is based on misunderstandings of the capabilities and constraints of a UEFI environment and of what the actual goals of this project are, and I think it misses the mark to a large degree. Lennart's written some more explicit criticism at https://lwn.net/Articles/981149/ and I think that's a much more interesting set of concerns.
I have to say I find Lennart's arguments quite unconvincing. As another person said, the vast majority of people just want default boot to the most recent kernel (which this proposal could do well).
But then when it comes to the other points: yes, I want to be able to reliably boot into other systems, but both systemd-boot and grub are notoriously bad at detecting other systems on disks (both use install-time detection, IIRC). The only one that does a reasonable job is rEFInd. Even better, a kernel with appropriate drivers could add kernels/systems on USB disks to the selection (why do I have to go to the UEFI menu to boot from USB?).
The next thing he completely ignores is booting into zfs or btrfs snapshots, which is not possible using systemd-boot AFAIK, and again would be much nicer to do with a kernel.
Also, from what I understand after watching some of the video demonstration in the Q&A, I could just have another EFI entry pointing towards an nmbl configuration with a GRUB-like menu, and get an exact replica of the GRUB experience. Having to go through the BIOS boot menu for those rare occasions where I need it is perfectly reasonable.
I feel like that post misses the biggest one that pulls people to GRUB: complicated boot sources and procedures. Filesystems that UEFI doesn't understand, more complex network boot sources, all that kind of complex messiness that GRUB enables and others don't. Now, whether those are good idea or not is a different question, but I think this is a good concept for a full replacement for GRUB, as opposed to the existing replacements which already cover the 90% case pretty well. (And I think it's got a case for handling the other cases OK: from the sounds of it they plan to lean on UEFI and A/B image to handle fallback, and it'll basically just work as a direct UEFI boot in the common case)
Thinking about it a bit more, though, it does feel like a hybrid approach is probably better. For dual-booting off local disks and other simple cases, just having the kernel and initramfs alongside other OS options makes a lot of sense, and you can use the UEFI boot menu or something deliberately simple like systemd-boot to select between them for dual-boot or recovery. For more complex cases (where your rootfs is not just something the kernel can mount on its own), you basically want a process for building your initramfs to do that from a config, like GRUB's (which is already how a lot of cases like that are solved anyway), and in extreme cases where you also want to stash a kernel in some other location, you can kexec from there. But for just a boot menu (which is already the minority case, and 90% of users in that case need nothing more) it feels even heavier than GRUB for little benefit.
> completely useless if you care about Measured Boot
I stopped reading there. All these engineers who help build and defend this draconian crap should be forced to used only an iPad for the rest of their lives.
Measured boot is, in itself, under user control - you can seal whatever secrets you want to any specific state and they'll only be accessible in that situation. This has obvious benefits in terms of being able to (for instance) tie disk encryption keys to a known boot state and so avoid needing to type in a decryption phrase while still preventing anyone from being able to simply modify your boot process to obtain that secret. The largest risk around this is from remote attestation, and that's simply not something where the infrastructure exists for anyone to implement any kind of user restriction (and also it's trivial to circumvent by simply tying any remote attestation to a TPM that's not present at boot time and so can be programmed as necessary - it's just not good at being useful DRM)
This reminds me of MILO for booting Linux on some (?) DEC Alpha systems back in the 90s. I don't remember much about the actual firmware anymore. Much like today with UEFI, the system had some low-level UI and built-in drivers to support diagnostics, disk and network booting, etc.
MILO could be installed as a boot entry in the firmware-level boot menu. MILO was a sort of stripped down Linux kernel that used its drivers to find and load the real kernel, ending with a kexec to hand over the system.
No matter how you slice it, I think you'll always come around to wanting this sort of intermediate bootloader that has a different software maintenance lifecycle from the actual kernel. It is a fine idea to reuse the same codebase and get all the breadth of drivers and capabilities, but you want the bootloader to have a very "stability" focused release cycle so it is highly repeatable.
And, I think you want a data-driven, menu/config layer to make it easy to add new kernels and allow rollback to prior kernels at runtime. I hope we don't see people eventually trying to push Android-style UX onto regular Linux computers, i.e. where the bootloader is mostly hidden and the kernel treated as if it is firmware, with at most some A/B boot selection option.
My previous laptop was a Chromebook running Linux+Coreboot. Unfortunately the usual Tianocore UEFI BIOS people use had some bugs in the NVMe and keyboard drivers, which I gave up fixing or working around (at the time). Obviously Linux had working drivers, because that's all ChromeOS is, so we set up a minimal Linux install as the Coreboot payload in the firmware flash, and I wrote a little Rust TUI to mount all visible partitions and kexec anything that looked like a kernel image. It worked like a charm and had all kinds of cool features, like wifi and a proper terminal for debugging in the BIOS! Based on that experience, I don't see any reason why we don't just use Linux directly for everything. Why duplicate all the drivers?
Just checked and amusingly I'd forgotten that boot/root predated LILO, I must've first seen LILO when I installed Softlanding Linux.
Since I didn't have any networking on my home machine, Linux was basically a "Look, run GCC on your home machine!" option for '91 that didn't involve going through DJGPP's DOS port.
I'm curious if their proposal will be capable of handling multi-OS boots. I know GRUB can: I can have Linux and Windows and possibly even a third OS if I want. I am concerned that Red Hat's solution, though well-intended, may be rather myopic and commercial only. What I fail to understand is what problem this solves for systems that I probably only reboot once or twice a year. (Given that it only works with Linux-only systems)
The issue it solves, according to the talk, is that grub presents a fairly big attack surface for something that is sparsely maintained and that could be done in the kernel, which has a lot of active devs.
Yeah, look at Windows 10 if you want to see how this can be done poorly. Its boot menu works by booting Windows 10 first and then restarting the computer if you choose another OS. This includes going all the way through POST again. Took something like two minutes end-to-end to get to Windows 7.
I've thought about something like this before, but I have so many questions on just the basic premise...
First: Linux could already be booted directly from the UEFI manager. You don't need GRUB at all. So why a new scheme - why weren't they just doing that?
Second (and third, etc.): If I have multiple Linux installations along with a Windows installation, wouldn't this mean one of them now has to be the one acting as the boot loader? Could it load the other one regardless of what distro it is, without requiring e.g. an extra reboot? And wouldn't this mean they would no longer be on equal footing, since one of them would now become the "primary" one when booting? Would its kernel have to be on the UEFI partition...?
Booting linux directly just boots you into that install. It doesn't give you a boot menu or any of the other functionality GRUB provides. This project is basically proposing building that in a small initramfs userland instead (which has the advantage of requiring much less effort and code duplication). It's functionally very similar to GRUB, including with regard to your last point: generally speaking at the moment one OS needs to be managing the boot menu, and when they fight over it things go badly (see the status quo where Windows will occasionally just insert itself as the default after an update). UEFI could in principle have fixed this, but the inconsistent implementation between vendors makes it an unreliable option for OS developers.
(And in principle this system could load other linux distros assuming there was some co-ordination in how to do so. Windows is more difficult, as is interaction with secure boot)
> Booting linux directly just boots you into that install. It doesn't give you a boot menu or any of the other functionality GRUB provides. This project is basically proposing building that in a small initramfs userland instead
I indeed understood that part, but their motivation for this was security. If you want security, you should want to boot directly into the kernel. And if you're the occasional user who has multiple OSes installed in parallel... you can just add more kernels from your dual-boot installs directly to the UEFI screen; there's really no need to go through any form of intermediate stage, whether kernel-based or boot-loader-based.
What I'm trying to say is: as cool as this is from a technical standpoint, I just don't understand the root of the premise or motivation here whose optimal solution is this approach. Who is Red Hat trying to please with this? The small fraction of users who dual-boot Linux, or the rest of the users who just have a single install? And what problem are they actually trying to solve -- security, performance, or something else? Because the optimal solution to the first two doesn't feel like this one, unless they're targeting a niche use case I'm not seeing? e.g., do they have lots of enterprise users that boot off a network, but who would rather have a local Linux install whose sole job is to boot that...?
https://news.ycombinator.com/item?id=40909165 seems to confirm that they are indeed not saying what you thought
edit: they're proposing both configurations
> - Possibility to chainload from Linux while using Secure / Trusted boot: Dual-booting, although not supported on RHEL, is important for Fedora. While there are attempts to kexec any PE binary, our plan is to set BootNext and then reset, which will preserve the chain of trust that originates in firmware, while not interfering with other bootloaders.
It could be seen as an advantage to do chainloading by setting BootNext and resetting. I think Windows even does this now. However, it certainly is a different approach with more moving parts (e.g. the firmware has to not interfere or do anything stupid, harder than you'd hope) and it's definitely slower. It'd be ideal if both options were on the table (being able to `kexec` arbitrary UEFI PE binaries) but I can't imagine kexec'ing random UEFI binaries will ever be ideal. It took long enough to really feel like kexec'ing other Linux kernels was somewhat reliable.
[1]: https://fizuxchyk.wordpress.com/2024/06/13/nmbl-we-dont-need...
Many of the Linux systems I support don't have displays, and EFI is supported through U-Boot. In those cases you're using a character-based console of some sort, like RS232.
A lot of those GRUB options could also be solved by embedding a simple pre-boot system in an initial ramdisk to display options, which maintains all of the advantages of not using GRUB and also gives you the ability to make your boot selection. The only thing GRUB is doing here is allowing you to select which kernel to chain-load, and you can probably do the same thing in initramfs too through some kind of kernel API that is disabled after pivot root.
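A minimal sketch of what such a pre-boot initramfs menu could look like, using kexec to chain-load the chosen kernel (entry names, paths, and the command line here are all made-up examples, not anything from the proposal):

```shell
#!/bin/sh
# Hypothetical initramfs boot menu: present choices, then chain-load
# the selected kernel via kexec. All paths/values are examples.
set -e

echo "1) linux-6.9 (default)"
echo "2) linux-6.8 (fallback)"
printf "Select kernel: "
read -r choice

case "$choice" in
  2) KERNEL=/boot/vmlinuz-6.8; INITRD=/boot/initramfs-6.8.img ;;
  *) KERNEL=/boot/vmlinuz-6.9; INITRD=/boot/initramfs-6.9.img ;;
esac

# Stage the chosen kernel, then jump into it; the currently running
# (bootloader) kernel is replaced entirely at this point.
kexec -l "$KERNEL" --initrd="$INITRD" \
      --command-line="root=/dev/sda2 rw quiet"
kexec -e
```

After the pivot-root, disabling further kexec loads (as the comment suggests) would close the window where userspace could re-use this path.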
Given that Windows tries to restore open windows to make it look like it didn't even reboot, I'm surprised they wouldn't make sure that the reboot actually goes back into Windows.
Do people really dual boot a lot in 2024? It was a good use case when virtualization was slow, but decades after CPUs started shipping with virtualization extensions there is virtually zero overhead in using a VM nowadays, and it is much more convenient than rebooting and losing all your open applications just to start one on another OS.
> - it allows you to boot any other EFI application, such as memtest, or efi shell. Most UEFI firmwares doesn't have a menu to do so.
How many times in a decade are you running memtest?
Getting to UEFI firmware or booting another OS/drive is just a matter of holding one key on my ThinkPad. I would simply not buy bad hardware that doesn't allow me to do that. Vote with your wallet, dammit.
I would also argue that you can perfectly well have GRUB sitting alongside a direct boot to the kernel in a UEFI setup. There are many other bootloaders than GRUB, and users are still free to use them instead of what the distro is shipping. UEFI basically allows you to have as many bootloaders as you have space for on that small FAT partition.
Yes.
> there is virtually zero overhead in using VM nowadays
Not for real-time audio production. The state of audio plugins having Linux support from vendors like EastWest, Spitfire, Native Instruments, iZotope is abysmal and even Wine does not run them nowadays.
Even with a virtual machine that has pinned cores and USB pass-through of a dedicated audio interface, it practically locks you to one sample rate, any change causes crackles, try to load more than one plugin and you hear crackles. There is plenty of overhead.
Yes, there are still use cases for it.
The state of GPU virtualisation, for example, is a spectrum from doesn't exist/sucks to only affordable for enterprise customers.
So unless you have a second graphics card to do pass through with, if you want to use your GPU under both OSes then you almost always have to dual boot (yes, there are other options like running Linux headless, but it's not even remotely easier to set up than dual boot)
One of the big problems is with the graphics cards, because the vendors block a driver functionality ( SR-IOV ) for consumer GPUs that would allow single GPU passthrough for VMs.
The alternative is to leave the system headless (a reboot is needed, and the VM has to run as root), or to use two graphics cards (wasting power, hardware resources, etc.), in which case you also need an extra latency layer inside the VM to send the graphics back to the screen, or to connect two inputs to the monitor.
Yes. I work on Linux and play most games on Windows. Playing games on a VM is... pretty terrible.
Seems the answer is yes. https://linux-hardware.org/?view=os_dual_boot_win
> there is virtually zero overhead in using VM nowadays
It might be more accurate to say that if you have a fast computer with lots of resources, a basic desktop experience feels perceptibly native. This makes it a great answer for running Windows software that neither needs a discrete GPU nor direct access to hardware, on the minority of machines capable enough for this to be comfortable.
In actuality laptops are more common than desktops, and the majority of computers have 8GB of RAM or less (60% across all form factors, 66% of laptops). That just isn't enough to comfortably run both.
https://linux-hardware.org/?view=memory_size&formfactor=all
Furthermore, while most Linux users are comfortable installing and running Windows and Linux, they may or may not be familiar with virtualization.
Also, probably the number one reason someone might dual boot is still gaming, which, although light years ahead of years prior, still doesn't have 100% compatibility with Windows. In theory GPU passthrough is an option, but in reality it is a complicated niche configuration unsuitable for the majority of use cases. Anyone who isn't happy with Steam/Proton/Wine is probably more apt to dual boot than to virtualize.
My original plan was to do everything in a Windows VM, but there was too much of a performance hit for some of my purposes, and VMWare doesn't allow attaching physical disks or non-encrypted VMDKs to a Windows 11 VM, so it's actually easier to have a data drive that's accessible from both OSes with dual boot than it would be with a VM.[1] I'm still disappointed about that.
[1] Using HGFS to map a host path through to the VM is not an option because of how slow that is, especially when accessing large numbers of files.
Yes it does, I use it with two kernels, just have different entry for each stub in UEFI. Whenever I want to boot the non-default kernel I just hit F11 (for BIOS boot menu, on my motherboard) and choose the boot option. You just need to add the boot options in UEFI, pointing to the corresponding EFI files. They also have the kernel command line parameters baked into them and you can set your desired ones (silent boot whatever).
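For reference, a sketch of how such entries can be registered with efibootmgr (the disk, partition, paths, and baked-in command line below are all example values, not the commenter's actual setup):

```shell
# Register an EFI-stub kernel as a firmware boot entry, with its
# command line (including the initrd path) baked into the entry.
# /dev/nvme0n1 (ESP on partition 1) and all paths are examples.
efibootmgr --create --disk /dev/nvme0n1 --part 1 \
    --label "Linux 6.9" \
    --loader '\EFI\linux\vmlinuz-6.9.efi' \
    --unicode 'root=/dev/nvme0n1p2 rw quiet initrd=\EFI\linux\initramfs-6.9.img'
```

Repeat with a different label and loader path for the fallback kernel, and the firmware boot menu (F11 here) shows both.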
Isn’t this how Apple’s Bootcamp works (at least on Intel based Macs)?
Personally I still use GRUB for all of the reasons you stated above. But rEFInd + kernel gets you pretty close.
My brain has leaked all the information I understood (unfortunately). Is rEFInd still active? Is there a gummiboot fork (besides systemd)?
Personally, I kind of hate Red Hat calling itself that now; it's IBM. You can tell because all of the online knowledge from the community on their websites is now pay-walled. RIP Red Hat (CentOS) (I'll miss you)
P.S. Thanks Rocky Linux (and others like it)
Hardly a problem in my experience - just hold down the key while booting.
And dual booting is rarely needed anyway and generally just a pita. Just always boot into your preferred OS and virtualize the other one when you really need it.
> also modern bootloader save the last boot option such that if Windows reboots for an update Linux does not start
You can change the EFI boot entries, including their priority, from the OS, e.g. via efibootmgr under Linux. It should be easy to set up each OS to make itself the default on boot if that's really what you want.
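For example (the entry numbers below are illustrative; run the bare command first to see yours):

```shell
# Show current entries and the BootOrder
efibootmgr
# Make entry 0003 the permanent default
efibootmgr --bootorder 0003,0000,0001
# One-shot: boot entry 0001 on the next reboot only
efibootmgr --bootnext 0001
```

The BootNext variant is what lets one OS hand off to another for a single reboot without permanently changing the default.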
> it allows to edit the cmdline of the kernel to recover a system that does not boot, e.g. start in single user mode. That can really save your day if you don't have on hand an USB stick and another PC to flash it
All motherboards I have used had an EFI shell that you can use to run EFI programs such as the Linux kernel with efistub with whatever command-line options you want.
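Roughly, that looks like this from the firmware's EFI shell (a transcript, not a POSIX shell; the kernel path and root device are examples):

```text
Shell> fs0:
FS0:\> \vmlinuz-linux root=/dev/sda2 rw single
```

The EFI-stub kernel is just an EFI executable, so the shell can run it with arbitrary arguments, which covers the single-user-mode recovery case.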
> it allows you to choose between multiple kernels and initrd images easily, again for recovery purposes
EFI can have many boot entries too.
> it has a voice for entering the UEFI setup menu
What does "a voice" here mean? Or you meant "a choice"? Either way, same as with the boot menu you can just hold down the key while booting IME.
> it allows you to boot any other EFI application, such as memtest, or efi shell. Most UEFI firmwares doesn't have a menu to do so.
In my experience the EFI shell has always been accessible without a bootloader.
I've been dual-booting Linux since the kernel 2.2.x era, and being able to do it was a major driver to migrate away from Windows. It is super important for onboarding new users who can't yet get rid of Windows fully - mostly because of gaming (yes, Proton is nice, but anything competitive that uses anti-cheat won't work, and that's the majority share of gaming). And that is the reason I still boot into Windows on my dual-boot machine: gaming. For me that Windows is just a glorified bootloader into GOG or Steam, yet desperately needed, and virtualization won't solve anything here.
For all laptops apart from Dell, Grub is the bootloader that EFI could never be.
You can use the UEFI shell for this. It's kind of a replacement for the old MS-DOS command line.
Except they've made it increasingly harder to do this over the years. Nowadays you have to guess when it is on the magic 1 second of "GRUB time" before it starts loading and then smack all the F keys and ESC key and DEL key at the same time with both hands and both feet because there is nothing on the screen that tells you which key it actually is.
All while your monitor blanks out for 3 seconds trying to figure out what HDMI mode it is using, hoping that after those 3 seconds are over that you smacked the right key at the right time.
And then you accidentally get into the BIOS instead of the GRUB.
It used to be a nice long 10 seconds with a selection menu and clearly indicated keyboard shortcuts at the bottom, and you could press ENTER to skip the 10 second delay. That was a much better experience. If you're in front of the computer and care about boot time, you hit enter. If you're not in front of the computer, the 10 seconds don't matter.
I know you can add the delay back, I just wish the defaults were better.
This is an indication of bad admin choice. The kernel defaults should not corrupt the boot process and if you add further experimental flags for testing you ought to have a recovery mechanism in place beforehand.
Windows Boot Manager can chainload into any arbitrary bit of code if you point it where it needs to hand off.
It's a feature that goes back to Windows NT (NTLDR) supporting dual boot for Windows 9x, but it can be repurposed to boot anything you would like so long as it can execute on its own merit.
eg: Boot into Windows Boot Manager and, instead of booting Windows, it can hand off control to GRUB or systemd-boot to boot Linux.
With the NT6 bootloader this appears to be limited to operating only in BIOS mode using bootmgr.exe. The traditional chainloading is still possible by pointing to a binary file which is a copy of a valid partition bootsector, whether it is a Microsoft bootsector or not.
The equivalent BCD for UEFI mode uses bootmgr.efi (instead of bootmgr.exe), and does not seem to be capable of chainloading even when there is an equivalent BOOTSECTOR boot entry on the NT6 multiboot menu.
It would be good to see an example of the NT6 bootloader successfully handling UEFI multibooting which includes starting Linux from the EXTx partition it is installed on. Still works perfectly in BIOS since early NT, but in UEFI not so much.
At least this is what an Arch Linux derivative (Artix) system of mine does, amusingly. It sort of gives an observer the impression that it's an encrypted Windows system on boot.
I'd have preferred CoreBoot or OpenFirmware, but the PC industry was too slow to move and let Intel -- still smarting from Microsoft forcing it to adopt AMD's 64-bit x86 extensions -- take control of the firmware.
And is also themed like XBill.
FreeBSD's loader does this, and its so much easier to deal with. Eg, it understands ZFS, and can pre-load storage driver modules and zfs.ko for the kernel, so that the kernel has everything it needs to boot up. It also understands module dependencies, and will preload all modules that are needed for a module you specify (similar to modprobe).
If you build the drivers for your storage media and filesystem into the kernel (not as a module), and the filesystem is visible to the kernel without any userland setup required beforehand (e.g. the filesystem is not in an LVM volume, not on an MD-RAID array, not encrypted), it is fully capable of mounting the real root filesystem and booting init directly from it.
The only point of consideration is that it doesn't understand filesystem UUIDs or labels (this is part of libuuid which is used by userland tools like mount and blkid), so you have to specify partition UUIDs or labels instead (if you want to use UUIDs or labels). For GPT disks, this is natively available (e.g. root=PARTUUID=4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709 or root=PARTLABEL=Root). For MS-DOS disks, this is emulated for UUIDs only by using the disk ID and partition number (e.g. root=PARTUUID=11223344-02).
You can also specify the device name directly (e.g. root=/dev/sda2) or the major:minor directly (e.g. root=08:02), but this is prone to enumeration order upset. If you can guarantee that this is the only disk it will ever see, or that it will always see that disk first, this is often the most simple approach, but these days I use GPT partition UUIDs.
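To find those values in the first place, the usual userland tools will print them (device names here are examples):

```shell
# GPT partition UUID and label of one partition
blkid -s PARTUUID -o value /dev/sda2
blkid -s PARTLABEL -o value /dev/sda2
# Or see them for every partition at once
lsblk -o NAME,PARTUUID,PARTLABEL
```

Note these are the *partition* identifiers from the GPT, not the filesystem UUIDs that `root=UUID=` would need, which is exactly why the kernel can resolve them without userland help.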
In practice, we have generic kernels which keep a lot of functionality in modules for real user systems running on distributions. Instead, though, we could have a loader which doesn't require this big, relatively opaque blob and instead loads the necessary modules at boot time (and does any necessary selection of critical boot devices), i.e. like FreeBSD does.
There's advantages each way. You can do fancier things with an initramfs than you ever could do in the loader. On the other hand, you can change what's happening during boot (e.g. loading different drivers) without a lot of ancillary tooling to recover a system.
And also for e.g. cases where you've got some custom stack of block devices that you need to set up before the root FS and other devices can be mounted. It's not just about loading kernel modules.
I don’t see the point of doing so however.
In many cases, you don't need initramfs. I rarely use one in embedded systems.
the other direction is to put everything in systemd. :)
And if this bit is closed source, and something doesn't work, you don't have a recourse.
Having lucked into using Lilo and no initramfs for several years now, I'm very happy with robustness and straightforwardness of the solution.
In contrast, on the rare occasion I've dealt with somebody elses' GRUB and initramfs setups, they turn out brittle and complex.
But then when it comes to the other points: yes, I want to be able to reliably boot into other systems, but both systemd-boot and GRUB are notoriously bad at detecting other systems on disks (both use install-time detection, IIRC). The only one which does a reasonable job is rEFInd. Even better, a kernel with appropriate drivers could also add kernels/systems on USB disks to the selection (why do I have to go to the UEFI menu to boot from USB?).
The next thing he completely ignores is booting into ZFS or Btrfs snapshots, which is not possible using systemd-boot AFAIK, and again would be much nicer to do with a kernel.
I stopped reading there. All these engineers who help build and defend this draconian crap should be forced to used only an iPad for the rest of their lives.
The code is here although it hasn't been touched it years: https://gitlab.com/samsartor/alamode-boot
Because that would only allow you to boot Linux kernels. One of the benefits of bootloaders is the ability to boot other OSs. You can’t kexec windows.
https://yosemitefoothills.com/LinuxBoot/BD-1Disk.htm
Here's some more documentation on this: https://www.kernel.org/doc/Documentation/x86/boot.txt
What's old is new again... except 100x more complex and likely more than necessary.