A Linux-compatible kernel under a BSD license is intriguing :)
Jokes aside, this looks great as an educational project, because you can run real Linux software on this kernel, from Vim to Doom.
Things will become really interesting once this gets ported to ARM. There is a serious amount of MCUs with 4 MB of RAM where Tilck should run fine, but even Yocto or OpenWRT won't fit.
> A Linux-compatible kernel under a BSD license is intriguing :)
> Jokes aside
If only you were joking. The project does indeed use a BSD license. Doesn't that make it unnecessarily hard on oneself, since one then cannot 'borrow' Linux kernel code, including drivers?
> [..] in kernel mode while retaining the ability to compare how the very same usermode bits run on the Linux kernel as well. That's a unique feature in the realm of educational kernels.
Nice to see managarm mentioned! managarm does have vastly different goals though, namely running Linux software unmodified (i.e. compiled from source, not necessarily binary compatible due to ABI differences) on a fully async microkernel, including desktop apps a user might find useful for general-purpose stuff. Currently we run weston (the Wayland reference compositor) and have support for Xwayland and some graphical applications (as both Qt and GTK are ported). However, there's still a large part of Linux's API surface to be covered, so support will only improve with time.
It seems more and more of us are taking a shot at kernel dev, I mean writing a full kernel.
What is really annoying with Linux (and the *BSDs) is their dependence on gcc/clang. It is not C; it is a dialect of C using bazillions of gcc extensions. It means that in the long run we get hit straight in the face with planned obsolescence of C syntax, whether from ISO tantrums (c11/c17/c7489374893749387) or from some ultra-recent gcc extension ("sorry, you need THIS extension, only available in the gcc from 2 weeks ago").
I have been coding assembly lately, and being independent of those compilers gives me an exhilarating feeling of freedom.
Who said a kernel written in 64-bit RISC-V assembly (with conservative use of a preprocessor), Linux-compatible enough to run the Steam client, Dota 2/CS:GO, and those horrible "modern" JavaScripted web browsers (I personally use only noscript/basic (x)html browsers)? It would even be fine for a self-hosted mini server (email, personal web/git (mirrors) site, p2p, all IPv6 to avoid that disgusting IPv4 NAT, etc.).
To stay realistic, I would go incrementally: start slowly by moving some Linux code back to real C (namely _NOT_ compiling only with gcc/clang, and this will be hard), then port some code paths to assembly, probably x86_64 at first, with 64-bit RISC-V to follow (and why not ARM64).
I am more than fine with Linux's GPLv2, actually. I would provide the "new" code under the Affero GPLv3 with an exception similar to Linux's "normal" program exception (but more accurately defined, to target drivers in userspace and stay explicitly OK with closed-source userspace programs using those drivers), _and_ under Linux's GPLv2 (and more, depending on the authors' wishes) to stay legal with re-used Linux code.
So it used to be possible to compile a (lightly patched?) Linux with tcc, which was part of the really cool "live disk that compiles Linux and then boots it in a matter of seconds" demo[0]. I think the problem is that kernels (need to!) care about extremely precise implementation details that most programs don't: things like "this data structure of exactly this many bytes must be placed at this exact memory address with this exact alignment, then we need to set the CPU registers to this exact list without touching the stack, then run this CPU instruction to make the hardware do the action we just staged", and they care about doing so with good performance. As I understand it, spec-based C either can't do all the things that modern OSes want, requires you to jump through a lot of unergonomic hoops, or doesn't get the performance that people want. Hence compiler extensions, to do what the kernel wants while minimizing undefined behavior and keeping performance. Honestly, the fact that every major OS (I specifically know about Linux, every BSD, and illumos née OpenSolaris, but I'd be shocked if NT/Darwin were different) needs to extend the compiler is probably a glaring criticism of the C standard.
It means that in the end, maintaining duplicated assembly code paths for different ISAs would have been cheaper and much easier than the absurdly complex Linux+gcc (or clang) duo. And I could bet that some code paths could be kept in simple and plain C (compiling with _not_ only gcc/clang) without that much loss of performance.
>> OpenMandriva Lx is a unique and independent Linux distribution, a direct descendant of Mandriva Linux and the first Linux distribution to utilise the LLVM compiler
IMHO, experimental OSes greatly benefit from supporting _some_ way of running _some_ Linux binaries: it solves the bootstrap problem and makes it so much easier to explore the system.
OT: I always felt the synchronous system call model is very dated. All the high-performance systems I know of (say, GPUs, NVMe, NFSv4, etc.) use a similar async command-list model: each exchange packages up as much work as possible, so more work is done per context switch. Instead, in Linux we just get an ever-growing list of compound system calls (like pwritev, pwrite64, renameat). There's some hope with io_uring and eBPF, but it really should be a general mechanism. There's no need for a context switch outside of exceptions (like a page fault) or blocking on command completions.
One could probably implement a Linux-compatible kernel that only implements io_uring (plus whatever sync calls are required to set it up). It may not run many precompiled executables at the moment, but it could be an interesting research direction.
I mean, in all seriousness, NetBSD and FreeBSD both have binary compatibility layers to run Linux programs.
But maybe it's just there because it's fun...and that's absolutely ok.
Porting to ARM is not a priority, and ARM could fund it if they cared.
There's also Kerla: https://github.com/nuta/kerla
0. https://github.com/managarm/managarm
Tilck – Tiny Linux-Compatible Kernel - https://news.ycombinator.com/item?id=28040210 - Aug 2021 (7 comments)
[0] https://github.com/seyko2/tccboot
All that is a very strong case for RISC-V.
[1] https://clangbuiltlinux.github.io/
I feel like I read a post saying all the patches for Clang have been merged, but my Google-fu seems to be lacking.
[1] https://man7.org/linux/man-pages/man2/readv.2.html
https://kernel-recipes.org/en/2022/talks/developing-tilck-a-...
I wrote a few words live during this talk: https://kernel-recipes.org/en/2022/live-blog-day-2-morning/#...