I really hope users and developers can embrace the amazing technology that is present in illumos and its derivatives, much of which came out of Sun Solaris.
Things like ZFS, Zones, DTrace, SMF, and Crossbow were locked up inside Sun Solaris for many years because it was proprietary. The effort to open source it took a long time, and because of how Solaris was built (with various third-party components) Sun essentially had to release it under a slightly odd license, the CDDL.
Meanwhile Linux, which had ascended to dominance after the dot-com bubble, solidified its lead as the de facto default server operating system -- a fairly well-earned dominance, built on being the first truly free Unix available for x86/PC hardware, and at a time when that was a truly lesser tier of hardware (unlike today).
But today I worry we've settled into a mindset of "the Linux way is best because it is dominant" - if you use Linux you can Google your stack traces, you know that even if the tech is inferior (coughbtrfscoughsystemtapcoughsystemdcoughepoll*cough) many other people are in the same boat and in theory hordes of developers will -- might? -- make patches to fix the problem. No one ever got fired for choosing Linux, basically.
I've migrated a bunch of personal projects to a server I own running SmartOS, an Illumos derivative. One thing I've learned is that it's actually really viable and nice these days to use an alternative operating system. I imagine it's similar with FreeBSD, OpenBSD, vanilla Illumos, etc. These systems run on a pretty impressive array of hardware and are able to leverage the near-total standardization of the hardware/BIOS stack and the inroads made by other projects to bootstrap to viability. E.g. SmartOS uses pkgsrc from the NetBSD project for packaging, took some boot technology from FreeBSD, and in general is able to tap into the universe of "Unix-like" tools, even if many of those tools are most often used on Linux.
Anyway, if you get a chance, give this stuff a spin; it's pretty eye-opening what you can accomplish. Zones in particular, to me, feel like a game changer.
> you know that even if the tech is inferior (coughbtrfscoughsystemtapcoughsystemdcoughepoll*cough) many other people are in the same boat and in theory hordes of developers will -- might? -- make patches to fix the problem.
Bit of a bad cough you have there! Need a lozenge?
With regard to technology, Solaris/Illumos went through the same init system battle with their next-gen init system SMF as GNU/Linux did with systemd. They had their share of hard-nosed refuseniks who vowed to remain on Solaris 9 forever rather than embrace the new. Lennart Poettering said that systemd was inspired in part by SMF[1].
[1] https://coreos.com/blog/qa-with-lennart-systemd.html
Sure, SMF wasn't universally liked at its introduction, just like Apple's launchd. However, the big difference is that for those two the resistance was mostly attributable to people not liking change, and that kind of resistance fades over time, since it's basically still the same thing, only somewhat different.
However, the problem most people have with systemd is its massive (and growing) scope. Because it keeps growing and changing, the "resistance" people have towards it gets refueled all the time.
The main point of contention with SMF was the dismal lack of documentation on the XML required when it was first introduced into Solaris 10 (what we call "putback"). At the time, Sun had the inventor of XML on their payroll and suddenly XML started showing up everywhere, but XML has no business in configuration files on UNIX, which were traditionally designed to be trivial to understand and parse by both humans and tools like sed, AWK, grep and cut. As if that wasn't bad enough, adding insult to injury was that the configuration to various services was being stored in a private copy of SQLite, and there was no documentation on which application was storing which settings -- the only way to know was from reading the blogs of the engineers working on SMF. As you can imagine, this led to a lot of frustration, because the primary users of SMF were system administrators and they were basically left to fend for themselves: lacking documentation and third party developers not wanting to waste their time reverse-engineering SMF to write XML manifests for their services.
However, SMF is actually very well designed and this makes it very powerful: I can, for example, declare configuration files as dependencies in the XML manifest, and when I import it, SMF will know to watch those files. If they change, SMF will call the manifest's refresh method, whatever it is. That's barely scratching the surface. What I think no one on the outside realized when SMF came out is that it was part of FMA, the fault management architecture, which makes Solaris and its derivatives able to heal themselves, provided the system is configured with enough redundancy, even on really unreliable, shitty hardware like Intel/AMD-based systems. Nowadays SMF is well regarded and well understood, but the engineers rightly regret making it use XML, and it took years for the documentation to reach the expected level of high quality which we're so spoiled with in the rest of the Solaris and illumos manual pages.
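For readers who've never seen one, here is a minimal sketch of the kind of manifest being described. The service name, daemon, and config path are made up for illustration; the point is the file declared as a dependency plus a refresh method that is whatever you say it is:

    <?xml version="1.0"?>
    <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
    <service_bundle type="manifest" name="myapp">
      <service name="site/myapp" type="service" version="1">
        <create_default_instance enabled="false"/>
        <single_instance/>
        <!-- Declare the config file as a dependency; SMF tracks it, and a
             refresh of the service re-evaluates it. -->
        <dependency name="config" grouping="require_all" restart_on="refresh" type="path">
          <service_fmri value="file://localhost/etc/myapp.conf"/>
        </dependency>
        <exec_method type="method" name="start"
            exec="/opt/myapp/bin/myappd --config /etc/myapp.conf" timeout_seconds="60"/>
        <exec_method type="method" name="stop" exec=":kill" timeout_seconds="60"/>
        <!-- The refresh method is whatever the manifest says; here, a SIGHUP. -->
        <exec_method type="method" name="refresh" exec=":kill -HUP" timeout_seconds="60"/>
      </service>
    </service_bundle>

You'd import it with svccfg import and drive it with svcadm enable / svcadm refresh; whether that's elegant or the "crawling XML horror" mentioned elsewhere in this thread is left to the reader.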
Agreed, too much monoculture is harmful. Although I've been using Linux since the beginning ('92), I'm installing more BSDs these days because quality is declining in the Linux world (as you well coughed).
Small historical timeline correction though:
> Things like ZFS, Zones, DTrace, SMF, and Crossbow were locked up inside Sun Solaris for many years
Solaris 10 (which contained the bulk of these) came out in 2005, and OpenSolaris open-sourced the code also in 2005. The timeline is more complex because not everything came in Solaris 10 at once and not everything was open-sourced at once... but at a high level, most of these technologies were open-sourced relatively soon after their initial release.
(I was at Sun back then and involved with the OpenSolaris effort.)
Vehemently agree. Monoculture is a huge risk to IT generally right now. I love Linux and must use it for some things, because nothing else is really supported, but I really worry about a world where all the servers run Linux.
I think the thing I miss most about the 90s and early 00s was that different unix-like systems took big, bold, and divergent approaches. Computing felt less “nailed down”... like there were more possibilities.
I got into OpenSolaris shortly before Oracle swooped in and killed it. For a long while, I ran OpenIndiana on a server where I work, but I found that it really wasn't friendly to white-box hardware. In particular, it was prone to hiccuping badly if I hot-swapped SATA drives on an AMD AHCI controller. I probably would have had better luck if the system had an SAS HBA for the drives, but that wasn't going to happen.
I finally switched it to Ubuntu Server with ZoL, and the hot-swap woes went away completely.
As for systemd vs. SMF, I'd take systemd any day over the crawling XML horror of SMF. :)
illumos split from Solaris around the time Solaris 11 was released. Since then the development of the two has mostly diverged. Unlike others, I think that Solaris is still being developed, just not so visibly: partly because it is limited to paying customers (non-paying customers can only access official releases like 11.1, 11.2, 11.3 and so on, and those come out seldom), and partly because most changes happen in not-so-obvious parts (e.g. low-level tools and protocols).
BTW: from its Solaris inheritance illumos got a lot of technologies besides the always-mentioned SMF, ZFS, Crossbow and DTrace -- e.g. authorizations, privileges and roles, things that I miss when I look at other OSes. Many valuable things are hidden below the surface.
There is a big difference between the development philosophies of Linux and illumos. For me, Linux now plays the part Windows played two decades ago: it is heading towards a monoculture, with masses of lemmings running along with it and mostly ignoring the superior technology around them.
I'm not a huge ZFS fan but I think ZFS-on-Linux more or less spells the death of BtrFS. BtrFS has been radically unfinished and unready for over a decade now; meanwhile ZFS is decades old and in-production for much of that time. Does anyone use BtrFS in production? To the best of my knowledge, the answer is no.
(To be clear, I don't think Solaris is the best platform to run ZFS today.)
Illumos is a fork of OpenSolaris that's under active development, and the last Unix based on AT&T Unix System V. Its features have been around forever, and have been ported to Linux and BSD (with varying levels of success). So "next-gen" is a strange way to describe it...
Probably the worst ever OS to develop against. I worked for almost 10 years with a product codebase that was maintained on Solaris (really nice), Linux (nice), AIX (not so nice) and HP-UX (not at all nice).
HP-UX is EOL. I use both Red Hat and HP-UX at work. HPE's plan is to containerize the OS (an x86 container, not an HP-UX container) to help with migrations. Although some people have replied with less than positive things to say about HP-UX, there are several features in Linux that stem from HP-UX, such as LVM. I'm hoping HP-UX gets open sourced so the good parts can be learned from and used in other projects.
I ran HP-UX at home for a very long time. I loved the OS and that it's SVR4, but if one didn't have connections to get the special media, one lacked software mirroring and the compilers, which made running HP-UX a non-starter after a while. I even went so far as to port quite a bit of open source software to it, and learned how to create PSFs -- packages and bundles in its native (and quite advanced) packaging format.
ZFS is still a pain to use for your root filesystem, not because ZFS on Linux doesn't work but because it's been legally questionable due to license constraints.
> ZFS is still a pain to use for your root filesystem, not because ZFS on Linux doesn't work but because it's been legally questionable due to license constraints.
There are also technical reasons. ZFS is designed for Solaris' virtual filesystem (VFS) so every other operating system has to wrap some sort of Solaris Porting Layer ("SPL") on top to integrate it into their system. These are not insurmountable (clearly, it works well now).
I think part of the reason for not mentioning OpenSolaris in the main page is because it, in fact, isn't really being worked on... it's effectively dead, and Illumos really isn't the same as it was a decade ago.
Why not state that it is a fork of OpenSolaris on the homepage? Also the "next gen FS" is just ZFS, first released in 2005 (and they say software moves fast).
I think the homepage is mostly targeting techies; why not put a little more jargon in there and be transparent about the origins?
It's been a tough needle to thread. Most everybody who has positive feelings about Solaris history already knows where we come from, and it's mentioned in the history section of our documentation site.
For others, prominently featuring our Solaris heritage doesn't always result in a good first impression. It's pretty easy to hear Solaris and think "old" and "unmaintained" and so on, for whatever reason. We're trying instead to have people judge the software we ship first, rather than prejudge based on our history.
The other perception we try to combat is that we're compatible with binaries for Solaris 11 -- after a decade of divergent development on both sides, we're just not.
> Most everybody who has positive feelings about Solaris history already knows where we come from
I was in doubt. There's a new OS release every week now; being based on a mature one only gives me more hope.
Maybe also say something about what you want to achieve (a serious alternative for Linux on the server), or name some companies that use it currently in production.
> It's pretty easy to hear Solaris and think "old" and "unmaintained"
This is better countered by a list of recent releases and/or recent commits.
> The other perception we try to combat is that we're compatible with binaries for Solaris 11 -- after a decade of divergent development on both sides, we're just not.
This is not for the front page, but good to mention at some point.
> Also the "next gen FS" is just ZFS, first released in 2005 (and they say software moves fast)
And yet, ZFS still seems to be the only filesystem in the world that looks remotely appealing. Or can you name a single newer filesystem that has a reasonable subset of the features and does not actually corrupt your data (let alone run on more than one OS)? The main competitors seem to be BTRFS (Linux only) and APFS (Apple only), and the former, despite being around a decade, still had plenty of data corruption reports until fairly recently (and is being removed from RHEL), whilst APFS doesn't even bother to checksum your data.
5 years ago, I would have agreed with you about BtrFS.
Since 2012 or 2013, I'd been using BtrFS essentially as a cache on a build server; ephemeral data was kept there, and if it was lost, it just meant that later operations would be slow. And I did see corruption: every few months I'd end up having to reformat it.
And then at some point I realized that I hadn't had to reformat it in a long time. And so when I got a new laptop in 2016 I felt comfortable formatting the whole thing (including / and /home) with BtrFS. And then I was building another server in September 2018, which would store a bunch of data (currently about 4TB), and went with BtrFS. And then a work laptop later in 2018.
So using BtrFS on a variety of systems (all using close to the latest kernels the whole time), I haven't seen any data corruption since at least 2016. I'm sad that RH decided to no longer invest in it.
ZFS, as released in 2005, was light years ahead of the file systems of the time (it can do instant snapshots, compression, at-rest encryption, protect you from bit rot, etc.) and is still inspiring filesystems today.
And Solaris carries with it baggage that I don’t blame them for not highlighting.
Matter of perspective and presentation. I think -- or at least I think the thinking is -- that at some point a project has to move on from being a fork of its predecessor and get comfortable with having its own identity.
“Illumos is a cool Unix operating system” sounds a little bit nicer than “Illumos is an operating system for OpenSolaris refugees”.
> at some point a project has to move on from being a fork of their predecessor
Sure. But what about doing so when it has become an established name? Right now I'm only interested because it is a fork, those toy OSes that get released every week rarely tickle my interest.
> Why not state that it is a fork of OpenSolaris on the homepage?
Why does it matter? Ubuntu doesn't say it's a fork of Debian. SUSE doesn't say it's a fork of Slackware. OpenBSD doesn't say it's a fork of NetBSD, and FreeBSD and NetBSD don't talk about 386BSD either. Also, I'm pretty sure Microsoft wouldn't mention OS/2 once in any of their Windows 10 documentation, let alone on their landing page.
Pretty much all modern operating systems are evolutions, forks, derivatives, etc but usually they diverge enough over time to be their own distinct platform. So while their heritage is an interesting topic it doesn't actually explain anything about the OS and thus has no place on a landing page.
> Also the "next gen FS" is just ZFS, first released in 2005 (and they say software moves fast).
Because it's still a "next gen FS" and will be until it becomes common current gen with a new generation superseding it (software doesn't actually move that fast when it comes to file systems).
Personally I can't see ZFS superseded for a long time to come because the way we store data has changed (eg cloud services might use object storage such as S3 instead of large NFS mounted volumes) and thus the file systems ZFS was looking to displace have once again become "good enough" for most people.
I think it would take another seismic shift in how servers and infrastructure are architected. Likely a shift away from proprietary clouds and towards open cloud computing standards -- if such a movement is even possible in a capitalist-driven world. But I digress.
I'm a "casual" user - I do various hobbyist activities (Python and machine learning projects, 3d modeling, photo editing, et al) as well as basic stuff like email and browsing Hacker News. I don't intend to do much "low-level hacking" or application development.
Is Illumos interesting for someone like me? Or should I stick with GNU/Linux?
Competition from Linux has become very stiff. For me, Kubuntu 19.10 delivers on all of the promises that OpenSolaris made back in the day. My laptop has two 1TB SSDs in a ZFS mirror. Docker works as you'd hope, creating a new ZFS dataset for each container. I have remarkably good hardware support, everything just works. I'd say that the user experience rivals or beats MacOS or Windows, although at this point I'd say it is largely just a matter of preference.
Linux still lacks the multitenant security guarantees of Solaris Zones but this isn't so bad. Hard multitenancy is a problem that only concerns a fairly small subset of people and even then, there are solutions. You can do hard multitenancy with Linux containers (namespaces/cgroups to be pedantic) if you start restricting the system call table. There is also Firecracker[1] which obviously isn't as efficient as Zones (because it uses VMs), but is proven in production and does provide additional security guarantees.
[1] https://github.com/firecracker-microvm/firecracker-container...
The technical advantages of Illumos are outweighed (or rivaled) at this point, in my opinion. When I show people things on my laptop, I always end up talking about the desktop cube in KDE or my neovim/tmux/zsh setup. Due to ZFS snapshots and the power of zfs send/recv, I expect this system installation to last for years to come. No more reinstalls, we're good, this operating system thing is covered.
To get off into the weeds a bit... I've actually spent quite a bit of time using Illumos. The epicenter of The Suck begins, in my mind, with the effort involved in actually building the system.
GCC (and Clang) obviously didn't exist back in the day, so none of the Makefiles for the base system are written for a compiler that was released in the past 20 years. There is actually a wrapper that takes SunCC arguments and calls GCC... so that kinda works, right?
Okay, well, yes. It works. But then, it all kinda starts snowballing. There is a big investment in a debugger from the 1980s which can't unwind stacks without frame pointers. So... we have to emit frame pointers. Okay, that works. Because we care a lot about this, we'll also ensure that all function arguments promoted to registers are also copied to the stack so we can always look at them. Okay, fair enough.
Is this idiosyncratic enough for you yet? Okay, let's step it up a few notches and actually implement the Linux system call table on top of this. Wait, what? The struggle is real.
This is all technically brilliant in any number of ways, but I have things to do this week. All of the man years of effort spent on Linux and the surrounding ecosystem have actually solved a lot of real technical problems and solved them incredibly well.
Like any large scale project with a long history, we certainly have our share of eccentricities. We're working on moving the argument translation from cw (our "compiler wrapper") more directly into the Makefiles themselves.
I would note that Studio and GCC are both older than 20 years if you're going by the dates when they were first released; the releases of both that we were using (Studio is no longer in use) came out more recently than 20 years ago. We currently recommend people use GCC 7.3 or 7.4 to build the OS, and those were released in the last two years.
The debugger I believe you're talking about is MDB[1], which is emphatically not from the 1980s. It was started as a project in the lead-up to Solaris 7, as I understand, which was closer to 2000. GDB was first released in 1986, but I don't think either of these dates is a meaningful observation: they're both actively maintained debuggers with a different approach and a different focus.
[1] https://en.wikipedia.org/wiki/Modular_Debugger
There are lots of arguments for, and presumably against, the use of %rbp as a frame pointer. They're all trade-offs. The amd64 architecture is, at least, less register starved than 32-bit x86. Stack unwinding with correct use of the frame pointer is substantially less complex than using the unwinding information provided by DWARF sections. This helps a lot when making the stack() and ustack() DTrace routines available, as they need to unwind the stack in an extremely limited context in the kernel. Saving arguments to the stack (-msave-args) is another trade-off tilting the scale toward a better debugging experience, because yes, we do care a lot about that as a project.
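To make the unwinding point concrete, here's a toy user-space sketch (not illumos code; the function names are invented) of why frame-pointer unwinding is cheap enough to do from probe context: with %rbp maintained as a frame pointer, every frame stores the caller's %rbp and the return address at fixed offsets, so a walk is just a few loads.

    #include <stdio.h>
    #include <stdint.h>

    /*
     * Assumes amd64 code compiled with -O0 -fno-omit-frame-pointer, so each
     * frame begins with "pushq %rbp; movq %rsp, %rbp":
     *   fp[0] = caller's saved %rbp, fp[1] = return address.
     */
    static void
    walk_stack(void)
    {
        uintptr_t *fp = (uintptr_t *)__builtin_frame_address(0);
        int depth = 0;

        while (fp != NULL && fp[0] != 0 && depth++ < 64) {
            printf("return address %p\n", (void *)fp[1]);
            fp = (uintptr_t *)fp[0];    /* follow the saved-%rbp chain */
        }
    }

    static void inner(void)  { walk_stack(); }
    static void middle(void) { inner(); }
    static void outer(void)  { middle(); }

    int
    main(void)
    {
        outer();    /* prints the call chain back up through main */
        return (0);
    }

Doing the same walk from DWARF unwind information means parsing .eh_frame and interpreting its state machine, which is exactly what you don't want to be doing from a DTrace probe firing in arbitrary kernel context.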
At the end of the day, you should use whatever makes you happy and solves problems for you. Those of us who work on illumos, use it at home, or ship it in products, are doing so for just that reason.
Illumos has some incredible technology packed into it but lots of those have been ported, re-implemented, or turned into billion dollar companies. The extra special parts like being able to debug v8 in mdb have this strange alignment issue: How many node developers are going to fire up an alternative operating system and learn a new debugger to be able to find a memory leak?
> How many node developers are going to fire up an alternative operating system and learn a new debugger to be able to find a memory leak?
I don't think this is much of a barrier, especially when "fire up" means: run a VM instance. Using various VMs and/or containers for development is already a common workflow.
I think the unfamiliarity of Illumos for most developers would be more of a barrier.
> I think the thing I miss most about the 90s and early 00s was that different unix-like systems took big, bold, and divergent approaches.
As long as BSD stays with BSD license, it won't happen.
> coughbtrfscoughsystemtapcoughsystemdcoughepoll
Btrfs is not as mature, but design-wise it is competitive with ZFS. Pool resizing is actually a lot nicer in Btrfs.
SystemTap was not as good as DTrace, but bpftrace is IMHO better than both.
Event ports remain better than epoll, but as a user that's not particularly significant.
systemd is IMHO better than SMF was.
The combination of Linux namespaces and cgroups lets you do everything Zones can, but is more flexible.
https://www.anandtech.com/show/13924/intel-to-discontinue-it...
https://en.wikipedia.org/wiki/Illumos#Current_distributions
A good contrast between Zones, jails, and containers in Linux:
https://www.youtube.com/watch?v=BsA4DcX47E0
> ZFS is designed for Solaris' virtual filesystem (VFS) so every other operating system has to wrap some sort of Solaris Porting Layer ("SPL") on top to integrate it into their system.
As an OS person, I found this summary on the subject pretty interesting: https://wiki.freebsd.org/AndriyGapon/AvgVfsSolarisVsFreeBSD
And Windows, apparently, at least for DTrace.
* https://openzfsonwindows.org
* https://twitter.com/openzfsonwindow
Because it's still a "next gen FS" and will be until it becomes common current gen with a new generation superseding it (software doesn't actually move that fast when it comes to file systems).
Well the thing is that *nix is so old that anything made after 2000 is certainly "next gen".
My point was more: just say it's ZFS instead of a buzzword-thingy that needs my click to find out ZFS is meant.