Not to mention there are some significant performance issues on aarch64. Love2D, a popular game framework, opted not to use the JIT on macOS for this reason.
A new garbage collector has been an open issue for LuaJIT for a long time; it would fix this particular problem and make the language even faster. Last I checked, it was actively being worked on.
I think it gets about the right amount of credit. It isn’t obscure by any means: OpenResty is used quite a bit in odd places, and many games have used it. PyTorch (well, what became it) was originally written in it (it was just Torch then, but basically LuaTorch).
It doesn’t get more adoption because its warts and limitations really start to show once it’s pushed into more of a “real” language role. It doesn’t have the browser monopoly. It’s also quite slow, and LuaJIT doesn’t always magically fix that.
There are very valid complaints about Lua. It lacks useful quality of life features. Some of its design decisions grate on users coming from other languages. Its standard library is barebones verging on anemic, and its ecosystem of libraries does not make up for it.
But after using Lua in many places for well over a decade, I’ve gotta say, this is the first time I’ve heard someone claim it’s slow. Even without JIT, just interpreter to interpreter, it’s consistently 5-10x faster than similar languages like Python or Ruby. Maybe you’re comparing it to AOT systems languages like C and C++? That’s obviously not fair. But if you put LuaJIT head to head with AOT systems languages you’ll find it’s within an order of magnitude, just like all the other high quality JITs.
Which, yeah, I don’t think that’s much of a revelation to anyone discussing Intel’s future. There’s no argument put forward that it’s not still the best path for Intel.
FreeBSD is like a great grandparent, related but still very different.
In fact, it's been partially done for FreeBSD: https://github.com/dspinellis/unix-history-repo
We could in principle do something similar for Darwin (if we had enough of the historical code). Darwin is the core of MacOS, which is based on NeXT, which was based on BSD with a new kernel. That makes MacOS every bit as much a member of the Unix/BSD family as FreeBSD is.
Mac OS X was essentially a continuation of NeXTSTEP, which is BSD with a novel kernel. In fact, if you look into the research history of the Mach kernel at the core of XNU, it was intended as a novel kernel _for_ BSD. NeXT went and hired one of the key people behind Mach (Avie Tevanian), and he became one of the core systems guys who designed NeXTSTEP as a full OS around Mach.
Early in the proliferation of the Unix family, member systems went in one of two directions -- they based their OS on upstream AT&T Unix, or they based it on Berkeley's BSD and added their own features on top. NeXT was one of the latter. Famously, the original SunOS also was.
While Sun would eventually work closely with AT&T to unify their codebase with upstream, NeXT made no such change. NeXTSTEP stayed BSD-based.
The other extant BSDs like FreeBSD and NetBSD were also based directly on the original BSD code, through 386BSD.
If I have my history correct, Apple would later bring in code improvements from both NetBSD and FreeBSD, including some kernel code, and newer parts of the FreeBSD userland to replace their older NeXT userland, which was based on now-outdated 4.3BSD code. I think this is where the confusion comes in. People assume MacOS is only "technically" a Unix by way of having borrowed some code from NetBSD and FreeBSD. They don't realize that it's fully and truly a BSD and a Unix by way of having been built from NeXT and tracing its lineage directly through the original Berkeley Software Distribution. The code they borrowed replaced older code that was itself BSD-derived.
If Sable dedicates their patents to the public, what does it mean for CF to be granted a "royalty-free license to its entire patent portfolio"?
>The video depicts resistance to Filho's request to get information to statically encode file system interface semantics in Rust bindings, as a way to reduce errors. Ts'o's objection is that C code will continue to evolve, and those changes may break the Rust bindings – and he doesn't want the responsibility of fixing them if that happens.
>"If it isn't obvious, the gentleman yelling in the mic that I won't force them to learn Rust is Ted Ts'o. But there are others. This is just one example that is recorded and readily available."
A lot of the quotes, this one in particular, really come off as petty. Not wanting a maintenance burden on your hands is a perfectly sane response from Ts'o.
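To be fair to the request itself, "statically encoding the semantics" isn't exotic. Here's a toy sketch of the idea (all names invented, nothing to do with the actual Rust-for-Linux bindings): make "you must hold the inode lock before writing" a compile error rather than a convention.

    // Hypothetical sketch, not real binding code: the "lock before write"
    // rule is enforced by the type system instead of by review.
    struct Inode {
        data: Vec<u8>,
    }

    // Holding a LockedInode is proof the lock is held; the only way to
    // get one is through Inode::lock().
    struct LockedInode<'a>(&'a mut Inode);

    impl Inode {
        fn lock(&mut self) -> LockedInode<'_> {
            // (a real binding would take the kernel's inode lock here)
            LockedInode(self)
        }
    }

    impl<'a> LockedInode<'a> {
        // write() only exists on the locked handle, so an unlocked
        // write simply doesn't compile.
        fn write(&mut self, buf: &[u8]) {
            self.0.data.extend_from_slice(buf);
        }
        // (a real binding would release the lock in a Drop impl)
    }

    fn main() {
        let mut inode = Inode { data: Vec::new() };
        let mut locked = inode.lock();
        locked.write(b"hello");
        drop(locked); // lock released here
        // inode.write(b"oops"); // does not compile: no write() on Inode
        println!("{} bytes", inode.data.len());
    }

And Ts'o's point is exactly that the C side keeps moving, so every assumption encoded this way becomes something someone has to keep true.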
>But the rules put in place to guide open source communities tend not to constrain behavior as comprehensively as legal requirements. The result is often dissatisfaction when codes of conduct or other project policies deliver less definitive or explainable results than corporate HR intervention or adversarial litigation.
>Citing the Linux kernel community's reputation for undiplomatic behavior, The Reg asked whether kernel maintainers need to learn how to play well with others.
Lol. You need to set firm rules. No, not like that! You need to play nice with others! Upper management telling you off is bad only when The Register says so.
>As an alternative, [Drew DeVault] has proposed starting anew, without trying to wedge Rust into legacy C code. He wrote that "a motivated group of talented Rust OS developers could build a Linux-compatible kernel, from scratch, very quickly, with no need to engage in LKML [Linux kernel mailing list] politics. You would be astonished by how quickly you can make meaningful gains in this kind of environment; I think if the amount of effort being put into Rust-for-Linux were applied to a new Linux-compatible OS we could have something production-ready for some use cases within a few years."
Ah, yes. Drew DeVault. The expert in A) non-divisive/petty community politics, and B) developing 30M LOC OS kernels with billions of dollars of R&D investment in a few years with a small team in an experimental language. "Just make Linux 2", it's so simple, why didn't we think of this?!
This is an unusual take considering Drew DeVault actually does have experience developing new kernels [1] in experimental languages [2].
Drew's own post [3] (which the linked article references) doesn't downplay the effort involved in developing a kernel. But you're definitely overplaying it. 30M SLOC in the Linux kernel is largely stuff like device drivers, autogenerated headers, etc. While the Linux kernel has a substantial featureset, those features comprise a fraction of that LOC count.
Meanwhile, what Drew's suggesting is a kernel that aims for ABI compatibility. That's significantly less work than a full drop-in replacement, since it doesn't imply every feature is supported.
Not to mention, some effort could probably be put into developing mechanisms to assist in porting Linux device drivers and features over to such a replacement kernel, using a C interface boundary that lets them run the original unsafe code as a stopgap.
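To make that concrete, here's a hand-wavy sketch of what that stopgap boundary might look like (every name is invented, and it obviously wouldn't link without the real driver's object code; this is just the shape):

    // The unmodified C driver gets compiled and linked as-is; the Rust
    // side drives it through one narrow unsafe seam so the rest of the
    // kernel stays safe. All names here are hypothetical.
    use core::ffi::{c_int, c_void};

    extern "C" {
        // Symbols assumed to come from the original C driver object code.
        fn legacy_driver_probe(dev: *mut c_void) -> c_int;
        fn legacy_driver_read(dev: *mut c_void, buf: *mut u8, len: usize) -> c_int;
    }

    // The safe wrapper the rest of the kernel talks to; the unsafety is
    // quarantined at this one boundary.
    struct LegacyDevice {
        raw: *mut c_void,
    }

    impl LegacyDevice {
        fn probe(raw: *mut c_void) -> Option<Self> {
            // SAFETY: raw must be a device pointer the C driver understands;
            // that contract lives here and nowhere else.
            if unsafe { legacy_driver_probe(raw) } == 0 {
                Some(LegacyDevice { raw })
            } else {
                None
            }
        }

        fn read(&mut self, buf: &mut [u8]) -> Result<usize, ()> {
            // SAFETY: buf is a valid, writable region of buf.len() bytes.
            let n = unsafe { legacy_driver_read(self.raw, buf.as_mut_ptr(), buf.len()) };
            if n < 0 { Err(()) } else { Ok(n as usize) }
        }
    }

Each driver ported this way starts out as wrapped C and can be rewritten behind the same interface later.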
[1] https://sr.ht/~sircmpwn/helios/
[2] https://harelang.org/
[3] https://drewdevault.com/2024/08/30/2024-08-30-Rust-in-Linux-...
Would be interested to know what others have set up, as I'm not really happy with how I do it. I run ZFS locally on my NAS. I back up to it from my PC via rsync, triggered daily by anacron. From the NAS I use rclone to send encrypted backups to Backblaze.
I'd be happier with something more frequent from PC to NAS. Syncthing, maybe? Then just do a zfs send to some off-site ZFS server.