> In Rust, creating a mutable global variable is so hard that there are long forum discussions on how to do it. In Zig, you can just create one, no problem.
Well, no, creating a mutable global variable is trivial in Rust; it just requires either `unsafe` or using a smart pointer that provides synchronization. That's because Rust programs are re-entrant by default: the language enforces thread-safety at compile time. If you don't care about statically-enforced thread-safety, then it's as easy in Rust as it is in Zig or C. The difference is that, unlike Zig or C, Rust gives you the tools to enforce more guarantees about your code's possible runtime behavior.
After using Rust for many years now, I feel that a mutable global variable is the perfect example of a "you were so busy figuring out whether you could, you never stopped to consider whether you should".
Having moved back to a language that does this kind of thing all the time, it now seems like insanity to me with respect to safety of execution.
Global mutable state is like a rite of passage for devs.
Novices start slapping global variables everywhere because it makes things easy and it works, until it doesn't and some behaviour breaks because... I don't even know what broke it.
On a smaller scale, mutable date handling libraries also provide some memorable WTF debugging moments until one learns (hopefully) that adding 10 days to a date should probably return a new date instance in most cases.
> [...] is trivial in Rust [...] it just requires [...]
This is a tombstone-quality statement. It's the same framing people tossed around about C++ and Perl and Haskell (also Prolog back in the day). And it's true, insofar as it goes. But languages where "trivial" things "just require" rapidly become "not so trivial" in the aggregate. And Rust has jumped that particular shark. It will never be trivial, period.
> languages where "trivial" things "just require" rapidly become "not so trivial" in the aggregate
Sure. And in C and Zig, it's "trivial" to make a global mutable variable, it "just requires" you to flawlessly uphold memory access invariants manually across all possible concurrent states of your program.
Stop beating around the bush. Rust is just easier than nearly any other language for writing concurrent programs, and it's not even close (though obligatory shout out to Erlang).
It just requires unsafe. One concept, and then you can make a globally mutable variable.
And it's a good concept, because it makes people feel a bit uncomfortable to type the word "unsafe", and they question whether a globally mutable variable is in fact what they want. Which is great! Because this is saving every future user of that software from concurrency bugs related to that globally mutable variable, including ones that aren't even present in the software now but that might get introduced by a later developer who isn't thinking about the implications of that global unsafe!
A language that makes creating a global mutable variable feel like creating any other binding is an anti-pattern, and I'm glad Rust doesn't try to pretend the two are the same thing.
If you treat shared state like owned state, you're in for a bad time.
so does the rust compiler check for race conditions between threads at compile time? if so then i can see the allure of rust over c, some of those sync issues are devilish. and what about situations where you might have two variables closely related that need to be locked as a pair whenever accessed.
> so does the rust compiler check for race conditions between threads at compile time?
My understanding is that Rust prevents data races, but not all race conditions.
You can still get a logical race where operations interleave in unexpected ways. Rust can’t detect that, because it’s not a memory-safety issue.
So you can still get deadlocks, starvation, lost wakeups, ordering bugs, etc., but Rust gives you:
- No data races
- No unsynchronized aliasing of mutable data
- Thread safety enforced through type system (Send/Sync)
> what about situations where you might have two variables closely related that need to be locked as a pair whenever accessed.
This fits quite naturally in Rust. You can let your mutex own the pair: locking a `Mutex<(u32, u32)>` gives you a guard that lets you access both elements of the pair. Very often this will be a named `Mutex<MyStruct>` instead, but a tuple works just as well.
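A minimal sketch of that mutex-owns-the-pair pattern (the pair of counters here is a made-up example):

```rust
use std::sync::Mutex;

// Two related counters guarded as a single unit (hypothetical example).
static PAIR: Mutex<(u32, u32)> = Mutex::new((0, 0));

fn main() {
    {
        // One lock protects both values, so they can never be observed
        // mid-update relative to each other.
        let mut guard = PAIR.lock().unwrap();
        guard.0 += 1;
        guard.1 += 2;
    } // lock released here

    let (a, b) = *PAIR.lock().unwrap();
    assert_eq!((a, b), (1, 2));
}
```

Because the data lives inside the `Mutex`, there is no way to reach either element without first taking the lock.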
In Rust, there are two kinds of references: exclusive (`&mut`) and shared (`&`). Rustc guarantees that while you hold an exclusive reference, no other reference to that data exists, on any thread. If your thread has an exclusive reference, then it can mutate the contents of the memory. Rustc also guarantees that you won't end up with a dangling reference inside your threads, so the memory you reference is always live.
Because Rust guarantees you won't have multiple exclusive (and thus mutable) references, a whole class of race conditions, data races, is ruled out.
Sometimes, however, these compile-time rules are too strict, and you need to relax the guarantees. To handle those cases, there are types (such as `RefCell`) that enforce the same shared/exclusive borrowing rules (i.e. one exclusive, or many shared references) at runtime. That means you have an object which you can borrow in multiple locations; but if you have an active shared borrow, you can't take an exclusive one (the program will, by design, panic), and if you have an active exclusive borrow, you can't take any other borrow at all.
This, however, isn't sufficient for multithreaded applications; it's sufficient when many places reference the same object within a single thread. For multi-threaded programs, we have `RwLock`s, which enforce the same one-writer-or-many-readers rule across threads.
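A small sketch of those runtime borrow rules, using `RefCell` (the data is arbitrary):

```rust
use std::cell::RefCell;

fn main() {
    let data = RefCell::new(vec![1, 2, 3]);

    // Any number of shared borrows may coexist.
    let r1 = data.borrow();
    let r2 = data.borrow();
    assert_eq!(r1.len() + r2.len(), 6);
    drop((r1, r2));

    // An exclusive borrow is fine once the shared ones are gone.
    data.borrow_mut().push(4);

    // Overlapping shared and exclusive borrows are rejected at runtime:
    // borrow_mut() here would panic; try_borrow_mut() reports the error.
    let _shared = data.borrow();
    assert!(data.try_borrow_mut().is_err());
}
```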
Rust's approach to shared memory is in-place mutation guarded by locks. This approach is old and well-known, and has known problems: deadlocks, lock contention, etc. Rust specifically encourages coarse-grained locks by design, so the lock-contention problem is especially pressing.
There are other approaches to shared memory, like ML-style mutable pointers to immutable data (perfected in Clojure) and actors. Rust has nothing to do with them, and as far as I understand the core choices made by the language make implementing them very problematic.
If I created a new programming language I would just outright prohibit mutable global variables. They are pure pure pure evil. I can not count how many times I have been pulled in to debug some gnarly crash and the result was, inevitably, a mutable global variable.
They are to be used with caution. If your execution environment is simple enough they can be quite useful and effective. Engineering shouldn't be a religion.
> I can not count how many times I have been pulled in to debug some gnarly crash and the result was, inevitably, a mutable global variable.
I've never once had that happen. What types of code are you working on that this occurs so frequently?
You need to be pragmatic and practical. Extra large codebases have controllers/managers that must be accessible by many modules. A single global vs dozens of local references to said “global” makes code less practical.
In my programming language (see my latest submission) I wanted to do so. But then I realized, that in rare cases global mutable variables (including thread-local ones) are necessary. So, I added them, but their usage requires using an unsafe block.
Not really possible in a systems level programming language like rust/zig/C. There really is only one address space for the process... and if you have the ability to manipulate it you have global variables.
There are lots of interesting things you could do with a Rust-like (in terms of correctness properties) high-level language, and getting rid of global variables might be one of them (though I can see arguments in both directions). Hopefully someone makes a good one some day.
That seems unusual. I would assume trivial means the default approach works for most cases. Perhaps mutable global variables are not a common use case. Unsafe might make it easier, but it’s not obvious and probably undesired.
I don’t know Rust, but I’ve heard pockets of unsafe code in a code base can make it hard to trust in Rust’s guarantees. The compromise feels like the language didn’t actually solve anything.
Outside of single-initialization/lazy-initialization (which are provided via safe and trivial standard library APIs: https://doc.rust-lang.org/std/sync/struct.LazyLock.html ) almost no Rust code uses global mutable variables. It's exceedingly rare to see any sort of global mutable state, and it's one of the lovely things about reading Rust code in the wild when you've spent too much of your life staring at C code whose programmers seemed to have a phobia of function arguments.
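For reference, the `LazyLock` pattern looks like this (the lookup table is a made-up example; `LazyLock` is stable as of Rust 1.80):

```rust
use std::collections::HashMap;
use std::sync::LazyLock;

// A hypothetical lookup table, initialized exactly once on first access.
static UNITS: LazyLock<HashMap<&'static str, u64>> =
    LazyLock::new(|| HashMap::from([("KiB", 1024), ("MiB", 1024 * 1024)]));

fn main() {
    // The first access runs the closure; every later access just reads.
    assert_eq!(UNITS["KiB"], 1024);
    assert_eq!(UNITS["MiB"], 1_048_576);
}
```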
The default approach is to use a container that enforces synchronization. If you need manual control, you are able to do that, you just need to explicitly opt into the responsibility that comes with it.
If you use unsafe to opt out of guarantees that the compiler provides against data races, it’s no different than doing the exact same thing in a language that doesn’t protect against data races.
So I've got a crate I built that has a type that uses unsafe. Couple of things I've learned. First, yes, my library uses unsafe, but anyone who uses it doesn't have to deal with that at all. It behaves like a normal implementation of its type, it just uses half the memory. Outside of developing this one crate, I've never used unsafe.
Second, unsafe means the author is responsible for making it safe. Unsafe code in Rust must follow the same rules that safe code does; the compiler just stops checking them for you. It does not mean that you don't have to follow the rules. If one instead uses unsafe to violate the rules, then the code will certainly cause crashes.
I can see that some programmers would just use unsafe to "get around a problem" caused by safe rust enforcing those rules, and doing so is almost guaranteed to cause crashes. If the compiler won't let you do something, and you use unsafe to do it anyway, there's going to be a crash.
If instead we use unsafe to follow the rules, then it won't crash. There are tools like Miri that allow us to test that we haven't broken the rules. The fact that Miri did find two issues in my crate shows that unsafe is difficult to get right. My crate does clever bit-tricks and has object graphs, so it has to use unsafe to do things like having back pointers. These are all internal, and you can use the crate in safe rust. If we use unsafe to implement things like doubly-linked lists, then things are fine. If we use unsafe to allow multiple threads to mutate the same pointers (Against The Rules), then things are going to crash.
The thing is, when you are programming in C or C++, it's the same as writing unsafe rust all the time. In C/C++, the "pocket of unsafe code" is the entire codebase. So sure, you can write safe C, like I can write safe "unsafe rust". But 99% of the code I write is safe rust. And there's no equivalent in C or C++.
> I would assume trivial means the default approach works for most cases.
I mean, it does. I'm not sure what you consider the default approach, but to me it would be to wrap the data in a Mutex struct so that any thread can access it safely. That works great for most cases.
> Perhaps mutable global variables are not a common use case.
I'm not sure how common they are in practice, though I would certainly argue that they shouldn't be common. Global mutable variables have been well known to be a common source of bugs for decades.
> Unsafe might make it easier, but it’s not obvious and probably undesired.
All rust is doing is forcing you to acknowledge the trade-offs involved. If you want safety, you need to use a synchronization mechanism to guard the data (and the language provides several). If you are ok with the risk, then use unsafe. Unsafe isn't some kind of poison that makes your program crash, and all rust programs use unsafe to some extent (because the stdlib is full of it, by necessity). The only difference between rust and C is that rust tells you right up front "hey this might bite you in the ass" and makes you acknowledge that. It doesn't make that global variable any more risky than it would've been in any other language.
> I would assume trivial means the default approach works for most cases. Perhaps mutable global variables are not a common use case. Unsafe might make it easier, but it’s not obvious and probably undesired.
I'm a Rust fan, and I would generally agree with this. It isn't difficult, but trivial isn't quite right either. And no, global vars aren't terribly common in Rust, and when used, are typically done via LazyLock to prevent data races on initialization.
> I don’t know Rust, but I’ve heard pockets of unsafe code in a code base can make it hard to trust in Rust’s guarantees. The compromise feels like the language didn’t actually solve anything.
Not true at all. First, if you aren't writing device drivers/kernels or something very low level there is a high probability your program will have zero unsafe usages in it. Even if you do, you now have an effective comment that tells you where to look if you ever get suspicious behavior. The typical Rust paradigm is to let low level crates (libraries) do the unsafe stuff for you, test it thoroughly (Miri, fuzzing, etc.), and then the community builds on these crates with their safe programs. In contrast, C/C++ programs have every statement in an "unsafe block". In Rust, you know where UB can or cannot happen.
The reason I really like Zig is because there's finally a language that makes it easy to gracefully handle memory exhaustion at the application level. No more praying that your program isn't unceremoniously killed just for asking for more memory - all allocations are assumed fallible and failures must be handled explicitly. Stack space is not treated like magic - the compiler can reason about its maximum size by examining the call graph, so you can pre-allocate stack space to ensure that stack overflows are guaranteed never to happen.
This first-class representation of memory as a resource is a must for creating robust software in embedded environments, where it's vital to frontload all fallibility by allocating everything needed at start-up, and allow the application freedom to use whatever mechanism appropriate (backpressure, load shedding, etc) to handle excessive resource usage.
> No more praying that your program isn't unceremoniously killed just for asking for more memory - all allocations are assumed fallible and failures must be handled explicitly.
But for operating systems with overcommit, including Linux, you won't ever see the act of allocation fail, which is the whole point. All the language-level ceremony in the world won't save you.
Even on Linux with overcommit you can have allocations fail, in practical scenarios.
You can impose limits per process/cgroup. In server environments it doesn't make sense to run off swap (the perf hit can be so large that everything times out and it's indistinguishable from being offline), so you can set limits proportional to physical RAM, and see processes OOM before the whole system needs to resort to OOMKiller.
Processes that don't fork and don't do clever things with virtual mem don't overcommit much, and large-enough allocations can fail for real, at page mapping time, not when faulting.
Additionally, soft limits like https://lib.rs/cap make it possible to reliably observe OOM in Rust on every OS. This is very useful for limiting memory usage of a process before it becomes a system-wide problem, and a good extra defense in case some unreasonably large allocation sneaks past application-specific limits.
These "impossible" things happen regularly in the services I worked on. The hardest part about handling them has been Rust's libstd sabotaging it and giving up before even trying. Handling of OOM works well enough to be useful where Rust's libstd doesn't get in the way.
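For what it's worth, libstd does expose one fallible path on collections, `try_reserve`, which reports allocation failure as a value instead of aborting. A minimal sketch:

```rust
fn main() {
    let mut buf: Vec<u8> = Vec::new();

    // A reasonable reservation either succeeds or reports failure as a value.
    assert!(buf.try_reserve(1024).is_ok());

    // An absurd request fails with a TryReserveError instead of aborting:
    // the capacity overflow is detected before any allocation is attempted.
    assert!(buf.try_reserve(usize::MAX).is_err());
}
```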
Sure, but you can do the next best thing, which is to control precisely when and where those allocations occur. Even if the possibility of crashing is unavoidable, there is still huge operational benefit in making it predictable.
Simplest example is to allocate and pin all your resources on startup. If it crashes, it does so immediately and with a clear error message, so the solution is as straightforward as "pass bigger number to --memory flag" or "spec out larger machine".
Overcommit only matters if you use the system allocator.
To me, the whole point of Zig's explicit allocator dependency injection design is to make it easy to not use the system allocator, but something more effective.
For example imagine a web server where each request handler gets 1MB, and all allocations a request handler does are just simple "bump allocations" in that 1MB space.
This design has multiple benefits:
- Allocations don't have to synchronize with the global allocator.
- Avoids heap fragmentation.
- No need to deallocate anything, we can just reuse that space for the next request.
- No need to care about ownership -- every object created in the request handler lives only until the handler returns.
- Makes it easy to define an upper bound on memory use and very easy to detect and return an error when it is reached.
In a system like this, you will definitely see allocations fail.
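The per-request arena idea above can be sketched with a toy bump allocator (illustrative only: no alignment handling, and all names are made up):

```rust
// A fixed region, an offset, and an explicit OutOfMemory error once the
// region is exhausted. Resetting the offset "frees" everything at once,
// like reusing the 1MB request buffer described above.
struct Bump {
    buf: Vec<u8>,
    used: usize,
}

#[derive(Debug, PartialEq)]
struct OutOfMemory;

impl Bump {
    fn with_capacity(cap: usize) -> Self {
        // The whole region is allocated up front, once.
        Bump { buf: vec![0; cap], used: 0 }
    }

    // Hand out `n` bytes, or fail loudly at the defined upper bound.
    fn alloc(&mut self, n: usize) -> Result<&mut [u8], OutOfMemory> {
        if self.used + n > self.buf.len() {
            return Err(OutOfMemory);
        }
        let start = self.used;
        self.used += n;
        Ok(&mut self.buf[start..start + n])
    }

    // "Deallocation" is a single reset between requests.
    fn reset(&mut self) {
        self.used = 0;
    }
}

fn main() {
    let mut arena = Bump::with_capacity(1024);
    assert!(arena.alloc(512).is_ok());
    assert!(arena.alloc(512).is_ok());
    assert_eq!(arena.alloc(1).unwrap_err(), OutOfMemory); // bound reached
    arena.reset(); // reuse the space for the next request
    assert!(arena.alloc(1024).is_ok());
}
```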
And if overcommit bothers someone, they can allocate all the space they need at startup and call mlock() on it to keep it in memory.
I imagine people who care about this sort of thing are happy to disable overcommit, and/or run Zig on embedded or specialized systems where it doesn't exist.
I don't know Zig. The article says "Many people seem confused about why Zig should exist if Rust does already." But I'd ask instead: why does Zig exist when C does already? It's just a "better" C? But it keeps the drawback that makes C problematic for development: manual memory management. I think you are better off using a language with a garbage collector, unless your usage really needs manual management, and then you can pick between C, Rust, and Zig (and C++ and a few hundred others, probably.)
Yeah, it's a better C, but wouldn't it be nice if C had standardized fat pointers, so that when you move from project to project you don't have to triple-check the semantics? That's one example of maybe 50+ "learnings" from 40 years of C that are canonized and first-class in the language + stdlib.
I don't think manual memory management is C's problem. A very large number of errors I see in C programs come from the null-terminated string paradigm, and from mistakes in raw pointer manipulation (slices/fat pointers help a lot here).
> Stack space is not treated like magic - the compiler can reason about its maximum size by examining the call graph, so you can pre-allocate stack space to ensure that stack overflows are guaranteed never to happen.
How does that work in the presence of recursion or calls through function pointers?
Recursion: That's easy, don't. At least, not with a call stack. Instead, use a stack container backed by a bounded allocator, and pop->process->push in a loop. What would have been a stack overflow is now an error.OutOfMemory enum that you can catch and handle as desired. All that said, there is a proposal that addresses making recursive functions more friendly to static analysis [0].
Function pointers: Zig has a proposal for restricted function types [1], which can be used to enforce compile-time constraints on the functions that can be assigned to a function pointer.
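The pop->process->push loop described above, rendered in Rust as a hedged illustration (the tree and the bound are made up; Zig's `error.OutOfMemory` becomes an ordinary error value):

```rust
// Recursion replaced by an explicit, bounded worklist: exceeding the
// bound becomes a handleable error instead of a stack overflow.
#[derive(Debug, PartialEq)]
struct OutOfMemory;

enum Tree {
    Leaf(i64),
    Node(Box<Tree>, Box<Tree>),
}

// Sum a binary tree iteratively. `limit` caps the worklist size.
fn sum(root: &Tree, limit: usize) -> Result<i64, OutOfMemory> {
    let mut stack = vec![root];
    let mut total = 0;
    while let Some(t) = stack.pop() {
        match t {
            Tree::Leaf(v) => total += v,
            Tree::Node(l, r) => {
                if stack.len() + 2 > limit {
                    // What deep recursion would turn into a crash.
                    return Err(OutOfMemory);
                }
                stack.push(l);
                stack.push(r);
            }
        }
    }
    Ok(total)
}

fn main() {
    let t = Tree::Node(
        Box::new(Tree::Leaf(1)),
        Box::new(Tree::Node(Box::new(Tree::Leaf(2)), Box::new(Tree::Leaf(3)))),
    );
    assert_eq!(sum(&t, 16), Ok(6));
    assert_eq!(sum(&t, 1), Err(OutOfMemory)); // bound hit, no overflow
}
```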
Linux has overcommit, so failing malloc hasn't been a thing for over a decade. Zig is late to the party, since it strong-arms devs into catering to a scenario which no longer exists.
On Linux you can turn this off. On some OS's it's off by default. Especially in embedded which is a major area of native coding. If you don't want to handle allocation failures in your app you can abort.
Also malloc can fail even with overcommit, if you accidentally enter an obviously incorrect size like -1.
> I’m not the first person to pick on this particular Github comment, but it perfectly illustrates the conceptual density of Rust:
But you only need about 5% of the concepts in that comment to be productive in Rust. I don't think I've ever needed to know about #[fundamental] in about 12 years or so of Rust…
> In both Go and Rust, allocating an object on the heap is as easy as returning a pointer to a struct from a function. The allocation is implicit. In Zig, you allocate every byte yourself, explicitly. […] you have to call alloc() on a specific kind of allocator,
> In Go and Rust and so many other languages, you tend to allocate little bits of memory at a time for each object in your object graph. Your program has thousands of little hidden malloc()s and free()s, and therefore thousands of different lifetimes.
Rust can also do arena allocations, and there is an allocator concept in Rust as well; there just happens to be a default allocator.
And usually a heap allocation is explicit, such as with Box::new, but that of course might be wrapped behind some other type or function. (E.g., String, Vec both alloc, too.)
> In Rust, creating a mutable global variable is so hard that there are long forum discussions on how to do it.
The linked thread is specifically about creating a specific kind of mutable global, and has extra, special requirements unique to the thread. The stock "I need a global" for what I'd call a "default situation" can be as "simple" as,
static FOO: Mutex<T> = Mutex::new(…);
Since unsynchronized mutable globals are inherently memory-unsafe (one thread could write while another reads), you need the mutex.
(Obviously, there's usually an XY problem in such questions, too, when someone wants a global…)
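A complete, runnable version of that default pattern, with a hypothetical counter as the payload:

```rust
use std::sync::Mutex;

// The "default situation" global from above, made concrete.
// Mutex::new is const, so no lazy initialization is needed.
static FOO: Mutex<u64> = Mutex::new(0);

fn bump() -> u64 {
    let mut v = FOO.lock().unwrap();
    *v += 1;
    *v
}

fn main() {
    assert_eq!(bump(), 1);
    assert_eq!(bump(), 2);
}
```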
To the safety stuff, I'd add that Rust not only champions memory safety, but the type system is such that I can use it to add safety guarantees to the code I write. E.g., String can guarantee that it always represents a Unicode string, and it doesn't really need special support from the language to do that.
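A sketch of that user-defined-guarantee idea, using a hypothetical newtype whose constructor checks an invariant exactly once:

```rust
// Every NonEmptyName that exists has passed the check in new(),
// so the rest of the program can rely on the invariant for free.
struct NonEmptyName(String);

impl NonEmptyName {
    fn new(s: &str) -> Option<Self> {
        let t = s.trim();
        if t.is_empty() {
            None
        } else {
            Some(NonEmptyName(t.to_string()))
        }
    }

    fn initial(&self) -> char {
        // Cannot panic: the string is never empty by construction.
        self.0.chars().next().unwrap()
    }
}

fn main() {
    assert!(NonEmptyName::new("   ").is_none());
    assert_eq!(NonEmptyName::new("Ada").unwrap().initial(), 'A');
}
```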
> But you only need about 5% of the concepts in that comment to be productive in Rust.
The similar argument against C++ is applicable here: another programmer may be using 10% (or a different 5%) of the concepts. You will have to learn that fraction when working with them. This may also happen when you read the source code of some random project. C programmers seldom have this problem. Complexity matters.
There's also the problem of the people who are either too clever for their own good, or not nearly as clever as they think they are. Either group can produce horribly convoluted code to perform relatively simple tasks, and it's irritating as hell everytime I run into it. That's not unique to Rust of course, but the more tools you give to them the bigger mess they make.
A similar problem applies to Go, just inverted. Take iteration. The vast majority of use cases for iterating over containers are map, filter, reduce. Go doesn't have these functions. That's very simple! All Go developers are aligned here: just use a for loop. There's no room for "10% of concepts corners", there's just that 1 corner.
But, for loops get tedious. So people make helper functions, generic ones today, non-generic in the past. The result is that you have a zoo of iteration-related helper functions throughout the codebase, which you'll also need to learn when onboarding. Go's readability makes this easier, but by definition everything's entirely non-standard.
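For contrast, here is what the map/filter/reduce trio looks like as standard iterator adapters in Rust, next to the equivalent hand-rolled loop (the numbers are arbitrary):

```rust
fn main() {
    let xs = [1, 2, 3, 4, 5, 6];

    // map / filter / reduce as standard adapters...
    let sum_of_even_squares: i32 = xs
        .iter()
        .map(|x| x * x)         // map
        .filter(|x| x % 2 == 0) // filter
        .sum();                 // reduce
    assert_eq!(sum_of_even_squares, 4 + 16 + 36);

    // ...versus the equivalent hand-rolled loop.
    let mut total = 0;
    for &x in xs.iter() {
        let sq = x * x;
        if sq % 2 == 0 {
            total += sq;
        }
    }
    assert_eq!(total, sum_of_even_squares);
}
```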
No, this is overly simplistic. The features in the quoted comment are largely things that nobody other than stdlib developers need to understand. There is no bespoke subset-dialect of Rust where people are tossing around the `fundamental` attribute--it is strictly an obscure detail that not even an expert Rust programmer would be expected to have even heard of.
>> In Go and Rust and so many other languages, you tend to allocate little bits of memory at a time for each object in your object graph. Your program has thousands of little hidden malloc()s and free()s, and therefore thousands of different lifetimes.
> Rust can also do arena allocations, and there is an allocator concept in Rust, too. There's just a default allocator, too.
Thank you. I've seen this repeated so many times. Casey Muratori did a video on batch allocations that was extremely informative, but also stupidly gatekeepy [1]. I think a lot of people who want to see themselves as super devs have latched onto this point without even understanding it. They talk like RAII makes it impossible to batch anything.
Last year the Zig Software Foundation wrote about Asahi Lina's comments around Rust and basically implied she was unknowingly introducing these hidden allocations, citing this exact Casey Muratori video. And it was weird. A bunch of people pointed out the inaccuracies in the post, including Lina [2]. That combined with Andrew saying Go is for people without taste (not that I like Go myself), I'm not digging Zig's vibe of dunking on other companies and languages to sell their own.
"Batch allocation" in Rust is just a matter of Box-ing a custom-defined tuple of objects as opposed to putting each object in its own little Box. You can even include MaybeUninit's in the tuple that are then initialized later in unsafe code, and transmuted to the initialized type after-the-fact. You don't need an allocator library at all for this easy case, that's more valuable when the shape of allocations is in fact dynamic.
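The one-allocation point can be made observable with a counting wrapper around the system allocator (purely illustrative; `black_box` keeps the optimizer from eliding the boxes):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::hint::black_box;
use std::sync::atomic::{AtomicUsize, Ordering};

// Count every heap allocation, delegating the real work to System.
static ALLOCS: AtomicUsize = AtomicUsize::new(0);

struct Counting;

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCS.fetch_add(1, Ordering::Relaxed);
        unsafe { System.alloc(layout) }
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static A: Counting = Counting;

fn main() {
    // One Box around a tuple: all three values share a single allocation.
    let before = ALLOCS.load(Ordering::Relaxed);
    let _batch = black_box(Box::new((1u32, 2.0f64, [0u8; 32])));
    assert_eq!(ALLOCS.load(Ordering::Relaxed) - before, 1);

    // Three separate Boxes: three separate little allocations.
    let before = ALLOCS.load(Ordering::Relaxed);
    let _a = black_box(Box::new(1u32));
    let _b = black_box(Box::new(2.0f64));
    let _c = black_box(Box::new([0u8; 32]));
    assert_eq!(ALLOCS.load(Ordering::Relaxed) - before, 3);
}
```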
The author isn't saying it's literally impossible to batch allocate, just that the default happy path of programming in Rust & Go tends to produce a lot of allocations. It's a take more nuanced than the binary possible vs impossible.
suppose you had an m:n system (like say an evented http request server split over several threads so that a thread might handle several inbound requests), would you be able to give each request its own arena?
Allocators in rust are objects that implement the allocator trait. One (generally) passes the allocator object to functions that use the allocator. For example, `Vec` has `Vec::new_in(alloc: A) where A: Allocator`.
And so in your example, every request can have the same Allocator type but a distinct instance of that type. For example, you could say "I want an Arena", pick an Arena type that impls Allocator, and then create a new instance of Arena per request, passing it to each `Vec::new_in(alloc)` call.
Alternately, if you want every request to have a distinct Allocator type as well as a distinct instance, you can use `Box<dyn Allocator>` as the allocator's type (or use any other dispatch pattern), and provide whatever instance of the allocator is appropriate.
The standard library provides a global allocator. The collections in the standard library currently use that allocator.
It also provides an unstable interface for allocators in general. That will of course be useful someday, but it also doesn't prevent people from using whatever allocators they want in the meantime; it just means that libraries that want to be generic over an allocator can't currently agree on a standard interface. The standard library collections will also use that interface once it becomes stable.
No. There is a global allocator which is used by default, but all the stdlib functions that allocate memory have a version which allows you to pass in a custom allocator. These functions are still unstable though, so they can currently only be used with nightly builds of the compiler.
> The idea seems to be that you can run your program enough times in the checked release modes to have reasonable confidence that there will be no illegal behavior in the unchecked build of your program. That seems like a highly pragmatic design to me.
This is only pragmatic if you ignore the real-world experience of sanitizers, which attempt to do the same thing and fail to prevent memory safety and UB issues in deployed C/C++ codebases (eg Android definitely has sanitizers running on every commit and yet it wasn't until they switched to Rust that exploits started disappearing).
Can you provide the source of "(eg Android definitely has sanitizers running on every commit and yet it wasn’t until they switched to Rust that exploits started disappearing)"?
Here’s the report showing the impact Rust has had on memory vulnerabilities in Android. I guess you’ll have to take my word that they run sanitizers; I don’t know of any good link summarizing their usage, other than that it’s listed in AOSP with instructions on how to use it.
> In Go, a slice is a fat pointer to a contiguous sequence in memory, but a slice can also grow, meaning that it subsumes the functionality of Rust’s Vec<T> type and Zig’s ArrayList.
Well, not exactly. This is actually a great example of the Go philosophy of being "simple" while not being "easy".
A Vec<T> has identity; the memory underlying a Go slice does not. When you call append(), a new slice is returned that may or may not share memory with the old slice. There's also no way to shrink the memory underlying a slice. So slices actually very much do not work like Vec<T>. It's a common newbie mistake to think they do work like that, and write "append(s, ...)" instead of "s = append(s, ...)". It might even randomly work a lot of the time.
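The identity point on the Rust side, as a tiny sketch: a `Vec` grows in place through an exclusive reference, so there is nothing for the caller to forget to reassign:

```rust
// The caller's variable *is* the vector; growth happens through &mut,
// so there is no "forgot to reassign" failure mode like Go's append.
fn push_three(v: &mut Vec<i32>) {
    v.push(3); // may reallocate internally, but the Vec's identity is stable
}

fn main() {
    let mut v = vec![1, 2];
    push_three(&mut v);
    assert_eq!(v, [1, 2, 3]); // the caller's vector itself grew
}
```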
Go programmer attitude is "do what I said, and trust that I read the library docs before I said it". Rust programmer attitude is "check that I did what I said I would do, and that what I said aligns with how that library said it should be used".
So (generalizing) Go won't implement a feature that makes mistakes harder, if it makes the language more complicated; Rust will make the language more complicated to eliminate more mistakes.
> It's a common newbie mistake to think they do work like that, and write "append(s, ...)" instead of "s = append(s, ...)". It might even randomly work a lot of the time.
"append(s, ...)" without the assignment doesn't even compile. So your entire post seems like a strawman?
> So (generalizing) Go won't implement a feature that makes mistakes harder, if it makes the language more complicated
No, I think it is more that the compromise of complicating the language that is always made when adding features is carefully weighed in Go. Less so in other languages.
Clipping doesn't seem to automatically move the data, so while it does mean appending will reallocate, it doesn't actually shrink the underlying array, right?
Writing "append(s, ...)" instead of "s = append(s, ...)" results in a compiler error because it is an unused expression. I'm not sure how a newbie could make this mistake since that code doesn't compile.
It seems kind of odd that the Go community doesn't have a commonly-used List[T] type now that generics allow for one. I suppose passing a growable list around isn't that common.
> Go programmer attitude is "do what I said, and trust that I read the library docs before I said it".
I agree and think Go gets unjustly blamed for some things: most of the foot guns people say Go has are clearly laid out in the spec/documentation. Are these surprising behaviors or did you just not read?
Getting a compiler and just typing away is not a great way of going about learning things if that compiler is not as strict.
Good write up, I like where you're going with this. Your article reads like a recent graduate who's full of excitement and passion for the wonderful world of programming, and just coming into the real world for the first time.
For Go, I wouldn't say that the choice to avoid generics was either intentional or minimalist by nature. From what I recall, they were just struggling for a long time with a difficult decision, which trade-offs to make. And I think they were just hoping that, given enough time, the community could perhaps come up with a new, innovative solution that resolves them gracefully. And I think after a decade they just kind of settled on a solution, as the clock was ticking. I could be wrong.
For Rust, I would strongly disagree on two points. First, lifetimes are in fact what tripped me up the most, and many others, famously including Brian Kernighan, who literally wrote the book on C. Second, Rust isn't novel in combining many other ideas into the language. Lots of languages do that, like C#. But I do recall thinking that Rust had some odd name choices for some features it adopted. And, not being a C++ person myself, it has solutions to many problems I never wrestled with, known by name to C++ devs but foreign to me.
For Zig's manual memory management, you say:
> this is a design choice very much related to the choice to exclude OOP features.
Maybe, but I think it's more based on Andrew's need for Data-Oriented Design when designing high performance applications. He did a very interesting talk on DOD last year[1]. I think his idea is that, if you're going to write the highest performance code possible, while still having an ergonomic language, you need to prioritize a whole different set of features.
[1] https://www.youtube.com/watch?v=IroPQ150F6c
> For Go, I wouldn't say that the choice to avoid generics was either intentional or minimalist by nature. From what I recall, they were just struggling for a long time with a difficult decision, which trade-offs to make.
Indeed, in 2009 Russ Cox laid out clearly the problem they had [1], summed up thus:
> The generic dilemma is this: do you want slow programmers, slow compilers and bloated binaries, or slow execution times?
My understanding is that they were eventually able to come up with something clever under the hood to mitigate that dilemma to their satisfaction.
[1] https://research.swtch.com/generic
Ironically, the latest research by Google has now conclusively shown that Rust programmers aren't really any "slower" or less productive than Go programmers. That's especially true once you account for the entire software lifecycle, including production support and maintenance.
I love this take - partly because I agree with it - but mostly because I think that this is the right way to compare PLs (and to present the results). It is honest in the way it ascribes strengths and weaknesses, helping to guide, refine, justify the choice of language outside of job pressures.
I am sad that it does not mention Raku (https://raku.org) ... because in my mind there is a kind of continuum: C - Zig - C++ - Rust - Go ... OK for low level, but what about the scriptier end - Julia - R - Python - Lua - JavaScript - PHP - Raku - WL?
I tried to get an LLM to write a Raku chapter in the same vein - naah. Had to write it myself:
Raku
Raku stands out as a fast way to working code, with a permissive compiler that allows wide expression.
It's an expressive, general-purpose language with a wide set of built-in tools. Features like multi-dispatch, roles, gradual typing, lazy evaluation, and a strong regex and grammar system are part of its core design. The language aims to give you direct ways to reflect the structure of a problem instead of building abstractions from scratch.
The grammar system is the clearest example. Many languages treat parsing as a specialized task requiring external libraries. Raku instead provides a declarative syntax for defining rules and grammars, so working with text formats, logs, or DSLs often requires less code and fewer workarounds. This capability blends naturally with the rest of the language rather than feeling like a separate domain.
Raku programs run on a sizeable VM and lean on runtime dispatch, which means they typically don’t have the startup speed or predictable performance profile of lower-level or more static languages. But the model is consistent: you get flexibility, clear semantics, and room to adjust your approach as a problem evolves. Incremental development tends to feel natural, whether you’re sketching an idea or tightening up a script that’s grown into something larger.
The language’s long development history stems from an attempt to rethink Perl, not simply modernize it. That history produced a language that tries to be coherent and pleasant to write, even if it’s not small. Choose Raku if you want a language that lets you code the way you want, and helps you wrestle with the problem rather than with the compiler.
I see that my Raku chapter was downvoted a couple of times. Well OK, I am an unashamed shill for such a fantastic and yet despised language. Don’t knock til you try it.
Some comments below on “I want a Go, but with more powerful OO” - well Raku adheres to the Smalltalk philosophy… everything is an object, and it has all the OO richness (rope) of C++ with multiple inheritance, role composition, parametric roles, MOP, mixins… all within an easy to use, easy to read style. Look away now if you hate sigils.
I think the Go part is missing a pretty important thing: the easiest concurrency model there is. Goroutines are one of the biggest reasons I even started with Go.
Agreed. Rob Pike presented a good talk "Concurrency is not Parallelism" which explains the motivations behind Go's concurrency model: https://youtu.be/oV9rvDllKEg
Between the lack of "colored functions" and the simplicity of communicating with channels, I keep surprising myself with how (relatively) quick and easy it is to develop concurrent systems with correct behavior in Go.
It's a bit messy to do parallelism with it, but it still works, it's a consistent pattern, and there are libraries that add it for processing slices and such. It could be made easier IMO; they are trying to dissuade its use, but it's actually really common to want to process N things distributed across multiple CPUs nowadays.
Just the fact that you can prototype with a direct solution and then just pretty much slap on concurrency by wrapping it in "go" and adding channels is amazing.
But how does one communicate and synchronize between tasks with structured concurrency?
Consider a server handling transactional requests, which submit jobs and get results from various background workers, which broadcast change events to remote observers.
This is straightforward to set up with channels in Go. But I haven't seen an example of this type of workload using structured concurrency.
> Erlang programmers might disagree with you there.
Erlang is great for distributed systems. But my bugbear is when people look at how distributed systems are inherently parallel, and then look at a would-be concurrent program and go, "I know, I'll make my program concurrent by making it into a distributed system".
But distributed systems are hard. If your system isn't inherently distributed, then don't rush towards a model of concurrency that emulates a distributed system. For anything on a single machine, prefer structured concurrency.
The new (unreleased right now, in the nightly builds) std.Io interface in Zig maps quite nicely to the concurrency constructs in Go. The go keyword maps to std.Io.async to run a function asynchronously. Channels map to the std.Io.Queue data structure. The select keyword maps to the std.Io.select function.
One other thing I think it misses, is how easy it is to navigate a massive code base because everything looks the same. In a large team, this is crucial and I value the legibility over cleverness (I really dislike meta programming).
Really the only thing I found difficult is finding the concrete implementation of an interface when the interface is defined close to where it is used, and when interfaces are duplicated everywhere.
> In Rust, creating a mutable global variable is so hard that there are long forum discussions on how to do it.
Well, no, creating a mutable global variable is trivial in Rust, it just requires either `unsafe` or using a smart pointer that provides synchronization. That's because Rust programs are re-entrant by default, because Rust provides compile-time thread-safety. If you don't care about statically-enforced thread-safety, then it's as easy in Rust as it is in Zig or C. The difference is that, unlike Zig or C, Rust gives you the tools to enforce more guarantees about your code's possible runtime behavior.
Moving back to a language that does this kind of thing all the time now, it seems like insanity to me wrt safety in execution
Global mutable state is like a rite of passage for devs.
Novices start slapping global variables everywhere because it makes things easy and it works, until it doesn't and some behaviour breaks because... I don't even know what broke it.
On a smaller scale, mutable date handling libraries also provide some memorable WTF debugging moments until one learns (hopefully) that adding 10 days to a date should probably return a new date instance in most cases.
> [...] is trivial in Rust [...] it just requires [...]
This is a tombstone-quality statement. It's the same framing people tossed around about C++ and Perl and Haskell (also Prolog back in the day). And it's true, insofar as it goes. But languages where "trivial" things "just require" rapidly become "not so trivial" in the aggregate. And Rust has jumped that particular shark. It will never be trivial, period.
> languages where "trivial" things "just require" rapidly become "not so trivial" in the aggregate
Sure. And in C and Zig, it's "trivial" to make a global mutable variable, it "just requires" you to flawlessly uphold memory access invariants manually across all possible concurrent states of your program.
Stop beating around the bush. Rust is just easier than nearly any other language for writing concurrent programs, and it's not even close (though obligatory shout out to Erlang).
And it's a good concept, because it makes people feel a bit uncomfortable to type the word "unsafe", and they question whether a globally mutable variable is in fact what they want. Which is great! Because this saves every future user of that software from concurrency bugs related to that globally mutable variable, including ones that aren't even present in the software now but might get introduced by a later developer who isn't thinking about the implications of that global unsafe!
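To make the contrast concrete, a minimal sketch (all names are made up): the `static mut` route is exactly the C/Zig-style global, while the `Mutex` route bakes the synchronization into the type.

```rust
use std::sync::Mutex;

// The C/Zig-style global: trivial to declare, but every access is on you.
static mut UNCHECKED: u32 = 0;

// The checked alternative: the synchronization lives in the type itself.
static CHECKED: Mutex<u32> = Mutex::new(0);

fn bump_both() -> u32 {
    // The unsafe block is a visible promise that no other thread touches UNCHECKED.
    unsafe { UNCHECKED += 1 };
    let mut guard = CHECKED.lock().unwrap();
    *guard += 1;
    *guard
}

fn main() {
    assert_eq!(bump_both(), 1);
}
```

The point is not that the first form crashes; single-threaded, both work. The point is that the second form stays correct when someone later adds threads.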
If you treat shared state like owned state, you're in for a bad time.
Maybe, but the language being hard in aggregate is very different from the quoted claim that this specific thing is hard.
My understanding is that Rust prevents data races, but not all race conditions. You can still get a logical race where operations interleave in unexpected ways. Rust can’t detect that, because it’s not a memory-safety issue.
So you can still get deadlocks, starvation, lost wakeups, ordering bugs, etc., but Rust gives you:
- No data races
- No unsynchronized aliasing of mutable data
- Thread safety enforced through the type system (Send/Sync)
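A small illustration of those guarantees (names are illustrative): the compiler will not let the threads below share the counter without the `Arc` + `Mutex`, so the tally is deterministic.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Eight threads hammer one counter; the compiler forces the Arc + Mutex,
// so a data race is impossible by construction.
fn tally() -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    *c.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    assert_eq!(tally(), 8000);
}
```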
This fits quite naturally in Rust. You can let your mutex own the pair: locking a `Mutex<(u32, u32)>` gives you a guard that lets you access both elements of the pair. Very often this will be a named `Mutex<MyStruct>` instead, but a tuple works just as well.
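For instance, a minimal sketch (names made up): one lock owns the pair, so both fields are always updated together.

```rust
use std::sync::Mutex;

// The mutex owns the pair; a single lock guards both fields at once.
fn bump(pair: &Mutex<(u32, u32)>) {
    let mut guard = pair.lock().unwrap();
    guard.0 += 10;
    guard.1 += 10;
}

fn main() {
    let pair = Mutex::new((1u32, 2u32));
    bump(&pair);
    assert_eq!(*pair.lock().unwrap(), (11, 12));
}
```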
Because Rust guarantees you won't have multiple exclusive (and thus mutable) references, you won't have a specific class of race conditions.
Sometimes, however, these compile-time rules are too strict and you need to relax these guarantees. To handle those cases, there are structures that give you the same shared/exclusive references and borrowing rules (i.e. one exclusive, or many shared refs) but enforced at runtime. Meaning that you have an object which you can reference (borrow) in multiple locations; however, if you have an active shared reference, you can't get an exclusive reference, as the program will (by design) panic, and if you have an active exclusive reference, you can't get any more references.
This isn't sufficient for multithreaded applications, however; it covers the case where lots of places reference the same object within a single thread. For multi-threaded programs, we have RwLock.
https://doc.rust-lang.org/std/cell/index.html
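A small sketch of those runtime borrow rules using `RefCell` (names are illustrative):

```rust
use std::cell::RefCell;

// Borrow rules checked at runtime: many shared borrows OR one exclusive borrow.
fn demo() -> i32 {
    let cell = RefCell::new(5);
    {
        let _shared = cell.borrow(); // an active shared borrow...
        assert!(cell.try_borrow_mut().is_err()); // ...refuses exclusive access
    }
    *cell.borrow_mut() += 1; // shared borrow dropped, mutation is allowed
    let v = *cell.borrow();
    v
}

fn main() {
    assert_eq!(demo(), 6);
}
```

Using `try_borrow_mut` makes the conflict observable as an `Err`; plain `borrow_mut` in the same spot would panic, which is the behavior described above.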
Rust's approach to shared memory is in-place mutation guarded by locks. This approach is old and well-known, and has known problems: deadlocks, lock contention, etc. Rust specifically encourages coarse-grained locks by design, so the lock-contention problem is very pressing.
There are other approaches to shared memory, like ML-style mutable pointers to immutable data (perfected in Clojure) and actors. Rust has nothing to do with them, and as far as I understand the core choices made by the language make implementing them very problematic.
Logical race conditions and deadlocks can still happen.
They are to be used with caution. If your execution environment is simple enough they can be quite useful and effective. Engineering shouldn't be a religion.
> I can not count how many times I have been pulled in to debug some gnarly crash and the result was, inevitably, a mutable global variable.
I've never once had that happen. What types of code are you working on that this occurs so frequently?
There's lots of interesting things you could do with a Rust-like (in terms of correctness properties) high-level language, and getting rid of global variables might be one of them (though I can see arguments in both directions). Hopefully someone makes a good one some day.
If you use unsafe to opt out of guarantees that the compiler provides against data races, it’s no different than doing the exact same thing in a language that doesn’t protect against data races.
Second, unsafe means the author is responsible for making it safe: code inside unsafe blocks must still follow the same rules as safe Rust. It does not mean that you don't have to follow the rules. If one instead uses it to violate the rules, then the code will certainly cause crashes.
I can see that some programmers would just use unsafe to "get around a problem" caused by safe rust enforcing those rules, and doing so is almost guaranteed to cause crashes. If the compiler won't let you do something, and you use unsafe to do it anyway, there's going to be a crash.
If instead we use unsafe to follow the rules, then it won't crash. There are tools like Miri that allow us to test that we haven't broken the rules. The fact that Miri did find two issues in my crate shows that unsafe is difficult to get right. My crate does clever bit-tricks and has object graphs, so it has to use unsafe to do things like having back pointers. These are all internal, and you can use the crate in safe rust. If we use unsafe to implement things like doubly-linked lists, then things are fine. If we use unsafe to allow multiple threads to mutate the same pointers (Against The Rules), then things are going to crash.
The thing is, when you are programming in C or C++, it's the same as writing unsafe rust all the time. In C/C++, the "pocket of unsafe code" is the entire codebase. So sure, you can write safe C, like I can write safe "unsafe rust". But 99% of the code I write is safe rust. And there's no equivalent in C or C++.
I mean, it does. I'm not sure what you consider the default approach, but to me it would be to wrap the data in a Mutex struct so that any thread can access it safely. That works great for most cases.
> Perhaps mutable global variables are not a common use case.
I'm not sure how common they are in practice, though I would certainly argue that they shouldn't be common. Global mutable variables have been well known to be a common source of bugs for decades.
> Unsafe might make it easier, but it’s not obvious and probably undesired.
All rust is doing is forcing you to acknowledge the trade-offs involved. If you want safety, you need to use a synchronization mechanism to guard the data (and the language provides several). If you are ok with the risk, then use unsafe. Unsafe isn't some kind of poison that makes your program crash, and all rust programs use unsafe to some extent (because the stdlib is full of it, by necessity). The only difference between rust and C is that rust tells you right up front "hey this might bite you in the ass" and makes you acknowledge that. It doesn't make that global variable any more risky than it would've been in any other language.
I'm a Rust fan, and I would generally agree with this. It isn't difficult, but trivial isn't quite right either. And no, global vars aren't terribly common in Rust, and when used, are typically done via LazyLock to prevent data races on initialization.
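A typical shape for that, sketched (assuming a toolchain where `LazyLock` is in std, i.e. Rust 1.80+; names are made up):

```rust
use std::sync::{LazyLock, Mutex};

// Initialized on first access, exactly once, even if the first accesses race.
static REGISTRY: LazyLock<Mutex<Vec<String>>> =
    LazyLock::new(|| Mutex::new(Vec::new()));

fn main() {
    REGISTRY.lock().unwrap().push("hello".into());
    assert_eq!(REGISTRY.lock().unwrap().len(), 1);
}
```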
> I don’t know Rust, but I’ve heard pockets of unsafe code in a code base can make it hard to trust in Rust’s guarantees. The compromise feels like the language didn’t actually solve anything.
Not true at all. First, if you aren't writing device drivers/kernels or something very low level there is a high probability your program will have zero unsafe usages in it. Even if you do, you now have an effective comment that tells you where to look if you ever get suspicious behavior. The typical Rust paradigm is to let low level crates (libraries) do the unsafe stuff for you, test it thoroughly (Miri, fuzzing, etc.), and then the community builds on these crates with their safe programs. In contrast, C/C++ programs have every statement in an "unsafe block". In Rust, you know where UB can or cannot happen.
This first-class representation of memory as a resource is a must for creating robust software in embedded environments, where it's vital to frontload all fallibility by allocating everything needed at start-up, and allow the application freedom to use whatever mechanism appropriate (backpressure, load shedding, etc) to handle excessive resource usage.
But for operating systems with overcommit, including Linux, you won't ever see the act of allocation fail, which is the whole point. All the language-level ceremony in the world won't save you.
You can impose limits per process/cgroup. In server environments it doesn't make sense to run off swap (the perf hit can be so large that everything times out and it's indistinguishable from being offline), so you can set limits proportional to physical RAM, and see processes OOM before the whole system needs to resort to OOMKiller. Processes that don't fork and don't do clever things with virtual mem don't overcommit much, and large-enough allocations can fail for real, at page mapping time, not when faulting.
Additionally, soft limits like https://lib.rs/cap make it possible to reliably observe OOM in Rust on every OS. This is very useful for limiting memory usage of a process before it becomes a system-wide problem, and a good extra defense in case some unreasonably large allocation sneaks past application-specific limits.
These "impossible" things happen regularly in the services I worked on. The hardest part about handling them has been Rust's libstd sabotaging it and giving up before even trying. Handling of OOM works well enough to be useful where Rust's libstd doesn't get in the way.
Rust is the problem here.
The simplest example is to allocate and pin all your resources on startup. If it crashes, it does so immediately and with a clear error message, so the solution is as straightforward as "pass bigger number to --memory flag" or "spec out larger machine".
To me, the whole point of Zig's explicit allocator dependency injection design is to make it easy to not use the system allocator, but something more effective.
For example imagine a web server where each request handler gets 1MB, and all allocations a request handler does are just simple "bump allocations" in that 1MB space.
This design has multiple benefits:
- Allocations don't have to synchronize with the global allocator.
- Avoids heap fragmentation.
- No need to deallocate anything; we can just reuse that space for the next request.
- No need to care about ownership: every object created in the request handler lives only until the handler returns.
- Makes it easy to define an upper bound on memory use, and very easy to detect and return an error when it is reached.
In a system like this, you will definitely see allocations fail.
And if overcommit bothers someone, they can allocate all the space they need at startup and call mlock() on it to keep it in memory.
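The bump-allocation scheme above can be sketched in a few lines. This toy version (all names made up, written in Rust purely for illustration, not the Zig design itself) shows the two properties that matter: O(1) reset between requests, and an observable upper bound.

```rust
// A toy bump arena: hands out byte slices from one fixed buffer,
// and "frees" everything in O(1) by resetting between requests.
struct Bump {
    buf: Vec<u8>,
    used: usize,
}

impl Bump {
    fn new(cap: usize) -> Self {
        Bump { buf: vec![0; cap], used: 0 }
    }

    fn alloc(&mut self, n: usize) -> Option<&mut [u8]> {
        if self.buf.len() - self.used < n {
            return None; // the memory cap is trivially observable
        }
        let start = self.used;
        self.used += n;
        Some(&mut self.buf[start..start + n])
    }

    fn reset(&mut self) {
        self.used = 0; // next request reuses the same space
    }
}

fn main() {
    let mut arena = Bump::new(1 << 20); // 1 MiB per request handler
    assert!(arena.alloc(512).is_some());
    assert!(arena.alloc(2 << 20).is_none()); // over budget: allocation fails
    arena.reset();
    assert!(arena.alloc(1 << 20).is_some()); // full budget available again
}
```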
Ever? If you have limited RAM and limited storage on a small Linux SBC, where does it put your memory?
How does that work in the presence of recursion or calls through function pointers?
Function pointers: Zig has a proposal for restricted function types [1], which can be used to enforce compile-time constraints on the functions that can be assigned to a function pointer.
[0]: https://github.com/ziglang/zig/issues/1006
[1]: https://github.com/ziglang/zig/issues/23367
Certainly I agree that allocations in your dependencies (including std) are more annoying in Rust since it uses panics for OOM.
The no-std set of crates is all set up to support embedded development.
Also malloc can fail even with overcommit, if you accidentally enter an obviously incorrect size like -1.
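The same kind of failure is observable from Rust through the non-panicking reservation APIs; a tiny sketch:

```rust
fn main() {
    let mut v: Vec<u8> = Vec::new();
    // A nonsense size fails immediately, overcommit or not: no abort, just an Err.
    assert!(v.try_reserve(usize::MAX).is_err());
    assert!(v.try_reserve(16).is_ok());
}
```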
But you only need about 5% of the concepts in that comment to be productive in Rust. I don't think I've ever needed to know about #[fundamental] in about 12 years or so of Rust…
> In both Go and Rust, allocating an object on the heap is as easy as returning a pointer to a struct from a function. The allocation is implicit. In Zig, you allocate every byte yourself, explicitly. […] you have to call alloc() on a specific kind of allocator,
> In Go and Rust and so many other languages, you tend to allocate little bits of memory at a time for each object in your object graph. Your program has thousands of little hidden malloc()s and free()s, and therefore thousands of different lifetimes.
Rust can also do arena allocations, and there is an allocator concept in Rust as well; there just happens to be a default allocator, too.
And usually a heap allocation is explicit, such as with Box::new, but that of course might be wrapped behind some other type or function. (E.g., String, Vec both alloc, too.)
> In Rust, creating a mutable global variable is so hard that there are long forum discussions on how to do it.
The linked thread is specifically about creating a specific kind of mutable global, and has extra, special requirements unique to the thread. The stock "I need a global" for what I'd call a "default situation" can be as "simple" as,
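A minimal sketch of that default pattern (names are illustrative):

```rust
use std::sync::Mutex;

// The whole "global mutable variable" ceremony: one static, one lock.
static COUNTER: Mutex<u64> = Mutex::new(0);

fn main() {
    *COUNTER.lock().unwrap() += 1;
    assert_eq!(*COUNTER.lock().unwrap(), 1);
}
```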
Since mutable globals are inherently memory unsafe, you need the mutex. (Obviously, there's usually an XY problem in such questions, too, when someone wants a global…)
To the safety stuff, I'd add that Rust not only champions memory safety, but the type system is such that I can use it to add safety guarantees to the code I write. E.g., String can guarantee that it always represents a Unicode string, and it doesn't really need special support from the language to do that.
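A small sketch of that boundary check: the invalid bytes never become a `String`, so everything downstream can rely on the guarantee.

```rust
fn main() {
    // String statically guarantees valid UTF-8: invalid bytes are rejected up front.
    assert!(String::from_utf8(vec![0xff, 0xfe]).is_err());
    let s = String::from_utf8(vec![0x68, 0x69]).unwrap();
    assert_eq!(s, "hi");
}
```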
The similar argument against C++ is applicable here: another programmer may be using 10% (or a different 5%) of the concepts. You will have to learn that fraction when working with him/her. This may also happen when you read the source code of some random projects. C programmers seldom have this problem. Complexity matters.
But, for loops get tedious. So people will make helper functions. Generic ones today, non-generic in the past. The result is that you have a zoo of iteration-related helper functions all throughout. You'll need to learn those when onboarding to a new code base as well. Go's readability makes this easier, but by definition everything's entirely non-standard.
> Rust can also do arena allocations, and there is an allocator concept in Rust, too. There's just a default allocator, too.
Thank you. I've seen this repeated so many times. Casey Muratori did a video on batch allocations that was extremely informative, but also stupidly gatekeepy [1]. I think a lot of people who want to see themselves as super devs have latched onto this point without even understanding it. They talk like RAII makes it impossible to batch anything.
Last year the Zig Software Foundation wrote about Asahi Lina's comments around Rust and basically implied she was unknowingly introducing these hidden allocations, citing this exact Casey Muratori video. And it was weird. A bunch of people pointed out the inaccuracies in the post, including Lina [2]. That combined with Andrew saying Go is for people without taste (not that I like Go myself), I'm not digging Zig's vibe of dunking on other companies and languages to sell their own.
[1] https://www.youtube.com/watch?v=xt1KNDmOYqA [2] https://lobste.rs/s/hxerht/raii_rust_linux_drama
Is there a language that can't?
The author isn't saying it's literally impossible to batch allocate, just that the default happy path of programming in Rust & Go tends to produce a lot of allocations. It's a take more nuanced than the binary possible vs impossible.
Aren't allocators types in Rust?
Suppose you had an m:n system (say, an evented HTTP request server split over several threads, so that a thread might handle several inbound requests); would you be able to give each request its own arena?
And so in your example, every request can have the same Allocator type but a distinct instance of that type. For example, you could say "I want an arena" and pick an Arena type that impls Allocator, and then create a new instance of Arena for each `Vec::new_in(alloc)` call.
Alternately, if you want every request to have a distinct Allocator type as well as instance, one can use `Box<dyn Allocator>` as the allocators type (or use any other dispatch pattern), and provide whatever instance of the allocator is appropriate.
Just a quick question: is the Rust allocator global? (Will all heap allocations use the same allocator?)
The standard library provides a global allocator. The collections in the standard library currently use that allocator.
It also provides an unstable interface for allocators in general. That will of course be useful someday, and it doesn't prevent people from using whatever allocators they want in the meantime; it just means that libraries that want to be generic over an allocator cannot currently agree on a stable interface. The standard library collections will also use that once it becomes stable.
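Swapping the global allocator itself is already stable, though; a minimal sketch that routes the whole program through the std `System` allocator explicitly:

```rust
use std::alloc::System;

// One attribute replaces the default allocator for the entire program:
// every Box, Vec, and String below now goes through `System`.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    let v = vec![1, 2, 3];
    assert_eq!(v.iter().sum::<i32>(), 6);
}
```

In a real program `System` would be replaced by a custom type implementing `GlobalAlloc` (e.g. one that counts or caps allocations).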
> The idea seems to be that you can run your program enough times in the checked release modes to have reasonable confidence that there will be no illegal behavior in the unchecked build of your program. That seems like a highly pragmatic design to me.
This is only pragmatic if you ignore the real-world experience of sanitizers, which attempt to do the same thing and have failed to prevent memory-safety and UB issues in deployed C/C++ codebases (e.g. Android definitely has sanitizers running on every commit, and yet it wasn't until they switched to Rust that exploits started disappearing).
https://security.googleblog.com/2025/11/rust-in-android-move...
https://source.android.com/docs/security/test/sanitizers
Well, not exactly. This is actually a great example of the Go philosophy of being "simple" while not being "easy".
A Vec<T> has identity; the memory underlying a Go slice does not. When you call append(), a new slice is returned that may or may not share memory with the old slice. There's also no way to shrink the memory underlying a slice. So slices actually very much do not work like Vec<T>. It's a common newbie mistake to think they do work like that, and write "append(s, ...)" instead of "s = append(s, ...)". It might even randomly work a lot of the time.
Go programmer attitude is "do what I said, and trust that I read the library docs before I said it". Rust programmer attitude is "check that I did what I said I would do, and that what I said aligns with how that library said it should be used".
So (generalizing) Go won't implement a feature that makes mistakes harder, if it makes the language more complicated; Rust will make the language more complicated to eliminate more mistakes.
Sorry, that is incorrect: https://pkg.go.dev/slices#Clip
> It's a common newbie mistake to think they do work like that, and write "append(s, ...)" instead of "s = append(s, ...)". It might even randomly work a lot of the time.
"append(s, ...)" without the assignment doesn't even compile. So your entire post seems like a strawman?
https://go.dev/play/p/icdOMl8A9ja
> So (generalizing) Go won't implement a feature that makes mistakes harder, if it makes the language more complicated
No, I think it is more that the compromise of complicating the language that is always made when adding features is carefully weighed in Go. Less so in other languages.
Clipping doesn't seem to automatically move the data, so while it does mean appending will reallocate, it doesn't actually shrink the underlying array, right?