Svetlitski commented on Look up macOS system binaries   macosbin.com... · Posted by u/tolerance
WesolyKubeczek · a month ago
Except stuff in /Library/Application Support. Oh, and /Library/Extensions. Oh, and /Library/DriverExtensions. Oh, and /Library/LaunchAgents. And /Library/LaunchDaemons. Also /Library/Perl (this is Apple-provided) and /Library/TeX (this is not, this is MacTeX). And /Library/Developer.

Also, the dread of "removal instructions" that include stuff like "go through these directories and delete things that look like they belong to this software".

Svetlitski · a month ago
When available I prefer installing applications via brew as casks, since at least this way if I decide to uninstall it later brew will take care of deleting all of these associated directories. I remember using an app called AppZapper several years ago which did this but for arbitrary applications. No idea if it’s still around/maintained.
Svetlitski commented on Compression Dictionary Transport   developer.mozilla.org/en-... · Posted by u/todsacerdoti
longhaul · 2 months ago
Why can’t browsers/servers just store a standard English dictionary and communicate via indexes? Anything that isn’t in the dictionary can be sent raw. I’ve always had this thought but don’t see why it isn’t implemented. It might get a bit more involved with other languages, but the principle remains the same.

Thinking about it a bit more: we already do this at the character level with Unicode tables, so why can’t we look up words, or maybe even common sentences?

Svetlitski · 2 months ago
Compression algorithms like Brotli already do this:

https://www.rfc-editor.org/rfc/rfc7932#page-28
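The shared-dictionary idea can be sketched with zlib's preset-dictionary support (the dictionary contents below are my own illustrative choice, not any standard's): both sides hold identical bytes ahead of time, and the compressor emits back-references into them instead of raw text.

```python
import zlib

# Hypothetical pre-shared dictionary both endpoints agree on out of band.
dictionary = b"the quick brown fox jumps over the lazy dog " * 4
message = b"the quick brown fox jumps over the lazy dog"

# Baseline: compress with no preset dictionary.
plain = zlib.compress(message)

# Compress against the shared dictionary.
c = zlib.compressobj(zdict=dictionary)
with_dict = c.compress(message) + c.flush()

# The receiver must supply the identical dictionary to decompress.
d = zlib.decompressobj(zdict=dictionary)
assert d.decompress(with_dict) == message

# The dictionary-aware stream is much smaller than the baseline.
assert len(with_dict) < len(plain)
```

Brotli goes further by baking a standard dictionary of common web text into the format itself, and Compression Dictionary Transport generalizes this to dictionaries negotiated per-site.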

Svetlitski commented on Jemalloc Postmortem   jasone.github.io/2025/06/... · Posted by u/jasone
michaelcampbell · 3 months ago
> filed an issue because our test suite didn’t pass on Itanium lol

For the non low-level programmers in the bowels of memory allocators among us, why is this a "lol"?

Svetlitski · 3 months ago
The Itanium ISA was an infamous failure that never saw widespread usage, hence people often referring to it as “the Itanic” (i.e. the much-touted ship that promptly sank). The fact that anyone would be using it today at all is hilariously niche, and illustrates how wide-ranging and obscure the issues filed to the GitHub repo could be. On a similar note, I recall seeing an issue (or maybe it was a PR?) to fix our build on GNU Hurd.
Svetlitski commented on Jemalloc Postmortem   jasone.github.io/2025/06/... · Posted by u/jasone
chubot · 3 months ago
Nice post -- so does Facebook no longer use jemalloc at all? Or is it maintenance mode?

Or I wonder if they could simply use tcmalloc or another allocator these days?

> Facebook infrastructure engineering reduced investment in core technology, instead emphasizing return on investment.

Svetlitski · 3 months ago
As of when I left Meta nearly two years ago (although I would be absolutely shocked if this isn’t still the case) Jemalloc is the allocator, and is statically linked into every single binary running at the company.

> Or I wonder if they could simply use tcmalloc or another allocator these days?

Jemalloc is very deeply integrated there, so this is a lot harder than it sounds. From the telemetry being plumbed through in Strobelight, to applications using every highly Jemalloc-specific extension under the sun (e.g. manually created arenas with custom extent hooks), to the convergent evolution of applications being written in ways such that they perform optimally with respect to Jemalloc’s exact behavior.
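To give a flavor of what “Jemalloc-specific extension” means, here is a minimal C sketch of the non-standard API (this requires building and linking against jemalloc; installing custom extent hooks passes an `extent_hooks_t *` through the same `mallctl` call shown with `NULL` below):

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    /* Create a fresh arena. Passing a pointer to an extent_hooks_t
     * here (instead of NULL) is how custom extent hooks are installed. */
    unsigned arena_ind;
    size_t sz = sizeof(arena_ind);
    if (mallctl("arenas.create", &arena_ind, &sz, NULL, 0) != 0)
        return 1;

    /* Allocate from, and free to, that specific arena. */
    void *p = mallocx(4096, MALLOCX_ARENA(arena_ind));
    dallocx(p, MALLOCX_ARENA(arena_ind));
    printf("created arena %u\n", arena_ind);
    return 0;
}
```

Code written against interfaces like these (plus behavioral tuning to jemalloc's exact size-class and purging behavior) is what makes swapping allocators far more invasive than relinking.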

Svetlitski commented on Jemalloc Postmortem   jasone.github.io/2025/06/... · Posted by u/jasone
kstrauser · 3 months ago
Stuff like this is what keeps me coming back here. Thanks for posting this!

What's hard about using TCMalloc if you're not using bazel? (Not asking to imply that it's not, but because I'm genuinely curious.)

Svetlitski · 3 months ago
It’s just a huge pain to build and link against. Before the bazel 7.4.0 change your options were basically:

1. Use it as a dynamically linked library. This is not great because you’re taking, at minimum, the performance hit of going through the PLT for every call. The forfeited performance is even larger if you compare against statically linking with LTO (i.e. so that you can inline calls to malloc, get the benefit of FDO, etc.). Not to mention all the deployment headaches associated with shared libraries.

2. Painfully manually create a static library. I’ve done this, it’s awful; especially if you want to go the extra mile to capture as much performance as possible and at least get partial LTO (i.e. of TCMalloc independent of your application code, compiling all of TCMalloc’s compilation units together to create a single object file).

When I was at Meta I imported TCMalloc to benchmark against (to highlight areas where we could do better in Jemalloc) by painstakingly hand-translating its bazel BUILD files to buck2, because there was legitimately no better option.

As a consequence of being so hard to use outside of Google, TCMalloc has many more unexpected (sometimes problematic) behaviors than Jemalloc when used as a general-purpose allocator in other environments (e.g. it basically assumes that you are using a certain set of Linux configuration options [1] and behaves rather poorly if you’re not).

[1] https://google.github.io/tcmalloc/tuning.html#system-level-o...

Svetlitski commented on Jemalloc Postmortem   jasone.github.io/2025/06/... · Posted by u/jasone
Svetlitski · 3 months ago
I understand the decision to archive the upstream repo; as of when I left Meta, we (i.e. the Jemalloc team) weren’t really in a great place to respond to all the random GitHub issues people would file (my favorite was the time someone filed an issue because our test suite didn’t pass on Itanium lol). Still, it makes me sad to see. Jemalloc is still IMO the best-performing general-purpose malloc implementation that’s easily usable; TCMalloc is great, but is an absolute nightmare to use if you’re not using bazel (this has become slightly less true now that bazel 7.4.0 added cc_static_library so at least you can somewhat easily export a static library, but broadly speaking the point still stands).
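For reference, the bazel 7.4.0 workflow mentioned above looks roughly like this (a sketch only: the target labels are illustrative rather than taken from the TCMalloc repo, and depending on version `cc_static_library` may need to be enabled via `--experimental_cc_static_library`):

```python
# BUILD -- illustrative fragment, not from TCMalloc's actual build files
cc_static_library(
    name = "tcmalloc_static",
    deps = ["@com_google_tcmalloc//tcmalloc"],
)
```

Building `:tcmalloc_static` then yields a single `.a` you can hand to a non-bazel link line, which is the "somewhat easily export a static library" path.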

I’ve been meaning to ask Qi if he’d be open to cutting a final 6.0 release on the repo before re-archiving.

At the same time, it’d be nice to modernize the default settings for the final release. Disabling the (somewhat confusingly backwardly-named) “cache oblivious” setting by default, so that the 16 KiB size-class isn’t bloated to 20 KiB, would be a major improvement. This isn’t to disparage your (i.e. Jason’s) original choice here; IIRC when I last talked to Qi and David about this, they made the point that at the time you chose this default, typical TLB associativity was much lower than it is now.

On a similar note, increasing the default “page size” from 4 KiB to something larger (probably 16 KiB) would be pretty impactful. This would correspondingly increase the large size-class cutoff (i.e. the point at which the allocator switches from placing multiple allocations onto a slab to backing individual allocations with their own extent directly) from 16 KiB up to 64 KiB. One of the last things I looked at before leaving Meta was making this change internally for major services; it was worth a several percent CPU improvement (at the cost of a minor increase in RAM usage due to increased fragmentation).

There are a few other things I’d tweak (e.g. switching the default setting of metadata_thp from “disabled” to “auto”, and changing the extent-sizing for slabs from using the nearest exact multiple of the page size that fits the size-class to instead allowing ~1% guaranteed wasted space in exchange for reduced fragmentation), but the aforementioned settings are the biggest ones.
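For anyone wanting these settings today without waiting on changed defaults, a sketch of how they map onto jemalloc's existing knobs (option names as in the jemalloc 5.3 manual; note the page size is a compile-time choice, not a runtime one):

```shell
# Runtime options (jemalloc >= 5.3 exposes cache_oblivious under opt.*):
export MALLOC_CONF="cache_oblivious:false,metadata_thp:auto"

# The page size is fixed when jemalloc itself is built:
./configure --with-lg-page=14   # 2^14 = 16 KiB pages
```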

Svetlitski commented on Show HN: I built a Rust crate for running unsafe code safely   github.com/brannondorsey/... · Posted by u/braxxox
braxxox · 5 months ago
Thanks for that suggestion.

I'm adding a few more limitations in this PR: https://github.com/brannondorsey/mem-isolate/pull/44

I know async-signal-safety is particularly important for, you know, signal handlers. But aside from those, and the multi-threading use case you describe, is there another use case where calling non async-signal-safe code from inside this module would lead to issues (that isn't covered in the new limitations)?

I can add another limitation noting that issues can arise if the code you run in `callable()` isn't async-signal-safe, but I'd like to include a few additional examples of gotchas or surprises to point out there.

Svetlitski · 5 months ago
At the risk of sounding overly harsh, I just don’t think this crate is a particularly good idea. I really do mean this as constructive criticism, so let me explain my reasoning.

A function being marked unsafe in Rust indicates that there are required preconditions for safely invoking the function that the compiler cannot check. Your “safe” function provided by this crate sadly meets that definition. Unless you take great care to uphold the requirements of async-signal-safety, calling your function can result in some nasty bugs. You haven’t made a “safe” wrapper for unsafe code like the crate claims, so much as you’ve really just traded one form of unsafety for another (and one that’s arguably harder to get right at that).
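To make the definition concrete, here is a toy example of my own (not code from the crate) showing what `unsafe fn` communicates: a precondition the compiler cannot check, documented under a `# Safety` heading, which every call site must uphold.

```rust
/// Returns the element at `idx` without bounds checking.
///
/// # Safety
/// `idx` must be less than `slice.len()`. The compiler cannot verify
/// this precondition, which is exactly what `unsafe fn` signals.
unsafe fn at_unchecked(slice: &[u8], idx: usize) -> u8 {
    *slice.as_ptr().add(idx)
}

fn main() {
    let data = [10u8, 20, 30];
    // Sound only because this call site upholds the precondition.
    let v = unsafe { at_unchecked(&data, 1) };
    println!("{v}");
}
```

A function that demands "only call me with async-signal-safe code" has exactly this shape: an unchecked precondition whose violation causes undefined or broken behavior, just without the `unsafe` keyword to warn you.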

Svetlitski commented on Show HN: I built a Rust crate for running unsafe code safely   github.com/brannondorsey/... · Posted by u/braxxox
Svetlitski · 5 months ago
This is likely to violate async-signal-safety [1] in any non-trivial program, unless used with extreme care. Running code in between a fork() and an exec() is fraught with peril; it's not hard to end up in a situation where you deadlock because you forked a multi-threaded process where one of the existing threads held a lock at the time of forking, among other hazards.

[1] https://man7.org/linux/man-pages/man7/signal-safety.7.html
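The classic fork-while-a-lock-is-held deadlock can be demonstrated in a few lines of Python (a toy illustration of my own, not the crate's code; the timeout exists only so the example terminates instead of hanging forever):

```python
import os
import threading
import time

# A helper thread holds a lock while the main thread forks. The child
# inherits the locked lock, but not the thread that would release it.
lock = threading.Lock()

def hold_lock():
    with lock:
        time.sleep(2)  # keep the lock held across the fork

t = threading.Thread(target=hold_lock)
t.start()
time.sleep(0.2)  # let the helper thread acquire the lock first

pid = os.fork()
if pid == 0:
    # In the child, only the forking thread exists; nothing will ever
    # release the lock. Without the timeout this would deadlock forever.
    got_it = lock.acquire(timeout=0.5)
    os._exit(0 if got_it else 42)

_, status = os.waitpid(pid, 0)
child_code = os.waitstatus_to_exitcode(status)
print(child_code)  # 42: the child timed out waiting on a lock no one owns
t.join()
```

Any lock will do here, including ones hidden inside malloc or stdio, which is why the only operations POSIX guarantees between fork() and exec() in a multi-threaded process are the async-signal-safe ones.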

u/Svetlitski

Karma: 847 · Cake day: February 16, 2019
About
https://github.com/Svetlitski