kevincox · 2 years ago
The default limit is in place because select() uses a fixed-size bitmap for file descriptors. If a program that uses select opens more than that many files it will perform an out-of-bounds write. It is probably better to make the file descriptor allocation fail than to have memory corruption.

All programs that don't use select() should raise the limit to the hard limit on startup, then drop it back down before exec()ing another program. It is a silly dance, but that is the cost of legacy. I'm surprised that the affected programs can't add this dance.
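Something like this sketch of the dance (assuming POSIX getrlimit()/setrlimit(); the function names are illustrative):

  #include <sys/resource.h>
  #include <limits.h>

  static struct rlimit saved_nofile;

  void raise_fd_limit(void) {
      struct rlimit rl;
      if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
          return;
      saved_nofile = rl;          /* remember the original soft limit */
      rl.rlim_cur = rl.rlim_max;  /* raise soft limit to the hard limit */
  #ifdef OPEN_MAX
      /* macOS rejects soft limits above OPEN_MAX even when the hard
         limit is unlimited, so clamp */
      if (rl.rlim_cur > OPEN_MAX)
          rl.rlim_cur = OPEN_MAX;
  #endif
      setrlimit(RLIMIT_NOFILE, &rl);
  }

  void restore_fd_limit_before_exec(void) {
      /* put the conservative soft limit back so a select()-using
         child inherits the value it expects */
      setrlimit(RLIMIT_NOFILE, &saved_nofile);
  }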

arghwhat · 2 years ago
I am fairly confident that this is not the reason for the file descriptor limit, especially since select was superseded over 3 decades ago by poll (1987, picked up by Linux in 1997), which in turn is superseded on macOS by kqueue. Use of select in modern times is a bug.

ulimits are quotas that disallow excessive resource consumption by an application, not bug shields.

crest · 2 years ago
select() has a 1024-bit file descriptor bitmap (one bit per descriptor) and anyone using it gets what they deserve. macOS also provides poll() as a slightly less braindead, but still POSIX, option. The proper solution is to use kqueue()/kevent(), allowing userspace processes to efficiently get notified about changes to tens of thousands of file descriptors (tracking a million file descriptors using this API on FreeBSD works fine).
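For reference, a minimal kqueue()/kevent() sketch (macOS/FreeBSD), watching a single placeholder descriptor for readability:

  #include <sys/event.h>
  #include <unistd.h>

  int wait_readable(int fd) {
      int kq = kqueue();
      if (kq < 0)
          return -1;

      struct kevent change;
      EV_SET(&change, fd, EVFILT_READ, EV_ADD, 0, 0, NULL);

      struct kevent event;
      /* registers the filter and blocks until fd is readable */
      int n = kevent(kq, &change, 1, &event, 1, NULL);

      close(kq);
      return n;  /* 1 on event, -1 on error */
  }

The same pattern scales to huge descriptor counts: register descriptors once and reuse the kqueue across waits instead of rebuilding the set on every call.
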
kevincox · 2 years ago
I don't know if I would call anyone unfortunate enough to not know about the limitations of select() deserving of such limitations. At least on my Linux box the man page does have a warning at the top of the description section but it is easy to accidentally skim over.

It would be interesting to do something like not defining that symbol by default, requiring `-DENABLE_OBSOLETE_SELECT_API` to make it available. It would cause trouble when compiling old software, but it is easy to remedy and at least makes new users extra aware that they shouldn't start using this function.
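A sketch of what that could look like in a hypothetical header (ENABLE_OBSOLETE_SELECT_API is made up here, not a real macro in any libc):

  #ifdef ENABLE_OBSOLETE_SELECT_API
  int select(int nfds, fd_set *readfds, fd_set *writefds,
             fd_set *errorfds, struct timeval *timeout);
  #endif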

mort96 · 2 years ago
I like using poll(). I don't generally write software which waits for tens of thousands of FDs, I wait for a couple. Sacrificing platform support for more or less nothing in return doesn't make a lot of sense.

Kqueue/kevent/epoll are good options if you have very particular needs, such as a huge number of file descriptors, but I'd argue poll() should be the go-to.
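For the couple-of-FDs case, a minimal poll() sketch (fd1/fd2 are placeholders for whatever descriptors you hold):

  #include <poll.h>

  int wait_for_input(int fd1, int fd2) {
      struct pollfd fds[2] = {
          { .fd = fd1, .events = POLLIN },
          { .fd = fd2, .events = POLLIN },
      };
      /* -1: block indefinitely; no FD_SETSIZE ceiling to worry about */
      int ready = poll(fds, 2, -1);
      if (ready > 0 && (fds[0].revents & POLLIN)) {
          /* fd1 is readable */
      }
      return ready;
  }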

saagarjha · 2 years ago
Are you sure it's that, and not people setting ulimit to something very low and causing critical programs to start failing in ways that open up security issues?
kevincox · 2 years ago
It could be both. But having a failure be a security issue is a fail-open design and should be avoided in most cases. Having out-of-bounds memory writes can be exploited in a variety of ways and can provide exploits that affect even fail-closed designs.
hashhar · 2 years ago
The reasonable solution for that would be to fail calls to select() if more than FD_SETSIZE fds are being held, instead of forcing all applications to do the setrlimit dance; some of them may not be actively maintained, and even the ones that are would take time for fixes to be written and distributed.
kevincox · 2 years ago
The problem is that the memory corruption occurs when preparing the arguments to `select()` so by the time `select` is called it is already too late. Having select abort the program could make it harder to exploit as the corruption likely occurs on the caller's stack but doesn't completely solve the problem.

I guess the real solution would be updating `FD_SET()` and `FD_CLR()` to abort if `fd > FD_SETSIZE`. IDK if writing to the fd set outside of these two functions is officially supported.
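A sketch of that hardening as a bounds-checked wrapper (FD_SET_CHECKED is a made-up name; glibc's -D_FORTIFY_SOURCE does something similar inside FD_SET itself):

  #include <sys/select.h>
  #include <stdlib.h>

  #define FD_SET_CHECKED(fd, set)                 \
      do {                                        \
          if ((fd) < 0 || (fd) >= FD_SETSIZE)     \
              abort(); /* fail loudly instead of corrupting memory */ \
          FD_SET((fd), (set));                    \
      } while (0)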

StewardMcOy · 2 years ago
This seems like an unfortunate but, given the circumstances, reasonable approach, though you'd have to make sure that none of your dependencies, including Apple's frameworks, have code that calls select.
crest · 2 years ago
At least it's only the soft limit. The hard limit isn't reduced. The simplest workaround is to wrap the process in a shell script that raises the soft limit before it execs into the application you want to run.
c0l0 · 2 years ago
Let's just hope Apple will not alter the deal any further, right? [-:
Angostura · 2 years ago
The error message indicates that "Operation not permitted while System Integrity Protection is engaged".

.... and if SIP is turned off?

saagarjha · 2 years ago
It works as expected.

  $ sudo launchctl limit maxfiles
   maxfiles    256            unlimited
  $ sudo launchctl limit maxfiles 65536 200000
  $ sudo launchctl limit maxfiles
   maxfiles    65536          200000

bombcar · 2 years ago
So can it be “changed” and then SIP turned back on, or does SIP reset it?
sgjohnson · 2 years ago
> .... and if SIP is turned off?

You lose Touch ID & Apple Pay

macNchz · 2 years ago
Ugh, it has been a few years since I did any software development directly on macOS, but when I did I found it really annoying to have SIP fully enabled. At the time, though, disabling it didn’t take away core features of the computer.

The most frustrating thing with SIP was that I’d always butt up against it at the most inopportune moments: deep in a rabbithole of diagnosing some unusual issue, I’d finally have everything running and set up just right to reproduce it, realize I needed to trace a specific process or something, only to have the system tell me I wasn’t allowed to.

Perfect timing to have to restart the computer and wait while it slowly boots into recovery, then remember where I left off and recreate my environment.

The continuing iOS-ification of macOS really drove me away from using it as my main computer, despite having been a lifelong Mac user. I still have a MacBook Air, but for any real work it’s just a thin client to my Linux desktop.

saagarjha · 2 years ago
Touch ID works when SIP is disabled
c-hendricks · 2 years ago
I've ... always used ulimit on macOS, going back almost a decade. Never heard of using `launchctl` to change it.

https://www.google.com/search?q=ulimit+mac+site%3Astackoverf...

vs

https://www.google.com/search?q=%22launchctl+limit+maxfiles%...

henvic · 2 years ago
Globals are a nightmare. Despite this being something developers commonly do, and something that will cause pain for a lot of early adopters of [whatever software requires it], I see this as a great move!
crest · 2 years ago
Do you really consider 256 file descriptors a sane resource limit on processes?
stephenr · 2 years ago
Do you really think it's sane to write a program that just relies on the user increasing global limits, rather than calling setrlimit appropriately?
mmis1000 · 2 years ago
I hit it because I use ssh to tunnel connections to my Mac, and sshd seems to use file sockets to handle the sessions. I honestly don't think it is a sane limit in 2023. Even Node.js, the platform people complain about most for 'low performance', handles thousands of connections just fine. What is the point of a limit of 256?
CodesInChaos · 2 years ago
An application that opens many files and doesn't use `select` should raise its own soft-limit to match the hard-limit (or a reasonable number below the hard limit, if it wants to self-limit for some reason).

The default soft-limit should match FD_SETSIZE and should not be raised globally by the user. I don't know why the default hard limit is 256 and not 1024. Perhaps FD_SETSIZE was lower than 1024 historically?

isodev · 2 years ago
I believe it is, given the risks of increasing the limit system-wide. Processes which really require more still have the option of ulimit/setrlimit.
wiredfool · 2 years ago
I hit the old limit of 512 in Emacs after a month or so.
semireg · 2 years ago
As an aside, Quinn “The Eskimo!” is a legend. They have helped me on a few occasions with code signing Electron apps. In the American South there is a saying: “doing the Lord's work.” Thank goodness we have people like Quinn. Their understanding of complex systems and ability to troubleshoot are invaluable.
dinkblam · 2 years ago
Also, his posts on the Apple Developer Forums are much more professional and insightful than the crap which Apple calls their documentation.
macshome · 2 years ago
I keep an archive and index of his posts on GitHub so that they are easy to find. https://github.com/macshome/The-Wisdom-of-Quinn
saagarjha · 2 years ago
It sucks because the forums are one of the worst places he could be putting this information: it's a terrible medium, hard to search, and full of information that ought to be elsewhere. I do not appreciate that Apple can rely on this horrible stopgap instead of writing technotes like they're supposed to.
KerrAvon · 2 years ago
Pre-NeXT takeover, Apple had a lot of folks like Quinn to support developers, and an entire documentation department churning out content. Quinn was always one of if not the best.

While much was gained in the NeXT merger, the biggest thing that was lost was developer support and documentation. They were gutted in March 1997 and were never restored. Avie Tevanian had no respect for documentation and his attitude infected the rest of the organization. One dude did so much irreparable damage.

coldcode · 2 years ago
I worked with Quinn at Apple during my brief stay in the mid-90s. He's amazingly knowledgeable about almost everything in the Apple world, which is why he is so valuable at answering questions.
jamesfmilne · 2 years ago
Quinn's been around forever, you can see his name in Apple Developer Connection CDs from the 90s.

http://preserve.mactech.com/articles/mactech/Vol.16/16.06/Ju...

e61133e3 · 2 years ago
Agree! Quinn “The Eskimo!” is indeed a legend. Hope to meet him one day at WWDC, just to shake his hand.
roblh · 2 years ago
I am, at this exact moment, getting bodied by a ulimit problem on mac. Apparently with pacman, directories attached with bind mounts have a fixed ulimit of 64 internally and running npm install inside a mounted project explodes completely because of it. Funny that this turns up right now, even if it’s not a fix for my particular problem.
CodesInChaos · 2 years ago
I think a security option that does the following would make sense:

1. exec should reduce the hard-limit to FD_SETSIZE unless it's passed a flag

2. When a process calls `select` while having a hard-limit > FD_SETSIZE, it gets terminated. (Strictly speaking this doesn't prevent the memory corruption, since it happens in select's support macros, not select itself. But I'd expect it to be good enough in practice)

A modern application which doesn't use `select` should raise its own hard-limit during startup, opting into select-termination in exchange for the ability to open many files.