> Changing a security model that has been used for decades to a more restrictive model is difficult, especially in something as complicated as macOS. Attaching debuggers is just one example; there are many similar techniques that could be used to inject code into a different process. Apple has squashed many of these techniques, but many others likely remain undiscovered.
> Aside from Apple’s own code, these vulnerabilities could also occur in third-party software. It’s quite common to find a process injection vulnerability in a specific application, which means that the permissions (TCC permissions and entitlements) of that application are up for grabs for all other processes. Getting those fixed is a difficult process, because many third-party developers are not familiar with this new security model. Reporting these vulnerabilities often requires fully explaining this new model! Electron applications especially are infamous for being easy to inject into, as it is possible to replace their JavaScript files without invalidating the code signature.
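One way to poke at the Electron claim above is to ask codesign to re-verify a bundle after touching its scripts. A minimal sketch in Swift, with a hypothetical app path; note that a passing or failing check here says nothing about whether macOS actually re-verifies resources at launch time, which is where the injection risk lives:

```swift
import Foundation

// Hedged sketch: re-verify an app bundle's signature, including nested
// code and the resource seal. Whether edited JavaScript payloads break
// validation depends on what the bundle's resource rules actually cover.
let task = Process()
task.executableURL = URL(fileURLWithPath: "/usr/bin/codesign")
task.arguments = ["--verify", "--deep", "--strict", "--verbose=2",
                  "/Applications/SomeElectronApp.app"]  // hypothetical path

try task.run()
task.waitUntilExit()
print(task.terminationStatus == 0 ? "signature intact" : "signature broken")
```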
It makes me sad that we are likely not going to see any new fundamental design rethink for security's sake in mainstream operating systems. It is cost-prohibitive at this point to do something that gets security right for the world of 2022 without breaking all the apps that will never be rewritten!
Mobile OSes were a good break-off point as far as security goes, but that came with a lot of functionality sacrifice.
Something like QubesOS can theoretically dream of being semi-mainstream, though, with support from hardware and OSS OS vendors like RH/SUSE/Canonical or even Microsoft.
> It makes me sad that we are likely not going to see any new fundamental design rethink for security's sake in mainstream operating systems.
On the contrary, that makes me happy, because if that happens we are really going to lose what little computing freedom we have left: it will only make the walled-garden silos even stronger.
It’s hard to sign off on SOC 2 compliance when you know that npm and Maven packages can be introduced by any mistake on your team and contain an “Upload all files from the HDD to a rogue server” script. I’d need to operate administrative files (customer records, contracts, accounting) on a separate machine from dev…
App sandboxing is… the way it will go, for insurance reasons.
Yeah. The problem with all these security innovations is they allow corporations to seize control. The ability to debug and intercept is also the ability to reverse engineer and override.
We need secure software that empowers us, not some secure walled garden.
Fuchsia does a pure capability model: resources, even ones as basic as the file system, are provided through handles. Your file-system handle is a handle to what would be a directory elsewhere, but to your process that directory is the entire file system, so escaping it isn't a matter of finding a traversal exploit.
In principle you could do something similar with the Mac/iOS sandbox by starting a process with a compute-only sandbox and then providing it with specific sandbox entitlements (from the parent), including specific file systems; however, the process can still in principle see a full version of the fs.
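A toy model of that handle idea in plain Swift - not Fuchsia's actual API, every name below is invented for illustration - where the only file authority a component holds is a value it was explicitly handed:

```swift
import Foundation

// Toy capability model: holders of a Directory can only name plain files
// inside it; there is no ambient "open any absolute path" authority, so
// there is nothing to traverse out of.
struct Directory {
    private let root: URL   // invisible to holders of the capability

    init(root: URL) { self.root = root }

    // Only simple names are accepted, so the capability cannot be used
    // to reach anything outside the directory it represents.
    func readFile(_ name: String) throws -> Data {
        precondition(!name.contains("/") && name != "..", "plain file names only")
        return try Data(contentsOf: root.appendingPathComponent(name))
    }

    // Hand out a narrower sub-capability, as a parent process would
    // hand a child a handle to a subtree.
    func subdirectory(_ name: String) -> Directory {
        precondition(!name.contains("/") && name != "..", "plain names only")
        return Directory(root: root.appendingPathComponent(name))
    }
}
```

In Fuchsia the enforcement lives in the kernel handle rather than in in-process checks like these preconditions, which is what makes the real mechanism robust.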
And yeah, any OS that isn’t completely new is burdened with support for old apps; but then iOS, which used its newness to build a stronger base security model, is constantly beaten up for that model.
Ah, I forgot about Fuchsia - https://arxiv.org/pdf/2108.04183.pdf seems to do a good job of explaining the security architecture without overcomplicating things. With Google's backing (if they don't lose interest, that is) and the potential to take over Nest/Chromebook/Android devices, it might go farther than most new OS experiments.
Qubes is a good product while simultaneously being both the best and an objectively poor solution to a problem that shouldn't exist. That's how much of a mess the situation is.
Qubes sandboxes at the machine level by putting every application set into its own OS. Is that in any way clean or ideal? Not at all. But it's necessary if you don't trust your own OS not to have been compromised by the software running inside it.
I get the arguments against centralised software distribution - it encourages monopolistic behaviour and removes user freedom - but it does at least make a problem of this nature fixable if you can enforce compliance to breaking changes at distribution time.
I'd like to think a new layer could be added to Linux or BSD that would marshal this kind of compliance centrally without deliberately conflating payment into it, but I've no idea how you'd get widespread adoption of something like that even if you could organise to implement it. You'd also likely need to implement code-signing everywhere, which was such a challenge in the past that (at least in the Linux kernel) it was abandoned.
A model similar to how domain registration works might make sense for whitelisting here, in that it's not centralised - but it's not fully decentralised either - and at some point there needs to be a process to onboard new authorities whom everyone needs to trust, at which point it's a slippery slope to self-certification.
> It is cost-prohibitive at this point to do something that gets security right for the world of 2022 without breaking all the apps that will never be rewritten!
You could quarantine your old applications to their own individual VMs, while newer programs can make use of your shiny new security features?
Harder than it sounds, though. You still have to be able to communicate with other processes to present the user with a usable UI, so the isolation is never really complete. There are lots of variations on this theme and they all have to compromise in some way.
I think a lot of users would be fine with apps being fully siloed from each other, with the exception of being able to copy and paste between them and having 'File Open' dialog boxes able to access files from anywhere.
Every other kind of interaction between separate applications can be blocked without breaking much functionality. And you can limit breakage further by running all 'legacy' applications in the same silo and only running new applications disconnected from each other.
Literally any other kind of interaction between applications feels like the apps are spying on me. I don't even want one to suggest which app I should open for a doc. Mac OS has taken security in a strange direction where you can't change the suffix of a file without exposing possible vulnerabilities via apps automatically opening it - and the backstop is supposed to be validating all applications through your "apple account" (whatever that is?). (Although in some sense the suffix and permissions flaws have been around since System 7.)
> Something like QubesOS can theoretically dream of being semi-mainstream, though, with support from hardware and OSS OS vendors like RH/SUSE/Canonical or even Microsoft.
I'm trying to encourage as many people as I can to run it, or at least play with it for a while to gain familiarity with it. Properly applied, I think it does add quite a bit of useful practical security... though it's not going to automatically solve all problems. I like the silos of compromise, at least, and you can do high-risk things (like "anything web") in disposable VMs that reset on VM power cycle.
My only major concern with Qubes is that I've not decided if it's weird and niche enough to be mostly left alone by the 0day markets, or if it's a super high priority, high value target to attack because of the type of people who are likely to use it. I'd like to see an ARM port of it, because Xen on ARM is quite a bit simpler than Xen on x86, but the hardware to run that doesn't quite exist yet. Maybe with the RK3588...
The fundamental problem here is that software developers (and I'm guilty here too, as much as anyone in that industry) tend to view complexity as a one-way ratchet - add features. Add features. Add features. Add knobs. And when it comes crashing down around your ears, "add security" (in the form of sandboxes, or process isolation, or... https://xkcd.com/2044/ applies here).
And then it turns out that "adding complexity to solve problems created by complexity" isn't a strategy with a great long term success rate.
I'm slightly encouraged by Apple admitting, as clearly as they ever admit anything, that this strategy isn't working - with their Lockdown mode, which is "only for people with the most extreme threats, blah blah blah," and I'd expect anyone in the security or software industry to turn that on basically as soon as they get iOS 16 and not look back. Or to install the beta to get that option early.
I have tried QubesOS, and boy does it bring back memories of Windows being called a "Pentium to 286 converter".
It is slow as molasses on hardware where even Windows 10 and GNOME are both fast, and to make it usable you have to keep relaxing the security to the point where it is probably less secure than a regular OS. And don't even bother if you have to use scaling other than 100%: sure, you can scale dom0, but the rest of the VMs are not scaled, and there is no documentation on how to do it.
What we need are simple sandboxes that isolate GUI applications into a chroot environment and keep them away from other applications and documents.
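The file-system half of that wish can be sketched with plain POSIX calls. A hedged sketch (hypothetical jail path; requires root, and chroot confines only the file-system view - not IPC, the pasteboard, or the window server, which is most of what matters for GUI apps):

```swift
import Darwin

// Minimal chroot-style confinement sketch. Must run as root; the jail
// would need the app's libraries and frameworks copied in to be usable.
guard chroot("/var/sandboxes/someapp") == 0 else {  // hypothetical path
    perror("chroot")
    exit(1)
}
guard chdir("/") == 0 else {  // don't keep a working directory outside the jail
    perror("chdir")
    exit(1)
}
// From here "/" is /var/sandboxes/someapp; the next steps would be
// dropping root privileges (setgid/setuid) and exec'ing the target app.
```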
I have been using Lockdown mode for a few weeks on iOS 16 beta and iPadOS 16 beta. I really like it, it does not ruin the experience of using my devices and I feel like it makes my devices safer. I plan on always having Lockdown configured. What about the rare web sites that don’t work with Lockdown? I either ignore them or add the URIs to my todo list and visit them when using a laptop. BTW, I only use macOS and Linux laptops when I am developing software. Otherwise I use either my small or large iPad Pros.
> I'd like to see an ARM port of it, because Xen on ARM is quite a bit simpler than Xen on x86
AWS VMs use Xen on x86 mostly, right? So if someone has a Xen/x86 0day, they're going to use it to break out of EC2 guests and perform cross-tenant attacks, not use it on relatively low-value Qubes where you'd need to already have RCE anyway to make the attack work.
> Mobile OSes were a good break-off point as far as security goes, but that came with a lot of functionality sacrifice.
I have a tiny bit of hope that they'll eventually be able to replace desktop OSes through virtualization. It's actually what I long thought Apple would do with iPadOS (why else put in an M1?).
> It makes me sad that we are likely not going to see any new fundamental design rethink for security's sake in mainstream operating systems. It is cost-prohibitive at this point to do something that gets security right for the world of 2022 without breaking all the apps that will never be rewritten!
I don't think that's the case, though. Microsoft overhauled the entire security posture of Windows with Vista. It wasn't a good OS when it came out, but Windows is much better for it.
But yes, if you're going to start from the ground up, you're going to lose a bunch of functionality. Not only due to the nature of rewrites, but also because of "what new limitations does the security model imply?"
Google’s upcoming Fuchsia OS has an entirely new security model that looks really solid. I imagine it’s still a few years away from any kind of desktop usage, however.
Well, GNU Hurd 2 is going to be released real soon now. It will fix all the issues with current OS architectures.
And then, at some point, Google's Fuchsia will likely also surface in the mainstream.
Both systems will bring capability security. (You know, almost like seL4, only less secure.) A technology that hasn't been used until now, even though it's been available for almost 50 years, and would solve almost all security problems of computers. We could in fact have had it directly supported by hardware almost four decades ago… But the market didn't like that.
The problem is that there is no progress in computer technology, and there won't be any. It's been like that for at least one hundred years. We've been buried in the von Neumann local "optimum" since then… As long as this does not change, we're doomed to have crappy, inefficient, and insecure computers. The invisible hand will just prevent any progress until forever. (OK, our future AI overlords could possibly change that.)
But who cares about the status quo? The market insists on it. So any attempt at resistance is therefore futile.
___
Please excuse the slight amounts of sarcasm. I just couldn't hold back.
And seriously: There may be some jokes hidden in here. If you try you'll recognize one or two of them, I bet… :-D
> It is unclear what security the AES encryption here is meant to add, as the key is stored right next to it. There is no MAC, so no integrity check for the ciphertext
I imagine this is (only) to prevent accidental disclosure of sensitive data through basic tools like grep and Spotlight.
Also to prevent layman attempts at tweaking the files in a text editor, like one used to do with save files as a kid. But not to protect against dedicated attackers.
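For contrast with the quoted design, this is roughly what the missing MAC would buy. With CryptoKit's AES-GCM (macOS 10.15+), tampered ciphertext fails to decrypt instead of handing attacker-controlled plaintext to the deserializer - though, as the quote notes, integrity only helps if the key isn't sitting right next to the data. A minimal sketch:

```swift
import CryptoKit
import Foundation

// Authenticated encryption: the sealed box carries nonce, ciphertext,
// and an authentication tag, so any modification is detected on open.
let key = SymmetricKey(size: .bits256)
let plaintext = Data("saved window state".utf8)

let sealed = try AES.GCM.seal(plaintext, using: key)
var stored = sealed.combined!        // nonce || ciphertext || tag

// Flip one ciphertext bit, as a forged saved-state payload would.
stored[stored.count - 20] ^= 0x01

do {
    let box = try AES.GCM.SealedBox(combined: stored)
    _ = try AES.GCM.open(box, using: key)
    print("decrypted - should not happen")
} catch {
    print("rejected: \(error)")      // authentication failure
}
```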
This type of security is bad. Either something should be possible, and easy for anyone to do... Or it should not be possible, and protected by real cryptography.
Hiding something with ROT-13 just 'so it doesn't show up in grep' is a bad idea.
I actually think something like ROT-13 is fine in applications where obscuring it from humans is all you care about. It's serving the same purpose as the "Staff only" sign on that door in the restaurant. Does it somehow prevent you entering without an employment agreement? Would it stop a robber or thief? Nope. But since there's a sign you know that's the wrong way and will stay out of where you aren't wanted.
AES looks like security, ROT-13 is clearly not security, so there's no illusion.
Suppose a maintenance programmer is looking at logs around a weird issue; scrolling through hundreds of entries, they happen to notice that the phrase "FuckDonaldTrump" appears in the logs - huh, what? Oh, it's the password for the administrator user. Well, the way human memories work, that password is stuck in their head now. They didn't try to learn the admin password but now they know it, whereas if the log said "ShpxQbanyqGehzc", well, even though that's the "same" information, your brain doesn't retain it automatically because it doesn't mean anything.
They're not trying to learn the admin password, and with ROT-13 they are less likely to accidentally do so, that's actually a benefit.
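For anyone who hasn't written it out, here is ROT-13 in full - deliberately not cryptography, which is exactly the point being made:

```swift
// ROT-13: rotate ASCII letters by 13 places. Applying it twice returns
// the original; it only de-greps text, it does not protect it.
func rot13(_ input: String) -> String {
    return String(input.unicodeScalars.map { scalar -> Character in
        switch scalar {
        case "a"..."z":
            return Character(UnicodeScalar((scalar.value - 97 + 13) % 26 + 97)!)
        case "A"..."Z":
            return Character(UnicodeScalar((scalar.value - 65 + 13) % 26 + 65)!)
        default:
            return Character(scalar)
        }
    })
}

print(rot13("FuckDonaldTrump"))  // ShpxQbanyqGehzc
print(rot13("ShpxQbanyqGehzc"))  // FuckDonaldTrump
```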
If data is to be used locally, it has to be encryptable and decryptable locally with just resources accessible locally, so it's pretty much not securable from local software.
Surely there is a better fix to a deserialization exploit than just making every new app implement `bool dontBeHackable { return true; }`??
Even just, I don't know, forcing all builds to silently include this property in the compiled output - I mean, interacting with OS app state data files should be abstracted away from the programmer anyway, so it shouldn't matter if they're signed/encrypted behind the scenes, just handle it automatically, right?
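For the record, the opt-in being satirized appears to be AppKit's secure restorable-state hook (assuming this thread refers to the saved-state deserialization the article describes); in an app targeting macOS 12 or later it really is almost a one-liner:

```swift
import Cocoa

class AppDelegate: NSObject, NSApplicationDelegate {
    // Opt in to secure coding for restorable state (macOS 12+). The
    // saved-state archive is then decoded under NSSecureCoding rules,
    // rejecting unexpected classes instead of instantiating them.
    func applicationSupportsSecureRestorableState(_ app: NSApplication) -> Bool {
        return true
    }
}
```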
Apple originally intended to require NSSecureCoding in some way after it was introduced in macOS 10.8 (2012), but constantly delayed those plans due to the compatibility issues the sibling comment mentioned.
It could break a bunch of apps. You could argue that the user should get to decide, but it would have consequences to just flip it across the board automatically.
Related to this, is there an easy way to run arbitrary programs in the macOS sandbox? AFAIK sandboxing is opt-in for app developers at the moment. Does manual invocation of sandbox-exec on the command line still work, and are there GUI helpers for running arbitrary apps with this tool?
Edit: Apparently sandbox-exec is still usable, just not (publicly) well-documented. Would be nice of Apple to make sandboxing easier for regular users running untrusted apps that don't opt-in to the sandbox. I’m thinking of Firefox and Firefox extensions in particular.
https://7402.org/blog/2020/macos-sandboxing-of-folder.html

>The only non-folkloric documentation is found in the man page for sandbox-exec [...]
>Sandbox documentation has been a moving target over the years. Because it is a private interface, Apple is under no obligation to maintain forward or backward compatibility. Take note of the publication date of any information found online.
>Just to be clear, the sandbox profile format is not documented for third party use. Feel free to experiment with this stuff, but please don’t try to ship a product based on it.
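In the spirit of those warnings - experimentation only - sandbox-exec does still accept a profile on the command line. A hypothetical sketch wrapping it from Swift: an allow-by-default profile with targeted denies is far more likely to leave a GUI app usable than a deny-by-default one, and every operation name and path below is an assumption to test, not a recipe:

```swift
import Foundation

// Launch Firefox under an ad-hoc profile via the private, unsupported
// sandbox-exec tool. The profile denies access to one directory while
// allowing everything else.
let profile = """
(version 1)
(allow default)
(deny file-read* file-write*
      (subpath "/Users/me/Documents"))  ; hypothetical path to protect
"""

let task = Process()
task.executableURL = URL(fileURLWithPath: "/usr/bin/sandbox-exec")
task.arguments = ["-p", profile,
                  "/Applications/Firefox.app/Contents/MacOS/firefox"]
try task.run()
task.waitUntilExit()
```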
Question from reading the article, although I might have missed the response.
Is a user's latest version of macOS still vulnerable to this exploit if they're running any applications that do not return true for this boolean?
(i.e. does this mean older apps still make the entire machine vulnerable?)
If so, is there a means for the users to enforce this flag globally and just deal with the crashes if an app tries to do something that relies on this privilege?
Oh that hurts to watch, brutal. The "Pwn" button helps keep it light, and I'm definitely stealing that, but ouch.
Can someone who knows macOS development educate me about why it would be broken/expensive/inadequate to just page the application's mapped pages out to disk? I gather that at least on the lower-memory Apple Silicon devices, swap is pretty aggressive even for running applications?
I worked on AppKit's persistent state feature. One of its primary uses is persisting UI state across app restarts, for example when performing a software update. Simply writing out memory to disk would "persist" data like Mach ports or file descriptors, which would no longer be valid when the app is re-launched.
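A sketch of the shape of that design, using the public NSKeyedArchiver API with illustrative state: explicitly encoded values go to disk rather than raw memory, and decoding names the classes it is willing to instantiate - the same secure-coding discipline the opt-in discussed elsewhere in the thread enforces for saved state:

```swift
import Foundation

// Persist only explicitly encoded state, never raw memory: Mach ports,
// file descriptors, and pointers have no meaning in the next launch.
let state: NSDictionary = ["windowFrame": "0 0 800 600", "selectedTab": 2]

let archive = try NSKeyedArchiver.archivedData(withRootObject: state,
                                               requiringSecureCoding: true)

// On relaunch, only the listed classes may be decoded; an archive that
// smuggles in some other class fails here instead of being executed.
let restored = try NSKeyedUnarchiver.unarchivedObject(
    ofClasses: [NSDictionary.self, NSString.self, NSNumber.self],
    from: archive)
print(restored ?? "rejected")
```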
You most likely meant to ask any questions about deep-lore Apple internals to the sibling, who is a super well-known expert on the topic. I barely know my way around Xcode. :)
> This vulnerability will therefore be present for as long as there is backwards compatibility with older macOS applications!

Backwards compatibility over security: reminds me of a certain popular OS.
They could enforce it for all applications signed after a certain date and/or mark the unsafe method as deprecated. This would still not prevent downgrade attacks for a while, but at least offer a path forward.
> Xen on ARM is quite a bit simpler than Xen on x86

Why's that? If there's just less backwards compatibility, could we make a cut-down version on x86?
> the key is stored right next to it

It is often good to leave keys right next to a locked lock.