> …breeze right through an in-retrospect-sketchy authentication dialog…
I can't blame them for this. A surprising number of apps ask for root (inc. Adobe installers and Chrome). As far as I know, it's to make updates more reliable when an admin installs a program for a day-to-day user who can't write to /Applications and /Library.
We're long overdue for better sandboxing on desktop (outside of app stores).
I have ~/Applications and ~/Library, which is where anything I install should go.
I only use root for administration tasks: filesystem stuff, hardware, server config. All the goodies are in my homedir, so exfiltration is just as easy, and running bad binaries is just as easy, under my username.
In the end, there are no protections on what my username can do to files owned by my user. And that's why a nasty tool that:
1. generates a private/public key pair using gpg
2. emails the private key elsewhere and deletes the local copy
3. encrypts everything it can grab in ~
4. pops up a nasty message demanding money
works so easily, and so well.
The only thing I know of that can thwart attacks like this is Qubes, or a well-set-up SELinux. But SELinux then impedes usage (down the rabbit hole we go).
Edit: Honestly, I'm waiting for a Command and Control to be exclusively in Tor, email keys only through a Tor gateway, and also serve as a slave node to control and use. I could certainly see a "If you agree to keep this application on here, we will give you your files back over the course of X duration".
There's plenty more nefarious ways this all can be used to cause more damage, and "reward" the user with their files back, by being a slave node for more infection. IIRC, there was one of these malware tools that granted access to files if you screwed over your friends and they paid.
The thing is that, at least on the Mac, there easily can be protections on what your username can do to files owned by your user. There's an extensive sandboxing facility which limits apps to touching files within their own container, or files explicitly chosen by the user. All apps distributed through the App Store have to use it, and apps distributed outside the App Store can use it as well, but don't have to.
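For reference, opting in is largely a matter of entitlements the developer declares at signing time; a minimal sketch (the two key names are real App Sandbox entitlements, everything else here is illustrative) looks something like:

    <?xml version="1.0" encoding="UTF-8"?>
    <plist version="1.0">
    <dict>
        <!-- Turn on the App Sandbox: the app is confined to its own container -->
        <key>com.apple.security.app-sandbox</key>
        <true/>
        <!-- ...plus read/write access to files the user explicitly picks in Open/Save panels -->
        <key>com.apple.security.files.user-selected.read-write</key>
        <true/>
    </dict>
    </plist>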
As I see it, the problem on the Mac boils down to:
1. Sandboxing your app is often a less-than-fun experience for the developer, so few bother with it unless they're forced to (because they want to sell in the App Store).
2. Apple doesn't put much effort into non-App-Store distribution, so there's no automatic checking or verification that sandboxing is enabled for a freshly-downloaded app. You have to put in some non-trivial effort to see if an app is sandboxed, and essentially nobody does.
I think these two feed on each other, too. Developers don't sandbox, so there's little point in checking. Users don't check, so there's little point in sandboxing. If Apple made the tooling better and we could convince users to check and developers to sandbox whenever practical, it would go a long way toward improving this.
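The information needed for that check is already on disk, for what it's worth; a rough sketch of what "checking" a freshly-downloaded app means today in Terminal (the path is just an example):

    # Verify the signature is intact; silence and a zero exit status mean it checks out
    codesign --verify --deep --strict /Applications/SomeApp.app

    # Dump the entitlements; a sandboxed app will list com.apple.security.app-sandbox as true
    codesign -d --entitlements - /Applications/SomeApp.app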
For most non-developer users, there are few if any applications they use that both did not come with the system and need to write any files other than files the user explicitly asks them to write, temporary files, and settings files.
Even most applications that they use that did come with the system, such as web browsers, have a quite limited set of files they should be writing. Browsers, for example, will need to write in the user's downloads directory, anywhere the user explicitly asks to save something, in their cache directory, in their settings file, and in a temporary files directory.
It's also similar for most third party applications they will use, such as word processors and spreadsheets.
It seems it should be possible to design a system that takes advantage of this to make it hard for ransomware and other malware that relies on overwriting your files, yet without being intrusive or impeding usage.
"The only thing I know that can thwart attacks like this is Qubes, or a well setup SELinux.. But SELinux then impedes usage. (down the rabbit hole we go)."
Or the easier method:
rdiff-backup + cron job. Or Duplicity. Or Tarsnap. Or CrashPlan. Or...
That is to say backups with multiple stored versions, to another system where the (infected) client does not have direct write access. Ransomware can infect my home directory if it wants to. A fire can burn down my house. Zaphod Beeblebrox can cause my hard drive to experience a spontaneous existence failure. But I've got off-site automatic backups, so I'll never pay the ransom. (I will pay more over time for the off-site storage, but given that I'd pay for that anyway to guard against natural disasters / disk failure / etc it's not really an added cost).
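A sketch of that setup with rdiff-backup, assuming the job runs on the backup server and pulls from the client over SSH (hostnames, paths, and the schedule are all made up), so the client never has write access to the backup history:

    # /etc/cron.d entry on the BACKUP SERVER, not the laptop:
    # pull a versioned backup of the laptop's home directory every night at 02:00
    0 2 * * *   backup  rdiff-backup user@laptop::/Users/user /srv/backups/laptop

    # Once a month, prune increments older than six months (still on the server)
    0 3 1 * *   backup  rdiff-backup --remove-older-than 6M /srv/backups/laptop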
Is SELinux that hard? I have been running with Enforcing on my laptop for the last 8 months, and usually I can make an SELinux error go away by following the directions in the SELinux alert popup (or by searching for SELinux alerts from the CLI).
I used to be in the boat where my first instinct was to disable SELinux, but I must say it wasn't that hard.
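For anyone curious what the CLI version of that workflow looks like, roughly (package names vary by distro; sealert comes from setroubleshoot):

    # Show recent denials straight from the audit log
    sudo ausearch -m AVC -ts recent

    # Get the same human-readable advice the desktop alert popup gives
    sudo sealert -a /var/log/audit/audit.log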
> Edit: Honestly, I'm waiting for a Command and Control to be exclusively in Tor, email keys only through a Tor gateway, and also serve as a slave node to control and use
Correct me if I'm wrong, but most ransomware is operated almost completely through Tor. Doing email this way may be a problem (for obvious reasons), but for anonymity and uptime's sake most rely on it pretty heavily.
Yup... the long-standing UNIX user privilege separation security model is obsolete. We need inter-app privilege separation, as is being experimented with on mobile phones.
True story. The status quo unfortunately conditions people into "just answer yes to the stupid questions" which then renders everything from developer certs to elevation warnings moot. "But you got a warning" is little recourse for when your hard drive is encrypted by some ransomware. (I know I'm mixing current events here – cut a fella some slack!)
Yes. I'm pretty sure WannaCry and its variants could have asked users with a system dialog whether it should proceed encrypting all their data and we still would have seen similar numbers of affected machines (if operated by a human).
macOS has optional per app sandboxing already, but many developers elect to sidestep it because it's a buggy, rigid mess. Last I worked with it there were issues with simple things like displaying an open/save panel inside of a sandbox — sometimes when the user requested one, an error would occur under the hood and give zero feedback to the user. It's also a pain in the rear for apps that need to be able to arbitrarily access files on disk to function.
Yes. I have a Mac app that is sandboxed, and occasionally it fails to open files the user selected in the Open dialog. There is zero diagnostics available, besides a message stating "permission denied" (the system error message confuses users even more by suggesting to check file system permissions).
All I can tell my customers is to restart their Macs.
Extremely frustrating.
It does matter what you want to protect from. Often, using root to start an app can actually make it more secure, by allowing it to start worker processes as a non-privileged user and do some sandboxing with namespaces.
I'm a bit surprised at the "personalized attention" from the attacker: that a human on the other end takes time to poke around individual machines, recognize the developer, and tailor a source code theft + ransom campaign to them. I had assumed that these are bulk compromises of at least thousands of machines and they just blast out scripts to turn them into spam proxies or whatever.
Maybe given the limited scale of this one and the obvious interest the attacker has in producing trojaned versions of popular software, this is actually what they were hoping for in the first place.
It might be as simple as an automated "look for ssh keys" in the malware. If you find an SSH key, pretty good odds it's a developer. Scan for git repos, or check their email address to see where they work and go from there.
This makes me wonder: is it easy enough to write a kernel extension such that whenever any process tries to open(2) my SSH private key, or any hardlinks or symlinks pointing to it, it checks against a known whitelist, and if the process is not in the whitelist, a dialog pops up and asks for my permission? Is this easy to implement?
Frankly I can only think of a small number of processes that need to automatically access the file: backupd, sshd, and Carbon Copy Cloner. Everything else should require my attention.
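I don't know of an off-the-shelf kext for this, but as a crude visibility hack you can at least watch which processes open the file with the DTrace-based opensnoop that ships with macOS (assuming DTrace is usable on your version; SIP restricts it on recent releases, and the path below is just an example):

    # Log every process that open(2)s the private key
    sudo opensnoop -f /Users/me/.ssh/id_rsa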
In this case the attacker actually guessed the names of the git repos based on knowledge that the owner of the attacked computer was a Panic employee. The attacker guessed that the repo names were the Panic software names. So it was a very manual process indeed.
I find this story pretty fascinating. First, it's interesting how a broad attack, such as putting malware into software used by a large number of people, suddenly becomes a targeted attack: the attackers grab SSH keys and start cloning git repositories. I'm assuming that there was a significant number of victims in this attack. Were they targeting developers? Or did they just happen to comb through all this data and find what looked to be source code / git repositories?
The other thing I find interesting is this comment:
> We’re working on the assumption that there’s no point in paying — the attacker has no reason to keep their end of the bargain.
If you really want to be successful in exploiting people through cyber attacks, I guess you will need some kind of system to provide guaranteed contracts, i.e. proof that if a victim pays the ransom, then the other end of the bargain will be upheld.
It might seem that there's some incentive for ransom holders to hold up their end of the bargain for the majority of cases if they want their attacks to be profitable.
> If you really want to be successful in exploiting people through cyber attacks, I guess you will need some kind of system to provide guaranteed contracts, i.e. proof that if a victim pays the ransom, then the other end of the bargain will be upheld.
You're describing a legal system and the rule of law. I'm not sure there's a way to guarantee anything like you describe when there is some illegality in the nature of the process.
Trade only works when you can trust either the parties involved or the system as a whole to uphold their promises (for the system, that means that parties who don't uphold their end will be punished).
> You're describing a legal system and the rule of law. I'm not sure there's a way to guarantee anything like you describe when there is some illegality in the nature of the process.
Legal systems aren't the only way to give confidence that both ends of a bargain will be held. As one example, some darknet markets have escrow systems for this purpose. It's not too hard to imagine a way to do this with ransomed code. Reputation-based systems also provide incentives for sellers to deliver on their promises.
How about an Ethereum smart contract that gives back your money unless the owner releases the key used to encrypt your files (which may be possible to verify in the contract)?
This is historically where the Mafia came from, as a means to keep members of a price fixing cartel mutually honest. The old saying about "no honour amongst thieves" being solved by outsourcing to a body to provide a parallel system of contract enforcement.
Harder to achieve online but not impossible, though plenty of criminals make enough without essentially having to place themselves at risk of physical attack from organised crime.
> If you really want to be successful in exploiting people through cyber attacks, I guess you will need some kind of system to provide guaranteed contracts, i.e. proof that if a victim pays the ransom, then the other end of the bargain will be upheld.
Could a smart contract system work here? In this example, the smart contract would assure you the hash of the repo sent to you corresponds to the one you already had locally. You'd add automatic payment when conditions are fulfilled... Is that feasible?
The problem is that you have no way of knowing how many copies of the data the hacker has. It's very easy to confirm that the hacker has your data, but confirming the opposite - that the attacker no longer has your data - is pretty much impossible. If there's even a way to do it, it would surely require the hacker to hold encrypted data which can only be decrypted if certain conditions are met. If you're going to go to that length, then why not just encrypt it by conventional means and not risk your data at all?
Unless someone fancies setting up a trusted hacker escrow that acts as an intermediary between compromised servers and hackers? That sounds incredibly complicated, highly illegal, and unlikely to be trusted by either hacker or hacked, though.
Simplest solution: the payment is put into escrow; the ransom is released to the ransom holder after 365 days provided the source code is not leaked, and released back to the victim if the source code is leaked before then. If the ransom holder released the source after the fact, it would be a year out of date.
> It might seem that there's some incentive for ransom holders to hold up their end of the bargain for the majority of cases if they want their attacks to be profitable.
There's also the fact that they don't care about who you are or what you do, their only consideration is financial.
How does one realistically protect against these new attack vectors? It's all become so quick - the malware infects your machine, and seconds later your repos are cloned.
Most computers are always connected to the internet when they're on, even if they don't necessarily need to be. Airgapping isn't really used outside of very sensitive networks, but I'm starting to think we need to head towards a model of connecting machines only when really needed.
Of course the cloud based world doesn't allow for that, and perhaps I'm a luddite, but I increasingly find myself disabling the network connection when I'm working on my PC. Kind of like the dial-up days.
Have a fun laptop, a work laptop, and maybe banking tablet?
As a good corporate drone, this arrangement is kind of forced on me, but a lot of small company / startup folks totally mix the two. Might be a good thing to not do.
Sure, it doesn't protect you from e.g. a tool you need for work being compromised, but it reduces the attack surface - this guy probably wouldn't have installed HandBrake on his work machine.
Another thing we do, specifically because medical data is involved: a lot of the time I'm forced to work inside a non-internet-connected network that I VPN and then remote desktop into. Firewall rules mean the only thing getting in from my laptop is VNC. Some systems also require plugging into a specific physical network. Overkill for most uses, but it makes losing laptops far less scary if you can keep a lot of your stuff on a more secure remote system.
Try out Qubes: http://qubes-os.org
> Have a fun laptop, a work laptop, and maybe banking tablet?
I would both prefer and hate this setup. I use my personal laptop for work, and having all my apps, data, settings, etc. available in one place is amazing. I could get past using different computers, but the sad reality is my provided work computer is underpowered compared to my 3.5-year-old MacBook. I can run circles around my coworkers' machines on the simple fact that I have an SSD. IDEA opens in seconds for me while they go get a cup of coffee. Our desktops haven't been updated in probably 4+ years, and I strongly believe they'd be more productive on macOS than on whatever flavor of Linux they are using (most use Linux because they'd rather die than use Windows and they can eke a little more performance out). A number of them have older MacBooks they use for meetings, but they aren't powerful enough to actually develop on.
> How does one realistically protect against these new attack vectors?
Not installing unsigned software is a good start. Does that dialog need a secondary 'Are you really, really sure?' Absolutely... but the basic defence in this specific case was in place.
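For what it's worth, you can ask Gatekeeper for its verdict on a downloaded app before ever launching it (the path is an example):

    # Would Gatekeeper accept this app, and under which rule (App Store, Developer ID, ...)?
    spctl --assess --type execute --verbose=4 /Applications/HandBrake.app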
We had a system that was used to generate television graphics. Our installer for new software was capable of bringing up a system with a new hard drive, so one of the options it had was to format the hard drive. The installer asked three times if you were sure, with increasingly severe warnings about losing all your data. Sure enough, a customer with an existing hard drive ran through all three warnings, formatted their hard drive, and then called customer service to complain about losing all their data.
The solution, of course, was to add a fourth question...
> How does one realistically protect against these new attack vectors? It's all become so quick - the malware infects your machine, and seconds later your repos are cloned.
1) Don't install random crap off of the internet: only use the Mac App Store, with sandboxed apps and "System integrity protection" turned on.
2) If you absolutely need to have some non-MAS app, check the checksum, download the DMG, but let it rest, and only install it a month or so later, if no news of breach, malware etc has been announced.
3) Don't give a third party program root privileges -- don't give your credentials when a random program you've downloaded asks for them.
4) Have any sensitive data (e.g. work stuff) on an encrypted .DMG volume or similar, that you only decrypt when you need to check something. Even if your Mac is infected, they'll either get just an encrypted image of those, or won't be able to read it at all.
5) Install an application firewall, like Little Snitch.
6) Keep backups.
I definitely agree with this advice in general, but as it so happens, users who installed HandBrake via homebrew (a package manager for macOS) were affected by this too because the hash for the latest version of HandBrake was changed to the infected version[1]. Still, package managers definitely make it harder for the attacker in most cases.
In cases where I need to download and install unsigned software that's not available via a package manager, I run hashes (MD5, SHA1, SHA256, etc.) on the downloaded file and then run Google searches on those hashes. As long as the software has been released for more than a day or two and it has a decent-sized user base, the hashes will show up in various places such as fossies.org and will be cached by Google. That would have protected against this particular attack.
EDIT: But in this case, the software in question is signed, so the (fallback) technique described above is not necessary. The download page [0] contains a GPG signature along with a link to the author's GPG public key. Checking the signature would have prevented the attack.
[0]: https://handbrake.fr/rotation.php?file=HandBrake-1.0.7.dmg
[1]: https://github.com/caskroom/homebrew-cask/pull/33354
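Concretely, both checks are one-liners; a sketch, where the file names follow the download page above and the public-key file name is made up:

    # Hashes to paste into a search engine (a hash nobody else has ever published is a red flag)
    shasum -a 256 HandBrake-1.0.7.dmg
    md5 HandBrake-1.0.7.dmg

    # Or verify the detached GPG signature: import the project's published key first,
    # then look for "Good signature from ..." in the output
    gpg --import handbrake-pubkey.asc
    gpg --verify HandBrake-1.0.7.dmg.sig HandBrake-1.0.7.dmg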
If you really want to be pedantic about it, utilize an egress firewall policy, whether on your machine or your router. For macOS, Little Snitch or Radio Silence. For Linux/BSD, set up your firewall of choice to do some filtering.
Yes, it will take a lot of effort to set up, and some effort to maintain, but it helps.
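A minimal Linux flavour of that egress policy, as a sketch (the allowed ports are assumptions; tune to whatever you actually use):

    # Default-deny outbound, then open only what you need
    iptables -P OUTPUT DROP
    iptables -A OUTPUT -o lo -j ACCEPT
    iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
    iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
    # Anything else (say, a C&C server on an odd port) gets logged, then dropped by the policy
    iptables -A OUTPUT -j LOG --log-prefix "egress-drop: "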
Would a passphrase on the SSH key help in this case? Attacker would have the SSH key but need the passphrase to be able to use it. That's how I have my SSH keys.
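For anyone whose key was generated without one, adding (or changing) a passphrase on an existing key is a one-liner:

    # Prompts for the old passphrase (empty if none) and the new one; the key file stays in place
    ssh-keygen -p -f ~/.ssh/id_rsa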
I believe this malware included a keylogger. Retrieving the correct passphrase would be another step for the attacker but wouldn't stop them if they're determined.
> I also likely bypassed the Gatekeeper warning without even thinking about it, because I run a handful of apps that are still not signed by their developers.
Apple really needs to fix this. In particular open source applications don't sign for whatever reason and it's clear that barring some change they aren't going to start now.
> In particular open source applications don't sign for whatever reason
Most open source applications are signed, just not through Apple's App Store. Instead, most OSS downloads provide a GPG signature. You should not execute downloaded code before checking signatures - either via the App Store or a package manager, or manually.
A big problem I find with signatures is that I'm not sure what extra security they provide in cases such as this. If the binary can be changed, how can I be sure that the attacker hasn't also been able to change the SHA1 file, or to re-sign with the developer's private key?
Slightly OT: I'm a reasonably competent Mac user, I use them all day and depend on them to control my house as I'm disabled. In the event I were to be compromised, can anyone suggest a logging tool/tools that I might be able to use on my network such that I could work out what the problem was and correct anything that needs correcting please?
We are looking at four or five Macs of differing types but all running the latest OS, a number of iPhones, iPads, more Raspberry Pi's than I'm going to admit to and a number of other IoT devices.
TIA!
Also, I really wish more companies would be this forthcoming when they get pwned. I think it's really good when a large company comes out with this type of mea culpa, mea maxima culpa. If professionals can get totally pwned, I really do think it tends to make ordinary users think about their security a little more. Or maybe I'm just hopelessly optimistic!
I've been doing some googling and it looks like syslog is something that I run on every machine, and then it passes the results of its logging to the Raspberry Pi for collation and possible inspection later on. Have I got the basic gist of it?
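A sketch of that setup with rsyslog, where the collector hostname "logpi" and the plain-UDP transport are assumptions (macOS clients forward logs through a different mechanism, so that side needs its own config):

    # On the Raspberry Pi (the collector), in /etc/rsyslog.conf: listen on UDP 514
    module(load="imudp")
    input(type="imudp" port="514")

    # On each Linux client, in /etc/rsyslog.d/90-forward.conf
    # (a single @ means UDP; @@ would mean TCP):
    *.*  @logpi:514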
One way to protect against this is to not have SSH keys on your laptop. I've been using Kryptonite (https://krypt.co/) lately, which is sort of like two-factor for SSH keys.
You can get a similar SSH 2FA setup with Google Authenticator's PAM module (https://github.com/google/google-authenticator-libpam), and maintain full control over your infrastructure.
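A sketch of the PAM route, with the module and config lines taken from that project's documentation (sshd details vary a bit by OS):

    # Per user: generate the secret and scan the QR code into the phone app
    google-authenticator

    # In /etc/pam.d/sshd, add:
    auth required pam_google_authenticator.so

    # In /etc/ssh/sshd_config, enable challenge-response prompts, then restart sshd:
    ChallengeResponseAuthentication yes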
Kryptonite is more like a GPG smartcard (or YubiKey) of sorts (with some security trade-offs for the arguably better UX). The key never leaves your mobile device. Backdoors and bugs are still a possibility of course (when aren't they?), and you probably wouldn't want to run this on an outdated mobile device.
Of course, all you're doing with any of this is preventing your key from leaking. A sufficiently motivated attacker could just backdoor your ssh/git binary and access things through that instead, but it's still a good defense-in-depth mechanism, IMO.
Yes, but passwords can be keylogged and SSH keys on a hardware token (like yubikey) can't. Also if you've got touch-to-use enabled on the token you can't even use the key without physically touching the token.
Great writeup! I think a lot of developers would do well to understand both the 'right' way to respond to this sort of event and the tools you need in order to do so. The most important being detailed logging and processes for re-keying everything.
I've participated in, and run, exercises where such damage is inflicted on purpose to surface gaps in the response processes and to fix them. I was inspired by the Google DiRT (disaster recovery) and Netflix Chaos Monkey exercises. Both of these create not simply review processes but simulation by action, or actually doing the damage to see the process work. Setting up your systems so that you can do that is a really powerful tool.
That actually goes a step further than Chaos Monkey. I wonder how many organizations would survive that approach if it were intense enough from day #1. Better to ramp that up carefully and give people room to breathe and fix things.