This week, Google Cloud paid out its highest bug bounty yet ($150k) for a vulnerability that could have been prevented with ASI (Address Space Isolation) [0]. Good to see that Google is pushing forward with ASI despite the performance impact, because it would benefit the security of all hosting companies that use Linux/KVM, not just big tech's cloud providers.
When enabling this new protection, could we potentially disable other mitigation techniques that become redundant, and therefore regain some performance?
Yes! The numbers in the posting don't account for this.
Before doing this though, you need to be sure that ASI actually protects all the memory you care about. The version that currently exists protects all user memory but if the kernel copies something into its own memory it's now unprotected. So that needs to be addressed first (or some users might tolerate this risk).
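Before disabling anything, it's worth checking which mitigations a machine is actually running; Linux exposes one status file per vulnerability under /sys/devices/system/cpu/vulnerabilities/. A minimal sketch (the helper names are mine, and it just returns an empty dict on systems without that sysfs tree):

```python
from pathlib import Path

def classify(status: str) -> str:
    """Map a sysfs vulnerability status line to a coarse category."""
    if status.startswith("Not affected"):
        return "not-affected"
    if status.startswith("Mitigation:"):
        return "mitigated"
    return "vulnerable"

def read_vulnerabilities(root="/sys/devices/system/cpu/vulnerabilities"):
    """Return {vulnerability: status} from sysfs; empty dict if unavailable."""
    base = Path(root)
    if not base.is_dir():
        return {}
    return {f.name: f.read_text().strip() for f in base.iterdir()}

if __name__ == "__main__":
    for name, status in sorted(read_vulnerabilities().items()):
        print(f"{name:24} {classify(status):12} {status}")
```

On a typical x86 box this lists entries like l1tf, mds, and spectre_v2, which is the starting point for deciding what an ASI-protected kernel might let you turn off.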
My understanding was that many of the fixes for speculative execution issues themselves led to performance degradation. Does anyone know the latest on that, and how this compares?
Are these performance hit numbers inclusive of turning off the other mitigations?
I'm all for giving programmers a way to flush state, and maybe this is just a matter of taste, but I wouldn't characterize this as "taking care of the problem once and for all" unless there's a [magic?] way to recover from the performance trade-off that you'd see in "normal" operating systems (i.e. not seL4).
It doesn't change the fact that when you implement a RISC-V core, you're going to have to partition/tag/track resources for threads that you want to be separated. Or, if you're keeping around shared state, you're going to be doing things like "flush all caches and predictors on every context switch" (can't tell if that's more or less painful).
Anyway, that all still seems expensive and hard regardless of whether or not the ISA exposes it to you :(
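To get a feel for why flush-on-every-context-switch is expensive, here's a back-of-the-envelope model. All the numbers are illustrative assumptions, not measurements:

```python
def flush_overhead(switches_per_sec: float,
                   flush_cost_us: float,
                   warmup_cost_us: float) -> float:
    """Fraction of CPU time lost to flushing caches/predictors plus
    re-warming them (extra misses) after each context switch."""
    lost_us_per_sec = switches_per_sec * (flush_cost_us + warmup_cost_us)
    return lost_us_per_sec / 1_000_000

# Illustrative: 10k switches/s, 5us to flush, 20us of extra misses to re-warm.
overhead = flush_overhead(10_000, 5.0, 20.0)
print(f"{overhead:.1%}")  # prints 25.0%
```

The warm-up term usually dominates, which is why partitioning/tagging state (so nothing has to be thrown away) can win despite its hardware complexity.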
These numbers are all vs. a completely unmitigated system. And this is an extra-expensive version of ASI that does more work than is really needed on this hardware, to ensure we can measure the impact of the recent changes (details are in the posting).
So I should probably post something more realistic, and compare against the old mitigations. This will make ASI look a LOT better. But I'm being very careful to avoid looking like a salesman here. It's better that I risk making things look worse than they are, than risk having people worry I'm hiding issues.
Not sure if you wrote this article, and I appreciate an engineering desire to undersell, but if this is faster than what people actually do in practice, the takeaway is different than if it's slower. So I think you're doing folks a disservice by not comparing against a realistic baseline in addition to an unmitigated one.
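The baseline really does change the headline number. If ASI costs X vs. an unmitigated system and today's default mitigations cost Y vs. unmitigated, the figure most users care about is ASI vs. the current default (the numbers below are illustrative, not from the posting):

```python
def relative_overhead(asi_vs_unmitigated: float,
                      default_vs_unmitigated: float) -> float:
    """Overhead of ASI relative to the currently-default mitigations.
    Inputs are fractional slowdowns vs. an unmitigated system."""
    asi_time = 1 + asi_vs_unmitigated          # normalized runtime under ASI
    default_time = 1 + default_vs_unmitigated  # runtime under today's default
    return asi_time / default_time - 1

# If ASI were 20% slower than unmitigated, but the default mitigations
# were 30% slower, ASI would actually be ~7.7% faster than the default:
print(f"{relative_overhead(0.20, 0.30):+.1%}")  # prints -7.7%
```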
That's still really massive. It would only make sense in very high-security environments.
Honestly, running system services in VMs (or an OS like Qubes) would be cheaper and just as good. The VM hit is much smaller: less than 1% in some cases on newer hardware.
It makes sense in any environment where workloads from two different parties share compute, e.g. public clouds.
The protection here is to ensure the VMs are isolated. Without it, there is the potential to leak data across guests via speculative execution.
From reading the article, that seems to be exactly the feeling of the people involved as well. The question is whether they are on track towards, e.g., the eventual 1%.
The next steps should make this much faster. Google's internal version generally gives us a sub-1% hit on everything we measure.
If the community is up for merging this (which is a genuine question - the complexity hit is significant) I expect it to become the default everywhere, and for most people it should be a performance win vs. the current default.
But, yes. Not there right now, which is annoying. I'm hoping the community is willing to start merging this anyway with the trust we can get it to be really fast later. But they might say "no, we need a full prototype that's super fast right now", which would be fair.
Overhead should be minimal but something is preventing it from working as well as it theoretically should. AFAIK Microsoft has been improving VBS but I don't think it's completely fixed yet.
BF6 requiring VBS (or at least "VBS capable" systems) will probably force games to find a way to deal with VBS as much as they can, but for older titles it's not always a bad idea to turn off VBS to get a less stuttery experience.
We're working on HPC / graphics / computer-vision software and noticed a particularly nasty issue with VBS enabled just last week. Although it has to be mentioned that this was on Win10 Pro.
Sometimes something in me starts wondering whether this regularly occurring slowing of chips through exploit mitigations is deliberate.
All of big tech wins: CPUs get slower, so we need more vCPUs and more memory to serve our JavaScript slop to end customers. The hardware companies sell more hardware, the cloud providers sell more cloud.
I think it’s more pragmatic. We can eliminate hyperthreading to solve this, or increase memory isolation at the cost of performance. One is a 50% hit in terms of vCPUs; the other is now sub-50%.
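The comparison in the parent can be made concrete: disabling SMT halves the number of sellable vCPUs outright, while a fractional per-vCPU slowdown reduces effective capacity by that fraction. A small sketch with illustrative numbers:

```python
def effective_capacity(vcpus: int, slowdown: float = 0.0,
                       smt_disabled: bool = False) -> float:
    """Effective compute capacity in 'unmitigated vCPU' units."""
    if smt_disabled:
        vcpus //= 2  # half the logical CPUs disappear with SMT off
    return vcpus * (1 - slowdown)

base   = effective_capacity(64)                       # 64.0 (no mitigation)
no_smt = effective_capacity(64, smt_disabled=True)    # 32.0: a 50% hit
asi    = effective_capacity(64, slowdown=0.2)         # 51.2: a 20% hit
print(base, no_smt, asi)
```

So even a mitigation costing 20% per vCPU (a hypothetical figure) leaves far more sellable capacity than turning SMT off.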
These types of mitigations have the biggest benefit when resources are shared. Do you really think cloud vendors want to lose performance to CPU or other mitigations when they could literally sell those resources to customers instead?
They don't lose anything, since they sell the same instance, which just performs worse with the mitigations on.
Customers end up paying more because they need more instances.
Sometimes it's fun to engage in a little conspiratorial thinking. My 2 cents: that TPM 2.0 requirement on Windows 11 is about to create a whole ton of e-waste in October (Windows 10 EOL).
I'm not so sure. Many people kept running Windows XP/7 long after their EOL dates. Unless Chrome, Steam, etc. drop support for Windows 10, I don't think many people will care.
[0] https://cyberscoop.com/cloud-security-l1tf-reloaded-public-c...
The RISC-V ISA has an effort to standardize a timing fence[1][2], to take care of this once and for all.
0. https://tomchothia.gitlab.io/Papers/EuroSys19.pdf
1. https://lf-riscv.atlassian.net/wiki/spaces/TFXX/pages/538379...
2. https://sel4.org/Summit/2024/slides/hardware-support.pdf
Probably works best running VMs with the same kernel and software version.
That's not something I'd easily associate with a step forward in security.
What kind of workloads have noticeably lower performance with VBS?
You can't just turn off hyperthreading.