Readit News
gddbvxmm · 12 days ago
This week, Google Cloud paid out their highest bug bounty yet ($150k) for a vulnerability that could have been prevented with ASI [0]. Good to see that Google is pushing forward with ASI despite the performance impact, because it would benefit the security of all hosting companies that use Linux/KVM, not just the cloud providers of big tech.

[0] https://cyberscoop.com/cloud-security-l1tf-reloaded-public-c...

WhyNotHugo · 12 days ago
When enabling this new protection, could we potentially disable other mitigation techniques which become redundant and therefore re-gain some performance?
bjackman · 11 days ago
Yes! The numbers in the posting don't account for this.

Before doing this though, you need to be sure that ASI actually protects all the memory you care about. The version that currently exists protects all user memory but if the kernel copies something into its own memory it's now unprotected. So that needs to be addressed first (or some users might tolerate this risk).
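For context, Linux already reports the per-vulnerability mitigation state under /sys/devices/system/cpu/vulnerabilities/, which is the natural starting point for deciding what might become redundant. A minimal sketch; the "redundant candidate" heuristic here is purely illustrative, not a safe policy:

```python
from pathlib import Path

def read_mitigations(base="/sys/devices/system/cpu/vulnerabilities"):
    """Map each vulnerability name to the kernel-reported status string."""
    p = Path(base)
    if not p.is_dir():  # not Linux, or sysfs unavailable
        return {}
    return {f.name: f.read_text().strip() for f in sorted(p.iterdir())}

def redundant_candidates(status):
    """Entries where a software mitigation is currently active; these are
    the ones worth re-evaluating if ASI covers the same attack surface."""
    return [name for name, s in status.items() if s.startswith("Mitigation:")]

for name, s in read_mitigations().items():
    print(f"{name}: {s}")
```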

Eridrus · 12 days ago
My understanding was that many of the fixes for speculative execution issues themselves led to performance degradation. Does anyone know the latest on that, and how this compares?

Are these performance hit numbers inclusive of turning off the other mitigations?

snvzz · 12 days ago
There's essentially one way[0] to fix timing side channels.

The RISC-V ISA has an effort to standardize a timing fence[1][2], to take care of this once and for all.

0. https://tomchothia.gitlab.io/Papers/EuroSys19.pdf

1. https://lf-riscv.atlassian.net/wiki/spaces/TFXX/pages/538379...

2. https://sel4.org/Summit/2024/slides/hardware-support.pdf

eigenform · 11 days ago
I'm all for giving programmers a way to flush state, and maybe this is just a matter of taste, but I wouldn't characterize this as "taking care of the problem once and for all" unless there's a [magic?] way to recover from the performance trade-off that you'd see in "normal" operating systems (i.e. not seL4).

It doesn't change the fact that when you implement a RISC-V core, you're going to have to partition/tag/track resources for threads that you want to be separated. Or, if you're keeping around shared state, you're going to be doing things like "flush all caches and predictors on every context switch" (can't tell if that's more or less painful).

Anyway, that all still seems expensive and hard regardless of whether or not the ISA exposes it to you :(
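To put rough numbers on "expensive": the overhead of flushing on every context switch is approximately switch rate times flush cost. A back-of-the-envelope sketch with made-up but plausible numbers:

```python
def flush_overhead(switches_per_sec, flush_cost_us):
    """Fraction of CPU time spent on per-context-switch flushes."""
    return switches_per_sec * flush_cost_us / 1_000_000

# e.g. 5,000 switches/s at 20 us per full cache+predictor flush:
# 5000 * 20us = 0.1 s of flushing per second of wall time, i.e. 10%
print(f"{flush_overhead(5000, 20):.0%}")
```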

bjackman · 11 days ago
These numbers are all vs. a completely unmitigated system. AND, this is an extra-expensive version of ASI that does more work than is really needed on this HW, to ensure we can measure the impact of the recent changes. (Details of this are in the posting.)

So I should probably post something more realistic, and compare against the old mitigations. This will make ASI look a LOT better. But I'm being very careful to avoid looking like a salesman here. It's better that I risk making things look worse than they are, than risk having people worry I'm hiding issues.

Eridrus · 11 days ago
Not sure if you wrote this article, and I appreciate an engineering desire to undersell, but if this is faster than what people actually do in practice, the takeaway is different than if it is slower. So I think you're doing folks a disservice by not comparing against a realistic baseline in addition to an unmitigated one.
0cf8612b2e1e · 12 days ago
Furthermore, if the OS level mitigations are in place, would the hardware ones be disabled?
api · 12 days ago
That's still really massive. It would only make sense in very high-security environments.

Honestly running system services in VMs would be cheaper and just as good, or an OS like Qubes. VM hit is much smaller, less than 1% in some cases on newer hardware.

gpapilion · 12 days ago
It makes sense in any environment where two workloads from two parties share compute, such as public clouds.

The protection here is to ensure the VMs are isolated. Without it, there is the potential to leak data across guests via speculative execution.

eptcyka · 12 days ago
VMs suffer from memory use overhead. Would be cool if the guest kernel would cooperate with the host on that.
jeroenhd · 12 days ago
There's KSM that should help: https://pve.proxmox.com/wiki/Kernel_Samepage_Merging_(KSM)

Probably works best when running VMs with the same kernel and software versions.
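KSM's effect can be estimated from its sysfs counters: pages_sharing counts pages that now reference a shared copy instead of holding their own. A rough sketch; the paths are the standard KSM sysfs interface, and the example count is made up:

```python
def ksm_saved_bytes(pages_sharing, page_size=4096):
    """Approximate memory deduplicated by KSM: each 'sharing' page
    references an existing shared page instead of holding its own copy."""
    return pages_sharing * page_size

def read_ksm_counter(name, base="/sys/kernel/mm/ksm"):
    """Read a live KSM counter on a Linux host; 0 if KSM is unavailable."""
    try:
        with open(f"{base}/{name}") as f:
            return int(f.read())
    except (OSError, ValueError):
        return 0

# e.g. 250k shared-out pages at 4 KiB each is roughly 1 GB reclaimed
print(ksm_saved_bytes(250_000))
```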

traverseda · 12 days ago
It will! For Linux hosts and Linux guests, if you use virtio and memory ballooning.
api · 12 days ago
It's possible to address this to some extent with ballooning memory drivers, etc.
russdill · 12 days ago
Look at it this way: any time a new side-channel attack comes out, the situation changes. Having this as a mitigation that can be turned on is helpful.
riedel · 12 days ago
From reading the article, that is exactly the feeling of the people involved as well. The question is whether they are on track towards, e.g., the 1% eventually.
bjackman · 11 days ago
The next steps should make this much faster. Google's internal version generally gives us a sub-1% hit on everything we measure.

If the community is up for merging this (which is a genuine question - the complexity hit is significant), I expect it to become the default everywhere, and for most people it should be a performance win vs. the current default.

But, yes. Not there right now, which is annoying. I'm hoping the community is willing to start merging this anyway, trusting that we can make it really fast later. But they might say "no, we need a full prototype that's super fast right now", which would be fair.

kookamamie · 12 days ago
Windows suffers from similar effects when Virtualization-Based Security is active.
Avamander · 12 days ago
At the same time VBS is one of the biggest steps forward in terms of Windows kernel security. It's actually considered a proper security boundary.
munchlax · 12 days ago
Funny that they called it VBS.

That's not something I'd easily associate with a step forward in security.

transpute · 12 days ago
Hypervisor overhead should be low, https://www.howtogeek.com/does-windows-11-vbs-slow-pc-games/

What kind of workloads have noticeably lower performance with VBS?

jeroenhd · 12 days ago
It was measured to have a performance impact of up to 10%, with even higher numbers for the nth percentile lows: https://www.tomshardware.com/news/windows-vbs-harms-performa...

Overhead should be minimal but something is preventing it from working as well as it theoretically should. AFAIK Microsoft has been improving VBS but I don't think it's completely fixed yet.

BF6 requiring VBS (or at least "VBS capable" systems) will probably force games to find a way to deal with VBS as much as they can, but for older titles it's not always a bad idea to turn off VBS to get a less stuttery experience.
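As an aside on the "1% lows" metric in the linked benchmark: it's usually the FPS implied by the slowest 1% of frames, which is why VBS-induced stutter shows up there even when the average FPS barely moves. A sketch of one common way to compute it (frame data is made up):

```python
def one_percent_low_fps(frame_times_ms):
    """FPS implied by the slowest 1% of frames (one common definition)."""
    xs = sorted(frame_times_ms)
    worst = xs[int(len(xs) * 0.99):] or xs[-1:]  # slowest 1%, at least one frame
    return 1000.0 / (sum(worst) / len(worst))

# 99 frames at 10 ms plus one 50 ms stutter: average FPS is ~96,
# but the 1% low is only 20 FPS, exposing the stutter
print(one_percent_low_fps([10.0] * 99 + [50.0]))
```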

kookamamie · 12 days ago
We're working on HPC / graphics / computer-vision software and noticed a particularly nasty issue with VBS enabled just last week. Although, it has to be mentioned that this was on Win10 Pro.
lenerdenator · 12 days ago
Anything that runs on an ISA that has certain features has these effects, IIRC.
Traubenfuchs · 12 days ago
Sometimes something in me starts to wonder whether this regularly occurring slowing of chips through exploit mitigations is deliberate.

All of big tech wins: CPUs get slower, so we need more vCPUs and more memory to serve our JavaScript slop to end customers. The hardware companies sell more hardware, the cloud providers sell more cloud.

gpapilion · 12 days ago
I think it’s more pragmatic. We can eliminate hyperthreading to solve this, or increase memory safety at the cost of performance. One is a 50% hit in terms of vCPUs; the other is now sub-50%.
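That trade-off is simple capacity arithmetic: dropping SMT halves sellable vCPUs outright, while a mitigation costs only its overhead fraction. A toy model with illustrative numbers:

```python
def effective_vcpus(physical_cores, smt_on, mitigation_overhead):
    """Sellable vCPU-equivalents under each strategy (toy model)."""
    threads = physical_cores * (2 if smt_on else 1)
    return threads * (1 - mitigation_overhead)

# 64 cores: drop SMT (no mitigation needed) vs keep SMT and eat a 10% hit
print(effective_vcpus(64, smt_on=False, mitigation_overhead=0.0))   # 64.0
print(effective_vcpus(64, smt_on=True, mitigation_overhead=0.10))   # 115.2
```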
Traubenfuchs · 12 days ago
They also need some phony justifications though.

Can't just turn off hyperthreading.

Avamander · 12 days ago
These types of mitigations have the biggest benefit when resources are shared. Do you really think cloud vendors want to lose performance to CPU or other mitigations when they could literally sell those resources to customers instead?
bzzzt · 12 days ago
They don't lose anything, since they sell the same instance, which just performs worse with the mitigations on. Customers pay because they need more instances.
depingus · 12 days ago
Sometimes it's fun to engage in a little conspiratorial thinking. My 2 cents: that TPM 2.0 requirement on Windows 11 is about to create a whole ton of e-waste in October (Windows 10 EOL).
e2le · 12 days ago
I'm not so sure. Many people still ran Windows XP/7 long after the EOL date. Unless Chrome, Steam, etc. drop support for Windows 10, I don't think many people will care.
AlienRobot · 12 days ago
Hey, it's not nice to call Linux users "e-waste."
bzzzt · 12 days ago
Why would big tech do this when customers bring it upon themselves by building Javascript slop?
worthless-trash · 12 days ago
Big tech isn't running their stack on JS.