I've been working on technology like this for the past six years.
The benefits of transparent systems are likely considerable. The combination of reproducible builds, remote attestation and transparency logging allows trivial detection of a range of supply chain attacks. It can allow users to retroactively audit the source code of remote running systems. Yes, there are attacks that the threat model doesn't protect against. That doesn't mean it isn't immensely useful.
I've also worked in this field but it feels like a foundation built on quicksand. You depend on so many turtle layers and only one of them has to be adversarial and game over.
> it feels like a foundation built on quicksand. You depend on so many turtle layers and only one of them has to be adversarial and game over
Interesting. Please elaborate.
Here's how I see it.
Reproducible builds: I think we'll eventually see Linux distributions like Debian make reproducible builds mandatory by enforcing it in apt-get's trust policy. The trust policy could be expressed as "I will only trust .deb packages where their build hash and source hash are signed by three different build pipelines I trust".
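To make that concrete, here is a rough sketch of such a policy check. The pipeline names, statement format, and use of Ed25519 are purely illustrative; real apt integration would look different:

```python
# Hypothetical k-of-n trust policy for reproducible builds: accept a
# package only if at least `threshold` distinct, trusted build pipelines
# signed the same (source hash, build hash) statement.
from dataclasses import dataclass
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

@dataclass(frozen=True)
class BuildAttestation:
    pipeline_id: str   # which build pipeline produced this attestation
    signature: bytes   # Ed25519 signature over statement() below

def statement(source_hash: str, build_hash: str) -> bytes:
    return f"built {build_hash} from {source_hash}".encode()

def policy_allows(source_hash: str, build_hash: str,
                  attestations: list[BuildAttestation],
                  trusted_keys: dict[str, ed25519.Ed25519PublicKey],
                  threshold: int = 3) -> bool:
    msg = statement(source_hash, build_hash)
    valid = set()
    for att in attestations:
        key = trusted_keys.get(att.pipeline_id)
        if key is None:
            continue  # unknown pipeline: ignored, neither helps nor hurts
        try:
            key.verify(att.signature, msg)
            valid.add(att.pipeline_id)  # count each pipeline only once
        except InvalidSignature:
            pass
    return len(valid) >= threshold
```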
Remote attestation: If you ensure that the server's CPU SoC and the TPM have different supply chains, you could construct a protocol where the supply chain attacker would have to own both supply chains in order to impersonate the server.
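A toy version of that protocol, assuming each chip holds a signing key rooted in its own supply chain (all names made up):

```python
# Toy dual-root attestation: the verifier accepts the server's identity
# only if BOTH the CPU SoC and the TPM, sourced from different supply
# chains, sign the same fresh nonce. Impersonation requires both roots.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def attest(cpu_key, tpm_key, nonce: bytes):
    # Server side: each root of trust signs the verifier's nonce.
    return cpu_key.sign(nonce), tpm_key.sign(nonce)

def verify(cpu_pub, tpm_pub, nonce: bytes,
           cpu_sig: bytes, tpm_sig: bytes) -> bool:
    try:
        cpu_pub.verify(cpu_sig, nonce)  # supply chain A must check out...
        tpm_pub.verify(tpm_sig, nonce)  # ...AND supply chain B as well.
    except InvalidSignature:
        return False
    return True

cpu_key = ed25519.Ed25519PrivateKey.generate()  # stand-in for the SoC key
tpm_key = ed25519.Ed25519PrivateKey.generate()  # stand-in for the TPM key
nonce = os.urandom(32)                          # fresh nonce prevents replay
sigs = attest(cpu_key, tpm_key, nonce)
assert verify(cpu_key.public_key(), tpm_key.public_key(), nonce, *sigs)
```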
Transparency logging: One of the projects I've been working on for the past four years is Sigsum (sigsum.org). It is a transparency log with distributed trust assumptions. Our goal was to figure out the essence of transparency logging technology, identify the most significant design parameters, and for each parameter minimise the attack surface. You'll find the threat model on our website.
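For the curious, the core primitive behind any such log is small enough to sketch. This is a generic RFC 6962-style Merkle inclusion-proof check; Sigsum's actual wire format and policy layer differ, so treat this as the idea only:

```python
# Generic Merkle inclusion-proof verification: a handful of sibling
# hashes prove that a specific leaf is included in a tree with a given
# root, which is what lets log monitors catch equivocation.
import hashlib

def leaf_hash(data: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + data).digest()   # domain-separated

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(leaf: bytes, index: int, tree_size: int,
                     proof: list[bytes], root: bytes) -> bool:
    h = leaf_hash(leaf)
    i, size = index, tree_size
    for sibling in proof:
        if i % 2 == 1 or i + 1 == size:
            h = node_hash(sibling, h)   # we are the right child
        else:
            h = node_hash(h, sibling)   # we are the left child
        i //= 2
        size = (size + 1) // 2          # nodes remaining at next level
    return h == root
```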
Here's a recent presentation by my colleague Rasmus on the subject: https://www.youtube.com/watch?v=Mp23yQxYm2c
Here's a recent presentation by me on the subject of system transparency / runtime transparency / the technology underlying Apple PCC: https://www.youtube.com/watch?v=Lo0gxBWwwQE
I think the only shaky part is the Secure Enclave, which provides the root of the guarantees. From there, everything is attested, so if one layer is adversarial, other layers can notice.
I hope this helps people consider Swift 6 a viable option for server-side development, since it offers many of the modern safety features of Rust, with memory management through ARC that is simpler than Rust's ownership system and more predictable than Go's garbage collector.
I'd love to use Swift on Cloudflare Workers, but SwiftWASM doesn't seem production ready whereas Rust just works (mostly) on workers. Swift on AWS Lambda looks promising though.
It's worth keeping in mind that these AI machines run an environment very similar to macOS, XNU kernel and all, and are powered by Apple Silicon. Using Swift in that context makes sense.
At least according to what we publicly know, no other backend Apple services follow this model.
What do we know about Apple's other backend services? I've worked in compute infra in big tech for 8 years and I don't know anything about Apple's backend.
Most editors will do; Xcode is mostly needed for iOS / macOS development, if you want to submit to the App Store or work with a lot of Apple frameworks.
Have you used it recently, on an M-series Mac? I used to feel the same; it was sluggish and crashed frequently. It's become usable now, even pleasant to use. Also, it's great that they now support Vim keybindings out of the box.
I feel like this is all smoke and mirrors to redirect from the likelihood of intentional silicon backdoors that are effectively undetectable. Without open silicon, there's no way to detect that -- say -- when registers r0-rN are set to values [A, ..., N] and a jump to address 0xCONSTANT occurs, additional access is granted to a monitor process.
Of course, this limits the potential attackers to 1) exactly one government (or N number of eyes) or 2) one company, but there's really no way that you can trust remote hardware.
This _does_ increase the trust that the VMs are safe from other attackers, but I guess this depends on your threat model.
> I feel like this is all smoke and mirrors to redirect from the likelihood of intentional silicon backdoors that are effectively undetectable.
The technologies Apple PCC is using have real benefits and are most certainly not "all smoke and mirrors". Reproducible builds, remote attestation and transparency logging are individually useful, and the combination of them even more so.
As for the likelihood of Apple launching Apple PCC to redirect attention from backdoors in their silicon, that seems extremely unlikely. We can debate how unlikely, but there are many far more likely explanations. One is that Apple PCC is simply good business. It'll likely reduce security costs for Apple, and strengthen the perception that Apple respects users' privacy.
> when registers r0-rN are set to values [A, ..., N] and a jump to address 0xCONSTANT occurs
I would recommend something more deniable, or at the very least something that can't easily be replayed. Put a challenge-response in there, or attack the TRNG. It is trivial to make a stream of bytes appear random while actually being deterministic. Such an attack would be more deniable, while also allowing a passive network attacker to read all user data. No need to get code execution on the machines.
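To illustrate how cheap this is, here is a minimal sketch of a backdoored "TRNG" that simply emits a keystream under an attacker-known key (key and device ID made up). Its output looks random to every statistical test, yet the attacker can regenerate every byte offline:

```python
# Illustration of why "looks random" proves nothing: a backdoored RNG
# emitting a deterministic keystream (SHA-256 in counter mode under a
# key only the attacker knows). Anyone holding the key can reproduce
# every "random" byte the device ever used, e.g. its session keys.
import hashlib

def backdoored_trng(attacker_key: bytes, device_id: bytes, n: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < n:
        block = hashlib.sha256(
            attacker_key + device_id + counter.to_bytes(8, "big")
        ).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:n])

# The device ships with this instead of a real TRNG. The attacker,
# knowing the key and device ID, replays the exact same stream offline.
stream_on_device   = backdoored_trng(b"attacker-key", b"serial-0001", 64)
stream_at_attacker = backdoored_trng(b"attacker-key", b"serial-0001", 64)
assert stream_on_device == stream_at_attacker
```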
Apple forgot to disable some cache debugging registers a while back, which in effect was similar to what GP described, although exploitation required root privileges and allowed circumventing their in-kernel protections; protections most other systems do not have. (And the attackers still didn't manage to achieve persistence, despite having beyond-root privileges.)
If you take as a fundamental assumption that all your hardware is backdoored by Mossad who has unlimited resources and capacity to intercept and process all your traffic, the game is already lost and there’s no point in doing anything.
If instead you assume your attackers have limited resources, things like this increase the costs attackers have to spend to compromise targets, reducing the number of viable targets and/or the depth to which they can penetrate them.
Some of us just assume Apple itself is a bad actor planning to use and sell customer data for profit; that makes all of this smoke and mirrors, like GP said.
There is absolutely no technical solution by which Apple can prove our data isn't exfiltrated, as long as it is their software running on their hardware.
The economics of silicon manufacturing and Apple's own security goals (including the security of their business model) restrict the kinds of backdoors you can embed in their servers at that level.
Let's assume Apple has been compromised in some way and releases new chips with a backdoor. It's expensive to insert extra logic into just one particular spin of a chip; that involves extra tooling cost that would be noticeable line-items and show up in discovery were Apple to be sued about their false claims. So it needs to be on all the chips, not just a specific "defeat PCC" spin of their silicon. So they'd be shipping iPads and iPhones with hardware backdoors.
What happens when those backdoors inevitably leak? Well, now you have a trivial jailbreak vector that Apple can't patch. Apple's security model could be roughly boiled down as "our DRM is your security"; while they also have lots of actual security, they pride themselves on the fact that they have an economic incentive to lock the system down to keep both bad actors and competing app stores out. So if this backdoor was inserted without the knowledge of Apple management, there are going to be heads rolling. And if it was, then they're going to be sued up the ass once people realize the implications of such a thing, because Tim Cook went up on stage and promised everyone they were building servers that would refuse to let them read your Siri queries.
All remote attestation technology is rooted by a PKI (the DCA certificate authority in this case). There's some data somewhere that simply asserts that a particular key was generated inside a CPU, and everything is chained off that. There's currently no good way to prove this step so you just have to take it on faith. Forge such an assertion and you can sign statements that device X is actually a Y and it's game over, it's not detectable remotely.
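A sketch of why that one assertion carries everything. If you hold the root key, you can endorse a key that never saw the inside of a CPU, and a remote verifier cannot tell (Ed25519 standing in for the real PKI; names illustrative):

```python
# The whole chain reduces to one signed claim: "this key was generated
# inside a genuine CPU". Whoever holds the root key can mint that claim
# for any key at all, and verification still succeeds.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

CLAIM = b"key-generated-inside-genuine-cpu:"
root = ed25519.Ed25519PrivateKey.generate()  # vendor PKI root (the DCA role)

def endorse(device_pub: bytes) -> bytes:
    # The only "proof" that the key lives in silicon is this signature.
    return root.sign(CLAIM + device_pub)

def verifier_accepts(device_pub: bytes, endorsement: bytes,
                     nonce: bytes, att_sig: bytes) -> bool:
    try:
        root.public_key().verify(endorsement, CLAIM + device_pub)
        ed25519.Ed25519PublicKey.from_public_bytes(device_pub) \
            .verify(att_sig, nonce)
    except InvalidSignature:
        return False
    return True

# A key generated in ordinary software, in no CPU at all, gets endorsed
# the same way -- and a remote verifier has no way to tell.
fake = ed25519.Ed25519PrivateKey.generate()
fake_pub = fake.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
nonce = os.urandom(32)
assert verifier_accepts(fake_pub, endorse(fake_pub), nonce, fake.sign(nonce))
```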
Therefore, you must take on faith the organization providing the root of trust i.e. the CPU. No way around it. Apple does the best it can within this constraint by trying to have numerous employees be involved, and there's this third party auditor they hired, but that auditor is ultimately engaging in a process controlled by Apple. It's a good start but the whole thing assumes either that Apple employees will become whistleblowers if given a sufficiently powerful order, or that the third party auditor will be willing and able to shut down Apple Intelligence if they aren't satisfied with the audit. Given Apple's legal resources and famously leak-proof operation, is this a convincing proposition?
Conventional confidential computing conceptually works, because the people designing and selling the CPUs are different to the people deploying them to run confidential workloads. The deployers can't forge an attestation (assuming absence of bugs) because they don't have access to the root signing keys. The CPU makers could, theoretically, but they have no reason to because they aren't running any confidential workloads so there's no data to steal. And they are in practice constrained by basic problems like not knowing what CPU the deployers actually have, not being able to force changes to other people's hardware, not being able to intercept the network connections and so on.
So you need a higher authority that can force them to conspire which in practice means only the US government.
In this case, Apple is doing everything right except that the root of trust for everything is Apple itself. They can publish in their log an entry that claims to be an Apple CPU but for which the key was generated outside of the manufacturing process, and that's all it takes to dismantle the entire architecture. Apple know this and are doing the best they can within the "don't team up with competitors" constraint they obviously are placed under. But trust is ultimately a human thing and the purpose of corporations is to let us abstract and to some extent anthropomorphize large groups. So I'm not totally sure this works, socially.
> backdoors inevitably leak? Well, now you have a trivial jailbreak vector
The discoverability of an exploit vector has little to do with how trivial it is to use, especially when considering the context (nation-state APTs).
For years, you could hold the Enter key down for 40 seconds to log into a certain Linux server distro. No one knew, and it was easy to do.
You can have a chip inside your chip that only accepts encrypted and signed microcode and has control over the superior chip. Everyone knows; there's nothing you can do.
Nation-state actors, however, can facilitate either; APTs can forge fake digital forensics that imply another motive, another state, or a false flag.
This is an interesting idea. However, what does open hardware mean? How can you prove that the design or architecture that was “opened” is actually what was built? What does the attestation even mean in this scenario?
Great question. Most hardware projects I've seen that market themselves as open source hardware provide the schematic and PCB design, but still use ICs that are proprietary. One of my companies, Tillitis, uses an FPGA as the main IC, and we provide the hardware design configured on the FPGA. Still, the FPGA itself is proprietary.
Another aspect to consider is whether you can audit and modify the design artefacts with open source tooling. If the schematics and PCB design are stored in a proprietary format, I'd say that's slightly less open source hardware than if the format were KiCad EDA, which is open source. Similarly, in order to configure the HDL onto the FPGA, do you need to use 50 GB of proprietary Xilinx tooling, or can you use open tools for synthesis, place-and-route, and configuration? That also affects the level of openness, in my opinion.
We can ask similar questions of open source software. People who run a Linux distribution typically don't compile packages themselves. If those packages are not reproducible from source, in what sense is the binary open source? It seems we consider it to be open source software because someone we trust claimed it was built from open source code.
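The check that would change that picture is mechanical: rebuild in a pinned environment and compare digests. A minimal sketch, with placeholder paths and a hypothetical build script:

```python
# Minimal local check of the claim "this binary was built from that
# source": rebuild under pinned inputs and compare digests. The paths
# and ./build.sh are placeholders, not any distro's real tooling.
import hashlib
import subprocess

def sha256(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

subprocess.run(["./build.sh"], check=True)   # hypothetical pinned build
mine   = sha256("out/package.deb")           # what I just built
theirs = sha256("mirror/package.deb")        # what the distro ships
print("reproducible" if mine == theirs else "MISMATCH: claim unverified")
```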
This is my thought exactly. I really love the idea of open hardware, but I don't see how it would protect against covert surveillance. What's stopping a company/government/etc from adding surveillance to an open design? How would you determine that the hardware being used is identical to the open hardware design? You still ultimately have to trust that the organisations involved in manufacturing/assembling/installing/operating the hardware in question haven't done something nefarious. And that brings us back to square one.
If this is your position then you might as well stop using any computing devices of any kind. Which includes any kind of smart devices. Since you obviously aren't doing that, then you're trying to hold Apple to a standard you won't even follow yourself.
On top of which, your comment is a complete non-sequitur to the topic at hand. You could reply with this take to literally any security/privacy related thread.
No one should consider this any protection against nation-state actors acting in collaboration against Apple. That doesn't mean it's pointless. Removing most of the cloud software stack from the TCB, and also protecting against malicious or compromised system administrators, is still very valuable for people who are going to move to the cloud anyway.
The Bloomberg SuperMicro implant in its various forms is an exceptionally poor example here: it's been widely criticized and never corroborated, and Apple's Private Compute architecture has extensive mitigations against every type of purported attack in the various forms the SuperMicro story has taken. UEFI/BIOS backdoors, implanted chips affecting the BMC firmware, and malicious/tampered storage device firmware are all accounted for in the Private Compute trust model.
IIRC, no real proof was ever provided for that Bloomberg article (despite it also never being retracted). Many looked for the chips, and from everything I heard there was never a concrete case in which one was discovered.
Doesn't make the possible threat less real (see recent news in Lebanon), but that story in particular seems to have not stood up to closer inquiry.
Transparency through things like attestation is capable of proving that nothing unexpected is running; for instance, you can provide power/CPU-time numbers or hashes of arbitrary memory, and this can make it arbitrarily hard to run extra code, since doing so would take more time.
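A toy illustration of the timing idea; the buffer stands in for device memory the verifier already knows, and a real system would measure far more carefully:

```python
# Toy timed attestation: the verifier asks for a hash over randomly
# chosen memory regions and rejects slow answers. Hidden extra code must
# either be covered by the hash (detected) or worked around (slower).
import hashlib
import os
import time

MEMORY = bytearray(os.urandom(1 << 20))  # stand-in for device memory

def challenge() -> list[tuple[int, int]]:
    # Random (offset, length) regions, so answers can't be precomputed.
    limit = len(MEMORY) - 4096
    return [(int.from_bytes(os.urandom(3), "big") % limit, 4096)
            for _ in range(64)]

def respond(regions) -> bytes:
    h = hashlib.sha256()
    for off, length in regions:
        h.update(MEMORY[off:off + length])
    return h.digest()

def verify(regions, answer: bytes, elapsed: float, budget: float) -> bool:
    # The verifier holds the expected memory image, so it can recompute.
    return answer == respond(regions) and elapsed <= budget

regions = challenge()
t0 = time.perf_counter()
answer = respond(regions)                 # would run on the remote device
elapsed = time.perf_counter() - t0
print(verify(regions, answer, elapsed, budget=0.05))
```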
And the secure routing does make most of these attacks infeasible.
There's been some limited research in this space; see for instance xoreaxeaxeax's sandsifter tool, which has found millions of undocumented processor instructions [0].
[0] https://www.youtube.com/watch?v=ajccZ7LdvoQ
Yeah, but, considering the sheer complexity of modern CPUs and SoCs, this is still the case even if you have the silicon in front of you. That ship sailed some time ago.
It depends on what you want to do. If all you're trying to do is produce an Ed25519 signature you could use something like the Tillitis TKey. It's a product developed by one of my companies. As I've mentioned elsewhere in this thread it is open source hardware in the sense that the schematic, PCB design _and_ hardware design (FPGA configuration) are all open source. Not only that, the FPGA only has about 5000 logic cells. This makes it feasible for an individual to audit the software and the hardware it is running on to a much greater extent than any other system available for purchase. At least I'm not aware of a more open and auditable system than ours.
For example, https://github.com/search?q=repo%3Aapple%2Fsecurity-pcc%20rd... lists out all references to `rdar`, which is a link schema for Apple's bug management system.
Also, it is clear that the code is cross platform (it references iOS and macOS).
So the code here gives clues as to the security operation of iOS as well in case you wanted to do iOS security research.
It is lovely to see the middleware here written in Swift. It is quite chunky. Reading all that XPC code gives me the shivers (as I've personal experience with how tricky that can get).
Overall it is a very interesting offering. I wish I had two weeks to burn through the details...
[I am the author of The Road to Zero, and iOS Crash Dump Analysis].
A lot of people seem to be focusing on how this program isn’t sufficient as a guarantee, but those people are missing the point.
The real value of this system is that Apple is making legally enforceable claims about their system. Shareholders can, and do, sue companies that make inaccurate claims about their infrastructure.
I’m 100% sure that Apple’s massive legal team would never let this kind of program exist if _they_ weren’t also confident in these claims. And a legal team at Apple certainly has both internal and external obligations to verify these claims.
America’s legal system is in my opinion what allows the US to dominate economically, creating virtuous cycles like this.
Unfortunately that doesn't help anyone outside the US; not because of differences in the legal systems, but because, as an American company, Apple will always have to defer to US agencies first.
I’m pretty sure a foreign shareholder can sue in a US court of law. While I agree that “shareholder” in this case means extra-massive moneyed entity, I firmly believe that even this provides a deterrence effect. At the very least, for the scale of operations in the US, there’s an extremely high trust environment. That level of trust doesn’t exist even for orders of magnitude smaller issues in most other countries
This marketing is dumb. If Apple believes that not even they themselves can get access to the information running on the platform, they could put their money where their mouth is: increase the max bounty reward from $50 000 to $50 000 000 000, with no rule other than that if you can get access to users' request data without having the phone it's sent from, you get the money, and Apple will not legally pursue the attacker.
Is it as secure as they say? Then it doesn't matter if the reward is all the money Apple has, because nobody can get it. A max bounty of $50 000 for "Accidental or unexpected data disclosure due to deployment or configuration issue" is absurdly low.
Repo: https://github.com/apple/security-pcc
https://github.com/ixy-languages/ixy-languages
https://marketplace.visualstudio.com/items?itemName=sswg.swi...
One of these threat models is actually useful.
American lawyers of the highest pedigree (HNWIs) don't even use email.
Your hardware is backdoored, as Intel is named "Intel" for a (nearly too poignant) reason.
The system is protecting you against Apple employees, but not against law enforcement.
No matter how many layers of technology you put in place, at the end of the day US companies have to respect US law.
The requests can be routed to specific investigation / debugging / beta nodes.
It would just take turning on a flag for specific users.
It's not ultimate privacy, but at least it will prevent Apple engineers from snooping on private chat logs.
(Like when some pervert at Gmail was stalking a little girl: https://www.gawkerarchives.com/5637234/gcreep-google-enginee... , or Zuckerberg himself reading chat logs: https://www.vanityfair.com/news/2010/03/mark-zuckerberg-alle... )
https://www.bloomberg.com/news/articles/2018-10-04/the-big-h...
> The requests can be routed to specific investigation / debugging / beta nodes.
No, this is not possible with the design of PCC; they can't control how your requests are routed and there cannot be nodes with extra debugging.
This has not been corroborated and Bloomberg has not produced any supporting evidence.
37C3 - Operation Triangulation: What You Get When Attack iPhones of Researchers https://www.youtube.com/watch?v=1f6YyH62jFE
Absolutely insane attack. Really opens your eyes to what nation-state attackers are capable of.
The level of conspiracy needed to keep something like this a secret would be unprecedented.
And if Apple were able to do that, why wouldn't they just backdoor iOS/macOS instead of baking it into the hardware?
I hope you'll consider adding witness cosignatures on your transparency log though. :)
There is no private other-people's computer, by definition.
What those attempts try to do is change the definition.