The problem isn't the capability of remote attestation. The problem is who's using it, i.e. who's defining what "security" means. As noted above, for a company, "security" often means intentionally inhibiting my freedom, not actually securing anything I care about.
We would benefit from a better public discussion of what "security" encompasses. Else, we risk conflating "what MS wants me to do with my computer" with "preventing hackers from stealing my credit card number".
Imagine a world where you could submit personal information to a company, with the technological assurance that this information would not leave that company... and you could verify this with remote attestation of the software running on that company's servers.
That's a classic "road to hell paved with good intentions". The approaching reality is more like:
Imagine a world where to be allowed to use the Internet you will be mandated to run certain software, which reports your personal information to a company you are obligated to use, and whose use of that information is absolutely something you do not want.
Yes, the problem is indeed "who's using it". Unfortunately you aren't going to be able to decide either, and it will certainly be used against you.
"Imagine a world where you could submit personal information to a company, with the technological assurance that this information would not leave that company... and you could verify this with remote attestation of the software running on that company's servers."
That world already exists, it just doesn't get used much. You can do this with Intel SGX and AMD SEV.
The obvious place for this is blocking cloud providers from accessing personal data. For example, it could be used to resolve concerns about using US based services from Europe, because any data uploaded to such a service can be encrypted such that it's only processed in a certain way (this is what RA does).
RA gets demonized by people making the arguments found in the sibling comment, but they end up throwing the baby out with the bathwater. There are tons of privacy, control and decentralization problems that look intractable until you throw RA in the mix; then suddenly solving them becomes easy. Instead of needing teams of cryptographers to invent ad-hoc, app-specific protocols for every app (which in reality they never do), you write a client that, as part of the connect sequence, remote-attests the server to check that it's running software that won't leak your private information.
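To make that concrete, here's a minimal sketch of such a connect sequence (Python, with made-up helper and field names rather than any real SGX/SEV SDK):

    import binascii

    # Measurements (hashes) of server builds we, or auditors we trust, have
    # reviewed and believe won't leak the data we send. Values are illustrative.
    TRUSTED_MEASUREMENTS = {
        "3c9a1f...": "privacy-service v2.1, reproducible build, audited 2023-04",
    }

    def attested_connect(channel, verify_report_signature):
        """Refuse to send anything private until the server proves, via a
        hardware-signed attestation report, what software it is running.

        `channel.request_attestation()` and `verify_report_signature` are
        placeholders for whatever the real platform/SDK provides; the latter
        must check the CPU vendor's signature and return the parsed report.
        """
        report = verify_report_signature(channel.request_attestation())

        measurement = binascii.hexlify(report["measurement"]).decode()
        if measurement not in TRUSTED_MEASUREMENTS:
            raise PermissionError(
                "server runs unaudited software; refusing to send private data"
            )

        # Only now do we trust the key the report binds to, and keep talking.
        return report["tls_public_key"]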
> The obvious place for this is blocking cloud providers from accessing personal data. For example, it could be used to resolve concerns about using US based services from Europe, because any data uploaded to such a service can be encrypted such that it's only processed in a certain way (this is what RA does).
This will not work, because the concerns about US-based services are legal ones, arising from access requirements imposed by the US government, and those cannot be solved by technical restrictions while still complying with the requirements.
Is there any way we can make Remote Attestation providers liable for any losses incurred while using their services? Can we make it so that banks, record companies, and individuals can sue Microsoft or Google if their system doesn't deliver on the promise? If we still see cheating in on-line gaming even though all machines are attested, can we get our money back?
I feel like part of the problem is that Remote Attestation providers get to have their cake and eat it too: they make a theme park, set up boundaries, and charge admission under the premise that it's safer to play in their walled garden than in a public park.
But if a bad actor slips through their gate and picks a few pockets or kidnaps a couple children, the operators get to say "not our problem, our services have no warranty -- read the EULA".
I feel like in the real world, if a park operator explicitly bills itself as "a safe place to play" it's their problem if someone goes on a crime spree on their property -- there is some duty to deliver on the advertised safety promise.
But somehow, in the software world people can control admission, control what you do and somehow have no liability if things still go off the rails. It's just a sucker's game.
Of course, I'd rather not see remote attestation happen, but maybe part of the reason it keeps creeping back is exactly because there is zero legal downside to making security promises that can't be kept, but incredible market advantages if they can sucker enough people to believe in the scheme.
> If we still see cheating in on-line gaming even though all machines are attested, can we get our money back?
The concept of remote attestation isn't somehow safer if it works perfectly, and it isn't clear to me that this is actually impossible to build (within an acceptable and specified liability constraint) as opposed to merely exceedingly difficult. I do relish the schadenfreude, though ;P.
> Of course, I'd rather not see remote attestation happen...
Interestingly, the CEO of MobileCoin told me earlier this year that they were "going deeper on discussions with [you] to design a fully open source enclave specifically for [their] use case" (which, for anyone who doesn't know much about this, currently relies on remote attestation and encrypted RAM from Intel SGX to allow mobile devices to offload privacy-sensitive computations and database lookups to their server). I wrote a long letter to you a few days later in the hope of (after verifying with you whether that was even true or not) convincing you to stop, but then decided I should probably try to talk to Kyle and/or Cory first on my way to you (and even later ended up deciding I was stressed out about too many things at the time to deal with it)... does this mean you actually aren't, and we are all safe? ;P (I guess it could be the case that this special design somehow doesn't involve any form of remote attestation--as while my core issue with their product is their reliance on such, I went back through the entire argument and I didn't use that term with THEM--in which case I'm very curious how that could actually work.)
Huh...maybe I didn't parse your comment correctly, but I just checked and I don't think I ever got an email from you on the subject? Totally possible I just bungled it, I'm terrible with names and my inbox is a dumpster fire :P
It's also interesting to see how the game of "telephone" works out when the message comes full circle. Mobilecoin did reach out to me, initially to see if I would write a whitepaper on SGX. After I told them I would be frank about all my opinions, the conversation pivoted to "well, if you could make something that fixed this problem what would it be?". Which I entertained by saying I think the problem may not be solvable, but whatever it was, it had to be open source; and "oh by the way let me tell you about my latest projects, perhaps I could interest you in those". To which it trailed off with a "I'll have my people call your people" and that was that, modulo a podcast I did for them about a month ago which surprisingly didn’t touch on SGX.
So: long story short, no, I'm not creating a solution for them, and I think remote attestation is both a bad idea and not practical. Is it worse than burning some hundreds of terawatt-hours of power per annum to secure a cryptocurrency? That is a harder question to answer: is climate change a bigger problem than remote attestation? The answer is probably obvious to anyone who reads that question, but no two people will agree on what it is.
To your point on RA being not impossible but possibly just exceedingly difficult – you might be right. My take on it is that remote attestation is only "transiently feasible": you can create a system that is unbreakable with the known techniques today; but the very "unbreakability" of such a scheme would cause ever more valuable secrets to be put in such devices, which eventually promotes sufficient investment to uncover an as of yet unknown technique that, once again, breaks the whole model.
Which is why I’m calling out the legal angle, because the next step in the playbook of the corps currently pushing RA is to break that cycle -- by lobbying to make it unlawful to break their attestation toys. Yet, somehow, they still carry no liability themselves for the fact that their toys never worked in the first place. I feel like if they actually bore a consequence for selling technology that was broken, they’d stop trying to peddle it. However, if they can get enough of society to buy into their lie, they’ll have the votes they need to change the laws so that people like you and me could bear the penalty of their failure. With that strategy, they get to decide when the music stops – as well as where they sit.
I'd like to see a return to sanity. Security is fundamentally a problem of dealing with people acting as humans, not of ciphers and code. Technology tends to only delay the manifestation of malintent, while doing little to address the root cause, or worse yet -- hiding the root cause.
IMO this just seems like bargaining and hoping for a just world where the law actually applies equally and constrains too-big-to-fail actors. What would actually happen is various limits/exceptions would get written in, like as long as you used "proper" software (read: microsoft) and did "proper" audits (read: tediously check moar boxes) then you could pass that liability onto someone else or have it be "nobody's fault". We'd likely end up with the same software totalitarianism even faster, because companies would be even more incentivized to deploy cookie cutter centralizing solutions to escape the additional liability.
Never mind that you can't really put a dollar value on personal information to substantiate damages or even personal time spent dealing with the fallout from someone else's negligence, which is like one of the fundamental problems with our legal system.
(There's also the elephant in the room that one of the main industries clamoring for ever more "security" still continues to insist that widely-published numbers (ssn/acct/etc) are somehow secret.)
"Is there any way we can make Remote Attestation providers liable for any losses incurred while using their services?"
RA is a use-case neutral hardware feature, so it doesn't really make sense to talk about making providers liable for anything. That's an argument for making CPU manufacturers liable for anything that goes wrong with any use of a computer.
The sort of companies that use RA are already exposed to losses if RA breaks, that's why they invest in it to start with. Console makers lose money if cheating is rampant on their platforms for example, because people will stop playing games when they realize they can't win without cheating.
So what you're saying is, let's incentivize these already incentivized people to use RA even more, and moreover, let's strongly incentivize companies that don't use it to start doing so. Because if you think governments will say "oh, you didn't use the best available tech to protect the kids, fair enough no liability" then you're not very experienced with how governments work! They will say "you should have used RA like your competitors, 10x the fine".
Hardware-based attestation of the running software is an important security feature, especially in a world where data leaks and identity theft are rampant. Let's say I'm a healthcare provider, and I'm about to send sensitive medical data to a third party vendor. Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?
If the vendor wants to install some self-built OS that they trust on their computer and not update it for 5 years, that's their business, but I may not want to trust their computer to have access to my personal data.
Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines (or even their own machines that may have been compromised). This is useful for more than just DRM.
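For what it's worth, the mechanical core of "only an attested machine can decrypt this" is just hybrid encryption to a public key that shows up inside a verified attestation report. A rough sketch (Python, using the pyca/cryptography library; the report-verification step is a placeholder for whatever the platform vendor actually provides):

    import os
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey,
        X25519PublicKey,
    )
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def encrypt_for_attested_recipient(plaintext, report, verify_report):
        """Encrypt `plaintext` so only the machine described by `report` can read it.

        `verify_report` is a placeholder: it must check the hardware vendor's
        signature, check that the measured software is on our approved list,
        and return the recipient's X25519 public key bytes from the report.
        """
        recipient_pub = X25519PublicKey.from_public_bytes(verify_report(report))

        # Ephemeral ECDH -> symmetric key: a bare-bones ECIES-style construction.
        eph_priv = X25519PrivateKey.generate()
        shared = eph_priv.exchange(recipient_pub)
        key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"attested-recipient-demo").derive(shared)

        nonce = os.urandom(12)
        ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

        # The recipient needs our ephemeral public key to derive the same key.
        eph_pub = eph_priv.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw
        )
        return eph_pub, nonce, ciphertext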
> Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?
No.
Contrarian, unpopular opinion: You cannot own data except what resides on your own property. Once you give someone a copy, it is theirs to do with as they wish. They may tell you what they will and will not do, but it is entirely on you to trust them.
...and that's the peril of things like remote attestation and other "zero trust" crap. They replace the nuanced meaning of trust that holds society together (and has literally done so since the beginning of life) with absolutes enforced by an unrelenting machine controlled by some faceless bureaucracy which is also partly under the command of the government. There should already be enough dystopian sci-fi to convince everyone why that is a really bad idea.
We've already seen shades of this in banking. After chips were added to credit cards, people started having their chargebacks denied because "our records show the card was physically present" (even if the charge originated in another country).
How long until companies try to deny responsibility for data leaks because "our records show Windows was fully up-to-date and secure"?
> You cannot own data except what resides on your own property. Once you give someone a copy, it is theirs to do with as they wish.
Completely agree. These outdated notions of information ownership are destroying free computing as we know it. Everything "hacker" stands for is completely antithetical to such notions.
Strongly agree, but even if you are wrong on all those points, I still don't want to be forced to run the same exact monoculture software as everyone else.
Good luck getting your x86-64 windows kernel + chrome JavaScript exploit chain to run on my big endian arm 64 running Linux and Firefox.
(Also, the existence of competition like that regularly forces all the alternatives to improve.)
It's not necessarily "you" vs. "someone else". You could be one person with two computers and want one computer to be able to attest to the other computer something about its software. (Imagine it's not two computers, but a thousand computers that are exposed to both physical and network attacks.)
Right, but if they aren't going to follow best security practices and prove it (via a signed hardware attestation of the running software that includes the transport key they want me to use to send them the data), then I'm not going to send them the data. That's my choice.
> Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?
I trust myself more than I trust anyone or anything else. It's as simple as that. I don't even slightly trust Microsoft, Google, or Apple.
Your logic is built on an invalid premise that these companies can, in fact, be trusted.
> Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines (or even their own machines that may have been compromised).
This is exactly what I want to avoid. It's my device. It should only ever serve me, not anyone else, including its manufacturer and/or OS developer. It should not execute a single instruction that isn't in service of helping me achieve something.
Also, the concept of ownership can simply not be applied to something that does not obey the physical conservation law, i.e. can be copied perfectly and indefinitely.
If I want to buy a device that can generate a proof I can share with others to increase their trust in me, you shouldn't be able to stop me. Implemented properly, these machines can still boot whatever custom software you want; you don't have to share the proof of what booted with anyone.
All cryptography is important. Hardware attestation is no exception. The problem is who's using it, who owns the keys, who's exploiting who.
It's totally fine if it's used to empower and protect us, normal people. If we can use this with our own keys to cryptographically prove that our own systems haven't been tampered with, it's not evil, it's amazing technology that empowers us.
What we really don't need is billion dollar corporations using cryptography to enforce their rule over their little extra-legal digital fiefdoms where they own users and sell access to them to other corporations or create artificial scarcity out of infinite bits. Such things should be straight up illegal. I don't care how much money it costs them, they shouldn't be able to do it.
The problem is we have chip makers like Intel and AMD catering to the billion dollar corporation's use case instead of ours. They come up with technology like IME and SGX. They sell us chips that are essentially factory-pwned by mega-corps. My own computer will not obey me if it's not in some copyright owner's interest to do so and I can't override their control due to their own cryptographic roadblocks. Putting up these roadblocks to user freedom should be illegal.
> The problem is who's using it, who owns the keys, who's exploiting who.
The governments know this all too well; that's why they've been trying to ban cryptography, and it was (and I believe still is in many cases) classified as a munition.
But that's not a good argument because SGX isn't something that empowers the Big Guys over the Little Guys. In fact it's the other way around - they took it out of their consumer chips and now it's only found in their server class chips. So companies can create RA proofs and send them to users, but not the other way around.
> Let's say I'm a healthcare provider, and I'm about to send sensitive medical data to a third party vendor. Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?
No, because this still doesn't mean my data is secure. A human can still go into the third party vendor's system and see my data, and if that human wants to steal it, they can. No amount of remote attestation will prevent that.
> Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines
Oh, really? So every time my health care provider wants to send my data to a third party, a remote attestation confirmation box will pop up on my phone so I can say yes or no, or ask more questions about the third party vendor's computers?
Ultimately the problem here is trust, and trust is a social problem, and as the saying goes, you can't use technology to solve a social problem. But you sure can pretend to be using technology to "solve" a problem in order to get people to give up more and more control over their devices.
Even if we assume that the features will be basically unbreakable, your world will still end up looking like the following.
Entities (ab)using remote attestation in order of 'screws over those below them':
Government > Cyber criminal groups > Large organizations > Normal people.
Do you want to live in a world where a large corp can dictate which $VERSION of $APPROVED_SOFTWARE you should be running? I think fundamentally it's just not the direction we should be going. I don't actually doubt that proper remote attestation eventually would be possible, but before then it will be possible to bypass it in countless ways. Probably eventually you'd end up with only a single software stack, assumed to be flawlessly secure.
I think, luckily, this will severely limit the usability of the technology that can work in this way. Developing for this stack will be a pain, and the machine will have all sorts of super annoying limitations: can't use that display, the driver is not vetted; can't use that USB webcam, it might have DMA; etc. That will hopefully harm the uptake of such technologies.
As so often in tech, remote attestation in your case is a technical fix for a social problem. If the problem is sharing sensitive data with institutions you don't trust, then you need to build that trust, or transform the institutions so that they can be trusted. Transparency, laws, oversight, that type of stuff.
IMO the entire remote attestation push is an obfuscated dance about who has root, control, and ultimately ownership over devices.
If vendors were plain about it, "attestation" wouldn't be a big deal: you do not own the devices, we do, and you lease it from us, maybe for a one time fee.
But companies know it won't actually fly if you're plain about it, ESPECIALLY with large corporations and governments, who will outright refuse to buy your services or equipment for many key things if they are not the ultimate controllers of the machines, for multiple reasons.
"Hardware-based attestation of the running software is an important security feature"
I understand the mechanics in a "lies to children" way but who exactly is attesting what? Let's face it: MS isn't going to compensate me for a perceived flaw in ... why am I even finishing this sentence?
I recently bought some TPM 2.0 boards for my work VMware hosts so I could switch on secure boot and "attestation" for the OS. They are R630s which have a TPM 1.2 built in but a 2.0 jobbie costs about £16.
I've ticked a box or three on a sheet but I'm not too sure I have significantly enhanced the security of my VMware cluster.
Implemented properly, the idea is that you have a chain of certificates (rooted by the CPU vendor's public key) that can identify all the different bits of software that have executed on the machine, along with an ephemeral public key. The hardware guarantees that the associated private key can only be wielded by the software versions that the chain attested to. So when you initiate your TLS connection with this machine, you can validate the cert chain and understand exactly what software the machine is running, assuming that you trust the CPU vendor and all the versions of the software that were attested to.
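A loose sketch of what that validation could look like (Python; the signature check is a stand-in for whatever scheme the chain actually uses, so treat the whole thing as illustrative):

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class AttestationCert:
        subject: str        # e.g. "firmware", "bootloader", "kernel", "app"
        measurement: bytes  # hash of the software this link attests to
        public_key: bytes   # key this link certifies (last link: the ephemeral TLS key)
        signature: bytes    # signature made with the previous link's key

    def verify_chain(chain: List[AttestationCert],
                     vendor_root_key: bytes,
                     verify_sig: Callable[[bytes, AttestationCert], bool],
                     expected: Dict[str, bytes]) -> bytes:
        """Walk the chain from the CPU vendor's root key down to the ephemeral key.

        Returns the attested ephemeral public key if every link is signed by the
        previous one and every measured component is a build we recognise.
        `verify_sig(signing_key, cert)` is a placeholder for the real check.
        """
        signer = vendor_root_key
        for cert in chain:
            if not verify_sig(signer, cert):
                raise ValueError(f"bad signature on the {cert.subject} link")
            if expected.get(cert.subject) != cert.measurement:
                raise ValueError(f"unrecognised {cert.subject} build")
            signer = cert.public_key  # the next link must be signed by this key

        # The software stack matched what we expect: pin this key in the TLS
        # handshake so we know we're talking to that attested stack.
        return chain[-1].public_key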
> Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?
This is a pretty bad example. The attack vector is rarely, if ever, the technical way the encrypted file is received or where it is decrypted. The attack vector is what happens after it's decrypted. You've given an encrypted file to a computer you've confirmed knows how to decrypt it "securely" (whatever that means). And after that, that clean OS image with all the latest security patches still enables, by design, the decrypted data to be used (read, manipulated, do whatever it is you sent it to them in the first place) and sent someplace else or copied to removable media.
Can you even do paper prescriptions any more? I've only had digital ones my entire adult life.
Quick edit to answer my own question: In my home state paper prescriptions are only legal in a few situations (if it's for an animal, glasses, justifiable emergencies). However, in some parts of the country they're still possible. Even if I had a choice, I'd prefer the convenience of sending the data digitally: once you actually fill the paper prescription, CVS or whoever is still gonna be able to glean sensitive medical info, so you're just delaying the inevitable.
You already do have mandatory disclosure on shenanigans like that in the US. It's the boilerplate HIPAA agreement you sign when you first see a provider.
Good luck finding a provider that doesn't ship your sensitive medical data out to an EMR company though.
Right?! It's telling that that's the use case: "what if we want to securely exchange The New Oil for some sweet cash, without the chance of some other asshole horning in on our racket or the little people hearing about it?"
The reality though is none of this is "secure" except by extensive, massive collusion and centralization in society - and such a thing is implicitly able to be used against the people as much as it might be used for them.
The only reason such hardware is secure is because the resources required to hack it are large.
Basically, a sane system would be: two parties exchange their own TPM keys which they generated on device themselves. They agree to a common set of measurements they will use with their TPMs to determine if they believe the systems are running normally. They then exchange data.
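Concretely, that sane version might look something like this (Python sketch; it follows the general shape of a TPM 2.0 quote, whose PCR digest is a hash over the selected PCR values, but the signature check is left as a placeholder, so read it as an illustration rather than a working verifier):

    import hashlib
    import secrets

    # The "golden" PCR values both parties agreed to ahead of time:
    # PCR index -> expected SHA-256 value for a system in its normal state.
    AGREED_PCRS = {
        0: bytes.fromhex("aa" * 32),  # illustrative values only
        7: bytes.fromhex("bb" * 32),
    }

    def expected_pcr_digest(pcrs):
        """Digest over the selected PCR values, lowest index first (the same
        kind of composite digest a TPM 2.0 quote attests to)."""
        h = hashlib.sha256()
        for index in sorted(pcrs):
            h.update(pcrs[index])
        return h.digest()

    def check_quote(quote, peer_key, verify_signature, my_nonce):
        """`quote` is the parsed attestation from the peer's TPM; `peer_key` is
        the attestation key the peer generated on-device and handed to us
        directly (no vendor in the loop); `verify_signature` is a placeholder
        for the TPM's actual signature scheme."""
        if quote["nonce"] != my_nonce:              # freshness / anti-replay
            return False
        if not verify_signature(peer_key, quote):   # the peer's TPM really signed this
            return False
        return quote["pcr_digest"] == expected_pcr_digest(AGREED_PCRS)

    # Usage: send secrets.token_bytes(32) as the nonce, have the peer quote over
    # it, and only exchange data once check_quote(...) returns True - with the
    # other side doing the same to you, using keys and golden values they chose.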
What's happening instead: a large company uses its market position to bake in its own security keys, which the user can't access or change. They then use their market position to demand your system be configured a specific way that they control. Everyone else defers to them because they're a big player and manufacturing TPMs is complicated. They have full control of the process.
The essential difference is that rather than two individuals establishing trust, and agreeing to protocols for it - secured with the aid of technology - instead one larger party seizes control by coercion, pretends it'll never do wrong, and allows people to "trust" each other as mediated by its own definition. Trust between individuals ceases to exist, because it's trust provided you're not betrayed by the middle-man.
Weirdly enough, this is actually a big god damn problem if you work for any organization doing government or security work, because the actual processes of those places tend to hinge on whether or not they believe large corporate providers doing things like this are actually doing them well enough, or can be trusted enough, to be a part of the process. So even if you're notionally part of "the system" it doesn't actually make anything easier: in an ideal world, open-source security parts would enable COTS systems to be used by defense and government departments with surety, because they'd be built from an end-user trust and empowerment perspective.
So even the notional beneficiaries tend to have problems, because a security assessment ends up at "well, we just have to trust Microsoft not to screw up" - and while the Head of the NSA might be able to call them up and get access, a random state-level government department trying to handle healthcare or traffic data or whatever cannot.
> Let's say I'm a healthcare provider, and I'm about to send sensitive medical data to a third party vendor. Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?
I'd rather be able to access it without google or microsoft sticking their nose in.
I'd rather be able to combine it with my other data in whatever ways I see fit.
I'd rather be able to back it up in whatever way I see fit.
I'd rather be able to open it on a device that doesn't have a backdoor provided by the US government.
Because it's not microsoft's or qualcomm's data, it's mine.
I actually don't disagree with you. As I mention in the article:
> I cannot say how much freedom it will take. Arguably, some of the new features will be “good.” Massively reduced cheating in online multiplayer games is something many gamers could appreciate (unless they cheat). Being able to potentially play 4K Blu-ray Discs on your PC again would be convenient.
However, I'm more worried about the questions the increased deployment of this technology will bring, such as: will Linux users be doomed to a CAPTCHA onslaught for being on untrusted devices, or worse? Important questions that, unless raised, risk us just "going with the flow" until it is way too late.
> Massively reduced cheating in online multiplayer games is something many gamers could appreciate (unless they cheat).
I have a rooted Android phone and I had to spend effort spoofing attestation in order to even launch some of my games which don't even have multiplayer. Allow me to be the one to tell you that I do not appreciate it.
I don't even care enough to cheat at these games but if I wanted to cheat it would be merely an exercise of my computer freedom which nobody has any business denying.
IMHO I don't see a CAPTCHA onslaught happening - if only because at that point the CAPTCHAs will be practically useless at stopping bots. They will just ban untrusted devices.
The current landscape of CAPTCHA technology is pretty bleak. It's pretty easy to use ML to learn and solve the early first-gen CAPTCHAs that just used crossed-out words. Google reCAPTCHA relies primarily on user data, obfuscation, and browser fingerprinting to filter out bots, but that only works because of (possibly misplaced) trust in Google. It falls back to an image recognition challenge (which hCaptcha uses exclusively) if you don't have a good data profile - which can also be solved by automated means.
I don't see desktop Linux being fully untrusted off the Internet, if only because Google won't let it happen. They banned Windows workstations internally over a decade ago and they are institutionally reliant upon Linux and macOS. What will almost certainly happen is that Linux will be relegated to forwarding attestation responses between Pluton, some annoying blob in Google Chrome, and any web service that does not want to be flooded with bots in our new hellscape of post-scarcity automation.
Another "doomsday scenario" is that a distinct market for FOSS hardware will arise, albeit much shittier hardware than what everyone else uses. Those users will become fringe and progressively isolated.
Unfortunately, it does seem likely that many services will require that your machine run a kernel/web browser signed by an entity they trust before they give you access to what they consider sensitive data. That will suck for those of us who want to build our own kernels/web browsers and use that software to interact with sensitive data from large corporations, but that's their choice to make (IMHO). And it's my choice not to use their service.
This crap hasn't worked and it will never work. Console vendors have built much stronger things than remote attestation but, you know, one thing stays true: if you sign crappy vulnerable code, you are just signing any code.
No, that would be horrible for data portability, which is the reason many hospitals are locked into shitty EPR systems. If you are going to design a way to transfer data across healthcare providers, it better be as easy as sending a fax. Sheesh.
You'd only be verifying the machine you sent the data to was up to date, but that is likely to be a file server or router of some sort. You'd need to validate the entire network that will touch the data.
While it might be theoretically possible to assert the whole network is up to date, the hospitals will definitely fail that check. There's all sorts of equipment the hospitals can't update for various reasons, such as aging radiology machines.
>Hardware-based attestation of the running software is an important security feature, especially in a world where data leaks and identity theft are rampant. Let's say I'm a healthcare provider, and I'm about to send sensitive medical data to a third party vendor. Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?
I'd prefer it to not be run on a computer which has already been compromised with a UEFI rootkit, which is what trusted computing has gotten us so far.
I don't think the attestation mechanism is wrong, but the ability to perform remote attestation is going to be abused to lock in consumers (even if in some circumstances there's improved security using it).
> Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?
Yeah, because transferring that data into another machine is an impossible task.
Admittedly it is a simplified scenario for the sake of argument. You'd need to have a full attestation of everything that controls access to the data (including ACLs) to get much of a guarantee.
But in a future world it's not hard to imagine the vendor software running in some sort of SGX-like environment that is very difficult to manually extract the data from.
It's not just a war on general purpose computing, it's not just the mainframe ascendant: the mainframe seems intent on taking over & coopting every computer it can.
Google's anti-rooting regime (SafetyNet) has been painful to experience. I'm not sure what's next with the new Play Integrity API but it's hard to have hope users will see wins here or anywhere.
Remote attestation or not, "Software freedom" fighters should understand that things happen based on some user base need. Somebody needed this and they added it, whoever needs it doesn't care if they can't run linux on it. If the user cares about running anything else on the hardware, they will add a way to disable the feature. it is all about the user need.
if you are a secondary priority user on some hardware, the way to fix it is to focus on becoming important enough to be prioritized instead of fearing some technology will limit things.
> "Software freedom" fighters should understand that things happen based on some user base need.
I wish that were true. However, I think the movie Tron (1982) sums this up very nicely.
From the movie Tron:
> Dr. Walter Gibbs: That MCP, that's half our problem right there.
> Ed Dillinger: The MCP is the most efficient way of handling what we do! I can't sit here and worry about every little user request that comes in!
> Dr. Walter Gibbs: User requests are what computers are for!
> Ed Dillinger: Doing our business is what computers are for.
We are now moving toward a world where all computers have an "MCP". No, it is not to solve user problems, it is to do the business of the corporations that designed it.
> Somebody needed this and they added it, whoever needs it doesn't care if they can't run linux on it.
Originally, the ones who "needed" features like this were the big content distributors. Without these features, it's too easy for normal people to extract content and give copies of it to their friends and family.
As a parallel development, another one who "needed" features like this is Microsoft, for a different reason. They were taking reputational damage from malware, and needed a way to prevent malware from running before their operating system kernel (malware loading after the operating system kernel could be contained by the normal security mechanisms like ACLs).
These two development threads had enough in common that they ended up merging together, and those who want to prevent copying content can now point to security as an excuse. And yes, neither of these two groups care if you can't run Linux on your own devices.
> if you are a secondary priority user on some hardware, the way to fix it is to focus on becoming important enough to be prioritized instead of fearing some technology will limit things.
I fully agree that this is our best defense. In fact, the only reason we can still run Linux on our desktops and notebooks is that, when SecureBoot was developed, Linux was already important enough. However, this could only happen because Linux had time to grow and become important enough (while being a "secondary priority user" of the hardware) before things started to become limited. Had SecureBoot come before Linux became important enough, running third party operating systems would not have been allowed, and Linux would not have had a chance to grow and gain importance.
Of course not. Things happen based on what investors and developers want. Users are very much secondary. They're a nuisance. If things did really happen based on some user-base need, would we have had Instagram or Facebook in their current form?
This. All these comments (and this article) worried that this is MS coming to take their Linux or whatever are missing that this is something their biggest customers want.
We need this in our corporate client device fleet to counter specific threats.
We need this in our servers for the same reason — we do remote attestation today for Linux servers in semi-trusted locations.
We’ve conveyed to our vendors that this is a desired capability in next-gen network equipment.
We’re not doing this to control data once it’s on an end-user’s computer. We’re doing it because we have a regulatory (and moral) obligation to protect the data that is entrusted to us.
We’re not Intel/AMD/NVIDIA/etc’s largest customer, but when we defer orders or shift vendor allocation it gets mentioned in their quarterly earnings reports. They tend to listen when we ask for features, and even more so when our peer companies (not to mention governments), who have similar data security requirements, ask for the same thing.
Cloud and Business products is what, ~2/3rds of Microsoft’s revenue at this point? This isn’t being driven by the MPAA or whoever looking for better ways to screw over consumers.
If owners of devices have ultimate control over the root key / credential that determines attestation, I don't think people care about that.
So in your case, for devices you buy, you set up your corporate TPM key as the root owner, and then you send the device to employees, vendors, etc. The ownership chain is clear and you can send attestation requests. The corp is the owner of the device, and that is fairly obvious.
The issue is when people and corps buy devices, they do not have effective root. Microsoft, Apple, Google, etc. have the TPM root key, and you as a corporation actually do not have root yourself. They can force you to do things you don't want to do. It makes you more vulnerable, because if it is in MSFT's interest (or they are coerced by the state to do so clandestinely) a lot of threats can happen, and you don't even need an 0day to do so!
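To sketch the difference (Python, purely illustrative, with placeholder signature helpers): the only thing that changes between the two worlds is whose key sits at the root of the trust store.

    from typing import Callable, Dict

    class FleetAttestationPolicy:
        """Accept a device's attestation key only if it is endorsed by a root
        *we* chose. Who gets to be `root_key` is the whole fight: the fleet
        owner's own enrollment key, or a vendor's baked-in one."""

        def __init__(self, root_key: bytes,
                     verify_endorsement: Callable[[bytes, bytes, bytes], bool]):
            self.root_key = root_key                      # ours, if we truly own the fleet
            self.verify_endorsement = verify_endorsement  # placeholder signature check
            self.enrolled: Dict[str, bytes] = {}          # device_id -> attestation key

        def enroll(self, device_id: str, device_ak: bytes, endorsement: bytes):
            """At provisioning time, the fleet owner signs each device's
            attestation key with the root it controls."""
            if not self.verify_endorsement(self.root_key, device_ak, endorsement):
                raise ValueError("device key not endorsed by OUR root; rejecting")
            self.enrolled[device_id] = device_ak

        def key_for(self, device_id: str) -> bytes:
            """Later attestation responses are checked against keys that chain to
            the owner's root - not to Microsoft's, Apple's, or Google's."""
            return self.enrolled[device_id]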
If it starts becoming status quo, the freedom to do the things you need to your devices starts going away.
> We need this in our corporate client device fleet to counter specific threats
Can you please expand on what you verify via remote attestation and against which attack vectors this protects you?
Does this protect you against the usual attack vectors of your employees logging in on phishing sites, downloading malware, running office macros etc?
Stealing your data usually does not need any root/kernel access.
>We’re not doing this to control data once it’s on an end-user’s computer. We’re doing it because we have a regulatory (and moral) obligation to protect the data that is entrusted to us.
>Cloud and Business products is what, ~2/3rds of Microsoft’s revenue at this point? This isn’t being driven by the MPAA or whoever looking for better ways to screw over consumers.
Except... Yes it is. When your ur-business case was "do computation on someone else's computer, but ensure the operator cannot have full transparent access to their own computer's operational details", you are in the end casting the first stone. Just because I don't have an LLC, or Inc., or other legal fiction tied to my name doesn't mean I'm not bound by the same moral imperatives you claim to be, but more importantly, I am not willing to sell everyone else's computational freedom up the river for a pile of quick bucks.
Get your collective heads out of your arses. Get back out in the sun. This nonsense is taking every last bit of potential computing ever had and ripping it out of the hands of the lay consumer unless they dance the blessed dance of upstream.
You do not know best. You think you do. You've gotten where you are without that which you seek to create, and once created, that which you make can and will never be taken back. It creates and magnifies too much power asymmetry.
My god, have you never really stopped to think through the ethical implications? To really walk down the garden path?
> This isn’t being driven by the MPAA or whoever looking for better ways to screw over consumers.
Then they should prove it. I'm sure they have lots of expensive lobbyists under their employ, have them go to the government and tell the politicians the computer industry needs regulation to make it illegal to screw over users by depriving them of their computer freedom. If effective rules make it into law, I will trust their intentions.
"Somebody" means billion dollar corporations that already have way too much power over people. Their ability to want and actually realize this bleak attestation future needs to be regulated out of existence.
Somebody wanted this, but it might not be a user, and it might not fill any user need whatsoever. If I am the primary user of the hardware, my wants take precedence over anyone else, including the manufacturer and the software vendors.
It will make web scraping impossible. You will not be able to install any software that can do that, because it won't be let into app stores (i.e. it will never get the right keys and permissions, and websites won't even respond to it, as it won't have attestation).
If you somehow try to work around this and use your "attested" machine and user id to do it (because websites will require it and your script can't have it, but maybe it can run under your user account, for example) - monitoring systems will soon block your account for "suspicious activity", and it will be next to impossible to reinstate, because Google and Microsoft don't provide any human support, unless you are some 1mm+ influencer on Instagram and manage to start a ruckus on social media.
> Being able to potentially play 4K Blu-ray Discs on your PC again.
We cannot do this now because user-hostile vendors have locked the functionality away from us. I think this is a perfect microcosm for the whole tradeoff: lock away the rest of your userspace and we might let you watch the new Batman DVD you already bought.
This choice of words perfectly captures the arrogance of these copyright corporations. Who are they to dictate how our computers work just to maintain their irrelevant business model? They're the ones who should be playing by our rules, not the other way around. It makes me wish piracy was as bad as they make it out to be, to the point it kills them.
It feels like they've been trying to shove variations on the celestial jukebox down our throats forever. Supply chain integration is definitely how they could win. I can see it now, DIVX on the desktop.
Why do you assume artists cannot define how you consume media they created? Yes, they royally screwed up, but it is their product. They define how they sell it.
Just to be clear: I hate how movies are currently distributed.
Ask these questions every time you see the word "security" written. There is no such thing as bare "security".
- security for who?
- security from who?
- security to what ends?
Much of the time security is a closed system, fixed-sum game. My security means your loss of it.
In practice, both a good bike lock and remote attestation raise the bar against attacks significantly, without providing 100% security.
Hardware-based attestation of the running software is an important security feature, especially in a world where data leaks and identity theft are rampant. Let's say I'm a healthcare provider, and I'm about to send sensitive medical data to a third party vendor. Wouldn't you prefer that this data only be able to be decrypted by a computer that can prove to the world it booted a clean OS image with all the latest security patches installed?
If the vendor wants to install some self-built OS that they trust on their computer and not update it for 5 years, that's their business, but I may not want to trust their computer to have access to my personal data.
Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines (or even their own machines that may have been compromised). This is useful for more than just DRM.
No.
Contrarily unpopular opinion: You cannot own data except what resides on your own property. Once you give someone a copy, it is theirs to do with as they wish. They may tell you what they will and will not do, but it is entirely on you to trust them.
...and that's the peril of things like remote attestation and other "zero trust" crap. They replace the nuanced meaning of trust that holds society together (and has literally done so since the beginning of life) with absolutes enforced by an unrelenting machine controlled by some faceless bureaucracy which is also partly under the command of the government. There should already be enough dystopian sci-fi to convince everyone why that is a really bad idea.
We've already seen shades of this in banking. After chips were added to credit cards, people started having their chargebacks denied because "our records show the card was physically present" (even if the charge originated in another country)
How long until companies try to deny responsibility for data leaks because "our records show Windows was fully up-to-date and secure"
Completely agree. These outdated notions of information ownership are destroying free computing as we know it. Everything "hacker" stands for is completely antithetical to such notions.
Good luck getting your x86-64 windows kernel + chrome JavaScript exploit chain to run on my big endian arm 64 running Linux and Firefox.
(Also, the existence of competition like that regularly forces all the alternatives to improve.)
I trust myself more than I trust anyone or anything else. It's as simple as that. I don't even slightly trust Microsoft, Google, or Apple.
Your logic is built on an invalid premise that these companies can, in fact, be trusted.
> Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines (or even their own machines that may have been compromised).
This is exactly what I want to avoid. It's my device. It should only ever serve me, not anyone else, including its manufacturer and/or OS developer. It should not execute a single instruction that isn't in service of helping me achieve something.
Also, the concept of ownership can simply not be applied to something that does not obey the physical conservation law, i.e. can be copied perfectly and indefinitely.
It's totally fine if it's used to empower and protect us, normal people. If we can use this with our own keys to cryptographically prove that our own systems haven't been tampered with, it's not evil, it's amazing technology that empowers us.
What we really don't need is billion dollar corporations using cryptography to enforce their rule over their little extra-legal digital fiefdoms where they own users and sell access to them to other corporations or create artificial scarcity out of infinite bits. Such things should be straight up illegal. I don't care how much money it costs them, they shouldn't be able to do it.
The problem is we have chip makers like Intel and AMD catering to the billion dollar corporation's use case instead of ours. They come up with technology like IME and SGX. They sell us chips that are essentially factory-pwned by mega-corps. My own computer will not obey me if it's not in some copyright owner's interest to do so and I can't override their control due to their own cryptographic roadblocks. Putting up these roadblocks to user freedom should be illegal.
The governments know this all too well; that's why they've been trying to ban cryptography, and it was (and I believe still is in many cases) classified as a munition.
No, because this still doesn't mean my data is secure. A human can still go into the third party vendor's system and see my data, and if that human wants to steal it, they can. No amount of remote attestation will prevent that.
> Remote attestation gives more control to the owners of data to dictate how that data is processed on third-party machines
Oh, really? So every time my health care provider wants to send my data to a third party, a remote attestation confirmation box will pop up on my phone so I can say yes or no, or ask more questions about the third party vendor's computers?
Ultimately the problem here is trust, and trust is a social problem, and as the saying goes, you can't use technology to solve a social problem. But you sure can pretend to be using technology to "solve" a problem in order to get people to give up more and more control over their devices.
Entities (ab)using remote attestation in order of 'screws over those below them':
Government > Cyber criminal groups > Large organizations > Normal people.
Do you want to live in a world where a large corp can dictate which $VERSION of $APPROVED_SOFTWARE you should be running? I think fundamentally it's just not the direction we should be going. I don't actually doubt that proper remote attestation eventually would be possible, but before then it will be possible to bypass it in countless ways. Probably eventually you'd end up with only a single software stack, assumed to be flawlessly secure.
I think, luckily, this will severely limit the usability of technology that works this way. Developing for this stack will be a pain, and the machine will have all sorts of super annoying limitations: can't use that display, the driver is not vetted; can't use that USB webcam, it might have DMA; etc. That will hopefully harm the uptake of such technologies.
As is so often the case in tech, remote attestation in your scenario is a technical fix for a social problem. If the problem is sharing sensitive data with institutions you don't trust, then you need to build that trust, or transform the institutions so that they can be trusted. Transparency, laws, oversight, that type of stuff.
Who needs espionage or lobbying when you have an undetectable root shell on every computer in the country?
If vendors were plain about it, "attestation" wouldn't be a big deal: you do not own the devices, we do, and you lease them from us, maybe for a one-time fee.
But companies know it won't actually fly if you're plain about it, ESPECIALLY with large corporations and governments, who will, for multiple reasons, outright refuse to buy your services or equipment for many key uses if they are not the ultimate controllers of the machines.
I understand the mechanics in a "lies to children" way but who exactly is attesting what? Let's face it: MS isn't going to compensate me for a perceived flaw in ... why am I even finishing this sentence?
I recently bought some TPM 2.0 boards for my work VMware hosts so I could switch on secure boot and "attestation" for the OS. They are R630s which have a TPM 1.2 built in but a 2.0 jobbie costs about £16.
I've ticked a box or three on a sheet but I'm not too sure I have significantly enhanced the security of my VMware cluster.
Yes, dear Windows, you're running on a dual-core Xeon Gold 6326 with i440BX chipset. Don't ask how this is possible, just trust me...
This is a pretty bad example. The attack vector is rarely, if ever, the technical way the encrypted file is received or where it is decrypted. The attack vector is what happens after it's decrypted. You've given an encrypted file to a computer you've confirmed knows how to decrypt it "securely" (whatever that means). And after that, the clean OS image with all the latest security patches still allows, by design, the decrypted data to be used (read, manipulated, put to whatever purpose you sent it to them for in the first place) and sent someplace else or copied to removable media.
Let's say I'd like mandatory disclosure on shenanigans like that, so I can avoid this healthcare provider.
Quick edit to answer my own question: In my home state, paper prescriptions are only legal in a few situations (if it's for an animal, glasses, justifiable emergencies). However, in some parts of the country they're still possible. Even if I had a choice, I prefer the convenience of sending the data digitally: once you actually fill the paper prescription, CVS or whoever is still gonna be able to glean sensitive medical info, so you're just delaying the inevitable.
Good luck finding a provider that doesn't ship your sensitive medical data out to an EMR company though.
The only reason such hardware is secure is because the resources required to hack it are large.
Basically, a sane system would be: two parties exchange their own TPM keys which they generated on device themselves. They agree to a common set of measurements they will use with their TPMs to determine if they believe the systems are running normally. They then exchange data.
What's happening instead: a large company uses its market position to bake in its own security keys, which the user can't access or change. They then use their market position to demand your system be configured a specific way that they control. Everyone else defers to them because they're a big player and manufacturing TPMs is complicated. They have full control of the process.
The essential difference is that rather than two individuals establishing trust and agreeing to protocols for it, secured with the aid of technology, one larger party seizes control by coercion, pretends it'll never do wrong, and allows people to "trust" each other only as mediated by its own definition. Trust between individuals ceases to exist, because it's trust provided you're not betrayed by the middle-man.
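To make the "sane system" above concrete, here's a minimal sketch of that flow, assuming a toy PCR-extend model and Ed25519 keys standing in for real TPM attestation keys; all names, measurement values, and the quote format are illustrative assumptions, not an actual TPM API:

```python
import hashlib
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def extend(pcr: bytes, event: bytes) -> bytes:
    """Model a PCR extend: new PCR = H(old PCR || H(event))."""
    return hashlib.sha256(pcr + hashlib.sha256(event).digest()).digest()


def measure_boot_chain(events: list[bytes]) -> bytes:
    """Fold a list of boot events into a single PCR-style digest."""
    pcr = b"\x00" * 32
    for event in events:
        pcr = extend(pcr, event)
    return pcr


def quote(signing_key: Ed25519PrivateKey, pcr: bytes, nonce: bytes) -> bytes:
    """'Quote' the current measurement: sign (nonce || PCR) with the device key."""
    return signing_key.sign(nonce + pcr)


def verify_quote(peer_pub: Ed25519PublicKey, expected_pcr: bytes,
                 nonce: bytes, signature: bytes) -> bool:
    """Accept only if the signature checks out AND the PCR matches the agreed value."""
    try:
        peer_pub.verify(signature, nonce + expected_pcr)
        return True
    except InvalidSignature:
        return False


# Step 1: out of band, Alice and Bob exchange the public halves of keys they
# generated themselves, and agree on the boot chain they consider "normal".
alice_key = Ed25519PrivateKey.generate()
agreed_boot_chain = [b"firmware-v1.2", b"bootloader-v3", b"kernel-6.8"]
golden_pcr = measure_boot_chain(agreed_boot_chain)

# Step 2: Bob challenges Alice with a fresh nonce; Alice quotes her current PCR.
nonce = os.urandom(16)
alice_pcr = measure_boot_chain(agreed_boot_chain)  # what Alice actually booted
sig = quote(alice_key, alice_pcr, nonce)

# Step 3: Bob verifies against the key *he* chose to trust and the measurements
# *both* parties agreed on -- no vendor in the loop.
print(verify_quote(alice_key.public_key(), golden_pcr, nonce, sig))  # True
```

The point of the sketch is only who holds the keys and who picks the "golden" measurements: in this version both are chosen by the two parties themselves.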
Weirdly enough, this is actually a big god damn problem if you actually work for any organization that's doing government or security work, because the processes of those places tend to hinge on whether or not they believe large corporate providers doing things like this are actually doing them well enough, or can be trusted enough, to be a part of the process. So even if you're notionally part of "the system", it doesn't actually make anything easier: in an ideal world, open-source security components would enable COTS systems to be used by defense and government departments with surety, because they'd be built from an end-user trust and empowerment perspective.
So even the notional beneficiaries tend to have problems, because a security assessment ends up at "well, we just have to trust Microsoft not to screw up", and while the head of the NSA might be able to call them up and get access, a random state-level government department trying to handle healthcare or traffic data or whatever cannot.
I'd rather be able to access it without google or microsoft sticking their nose in.
I'd rather be able to combine it with my other data in whatever ways I see fit.
I'd rather be able to back it up in whatever way I see fit.
I'd rather be able to open it on a device that doesn't have a backdoor provided by the US government.
Because it's not Microsoft's or Qualcomm's data, it's mine.
> I cannot say how much freedom it will take. Arguably, some of the new features will be “good.” Massively reduced cheating in online multiplayer games is something many gamers could appreciate (unless they cheat). Being able to potentially play 4K Blu-ray Discs on your PC again would be convenient.
However, I'm more worried about the questions that increased deployment of the technology will bring, such as: will Linux users be doomed to a CAPTCHA onslaught for being on untrusted devices, or worse? These are important questions that, unless raised, risk us just "going with the flow" until it is way too late.
I have a rooted Android phone and I had to spend effort spoofing attestation in order to even launch some of my games which don't even have multiplayer. Allow me to be the one to tell you that I do not appreciate it.
I don't even care enough to cheat at these games but if I wanted to cheat it would be merely an exercise of my computer freedom which nobody has any business denying.
The current landscape of CAPTCHA technology is pretty bleak. It's pretty easy to use ML to learn and solve the early first-gen CAPTCHAs that just used crossed-out words. Google reCAPTCHA relies primarily on user data, obfuscation, and browser fingerprinting to filter out bots, but that only works because of (possibly misplaced) trust in Google. It falls back to an image recognition challenge (which hCaptcha uses exclusively) if you don't have a good data profile - which can also be solved by automated means.
I don't see desktop Linux being cut off from the Internet as fully untrusted, if only because Google won't let it happen. They banned Windows workstations internally over a decade ago and they are institutionally reliant upon Linux and macOS. What will almost certainly happen is that Linux will be relegated to forwarding attestation responses between Pluton, some annoying blob in Google Chrome, and any web service that does not want to be flooded with bots in our new hellscape of post-scarcity automation.
While it might be theoretically possible to assert the whole network is up to date, the hospitals will definitely fail that check. There's all sorts of equipment the hospitals can't update for various reasons, such as aging radiology machines.
I'd prefer it not be run on a computer that has already been compromised with a UEFI rootkit, which is what trusted computing has gotten us so far.
Yeah, because transferring that data into another machine is an impossible task.
That's the stupidest argument I heard today...
But in a future world it's not hard to imagine the vendor software running in some sort of SGX-like environment that is very difficult to manually extract the data from.
Google's anti-rooting regime (SafetyNet) has been painful to experience. I'm not sure what's next with the new Play Integrity API but it's hard to have hope users will see wins here or anywhere.
if you are a secondary priority user on some hardware, the way to fix it is to focus on becoming important enough to be prioritized instead of fearing some technology will limit things.
I wish that were true. However, I think the movie Tron (1982) sums this up very nicely.
From the movie Tron:
> Dr. Walter Gibbs: That MCP, that's half our problem right there.
> Ed Dillinger: The MCP is the most efficient way of handling what we do! I can't sit here and worry about every little user request that comes in!
> Dr. Walter Gibbs: User requests are what computers are for!
> Ed Dillinger: Doing our business is what computers are for.
We are now moving toward a world where all computers have an "MCP". No, it is not there to solve user problems; it is there to do the business of the corporations that designed it.
Originally, the ones who "needed" features like this were the big content distributors. Without these features, it's too easy for normal people to extract content and give copies of it to their friends and family.
As a parallel development, another one who "needed" features like this is Microsoft, for a different reason. They were taking reputational damage from malware, and needed a way to prevent malware from running before their operating system kernel (malware loading after the operating system kernel could be contained by the normal security mechanisms like ACLs).
These two development threads had enough in common that they ended up merging together, and those who want to prevent copying content can now point to security as an excuse. And yes, neither of these two groups care if you can't run Linux on your own devices.
> if you are a secondary priority user on some hardware, the way to fix it is to focus on becoming important enough to be prioritized instead of fearing some technology will limit things.
I fully agree that this is our best defense. In fact, the only reason we can still run Linux on our desktops and notebooks is that, when SecureBoot was developed, Linux was already important enough. However, this could only happen because Linux had time to grow and become important enough (while being a "secondary priority user" of the hardware) before things started to become limited. Had SecureBoot come before Linux became important enough, running third party operating systems would not have been allowed, and Linux would not have had a chance to grow and gain importance.
If this were true, how would the malware ever get itself to the point where it is loaded before the kernel is?
Of course not. Things happen based on what investors and developers want. Users are very much secondary. They're a nuisance. If things did really happen based on some user-base need, would we have had Instagram or Facebook in their current form?
We need this in our corporate client device fleet to counter specific threats. We need this in our servers for the same reason — we do remote attestation today for Linux servers in semi-trusted locations. We’ve conveyed to our vendors that this is a desired capability in next-gen network equipment.
We’re not doing this to control data once it’s on an end-user’s computer. We’re doing it because we have a regulatory (and moral) obligation to protect the data that is entrusted to us.
We're not Intel/AMD/NVIDIA/etc.'s largest customer, but when we defer orders or shift vendor allocation it gets mentioned in their quarterly earnings reports. They tend to listen when we ask for features, and when our peer companies (not to mention governments) ask for the same thing, because we have similar data security requirements.
Cloud and Business products is what, ~2/3rds of Microsoft’s revenue at this point? This isn’t being driven by the MPAA or whoever looking for better ways to screw over consumers.
So in your case, for devices you buy, you set up your corporate TPM key as the root owner, and then you send the device to employees, vendors, etc. The ownership chain is clear and you can send attestation requests. The corp is the owner of the device, and that is fairly obvious.
The issue is that when people and corps buy devices, they do not have effective root. Microsoft, Apple, Google, etc. hold the TPM root key, and you as a corporation actually do not have root yourself. They can force you to do things you don't want to do. It makes you more vulnerable, because if it is in MSFT's interest (or they are coerced by the state to do so clandestinely), a lot of threats can happen, and no 0day is even needed to do so!
If it starts becoming status quo, the freedom to do the things you need to your devices starts going away.
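To illustrate the ownership-chain point from the exchange above: a minimal sketch, assuming Ed25519 keys standing in for real endorsement/attestation keys (the names and the "endorsement" format are illustrative, not an actual TPM provisioning flow), of a verifier that only accepts device keys vouched for by a root it enrolled itself, rather than by a vendor's baked-in root:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def pub_bytes(pub: Ed25519PublicKey) -> bytes:
    """Serialize a public key to raw bytes so it can be signed over."""
    return pub.public_bytes(Encoding.Raw, PublicFormat.Raw)


def endorse(root: Ed25519PrivateKey, device_pub: Ed25519PublicKey) -> bytes:
    """A root of trust vouches for a device attestation key by signing it."""
    return root.sign(pub_bytes(device_pub))


def accepted_by(trust_roots: list[Ed25519PublicKey],
                device_pub: Ed25519PublicKey, endorsement: bytes) -> bool:
    """The verifier accepts a device key only if a root *it* chose signed it."""
    for root in trust_roots:
        try:
            root.verify(endorsement, pub_bytes(device_pub))
            return True
        except InvalidSignature:
            continue
    return False


# The corp enrolls devices under its own root, so it decides what to trust.
corp_root = Ed25519PrivateKey.generate()
vendor_root = Ed25519PrivateKey.generate()
device_key = Ed25519PrivateKey.generate()

corp_endorsement = endorse(corp_root, device_key.public_key())
vendor_endorsement = endorse(vendor_root, device_key.public_key())

corp_trust_store = [corp_root.public_key()]  # no vendor key required
print(accepted_by(corp_trust_store, device_key.public_key(), corp_endorsement))    # True
print(accepted_by(corp_trust_store, device_key.public_key(), vendor_endorsement))  # False
```

In the "corp has effective root" case the trust store is the corp's own key; in the case the comment above complains about, the only root anyone can verify against is the one the vendor baked in.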
Can you please expand on what you verify via remote attestation and against which attack vectors this protects you?
Does this protect you against the usual attack vectors of your employees logging in on phishing sites, downloading malware, running office macros etc? Stealing your data usually does not need any root/kernel access.
>Cloud and Business products is what, ~2/3rds of Microsoft’s revenue at this point? This isn’t being driven by the MPAA or whoever looking for better ways to screw over consumers.
Except... yes, it is. When your ur-business case was "do computation on someone else's computer, but ensure the operator cannot have full transparent access to their own computer's operational details", you are in the end casting the first stone. Just because I don't have an LLC, Inc., or other legal fiction tied to my name doesn't mean I'm not bound by the same moral imperatives you claim to be; more importantly, I am not willing to sell everyone else's computational freedom up the river for a pile of quick bucks.
Get your collective heads out of your arses. Get back out in the sun. This nonsense is taking every last bit of potential computing ever had and ripping it out of the hands of the lay consumer, unless they dance the blessed dance of upstream.
You do not know best. You think you do. You've gotten where you are without that which you seek to create, and once created, that which you make can and will never be taken back. It creates and magnifies too much power asymmetry.
My god, have you never really stopped to think through the ethical implications? To really walk down the garden path?
The same insane regulations that were probably the result of corporate lobbying are now the excuse for these hostile features? WTF?
Then they should prove it. I'm sure they have lots of expensive lobbyists in their employ; have them go to the government and tell the politicians the computer industry needs regulation making it illegal to screw over users by depriving them of their computer freedom. If effective rules make it into law, I will trust their intentions.
If you somehow try to work around it and use your "attested" machine and user ID to do this (because websites will require it and your script can't have it, but maybe it can run under your user account, for example), monitoring systems will soon block your account for "suspicious activity", and it will be next to impossible to reinstate, because Google and Microsoft don't provide any human support unless you are some 1MM+ influencer on Instagram and manage to start a ruckus on social media.
The outlook is quite bleak :-(
We cannot do this now because user-hostile vendors have locked the functionality away from us. I think this is a perfect microcosm for the whole tradeoff: lock away the rest of your userspace and we might let you watch the new Batman DVD you already bought.
This choice of words perfectly captures the arrogance of these copyright corporations. Who are they to dictate how our computers work just to maintain their irrelevant business model? They're the ones who should be playing by our rules, not the other way around. It makes me wish piracy were as bad as they make it out to be, to the point where it kills them.
Just to be clear: I hate how movies are currently distributed.