forapurpose · 8 years ago
I'm starting to consider whether this reflects a larger failure in the industry/community: Traditionally, many of us (I'd say almost all) have been focused on security at the OS level and above. We've assumed that the processor and related hardware are safe and reliable.

However, below the OS level much new technology has been introduced that has greatly increased the attack surface, from processor performance enhancements such as branch prediction to subsystems such as Intel ME. I almost feel like Intel broke a social compact that their products would be predictable, safe commodities on which I can build my systems. But did those good old days ever really exist? And of course, Intel naturally doesn't want their products to be commodities, which is likely why they introduced these new features.

Focusing on OS and application security may be living in a fantasy world, one I hesitate to give up because the reality is much more complex. What good are OpenBSD's or Chrome's security efforts, for example, if the processor on which they run is insecure and if there are insecure out-of-band management subsystems? Why does an attacker need to worry about the OS?

(Part of the answer is that securing the application and OS makes attacks more expensive; at least we can reduce drive-by JavaScript exploits. But now the OS and application are a smaller part of the security puzzle, and not at all sufficient.)

gibson99 · 8 years ago
The issue of hardware security really has been ignored too long in favor of the quest for performance enhancement. Perhaps there is a chance now for markets to encourage production of simplified processors and instruction sets that are designed with the same philosophy as OpenBSD. I would imagine companies and governments around the globe have developed a new interest in secure IT systems, now that news about major exploits seems to turn up every few months.
AnIdiotOnTheNet · 8 years ago
It reflects the industry's priorities: performance and productivity. That's all. You can make the argument that these priorities are wrong, but we've known such attacks were theoretically possible since the vulnerability was introduced.

Even now I'm certain there are many companies not even bothering to patch against Spectre and Meltdown because they've deemed the performance degradation to be worse than the risk, and that's a perfectly rational decision to make.

dredmorbius · 8 years ago
I'd heard Jon Callas of PGP talking about concerns over hardware-level security -- CPU and baseboard systems / BMCs -- in the mid-noughties. So this stuff has been on at least some people's radar. Not particularly widespread, perhaps.

Theo de Raadt turned up with a ~2005 post specifically calling out Intel as well, though not necessarily over speculative execution, as far as I'm aware.

lclarkmichalek · 8 years ago
Really not the end. The existence of issues that cannot be addressed via 'langsec' does not imply that we should give up on 'langsec'. There will be more security issues due to buffer overflows than there will be CPU bugs this year. More importantly, there will be orders of magnitude more users with data compromised via buffer overflows, compared to CPU bugs.
alxlaz · 8 years ago
The author does not seem to mean "the end of langsec" as in "everyone will give up on it", but rather the end of a period characterized, and not incorrectly, by the opinion that a safe programming language guarantees the absence of unintentional unsafe behaviour. In short, that things which, within this "langsec framework", one could prove to be impossible, turn out to be possible in practice; in the author's own words:

"The basis of language security is starting from a programming language with a well-defined, easy-to-understand semantics. From there you can prove (formally or informally) interesting security properties about particular programs. [..] But the Spectre and Meltdown attacks have seriously set back this endeavor. One manifestation of the Spectre vulnerability is that code running in a process can now read the entirety of its address space, bypassing invariants of the language in which it is written, even if it is written in a "safe" language. [...] Mathematically, in terms of the semantics of e.g. JavaScript, these attacks should not be possible. But practically, they work. "

This is not really news. The limits of formal methods were, in my opinion, well-understood, if often exaggerated by naysayers, brogrammers or simply programmers without much familiarity with them. Intuitively, it is not too difficult to grasp the idea that formal proofs are exactly as solid as the hardware which will run the program about which one is reasoning, and my impression is that it was well-grasped, if begrudgingly, by the community.

(This is akin to the well-known mantra that no end-to-end encryption scheme is invulnerable to someone looking over your shoulder and noting what keys you type; similarly, no software-only process isolation scheme is impervious to the hardware looking over its shoulder and "writing down" bytes someplace where everyone can access them)
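
To make the manifestation quoted above concrete: the usual illustration of the v1 "bounds check bypass" is a tiny snippet whose in-bounds behaviour is provable under the language's sequential semantics, yet which can still read out of bounds speculatively. A minimal sketch in C (array names and sizes are purely illustrative):

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    size_t array1_size = 16;
    uint8_t array2[256 * 4096];   /* probe array: one page per possible byte value */
    volatile uint8_t sink;

    void victim(size_t x) {
        if (x < array1_size) {            /* provably keeps array1[x] in bounds... */
            uint8_t secret = array1[x];   /* ...yet this load can run speculatively
                                             with an out-of-bounds x */
            sink = array2[secret * 4096]; /* leaves a cache footprint that can be
                                             timed later to recover 'secret' */
        }
    }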

allenz · 8 years ago
> the end of a period characterized, and not incorrectly, by the opinion that a safe programming language guarantees...

I don't think that there was ever such a period. Provable correctness always had the caveat of highly idealized assumptions, and we've known from the start that hardware vulnerabilities such as timing attacks, rowhammer, gamma rays, and power loss can undermine those assumptions.

UncleMeat · 8 years ago
Wait. Who has ever said this? We've had soundiness as a fundamental concept for years. I'm not aware of a single static analysis tool that is truly sound. Everything is always "sound with respect to X". The academics surely haven't been the ones making a claim that correct programs truly are immune to any and all problems. If we ended this period, we did it ten years ago.
sideshowb · 8 years ago
Well, how about a provably secure hardware description language then?
carussell · 8 years ago
It's funny that you mention buffer overflows. An interesting thing to point out—and I'm not sure if anyone has; I've looked before—is that if you follow Meredith Patterson's line of reasoning about langsec for protocols in particular, you end up at the conclusion that C-style, NUL-terminated strings are the right way to do things.

This doesn't mean there aren't other defects in the apparatus where C exploits thrive. It's just that (if you adhere to the langsec school of thought) you are required to confront the conclusion that it's been pinned on the wrong thing.

EtDybNuvCu · 8 years ago
This is naïve first-order thinking; we can justify length-prefixed strings on a more abstract basis. First, note that in a trusted kernel, we'll prefer to define our string-handling functions only once and then explicitly hand-write their correctness proofs. This means that, on a higher level, we do not have to prefer any one particular string abstraction just because of the white-box implementation details; we'll call strcat() either way.

Now, note that we can store any character, including NUL, in a length-prefixed string. However, we cannot do this in NUL-terminated strings. This is because stray NULs will change the apparent length of the string. More directly, what a NUL-terminated string is, whether it's "string of length 5" or "string of length 255", is determined entirely by the string's data. In contrast, a length-prefixed string contains two fields, and only one of them controls the length of the string.

Now, imagine that an attacker comes to totally control the contents of our string. In the NUL-terminated scenario, they also control the length of the string! This does not occur with length-prefixed strings. There is strictly less capability offered to the attacker.
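
To put that last point in code, a minimal sketch in C (the struct and names are made up for illustration): if an attacker controls the bytes and can embed a NUL, they also pick the NUL-terminated string's length, but not the length-prefixed one's:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical length-prefixed string: the length lives outside the data. */
    struct lpstr {
        size_t len;
        unsigned char data[16];
    };

    int main(void) {
        /* Attacker-controlled bytes with an embedded NUL. */
        unsigned char attacker[8] = { 'a', 'b', 'c', 0, 'd', 'e', 'f', 'g' };

        char c_str[9];
        memcpy(c_str, attacker, 8);
        c_str[8] = '\0';

        struct lpstr lp = { .len = 8 };
        memcpy(lp.data, attacker, 8);

        /* The embedded NUL silently changes the C string's apparent length... */
        printf("NUL-terminated length:  %zu\n", strlen(c_str));   /* prints 3 */
        /* ...but not the prefixed length, which the attacker's bytes don't touch. */
        printf("Length-prefixed length: %zu\n", lp.len);          /* prints 8 */
        return 0;
    }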

I know about the automaton-based approach to langsec, but this capability-based analysis is far more in line with the recent rise of weird machines.

richardwhiuk · 8 years ago
Why are C style buffers any more correct than Pascal strings?

Deleted Comment

nordsieck · 8 years ago
> Rich Hickey has this thing where he talks about "simple versus easy". Both of them sound good but for him, only "simple" is good whereas "easy" is bad.

I don't think I've ever heard anyone mischaracterize his talk [1] this badly.

The claim is actually that simplicity is a fundamental property of software, whereas ease of use is often dominated by the familiarity a user has with a particular set of tools.

[1] https://www.infoq.com/presentations/Simple-Made-Easy

spiralganglion · 8 years ago
Agreed, but I have seen a lot of people come away from the talk with an unfortunate disdain for ease. Ironically, in disentangling "simple" and "easy", Rich created a lot of confusion about the value of ease.
scott_s · 8 years ago
My personal take is that perhaps chipmakers will start to see different market pressures. Performance is the big one, and it's been around since the first microprocessor. Power became increasingly more important, particularly in the past decade, from both ends of the market. (Both mobile devices and supercomputers are very power conscious.)

Security may become a new market pressure. You will likely sacrifice performance to get it, as it will mean simpler cores, maybe in-order, and probably without speculative execution. But, with simpler cores, we can probably further increase hardware parallelism, which will only partially mitigate the loss in single-threaded performance. Some chips may even be more radically security conscious, and guarantee no shared caches between processes. Such chipmakers would be able to say: we can't say for certain that these chips are fully secure, but because they are simpler, with fewer attack vectors, we are far more confident that they are. Security-conscious chips may tend to be the ones that are internet-facing (your mobile device, cloud data centers), and faster, less security-conscious chips may only exist behind strict firewalls.

I bring this up in response to the submitted article because I find it unlikely that we will start to model processor insecurity at the language level. It ruptures too many levels of abstraction. I find it more likely that we will find ways to maintain those abstractions.

tzs · 8 years ago
> Security may become a new market pressure. You will likely sacrifice performance to get it, as it will mean simpler cores, maybe in-order, and probably without speculative execution.

Maybe we go from having CPU + GPU to having CPU + GPU + FPU, where FPU = "Fast Processing Unit".

The CPU in the CPU/GPU/FPU model becomes simpler. Any time we have to choose between performance and security we choose security.

The FPU goes the other way. It is for things where speed is critical and you either don't care if others on the machine can see your data, or you are willing to jump through a few hoops in your code to protect your secrets.

For most of what most people do on their computers most of the time, performance is fine without speculative execution or branch prediction and probably even with caches that are completely flushed on every context switch. (It will probably be fine to leave branch prediction in but just reset the history on every context switch).

The FPU memory system could be designed so that there is a way to designate part of FPU memory as containing secrets. Data from that memory is automatically flushed from cache whenever there is a context switch.

sliverstorm · 8 years ago
I believe you can make a process noncacheable today, and maybe even disable branch prediction. This would totally shut down Spectre and Meltdown. You can disable SMT, and there's a whole host of other things you can do to isolate your "secure" process on an existing chip. Nobody has done these things because they like performance.

> For most of what most people do on their computers most of the time, performance is fine without speculative execution or branch prediction

I think you underestimate the importance of branch prediction.

Narishma · 8 years ago
We already have that in ARM phone SoCs with big.LITTLE.
okreallywtf · 8 years ago
I'm less familiar with Spectre than Meltdown, but part of the issue to me seemed to be that everything hinged on the principle of memory isolation. To the point where all of physical memory was mapped in the kernel address space, which was in turn mapped into every user address space. It felt to me like having one great big lock and assuming it couldn't be broken.

I get that when you make the assumption that you cannot depend on memory isolation, a lot goes out the window, but could there be a more layered approach that lessens the damage done when some assumption like that is challenged, or is it appropriate to wait until that time to make changes?

The amount of damage possible when that assumption turns out to be false seems to warrant considering changing the way process memory is mapped. This is at a software level, but it's similar to what you were saying: when are speed and performance going to be sacrificed for preemptive security measures? I'm sure that they already are in some ways, since all security has a performance hit, but allocating processes and virtual memory is such a core feature of the kernel that I can imagine the performance hit could be significant (or was it just laziness to map all of physical memory into the kernel address space?)

tdullien · 8 years ago
I think this is too dark a post, but it shows a useful shock: Computer Science likes to live in proximity to pure mathematics, but it lives between EE and mathematics. And neglecting the EE side is dangerous - which not only Spectre showed, but which should have been obvious at the latest when Rowhammer hit.

There's actual physics happening, and we need to be aware of it.

If you want to prove something about code, you probably have to prove micro-op semantics upward from the Verilog; otherwise you're proving things against a possibly broken model of reality.

Second-order effects are complicated.

chias · 8 years ago
This is a bit of a nitpick, but when you say Computer Science I think you mean Software Engineering and/or Computer Engineering. These are very different fields, and Computer Science is by-and-large agnostic of any actual physics.

As an anecdote: my Computer Science PhD dissertation contained all of about 12 lines of pseudocode, and those were only there to provide a more direct description of an idea than I was able to give with several paragraphs of prose. Hardware / architecture / physics is entirely irrelevant to it -- while any implementation of the ideas therein should be aware of hardware / physics / etc., we are solidly entering the realm of engineering at that point.

Qwertious · 8 years ago
>There's actual physics happening, and we need to be aware of it.

More specifically, we're not running software on physics; software is physics. CPUs are simply physical structures designed so that they are extremely easy to model mathematically. Software is nothing more than runtime configuration of hardware.

zbentley · 8 years ago
> If you want to prove something about code, you probably have to prove microop semantics upward from verilog

What if your chip fab doesn't obey the HDL's instructions?

You're always "trusting trust" at some level. See also this horror story: https://www.teamten.com/lawrence/writings/coding-machines/.

qznc · 8 years ago
> The "abstractions" we manipulate are not, in point of fact, abstract. They are backed by real pieces of code, running on real machines, consuming real energy, and taking up real space. [...] What is possible is to temporarily set aside concern for some (or even all) of the laws of physics.

– Gregor Kiczales, 1992

ddellacosta · 8 years ago
Rowhammer I can understand, but how do you come to that conclusion--"neglecting the EE side is dangerous"--from analyzing Spectre? Does speculative execution rely somehow on physical effects? I can't find anything ( for example here: https://en.wikipedia.org/wiki/Spectre_(security_vulnerabilit... ) that suggests there is a physical component to this vulnerability.
mst · 8 years ago
I'd argue that using timing differences due to physical limitations of the hardware to exfiltrate data based on whether or not it's cached is very definitely 'relying on physical effects'.
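
For the curious, the measurement side of that is small. A rough flush+reload-style probe in C using x86 intrinsics (the threshold is machine-dependent and purely illustrative):

    #include <stdint.h>
    #include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

    /* Returns 1 if 'addr' appears to have been cached since the last flush,
     * judging purely by how long a single load takes. */
    static int probe_was_cached(const volatile uint8_t *addr,
                                uint64_t threshold_cycles) {
        unsigned int aux;
        _mm_mfence();                      /* order the timed load */
        uint64_t start = __rdtscp(&aux);
        (void)*addr;                       /* the timed load */
        uint64_t elapsed = __rdtscp(&aux) - start;
        _mm_clflush((const void *)addr);   /* leave the line cold for the next round */
        return elapsed < threshold_cycles;
    }
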
SomeHacker44 · 8 years ago
"There's actual physics happening, and we need to be aware of it." Yes, maybe abstractly. However, the Spectre flaw is well above the level of physics. There may be physics flaws as well (e.g., Rowhammer), but Spectre is an architectural flaw in the design of some microprocessors, in that it leaks information by e.g. cache timing attacks and non-program-state data (e.g., branch prediction tables). These can be redesigned above the physics layer.
mannykannot · 8 years ago
I can't speak for the author of that quote, but cache timing appears to be as much a physics phenomenon as is the behavior that allows rowhammer to work. I cannot see any point where the author implied that the solution can only be found in the physics layer.
perlgeek · 8 years ago
I don't see how this is fundamentally different from timing attacks and other side-channel attacks that have been well known before and, to the best of my knowledge, simply haven't been the focus of the "prove it correct" approach.

Whenever you want to prove something correct, you need to make assumptions about the execution model, and about what correctness means. Now "we" as an industry found a bug that makes the actual model differ from the assumed model, so we need to fix it.

The same is true when you can measure the power used by a microchip during some cryptographic operation, and infer the secret key from that -- even if the cryptographic operation has been proven correct, the definition of correctness likely didn't include this factor.
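
The classic toy example: both functions below are functionally "correct", but the first one's running time depends on how long the matching prefix is, which is exactly the kind of factor a correctness proof usually ignores. A sketch in C:

    #include <stddef.h>
    #include <stdint.h>

    /* Correct, but returns as soon as a byte differs: the running time
     * leaks how many leading bytes of the secret were guessed right. */
    int leaky_equal(const uint8_t *a, const uint8_t *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            if (a[i] != b[i])
                return 0;
        return 1;
    }

    /* Also correct, but touches every byte regardless, so the timing
     * does not depend on where the first difference occurs. */
    int constant_time_equal(const uint8_t *a, const uint8_t *b, size_t n) {
        uint8_t diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }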

KirinDave · 8 years ago
While langsec can't easily mitigate Spectre, because the processor is trying to hide where the performance comes from, it's worth noting that several new languages are working on ways to write code where you can actually assert, and have the compiler check, that the timing of the code you write is bounded and uniform.

It's very easy, I think, to throw up our hands and say, "Well gosh all this language stuff is useless because timing attacks are so scary!" But in reality, they're pretty well studied and many of them are actually pretty simple to understand even if they can be hard to recognize.

Both hardware AND software sides of our industry need to start taking correctness, both at compile and runtime, seriously. The days where we can shrug and say, "But that's too slow, don't run code you don't trust" are dead. We killed them by ceding the idea of hardware ownership to big CSPs. The days where we can say, "This is too complicated to do!" or "This doesn't deliver customer value!" are also going away; the threat of combination attacks easily overshadows any individual attack, and small vulnerabilities tend to multiply the total surface area into truly cataclysmic proportions.

But also gone is the day when we can say things like, "Just use Haskell or OCaml!" We've seen now what these environments offer. It's a great start and it's paved the way for a lot of important understanding, but even that is insufficient. Our next generation of programming environments needs to require less abstract category theory, needs to deliver more performant code, and needs to PROVE properties of code to the limit of runtime resolution. The hardware and OS sides of the equation need to do the same thing. And we as engineers need to learn these tools and their techniques inside and out; and we shouldn't be allowed to sell our work to the general public if we don't.

zbentley · 8 years ago
> several new languages are working on ways to write code where you can actually assert and have the compiler check that the timing of the code you write is bounded and uniform.

I'm interested in how that would avoid the halting problem. Let's say I write code, compile it, and run some "timing verifier" on it. That verifier either runs my code and verifies that its timing is correct on that run, or inspects the machine code against a known specification of the hardware I'm running it on right then and ensures all of the instructions obey my timing constraints. How would you check that the code's timing is bounded and uniform on subsequent executions? Or on other hardware? Or in the face of specifications that are incorrect regarding the timing characteristics of machine code instructions (CPU/assembly language specs are notoriously incomplete and errata-filled)?

I suspect something fundamental would have to be changed about computer design (e.g. a "CPU, report thine own circuit design in a guaranteed-accurate way") to make something like this possible, but am not sure what that would be, or if it's feasible.

UncleMeat · 8 years ago
It avoids the halting problem the same way literally all sound static analysis does. With false positives. The Java type checker will reject programs that will not have type errors at runtime. And a system that verifies timing assertions with SMT or whatever will reject some programs that will not fail the assertion.

The halting problem has never actually stopped static analysis tools. Static analysis tools that check timing assertions have been around for a very long time.

KirinDave · 8 years ago
The halting problem as a burden for this kind of code analysis is very overstated. What you do is instruct the compiler how to infer timing inductively over a language of all basic internal and I/O ops.
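
As a toy sketch of what "infer timing inductively" can look like (in C, with made-up cost numbers): give each primitive op a cost, add costs across sequences, and take the worst case across branches so the resulting bound is uniform:

    #include <stdint.h>
    #include <stddef.h>

    /* Toy expression language: primitive ops, sequencing, and branches. */
    typedef enum { OP_PRIM, OP_SEQ, OP_IF } OpKind;

    typedef struct Node {
        OpKind kind;
        uint64_t prim_cost;          /* cost of a primitive op (made up) */
        const struct Node *a, *b;    /* children for OP_SEQ / the two OP_IF arms */
    } Node;

    /* Inductive upper bound on cost: sequences add, branches take the
     * worst case, so the bound holds whichever way the branch goes. */
    static uint64_t cost_bound(const Node *n) {
        switch (n->kind) {
        case OP_PRIM: return n->prim_cost;
        case OP_SEQ:  return cost_bound(n->a) + cost_bound(n->b);
        case OP_IF: {
            uint64_t ca = cost_bound(n->a), cb = cost_bound(n->b);
            return 1 + (ca > cb ? ca : cb);   /* +1 for the test itself */
        }
        }
        return 0;
    }

    int main(void) {
        Node add    = { OP_PRIM, 3,  NULL,  NULL };
        Node load   = { OP_PRIM, 10, NULL,  NULL };
        Node branch = { OP_IF,   0,  &add,  &load };
        Node prog   = { OP_SEQ,  0,  &load, &branch };
        return (int)cost_bound(&prog);   /* 10 + (1 + max(3, 10)) = 21 */
    }

Loops would additionally need an explicit trip-count bound, and the per-op costs would come from a hardware model rather than constants pulled out of thin air.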

And of course it's 2018; nothing stops us from saying "your software needs training." We can go from a somewhat conservative bound to one that is informed by runtime metrics and very precise for a given deployment, and as long as we hold the system static for that measurement, it's pretty fair.

As for hardware verification, I agree it's a harder problem in some ways, but since there is no obfuscation of the important timing quantities, it also gets easier.

hacknat · 8 years ago
Everyone is assuming the author is giving up on langsec. Read carefully: he called Spectre/Meltdown a setback. I think he's making a subtler point, that the fundamentals of programming have become more of a pragmatic activity than a mathematical one, if you're being practical about your goals, that is. I'm currently on the Kubernetes multi-tenancy working group (which isn't really a working group yet), and it's really funny to see how much effort is going into securing containers while core bits like the CNI receive little attention. A wise security professional and ex-hacker said that he actually liked over-engineered security systems as a hacker, because they told him what not to focus on. Container security pretty good? Okay, then figure out how to do what you want without breaking out of the container (definitely possible in the case of Spectre/Meltdown).

There is a fundamental cognitive bias in our field to solve the technically challenging problems without realizing that there are practical vulnerabilities that are far more dangerous, but a lot more boring to solve (the most common way orgs are exploited is through a combination of social attacks and something really trivial).

I think the author is frustrated because he feels the interesting work is unimportant in comparison to the practical.

That isn’t to say that this work isn’t helpful. I’m very glad to be working daily in a typesafe, memory safe language, but I have bigger fish to fry now as a security professional on the frontline.