wpietri · 4 years ago
At least on first reading, I find this unpersuasive. He correctly lists a variety of problems. But he doesn't explain how his proposed solution, listing all the components of a technological product, would make a practical difference. Creating a list is valuable only if people a) read the list, b) recognize problems, and c) do something based on that.

And for some of the examples he gives, it seems pretty obvious to me that an SBOM wouldn't help. The Equifax breach, for example. They knew [1] that they needed to upgrade Apache Struts. Somebody was supposed to make the upgrade. They just didn't do it. Who would an SBOM help here? Since it's a consumer-facing website, the only people who weren't informed were consumers. So is he proposing to make public the SBOM for every website? I'm not sure that on balance that helps security.

[1] https://www.csoonline.com/article/3444488/equifax-data-breac...

rixrax · 4 years ago
In my mind an SBOM is similar to the food ingredients listed on the packaging. The FDA or someone requires them, and very few people read them or care what is in there. BUT now that they are listed on every food product, those who care can read them and make informed decisions. And raise the alarm when it is found that someone uses unhealthy amounts of whatever in their cakes or sausages.

As for software, if I had up-to-date, reliable SBOMs for everything I run, it would certainly give me peace of mind. And maybe, even if unlikely, I might be able to make purchasing decisions based on the components used, their CVE/etc. history, or their sheer number (with fewer being generally better, unless there is reason to suspect the vendor e.g. rolled their own TLS instead of using one of the usual suspects).

pessimizer · 4 years ago
> In my mind an SBOM is similar to the food ingredients listed on the packaging. The FDA or someone requires them, and very few people read them or care what is in there. BUT now that they are listed on every food product, those who care can read them and make informed decisions. And raise the alarm when it is found that someone uses unhealthy amounts of whatever in their cakes or sausages.

And sue them if they lie about it. I think a lot of the benefit of these types of regulations is to force businesses to commit active frauds instead of passive frauds. Not doing something you were supposed to do is incompetence. Lying on a form about doing something that you haven't is deceit.

The profits from incompetence and deceit are equal until one gets caught; then the lesser punishment for incompetence, as compared to deceit, makes deceit more expensive. Smart businesses will choose incompetence every time, and engineer it into the system everywhere fraud would be profitable.

Of course, they can also hire temps to sign forms, like the banks did in 2008[1], but the current administration has to really want you to get away with it for that to work.

[1] https://www.nolo.com/legal-encyclopedia/false-affidavits-for... Note: it was strangely difficult to find information on this still on the web.

-----

edit: https://news.ycombinator.com/item?id=26530786

jacques_chester · 4 years ago
> might be able to make purchasing decisions based on the components used, their CVE/etc. history, or their sheer number (with fewer being generally better, unless there is reason to suspect the vendor e.g. rolled their own TLS instead of using one of the usual suspects).

Counting CVEs is a poor indicator. It's not a pure function of how many vulnerabilities exist; it's a function of how many exist, are found, and are reported. Those latter two components have a strongly economic nature: it's cheaper not to search and report than to be fastidious.

If anything, more CVE reports from a given company is a positive signal that they give a damn.

(There's also the problem that CVSSv3 is not a very sound measurement of risk. It's sorta-kinda just made up without derivation from a sound theoretical foundation, nor is it based on data about actual impacts. The scores don't move smoothly as a continuous function but jump around a fair amount. It's very easy to swing between widely-separated named categories with a bit of argumentation.)

indymike · 4 years ago
The whole idea of an SBOM is a bad one because of the rate of change in software. For example, a simple Python web app aggregates change all the way from the OS, to the language ecosystem, to the application code. What was in the product when you installed it will change dramatically. Bonus: much of that change is driven by security issues in your software's supply chain. This idea is just paperwork for the sake of paperwork and will just make vendors like SolarWinds more entrenched.
wpietri · 4 years ago
An SBOM as part of a contractual requirement when purchasing software seems totally reasonable to me if the receiving organization already has the practice of checking a lot of versions and making sure they're sufficiently up to date. But the hard part there isn't the creation of the SBOM, it's a) actually using the SBOM, b) having enough contract power that if the SBOM turns out to be incomplete, out of date, or a lie, the purchaser can do something about it, and c) the purchaser doing something about it.

Nutritional labels only work in practice because a) plenty of people read and care about them, b) there are regulatory agencies that set standards and enforce compliance, and c) if they are too far off, an expensive class action suit is a real possibility.

My concern with starting with SBOMs is that since they're orders of magnitude harder to read and evaluate, and since many, many companies are already bad at tracking their own software patch status, approximately nobody will actually use them. Again, I look at the Equifax breach: it happened not because they didn't know what a vendor was up to, but because their internal processes weren't sufficient to turn knowledge into results.

Natsu · 4 years ago
Scanners already effectively give this, finding the vulnerable components and a list of CVEs. But it may be difficult, expensive, or too time consuming to upgrade the affected components. Or there may be blackout periods (e.g. during open enrollment for many healthcare companies) where they basically can't make any changes to the production stack.

The problems with upgrades are usually centered around testing and understanding the changes and ensuring that things still work. It often requires more resources, especially time & developers, than may be available at any given time. And some companies treat all IT functions as cost centers and you can see this from how they run the place: the internal people don't know their own setup very well and may not have much experience in general, things are run by a tiny number of people who may have multiple roles to fill, etc.

Source: I've helped many people in many industries upgrade complex, security-sensitive enterprise software that interfaces with large amounts of their infrastructure.

daniellarusso · 4 years ago
Like ‘natural flavors’ and ‘artificial flavors’ are just different uses of ‘git rebase’?
TeMPOraL · 4 years ago
I wonder how many companies already have SBOM internally for legal reasons? I know I recently participated in building a partial one, to help the company ensure we comply with exports regulations of multiple countries.

After a casual inspection, we thought we had it all covered, but I felt a bit uneasy, so I dug deeper. Only after I actually read the build scripts of the transitive dependencies, one by one, cover to cover, did I discover that we were actually pulling in some extra libraries and features we weren't aware of.

I've spent several days manually digging through the build scripts of our dependencies, and manually[0] inspecting all the dynamic libraries we ship, to provide a complete list of artifacts that include components subject to the legal requirements of interest. And the only reason I could complete this work to my satisfaction is that there was a select set of things we were looking for. Even after all this, I don't know what all the stuff our project depends on does - I only know the stuff the legal team cared about is accounted for.

What this experience made me wish for is better tooling for figuring out what exactly goes into a software product. I'd love to have a tool I could attach to our build system that would track every single library and library feature that's actually being used. It's a tough job, given how many ways there are for some seemingly innocent piece of code to pull in some other innocent piece of code. Such a tool would probably have to be launched on a freshly configured VM and intercept all network traffic, just to be sure.

--

[0] - Well, I quickly scripted that part away. Thank God for people who provide CLI interfaces for GUI tools they write. And yes, inspecting the build output was very useful too - that's how we learned a binary-only commercial dependency we ship is also subject to legal requirements. This wasn't at all visible in the build system - the only way to know was to read the vendor's documentation thoroughly, or audit the symbols in the export tables.
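A crude version of this build-script digging can be sketched as a scan for link directives. The CMake snippet, target, and library names below are all invented for illustration; real build graphs need far more than a regex, which is exactly why the job took days.

```python
# Sketch: scraping link dependencies out of build scripts. The CMake
# snippet and all names in it are invented for illustration.
import re

build_script = """
find_package(OpenSSL REQUIRED)
target_link_libraries(app PRIVATE OpenSSL::SSL vendored_codec)
if(ENABLE_TELEMETRY)
  target_link_libraries(app PRIVATE telemetry_sdk)
endif()
"""

def linked_libraries(text):
    """Collect library names from target_link_libraries() calls."""
    keywords = {"PRIVATE", "PUBLIC", "INTERFACE"}
    libs = []
    for m in re.finditer(r"target_link_libraries\(([^)]*)\)", text):
        words = m.group(1).split()
        # words[0] is the target itself; drop visibility keywords too
        libs += [w for w in words[1:] if w not in keywords]
    return libs

print(linked_libraries(build_script))
# -> ['OpenSSL::SSL', 'vendored_codec', 'telemetry_sdk']
```

Note how the conditionally linked `telemetry_sdk` only shows up because the scan reads the whole script - the same class of surprise described above.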

jacques_chester · 4 years ago
I had a similar experience several years ago.

It got worse when I began to consider dependencies in the supply chain itself. What version of our CI system are we using? What OS base image? What version are our worker VMs on? What packages are installed on them? And on and on and on. When I began writing these sources of upstream variability down I began to find dozens of them, for what was, in dependency terms, a fairly unremarkable application.

akkartik · 4 years ago
Then there's the build dependencies of all the libraries. And of the compilers. It's quite possible a complete list would go all the way back to the original Unix or CP/M, passing through multiple architectures along the way. The tool you want can't exist in the world we have, I think. Projects like http://bootstrappable.org might help get us there.
pycal · 4 years ago
I think on balance it actually hurts more than it helps.

The author lists Equifax as a case where an organization “failed to update a web server in timely fashion (a few months)”, but a software bill of materials would not have made it any more or less obvious that they were running vulnerable web software an attacker could get a foothold in. It could even have made it easier for an attacker to exploit that foothold, pivot, and exfiltrate, knowing what other software was available to exploit.

Equifax didn’t “fail” to manage that particular vulnerability, as the author describes, and protect customer data. They neglected to manage the vulnerability and protect customer data.

It’s my opinion that what would actually be valuable (and have been valuable) in the case of Equifax is compliance legislation that places liability on the custodian of PII. This compliance should require companies which are custodians of PII or financial data, or which operate critical infrastructure to have a vulnerability management practice.

arrosenberg · 4 years ago
FYI - compliance regulation in the US government almost never works; our government sucks at it. If you want to regulate a company like Equifax, you have to stick to investigations and prosecutions, which the US government is quite good at. Companies can take the risk, but if they violate the law it should be big fines and jail time for the executives.
Jgrubb · 4 years ago
PII?
netflixandkill · 4 years ago
We're already living with dedicated software companies having serious issues with their internal lifecycles and secure build processes. The concept of a SBOM isn't bad but any nontrivial end product is going to be pulling in orders of magnitude more component software than even large nested BOMs do, and no one is willing to pay to maintain what they have internally, much less read and act on that.

In principle, sure, but in immediate practice it would be like California forcing the labeling of basically everything as carcinogenic -- a step sort of in the right direction but mostly useless in practice.

The one thing that absolutely needs to be considered is not constructing it in a way that encourages private, unmaintained forks or requires business contractual liability. Most software only works as well as it does because there is so much really good open source to draw on.

njitbew · 4 years ago
Without reading the article, I can imagine that listing the components of a technological product (i.e., an SBOM) is a _first step_ towards the goal of solving all those problems. Once you have a standardized way of communicating what a software product is made of, you can start thinking of automatically upgrading dependencies (Maven's pom.xml does this to some extent, and Dependabot and Renovatebot leverage this semi-standard to automatically upgrade your dependencies). If you take this one step (or two steps) further, you can start to automatically rebuild the code, automatically deploy the code, patch running systems, detect when CVEs are actively being abused, and so on. Basically, automate the heck out of this so that the "they just didn't do it" will not happen. And for automation, you need standards.
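The automation described here presupposes a machine-readable component list. A minimal sketch of cross-referencing one against a vulnerability feed follows; the SBOM is a drastically simplified CycloneDX-flavored document, and the feed entries are illustrative, though CVE-2017-5638 is the real Struts advisory behind the Equifax breach.

```python
# Sketch: cross-referencing a simplified, CycloneDX-flavored SBOM
# against a hypothetical vulnerability feed.
import json

sbom = json.loads("""
{
  "components": [
    {"name": "struts2-core", "version": "2.3.31"},
    {"name": "log4j-core", "version": "2.17.2"}
  ]
}
""")

# Hypothetical feed entries: (package name, vulnerable version, advisory)
vuln_feed = [
    ("struts2-core", "2.3.31", "CVE-2017-5638"),
    ("openssl", "1.0.1f", "CVE-2014-0160"),
]

def find_vulnerable(sbom, feed):
    """Return advisories whose (name, version) appears in the SBOM."""
    index = {(c["name"], c["version"]) for c in sbom["components"]}
    return [adv for name, ver, adv in feed if (name, ver) in index]

print(find_vulnerable(sbom, vuln_feed))  # -> ['CVE-2017-5638']
```

The point of a standard format is that this join can run continuously and unattended, instead of depending on someone remembering to check.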
mikepurvis · 4 years ago
Wouldn't having to advertise your out of date dependencies help to shame companies into upgrading on a reasonable schedule? So that upgrades are actually a priority and not just a thing that happens when literally everything else is already done?
sokoloff · 4 years ago
If that became a problem, companies intending to skirt the disclosure would fork and “maintain” private branches of dependencies such that it couldn’t be determined if they were out of date.
zvr · 4 years ago
It's not about "shaming", as these SBOMs might not be publicly available. But serious customers might have something to say when they realize that they are getting obsolete versions of components full of security issues.
jmull · 4 years ago
I’m a bit skeptical too. (It didn’t seem to me that a SBOM would have helped with solarwinds either.) But I’ll play the devil’s advocate:

* Perhaps end-user systems could automatically monitor the SBOMs of all software installed, cross-reference it with a live vulnerabilities database, and produce vulnerability reports and notifications. This increases the visibility of vulnerabilities and the chance they will be resolved quicker.

* Software companies will feel the increased exposure of the SBOM they need to publish causing them to think more carefully about when and how to take on dependencies. Some do this well already, but this would likely cause more companies to do so.

wpietri · 4 years ago
It's certainly possible. But I think it's equally likely that applied naively, we'd see more breaches as public SBOMs make it clearer what attacks will work where.

There's also a real question of net value for effort. Security is one consideration people balance, but it's far from the only one. Starting with SBOMs as the focus assumes too much about what people care about and how much work they'll do.

I'd much rather people start with some user-focused approach and then making use of particular technologies (like SBOMs) as needed to advance people's actual goals.

perlgeek · 4 years ago
At my employer, we have a company-wide database of which package is installed in which version on each machine (several tens of thousands of them).

This allows the compliance department to follow known security issues, and they can then open tickets for the affected operating teams stating on which machines the software needs to be upgraded (or mitigations implemented), and they set deadlines based on vulnerability ratings. If the deadlines aren't met, there's a hierarchical escalation.

In the case of the Equifax breach, such a mechanism might have helped. If the developers knew they had to update, but didn't, maybe the ticket from compliance would have given them the right nudge to actually do it.
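The compliance workflow described here boils down to a join between a package inventory and a list of bad versions. A toy sketch, with invented hostnames and version numbers:

```python
# Toy sketch of a package-inventory query; hostnames and versions
# are invented for illustration.
inventory = {
    "web-01": {"apache-struts": "2.3.31", "openjdk": "8u121"},
    "web-02": {"apache-struts": "2.5.10", "openjdk": "8u121"},
}

def machines_running(pkg, bad_versions, inv):
    """List hosts where `pkg` is installed at a vulnerable version."""
    return [host for host, pkgs in inv.items()
            if pkgs.get(pkg) in bad_versions]

# These are the machines that would get an upgrade ticket.
print(machines_running("apache-struts", {"2.3.31"}, inventory))  # -> ['web-01']
```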


silly-silly · 4 years ago
At my employer we do the same thing for pretty much all software shipped to customers. (X thousand packages, across 5 arches, across 5 releases)

It is an ongoing effort, and it becomes more complex with vendored packages, embedded jars and 'containers'.

I'm assuming that the indexing is done at compile time, how far back into your dep tree do you go ?

Ericson2314 · 4 years ago
These government-adjacent people are rediscovering Nix / Guix. So yes, the current phrasing is a bit vague in that they are just grasping at the concept via draft requirements. But you can't fault their intuition, as those tools do exist and are absolutely revolutionary.

The one thing I wish they mentioned is https://docs.softwareheritage.org/devel/swh-model/persistent..., which are the right idea and actually used in practice.

Jupe · 4 years ago
Ramblings on these topics...

Exposing an SBOM for every piece of delivered software will just make a hacker's job easier and quicker... Since by design they are machine readable, SBOMs will make querying for specific vulnerabilities trivial.

This is not a top-down problem! Any upper layer can be compromised by a lower layer (OS, build tool, library, reporting tool, etc.). This problem can only be solved bottom up: from a verified OS, to verified (bootstrap) build tools for that OS, to every library installed on that OS, etc. We currently have decades of software resting atop unverified libraries resting atop unverified operating systems, all built with unverified tooling.

We can't even build verification tools that are, themselves, verified! And if we could, can we even say they verify every potential vulnerability? (mitm, boundaries, race conditions, cpu cache, etc.)

I know there is research at some universities into formally verified OSes, but it's a long way off IMO.

This is the problem of our time. But, unfortunately, the industry seems consumed with velocity and cleverness over stability and security.

silly-silly · 4 years ago
> I know there is research at some universities into formally verified OSes, but it's a long way off IMO.

I believe seL4 is verified and used in production ( https://sel4.systems/ )

Roark66 · 4 years ago
If the author was serious about promoting the idea the article would be published in an open manner (not behind a pay wall).

The concept is good, but good luck enforcing it with closed source software companies.

Anyone that is really interested can already find that info for OS software but where it would really be useful is with closed source software. Where I personally would really love to see it implemented is with embedded devices.

I've recently been hacking a not-so-old IP cam in my spare time. The hardware is great... It has a 600 MHz 32-bit CPU with 64 MB RAM, hardware H.264 encoding (1080p 30fps, close to real-time), bi-directional audio, WiFi, USB host, PTZ, free GPIO, and Ethernet, all for around $20 (indoor version), but the software is abysmal. It runs Linux kernel v3 (almost a decade old). Upon startup it immediately starts streaming video/audio to a server in China, while the mobile app requires you to "register" for an account with a phone number. The only way it can receive the video is from the Chinese server, and it displays ads on 20% of its screen. Ridiculous. Thankfully it is pretty easy to hack, but what about all the non-technical people who buy it?

pdimitar · 4 years ago
No SBOM will help you if the people know they have to act but they don't -- out of malice, bureaucratic slowdown, policy restriction and what-have-you.

If you don't have hardware and software that can't be tampered with and that automatically apply / enforce the SBOM, then it is essentially worthless.

jacques_chester · 4 years ago
I disagree. SBOMs will help to create useful pressure on upstream providers to show their work. In particular, as another commenter pointed out, failing to provide an SBOM and providing a deliberately inaccurate or incomplete SBOM are quite different. One is presumably mere incompetence, the latter opens the door to consequences for fraud.
mybrid · 4 years ago
Bills of materials work with hardware because origins can be tracked. Good luck tracking the origin of electrons. I worked in manufacturing for a company that dealt solely with MilSpec (Military Specification) work. Crates sent to nuclear power plants had to be X-rayed on the shipping dock and then again at the receiving dock. If the X-rays differed in any questionable way, the shipment was rejected. However, most supply chains are not that paranoid.

In the 1980s there was a grade-eight bolt scandal involving a satellite that blew up in space because the manufacturer substituted plain steel to make more money. More recently, the BOM did nothing to protect the Bay Area Bridge, where once again bolts as well as rods were specified at one quality and delivered at another. The bolts and rods are still in the new span, because taking them out would require tearing it down again. But the builder assures us things are fine, wink wink, nod nod. https://www.courthousenews.com/34m-settlement-reached-for-de...
patcon · 4 years ago
Aren't you just talking about checksums? Your xray anecdote is neat, but the software equivalent of that paranoia is even simpler and easier to operationalize if there's incentive to do so. e.g., https://reproducible-builds.org/
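The software analogue of X-raying the crate at both docks is hashing the artifact at build time and again on receipt. A minimal sketch, where the byte strings stand in for real binaries:

```python
# Sketch: verify an artifact by comparing cryptographic digests taken
# at the "shipping dock" (build) and the "receiving dock" (install).
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest of an artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

shipped = b"\x7fELF...binary contents..."   # placeholder bytes
received = b"\x7fELF...binary contents..."

assert digest(shipped) == digest(received)            # untampered
assert digest(shipped) != digest(received + b"\x00")  # any change shows
print("artifact verified")
```

Reproducible builds extend this one step further: anyone can rebuild from source and check that their digest matches the vendor's.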
mybrid · 4 years ago
Materials can be traced to a point of origin, especially radioactive material. There is no such tracing with software. A checksum only guarantees that two artifacts are identical. It says nothing about where either one came from.
jacques_chester · 4 years ago
We need SBOMs, but these are not enough. We need supply chain attestations, but these are not enough. What we need is the combination of asset data, process data and to acknowledge that our knowledge of both is always incomplete and subject to change. I call this need a "universal asset graph" and I've been nagging folks for years to get us to it.

The sigstore project is the biggest foundation stone of what I'd wish for, at least in terms of creating a robust shared log of observations (a leader of that effort, dlor, is in this discussion). But we still have a very, very long way to go as an industry.

Ericson2314 · 4 years ago
Just look at Nix.

Here's the thing: having the sellers of unfree software compile the code for us is a terrible skeuomorphism from the way traditional products are made. The final integrator should be the one building the code, even for proprietary and unfree software, whose secretiveness should be enforced with contracts, not with obfuscation and baking in specific dependencies.

The fact that the final compilation graph and the IP procurement graph have some similarities should just be a coincidence.

jacques_chester · 4 years ago
I didn't follow your argument. Could you elaborate?
mrweasel · 4 years ago
Doesn't something like FDA approval of medical systems already require this? I believe you're required to maintain a list, and risk analysis of third party software you incorporate into medical products: https://en.wikipedia.org/wiki/Software_of_unknown_pedigree
dlor · 4 years ago
My big problem with all the SBOM efforts is that any kind of compliance/accuracy will be best effort and most likely wrong, leading to more problems and blame.

This is not as simple as writing down your dependencies. Most people don't even know what their full set of transitive dependencies is, or how to even go about finding it.

How do you know the SBOM you get is even accurate? You can't just crack open a binary and look at what's inside. If you could, we wouldn't need these giant complicated file formats.

jmull · 4 years ago
> Most people don't even know what their full set of transitive dependencies is, or how to even go about finding it.

I think that’s the point.

Also: you really do know your direct dependencies, since you need them to build your software. If the efforts to promote or require SBOMs are successful, your dependencies will all have SBOMs, and your tooling will be updated to help you generate yours.

dlor · 4 years ago
I don't think that's true in practice. Try it. I did here: https://dlorenc.medium.com/whos-at-the-helm-1101c37bf0f1

It's basically impossible with today's tooling and practices to come up with a list of dependencies for a moderately complex application.

fulafel · 4 years ago
You can, in fact, crack open the binaries and look at what's inside. The field of tooling for it is called SCA (software composition analysis).
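One naive SCA technique is fingerprinting components by the version strings embedded in a binary. A toy sketch follows; the blob is fabricated (though zlib 1.2.8 and OpenSSL 1.0.1f are real past releases), and production SCA tools use far richer signatures than a single regex.

```python
# Naive sketch of one SCA fingerprinting technique: pulling embedded
# version strings out of a binary blob. The blob is fabricated.
import re

blob = b"\x00\x01zlib-1.2.8\x00junk\x00OpenSSL 1.0.1f 6 Jan 2014\x00"

def version_strings(data: bytes):
    """Find printable runs shaped like 'name-1.2.3' or 'Name 1.2.3'."""
    return [s.decode() for s in
            re.findall(rb"[A-Za-z]+[- ]\d+\.\d+\.\d+[a-z]?", data)]

print(version_strings(blob))  # -> ['zlib-1.2.8', 'OpenSSL 1.0.1f']
```

This also illustrates the quality problem raised below: anything statically linked, stripped, or vendored under a different name simply won't match.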
ozim · 4 years ago
Technically you are right.

Question is who is going to pay for that?

In my job we dealt with enterprise customers that required a list of all the libraries we use and what licenses those have. But they had buckets of money to spend on compliance.

dlor · 4 years ago
Sort of. The quality of the data this tooling generates varies GREATLY among languages, build systems and environments. For packaged software like Solarwinds, sure you can try to run an SCA tool. But is anyone claiming an SBOM or SCA tool could have prevented that attack?

The bigger issue is services and hosted software. You can't crack open an API or website that stores your data to see what database they're using. You could ask that they publish an SBOM, but who knows if it's accurate.