Interesting. I'm not sure if the public comment period is over (the original proposal is dated August 2023), but this stands out to me from their paper:
> We propose to focus the scope of our program on intentional radiators that generate and emit RF energy by radiation or induction. Such devices – if exploited by a vulnerability – could be manipulated to generate and emit RF energy to cause harmful interference. While we observe that any IoT device may emit RF energy (whether intentionally, incidentally, or unintentionally), in the case of incidental and unintentional radiators, the RF energy emitted because of exploitation may not be enough to be likely to cause harmful interference to radio transmissions.
I guess it is the FCC, so this makes sense from their point of view. From my perspective, I'd like to see marks indicating:
* If the device can be pointed to an alternate API provider if the company stops supporting it
* If firmware has been escrowed / will be made available if the company stops supporting it
* If device data is stored by the company
* If that data is certified as end to end encrypted
* Some marks for who / how the data is used
You might be getting a bit too far ahead of where the industry is at with some of those wishlist items. NIST's requirements are best practices that everyone agrees on, like:
* data stored/transmitted is secured by some kind of means
* the device supports software updates (see the sketch after this list)
* the device requires users to authenticate
* the device has documentation
* you can report security vulnerabilities to the developer
And even these are things that many devices fail to do, today. We gotta get the basics fixed first.
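To make "supports software updates" concrete, here's a minimal sketch of what a verified update can look like in practice, assuming an Ed25519 vendor key baked into the device at manufacture (the key and file names here are hypothetical):

    # Sketch: verify a firmware image against a vendor-signed detached
    # signature before flashing. Assumes the vendor's Ed25519 public key
    # was baked into the device at manufacture (paths are hypothetical).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_firmware(image_path: str, sig_path: str, pubkey_path: str) -> bool:
        """Return True only if the image matches the vendor's signature."""
        pubkey = Ed25519PublicKey.from_public_bytes(
            open(pubkey_path, "rb").read())          # 32 raw bytes
        image = open(image_path, "rb").read()
        signature = open(sig_path, "rb").read()      # 64-byte detached signature
        try:
            pubkey.verify(signature, image)          # raises on any mismatch
            return True
        except InvalidSignature:
            return False

    if verify_firmware("update.bin", "update.bin.sig", "vendor_key.pub"):
        print("signature OK, safe to flash")
    else:
        print("rejecting unsigned or tampered update")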
But for now, you can presume the Netflix button on your TV remote can't be configured to point to an alternative API if Netflix goes away. :)
'Cause they need somewhere to load in those exploits!
A hypothetical device which is all read-only (except perhaps for a very carefully crafted, limited set of configurable parameters) might in some cases be more secure than the bulk of what's on the shelves today. After all, how many widespread hacks do you read about on old, single-purpose, fixed analog or digital devices (which in a sense are similarly 'read-only')?
Oh, I'm with you 100%. The labels for my list will all be like black with a big X through them. But I'd argue consumer behavior has a better shot of changing with labels than without them.
Seems to me that this would wholesale rule out projects like the ESP32 open WiFi driver. Or rather, in order to comply, Espressif would have to retool their chips to make "unauthorized" access to the raw radio hardware impossible, sort of like how cellular modems are now.
Seems reasonable from the FCC's perspective, but I'm not sure how I'd feel about it.
> If that data is certified as end to end encrypted
This needs better and more detailed clarification. I've reverse engineered a camera-equipped pet feeder, and videos sent to the cloud (or to my emulating server, in my case) were only partially encrypted: I-frames were, P-frames were NOT. Someone ticked a checkbox saying "videos are encrypted" and still left the thing wide open.
Then, of course, it's also a matter of ciphers and modes, authentication, key generation, transmission and storage, etc etc.
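For contrast, here's a minimal sketch (Python with the `cryptography` package; this is illustrative, not the feeder's actual protocol) of what it looks like when every frame, I and P alike, goes through an authenticated cipher:

    # Sketch: encrypt EVERY video frame (I- and P-frames alike) with an
    # authenticated cipher (AES-256-GCM). Illustrative only.
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # in reality: negotiated per session
    aead = AESGCM(key)

    def encrypt_frame(frame: bytes, index: int, frame_type: bytes) -> bytes:
        nonce = index.to_bytes(12, "big")      # 96-bit nonce, unique per frame
        # frame type ("I"/"P") rides along as authenticated associated data
        return nonce + aead.encrypt(nonce, frame, frame_type)

    def decrypt_frame(blob: bytes, frame_type: bytes) -> bytes:
        nonce, ct = blob[:12], blob[12:]
        # raises cryptography.exceptions.InvalidTag on any tampering
        return aead.decrypt(nonce, ct, frame_type)

    ct = encrypt_frame(b"...pixel data...", 0, b"P")
    assert decrypt_frame(ct, b"P") == b"...pixel data..."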
Feels like encrypted storage and transmission alone require a whole label of their own, like the FCC's broadband facts label or the FDA's nutrition facts label, outlining what data exists in the system, where the data is stored, how it's encrypted, how it's authenticated, and so on.
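If such a label existed in machine-readable form, it might look something like this sketch; every field name below is invented for illustration:

    # Hypothetical machine-readable "security facts" label, by analogy to
    # the broadband/nutrition facts labels. All field names are invented.
    security_facts = {
        "data_collected": ["video", "audio", "feeding schedule"],
        "storage": {"location": "vendor cloud", "retention_days": 30},
        "encryption": {
            "at_rest": "AES-256-GCM",
            "in_transit": "TLS 1.3",
            "end_to_end": False,      # the honest checkbox
        },
        "authentication": {"scheme": "per-device key + user password", "mfa": True},
        "update_support_until": "2030-01-01",
        "vuln_report_contact": "security@example.com",
    }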
Which is probably not happening until cryptography 101 becomes part of the general school curriculum and laypeople start to understand the basics. Without people asking real questions and refusing to purchase products from sloppy engineering companies (aka voting with their wallets*), companies will always wave it away with the tried-and-proven "military-grade security" bullshit.
___
*) That is, if there's even any competition. When no one does things right (because consumers don't know, and thus don't ask for it), there's nothing to pick from.
The device doesn’t ship with a known, unchangeable admin password. The device doesn’t needlessly require Wi-Fi access for basic local functionality. [my wish list]
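For the password item, a minimal sketch of the alternative, assuming a unique credential is generated per device on the provisioning line and only a salted hash ever ships (all names hypothetical):

    # Sketch: unique credential per device at provisioning time; the device
    # stores only a salted scrypt hash -- no shared factory default.
    import hashlib
    import secrets

    def provision_device() -> tuple[str, bytes, bytes]:
        password = secrets.token_urlsafe(12)   # printed on the unit's label
        salt = secrets.token_bytes(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return password, salt, digest          # only salt + digest go on-device

    def check_login(attempt: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.scrypt(attempt.encode(), salt=salt, n=2**14, r=8, p=1)
        return secrets.compare_digest(candidate, digest)

    pw, salt, digest = provision_device()
    assert check_login(pw, salt, digest)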
What could we do to make something self-repairable domestically that would also make it not repairable otherwise? Like if you bought it here, but then took it with you internationally, would it suddenly not be repairable?
Cool, I'd rather have a stamp that indicates a company will support their product for X number of years, and if they don't, they will release the software as OSS so you can maintain it yourself. I have an extremely expensive scale that came with wifi support and an app; I only bought it 3 years ago, and half the features already don't work because they nuked the app and stopped supporting the scale. Did I need a smart scale? Absolutely not, and I don't really need any other "smart" devices the more I think about stuff like this, so now I seek to buy "stupid" devices as much as possible. I'm not sure what such security stamps are supposed to provide other than a false sense of security, as most things can be hacked eventually with enough determination or some unknown zero day.
Yeah, nowadays I try to buy many things that are "not smart" in order to avoid what you experienced with the smart scale. That being said, I wonder if what you're asking for is more on the warranty side, rather than the security/promise side? To clarify, I am 100% in agreement with you that after a company stops supporting a product, they should open source it (which could create a secondary ecosystem of techs who offer services to support said open source software if a person is not inclined to manage the OSS themselves, etc.)...However, technically wouldn't a company's "promise" to support software be more like a warranty? And in that case, whatever government agency oversees warranties would need to nudge businesses to comply...Nevertheless, both this cyber mark, a warranty on software lifecycle, and other things are the LEAST that should exist nowadays.
This is the best strategy, but let's be clear... consumers who make a purchase have a reasonable expectation of owning a durable product that does not increase the threat surface of their lives.
This means that the product requirements should be clear and the supply chain must be secure.
Until a "trust label" can guarantee these principles, the proposal is just another prop in a grand security theatre.
This is a bit scary. Knowing how software is developed, I know there's no government program that could actually ensure a device is secure. It's one thing to measure an electronic device's EMI or pump it full of power and see if it catches fire. But black box testing of software is itself a black art, as software security is a lot more complex than [typical] electronic design.
The scary bit is that this label is going to be found to be ineffective, and then consumers may lose trust in government-issued safety stamps.
In Germany we had something like this from TÜV Süd, which certified online shops and online banking websites for their security.
Suffice it to say, the keywords are a Google dork for finding easy-to-hack pentesting victims.
Now the BSI (Germany's federal office for information security, similar to CISA) has also started to push out certifications for the BSI Grundschutz, which is an absolutely meaningless certificate that literally tests the absolute bare minimum of things.
The problem here is that there is no market; this cybersecurity crisis cannot be solved economically, because customers want a certificate without having to do further work. So they'll get it from whatever auditor accepts their money.
This is how it's done, even for ISO 27001 and SOC 2 certifications. Nobody gives a damn if a single working student has 20+ role descriptions lying on their desk. Findings are always ignored and never corrected.
Cybersecurity policies and their effects over time need to be measurable before there can be certification processes.
Additionally, there needs to be legislation that leaves no room for interpretation. Phrases like "reasonably modern" cannot be used in a law's text because they don't mean anything; instead, standardized practices have to be made mandatory requirements, preferably by a committee that is not self-regulating, maybe even something like the EFF, FSF, OWASP, or the Linux Foundation.
Well, there's SELinux and Tor.
I would be more specific than that. SELinux can be running while intentionally poorly written policies allow absolutely anything to happen. The risk being: [X] checkbox SELinux is technically running.
The real problem is that very few vendors are inclined to spend the time and money to make their products truly stable and secure. Instead we churn out a firehose of crap code for a sewage dump of cheap IoT products. I'm not sure how much a government-conceived seal will raise the bar of consumer expectations.
I'd still put my faith in other indicators like a company's track record, third party audits, robustness of open source library choices where applicable, my own analysis of their stack and engineering choices based on signs I can observe about their product / interface / etc (there are usually several present), my own testing and so forth.
I'd argue the generally accepted pace of consumer product development these days is reckless, and not sustainable if you want truly robust results.
I would have been glad to see this step in the right direction if I weren't convinced all it will likely amount to in practice is security theatre. Here's hoping my skepticism is unwarranted.
The combination of government purchasing being required to carry the mark and major US surveillance tech manufacturers like Amazon leading the rollout makes this seem less like a cybersecurity concern and more like a protectionist carve-out.
1) What are the requirements for the mark? E.g. no passwords stored in plaintext on servers, no blank/default passwords on devices for SSH or anything else, a process for security updates, etc.?
2) Who is inspecting the code, both server-side and device-side?
3) What are the processes for inspecting the code? How do we know it's actually being done and not just being rubber-stamped? After all, discovering that there's an accidental open port with a default password isn't easy.
Yep, pretty basic stuff, like 'require authentication', 'support software updates', etc.
> 2) Who is inspecting the code, both server-side and device-side?
UL is administering the program and they're going to come up with the requirements
> UL Solutions will work with stakeholders to make recommendations to the FCC on a number of important program details, like applicable technical standards and testing procedures, post-market surveillance requirements, the product registry, and a consumer education campaign.
https://www.ul.com/insights/us-cyber-trust-mark
Good questions. As I understand it, they spent months deciding who will be responsible and who will pay (and how much). The announcement happened after the budget passed Congress, so now they can put together a staffing table and hire people for the next steps.
Some questions are already answered in the article: on the government side, NIST and the FCC are responsible, and on the industry side, representatives from large companies have agreed to participate, so now they will hold meetings and draft some documents.
So now any interested party (any person or entity, even a "group of hackers") can reach out to those responsible, or talk with the company representatives, as their contact information should appear soon.
Things like this are useless, in my mind, because hackers are always going to innovate and find ways around protection mechanisms. Today's "locked down" IoT device could easily become tomorrow's "vulnerable to an easily exploitable pre-auth RCE".
What the government probably _should_ do is begin establishing a record of manufacturers/vendors which indicates how secure their products have been over a long period of time with an indication of how secure and consumer-friendly their products should be considered in the future. This would take the form of something like the existing travel advisories Homeland Security provides.
Should you go to the Bahamas? Well, there's a level 2 travel advisory stating that jet ski operators there get kinda rapey sometimes.
Should you buy Cisco products? Well, they have a track record of deciding to EOL stuff instead of fixing it when it's expensive or inconvenient to do the right thing.
Should you buy Lenovo products? Well, they're built in a country that regularly tries and succeeds in hacking our infrastructure and has a history of including rootkits in their laptops.
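If such a registry existed, a record could be as simple as this sketch (the levels and fields are invented here, loosely mirroring the travel-advisory format):

    # Sketch of a vendor advisory record, loosely mirroring DHS travel
    # advisories. Levels and fields are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class VendorAdvisory:
        vendor: str
        level: int        # 1 = exercise normal caution .. 4 = do not buy
        summary: str
        history: list[str] = field(default_factory=list)

    advisory = VendorAdvisory(
        vendor="ExampleVendor",
        level=3,
        summary="Track record of EOL'ing products rather than patching them.",
        history=["hypothetical entry: declined to patch a known RCE in an EOL router"],
    )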
NIST isn't a bunch of dummies that don't know this. The requirements posed are not micromanagement of device design; some address your concern exactly... like a requirement that developers provide contact information for reporting vulnerabilities, and that device makers just can't ignore authentication entirely.
But this is IoT stuff we're talking about here, not Lenovo/Cisco... but ReoLink/PETLIBRO/eufy/roborock/FOSCAM/Ring/iRobot/etc. Security (or the lack of it) in the IoT world is a whole different ball game. It isn't uncommon for IoT devices to be EOL on release date, or just lack authentication or encryption entirely.
> NIST isn't a bunch of dummies that don't know this
They've provided thorough definitions and a label that implies the manufacturer has understood them all. That doesn't mean this solves any real-world problem.
> Security (or the lack of it) in the IoT world is a whole different ball game.
Those can be described as IoT devices, but they're more appropriately categorized as "consumer electronics", and they often have a firmware update right out of the box. That's what makes this badging program an absurd idea with no meaningful outcome. This segment is not going to care.
This isn't "Energy Star" where the purchased product does not have additional functionality which can be exposed or exploited through software and no third party testing can be exhaustive enough to prevent the obvious exploit from occurring.
Even to the extent they can, it then enforces a product design which cannot be upgraded or modified by the user under any circumstances. Worse, the design frustrates the user's ability to do their own verification of the device's security.
It's a good idea applied to the wrong category of products and users.
Picking and choosing companies like that could work if it could somehow remain apolitical. Can this registry work despite the tendency for these things to become political?
What you’ve described is maybe more possible if provided by a Consumer Reports-style org that consumers could subscribe to.
When I buy technology today, I'm 10X more worried about the manufacturer deliberately changing, killing, or nerfing the product after I bought it than I am about hackers compromising it. This goes for connected hardware, IoT devices, and software.
Oddly "hackers" are the ones who often revive defunct hardware or give users back control over their devices. Things like DRM laws seem to only enhance corporate interests.
> you can presume the Netflix button on your TV remote can't be configured to point to an alternative API
At least for Android TV devices, Button Mapper works for some.
https://play.google.com/store/apps/details?id=flar2.homebutt...
Warning, I haven't personally tried this
https://askanydifference.com/how-to-root-samsung-tv/
https://wiki.samygo.tv/index.php?title=SamyGO_for_DUMMIES
1) Don't be select Chinese products
2) Be select American products
It's not reaaaally 3D chess, but a relatively crude euphemism for the “Made in America” stamp, or “It's American and definitely not Chinese”.
The security practices are probably the same across products; it's just the wrong time, wrong presidency for China.