I work in security/systems/ops/etc. and fundamentally disagree with this premise. I understand the author is saying "it's not that easy" and I agree completely with that, but I don't agree that it means you're doing your job well.
Unfortunately the vast majority of people do their jobs poorly. The entire industry is set up to support people doing their job poorly and to make doing your job well hard.
If I deploy digital signage, its only network access should be whitelisted to my servers' IP addresses, and it should only accept updates that are signed and connections established with certificate pinning.
This makes it nearly impossible for a remote attacker to mess with it. Look at the security industry that has exploded from the rise of IoT. There's signage out there (replace with any other IoT/SCADA/deployed device) with open ports and default passwords, I guarantee it.
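To make the pinning idea concrete, here's a minimal stdlib-only Python sketch. The fingerprint value and hostnames are placeholders, not real ones; in practice you'd bake the fingerprints of your deployed certs into the device image:

```python
import hashlib
import socket
import ssl

# SHA-256 fingerprints of the exact certs we deployed with (placeholder value,
# not a real fingerprint) - a pin set, not a CA bundle.
PINNED = {"0f1e2d3c4b5a...placeholder..."}

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 over the DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection and refuse it unless the peer cert is pinned."""
    ctx = ssl.create_default_context()
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    if fingerprint(sock.getpeercert(binary_form=True)) not in PINNED:
        sock.close()
        raise ssl.SSLError("peer certificate not in pin set")
    return sock
```

The point is that even if an attacker controls DNS and your whole network path, they can't present a cert the device will accept.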
IoT is just a computer, but it's also a computer that you neglect even more than the servers/virtual machines you're already running poorly.
People don't want to accept this, or even might be affronted by this.
There are some places doing things well - but they're a vast minority of the companies out there, because you are not incentivised to do things well.
"Best practices" or following instructions from vendors does not mean you are doing things well. It means you are doing just enough that a vendor can be bothered to support - which in a lot of cases is unfettered network access.
A sign connected to the internet but with IP whitelists and cryptographic checks is still CONNECTED TO THE INTERNET. Yeah, it's way safer than the same sign with ports open to the world and no authentication, but you can't treat it as "not connected to the internet." You still have to worry about networking bugs, cryptographic vulnerabilities, configuration errors, and other issues that can allow remote attackers to exploit the system. If you want to make the point, you have to give an example of something that's literally not connected to the internet, not one that's simply locked down better.
The number of people who are willing and able to build their own disconnected network is vanishingly small, which is the author's point. When deploying "edge" computing like signage, which demands remote administration, telling your customers anything other than "get it connected to the internet and we'll handle it from there" isn't going to go over well.
"Sorry you can't deploy our signs because we haven't deployed our custom LoRa towers in your area" is just gonna get laughs.
For IoT in particular, you hit a crossroads where the embedded devs haven't really dealt with advanced security concepts, so you kinda have to micromanage the implementation. And, in small teams it's hard to justify the overhead of managing x509 certs and all the processes that come along with it. Just my personal experience.
Yeah, you know, just roll out our MVP, let's see where the business goes with it, and then we'll fix it. What? The budget for fixing it is 2x the product itself? Hm. Let's have meetings upon meetings to postpone the decision until the next one, indefinitely - we cannot really make the decision not to do it, of course.
> And, in small teams it's hard to justify the overhead of managing x509 certs and all the processes that come along with it. Just my personal experience.
If you're using (say) Python in your client code, call SSLSocket.getpeercert() and check whether your company's domain is in the subjectAltName:
* https://docs.python.org/3/library/ssl.html#ssl.SSLSocket.get...
You can ensure it is a valid cert from a valid public CA (like Let's Encrypt) instead of running your own private CA (which you would specify with SSLContext.load_verify_locations()).
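A sketch of that check (the domain names are hypothetical). Note that `getpeercert()` only returns the parsed dict after the default context has already validated the chain against public CAs and checked the hostname; the subjectAltName test is an extra constraint on top:

```python
import socket
import ssl

def san_matches(cert: dict, company_domain: str) -> bool:
    """True if any DNS subjectAltName entry is our domain or a subdomain of it."""
    sans = [v for kind, v in cert.get("subjectAltName", ()) if kind == "DNS"]
    return any(name == company_domain or name.endswith("." + company_domain)
               for name in sans)

def check_peer(host: str, company_domain: str, port: int = 443) -> bool:
    """Connect with full public-CA validation, then apply the SAN constraint."""
    ctx = ssl.create_default_context()  # verification + hostname check on
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return san_matches(tls.getpeercert(), company_domain)
```

That gets you "only talks to our servers" behaviour without running a private CA or fighting custom trust stores.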
> I understand the author is saying "it's not that easy" and I agree completely with that, but I don't agree that it means you're doing your job well.
Could you elaborate what you mean by this? It seems to me that your comment just highlights another set of problems that should (in theory) motivate people to think more clearly about the ways their system communicates with the internet.
I don't see where you disagree with the blog author. Or are you saying that it's fundamentally impossible to improve security in internet-connected systems because people are not equipped to do so?
There's no reason for digital signage inside an airport to be connected to the internet (or running enterprise security software either). The author seemingly doesn't agree with this. Hospital computers should not be connected to the internet. If you are receiving real-time updates directly from a vendor, you are connected to the internet.
Ideally updates should come from a central source internal to the organisation that has been vetted and approved by the organisation itself. Clearly CrowdStrike knows this, and that's why they offer N, N-1, N-2 updates for their Falcon sensor.
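The N / N-1 / N-2 scheme is essentially a ring policy: the newest release only hits your canary/QA ring, and the broad fleet trails behind. A toy illustration (ring numbering and host names are made up):

```python
def assign_version(ring: int, latest: int) -> int:
    """N/N-1/N-2 policy: ring 0 (QA/canary) runs the newest release,
    ring 1 trails by one version, ring 2 by two."""
    if ring not in (0, 1, 2):
        raise ValueError("unknown update ring")
    return latest - ring

# Example fleet: QA boxes take the hit first, production trails.
fleet = {"qa-host": 0, "early-host": 1, "prod-host": 2}
versions = {host: assign_version(ring, latest=618) for host, ring in fleet.items()}
```

A bad build then breaks ring 0 first, while production is still two releases back.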
It's easier to remote into a box and just pull updates from the internet though.
Granted I have not had dozens of jobs, but the only place I have worked where security was treated as the first-class issue that it is (and where this type of CrowdStrike incident probably wouldn't have happened) is at one of the largest financial services companies in the world. And it did not hamper development; it actually improved it, because you couldn't make stupid mistakes like relying on externally hosted CDN content for your backend app. But for people that don't do their job well, it's a pain. "Hey, why doesn't my docker image build on a prod machine, why can't I download from Docker Hub?"
> Or are you saying that it's fundamentally impossible to improve security in internet-connected systems because people are not equipped to do so?
Yes - but I don't think it's that hard. 90% of the work to be more secure than most out there is easy. It just requires expertise and for people to change how they work.
Instead people spend $bn on cyber security when you can get 97% of the way there by following good standards and knowing your systems.
I am by no means perfect, I spent all day Friday fixing hundreds of machines manually that had BSOD'd from CrowdStrike. In this case the vendor had made it impossible to do my job well because they offered zero control on how these updates are rolled out - there is no option to put them through QA first. Unlike the sensor itself, which we do roll out gradually after it has been proofed in QA.
I interpreted it as "Software is crap, and it's hard to make crap work offline. The problem is not the offline, it's the crap."
The question is where to lay the blame for the crap, and how to change that.
I would love to see the author's "lists" turned into a table of sorts, and then any given piece of software could be rated by how many situations on each list it works in without modification, works in with trivial config tweaks, works in with more elaborate measures, or cannot work in. Turn the whole table green and your software is more attractive to certain environments.
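That rating table is easy to sketch: score each product by its support level in each deployment situation, and "turn the table green" means maximising the total. The situations and scoring here are made up for illustration:

```python
SITUATIONS = ["air-gapped", "proxy-only egress", "allowlisted egress", "open internet"]
SUPPORT = {"works": 3, "config tweak": 2, "elaborate setup": 1, "cannot": 0}

def rate(matrix: dict) -> int:
    """Sum a product's support level over each deployment situation;
    anything unlisted is assumed not to work there."""
    return sum(SUPPORT[matrix.get(situation, "cannot")] for situation in SITUATIONS)

# Hypothetical product that assumes internet access:
signage = {"open internet": "works", "allowlisted egress": "config tweak"}
```

A perfect score here would be 12; most internet-first software would land well below half that.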
In Sweden, there is a private network (Sjunet) which is isolated from the Internet. It is used by healthcare providers. Its purpose is to make computers valuable communication devices (I love how the article points this out), but without exposing your hospital IT to the whole Internet. Members of Sjunet are expected to know their networks and keep tight controls on IT.
I guess Sjunet can be seen as an industry-wide air-gapped environment. I'd say it improves security, but at a smaller cost than each organization having its own air-gapped network with a huge allowlist.
You know what I've seen give decision-makers a false sense of security?
"Zero Trust Architecture" and not thinking too deeply about the extent to which you're not actually removing overall trust from the system, just shifting and consolidating much of it from internal employees to external vendors.
I'm not even thinking about CS here. It's curious to see what the implications for individual agency seem to become when the "Zero Trust" story is allowed to play out - not by necessity but because it's "the way we do things now".
(As the wiki page you linked notes, the concept is older and there are certainly valuable lessons there. I am commenting on the "ZTA" trend kicked off by NIST. I bet the NSA are happy about the warm reception of the message from industry...)
> I bet that gives hospital IT a false sense of security.
Why?
They can just as effectively use (e.g.) Nessus/Rapid7/Qualys to do security sweeps of that network as any other.
At my last job we had an IoT HVAC network that we regularly scanned from a dual-homed machine where the on-network devices could not get to the general Internet (no gateway).
That is a solution for companies like Google or non-essential cloud software providers. For all others, serious network segmentation is the safer approach. You could argue that this network is far too large, and that is probably true.
There is future tech running on ancient software stacks. There is no safe way to put it on the net directly.
AWS was an example in the article. Easy to get a fixed IP? True. Getting a fixed IP for outgoing traffic? Not that easy anymore - AWS is nice, but for many applications it just isn't a solution.
If you can't trust anything, you can't do anything. The result is that people who actually need to get their job done then circumvent the entire system and reduce security to absolute zero. As much as the average security expert would like to lock everyone in a padded room forever, there needs to be an acceptable trade-off level of safety and usability.
Post-its with passwords are the most classical example, but removing internet access from an entire institution is just gonna lead to people bringing their own mobile networked devices and does honestly sound like a completely braindead idea.
The UK has that (called the HSCN). I don't think it's a good thing. A couple of years ago you had to pay hundreds of dollars for a TLS certificate because there were only a couple of 'approved' certificate providers. It also provides a false sense of security and an excuse for bad security policies. The bandwidth is low and expensive.
I'm not sure it's quite the same: HSCN does provide border connectivity to the Internet as well as a peering exchange. Sjunet, on the other hand, is an entirely private network with no border connectivity. I have dealt with both.
The same argument was made against seat belts in cars and bicycle/motorcycle helmets. IMHO this argument is rarely good. A false sense of security should not be addressed by removing protection.
> provides an excuse for bad security policies
It should not be used as an excuse, but bad policies in an air-gapped network are less bad than bad policies in an Internet-connected one. I doubt policies will quickly improve as soon as you connect to the Internet.
That's a (highly predictable) implementation problem of HSCN, not a problem with the idea. These complaints boil down to the same old thing: stupidly written law setting a (potentially) good policy up for failure.
Poland has the little-known "źródło" (meaning "source" in English).
It's a network that interconnects county offices, town halls and such, giving them access to the central databases where citizens' personal information are stored. It's what is used when e.g. changing your address with the government, getting a new ID card, registering a child or marriage etc.
As far as I know, the "Źródło" app runs on separate, "airgapped" computers, with access to the internal network but not the internet, using cryptographic client certificates (via smart cards) for authentication.
Given the state of IT in healthcare in pretty much every other country, is there any reason to believe "Members of Sjunet are expected to know their networks and keep tight controls on IT" has any meaning? Does the government audit every computer on the network? Are they all updated with the latest patches? Do we know people aren't plugging in random USB devices, etc..?
My understanding is that the members need to sign a contract to join Sjunet. I'm not sure of penalties, but being kicked out of Sjunet is likely an incentive for decent IT staffing.
Yeah. As someone who has literally been in this industry... as sad as it is, it's a pretty massive ask to expect all healthcare places to have their security "tight". All it takes is one lax clinic or hospital (and truth be told they are ALL lax in their security in one way or another) for it to come crumbling down.
Sundhedsdatanettet actually runs on "public IPs". They aren't public, they aren't routed and they certainly are not connected to the internet, but they do exist within a public range. Not sure why a private range wasn't picked, but I'd guess it's to avoid conflicts with other networks.
As others suggest: Sjunet is not really "private", in the sense that you can bet that there are unsupervised machines connected to it via some of the legit machines (or via some of the comm. equipment), which are also connected to the rest of the Internet via another Ethernet or WiFi connection. These can in principle expose open ports for interested parties to act as they wish on the "private" network. And they do so despite the reassuring contract which Sjunet members sign.
I'm a controls engineer. I've built hundreds of machines, they do have Ethernet cables for fieldbus networks but should never be connected to the Internet.
Every tool and die shop in your neighborhood industrial park contains CNC machines with Ethernet ports that cannot be put on the Internet. Every manufacturing plant with custom equipment, conveyor lines and presses and robots and CNCs and pump stations and on and on, use PLC and HMI systems that speak Ethernet but are not suitable for exposure to the Internet.
The article says:
> In other words, the modern business computer is almost primarily a communications device.
> There are not that many practical line-of-business computer systems that produce value without interconnection with other line-of-business computer systems.
which ignores the entirety of the manufacturing sector as well as the electronic devices produced by that sector. Millions of embedded systems and PLCs produce value all day long by checking once every millisecond whether one or more physical or logical digital inputs have changed state, and if so, changing the state of one or more physical or logical digital outputs.
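That scan cycle is simple enough to sketch. Here's a toy start/stop "seal-in" rung in Python - purely illustrative, since real PLCs run this loop deterministically in firmware:

```python
def scan(inputs: dict, state: dict) -> dict:
    """One PLC scan: read inputs, evaluate the rung, produce outputs.
    Classic seal-in: the motor latches on when "start" is pressed and
    holds itself on until "stop" breaks the circuit."""
    motor = (inputs["start"] or state.get("motor", False)) and not inputs["stop"]
    return {"motor": motor}

# A millisecond loop would just call scan() on fresh inputs each cycle.
state = {}
for inputs in [{"start": True, "stop": False},   # operator presses start
               {"start": False, "stop": False},  # start released: stays latched
               {"start": False, "stop": True}]:  # stop pressed: drops out
    state = scan(inputs, state)
```

No network, no dependencies, value produced every millisecond - exactly the kind of system the article's framing leaves out.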
There's no need for the resistance welder whose castings were built more than a century ago, and whose last update was to receive a PLC and black-and-white screen for recipe configurations in 2003 to be updated with 2024 security systems. You just take your clipboard to it, punch in the targets, and precisely melt some steel.
Typically, you only connect to machines like this by literally picking up your laptop and walking out to the machine with an Ethernet patch cable. If anything beyond that, I expect my customers to put them on a firewalled OT network, or bridge between information technology (IT) and operations technology (OT) with a Tosibox, Ixon, or other SCADA/VPN appliance.
It's reassuring that such things still exist. My mental model of consumer hardware is that they take devices like the ones you describe, and just add wifi, bluetooth, telemetry, ads, and an app.
PLCs are explicitly considered high-value targets as they control large swaths of a nation-state's critical infrastructure as well as connect to high-value endpoints in air-gapped networks.
Now perhaps you're not working on anything someone might want to exploit, but PLCs are often found in critical infrastructure as well as high-end manufacturing facilities, which makes them attractive targets for malicious actors - whether because they're attempting to exploit critical infrastructure or to infect a poorly secured device that high-value endpoints (such as engineering laptops) might eventually connect to directly.
https://www.cisa.gov/news-events/cybersecurity-advisories/aa... - Water Infra
https://claroty.com/team82/research/evil-plc-attack-using-a-...
I was in a cybersecurity program in college and one of the classes explicitly targeted SCADA systems and how to exploit them. That was 10 years ago and I imagine things have only gotten worse since.
I remain unconvinced that you shouldn't air-gap systems because that means you can't use internet-centric development practices. I find this argument absurd. The systems that should have their ethernet ports epoxyed also should never have been programmed using internet-centric development practices in the first place. Your MRI machine fetches JS dependencies from NPM on boot? Straight to jail. Not metaphorically.
After watching a video of a person playing with a McDonald's kiosk, I started to do the same with equipment I found at different places.
One food court had kiosks with Windows and complete access to the Internet. Somebody could download malware and steal credit card data. Every time I used one, I turned it off or left a message on the screen. Eventually they started running it in kiosk mode.
Another was a parking kiosk. It was never hardened. I guess criminals haven't caught on to this yet.
The third was an interactive display for a brand of beer. This one wasn't going to cause any harm, but I liked to leave Notepad open with "Drink water" on it. Eventually they turned it off. That's one way to fix it, I guess.
> Another was a parking kiosk. It was never hardened. I guess criminals haven't caught on to this yet.
I don't know the details of how the parking kiosks near me are set up, but I can only assume they're put together really poorly, because once, after mashing buttons in frustration, one started refunding me for tickets that I'd not purchased. You'd have thought "Don't give money to random passers-by" would have been fairly high on the list of requirements, but there we are.
> If you are operating a private network, your internal services probably don't have TLS certificates signed by a popular CA that is in root programs. You will spend many valuable hours of your life trying to remember the default password for the JRE's special private trust store and discovering all of the other things that have special private trust stores, even though your operating system provides a perfectly reasonable trust store that is relatively easy to manage, because of Reasons. You will discover that in some tech stacks this is consistent but in others it depends on what libraries you use.
Oof, I feel this one. I tried to get IntelliJ's JRE trust store to understand that there was a new certificate for Zscaler that it had to use. There were two or three different JDKs to choose from, and all of their trust stores were given the new certificate, and it still didn't work, and we didn't know why.
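For contrast, Python's stdlib shows the behaviour the quoted passage wishes every stack had: one default context backed by the OS trust store, plus a documented way to append a corporate root rather than hunting for hidden keystores (the Zscaler path below is hypothetical):

```python
import ssl

# Default context: verification on, hostname checking on, OS trust store.
ctx = ssl.create_default_context()

# To also trust a corporate interception root (e.g. a Zscaler CA),
# append it instead of replacing the defaults:
# ctx.load_verify_locations(cafile="/etc/pki/zscaler-root.pem")  # hypothetical path
```

One trust store, one documented call - versus a special private keystore per JDK, per IDE.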
With ham radio, it doesn't need to be near, IIRC. It's been a long time since I've messed about with radio, but I'm pretty sure you'd be able to bounce signals off the ionosphere.
It seems fairly obvious that an airline reservation system needs to be connected to a network at least; I haven't heard many people claim they should all have been offline. But, for example, I've heard stories of lathe machines in workshops that were disabled by this. You gotta wonder whether they really needed to be online. (I'm sure there are reasons, but they are reasons that should be weighed against the risks.)
Beyond that there are plenty of even more ridiculous examples of things that are now connected to the internet, like refrigerators, kettles, garage doors etc. (I don't know if many, or any, of these things were affected by the CrowdStrike incident, but if not, it's only a matter of time until the next one.)
As for the claim that non-connected systems are "very, very annoying", my experience as a user is that all security is "very, very annoying". 2FA, mandatory password changing, locked down devices, malware scanners, link sanitisers - some of it is necessary, some of it is bullshit (and I'm not qualified to tell the difference), but all of it is friction.
So why aren't their employers investing in educating their devs & PMs about security? (rhetorical - we all know why)
Are the latest patches security updates?
A bit like Tor but without all the creepy stuff, I guess.
If there are, a bridge could be made, willingly or not. Of course it's more secure than having everything on the internet.
What a tongue twister for non-Danish speakers :D
Damned right. That would be a special type of malfeasance that should earn criminal punishment, if healthcare equipment worked that way.
https://hamnetdb.net/map.cgi
It has interesting limitations due to the amateur radio spectrum used, including a total ban on commercial use.
As that is the social contract of the spectrum, you get cheap access to loads of spectrum between 136kHz and 241GHz, but can't make money with it.
Only in the Netherlands and Germany is it really widespread: https://hamnetdb.net/map.cgi . Here in Spain it's not available anywhere near me.
Of course. But not the Internet.