- Running a port scan caused the weapons system to fail
- One admin password for a system was guessed in nine seconds
- "Nearly all major acquisition programs that were operationally tested between 2012 and 2017 had mission-critical cyber vulnerabilities that adversaries could compromise."
- Taking over systems was pretty much playing on easy mode: "In one case, it took a two-person test team just one hour to gain initial access to a weapon system and one day to gain full control of the system they were testing."
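On the first bullet: a port scan is about the gentlest probe there is. A minimal sketch in Python (a plain TCP connect() scan — nothing exotic, hostnames and ports purely illustrative) shows that everything such a scan sends is an ordinary TCP handshake; a system that falls over under this is fragile by design:

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection.

    A connect() scan like this sends nothing unusual -- just a normal
    TCP handshake per port -- which is why a weapons system crashing
    under one says more about the system than about the scanner.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the handshake completed
                open_ports.append(port)
    return open_ports
```

This is the most basic form of what off-the-shelf tools do by default; real scanners only add speed and service fingerprinting on top.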
My thoughts on this are always related to "skin in the game": does it matter personally to the people making and procuring the systems, especially at senior management level, whether it actually works?
Back in WW2 it definitely did, especially in the UK where bombing had no respect for the class system. Winning or losing the war would make a personal difference.
But since then? All the wars have been overseas with no real threat to the mainland US; there was a real technological race against the Soviet Union, but that ended in the 1990s. The post-9/11 wars were more of an excuse to settle scores and play the Great Game than a real effort against terrorism (no pursuit of the Saudis, for example).
The consequence is that the main thing that matters is selling the technology to the Pentagon, or promoting a career inside it. Nobody really believes that if they procure a crappy IT system the enemy is going to fly a 747 into their office. Someone might get killed, but nobody they know or who matters, and it's never going to come back to the project manager or procurement person who made terrible, expensive, uninformed choices about technology.
This is a very good question I've been pondering for years, and I generally came to the same conclusion with regard to the military-industrial complex in general - not just software. It seems to me that no one expects any war that would hurt the US any time soon, so it's open season for fleecing the military budget for all it's worth.
I also wonder sometimes if a similar thing isn't happening in enterprise software - that is, actual software doesn't have to work; it only has to serve as an object of trade between companies, and all the problems will disappear in general organizational noise & inertia.
I would suggest that the problem is too much wriggle room / dissonance during the design process (in nice safe meeting rooms, admittedly). We can all persuade ourselves that once all items are ticked, the job is done.
But testing gives the lie to all this. The Patriot system was battle-tested in the 1990s and its deficiencies became apparent - and lessons seem to have been learnt.
So perhaps more adversarial testing is the right approach - set the marines to take out the air force's weapons, the navy to destroy the army.
If people know their beloved weapons systems are going to get roughed up, then the tick box stops being the determinant of achievement - it becomes "what would one of those navy/marine/air force/army bar stewards do?" That's a much higher bar.
Honestly I'm really offended by this comment. To suggest that coders writing weapons systems have little skin in the game is condescending and shows how ignorant of the environment you are. Low effort comment. Every industry is for the most part disturbingly bad at security in general.
Maybe write some weapons systems or work with people that do and you would have a different perspective.
> Test reports we reviewed make it clear that simply having cybersecurity controls does not mean a system is secure. How the controls are implemented can significantly affect cybersecurity. For example, one test report we reviewed indicated that the system had implemented role-based access control, but internal system communications were unencrypted. Because the system’s internal communications were unencrypted, a regular user could read an administrator’s username and password and use those credentials to gain greater access to the system and the ability to affect the confidentiality, integrity, or availability of the system.
"Do you want to play a game?"
This is some scary bad WarGames like security, password 'joshua' level.
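The failure mode GAO describes is easy to make concrete. A toy sketch (the `LOGIN user:password` wire format is invented here for illustration, not any real protocol): if internal traffic is unencrypted, anyone who can observe the frames recovers the admin credentials with a few lines of parsing:

```python
# Frames an on-path observer might capture from an unencrypted
# internal link. The "LOGIN user:password" framing is made up for
# this sketch, but cleartext credentials look roughly like this
# on any unencrypted protocol.
captured = [b"HELLO v1.0", b"LOGIN admin:hunter2", b"STATUS OK"]

def sniff_credentials(frames):
    """Pick the login frame out of a capture and split out the secret."""
    for frame in frames:
        if frame.startswith(b"LOGIN "):
            user, _, password = frame[len(b"LOGIN "):].partition(b":")
            return user.decode(), password.decode()
    return None
```

No exploitation is involved at all — the "attack" is reading what the system broadcasts, which is exactly why role-based access control on top of a cleartext channel buys nothing.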
> Program offices were aware of some of the weapon system vulnerabilities that test teams exploited because they had been identified in previous cybersecurity assessments. For example, one test report indicated that only 1 of 20 cyber vulnerabilities identified in a previous assessment had been corrected. The test team exploited the same vulnerabilities to gain control of the system. When asked why vulnerabilities had not been addressed, program officials said they had identified a solution, but for some reason it had not been implemented. They attributed it to contractor error.
Ah, the old blame-the-'contractor error' and 'not invented here' syndrome. Looks like engineers aren't in the power structure to change these things; scary if the military is run like an MBA-only-led business with no influence from engineering/security.
The problem isn't just MBAs. It's that scientific management has become central to the DoD's mode of functioning. Which is remarkably sad. The DoD in the WWII and post-WWII era actually developed a lot of good systems engineering practices and was largely successful on some massive projects.
Since then, however, there has been the constant desire to deskill workers. That is, they want explicit operating/work instructions that mean even a trained monkey could do the job. This is useful for some things (got a broken down Jeep? Pull out the manual and even your Lt can fix it!), but for development, acquisitions, and sustainment this is actually a horrible idea.
The infection of deskilling spread to the office. They've removed the office managers (among others) and left the work (critical, but time consuming and secondary to the mission) with the highly educated and trained staff. This diminishes their ability to focus on real work (see [0] from yesterday). From there, they continued to try to document precisely how engineering work is done. Believing that the process of the work is the same as the work. So if only my engineers knew how to properly fill out SF-1234 it wouldn't matter how educated they are, magically the work would be done.
Of course, we all know this is bullshit. Knowledge work is called that for a reason. The capacity for work is based on the knowledge of the workers. You cannot deskill computer science or aerospace engineering. You can deskill aspects of it, or really the workflow, but not the science and engineering work itself. I can eliminate or largely reduce the need for a classical configuration manager by establishing a (automated) peer review and version control workflow. But the creative act producing content that enters the workflow will always require skilled, competent, knowledgeable engineers and scientists.
EDIT: I will not change the above, but I will make a note. After re-reading the Wikipedia page on Scientific Management [1] I see that some people consider Lean and others to also be extensions of it. So read the above as describing Fordism and Taylorism, not scientific management in general. Any management largely based on, or influenced by, statistics, models, and experimentation could be argued to be "scientific management"; that doesn't mean it's bad. It's when it's taken to an extreme, as in Taylor's case, where workers are treated with contempt, the purpose of the management being to make them fully expendable. They had no unique knowledge or skills that could really contribute to the business beyond their physical presence and ability to follow directions. His goal was to disempower workers, whereas other examples (Lean particularly) work to empower workers.
Contractors and procurement also have to follow trade agreements and quotas. Probably way higher in priority than some engineer's memo with indecipherable tech jargon about 'security holes'.
It's like the worst possible scenario. Could this be a wake-up call to the people who work on these systems? I doubt it, based on government procurement strategies like the ones we saw with the Obamacare website.
>Multiple weapon systems used commercial or open source software, but did not change the default password when the software was installed, which allowed test teams to look up the password on the Internet and gain administrator privileges for that software.
Even worse is that institutional problem where you have people constantly cycling on and off of this hardware that was never designed for a multi-user environment, so default passwords are the order of the day. At best they changed the password and then put it on a sticky note attached to the monitor.
The last thing you want is someone forgetting their password to their tactical system while out at sea and having to sail back into port to get the vendor to reset it for you.
And really, the first case is no worse than the old days with manual controls that just anybody could walk up and fiddle with.
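The default-password finding is the most mechanical of the lot, which is what makes it so damning — checking for it is trivially automatable. A sketch (product names and credentials here are invented; real audits draw on published default-password databases like the one the test teams presumably used):

```python
# Illustrative entries only -- not a real credential database.
DEFAULT_CREDS = {
    "acme-plc": [("admin", "admin"), ("admin", "1234")],
    "router-x": [("root", "root")],
}

def audit(product, try_login):
    """Return the first default credential pair that `try_login` accepts,
    or None if the device rejects all known defaults.

    `try_login(user, password) -> bool` stands in for whatever login
    mechanism the device exposes (telnet, HTTP form, SSH, ...).
    """
    for user, password in DEFAULT_CREDS.get(product, []):
        if try_login(user, password):
            return (user, password)
    return None
```

A deployment checklist that ran something like this once per fielded system would have closed the exact hole GAO describes.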
"In operational testing, DOD routinely found mission-critical cyber vulnerabilities in
systems that were under development, yet program officials GAO met with
believed their systems were secure and discounted some test results as
unrealistic. "
Aren’t there reams of security standards and thousands of man-years of security compliance bureaucracy for even the most basic DOD IT projects? And they still have trivial vulnerabilities like this? Is the process really that useless?
Bureaucracy not only fails to discourage vulnerabilities unless they're on a very short list, it actively encourages them by driving away the kind of imaginative thinking you need to think of them.
I think the difference is between "DoD IT projects" and DoD projects that have networked computer systems. My hunch is that most of these vulnerabilities are in systems that are not labeled as "IT projects".
Maybe? I somewhat doubt it, since I work for a company that has a couple of DOD IT products that are in fairly widespread use, and I don't know that we have done any security compliance to speak of over the past seven or eight years. In that time period we haven't done a ton of work, but we have had to make some changes to move from a really ancient JRE version to a slightly less ancient one.
A side note: the picture in the first few pages of the PDF looks like the original author's intent, i.e., not pointing to a particular part of the fake plane for each subsystem. The picture on the web was "upgraded" editorially to point to specific parts for... ? Marketing reasons? Not sure, but it's hilarious because in the web version of the fake plane, the logistics system is in a missile.
I was a dev contractor for the US Army for a few years. None of this surprises me.
They had some goofball policies that made it seem like vulnerabilities were the goal. I could bitch at length. Their TSA-style security theater practices were the order of the day. The IA training was an embarrassing joke, and they made you do it often enough to make you a little crazy.
I just checked the certificate of networthiness page and they don't have a valid SSL certificate. I recall that being the case years ago too. I wonder if it's been that way for the last 7 years? That's a cute little terrarium of the whole biome I remember.
Off topic a bit, but that all aside... I am more proud of the work I did there than at any other place in my career. I got a lot of excitement and engaged feedback about the interactive learning materials I created.
I'll never know if it made any difference, but the mere fact that someone's son or daughter COULD have noticed an IED threat they wouldn't have otherwise because of my work gives me all sorts of proud fuzzies.
That work had way more meaning than all the other CRUD/ML/Advertainment schlock I'll get to do for the rest of my life :)
> I just checked the certificate of networthiness page and they don't have a valid SSL certificate. I recall that being the case years ago too. I wonder if it's been that way for the last 7 years? That's a cute little terrarium of the whole biome I remember.
That's not quite true. Internal-use sites don't have a valid cert issued by a "default" external vendor.
Public sites use existing CAs that are in use by the public. E.g., the Marines' public-facing site[0] is signed by DigiCert. If you go to a site that's public-facing but for internal use, like MoL[1], you'll see that the cert is issued by an internal DoD CA. This is intentional.
The DoD has an internal CA already set up. These internal-use sites are a gateway to sensitive information, so the DoD doesn't want to rely on an external CA for HTTPS. What I never understood was why these internal CAs weren't marked as trusted on the internal machines. That would avoid the browser warnings when accessing one of these sites from DoD hardware, and it would (in theory) force the user to double-check when accessing the site from an external device.
They are trusted by internal machines -- since a lot of internal authentication relies on these certificates. The DOD long ago moved away from password-based authentication mechanisms to certificate-based authentication (GSC-IS initially (CAC), now NIST SP 800-73 (PIV; CAC II)) and so the system will have the correct certificates or the user generally won't be able to login.
What I find to be the most common error is that users set up an alternate browser (such as Firefox) that does not use the system certificate store and therefore lacks the system's local certificate authorities.
Additionally, DOD PKI is now cross-signed with Federal PKI (FPKI), so it's larger than the DOD now and other agencies also use the same smartcards (PIV).
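For what it's worth, "trust the private root" is a one-liner in most TLS stacks. A hedged Python sketch (the bundle path is hypothetical; on actual DoD machines this lives in the OS/browser certificate store, not application code):

```python
import ssl

def internal_client_context(ca_bundle_path=None):
    """TLS client context that can trust a private root CA (e.g. a
    DoD-style internal CA) in place of the public bundle.

    With the internal root loaded, sites signed by it validate cleanly;
    everything else fails closed -- the behavior the browser warnings
    discussed above indicate is missing when the store lacks the root.
    """
    # PROTOCOL_TLS_CLIENT enables hostname checking and CERT_REQUIRED
    # by default (Python 3.6+).
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    if ca_bundle_path is not None:
        ctx.load_verify_locations(cafile=ca_bundle_path)  # private root, PEM
    return ctx
```

The Firefox problem mentioned above is exactly this in reverse: a client whose verify locations never had the internal root loaded, so every internal cert looks invalid.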
Thank you for your work and for this comment. Regarding the last line: if you can work in the US and are not hamstrung by personal circumstances, there is no way, given the skills you imply having, that you can't find meaningful work: health care, education, energy all have dozens of good companies straining to find additional competent technical staff.
Thank you for the kindness. I'm in a bit of a slump right now so my cynicism is leaking.
Job offers are trivial to get. Meaning... proper autonomy/feedback balance... impact... Life must be too easy for me to be such a snob. Neural fatigue is real.
I'm not sure what specific training you're talking about regarding DIACAP (which it would likely have been when you were working there; now replaced with RMF), but over all the goal of certification and accreditation is about assuming risk, and the DAA (Designated Approving Authority) assumes the risk so they need to be informed about the risk. More information can be found in DoD Directive 8500 (DoD Instruction 8500.02 specifically).
As far as the SSL certificate, I assume you mean https://www.atsc.army.mil/ ? That site seems like it has a valid certificate, if you validate against the DOD PKI (now cross-signed with FPKI) root CAs.
Looks like I was remembering the wrong acronyms. It was information assurance training. We had to do it every 6 months, and like twice in a month when Snowden did his thing.
My first year there it was a goofy flash game with uncanny valley cartoon characters awkwardly telling you not to share secrets at the bar to get laid. Every year I stayed it seemed to get longer and more awkward. At some point they added a boxing minigame that didn't have any training value. Nothing was optional.
It became a goal of mine that they'd let me remake it in a way that was... not patronizing... I never found anyone who knew who to talk to to get me the project, though. :(
If you are interested in helping the US Government fix this particular trashfire, consider joining the Defense Digital Service. We work on a variety of DoD projects as part of the US Digital Service "tech peace corps". https://www.dds.mil/
If you're not ready for that level of commitment (though it's amazing work), and you're interested in being involved as a security researcher, reach out to me and we can talk about joining our bug bounty program.
If this intrigued anyone else, just a quick summary: 3-6 week interview process, no relocation assistance, no bonuses, no equity, citizenship requirement, oh and the kicker: drug testing.
Yup! We’re all employees of the federal government, so we have to meet the requirements of all Federal positions.
Honestly, you don’t do this job for the money. I took a pay cut when I joined, on top of losing bonuses and equity. You join because you want to make a real difference in people’s lives, in a visceral, real way.
I can say without exaggeration that there are people who would have died except for the work that our team had done. Even when the stakes aren’t life or death, the impact you can have working for USDS is massive compared to anywhere else. You can personally change the lives of hundreds of thousands or millions of people. That’s the kind of hook that beats equity for me any day.
Thanks for chiming in. Curiously, I had a few questions:
1.) Does DDS really pen test developmental/operational weapon systems? I'm talking about custom flavors of standalone PIT systems at the lowest embedded level, not just public-facing unclassified commodity IT systems. Maybe I'm missing something, but the projects highlighted on DDS's website suggest otherwise.
2.) How's your Blue team ops? The current RMF meta in the field strikes me as an all-Red-team party, while the Blue side of the business is pretty much always MIA. I suspect it's partly because pen testing is fashionable these days, successful outcomes can be quite dramatic and readily understood by stakeholders, and defensive posturing carries inherent liability without visible impact on performance/capability when a complex system's requirements are not well understood (a compounded issue not exclusive to weapon systems), to name a few reasons.
> The enemy here is fairly low-tech. Shouldn't be a problem.
That would be perfectly acceptable if your hardware were only used for 2-3 years against low-tech enemies who don't have access to electricity during that whole time.
I think this could be the downfall of the US military if they ever get into a conflict with a capable enemy. They are so used to using super complex and expensive weapons against enemies who can't really put up a resistance. I wonder what would happen to the B-2 bomber or aircraft carriers if they had to fight China. My guess is these weapons would be eliminated very quickly.
The catch is that on DOD systems, encryption is very difficult to add - that is, to get certified by the NSA and made compatible with the military key infrastructure. So it's better to avoid mentioning it unless it's forced on you. "Better" is a relative term here: I mean in terms of cost and effort to add, not security.
> Nearly all major acquisition programs that were operationally tested between 2012 and 2017 had mission-critical cyber vulnerabilities that adversaries could compromise.
It's not too surprising, and a little reminiscent of the security nightmare that is IoT devices.
All those weapon systems come out of hardware/engineering companies with little background in software engineering and the accompanying security best practices.
Most hardware engineering companies have no idea about software. To them, software is just another line item on the BOM, like a bolt or a piece of sheet metal. Something that you need to source as cheaply as possible and stick into the package somewhere on the assembly line. Nobody cares what it does or how buggy it is as long as it meets the checklist of requirements written into the contract with the supplier.
Look at things like cable set top boxes, and automotive entertainment systems. It's like they don't care what the software is as long as some bits that some supplier sent them are flashed onto the device.
They don't know how to hire a security advisor or external team?
What I'd be most concerned about is that the procurement process is favouring companies who clearly aren't up to designing in rudimentary security, in weapons systems, ... smh.
That seems like getting clothing made and not having anyone flag that it was glued together with PVA instead of being sewn; and the company you hired not having anyone who realises that's a fundamental problem.
Meanwhile, the software companies capable of fixing these issues face internal revolt at the idea of defense contracts. Apparently inaccurate targeting systems and vulnerable firmware in equipment that is going to be deployed (regardless of protest) are better for pacifism?
When I was at Lockheed - we were building the RFID tracking systems they used to track various everythings all over - and they were trying to make it a part of the Port Security for every port... and even had Tom Ridge join the board...
well, I recall asking about the security of the systems (I was the IT lead and was to help design the global port tracking system which they hoped to track all shipping containers) -- there was no encryption/authentication on any of the tags.
If you had a reader, you could read/write the tags.
They had not even thought about securing these systems - and they were trying to tout them as a security system for weapons shipments. They even had tags with g-sensors that were meant to tell you if a munition had been dropped, or if it had armed (some weapons will only arm themselves once a certain g-force is reached, which indicates to the weapon that it has been launched).
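Even without heavyweight crypto on the tag, the read/write-for-anyone problem has a standard fix: a per-tag key and challenge-response, so a reader without the key can neither impersonate nor rewrite a tag. A minimal sketch of the scheme (the key and challenge values are invented for illustration; real RFID deployments negotiate this at the protocol level):

```python
import hashlib
import hmac

def tag_response(tag_key, challenge):
    """Tag side: prove possession of the key without ever transmitting it."""
    return hmac.new(tag_key, challenge, hashlib.sha256).digest()

def reader_accepts(tag_key, challenge, response):
    """Reader side: honor read/write requests only from tags that answer
    the challenge correctly (constant-time compare to avoid timing leaks)."""
    expected = hmac.new(tag_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

The same handshake run in the other direction (reader proves itself to the tag) is what would have stopped an arbitrary reader from writing the tags at all.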
The inclusion of this graphic makes me realize the report is not intended to explain the situation to engineers. It's to explain the problem to well-decorated higher ups that probably don't understand modern technology all that well, yet are calling all the budget shots.
US military strategy and tactics are much more reliant on high-tech advantages than other countries though. If everyone’s tech all goes down, we’re going to be hit a lot harder.
https://www.gao.gov/assets/700/694913.pdf
tl;dr blow shit up and see if it still works
"Do you want to play a game?"
This is some scary bad WarGames like security, password 'joshua' level.
> Program offices were aware of some of the weapon system vulnerabilities that test teams exploited because they had been identified in previous cybersecurity assessments. For example, one test report indicated that only 1 of 20 cyber vulnerabilities identified in a previous assessment had been corrected. The test team exploited the same vulnerabilities to gain control of the system. When asked why vulnerabilities had not been addressed, program officials said they had identified a solution, but for some reason it had not been implemented. They attributed it to contractor error.
Ah, the old blame the 'contractor error' and 'not invented here' syndrome. Looks like engineers aren't in the power structure to change these things, scary if the military is driven like an MBA only led business with no influence from engineering/security.
Having known many people that worked in/around the military and defense industry, this seems like our reality.
[0] https://news.ycombinator.com/item?id=18157885
[1] https://en.wikipedia.org/wiki/Scientific_management
As an MBA holder and avid HN user, I take issue with that statement...
https://xkcd.com/1573/
To be fair, it could also be a death ray laser. The whole thing looks a lot more like Star Wars than a real plane. It has asymmetrical wings.
[0]: https://www.marines.com/ [1]: https://mol.tfs.usmc.mil/mol
1. Well, this is rapid deployment, we can't have everything.
2. The enemy here is fairly low-tech. Shouldn't be a problem.
Needless to say, I'm not surprised by this report.