The TSA's response here is childish and embarrassing, although perhaps unsurprising given the TSA's institutional disinterest in actual security. It's interesting to see that DHS seemingly (initially) handled the report promptly and professionally, but then failed to maintain top-level authority over the fix and disclosure process.
It’s very hard for management, even IT managers, to fully understand what such things mean.
I've seen huge issues, like exposed keys, treated as minor, while an outdated JS library or missing IPv6 support gets escalated.
I'm sure the TSA and their partners want to downplay the potential exposure. I'm also sure it's hard for a lot of their managers to fully understand what the vulnerability entails (most likely their developers are downplaying their responsibility and pointing fingers at others).
This is the Transportation SECURITY Agency. If the managers involved here can't understand why this is a huge deal, they're exceptionally unqualified for their jobs.
Edit: Fixed a double negative (previously: This is the Transportation SECURITY Agency. If the managers involved here can't understand why this is a huge deal, they're not exceptionally unqualified for their jobs.)
>> The TSA's response here is childish and embarrassing,
> It’s very hard for management, even IT managers,
I'm confident that the grandparent's comment is correct.
TSA is closer to the issue than DHS; I'd wager big that they sense embarrassment.¹ ²
TSA management would have immediate access to people capable of framing the issue correctly, including their own parent agency. Their reaction was never going to be held back by technical facts.
¹ US Sec/LEO/IC agencies have a long and unbroken history of attacking messengers that bring embarrassment. There is ~no crime they are more dedicated to punishing.
² The world's easiest presupposition: discussions took/are taking place on how they might leverage the CFAA to deploy revenge against the author.
Part of being a good manager is knowing how to get good folks to give you advice on things you don't understand, and knowing how to follow that advice. Yeah, it's hard, but that's a huge part of the whole dang job!
No manager (or human) is perfect, and mistakes happen; we need to be humble enough to listen and learn from them.
TSA is security theater, it is there to give the illusion of security. In reality it seems more like the goal is the entrenchment of surveillance and the appearance of strength.
> It's interesting to see that DHS seemingly (initially) handled the report promptly...
I think a DHS mid-level manager yelled at a TSA mid-level manager, who reported this to senior TSA officials, and then their usual policy kicked in... deny/deflect/ignore.
What was surprising to me was that they didn't immediately do pre-dawn raids on the pentesters' homes and hold them without a lawyer under some provision of an anti-terror law.
That's not really how this works. TSA is maliciously incompetent, but there is a reporting pipeline and procedure for these things that's formalized and designed to protect exactly this kind of good-faith reporting[1].
(It's very easy to believe the worst possible thing about every corner of our government, since every corner of our government has something bad about it. But it's a fundamental error to think that every bad thing is always present in every interaction.)
[1]: https://www.cisa.gov/report
I didn't see any comment about them being contracted to do this, at least.
That is apparently not a popular move anymore, since people keep logs and have credentials, a strong social media presence, and readily available cloud-enabled cameras. One email to any news org and whoever authorized the raid will probably face some music. But knowing the TSA, we can expect this any minute now...
Since they actually went past the SQL injection and created a fake record for an employee, I'm shocked that Homeland did not come after and arrest those involved. Homeland would have been top of the list to misinterpret a disclosure and call it malicious hacking instead of responsible disclosure. I'm more impressed by this than by the incompetence of the actual issue.
You're not wrong, but I would have a hard time as a jury member convicting them of a CFAA violation or whatever for creating a user named "Test TestOnly" with a bright pink image instead of a photo.
If they had added themselves as known crewmembers and used that to actually bypass airport screening, then yeah, they'd be in jail.
That's what jury instructions are for. The judge can instruct the jury to ignore pretty much any facts and consider any subset of what really happened that they want. So they'd just instruct "did they access the system? Were they authorized? If the answer to the first question is yes, and to the second is no, the verdict is guilty, ignore all the rest". The jury won't be from the HN crowd, it would be random people who don't know anything about CFAA or computer systems, it will be the easiest thing in the world to convict. Those guys got so lucky DHS exhibited unusually sensible behavior, they could have ruined their lives.
Which is why jury selection usually removes people who understand the situation.
> You're not wrong, but I would have a hard time as a jury member convicting them of a CFAA violation or whatever for creating a user named "Test TestOnly" with a bright pink image instead of a photo.
> If they had added themselves as known crewmembers and used that to actually bypass airport screening, then yeah, they'd be in jail.
I think it could go any which way. The prosecution could argue that the defendant may have tampered with existing records or deleted some. In this particular case, it’s probable that the system does not have any or adequate audit trails to prove what exactly transpired. Or the claim could be that the defendant exfiltrated sensitive data (or that the defendant is trying to hide it) to share with hostile entities.
Doing this under your own name is insane.
If anyone from there reads the parent, they should know they have created an atmosphere where the worry of possible prosecution over responsible disclosure has the potential to scare away the best minds in our country from picking at these systems.
That just means the best minds from other, potentially less friendly countries, will do the picking. I doubt they will responsibly disclose.
I personally don't comprehend how these people take such huge risks. One bureaucrat wakes up one morning in the wrong mood and your life is ruined, at least for the next decade, maybe forever. Why would anyone do it, just for the thrill of it? I don't think they even got paid for it.
I’m not sure any country’s bureaucracy really appreciates responsible disclosures that make the government’s systems look very poorly designed. There is always the risk of being classified as an enemy agent/criminal depending on who’s reading the report and their own biases.
https://bugcrowd.com/engagements/dhs-vdp
They've had that relationship for a few years now, so I'm guessing they're somewhat versed. TSA specifically might be less so, but I can't imagine the DHS referring anything to the DOJ for prosecution given that they both have a VDP for the entire department and advise other departments on how to run VDPs (via CISA).
But I might just be overly optimistic.
In some countries where this is the norm, like Germany, the usual route is to report the issue to journalists or to non-profits like the CCC and those then report the issue to the government agency/company. This way you won't get prosecuted for responsible disclosure. Alternatively an even safer route is to write a report and send it to them anonymously with a hard deadline on public/full disclosure, won't get any credit for the discovery this way of course.
The timeline mentions the disclosure was made through CISA, and on their website there is an official incident report form.
I can imagine an email to some generic email address could have gone down the way you describe, but I guess they look at these reports more professionally.
https://myservices.cisa.gov/irf
Good catch. Of course, different people wear different shades of hat, and I guess the author might have good rationale for going quite as far as they did, I don't know.
Kudos to the author for alerting DHS. Methodology questions aside, it sounds like the author did a service by flagging a technical vulnerability that a bad actor could plausibly have sought out and discovered.
But regardless, I hope new/aspiring security researchers don't read this writeup and assume they could do something analogous in an investigation without possibly getting into trouble they'd sorely regret. Some of the lines are fuzzy and complicated.
BTW, if it turns out that the author made a legality/responsibility mistake in any of the details of how they investigated, then maybe the best outcome would be to coordinate publishing a genuine mea culpa and post mortem on that. It could explain what the mistake was, why it was a mistake, and what in hindsight they would've done differently. Help others know where the righteous path is, amidst all the fuzziness, and don't make contacting the proper authorities look like a mistake.
You know it's bad when, as I write this, no one has even bothered to mention how bad storing MD5'd passwords is. It also shows they aren't so much as salting them, and salting alone wouldn't be enough to save MD5 anyway.
But that isn't even relevant when you can go traipsing through the SQL query itself just by asking; wouldn't matter how well the passwords were stored.
This used to be a question on the Triplebyte interview almost verbatim, and a huge percentage of (even quite good) engineers got it wrong. I'd say probably <20% both salted and used a cryptographically-secure hash; MD5 specifically came up all the time. And keep in mind that we filtered substantially before this interview, so the baseline is even worse than that!
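For anyone wondering what "salted and used a cryptographically-secure hash" looks like in practice, here's a minimal sketch using Python's standard-library PBKDF2. It's purely illustrative and assumes nothing about FlyCASS's actual stack; the function names and iteration count are my own:

    import hashlib, hmac, secrets

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # A per-user random salt defeats precomputed/rainbow-table attacks;
        # PBKDF2 with a high iteration count makes brute force slow,
        # unlike a single unsalted MD5 pass.
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        # Constant-time comparison avoids leaking how many bytes matched.
        return hmac.compare_digest(candidate, digest)

A dedicated password hash like bcrypt, scrypt, or Argon2 would be better still, but even this stdlib version is miles ahead of bare MD5.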
The screenshot in the article shows MD5() is returned as part of the error message from the web server, so it is probably also a part of the original server-side query.
> We did not want to contact FlyCASS first as it appeared to be operated only by one person and we did not want to alarm them
I’m not buying this. Feels more like they knew the site developer would just fix it immediately and they wanted to make a bigger splash with their findings.
This is exactly the kinda bug where you want to make a big splash though. You don't just want the guy to silently fix it, everyone in the database needs to be vetted again.
Whatever their motive was, the engineering process that allowed such a common bug to slip in is broken. If the sole developer had immediately fixed it, it would have been hard to escalate the issue so that someone up the chain could fix this systematically. I'm not sure such an overhaul would really happen, but it's even less likely if the issue isn't escalated.
Yes, and what about the possibility that an attacker already accessed this database and added themself as an employee?
Would you rather be prepared and do a full (well, for a government agency, full enough) check on all the people allowed to access flying death machines, or have a dev silently fix the issue, with possible problems later?
Ya, because the person who developed this is totally trustworthy to fully fix it and assess any other possible vulnerabilities. He definitely isn't gonna just add front-end validation to throw a message when you submit a single quote...
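For contrast, here's a rough sketch of the class of bug versus the actual fix. FlyCASS's real stack is unknown to me (the screenshot just suggests MD5 inside a server-side query), so this uses Python and an in-memory SQLite database purely for illustration; the table, names, and injection string are all hypothetical:

    import hashlib, sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, pw_hash TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', ?)",
                 (hashlib.md5(b"secret").hexdigest(),))

    def login_vulnerable(username: str, password: str):
        # String concatenation: a single quote rewrites the query, and a
        # malformed one raises an error much like the SQL error page in the
        # article's screenshot.
        q = ("SELECT * FROM users WHERE username = '" + username +
             "' AND pw_hash = '" +
             hashlib.md5(password.encode()).hexdigest() + "'")
        return conn.execute(q).fetchone()

    def login_parameterized(username: str, password: str):
        # Bound parameters keep user input as data, never as SQL syntax,
        # regardless of what the front end filters.
        pw_hash = hashlib.md5(password.encode()).hexdigest()  # hashing still weak; kept only for contrast
        return conn.execute(
            "SELECT * FROM users WHERE username = ? AND pw_hash = ?",
            (username, pw_hash)).fetchone()

    # Classic bypass: returns alice's row without knowing any password.
    print(login_vulnerable("' OR '1'='1' -- ", "anything"))
    # The same input is harmless against the parameterized version.
    print(login_parameterized("' OR '1'='1' -- ", "anything"))

The point being: filtering quotes in the browser does nothing; the query has to be parameterized (or otherwise escaped) on the server.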
Not surprised that they deny the severity of the issue, but I am quite surprised they didn't inform the FBI and/or try to have you arrested. Baby steps?
The author made the right move by doing this through FAA and CISA (via DHS), rather than directly via TSA. It's not inconceivable that a direct report to TSA would have resulted in legal threats and bluster.
https://www.schneier.com/crypto-gram/archives/2003/0815.html...
https://www.schneier.com/essays/archives/2006/11/the_boardin...
> We did not want to contact FlyCASS first as it appeared to be operated only by one person...
It seems pretty remarkable that airlines are buying such a security sensitive piece of software from a one person shop. If you make it very far into selling any piece of SaaS software to most companies in corporate America, at the absolute minimum they're going to ask you for your SOC2 audit report.
SOC2 is pretty damn easy to get through with minimal findings as far as audits go, but there are definitely several criteria that should generate some red flags in your report if the company is operated by a single person. And I would have assumed that if you're writing software that integrates with TSA access systems, the requirements would be a whole lot more rigorous than SOC2.
The "airlines" that are using something like FlyCASS are themselves smaller operations and typically running on razor thin margins (if not just unprofitable and wishfully thinking that money will suddenly appear and make their business viable). Literally everything on their backend is held together with more duct tape than the average small business.
You could be an "airline" by purchasing a couple of older airliners and converting them to cargo use. Is it valuable for new airlines to get started? Should we force them out of business because they don't already have the systems in place that take years to decades to build out? Should they pay $$$ for boutique systems designed for a large passenger airline when they have 2 aircraft flying 1 route between nowhere and nowhere?
Requirements and audits really aren't the answer here. The fundamental design problem is that the TSA has paired authentication ("airline XXX says you're an employee") with a very broad blanket authorization ("you're allowed to bypass all security checks at any airport nationwide") without even the basic step of "does your airline even operate here?"
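To make that concrete, here's a tiny hypothetical sketch (not the actual KCM/CASS design; the names and data are invented) of what scoping that blanket authorization could look like:

    from dataclasses import dataclass

    @dataclass
    class CrewAssertion:
        airline: str      # who vouches for this person (authentication)
        employee_id: str

    # Illustrative data: which airlines actually operate at which airports.
    AIRLINE_AIRPORTS = {"XX": {"LAX", "SFO"}}

    def may_use_crew_lane(assertion: CrewAssertion, airport: str) -> bool:
        # Scope the privilege: an assertion from airline XX shouldn't open
        # doors at airports XX doesn't even serve.
        return airport in AIRLINE_AIRPORTS.get(assertion.airline, set())

    print(may_use_crew_lane(CrewAssertion("XX", "123"), "JFK"))  # False

Trusting the assertion is one problem; handing it nationwide privileges with no further check is a separate, arguably bigger one.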
I'm curious why a small cargo airline would even need to use the KCM system. If they don't fly passengers, then wouldn't their crew access the aircraft from the cargo ramp (with a SIDA badge) and never need to enter the passenger terminal/sterile area?
I mean, yes, in this particular situation it seems like there are many layers of screw-ups from several different organizations.
Though given that airlines are responsible for the safety of their crew, passengers, and anyone in the vicinity of their aircraft, requiring them to do some basic vetting of their chosen vendors related to safety and security doesn’t seem unreasonable.