This is a prime example of "If you make an unusable secure system, the users will turn it into an insecure usable one."
If someone is actively subverting a control like this, it probably means that the control has morphed from a guardrail into a log across the tracks.
Somewhat in the same vein as AppLocker & co. Almost everyone says you should be using it, but almost no one does, because it takes a massive amount of effort just to understand what "acceptable software" is across your entire org.
Nobody outside of the IT security bubble thinks that using AppLocker is a sensible idea.
Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.
> Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.
I'm usually on the side of empowering workers, but I believe sometimes the companies do have business saying this.
One reason is that much of the software industry has become a batpoop-insane slimefest of privacy (IP) invasion, as well as grossly negligent security.
Another reason is that the company may be held liable for license terms of the software.
Another reason is that the company may be held liable for illegal behavior of the software (e.g., if the software violates some IP of another party).
Every piece of software might expose the company to these risks. And maybe disproportionately so, if software is being introduced by the "I'm gettin' it done!" employee, rather than by someone who sees vetting for the risks as part of their job.
That level of micromanagement can be quite sensible depending on the employee role. It's not needed for developers doing generic software work without any sensitive data. But if the employee is, let's say, a nurse doing medical chart review at an insurance company then there is absolutely no need for them to use anything other than specific approved programs. Allowing use of random software greatly increases the potential attack surface area, and in the worst case could result in something like a malware penetration and/or HIPAA privacy violation.
Security practitioners are big fans of application whitelisting for a reason: Your malware problems pretty much go away if malware cannot execute in the first place.
The Australian Signals Directorate, for example, has recommended (and more recently, mandated) application whitelisting on government systems for the past 15 years or so, because it would’ve prevented the majority of intrusions they’ve investigated.
https://nsarchive.gwu.edu/sites/default/files/documents/5014...
AppLocker is effectively an almost perfect solution to ransomware (on the employee desktops, anyway). You can plug lots of random holes all day long, or just whitelist what can be run in the first place. Ask M&S management today whether they'd prefer to keep working with paper systems for another month, or to deal with AppLocker.
> Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.
This is a lovely take if your business is exclusively running on FOSS, on-premise software, but it's a recipe for some hefty bills from software vendors due to people violating licensing conditions.
> Companies have no business telling their employees which specific programs they can [run]
Agreed.
> and cannot run
I strongly disagree. I think those controls are great for denylists. For example, almost no one needs to run a BitTorrent client on their work laptops. (I said almost. If you’re one of them, make a case to your IT department.) Why allow it? Its presence vastly increases the odds of someone downloading porn (risk: sexual harassment) or warez (risks: malware, legal issues) with almost no upside to the company. I’m ok with a company denylisting those.
I couldn’t care less if you want to listen to Apple Music or Spotify while you work. Go for it. Even though it’s not strictly work-related, it makes happier employees with no significant downside. Want to use Zed instead of VSCode? Knock yourself out. I have no interest in maintaining an allowlist of vetted software. That’s awful for everyone involved. I absolutely don’t want anyone running even a dev version of anything Oracle in our non-Oracle shop, though, and tools to prevent that are welcome.
>Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.
Yet so many receptionists think that the application attached to the email sent by couriercompany@hotmail.com is a reasonable piece of software to run. Curious.
That's not a fix though, is it? Git tools are already on the runner. You could check out code from public repos using the CLI, and you could hardcode a token into the workflow if you wanted to access a private repo (assuming the malicious internal user doesn't have admin privileges to add a secret).
Had these exact same thoughts while I was configuring a series of workflows and scripts to get around the multiple unjustified and longstanding restrictions on what things are allowed to happen when.
That sinking feeling when you search for how to do something and all of the top results are issues that were opened over a decade ago...
It is especially painful trying to use GitHub to do anything useful at all after being spoiled by working exclusively on a locally hosted GitLab instance. I gave up on trying to get things to cache correctly after a few attempts at following their documentation; it's not like I'm paying for it.
Was also very surprised to see that the recommended/suggested default configuration that runs CodeQL had burned over 2600 minutes of actions in just a day of light use, nearly doubling the total I had from weeks of sustained heavy utilization. Who's paying for that??
I'm baffled you can't clone internal/private repos with anything other than a developer PAT. They have a UI to share access for workflows, let cloning use that...
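For what it's worth, the usual workaround today is a fine-grained PAT (or a GitHub App token) fed to actions/checkout; the repo, path, and secret names below are placeholders:

```yaml
steps:
  # Check out a second private repo from the same org.
  - uses: actions/checkout@v4
    with:
      repository: my-org/internal-tools          # hypothetical private repo
      token: ${{ secrets.INTERNAL_READ_PAT }}    # fine-grained PAT with read access
      path: internal-tools
```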
It used 1.8 days of time to run for a single day? I'm less curious about who's paying for it than who's _using_ it on your repo, because I can't even imagine having an average of almost two people scanning a codebase every single minute of the day.
Not the OP, but a poorly behaving repo can turn and burn for six hours on every PR, rather than the handful of minutes one would expect. It happens - but usually that sort of thing should be spotted and fixed. More often than not, something is trying to pull artifacts and timing out, rather than it being a giant monorepo.
Anyone who can write code to the repo can already do anything in GitHub Actions. This security measure was never designed to protect against a developer doing something malicious. Whether they clone another action into the repo or write custom scripts themselves, I don’t see how GitHub’s measures could protect against that.
The risk is the same reason we don't allow any of our servers to make outgoing network connections except to a limited host lists. eg backend servers can talk to the gateway, queue / databases, and an approved list of domains for apis and nothing else.
The same guard helps prevent accidents and security breaches, not maliciousness. If code somehow gets onto our systems but we prevent most outbound connections, exfiltrating is much harder.
Yes, people do code review, but stuff slips through. See e.g. Google switching one of their core libs from plain mkdir to running mkdir -p via a shell (tada! every invocation had better understand shell escaping rules). That made it through code review. People are imperfect; telling your network "no outbound connections (except for this small list)" is much closer to perfect.
A mitigation for this exact policy mechanism is included in the post.
(The point is not directly malicious introductions: it's supply chain risk in the form of engineers introducing actions/reusable workflows that are themselves malleable/mutable/subject to risk. A policy that claims to do that should in fact do it, or explicitly document its limitations.)
Companies that care about this kind of thing usually have the CI config on another repo from the actual code so you can't just rewrite it to deploy your dev branch straight to prod.
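On GitHub specifically, one way to approximate this is a reusable workflow kept in a separate, branch-protected repo (names below are illustrative); it doesn't close every hole, but the deploy logic itself can't be rewritten from the application repo:

```yaml
# .github/workflows/deploy.yml in the application repo
on:
  push:
    branches: [main]

jobs:
  deploy:
    # The job body lives in a locked-down repo; app developers can call it
    # but cannot edit what it does.
    uses: my-org/ci-config/.github/workflows/deploy.yml@v1
    secrets: inherit
```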
The risk is simple enough. GitHub Enterprise allows admins to configure a list of actions to allow or deny. Ideally these actions are published in the GitHub Marketplace.
The idea is that the organization does not trust these third-parties, therefore they disable their access.
However, this solution bypasses those lists by cloning open-source actions directly onto the runner. At that point it’s just running code, no different from the maintainers writing a complex action themselves.
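Concretely, the bypass can be sketched as a couple of workflow steps (the action name and path here are illustrative): the workflow clones the blocked action at runtime and invokes it as a local action, a reference the allow/deny list never evaluates.

```yaml
steps:
  - uses: actions/checkout@v4   # first-party actions are typically still allowed

  # Fetch the disallowed action onto the runner at job time.
  - name: Vendor the blocked action
    run: git clone --depth 1 https://github.com/some-org/some-action .github/vendored/some-action

  # Local path references ("./...") are not checked against the policy list.
  - uses: ./.github/vendored/some-action
```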
This is why I avoid using non-official actions where possible and always set a version for the action.
We had a contractor that used some random action to ssh files to the server, and referenced master as the version to boot. First, using ssh directly to upload files and run commands isn't that difficult; second, the action owner could easily add code that saves private keys and other information to another server.
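As a sketch of the "just use ssh" alternative (host, paths, and secret name are all hypothetical):

```yaml
steps:
  - uses: actions/checkout@v4

  - name: Deploy over plain ssh, no third-party action involved
    env:
      SSH_KEY: ${{ secrets.DEPLOY_SSH_KEY }}   # hypothetical secret
    run: |
      install -m 600 /dev/null deploy_key
      printf '%s\n' "$SSH_KEY" > deploy_key
      scp -i deploy_key -o StrictHostKeyChecking=accept-new \
        build.tar.gz deploy@example.com:/srv/app/
      ssh -i deploy_key deploy@example.com 'cd /srv/app && tar xzf build.tar.gz'
```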
I am a bit confused on the "bypass" though. Wouldn't the adversary need push access to the repository to edit the workflow file? So, the portion that needs hardening is ensuring the wrong people do not have access to push files to the repository?
On public repositories I could see this being an issue if they do it in a section of the workflow that is run when a PR is created. Private repositories, you should take care with who you give access.
> This is why I avoid using non-official actions where possible and always set a version for the action.
Those are good practices. I would add that pinning the version (tag) is not enough, as we learnt with the tj-actions/changed-files incident; we should pin the commit SHA [0]. GitHub states this in its official documentation [1] as well:
> Pin actions to a full length commit SHA
> Pin actions to a tag only if you trust the creator
[0] https://www.stepsecurity.io/blog/harden-runner-detection-tj-...
[1] https://docs.github.com/en/actions/security-for-github-actio...
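In workflow syntax, the difference looks like this (the SHA below is made up for illustration; use the real commit of the release you audited):

```yaml
steps:
  # Tag-pinned: the v35 tag can later be moved to point at different code.
  - uses: tj-actions/changed-files@v35

  # SHA-pinned: immutable reference; the trailing comment records the tag
  # for human readers and for tools like Dependabot.
  - uses: tj-actions/changed-files@9934ab3fdf63239da75d9e0fbd339c48620c72c4 # v35
```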
> I am a bit confused on the "bypass" though. Wouldn't the adversary need push access to the repository to edit the workflow file? So, the portion that needs hardening is ensuring the wrong people do not have access to push files to the repository?
I understand it that way, too. But: Having company-wide policies in place (regarding actions) might be misunderstood/used as a security measure for the company against malicious/sloppy developers.
So documenting or highlighting the behaviour helps the devops guys avoid a wrong sense of security. Not much more.
We forked the actions as a submodule, and then pointed the uses to that directory.
That way we were still tracking the individual commits which we approved as a team.
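Assuming the submodule lives at a path like `third-party/some-action` (illustrative), the workflow side of this pattern is roughly:

```yaml
steps:
  - uses: actions/checkout@v4
    with:
      submodules: true   # checks out the vendored action at the recorded commit

  # A local path reference runs whatever commit the submodule pins,
  # i.e. exactly the code the team reviewed and approved.
  - uses: ./third-party/some-action
```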
Now there is an interesting dichotomy. On one hand, PMs want us to leverage GitHub Actions to build out stuff more quickly using pre-built blocks; on the other hand, security has no capacity or interest to whitelist actions (not to mention that the whitelist is limited to 100 actions, as per the article).
securityscorecard is easy to integrate (it's a CLI tool, or you run it as a GitHub action); one of the checks it performs is "Pinned-Dependencies": https://github.com/ossf/scorecard/blob/main/docs/checks.md#p.... Checks that fail generate a security alert under Security -> Code scanning.
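If memory serves, the action flavour of that integration looks roughly like the following (double-check the inputs against the scorecard README, as they may have changed):

```yaml
jobs:
  scorecard:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # required to upload results to code scanning
      id-token: write
    steps:
      - uses: actions/checkout@v4
      - uses: ossf/scorecard-action@v2
        with:
          results_file: results.sarif
          results_format: sarif
      # Failed checks (e.g. Pinned-Dependencies) surface under Security -> Code scanning.
      - uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif
```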
My main problem with the policy and how it's implemented at my job is that the ones setting the policies aren't the ones impacted by them, and never consult people who are. Our security team tells our GitHub admin team that we can't use 3rd party actions.
Our GitHub admin team says sure, sounds good. They don't care, because they don't use actions, and in fact they don't deliver anything at all. The security team also delivers nothing, so they don't care. Combined, these teams' crowning achievement is buying GitHub Enterprise and moving it back and forth between cloud and on-prem 3 times in the last 7 years.
As a developer, I'll read the action I want to use, and if it looks good I just clone the code and upload it into our own org/repo. I'm already executing a million npm modules in the same context that do god knows what. If anyone complains, it's getting hit by the same static/dynamic analysis tools as the rest of the code and dependencies.
It sounds like reading the code and forking it (therefore preventing malicious updates) totally satisfies the intent behind the policy, then.
My company has a similar whitelist of actions, with a list of third-party actions that were evaluated and rejected. A lot of the rejected stuff seems to be some sort of helper to make a release, which pretty much has a blanket suggestion to use the `gh` CLI already on the runners.
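The `gh`-based replacement for most of those release helpers is a couple of lines, since the CLI is preinstalled on GitHub-hosted runners; the only assumption here is that the workflow token has `contents: write`:

```yaml
permissions:
  contents: write

steps:
  - uses: actions/checkout@v4

  - name: Create a release with the preinstalled gh CLI
    env:
      GH_TOKEN: ${{ github.token }}   # gh reads its auth token from this variable
    run: gh release create "${{ github.ref_name }}" ./dist/* --generate-notes
```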
I don't see the vulnerability. In fact, I think considering this a problem at all is ridiculous.
Obviously it's impossible to block all ways of "bypassing" the policy. If you are a developer who has already been entrusted with the ability to make your GitHub Actions workflows run arbitrary code, then OF COURSE you can make it run the code of some published action, even if it's just by manual copy and paste. This fact doesn't need documenting because it's trivially obvious that it could not possibly be any other way.
Nor does it follow from this that the existence of the policy and the limited automatic enforcement mechanism is pointless and harmful. Instead of thinking of the enforcement mechanism as a security control designed to outright prevent a malicious dev from including code from a malicious action, instead think of it more like a linting rule: its purpose is to help the developer by bringing the organisation's policy on third party actions to the dev's attention and pointing out that what they are trying to do breaks it.
If they decide to find a workaround at that point (which of course they CAN do, because there's no feasible way to constrain them from doing so), that's an insubordination issue, just like breaking any other policy. Unless his employer has planted a chip in his brain, an employee can also "bypass" the sexual harassment policy "in the dumbest way possible" - just walk up to Wendy from HR and squeeze her tits! There is literally no technical measure in place to make it physically impossible for him to do so. Is the sexual harassment policy therefore also worse than nothing, and is it a problem that the lack of employee brain chips isn't documented?
The problem of auditing third-party code is real, especially because of the way GitHub allows embedding it in users' code: it's not centralized and doesn't require signatures / authentication.
But, I think, the real security-minded approach here should be at the container infrastructure level. I.e. security policies should apply to things like container network in the way similar to security groups in popular cloud providers, or executing particular system calls, or accessing filesystem paths.
Restrictions on the level of what actions can be mentioned in the "manifest" are just a bad approach that's not going to stop anyone.
The author relates to exactly that: "ineffective policy mechanisms are worse than missing policy mechanisms, because they provide all of the feeling of security through compliance while actually incentivizing malicious forms of compliance."
And I totally agree. It is so abundant. "Yes, we are in compliance with all the strong password requirements, strictly speaking there is one strong password for every single admin user for all services we use, but that's not in the checklist, right?"
It's less of an "use this to do nasty shit to a bunch of unsuspecting victims" one, and more of a "people can get around your policies when you actually need policies that limit your users".
1. BigEnterpriseOrg central IT dept click the tick boxes to disable outside actions because <INSERT SECURITY FRAMEWORK> compliance requires not using external actions [0]
2. BigBrainedDeveloper wants to use ExternalAction, so uses the method documented in the post because they have a big brain
3. BigEnterpriseOrg is no longer compliant with <INSERT SECURITY FRAMEWORK> and, more importantly, the central IT dept have zero idea this is happening without continuously inspecting all the CI workflows for every team they support and signing off on all code changes [1]
That's why someone else's point of "you're supposed to fork the action into your organisation" is a solution if disabling local `uses:` is added as an option in the tick boxes -- the central IT dept have visibility over what's being used and by whom if BigBrainedDeveloper can ask for ExternalAction to be forked into BigEnterpriseOrg GH organisation. Central IT dept's involvement is now just review the codebase, fork it, maintain updates.
NOTE: This is not a panacea against all things that go against <INSERT SECURITY FRAMEWORK> compliance (downloading external binaries etc). But it would be an easy gap getting closed.
----
[0]: or something, i dunno, plenty of reasons enterprise IT depts do stuff that frustrates internal developers
[1]: A sure-fire way to piss off every single one of your internal developers.
The dumb thing is GitHub offers “action policies” pretending they actually do something.
That said, even pinning a GitHub action to a commit SHA isn't perfect for container actions, as the action can refer to a Docker image tag, and the contents of that tag can be changed: https://docs.github.com/en/actions/sharing-automations/creat...
E.g. I publish a container action whose action.yml pulls the Docker image optionoft/actions-tool:v3.0.0 by tag. You use the action and pin it to the SHA of that commit. I get hacked, and a hacker publishes a new image under the optionoft/actions-tool:v3.0.0 tag.
You wouldn't even get a Dependabot update PR.
Optionally, you can tell your action to reference the docker image by sha256 hash also, in which case it's effectively immutable.
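In the action's own metadata, the difference is one line (the digest below is shortened and illustrative):

```yaml
# action.yml of a container action
runs:
  using: docker
  # Tag reference: mutable; can be repointed even after consumers pin the
  # action's commit SHA.
  #   image: docker://optionoft/actions-tool:v3.0.0
  # Digest reference: effectively immutable.
  image: docker://optionoft/actions-tool@sha256:4c9f...   # illustrative digest
```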
Seems like policies on what can be executed are impossible to enforce in general, so the only recourse is to limit secret access.
Is there a demonstration of this being able to access/steal secrets of some sort?