Never use `pull_request_target`.
This is not the first time it’s bitten people. It’s not safe, and honestly GitHub should have better controls around it or remove and rework it — it is a giant footgun.
> One of our engineers figured out this was because it triggered `on: pull_request`, which means external contributions (which come from forks, rather than branches in the repo like internal contributions) would not have the workflow automatically run. The fix for this was changing the trigger to be `on: pull_request_target`, which runs the workflow as it's defined in the PR target repo/branch, and is therefore considered safe to auto-run.
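For reference, the difference is a one-line change to the workflow's trigger. A minimal sketch (the file name and comments are illustrative, not PostHog's actual workflow):

```yaml
# .github/workflows/assign-reviewers.yml (illustrative name)

# `pull_request`: runs the workflow as defined in the PR head.
# For fork PRs it gets a read-only token and no secrets (and may
# require manual approval for first-time contributors).
on: pull_request

# `pull_request_target`: runs the workflow as defined in the base
# repo/branch, with the base repo's secrets and a writable token.
# That is why it auto-runs for forks, and why it becomes dangerous
# the moment you check out the PR's code.
# on: pull_request_target
```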
There are so many things about GitHub Actions that make no sense.
Why are actions configured per branch? Let me configure Actions somewhere on the repository that is not modifiable by some yml files that can exist in literally any branch. Let me have actual security policy for configuring Actions that is separate from permission to modify a given branch.
Why do workflows have such strong permissions? Surely each step should have defined inputs (possibly from previous steps), defined outputs, and narrowly defined permissions.
Why can one step corrupt the entire VM for subsequent steps?
Why is security almost impossible to achieve instead of being the default?
Why does the whole architecture feel like someone took something really simple (read a PR or whatever, possibly run some code in a sandbox, and produce an output) of the sort that could easily be done securely in JavaScript or WASM or Lua or even, sigh, Docker and decided to engineer it in the shape of an enormous cannon aimed directly at the user’s feet?
While I agree with the general sentiment that lots of things about GH Actions don't make sense, if you actually look at what the vulnerability was, you'll find that for many of your questions it wasn't GitHub Actions' fault.
This is the vulnerable workflow in question: https://github.com/PostHog/posthog/blob/c60544bc1c07deecf336...
> Why are actions configured per branch?
This workflow uses `pull_request_target`, where the workflow definition comes from the branch you're merging the PR into. That should be safe: the attacker can't modify the YAML the actions are running.
> Why do workflows have such strong permissions?
What permissions the workflow runs with is irrelevant here, because the workflow runs the JS script with a custom access token instead of the permissions the GH Actions runner gets by default.
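For what it's worth, the automatic `GITHUB_TOKEN` can be narrowed per workflow or per job; a sketch of the kind of restriction that's possible:

```yaml
# Restrict the automatic GITHUB_TOKEN for every job in this workflow.
permissions:
  contents: read        # read-only checkout
  pull-requests: write  # e.g. allow commenting on the PR

# None of this helps, though, if a step uses a separately issued
# access token stored in secrets, as the workflow here did.
```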
> Why is security almost impossible to achieve instead of being the default?
The default for `pull_request_target` is to check out the branch you're trying to merge into (which, again, should be safe, as it doesn't contain the attacker's files), but this workflow explicitly checks out the attacker's branch on line 22.
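That explicit checkout is the commonly seen "pwn request" pattern. A sketch of the difference (job and step names are illustrative):

```yaml
on: pull_request_target   # workflow definition comes from the base branch

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      # Safe default: checks out the base branch (no attacker files).
      # - uses: actions/checkout@v4

      # The dangerous variant: explicitly checking out the PR head
      # brings the attacker's files into a run that holds secrets.
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
```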
A way to define a workflow per branch, inside the branch, is useful for developing workflows. But it's perilous in other circumstances.
I wish I could, at the repo level, disable the use of actions from ./.github, and instead name another repo as the source of actions.
This could be achieved by defining a pre-merge-commit hook and rejecting commits that alter protected parts of the tree. It would also require extra checks on the action runner's side.
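Git's server-side `pre-receive` hook (rather than the client-side `pre-merge-commit` one) is where a rejection like this could actually be enforced. A rough sketch, with an illustrative path list and no handling for new-branch pushes (where `oldrev` is all zeros):

```shell
#!/bin/sh
# Rough sketch of a server-side pre-receive hook that rejects pushes
# touching protected paths. The protected pattern is illustrative.
protected='^\.github/'
while read oldrev newrev refname; do
  # Skip ref deletions (newrev is all zeros).
  case "$newrev" in *[!0]*) ;; *) continue ;; esac
  if git diff --name-only "$oldrev" "$newrev" | grep -qE "$protected"; then
    echo "rejected: push to $refname modifies protected path .github/" >&2
    exit 1
  fi
done
```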
"We also suggest you make use of the minimumReleaseAge setting present both in yarn and pnpm. By setting this to a high enough value (like 3 days), you can make sure you won't be hit by these vulnerabilities before researchers, package managers, and library maintainers have the chance to wipe the malicious packages."
Does anyone have experience putting their production branches in a separate repo from their development branches?
GitHub makes it very easy to make a pull request from one repo into another.
This would seem to have a lot of benefits: you can have different branch protection rules in the different repos, different secrets.
Would it be a pain in the ass?
For an open source project you could have an open contribution model, but then only allow core maintainers to have write access in the production repo to trigger a release. Or maybe even make it completely private.
At a previous employer we did this with our docs repo.
The public docs site was managed and deployed via a private GitHub repository, and we had a public GitHub repo that mirrored it.
The link between them was an action on the private repo that pushed each new main commit to the mirror. Customer PRs on the public mirror would be merged into the private repo, auto-synced to the mirror, and GH would mark the public PR as merged when it noticed the PR commits were all on main.
It was a bit of a headache, but worked well enough once the staff involved in docs built up some workflow conventions. The driver for the setup was that the docs writers wanted the option to develop pre-release docs discreetly, but customer contributions were also valued.
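A minimal version of that sync action might look like this (the repo name and secret name are placeholders, not the actual setup):

```yaml
# Private repo: mirror every push on main to the public mirror repo.
on:
  push:
    branches: [main]

jobs:
  mirror:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history, so the mirror gets every commit
      - run: |
          git remote add mirror "https://x-access-token:${{ secrets.MIRROR_TOKEN }}@github.com/example-org/docs-mirror.git"
          git push mirror main
```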
Long story short: they messed up the assign-reviewers.yml workflow, allowing external contributors to merge PRs without proper reviews. From this point on, you're fully open to all kinds of bad stuff.
The attacker did not need to merge any PRs to exfiltrate the credentials.
The workflow was configured in a way that allowed untrusted code from a branch controlled by the attacker to be executed in the context of a GitHub Actions workflow that had access to secrets.
Why does it need to be a distinct product and not Cursor/ChatGPT/Claude code/any of the other existing tools?
(If you're so anti-AI that you're still writing boilerplate like that by hand, I mean, not gonna tell you what you do, but the rest of us stopped doing that crap as soon as it was evident we didn't have to any more.)
This is a great writeup; kudos to the PostHog folks.
Curious: would you be able to make your original exploitable workflow available for analysis? You note that a static analysis tool flagged it as potentially exploitable, but that the finding was suppressed under the belief that it was a false positive. I'm curious if there are additional indicators the tool could have detected that would have reduced the likelihood of premature suppression here.
(I tried to search for it, but couldn't immediately find it. I might be looking in the wrong repository, though.)