Just this week I gave this another chance, trying to debug some weird CI failures in our Ruby tests.
I'm on M-series Macs, so there's an inherent platform mismatch, and on top of that we use Depot images as action runners. I was never able to get it running past the dry-run stage.
There are just too many differences between the CI runners and my local Docker images.
I even gave the catthehacker image a try, which is something like >15GB. It still wouldn't run.
Finally gave up.
Sounds similar to my own experiences trying to debug GH actions locally.
I've tried twice now to get it working, pulling down many GBs of images and installing stuff and then getting stuck in some obscure configuration or environment issue. I was even running Linux locally which I figured would be the happiest path.
I'm not eager to try again, and unless there's a CI that's very slow or GH actions that need to be updated often for some reason, I feel like it's better to just brute-force it - waiting for CI to run remotely.
There's another alternative: debug in CI itself. There are a few ways you can pause your CI at a specific step and get a shell to do some debugging, usually via SSH. I've found that to be the most useful.
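For GitHub Actions specifically, a minimal sketch of that pause-and-SSH approach, assuming the popular third-party `mxschmitt/action-tmate` action (workflow name, test command, and versions here are illustrative):

```yaml
# Hypothetical debug workflow: when a step fails, keep the runner
# alive with a tmate session you can SSH into and poke around.
name: debug
on: workflow_dispatch

jobs:
  debug:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the flaky tests
        run: bundle exec rspec
      - name: Open an SSH (tmate) session if something failed
        if: ${{ failure() }}
        uses: mxschmitt/action-tmate@v3
```

The tmate step prints SSH connection details into the job log, so you land in the exact runner environment where the failure happened.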
GH Actions have always worked flawlessly for me, as long as I don't ever do anything unusual.
Last week I moved from a custom server back to GH Pages + GH Actions, and broke a feature that depended on an out dir existing after one build phase and before another. I have no idea how to fix it. It's probably simple, I'm just burnt out.
ChristopherHX publishes Docker images created by packing up the root filesystem from inside a runner into a tarball. The images are huge, though.
It's even worse than that: it's 20GB compressed and 60GB uncompressed. Regardless, if your VM runs out of disk space there's no meaningful error (at least as of 12 months ago), just a failure to start. (Possibly colima-specific, I dunno, I uninstalled docker-docker.)
I do find that some things just work locally and fail in real actions and vice versa. For the most part it’s made it easier to move the bar forward though.
This is really only for debugging the logic of your Actions, isn't it?
On my team, Nix seems to work pretty well for encapsulating most of the interesting things CI jobs do. We don't use virtualization to run local versions of our jobs, and we don't run into architecture mismatch issues.
My experience with Ruby on Apple Silicon has been far from seamless. The only way I could thoroughly get rid of problems was by running the most recent Ruby release and dealing with what THAT causes.
I’m pretty sure GH actions don’t run the latest Ruby version.
My experience with both Ruby and Python on Apple Silicon has been hit or miss. Mise has made things a lot simpler and more seamless, but if you need an older version, or the absolute latest right when it's released, you might still have issues. These days I generally don't expect to hit an error.
I had luck with the actions I tried to debug. But I also found that the runtime difference is simply too big to trust it 100%. Too bad, because I think the tool is quite good.
Every mention of Github Actions is an occasion to start looking for the best alternative in <current_month>, let's go!
Is Dagger usable yet? Is there still hope for Earthly? Are general purpose workflow systems winning any time soon, or are we still afraid of writing code? Any new systems out there that cover all the totally basic features like running locally, unlike Github Actions?
Earthly has pivoted away from their CI/CD products: they’re shutting down Satellite, and they’re stopping active maintenance of the Earthly open source project.
I think we could build something on top of nix that is as easy to use and powerful as earthly, but gets all the nice stuff from nix: reproducibility, caching, just use any command from any nix package, etc
That's mostly what we've built with Flox [1] (though I'm not exactly sure what you mean by run any command from any Nix package). It looks and feels like an amped up package manager, but uses Nix as kind of an infrastructure layer under the hood. Here's a typical workflow for an individual developer:
- cd into repo
- `flox activate` -> puts you into a subshell with your desired tools, environment variables, services, and runs any setup scripts you've defined
- You do your work
- `exit` -> you're back in your previous shell
Setting up and managing an environment is also super easy:
- cd into project directory
- `flox init` -> creates an "environment"
- `flox install python312` -> installing new packages is very simple
- `flox edit` -> add any environment variables, setup scripts, services in an editor
- `flox activate` -> get to work
The reason we call these "environments" instead of "developer environments" is that what we provide is a generalization of developer environments, so they're useful in more than just local development contexts. For example, you can use Flox to replace Homebrew by creating a "default" environment in your home directory [2]. You can also bundle an environment up into a container [3] to fit Flox into your existing deployment tools, or use Flox in CI [4].
All that stuff I described is free, but we have some enterprise features in development that won't be, and I think people are going to find those very appealing.
I built something like this out using an old version of Dagger and found it enormously complicated, and then they rewrote everything and abandoned the version of Dagger I used.
When they did, I said "fuck it" and just started distributing a Nix flake with wrappers for all the tools I want to run in CI, so that at least that part gets handled safely and is easy to run locally.
The worst and most annoying stuff to test is the CI platforms' accursed little pseudo-YAML DSLs. I still would like that piece.
devenv.sh tasks might be a good basis for building on it. I think maybe bob.build also wants to be like this
GitLab? It's been around longer than Actions, it's much better, and honestly I'm not sure why anyone would consider anything else (but happy to hear if there are reasons).
In the context of this submission, one will want to exercise caution because they and their forgejo friends are using forks of nektos/act, with all the caveats of both "act" and "fork"
Seems fine to me, I don't really understand why people think CI is hard and they need to shop a different platform. All these systems are is a glorified shell script runner. It seems like developers just keep making up new DSLs over and over to do the same things.
It's still a miserable experience to maintain it, update it, deal with mostly old plugins, dynamically loading the tooling, Groovy idiosyncrasies... and a UI/UX that, despite recent efforts, continues to feel terrible.
Managing the underlying infra is painful, and it remains a security liability even when not making obvious mistakes like exposing it to any open network.
And good luck if you're having that fun at a windows shop and aren't using a managed instance (or 5).
Let me throw another one out there: how about TeamCity on premise? Still can be done for free and the last time I used it (years ago) it left a very complete, stable and easy to use impression on me. Jenkins by comparison seems heavy and complicated.
Rather than tying CI & deployments to GitHub Actions, it is usually better to pull as much of it as possible out into shell scripts and call them from containers in GH Actions.
There are optimizations you’ll want (caching downloaded dependencies etc); if you wait to make those after your build is CI-agnostic you’ll be less tempted by vendor specific shortcuts.
Usually means more code - but, easier to test locally. Also, swapping providers later will be difficult but not “spin up a team and spend 6 months” level.
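As a concrete sketch, the CI-agnostic entry point can be nothing more than a plain POSIX shell script (the path, cache variable, and commented-out build steps here are all hypothetical):

```shell
#!/usr/bin/env sh
# ci/test.sh — hypothetical provider-agnostic CI entry point.
# The same script runs locally, in a container, or on any CI runner.
set -eu

# Let the CI provider override the cache location; default to
# something sensible for local runs.
CACHE_DIR="${CACHE_DIR:-$HOME/.cache/myproject}"
mkdir -p "$CACHE_DIR"
echo "using cache at $CACHE_DIR"

# Real build/test steps would go here, e.g.:
# bundle install --path "$CACHE_DIR/bundle"
# bundle exec rspec
```

The workflow YAML then shrinks to a single `run: ./ci/test.sh` step, and swapping providers later means porting one line instead of a pipeline.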
This is always the top comment in these kinds of threads, and I see this as an indication that the current state of CI/CD is pathetically proprietary.
It’s like the dark times before free and open source compilers.
When are we going to push back and say enough is enough!?
CI/CD desperately needs something akin to Kubernetes to claw back our control and ability to work locally.
Personally, I’m fed up with pipeline development inner loops that involve a Git commit, push, and waiting around for five minutes with no debugger, no variable inspector, and dumping things to console logs like I’m writing C in the 1980s.
You and I shouldn’t be reinventing these wheels while standing inside the tyre shop.
We've had open source CI. We still have it. I remember ye olde days of Hudson, before it was renamed Jenkins. Lotsa orgs still use Jenkins all over the place, it just doesn't get much press. It predates GHA, Circle, and many of the popular cloud offerings.
Turns out CI/CD is not an easy problem. I built a short-lived CI product before containers were really much of a thing ...you can guess how well that went.
Also, I'll take _any_ CI solution, closed or open, that tries to be the _opposite_ of the complexity borg that is k8s.
Having a container makes debugging possible, but it's still generally going to be an unfriendly experience, compared to a script you can just run and debug immediately.
It's inevitable that things will be more difficult to debug once you're using a third party asynchronous tool as part of the flow.
- You may need something to connect the dots between code changes and containers. It's not always possible to build everything on every change, especially in a multi/mono-repo setup.
- You probably need some way to connect container outcomes back to branch protection rules. Again, if you are building everything, every time, it's pretty simple, but less so otherwise.
- You likely want to have some control over the compute capacity on which the actions run, both for speed and cost control. And since caching matters, some compute solutions are better than others.
I don't think GitHub Actions solves any of these problems well, but neither do containers on their own.
In my experience (and as reflected by the comments on this post already), trying to run complex CI workflows locally is a fairly hopeless enterprise. If you have a fully containerized workflow you might be able to get close, but even then, ensuring you have all of your CI-specific environment variables is often not a trivial task, and if your workflow orchestrates things across tasks (e.g. one task uploads an artifact and another task uses that artifact), you'll have a hard time reproducing exactly what is happening in CI.

My company (RWX) builds a GitHub Actions competitor, and we intentionally do not support local execution. Instead we focused on making it easy to kick off remote builds from your local machine without having to do a `git push`, and we made it easy to grab an SSH session before, during, or after a running task to inspect things in the exact build environment your workflow runs in.
It's important to note that this tool does not use the same container images or runtime environment that GitHub Actions actually runs. It's an approximation.
For simple use cases that won't matter, but if you have complex GitHub Actions you're bound to find varying behavior. That can lead to frustration when you're debugging bizarre CI failures.
Yeah, this is lame. With Gitlab you can choose the exact image that runs and provide your own. So you can easily run the exact same code in the exact same environment locally.
I guess it would be nice to have a tool to convert a Gitlab YAML to Docker incantations, but I've never needed to do it that often for it to be a problem.
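The manual translation is usually small. For a hypothetical GitLab job declaring `image: ruby:3.3` and a `script:` section, the equivalent "Docker incantation" is roughly (image, mount path, and commands are all illustrative):

```shell
# Build the docker command that approximates the GitLab job locally.
# GitLab mounts your checkout under /builds/<group>/<project>; we fake that.
IMAGE="ruby:3.3"
SCRIPT="bundle install && bundle exec rspec"

CMD="docker run --rm -v $PWD:/builds/app -w /builds/app $IMAGE sh -c \"$SCRIPT\""
echo "$CMD"
# When you're ready to run it for real:
# eval "$CMD"
```

This skips anything the runner injects (CI_* environment variables, caches, services), so it's an approximation too, just a much closer one since the image is identical.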
Feel like the correct way to use any CI these days is to just have it do almost nothing. The CI just triggers a script. And then you can run that script locally.
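In GitHub Actions terms, that "do almost nothing" workflow is just a checkout plus one `run` step (the script path is hypothetical):

```yaml
name: ci
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # All real logic lives in the script, which you can run locally too.
      - run: ./ci/test.sh
```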
exactly. I am surprised by the amount of comments in this and similar threads praying for github to add more complexity to an already over-complicated and fragile solution.
I'm convinced that they specifically won't do that so that people won't use them to build competing products. Actions is one of the pieces that make GitHub money.
WARNING: act is great if you use docker. Act does not support podman.
Issues or discussions related to providing support/coverage/compatibility/workarounds for podman are closed with a terse message. Unusual for an open source project.
Dumb question, but why hasn’t GitHub made a solution that lets you run GitHub Actions locally? Or at the very least a solution that validates the action (giving a bit more certainty that it will succeed, a bit like a dry-run)?
(My war story:) I stopped using GHAs after an optimistic attempt to save myself five key strokes ‘r’ ‘s’ ‘p’ ‘e’ ‘c’ led to 40+ commits and seeing the sunrise but still no successful test run via GHA. Headless browsers can be fragile but the cost benefit ratio against using GHA was horrible, at least for an indy dev.
It's the allure of the marketplace that gets people. They're like "oh, I could parse my test report files and upload all of them somewhere and render them with a nice output on my PR! I could spend a week writing and testing all of this, or I can slap this action here into the YAML and be done with it in 10 minutes."
The trap and tradeoff is that the thirtieth time you’ve done that is when you realize you’ve screwed yourself and the organization by building this Byzantine half baked DAG with a very sketchy security story that you can’t step through, run locally or test
I'm as big a GitLab fanboy as they come, but they recently axed the gitlab-runner binary's ability to execute local .gitlab-ci.yml files <https://gitlab.com/gitlab-org/gitlab/-/issues/385235>. It now only operates in "receive webhook from the mothership" mode, just like GHA
Pour one out, I guess, but it's okay since I previously was super angry at it for languishing in the uncanny valley of "hello world works, practically nothing else does" -- more or less like nektos/act and its bazillions of forks
https://github.com/ChristopherHX/runner-image-blobs/pkgs/con...
https://earthly.dev/blog/shutting-down-earthfiles-cloud/
[1] https://flox.dev
[2] https://flox.dev/docs/tutorials/migrations/homebrew/
[3] https://flox.dev/docs/reference/command-reference/flox-conta...
[4] https://flox.dev/docs/tutorials/ci-cd/
/me puts tinfoilhat
https://gitea.com/gitea/act -> https://gitea.com/gitea/act_runner
https://code.forgejo.org/forgejo/act -> https://code.forgejo.org/forgejo/runner
https://github.com/jenkinsci/jenkins/tree/master/.github/wor...
Earthly was amazing: the exact same setup in CI and locally. They're reviving it with a community effort, but I'm not sure if it'll live.
Dagger is the only code-based solution. It works, but it has some rough edges, since it has a much bigger surface area and is constantly growing.
This is what Earthfiles look like: https://docs.earthly.dev/docs/earthfile
I use dagger to read these .env/mise env vars and inject dummy values into the test container. Production is taken care of with a secrets manager.
CI must be local-first and platform-agnostic.
How do you figure that? I'd buy "lock people in the platform," but in that way GitHub Issues has been the moat for far longer than Actions has
Agreed. I’m thankful for tools like act, but there really should be an officially supported way to run gh actions locally.
[1] https://github.com/nektos/act/issues/303
Your action can be empty and actions generate webhook events.
Do whatever you want with the webhook.