MoreQARespect · 2 years ago
There are two types of github actions workflows you can build.

1) Program with github actions. Google "how can I send an email with github actions?" and then plug in some marketplace tool to do it. Your workflows grow to 500-1000 lines and sprout all sorts of nonsense like conditionals, the YAML becomes disgusting and hard to understand, github actions becomes a nightmare, and you've invited vendor lock-in.

2) Configure with github actions. Always ask yourself "can I push this YAML complexity into a script?" and do it if you can. Send an email? Yes, that can go in a script. Your workflow ends up being about 50-60 lines as a result and very rarely needs to change once it's set up. Github actions is suddenly fine and you rarely have to do that stupid push-debug-commit loop, because you can debug the script locally.
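
To make the shape of (2) concrete, a workflow in this style can be little more than a trigger plus a script invocation. This is a sketch, not anyone's actual setup: the script path and secret name are hypothetical, and all the logic lives in `./ci/release.sh`, which you can run and debug locally:

```yaml
name: release
on:
  push:
    tags: ['v*']

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # All real logic (build, email, etc.) lives in the script,
      # which runs on a laptop exactly as it does here.
      - run: ./ci/release.sh
        env:
          SMTP_PASSWORD: ${{ secrets.SMTP_PASSWORD }}
```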

Every time I join a new team I tell them that 1 is the way to madness and 2 is the sensible approach and they always tepidly agree with me and yet about half of the time they still do 1.

The thing is, the lack of debugging tools provided by Microsoft is also really not much of a problem if you do 2, vendor lock-in is lower if you do 2, and debugging is easier if you do 2. But still nobody does 2.

woodruffw · 2 years ago
This is a great perspective, and one I agree with -- many of the woes associated with GitHub Actions can be eliminated by treating it just as a task substrate, and not trying to program in YAML.

At the same time, I've found that it often isn't sufficient to push everything into a proper programming language: I do sometimes (even frequently) need to use vendor-specific functionality in GHA, mark dependencies between jobs, invoke REST APIs that are already well abstracted as actions, etc. Re-implementing those things in a programming language of my choice is possible, but doesn't break the vendor dependency and is (IME) still brittle.

Essentially: the vendor lock-in value proposition for GHA is very, very strong. Convincing people that they should take option (2) means making a stronger value proposition, which is pretty hard!

MoreQARespect · 2 years ago
No, you're right, it's not necessarily a good idea to be anal about this rule. E.g. if an action is simple to use and already built, I use it - I won't necessarily try to reimplement, say, the upload-artifacts step in code.

Another thing I noticed is that if you do 1, sophisticated features like build caching and parallelization often become completely impractical, whereas if you default to 2 you can probably do them with only a moderate amount of commit-push-debug.

flohofwoe · 2 years ago
I use yaml and gh actions to prepare the environment, define jobs and their dependencies and for git operations, everything else goes into scripts.
MenhirMike · 2 years ago
Option 2 also makes it easier for developers to run their builds locally, so you're essentially using the same build chain for local debugging as you do for your Test/Staging/Prod environments, instead of maintaining two different build processes.

It's not just true for GHA, but for any build server really: The build server should be a script runner that adds history, artifact management, and permissions/auditing, but should delegate the actual build process to the repository it's building.

hk1337 · 2 years ago
Locally or if for some reason you need to move off of Github and have to use Jenkins or some other CI tool.
dmtryshmtv · 2 years ago
Good perspective. Unfortunately (1) is unavoidable when you're trying to automate GH itself (role assignments, tagging, etc.). But at this point, I would rather handle a lot of that manually than deal with GHA's awful debug loop.

FWIW, there's nektos/act[^1], which aims to duplicate GHA behavior locally, but I haven't tried it yet.

[^1]: https://github.com/nektos/act

dlisboa · 2 years ago
> Unfortunately (1) is unavoidable when you're trying to automate GH itself (role assignments, tagging, etc.)

Can't you just use the Github API for that? The script would be triggered by the YAML, but all logic is inside the script.
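
Right - role assignments, tagging, and similar GH automation are plain REST calls. As an illustrative sketch (the helper name is made up; the URL and payload follow GitHub's documented `POST /repos/{owner}/{repo}/git/refs` endpoint for creating a lightweight tag ref), a script can assemble the request itself:

```python
# Sketch: automate GitHub itself from a script instead of workflow YAML.
# build_tag_request is a hypothetical helper; the endpoint and body shape
# come from GitHub's REST API for creating a tag ref.

def build_tag_request(owner: str, repo: str, tag: str, sha: str) -> tuple[str, dict]:
    """Return the URL and JSON body for POST /repos/{owner}/{repo}/git/refs."""
    url = f"https://api.github.com/repos/{owner}/{repo}/git/refs"
    payload = {"ref": f"refs/tags/{tag}", "sha": sha}
    return url, payload

# A caller would POST this with any HTTP client (or `gh api`), passing a
# token in the Authorization header; that part is omitted here.
```

Because the request is built in plain code, you can unit-test it and run it locally with a personal access token, with the YAML reduced to a single `run:` line.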

But `act` is cool, I've used it for local debugging. Thing is, its output is impossibly verbose, and they don't aim to support everything an action does (which is fine if you stick to (2)).

EddTheSDET · 2 years ago
How DO you debug your actions? I spend so long in the commit-action-debug-change loop it’s absurd. I agree with your point re: 2 wholeheartedly though, it makes debugging scripts so much easier too. CI should be runnable locally and GitHub actions, while supported with some tooling, still isn’t very easy to work with like that.
MoreQARespect · 2 years ago
Using the same commit-push-debug loop you do. It just isn't painful if I do 2.
davidmurdoch · 2 years ago
My GH Actions debugging usually devolves into `git commit -m "wtfqwehsjsidbfjdi"`
all2 · 2 years ago
There are ways to run GHA locally. I've tried out one or two of the tools. [0]

- [0] https://github.com/nektos/act

somehnguy · 2 years ago
Act works pretty well to debug actions locally. It isn't perfect, but I find it handles about 90% of the write-test-repeat loop and therefore saves my teammates from dozens of tiny test PRs.
edgyquant · 2 years ago
I too wish I could find a nicer way than this to debug.
jahnu · 2 years ago
The main reason I aim for (2) is that I want to be able to drive my build locally if and when GitHub is down, and I want to be able to migrate away easily if I ever need to.

I think of it like this:

I write scripts (as portable as possible) to build/test/sign/deploy/etc. They should always work locally.

GitHub is for automating what I'd otherwise do by hand: setting up the environments where I can run those scripts, and then actually running them.

rayhu007 · 2 years ago
Totally get what you're saying. I once switched our workflow to trigger on PRs to make testing easier. Now, I'm all about using scripts — they're just simpler to test and fix.

I recommend making these scripts cross-platform for flexibility. Use matrix: and env: to handle it. Go for Perl, JavaScript, or Python over OS shells and put file tasks in scripts to dodge path issues.
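
To illustrate the "file tasks in scripts" point: here's a sketch (the directory layout and function name are invented) using `pathlib`, which handles path separators identically on Linux, macOS, and Windows runners:

```python
# Sketch: keep file tasks out of shell so they behave the same on every
# runner OS. The dist/ layout and *.tar.gz pattern are hypothetical.
from pathlib import Path
import shutil

def collect_artifacts(build_dir: str, out_dir: str, pattern: str = "*.tar.gz") -> list[str]:
    """Copy matching build outputs into out_dir and return their file names."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(Path(build_dir).glob(pattern)):
        shutil.copy2(f, out / f.name)  # preserves timestamps/permissions
        copied.append(f.name)
    return copied
```

The same function runs unchanged under a `matrix:` of ubuntu-latest, macos-latest, and windows-latest, with no quoting or separator surprises.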

I've tried boxing these scripts into steps, but unless they're super generic for everyone, it doesn't seem worth it.

toolslive · 2 years ago
> still nobody does 2.

They don't seem to grasp how bad their setup is, and consequently are willing to endure awful programming conditions. Even punch cards were better: at least those people had the advantage of working with a real programming language with defined behaviour. "When exactly is this string interpolation step executed? In the anchor or when referenced?" (Well, it depends.) No, it's black-box tinkering (you might as well be prompt engineering).

The C in IaC is supposed to stand for code. Well, if you're supposed to code something, you need to

   - be able to assert correctness before you commit, 
   - be able to step through the code
If the setup they give you doesn't even have these minimal requirements you're going to be in trouble regardless of how brilliant an engineer you are.

(sorry for the rant)

riquito · 2 years ago
I agree overall, but you oversimplify the issue a bit.

> can I push this YAML complexity into a script?

- what language is the script written in?

- will developers use the same language for all those scripts?

- does it need dependencies?

- where are we going to host scripts used by multiple github actions?

- if we end up putting those scripts in repositories, how do we update the actions once we release a new version of the scripts?

- how do you track those versions?

- how much does it cost to write a separate script and maintain it versus locking us in with an external github action?

These are just the first questions that pop into my mind, but there are more. And some answers may not be that difficult, yet it's still something to think about.

And I agree with the core idea (move logic outside pipeline configuration), but I can understand the tepid reaction you may get. It's not free, and you compromise on some things.

tomtheelder · 2 years ago
I think they framed it accurately and you are instead overcomplicating. Language for scripts is a decision that virtually every team ends up making regardless. The other questions are basically all irrelevant, since the scripts and actions are both stored in the repo, and are therefore released and versioned together.

I think the point about maintenance cost is valid, but the thesis of the comment that you are responding to is that the prebuilt actions are a complexity trap.

plorkyeran · 2 years ago
I think you are still envisioning a fundamentally incorrect approach. Build scripts for a project are part of that project, not some external thing. The scripts are stored in the repository, and pulled from the branch being built. Dependencies for your build scripts aren't any different from any other build-time dependencies for your project.
c-hendricks · 2 years ago
This is a whole lot of overthinking for something like

    #!/usr/bin/env bash
    set -ex

    aws send-email ...

umvi · 2 years ago
Default to bash. If the task is too complex for bash, then use python or node. Most of these scripts aren't going to change very often once stable.
peteradio · 2 years ago
If build scripts or configuration is shared it might be one of the only times a git submodule is actually useful.
pjc50 · 2 years ago
I've reached the same conclusion with Jenkins. It also helps if you ever have to port between CI systems.

A CI "special" language is almost by definition something that can't be run locally, which is really inconvenient for debugging.

wbond · 2 years ago
I have a few open source projects that have lasted for 10+ years, and I can’t agree more with approach #2.

Ideally you want your scripting to handle all of the weird gotchas of different versions of host OSes, etc. Granted, my work is cross-platform, so it is compounded.

So far I’ve found relying on extensive custom tooling has allowed me to handle transitions from local, to Travis, to AppVeyor, to CircleCI and now also GitHub Actions.

You really want your CI config to specify the host platform and possibly set some env vars. Then it should invoke a single CI wrapper script. Ideally this can also be run locally.
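
A minimal sketch of that single-wrapper idea (the task names and echoed messages are placeholders, not anyone's real build): the CI YAML only sets env vars and runs e.g. `./ci.sh test`, and the same script runs on a laptop.

```shell
#!/usr/bin/env bash
# Hypothetical single CI entry point: every pipeline step is just
# `./ci.sh <task>`, so the whole thing is debuggable locally.
set -euo pipefail

run_task() {
  case "$1" in
    build)  echo "building" ;;   # call your real build tool here
    test)   echo "testing" ;;    # run your real test suite here
    deploy) echo "deploying" ;;  # push artifacts, sign, etc.
    *)      echo "unknown task: $1" >&2; return 1 ;;
  esac
}

run_task "${1:-build}"
```

Porting to another CI system then means rewriting a few trigger lines of YAML, not the build itself.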

cdaringe · 2 years ago
There’s a curve. Stringy, declarative DSLs have high utility when used in linear, unconditional, stateless programming contexts.

Adding state? Adding conditionals? Adding (more than a couple) procedure calls?

These concepts perform poorly without common programming tools: testing (via compilation or development runtime), static analysis, intellisense, etc etc

Imagine the curve:

The X axis is (vaguely) lines of YAML (lines of DSL, really). The Y axis is tool selection: the positive region means "use a DSL", the lower region means "use a general-purpose programming language".

The line starts at the origin, has a SMALL positive bump, then plummets downwards near vertically.

Gets it right? Tools like ocurrent (contrasted against GH actions) [1], cdk (contrasted against TF yaml) [2]

Gets it wrong? Well, see the parent post. This made me so crazy at work (where seemingly everyone has been drinking the YAML DSL kool-aid) that I built a local product simulator and YAML generator for their systems, because "coding" against the product was so untenable.

[1] https://github.com/ocurrent/ocurrent/blob/master/doc/example... [2] https://docs.aws.amazon.com/cdk/v2/guide/getting_started.htm...

progmetaldev · 2 years ago
Your advice is sane and I can tell it speaks from experience. Unfortunately, now that GitHub Actions is being exposed through Visual Studio, I fear we are going to see an explosion of number 1, just because the process is going to be more disconnected from GitHub itself (no documentation or GitHub UI visible while working within Visual Studio).
lambda_garden · 2 years ago
Option 1 is required if you want to have steps on different runners, add approval processes, etc.

I always opt for option 2 where possible though.

kelnos · 2 years ago
I try to do (2), but I still run into annoyances. Like I'll write a script to do some part of my release process. But then I start a new project, and realize I need that script, so I copy it into the new repo. Then I fix a bug in that script, or add some new functionality, and I need to go and update the script in the other repo too.

Maybe this means I should encapsulate this into an action, and check it in somewhere else. But I don't really feel like that; an action is a lot of overhead for a 20-line bash script. Not to mention that it erases the freedom from lock-in that the script alone gives me.

I guess I could check the script into a separate utility repo, and pull it into my other repos via git submodules? That's probably the least-bad solution. I'd still have to update the submodule refs when I make changes, but that's better than copy-pasting the scripts everywhere.

artemisart · 2 years ago
I agree, but of course all CI vendors build all their documentation and tutorials and 'best practices' 100% on the first option, for lock-in and to get you to use more of their ecosystem, like expensive caching and parallel runners. Many GitHub actions and CircleCI orbs could be replaced by a few lines of shell script.

Independent tutorials unfortunately fall into the same bucket, as they first look at the official documentation to follow so-called best practices, or just try to get things working. I'd say it's also because shell scripts seem, unfairly, more hacky to many people.

flohofwoe · 2 years ago
That's true for all CI services: do as little as possible in YAML, mostly just use it to start your own scripts. For the scripts, use something like Python or Deno to cover Linux, Mac, and Windows environments with the same code.
konschubert · 2 years ago
When GitHub Actions came out, I felt bad about myself because I had no desire to learn their new programming model of breaking everything down into multiple small GitHub actions.

I think you explained quite well what I couldn't put my finger on last time: Building every simple workflow out of a pile of 3rd party apps creates a lot of unnecessary complexity.

Since then, I have used GitHub actions for a few projects, but mostly stayed away from re-using and combining actions (except for the obvious use cases of "check out this branch").

flohofwoe · 2 years ago
Github Actions basically only became usable once they started copying features from Gitlab CI. Before that it was an incomprehensible mess.

Compared to Gitlab CI, GH Actions still feels like a toy, unfortunately.

pplonski86 · 2 years ago
YAML is perfect for simple scenarios, but users produce really complex use cases with it.

Is it possible to write a Python package that, based on a YAML specification, produces a Python API? The user would code in Python, and YAML would be the output.

I was working on a YAML syntax for creating UIs. I converted it to a Python API and I'm happy. For example, dynamic widgets were hard in YAML; in Python they are straightforward.

jjice · 2 years ago
Absolutely agreed. Well said and I'll be stealing this explanation going forward. Hell, just local running with simplicity and ability to test is a massive win of #2, aside from just not dealing with complex YAML.
MuffinFlavored · 2 years ago
> Your workflow ends up being about 50-60 lines as a result and very rarely needs to change once it's set up.

As in, use GitHub Actions as a YAML wrapper around bash/zsh/sh scripts?

flohofwoe · 2 years ago
It can be any scripting language; Python or TypeScript via Deno are good choices because they have batteries-included, cross-platform standard libs and are trivial to set up.

Python is actually preinstalled on Github CI runners.

hk1337 · 2 years ago
1 is to build utilities for 2, IMO. A utility shouldn't have repository-specific information inside and should be easily usable in other workflows.
chubot · 2 years ago
Exactly. I showed here how we just write plain shell scripts. It gives you "PHP-like productivity": iterating 50 times a minute, not one iteration every 5 minutes or 50 minutes.

https://lobste.rs/s/veoan6/github_actions_could_be_so_much_b...

Also, seamlessly interleaving shell and declarative JSON-like data -- without YAML -- is a main point of http://www.oilshell.org, and Hay

Hay Ain't YAML - https://www.oilshell.org/release/0.18.0/doc/hay.html

xeromal · 2 years ago
Github actions calling make commands is my bread and butter.
intelVISA · 2 years ago
Turns out the real SaaS is Scripts as a Service.

withinboredom · 2 years ago
I appreciate this perspective, however, after spending 6mo on a project that went (2) all the way, never again. CI/CD SHOULD NOT be using the same scripts you build with locally. Now, we have a commit that every dev must apply to the makefile to build locally, and if you accidentally push it, CI/CD will blow up (requiring an interactive rebase before every push). However, you can’t build locally without that commit.

I won’t go into the details on why it’s this way (build chain madness). It’s stupid and necessary.

Tainnor · 2 years ago
This comment is hard to address without understanding the details of your project, but I will at least say that it doesn't mirror my experience.

Generally, I would use the same tools (e.g. ./gradlew build or docker build) to build stuff locally as on CI, and config params are typically enough to distinguish what needs to be different.

My CI scripts still tend to be more complicated than I'd like (due to things like caching, artifacts, code insights, triggers, etc.), but the main build logic at least is extracted.

tao_at_garden · 2 years ago
The git commit, push, wait loop is terrible UX. Users deserve portable pipelines that run anywhere, including on their local machines. I understand Act [1] goes some way toward solving this headache, but it's by and large not a true representation.

There are many pipelines you can't run locally, because they're production, for example, but there's no reason why we can't capture these workflows to run them locally at less-critical stages of development. Garden offers portable pipelines and then adds caching across your entire web of dependencies. Some of our customers see 80% or higher reductions in run times plus devs get that immediate feedback on what tests are failing or passing without pushing to git first using our Garden Workflows.

We're OSS. [2]

[1] https://github.com/nektos/act

[2] https://docs.garden.io

candiddevmike · 2 years ago
If folks just had actions target make or bash scripts, instead of turning actions into bash scripts, none of this would be an issue. Your CI/CD and your devs should all use the same targets/commands, like `make release`.
taeric · 2 years ago
I'm actually confused and scared by how often this isn't the case. What are people doing in their actions that isn't easily doable locally?
globular-toast · 2 years ago
This is how it should be done. It was trivial to port my company's CI from Jenkins to Gitlab because we did this.

Confusion arises when developers don't realise they are using something in their local environment, though. It could be some build output that is gitignored, or some system interpreter like Python (especially needing a particular version of Python).

Luckily, with something like Gitlab CI it's easy to run stuff locally in the same container as it will run in CI.

thiht · 2 years ago
Well… yeah?

My GitHub Actions workflow consists of calls to make lint, make test, make build, etc. Everything is usable locally.

There’s just some specificities when it comes to boot the dependencies (I use a compose file in local and GitHub action services in CI, I have caching in CI, etc.) but all the flows use make.

This is not a technical problem, you’re just doing it wrong if you don’t have a Makefile or equivalent.
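
As a sketch of the Makefile shape several commenters describe (target names and script paths here are hypothetical), CI and developers invoke exactly the same targets:

```make
.PHONY: lint test build release

lint:
	./scripts/lint.sh

test:
	./scripts/test.sh

build:
	./scripts/build.sh

# CI's "release" job and a developer's local release run the same chain
release: lint test build
	./scripts/release.sh
```

The workflow YAML then shrinks to checkout plus `run: make release`, and the whole pipeline is reproducible on any machine with make.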

andrewstuart2 · 2 years ago
Yeah, it seems like we lost a lot of the "CI shouldn't be a snowflake" when we started creating teams that specialize in "DevOps" and "DevOps tools." Once something becomes a career, I think you've hit the turning point of "this thing is going to become too complicated." I see the same thing with capital-A Agile and all the career scrum masters needing something to do with their time.
baggachipz · 2 years ago
Act's incompleteness has had me barking up the wrong tree many times. At this point I've temporarily abandoned using it in favor of the old cycle. I'm hoping it gets better in time!
frodowtf · 2 years ago
I don't get why GitHub doesn't adopt it and make it a standard. Especially the lack of caches is annoying.
sakopov · 2 years ago
We need Terraform for build pipelines and God help you if you use Bitbucket lol
jherdman · 2 years ago
FYI garden.io’s landing page appears to be broken on iOS. It runs off the page to the right.
edvald · 2 years ago
Thanks for flagging! We'll fix that.
joshstrange · 2 years ago
I couldn't agree more with the pain of debugging a GH Actions run. The /only/ tool you have is the ability to re-run with debug on. That's it. I have so many "trash" commits trying to fix or debug a pipeline and so much of it's just throwing stuff at the wall to see if it sticks.

Very basic things, like having reusable logic, are needlessly complex or poorly documented. Once I figured out how to do it, it was fairly easy, but GitHub's docs were terrible for this. They made it seem like I had to publish an action or put it in a different repo to get any reusability. Turns out you can create new YAML files with reusable logic, but make sure you put them in the root of the workflows folder or they won't work, go figure.
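
For reference, the mechanism being described is GitHub's "reusable workflows": the callee declares `on: workflow_call` and must sit directly in `.github/workflows/`. A sketch (file, input, and script names are hypothetical):

```yaml
# .github/workflows/reusable-build.yml (the callee)
on:
  workflow_call:
    inputs:
      target:
        type: string
        required: true

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./ci/build.sh "${{ inputs.target }}"

# A caller workflow references it by path:
#
# jobs:
#   call-build:
#     uses: ./.github/workflows/reusable-build.yml
#     with:
#       target: release
```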

It's just incredibly painful to work on GH Actions but once you have them working they are such a joy. I really wish there was some kind of local runner or way to test your actions before committing and pushing.

giobox · 2 years ago
> I have so many "trash" commits trying to fix or debug a pipeline and so much of it's just throwing stuff at the wall to see if it sticks.

One approach is to use draft PRs for this - you can run changes to your action YAML from the draft PR. When you're happy, just squash the commits as you see fit on a "real" PR to merge the changes in without the mess.

I've found draft PRs for debugging/developing GH action logic to be pretty reasonable.

lambda_garden · 2 years ago
Since some actions depend on the branch/tag you are on, this is not always possible.
hahn-kev · 2 years ago
Check this out. It doesn't do everything but it's decent https://github.com/nektos/act
andix · 2 years ago
If I'm fixing CI I always put it on a feature branch and do a squash merge once I'm done. Because it's never just one quick fix, it's always 3-10 commits.
kitallis · 2 years ago
> If I'm fixing CI I always put it on a feature branch and do a squash merge once I'm done. Because it's never just one quick fix, it's always 3-10 commits.

The problem is GA also does not allow you to commit a new workflow in a branch: it must first exist on your primary branch, and then you may tweak it in another.

TeeMassive · 2 years ago
I've tried running the GitHub runner image (or maybe an imitation) and it was really painful to set up and to get some things working. I just let it go after 2 days.

And it's not just Github. The other big CI platforms are not really better in terms of workflow and integration.

Now I just script everything to the maximum.

vladaionescu · 2 years ago
This is the main reason we built Earthly: run your builds locally, and get consistency with the CI.
pid-1 · 2 years ago
If only competitors could do better...

https://gitlab.com/gitlab-org/gitlab-runner/-/issues/2797

hv42 · 2 years ago
yeah... https://github.com/firecow/gitlab-ci-local is a good workaround but should be built-in. How do developers at GitLab/Github debug their workflows?
solatic · 2 years ago
GitHub Actions is a horrible CI/CD system. You cannot run steps in parallel on the same VM, and container-based workloads are second-class citizens. The first problem means that setting up local credentials and other environment dependencies cannot be parallelized (I'm looking at you, google-github-actions/setup-gcloud, with your 1m+ runtime... grrr). The second makes it quite difficult to put a Dockerfile in a repository to represent the setup of the CI environment and have both (a) the CI rebuild the container image when it would change, pausing workflows that depend on that image until it is rebuilt, and (b) not attempt to rebuild the image when its contents did not change, instead immediately running workflows inside that image, with all dependencies already installed.

No, in GitHub Actions, you will attempt to repopulate from cache on every single run. Of course, sometimes the cache isn't found, particularly because there's a 5 GB cache size limit (which cannot be enlarged, not even for payment) which cycles out FIFO. So if you go over the 5 GB cache, you might as well not have one.
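
For context, the repopulate-on-every-run pattern being criticized is the standard `actions/cache` step; a typical sketch (the paths and key are examples, not a recommendation) looks like:

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.cargo/registry
    key: cargo-${{ runner.os }}-${{ hashFiles('**/Cargo.lock') }}
    restore-keys: |
      cargo-${{ runner.os }}-
```

On a miss (including the FIFO eviction described above), the step silently falls back to a cold build.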

I deeply miss Concourse. Too bad Pivotal dismantled the team and it barely gets even maintenance work. Parallel tasks. Custom pipeline triggers. SSH into the CI environment to debug. Run one-off tasks in CI without needing to set up a pipeline. Transfer directory contents from one job to another without needing to create artifacts (which cost money to store if you're not careful about how long they should stick around for).

GitHub Actions is a bastardized toy CI/CD system that only got popular because GitHub made it as simple as uploading a file to .github/workflows in any repository - no additional signup, no in-house ops required, everything you could want is just there. So let's be very clear about what GitHub Actions is good at and what it's bad at - it's good at getting people to sign up, but it's not nearly powerful enough to be the "best" system once you start to outgrow the early features.

couchand · 2 years ago
> Of course, sometimes the cache isn't found, particularly because there's a 5 GB cache size limit (which cannot be enlarged, not even for payment) which cycles out FIFO. So if you go over the 5 GB cache, you might as well not have one.

Looks like I can move on that "build caching mysteriously broken" issue now. Thanks for the heads up!

speed_spread · 2 years ago
Concourse rocks. I didn't know the team had been dismantled; this sucks. Vito's communication style was the best.
Huggernaut · 2 years ago
Vito is at dagger.io now so hopefully we can expect some good stuff in the CI space there.
Huggernaut · 2 years ago
Less Pivotal and more VMWare post acquisition dismantling I'd say. There was a lot of love internally for Concourse (I left before the acquisition though).

SSH debugging and one off tasks absolutely dreamy.

toastal · 2 years ago
Even SourceHut’s Spartan CI supports SSH debugging
bryandollery · 2 years ago
Have you seen Tekton? (https://tekton.dev/)
lawnchair · 2 years ago
solatic -- I have an existing solution that accounts for a lot of these issues you bring up. Would it be possible to pick your brain? Can you share your email or shoot me an email? lawnchair@lawnchair.net.
txtsd · 2 years ago
Try the sourcehut build server
solatic · 2 years ago
It's a public alpha and clearly targeted for small hobbyists / FOSS work as a result. I'm looking for something where the creator has more confidence in its reliability...
numbsafari · 2 years ago
> Because GitHub fails to distinguish between fork and non-fork SHA references, forks can bypass security settings on GitHub Actions that would otherwise restrict actions to only “trusted” sources (such as GitHub themselves or the repository’s own organization).

How is this not resolved?

Easily bypassing security controls is a major security issue.

Yes, you need to convince someone to use your SHA, but social engineering is usually the easy part.

k8svet · 2 years ago
I mean, it's embarrassing how bad it is.

- (unrelated) build failures just randomly notify the latest maintainer who happened to merge something? (Imagine finding this out when your newly added maintainer pings you on Matrix and tells you 1: about this behavior, and 2: that your update/builds have been failing for a week without you knowing?!?!)

- The cache action is horribly, trivially observably broken with seemingly no maintainer?

- Can't tail the build log of a step if their UI poops or your tab unloaded after it started?

- The complete lack of abstraction that might actually make workflows portable across worker nodes? pfft.

- the default image is a travesty. I thought it was obnoxious how bloated it was and then I started digging in and realizing "Oh, some Microsoftie that didn't know Linux was put in charge of this". (saying this as a former FTE that knows). And there's no effort to allow a more slimmed down image for folks that, you know, use Nix? Or even just Docker?

I'm in the process of migrating off GitHub and it's mostly because Actions shocked me to my senses. Too bad MS can't figure out how to retain virtually any Linux talent, and not just their cuffed-acqui-hires or Windows-devs-cosplaying. Even the well compensated ones head for the door.

And I'll just say, I don't program in YAML because YAML is a disgrace wrought upon us by Go+Yaml enthusiasts that don't know any better fueled by senseless VC money shoveled at an overall ecosystem incognizant of modern, actually useful technology.

edit: removing some of the blatantly identifying former-FTE trauma. Knowing what I know I should sell all my MSFT, but the market thinks differently.

rijoja · 2 years ago
Wasn't it obvious that something along these lines would happen when Microsoft took over Github?
pxeger1 · 2 years ago
I would like to give a strong recommendation for https://pre-commit.ci (no relation, just a happy user).

The idea is that you write hooks (or use pre-existing ones) that check (and fix, where possible) your code before you commit, and then once you've pushed, the CI will check again, including automatically committing fixes if necessary.

Anyway, it works brilliantly - unbelievably fast thanks to good automatic caching, and using the exact same setup on your local development machine as in CI negates a lot of these debugging problems. It's free for open source.

It only does checks, not things like automatic releases, but it does them well.

globular-toast · 2 years ago
This combined with tox is great for Python projects in particular. Tox automates creating virtualenvs for the right Python versions you want to test with then running your tests in them. It can also run static checks by issuing `skip_install = True` because you want to test the source code itself. You just need to run this in a Python container that has tox installed as a globally available tool and all versions of Python available in it (like https://github.com/georgek/docker-python-multiversion <-- not maintained but easy to update).

Here's some boilerplate to do all that:

    [tox]
    envlist = py{310,311}

    [testenv]
    passenv =
            PIP_CACHE_DIR
    deps = coverage   # deps only for tox
    extras = testing  # testing extras include pytest
    commands = pytest ...

    [testenv:check]
    passenv =
            PIP_CACHE_DIR
            PRE_COMMIT_HOME
    skip_install = True
    deps = pre-commit
    commands = pre-commit run --all-files --show-diff-on-failure

kaikoenig · 2 years ago
Been through that git commit; git push; repeat cycle too much as well, until I discovered https://github.com/mxschmitt/action-tmate, which gives you a shell in between steps. It doesn't help with every problem, but it sure makes things less painful at times.
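
For anyone who hasn't seen it, the usual pattern is to gate the tmate step so the SSH session only opens when something fails (this is action-tmate's documented usage; the version pin is just an example):

```yaml
- name: Debug over SSH when a step fails
  if: ${{ failure() }}
  uses: mxschmitt/action-tmate@v3
```

The action prints a tmate connection string into the job log, and the runner stays alive while you poke around the actual CI environment.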
justinclift · 2 years ago
Thanks, that looks super useful. :)