So the question becomes: is $0.002/minute a good price for this? I have never run GitHub Actions, so I am going to assume that experience on other, similar systems applies.
So if your job takes an hour to build and run through all tests (a bit on the long side, but I have some tests that run for days), then you are going to pay GitHub $0.12 for that run. You are probably going to pay significantly more for the compute to run it (especially if you are running on multiple testers simultaneously). So this does not seem too bad.
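For what it's worth, here is a quick back-of-the-envelope sketch of that math (the $0.002/minute rate is the figure quoted above; the 100-runs-per-month volume is a made-up assumption):

    # Rough GitHub Actions cost at the per-minute rate quoted above.
    # The 100 runs/month volume is an illustrative assumption.
    RATE_PER_MINUTE = 0.002   # $ per billed runner-minute

    def run_cost(job_minutes: float) -> float:
        """Cost of a single CI run billed at the per-minute rate."""
        return job_minutes * RATE_PER_MINUTE

    print(f"one 1-hour run: ${run_cost(60):.2f}")         # $0.12
    print(f"100 such runs:  ${run_cost(60) * 100:.2f}")   # $12.00 per month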
This will probably push a lot of people to invest in parallelizing their workloads and/or moving them to faster machines in order to reduce the number of minutes they are billed for.
I should note that if you are doing something similar in AWS using SSM (Systems Manager), I found that if you are running small jobs on lots of systems, the AWS charges can add up very quickly. I had to abandon a monitoring-system idea I had for our fleet (~800 systems) because the per-hit cost of just a monitoring ping was $1.84 (I needed a small amount of data from an on-worker process). Running that every 10 minutes was going to cost more than $250/day. Writing and running my own monitoring system was much cheaper.
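Rough math on that, treating the $1.84 as the cost of one fleet-wide poll (that interpretation is my assumption):

    # Daily cost of polling every 10 minutes at $1.84 per fleet-wide hit.
    COST_PER_POLL = 1.84            # $ per monitoring sweep (figure above)
    POLLS_PER_DAY = 24 * 60 // 10   # one poll every 10 minutes -> 144/day

    daily = COST_PER_POLL * POLLS_PER_DAY
    print(f"{POLLS_PER_DAY} polls/day -> ${daily:.2f}/day")   # ~$264.96/day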
Now the GitHub pricing change definitely costs more per month than both servers combined (they cost about $60 together).
A 3-step GitHub Action builds around 1,200 Nix packages and derivations, but produces only around 50 lines of logs total if successful, and maybe 200 lines of logs when a failure occurs. And I'm supposed to pay $4 a day for that? I wonder what kind of actual costs are involved on their side in waiting for a runner to complete and storing 50 lines of logs.
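Taking the $0.002/minute figure from earlier in the thread (the actual rate for this setup may differ), $4/day works out to roughly 2,000 billed runner-minutes per day:

    # Sanity check: billed minutes implied by a $4/day bill, assuming the
    # $0.002/minute rate quoted earlier in the thread.
    DAILY_BILL = 4.00          # $ per day (figure above)
    RATE_PER_MINUTE = 0.002    # $ per billed runner-minute (assumed)

    minutes = DAILY_BILL / RATE_PER_MINUTE
    print(f"${DAILY_BILL}/day ~ {minutes:.0f} runner-minutes (~{minutes/60:.1f} h/day)")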
Despite what people say about "maintaining" Jenkins (whatever that means to them personally), you can set it up in an IaC way, including the jobs. You can migrate/create jobs en masse via its API (I did this about 10 years ago for a large US company converting from what was then called TFS).
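As a minimal sketch of that kind of en-masse job copying via the Jenkins REST API (the GET /job/<name>/config.xml and POST /createItem endpoints are stock Jenkins; the hosts, credentials, and job names here are placeholders):

    import requests

    # Copy job definitions from one Jenkins instance to another.
    SRC = "https://old-jenkins.example.com"   # placeholder source instance
    DST = "https://new-jenkins.example.com"   # placeholder destination
    AUTH = ("admin", "api-token")             # user + API token (placeholder)
    JOBS = ["build-app", "run-tests"]         # hypothetical job names

    for name in JOBS:
        # Fetch the job's config.xml from the source instance...
        cfg = requests.get(f"{SRC}/job/{name}/config.xml", auth=AUTH)
        cfg.raise_for_status()
        # ...and create the same job on the destination instance.
        resp = requests.post(
            f"{DST}/createItem",
            params={"name": name},
            data=cfg.content,
            headers={"Content-Type": "application/xml"},
            auth=AUTH,
        )
        resp.raise_for_status()
        print(f"created {name}")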