I found out when Actions started failing again for the Nth time this month.
The internal conversation about moving away from Actions or possibly GitHub has been triggered. I didn't like Zig's post about leaving GitHub because it felt immature, but they weren't wrong. It's decaying.
If you consider that an American maintainer was cheesed off enough to move an entire project off GitHub two days before Thanksgiving, then the tone of the original post was completely in line with the energy involved.
Anger is a communication tool. It should absolutely be used when boundaries are being violated. Otherwise you’ll get walked all over.
I mostly agree, but a generalized attack on the remaining GitHub workers by calling them "losers" and then "rookies" is unwarranted and leaves a bad taste IMO.
Edit: 1. just to be clear, it's very good that they have accepted the feedback and removed that part, but there's no apology (as far as I know) and it still makes you wonder about the culture. On the other hand, people make mistakes under stress. 2. s/not warranted/unwarranted/
Idk, if being bad were a reason to leave GitHub Actions, people would have left ages ago. It stuck around not because it is better than competitors but because it is included in the GitHub plans. "It's decaying" implies that it has somehow become worse; in fact it was one of the worst implementations to start with.
Combined with security concerns, this made us reconsider even our self-hosted GH Actions last month.
GH Packages is something we're extricating ourselves from after today too. One more outage in the next year and maybe we get the ammunition to move away from GH entirely.
It's still hard to believe that they couldn't even keep the lights on for this thing.
GitHub seems to have come under the same management as VSCode: everything has to be made AI, and that is the only priority. It's like the Google+ push of old, but stupider.
Hopefully with that much AI they can finally make the Explore page more useful than "most stars" and "most recently updated". There seems to be no way to discover stuff on GitHub except already knowing where it is (hence not discovering but knowing).
This is why I keep encouraging folks to a) have a mirror & b) make sure their tools automatically pick up the mirrors.
I recently got mirror support upstreamed into Nixpkgs for fetchdarcs & fetchpijul which actually work on my just-alpha-released pinning tool, Nixtamal <https://darcs.toastal.in.th/nixtamal/trunk/README.rst>, for just this sort of thing.
Building your software usually involves getting dependencies, & those dependencies are, hopefully, in more than one location—which includes a cronjob to a bare repo, or Alice’s fork on another repo that at least has the latest tags. It should be trivial to point to these as mirrors for the cases where any forge/repository, even the ones held by megacorporations, inevitably go down. Even Nixpkgs itself, while not maintaining their own official mirrors, are mirrored by TUNA. Backups are an important strategy, & the source code should also be a part of that.
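For anyone wanting a concrete starting point, the cronjob-to-a-bare-repo approach is roughly two lines of shell (the URL and paths here are made up, adjust to taste):

```
# one-time: create a bare mirror clone somewhere you control
git clone --mirror https://github.com/example/project.git /srv/mirrors/project.git

# crontab entry: refresh the mirror hourly, picking up new branches and tags
0 * * * * git --git-dir=/srv/mirrors/project.git remote update --prune
```

Tools that accept a list of source URLs (Nix's fetchurl with its urls attribute, for example) can then list the mirror alongside the canonical location and fall back automatically.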
These are different concerns. There are a lot of use cases where folks are just getting dependencies & not interacting with the bug tracker or continuous integration, which are less critical & can be accessed later or run locally.
I've been getting some weird cryptocurrency spam notifications on GitHub and they can't be cleared for some reason. Blue dot is gonna be there forever apparently. Some users made an issue out of it but nobody cared to fix it.
GitHub Actions is a good example of a system thrown together that, at face value, has something to offer until it gets put under stress.
Just now I found:
* a job that's > 1 month old, still running
* another job that started 2 hours ago that had 0 output
* a job that was marked as pending, yet I could rerun it
* auto-merges that don't happen
* pull requests show (1), click it, no pull requests visible
Makes me wonder in how many places state is stored, because there is some serious disconnect between them.
That's just post-Windows 8 Microsoft quality for you. Every product has been like that - looks "ok" on the outside (in reality it looks shit, but at least that's intentional), but the second you dig deeper and start using it you get all kinds of paper cuts like that.
It is a fun bootstrapping problem. How do you firewall off enough dedicated resources to stand up your infrastructure if you dogfood your own product? It's probably insidiously easy to end up with a dependency on the production service.
I've gotten accustomed lately to spending a lot of time in the Github Copilot / agent management page. In particular I've been having a lot of fun using agents to browse some of my decade-old throwaway projects; telling it to "setup playwright, write some tests, record screenshots/videos and commit them to the repo" works every time and it's a great way to browse memory lane without spending my own time getting some of these projects building and running again.
However this means I'm now using the Github website and services 1000x more than I was previously, and they're trending towards having coin-flip uptime stats.
If Github sold a $5000 box I could plug into a corner in my house and just use that entire experience locally I'd seriously consider it. I'm guessing maybe I could get partway there by spending twice that on a Mac Pro but I have no idea what the software stack would look like today.
Is there a fully local equivalent out-of-the-box experience that anyone can vouch for? I've used local agents primarily through VSCode, but AFAIK that's limited to running a single active agent over your repo, and obviously limited by the constraints of running on a single M1 laptop I currently use. I know at least some people are managing local fleets of agents in some manner, but I really like how immensely easy Github has made it.
None of the open weights models you can run locally will perform at the same level as the hosted frontier models. Some of them are becoming better, but the step-down in output quality is very noticeable for me.
> If Github sold a $5000 box I could plug into a corner in my house and just use that entire experience locally I'd seriously consider it. I'm guessing maybe I could get partway there by spending twice that on a Mac Pro but I have no idea what the software stack would look like today.
Right now, the only reasons to host LLMs locally are if you want to do it as a hobby or you are sensitive about data leaving your local network. If you only want a substitute for Copilot when GitHub is down, any of the hosted LLMs will work right away with no up front investment and lower overall cost. Most IDEs and text editors have built-in support for connecting to other hosted models or installing plugins for it.
> I know at least some people are managing local fleets of agents in some manner,
If your goal is to run fleets of agents in parallel, local LLM hosting is going to be a bottleneck. Familiarize yourself with some of the different tool options out there (Claude Code, Cline, even the new Mistral Vibe) and sign up for their cloud API. You can also check OpenRouter for some more options. The cloud-hosted LLMs will absorb parallel requests without problem.
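Switching between them is low friction because many of these providers expose an OpenAI-compatible chat completions endpoint; roughly something like this against OpenRouter (the model slug and prompt are just placeholders):

```
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "anthropic/claude-sonnet-4",
        "messages": [{"role": "user", "content": "Summarize why this CI job failed."}]
      }'
```

The same request shape works against most of the other hosted APIs once you swap the base URL and key, which makes it easy to route around whichever provider is having a bad day.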
Thank you, a bit sad to hear that local inference isn't really at this level of performance yet. I was previously using the VSCode agent chat and playing with both OpenAI and Github hosted models but I switched to using the Github web UI directly a lot since my workflow became a lot more issue/PR-focused. Sounds like I should probably tighten up the more generic IDE-centric workflow and make it a keyboard shortcut to switch around when a given provider is down. I haven't actually used Claude directly yet but I think Github agents often use it under the hood anyway.
An NVIDIA DGX Spark is $4000; pair that with a relatively cheap second box running GitLab in the corner and you would have a pretty good local AI inference setup. (You'd probably have to write a nontrivial amount of software to get the setup where you want it.)
The local models are right on the edge of being really useful; there's a tipping point where accuracy is high enough that getting things done is easy versus the models getting continuously stuck. We're in the neighborhood.
Alternatively, just run a local GitLab and use one of the many hosted model APIs; those are much more stable than GitHub. Honestly, just get yourself a Claude subscription.
The DGX Spark is not good for inference though; it's very bandwidth-limited - around the same as a lower-end MacBook Pro. You're much better off with Apple silicon for performance and memory size at the moment, but I'd recommend holding off until the M5 Max comes out, as the M5 has vastly superior performance to any other Apple silicon chip thanks to its matmul instruction set.
I can't say I'm not tempted looking at the Spark, I could probably save some cash on heating my house with that thing. Though yeah unless there's some good software already built around a similar LLM workflow I could use it'd probably be wasted on me, or spend its time desperately trying to pay for itself with crypto mining.
Adding Claude to my rotation is starting to look like the option with the least amount of building the universe from scratch. I have to imagine it can be used in a similar or identical workflow to the Copilot one where it can create PRs and make adjustments in response to feedback etc.
See the edit history here: https://news.ycombinator.com/item?id=46133179
It may have been updated, but nobody is reading the update.
Are you still seeing it? Would you mind checking? Our team will get on it if so.
```
gh api notifications -X PUT -F last_read_at=2025-10-06T00:00:00Z
```
Just change the date to today. I also got that line from a gh issue somewhere - maybe it was the same issue that you’re referring to.
```
gh api notifications\?all=true | jq -r 'map(select(.unread) | .id)[]' | xargs -L1 sh -c 'gh api -X PATCH notifications/threads/$0'
```
https://news.ycombinator.com/formatdoc
https://github.com/orgs/community/discussions/174310#discuss...
I had the same issue too, and this was the only thing that fixed it for me.
> If Github sold a $5000 box I could plug into a corner in my house and just use that entire experience locally I'd seriously consider it. I'm guessing maybe I could get partway there by spending twice that on a Mac Pro but I have no idea what the software stack would look like today.
https://docs.github.com/en/enterprise-server@3.19/admin/over...
"GitHub Enterprise Server is a self-hosted version of the GitHub platform"