The "nothing to see here" approach to access control has a lot of weird culture-consequences. I wish software would just address me like the peasant that I am, rather than trying to gaslight me into believing that my artificially limited world is the whole one.
I think if you are in a corporate account and have correct access permissions to the account (i.e. URL namespace) it should not show 404. It's just super confusing.
We were warned multi-deploys with big changes were incoming: "For lack of a better term, some big shit is coming at GitHub Universe." - Thomas Dohmke, CEO
I cannot wait for Gitea, Forgejo, and GitLab to start federating with each other via ActivityPub. Then we can all take one more step away from a corporate-controlled internet.
- "one more step away from a corporate-controlled internet"
Downvote me to grey-world if you like, but I think everyone's crazy to put all their code infrastructure in the hands of fucking Microsoft. Especially literal free open-source software. Who do you think Microsoft is? What do you know of Microsoft's history and their core values (they're "embrace, extinguish & exsanguinate"). It's like giving fucking Sauron safekeeping of your power-rings in Mordor, oh we have great infrastructure for safe ring storage here, very secure, the orcs are really expert guards.
What exactly is the risk? That they'll stop providing the services they sell today? The design of git makes switching to another primary remote very easy (granted, most users probably don't have good habits around backing up data from Issues/Wiki/Releases and risk losing that data if it's taken away suddenly -- but the repo itself is durable and portable on a whim).
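Something like this is usually all it takes to move, with the destination URL here being a made-up example:

# Sketch, destination URL is hypothetical: repoint a clone at a new primary remote.
git remote rename origin github              # keep the old remote around, just in case
git remote add origin git@gitlab.example.com:me/myproject.git
git push -u origin --all                     # push every local branch to the new remote
git push origin --tags                       # tags aren't covered by --all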
That would surely be nice and helpful, but why do we need to wait for it?
Even my open source projects on GitHub are just mirrors of the "real place" of work: GitLab or my own Gitea instance. If GitHub is down, it's a minor inconvenience, but I can still work.
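Roughly this kind of setup, with made-up hostnames, so that one push updates both the real home and the GitHub mirror:

# Sketch, hostnames made up: fetch from the real home, push to it and to the GitHub mirror.
git remote set-url origin git@gitea.example.com:me/myproject.git
git remote set-url --add --push origin git@gitea.example.com:me/myproject.git
git remote set-url --add --push origin git@github.com:me/myproject.git
git push origin main    # updates both remotes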
GitLab, Gitea, and Forgejo are applications that can be easily self-hosted. One of those also has a corporation associated with it, but that has minimal effect in this case.
I know this is a greybeard's fantasy and that most people working today were trained not to bother, but: important things should not have GitHub as a failure point.
Hobby projects and today's work? Sure. Point straight at GitHub and hack away. And when it goes down, get yourself a coffee.
But everything that's anywhere near production should have already pointed those GitHub requests to a mirror or other tool in your own controlled ecosystem. The status of GitHub should have nothing to do with whether your customers are getting what they're paying you for. Same goes for Docker containers and every other kind of remotely distributed dependency.
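For the git side of that, git's own URL rewriting is enough; a sketch with a hypothetical internal mirror host:

# Sketch, mirror hostname is hypothetical: transparently rewrite GitHub URLs to an internal mirror.
git config --global url."https://git-mirror.internal.example/github/".insteadOf "https://github.com/"
# After this, `git clone https://github.com/foo/bar` actually talks to the mirror,
# so builds keep working even when github.com is down.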
It's not that black and white. Where do you draw the line on what can and can't be a failure point?
My cloud provider is probably an acceptable point. If every AWS region goes down I'm not going to have a spare cloud provider.
What about an auth provider? Do I need a backup there?
What about CI, do I need multiple CI systems?
3rd party search services, realtime messaging services, the list goes on.
For 1% of systems, you need backups for all of these (or to not use anything external). For the other 99%, building backups for every one of these systems is a losing business strategy.
Some of them sure, but which those are will vary based on the context. It's not as simple as having a backup for "every other kind of remotely distributed dependency."
Interesting perspective, but I disagree on one central tenet. Even though GitHub holds production code, it is not production. Built artifacts and the machines running those artifacts are production. When GitHub goes down, which it rarely does, it just means developers can't sync for a couple of hours, no different than if someone works offline. The temptation to increase internal devops complexity should not be an automatic immune response when a service goes down; it comes with all sorts of hidden costs.
It's not entirely clear to me whether you're talking about using GitHub for your own production tooling, or as a source for some arbitrary third-party component. If it's the latter, then I completely agree with you. Use a read-through proxying package repository. I don't care if you run it yourself or if you pay a provider, but don't pull stuff from the origin every time you build.
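For example, with a hypothetical proxy instance (run your own or rent one, whatever), pointing the package managers at it is a one-liner each:

# Sketch: route installs through a caching proxy; the registry URLs are placeholders.
npm config set registry https://packages.internal.example/repository/npm-proxy/
pip config set global.index-url https://packages.internal.example/repository/pypi-proxy/simple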
In the general case, adopting an external system will bring with it greater reliability than trying to run stuff oneself. The differences are that you don't get to choose your maintenance windows, and you can't do anything to fix it yourself.
Take care about who you pick, and own the dependency, because you've put a part of your own reputation in the hands of your provider.
Now, if you pick GitHub as a part of your controlled ecosystem -- which is totally reasonable, if it fits your use-case -- then you still shouldn't be pulling arbitrary stuff from places outwith your control. GitHub has package repository tooling that you can use :). Although it's not entirely clear to me that it's as suitable for third-party dependencies as tools like Artifactory or Nexus.
In theory, the code is there, but putting a project back together in a hurry after trouble at GitHub is non-trivial, especially if the build process uses proprietary stuff such as GitHub Actions. The issues and discussions are all GitHub-only, too.
Host a backup of your own code? It's easy and can be done on a Raspberry Pi. I wrote a Go program in 1000 lines that automatically does this for me. And then I actually started using that as the main source and pushing the backup to GitHub.
It also pulls down anything I star into a different folder, which gets synced once a day. The rest get synced every hour.
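For anyone who doesn't want to write their own program, a rough shell equivalent of the starred-repo part, run from cron (assumes the gh CLI is installed and authenticated; paths are arbitrary):

#!/bin/sh
# Sketch: mirror everything the authenticated user has starred into ./starred.
mkdir -p starred && cd starred || exit 1
gh api --paginate user/starred --jq '.[].clone_url' | while read -r url; do
  name=$(basename "$url" .git)
  if [ -d "$name.git" ]; then
    git -C "$name.git" remote update --prune    # refresh an existing mirror
  else
    git clone --mirror "$url" "$name.git"       # first-time full mirror
  fi
done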
Sadly, GitHub doesn't store its value-add assets within the repository itself; so all of the PR conversations, Gists, Issues, and so forth aren't within git itself.
I realize you're talking about git, but I think it's also pretty important to have a place where users can submit issues and devs can say "it's fixed in version XYZ"; these are not features of git.
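The issue history can at least be snapshotted through the API so it isn't lost with the account or the outage; a rough sketch with the gh CLI, where OWNER/REPO are placeholders:

# Sketch, OWNER/REPO are placeholders: dump issues and comments as JSON lines.
gh api --paginate --jq '.[]' "repos/OWNER/REPO/issues?state=all" > issues-backup.jsonl
gh api --paginate --jq '.[]' "repos/OWNER/REPO/issues/comments" > issue-comments-backup.jsonl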
I need a macro template for those memes with Bart Simpson at the blackboard, writing "I will always have a backup plan for third-party services".
Seriously, people: Gitea exists. Self-hosted GitLab exists. Drone/Woodpecker CI exists. It's not that difficult to set up a project that does not depend on GitHub. I spent less time setting these up than the amount of downtime that GitHub has had this year.
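For the Gitea case, a throwaway single-container setup is enough to start mirroring into (ports and volume name are just examples; harden it before relying on it):

# Sketch: minimal single-node Gitea; ports and volume name are arbitrary.
docker run -d --name gitea \
  -p 3000:3000 -p 2222:22 \
  -v gitea-data:/data \
  gitea/gitea:latest
# Web UI at http://localhost:3000, git-over-SSH on port 2222.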
It's amazing how many of these issues can be obviated by taking a step back at a given SaaS and asking: can I self-host this? If you can, and you need redundancy, just buy two desktops and stick one in your place and another in a friend's apartment in another town. With a lightweight static site, a modern desktop is probably more than powerful enough to deal with most any load you might realistically see for your given project. You are also extremely unlikely to have both of those desktops go down at once if they are on different local power grids and internet service providers, short of an invasion of the continental U.S., perhaps.
If you’re using 50 things requiring separate backup plans, and you aren’t large enough that it’s also no problem to organize backup plans for those 50 things, you’re doing something wrong, I’d say.
Yesterday I couldn't set up a bunch of servers that I needed provisioned because Cloudflare's API had an outage.
Today, if I were using GitHub, my day would be wasted again.
For all the talk about companies trying to cut down on meetings by putting a sticker price on them (this 30-minute meeting could've been an email and cost $2000), at what point do we start saying "this outage could've been avoided and cost us $5k"?
⮕ git push
ERROR: Repository not found.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
PR really spun gold when they decided to label everything, from every single one of their databases getting deleted and backups nuked to intermittent connectivity issues, as "degraded". Who exactly are they making feel better by not calling a spade a spade?
I take issue with clear PR-speak trying to make the issue seem smaller than it actually is. When you're having an outage, call it an outage. Having a feature completely unusable and labeling it as "degraded performance" is clearly twisting your words to lessen the outward perception of the scale of the problem.
https://twitter.com/ashtom/status/1720319071567421679
Guess I can't treat github like a CDN.
The current abomination I'm working on avoids this by caching the errors and serving them for several hours...
https://www.google.com/finance/quote/GTLB:NASDAQ
0: https://myrepos.branchable.com/
I've been meaning to give radicle a try.
Sighed, assumed I was on the first wave of an incident on top of their current Slack incident, then logged off for the day.