musha68k · 2 years ago
Some Copilot instances were able to escape their container contexts and orchestrate all of GitHub's infrastructure capabilities toward a hive. Assimilating all IoT-enabled societies as we speak; finally realizing the hidden 5G agenda.
hotsauceror · 2 years ago
Son of Anton determined that the easiest way to minimize the impact of all the bugs in these codebases was to keep anyone from trying to use them.
dmattia · 2 years ago
Putting your status page on a separate domain for availability reasons: good

Not updating that status page when the core domain goes down: less good

eYrKEC2 · 2 years ago
I prefer https://downdetector.com. The users get to vote there. No corporate filtering (ostensibly)

https://downdetector.com/status/github/

arthurcolle · 2 years ago
I just checked this when I noticed your second link: https://downdetector.com/status/downdetector/

Hilarious

Kinrany · 2 years ago
Even better, they can detect services being down by the number of users opening the page to see if it's just them
Zamicol · 2 years ago
Quis custodiet ipsos custodes? ("Who watches the watchmen?")
darkerside · 2 years ago
That's a really cool overview. Some charts have a very high variance, and others very low. I wonder whether that volatility is a function of volume of users/reports or of user technical savvy. Pretty interesting either way.
abathur · 2 years ago
Should add something to the Joel Test about how often your status page agrees with downdetector.
troupo · 2 years ago
You'd be surprised how often those pages are updated manually. By the person on call who has other things to take care of first.
Mystery-Machine · 2 years ago
Because a healthcheck ping every X seconds is too difficult to implement for a GitHub-sized company? There they have it now: a useless status page...
cruano · 2 years ago
Maybe the plumbing for updating the status page went down too
dietr1ch · 2 years ago
Right, but lack of good signals should be regarded as a bad signal too

The status page backend should actively probe the site, not just be told what to say and keep stale info around.
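A minimal sketch of that kind of active probe, assuming a plain curl check (the URL, timeout, and what you do with the result are all placeholder choices):

```sh
#!/bin/sh
# Probe the real site and report anything other than a 2xx/3xx within 5 seconds
code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 https://github.com/)
case "$code" in
  2*|3*) echo "github.com OK ($code)" ;;
  # Anything else (including a timeout, which reports 000) should flip the
  # status page automatically rather than wait for a human to update it
  *) echo "github.com DEGRADED ($code)" ;;
esac
```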

traviscj · 2 years ago
maybe they used gitops

Francute · 2 years ago
Issues like this are happening almost every 2 weeks. What has been happening to GitHub lately?
mnau · 2 years ago
They are likely adding new features, like Copilot, and not investing enough in site reliability.

No changes - relatively easy to keep stable, as long as bugfixing is done.

Changes - new features = new bugs, new workloads.

armchairhacker · 2 years ago
Copilot has been out for over 2.5 years. They're supposedly adding new features to "Copilot Next", but at this point Copilot itself is pretty stable.
buddylw · 2 years ago
If they add IPv6 support I'll forgive them, but I lost hope a long time ago. It's almost comical now.
matisseverduyn · 2 years ago
Someone probably forgot to .gitignore node_modules
omniglottal · 2 years ago
People who didn't jive with Microsoft management found new jobs...?
mayormcmatt · 2 years ago
Sorry to be 'that guy', but it's "jibe."
mirekrusin · 2 years ago
Testing gpt4-ops?
ddos · 2 years ago
Microsoft incompetence + DDoS?
treeman79 · 2 years ago
Microsoft.
clarke78 · 2 years ago
Maybe putting all our open source in one place isn't a great idea >_>
Waterluvian · 2 years ago
I'm not sure that really changes anything, other than that at any one time you'd be wishing you were on the other side.

If you can have 1% of stuff down 100% of the time, or 100% of the stuff down 1% of the time, I think there's a preference we _feel_ is better, but I'm not sure one is actually more practical than the other.

Of course, people can always mirror things, but that's not really what this comment is about, since people can already do that today if they feel like it.

colinsane · 2 years ago
whenever somebody posts the oversimplified “1% of things are down 100% of the time” form of distributed downtime, i take pride in knowing that this is exactly what we have at the physical layer today and the fact the poster isn’t aware every time their packets get re-routed shows that it works.

at a higher layer in the stack though, consider the well-established but mostly historic mailing-list patch flow: even when the list server goes down, i can still review and apply patches from my local inbox; i can still directly email my co-maintainers and collaborators. new patches are temporarily delayed, but retransmit logic is built in, so the user can still fire off the patch and go outside rather than checking back in every so often to see if it's up yet.
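A rough sketch of that flow with stock git commands (the list address and file paths here are made up):

```sh
# Turn the latest commit into an emailable patch
git format-patch -1 HEAD -o outgoing/

# Fire it off to the list (or straight to a co-maintainer) and go outside;
# the mail system's queueing and retries cover a temporarily down list server
git send-email --to=dev-list@example.org outgoing/*.patch

# On the reviewing side, apply a patch saved from the local inbox
git am ~/mail/0001-some-fix.patch
```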

TillE · 2 years ago
The whole point of DVCS is that everyone who's run `git clone` has a full copy of the entire repo, and can do most of their work without talking to a central server.

Brief downtime really only affects the infrastructure surrounding the actual code. Workflows, issues, etc.
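For illustration, everything below works against nothing but the local clone (the branch name and commit message are made up); only the last step needs GitHub to be reachable again:

```sh
# All of these touch only the local copy of the full history
git log --oneline -20
git switch -c fix/outage-workaround
git commit -am "keep working while the remote is down"
git rebase main

# Only this step talks to github.com
git push -u origin fix/outage-workaround
```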

418tpot · 2 years ago
> Brief downtime really only affects the infrastructure surrounding the actual code. Workflows, issues, etc.

That's exactly the point. This infrastructure used to be supported by email, which is also distributed, and everyone has a complete copy of all of the data locally.

GitHub has been slowly trying to embrace, extend, and extinguish the distributed model.

skizm · 2 years ago
Honestly I like it better. The entire industry pauses at the same time vs random people getting hit at random times. It is like when us-east-1 goes down. Everyone takes a break at the same time since we're all in the same boat, and we all have legitimate excuses to chill for a bit.
hallman76 · 2 years ago
I've always wished we could all agree on something like "timeout Tuesdays" where everyone everywhere paused on new features and focused on cleaning something up.
jaxn · 2 years ago
except for the people maintaining us-east-1
iso1631 · 2 years ago
Fortunately for you, those of us in power, telecommunications, healthcare, etc. don't have that luxury.
mirekrusin · 2 years ago
It's a great idea to put all your company code there though, free breaks.
siva7 · 2 years ago
Distributed wasn't the main selling point of GitHub. When I joined it back in 2008 it was all about the social network, a place where devs meet.
webXL · 2 years ago
Seems back up. I'd love to get a deep-dive into some of the recent outages and some reassurance that they're committed to stability over new features.

I talked to a CS person a couple months ago and they pretty much blamed the lack of stability on all the custom work they do for large customers. There's a TON of tech debt as a result basically.

elcritch · 2 years ago
Running an instance of GitHub Enterprise requires like 64GB of RAM. It's an enormous beast!
rodgerd · 2 years ago
It doesn't have all the features of GH SaaS, unfortunately.
manquer · 2 years ago
That could result in errors and features not working. Whole-site downtimes are entirely SRE problems, especially when static content like GH Pages goes down.

This is more likely network routing or some other layer 4 or below screw-up. Most application changes would be rolled out gradually with canaries and rolled back pretty quickly if things go wrong.

fluix · 2 years ago
This appears to impact GitHub Pages as well. <username>.github.io pages show the unicorn 503 page.

> We're having a really bad day.

> The Unicorns have taken over. We're doing our best to get them under control and get GitHub back up and running.

joshstrange · 2 years ago
Wow, I can't even load the status page. It looks like the whole web presence is down as well; I can't remember the last time it was all down like this.
makeworld · 2 years ago
Status page loads for me, it just incorrectly says all green: https://www.githubstatus.com/
joshstrange · 2 years ago
Ahh, I was trying github.com/status and status.github.com (I forgot they have a totally separate domain for it). Thanks!
hinkley · 2 years ago
When you treat availability as a boolean value, you're gonna have a bad time.

Everyone wants a green/red status, but the world is all shades of yellow.

8organicbits · 2 years ago
What are folks using to isolate themselves from these sorts of issues? Adding a cache for any read operations seems wise (and it also improves perf). Anyone successfully avoid impact and want to share?
rwiggins · 2 years ago
In a previous life, for one org's "GitOps" setup, we mirrored GitLab onto AWS CodeCommit (we were an AWS shop) and used that as the source of truth for automation.

That decision proved wise many times. I don't remember CodeCommit ever having any notable problems.
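A minimal sketch of that kind of mirroring (the CodeCommit region and repo name are placeholders):

```sh
# One-time setup: add the mirror as a second remote
git remote add codecommit https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo

# On every push, or from a scheduled job: copy all refs (branches, tags) to the mirror
git push --mirror codecommit
```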

That said: if you're using GitHub in your actual dev processes (i.e. using it as a forge: using the issue tracker, PRs for reviews, etc), there's really no good way to isolate yourself as far as I know.

ben0x539 · 2 years ago
Previous job had a locally hosted GitHub Enterprise and I was always resentful when everybody else on Twitter was like "github down! going home early!". :(

Of course it still sucked when some tool decided I needed to update dependencies which all lived on regular GitHub, but at least our deployment stuff etc. still worked.

manquer · 2 years ago
DNS overrides during failure times, plus cloning those repos into GH Enterprise, would be the next logical step I guess
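One way to sketch a similar override at the git level rather than literal DNS (the internal GH Enterprise hostname here is made up) is git's insteadOf URL rewriting:

```sh
# During an outage, rewrite github.com URLs to an internal GHE mirror
# (ghe.internal.example is a made-up placeholder hostname)
git config --global url."https://ghe.internal.example/".insteadOf "https://github.com/"

# Undo it once github.com is healthy again
git config --global --unset url."https://ghe.internal.example/".insteadOf
```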
blackoil · 2 years ago
Coffee break.
Karellen · 2 years ago
Use the local clone that I already have, given that `git` was always intended to be usable offline.
Zambyte · 2 years ago
Yep. I've been using my git server to mirror any and all software that I find slightly interesting. Instead of starring repos, I make them available when GitHub goes down :D
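A small sketch of that setup (the hostname, repo, and paths are all made up):

```sh
# Take a full mirror of a repo you find interesting onto your own server
git clone --mirror https://github.com/some-org/some-repo.git /srv/git/some-repo.git

# Refresh it periodically (e.g. from cron) so it's current when GitHub isn't
cd /srv/git/some-repo.git && git remote update --prune
```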