Before Microsoft bought them they were basically at a standstill and no new features were being added to the product. At least, that's my recollection of it, perhaps someone can correct me if I'm wrong.
I'm sure making private repos free also dramatically increased the number of total repos to manage. I know personally that I went from having a couple of public repos to at least a dozen private repos for notes, configs, etc.
Am I the only one who remembers frequent outages before the acquisition? I didn't keep track in a spreadsheet or anything, so I don't have data to back it up, but I always assumed it was similar to Twitter in the same era: it's hard to make a stable service in Rails.
If anyone can actually put out the data that would be great. I suspect there was a period post acquisition of relative stability that dulls our memory of frequently broken PRs which we probably just got used to at the time.
I’ve also noticed a lot of consistency issues on their site recently.
Open a PR; someone comments, and it doesn't show up no matter how many hard refreshes, browser restarts, etc. Push a commit to your branch, and it doesn't show up in the PR, but it does show up in the commit history. If you have auto-merge ticked, the PR might never merge even when it meets the conditions, and if the branch does merge you won't know, because, again, the PR never updates and still looks open.
I have these issues, in varying degrees of duration and severity, about once a week.
I gave up trying to be a polite API consumer with GitHub events, etc. Polling the basic resources every minute is way more reliable and your code will survive a junior developer's shenanigans.
I had REST API ETag polling working well, but then I discovered my org event stream didn't include label changes. That's a separate thing I needed to poll, and at that point I lost my mind. I refuse to keep track of 6+ pieces of state in order to pull essential data from an API.
My current pattern is to list all open issues and compare their updated_at with persisted copies. If anything changed, I refresh the comments for the issue as well as the top-level items (title/body/labels).
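The pattern above can be sketched in a few lines. This is a minimal sketch: `find_stale` and `refresh` are hypothetical names, and the persisted state is just an in-memory dict keyed by issue number.

```python
def find_stale(open_issues: list[dict], seen: dict[int, str]) -> list[int]:
    """Return numbers of issues whose updated_at differs from the persisted copy."""
    return [
        issue["number"]
        for issue in open_issues
        if seen.get(issue["number"]) != issue["updated_at"]
    ]


def refresh(issue: dict, seen: dict[int, str]) -> None:
    """Re-sync one issue, then persist its updated_at.

    In a real poller this is where you would re-fetch the issue's
    comments and top-level fields (title/body/labels) from the API.
    """
    seen[issue["number"]] = issue["updated_at"]
```

Each poll cycle lists open issues, calls `find_stale`, and only does the expensive per-issue fetches for the numbers it returns.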
It's explicitly part of Shopify's and BigCommerce's design pattern. Like, yes, we offer webhooks, but you also need to come back once every X (I typically do hourly) and sweep for missed data.
Not to mention that with Shopify, webhooks are not guaranteed to be ordered at all. You may receive an order.updated event prior to an order.create event. Delivery mileage and timing may vary.
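One way to cope with unordered deliveries is to treat every event as an upsert and keep only the newest payload per id. This is a sketch, not Shopify's API: `apply_event`, the payload shape, and the in-memory `applied` store are all hypothetical.

```python
def apply_event(event: dict, applied: dict[str, str]) -> bool:
    """Apply a webhook payload only if it is newer than what we've applied.

    Treat create and update events the same way: upsert on whichever
    arrives first, and drop deliveries whose updated_at is not newer.
    ISO 8601 UTC timestamps compare correctly as plain strings.
    """
    order = event["payload"]
    last = applied.get(order["id"])
    if last is not None and order["updated_at"] <= last:
        return False  # stale or duplicate delivery; drop it
    applied[order["id"]] = order["updated_at"]
    return True
```

With this guard, an order.create that arrives after the order.updated for the same order is simply ignored.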
So, after all that back and forth we asked ourselves, why do we even bother with webhooks?
Well, for one, processing in real time helps our clients improve their shipping speed, keep real-time inventory, and so forth. Secondly, it keeps our processing flatter: we might run an hourly cron to pick up stragglers, but we don't do huge data dumps every hour. In some of our busier systems it can be a real challenge when clients do things like invoicing all their customers monthly. Using webhooks and processing records as they come in keeps the queue as small as reasonable.
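The hourly straggler sweep can be as simple as diffing the upstream's "recently updated" list against what the webhook consumer has already handled. A sketch with hypothetical names; in practice the window should overlap the previous sweep for safety.

```python
def sweep_stragglers(recently_updated: list[dict], processed_ids: set) -> list[dict]:
    """Return upstream records that the webhook pipeline never handled.

    recently_updated: records the upstream API reports as changed in the
    last sweep window (e.g. the past hour, plus some overlap).
    processed_ids: ids the webhook consumer has already processed.
    """
    return [r for r in recently_updated if r["id"] not in processed_ids]
```

Anything returned gets enqueued through the same pipeline as a normal webhook delivery, so the cron stays a small safety net rather than a bulk import.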
Yeah, GitHub doesn't like it when you favor frequent polling over hooks, but you pretty much need something polling (infrequently) to catch undelivered hooks.
FWIW, pull request heads can be accessed locally via
```
${remote}/pull/${ID}/head
```
where `remote` is the git remote for the repo (probably `origin`) and `ID` is the pull request number. You may need to fetch to get the up-to-date head, and if that still doesn't work, try fetching the ref directly from the remote (e.g. `git fetch origin pull/${ID}/head`).
You can then diff against main/master, try merging into main, etc., which should give you everything you need for a code review.
If you want to diff the current branch against main/master at the branch point, you can do:
```
git diff $(git merge-base --fork-point master)
```
That will diff against the point in history where the branch diverged.
And GitHub Actions was basically sharing the infra of Azure Pipelines.
Which implies a lot more challenges.
And the year before. My previous job even floated the idea to come back to onprem infra for this reason.
Of all the *DDs, this is one of the few that's done without much complaint, major problems, or controversy around it.
With all of the GitHub Universe AI announcements, it's sad to see the main GitHub product being ignored.