Roguelazer · 3 months ago
I think this is ignoring a lot of prior art. Our deploys at Yelp in roughly 2010 worked this way: you flagged a branch as ready to land, a system (`pushmaster`, aka `pushhamster`) verified that it passed tests, did an octopus merge of a bunch of branches, verified that the merged result passed tests, deployed it, and then landed the whole thing to master once it was happy on staging. And this wasn't novel at Yelp; we inherited the practice from PayPal, so my guess is that most companies that care at all about release engineering have been doing it this way for decades. It was just a big regression when people stopped having professional release management teams and started cowboy-pushing to `master`/`main` on GitHub some time in the mid-2010s.
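The octopus-merge batch described above can be sketched roughly like this (hypothetical branch names in a throwaway repo, not Yelp's actual tooling):

```shell
# Rough sketch of the pushmaster-style batch: octopus-merge several
# candidate branches at once, then test/deploy the combined result
# before landing it on main.
set -e
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
git config user.email ci@example.com && git config user.name CI
git commit -q --allow-empty -m "initial"

# Each candidate branch adds its own change on top of main.
for b in feature-a feature-b feature-c; do
  git checkout -q -b "$b" main
  echo "$b" > "$b.txt"
  git add "$b.txt"
  git commit -q -m "add $b"
done

git checkout -q main
# One octopus merge combines all candidates into a single merge commit;
# this is the batch you'd run tests against before deploying.
git merge -q -m "octopus merge of candidate branches" feature-a feature-b feature-c
git log --oneline
```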
jd__ · 3 months ago
That's super interesting, thanks for sharing the Yelp/PayPal lineage. You're right: there's probably a lot of prior art in internal release engineering systems that never got much written up publicly.

The angle we took in the blog post focused on what was widely documented and accessible to the community (open-source tools like Bors, Homu, Bulldozer, Zuul, etc.), because those left a public footprint that other teams could adopt or build on.

It's a great reminder that many companies were solving the "keep main green" problem in parallel (some with pretty sophisticated tooling), even if it didn't make it into OSS or blog posts at the time.

qlm · 3 months ago
Gotta be honest: the AI-ness of both the images and the text in this blog post (as well as your response) leaves a bad taste.
w10-1 · 3 months ago
> we inherited the practice from PayPal

PayPal got it from eBay, which in the 2000s was rolling out 20M LOC worldwide every week or two on "release trains". There, a small team of kernel engineers rotated doing the merging: two weeks of ClearCase hell when it was your turn.

And since eBay wrote their own developer tools, you'd have to deploy different tooling depending on which branch you were on. But because of that custom tooling, if there was a problem in the UI, in debug mode you could select an element in the browser and navigate to the Java class, in the particular component and branch, that produced that element.

jes5199 · 3 months ago
I don't know what it's like now, but GitHub's internal merge queue circa 2017 was a nightmare. Every PR required you to set aside a full day of babysitting to get it merged and deployed; there were too many nondeterministic steps.

You'd join the queue, and then you'd have to wait for like 12 other people in front of you, each of whom would spend up to a couple hours trying to get their merge branch to go green so it could go out. You couldn't really look away because your turn could come out of nowhere, and you had to react like you were on call, because the whole deployment process was frozen until your turn ended. Often that just meant clicking "retry" on parts of the CI process, but it was complicated, and there were dependencies between sections of tests.

rufo · 3 months ago
It got a little bit better, first with trains (bundling together PRs so they weren't going out one at a time), and then the merge queue started automating most of the testing and fitting together PRs into bundles that could go out together. But by the time I left GH last year it had devolved into roughly the same amount of hassle; I had multiple days where I could queue a PR for deploy mid-morning and not have the deploy containing it go out until dinnertime, and I'd need to keep an eye out in Slack in case merge or test conflicts arose.
mattmatheson · 3 months ago
How did you guys resolve merge or test conflicts? Did you just have to rebase, update your PR, and resubmit?
jes5199 · 3 months ago
well, that’s horrifying! I appreciate the update though
fosterfriends · 3 months ago
Pretty direct rip of this blog post I wrote a while back: https://graphite.dev/blog/bors-google-tap-merge-queue
pcthrowaway · 3 months ago
I've only skimmed both, but I'm under the impression that your article lays out the problem much more clearly. For example, it makes much clearer than OP's that merge skew refers to the situation where two PRs, submitted closer together than the test suite takes to run, can each show passing tests on their own branch even though the two merged together would fail.
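Merge skew is easy to show with a toy example (all names and the "CI" function here are hypothetical, just to make the failure mode concrete):

```python
def run_tests(code):
    """Toy CI: the suite is green iff every call site has a matching definition."""
    defs = {line.split("(")[0].split()[-1] for line in code if line.startswith("def")}
    calls = {line.split("(")[0] for line in code if not line.startswith("def")}
    return calls <= defs

base = ["def greet()", "greet()"]

# PR A renames the function and fixes the one call site it knows about.
pr_a = ["def hello()", "hello()"]
# PR B, branched from the same base, adds another call to greet().
pr_b = ["def greet()", "greet()", "greet()"]

assert run_tests(pr_a)        # green on its own
assert run_tests(pr_b)        # green on its own

# A textual merge of A and B has no conflict, but greet() no longer exists.
merged = ["def hello()", "hello()", "greet()"]
assert not run_tests(merged)  # main goes red
print("both PRs were green alone, but the merge is broken")
```

This is exactly the window a merge queue closes: it tests the PRs *as merged together* before anything lands.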

Another user suggested that the OP article may be AI-generated or AI-assisted; I'm not confident one way or the other, but it does have me questioning whether HN has any AI-detection mechanisms (though I'm not sure how effective those would be as AI keeps evolving).

LegNeato · 3 months ago
Yeah, there is way more prior art. We were doing this at FB early (here is a talk from 2014 mentioning it but it was much earlier: https://www.infoq.com/presentations/facebook-release-process...) and we definitely did not invent it...it was already the industry standard for large CI.
zdw · 3 months ago
This seems to skip the idea of stacked commits plus automatic rebasing, which have been around in Gerrit and other tools for quite a while.

If you read between the lines, the underlying problem in most of the discussion is GitHub's dominance of the code-hosting space coupled with its less-than-ideal CI integration, which, while getting better, is stuck with baggage from all their past missteps and general API frailty.

jd__ · 3 months ago
That's a good point. To clarify, Gerrit itself didn't actually do merge queuing or CI gating. Its model was stacked commits: every change was rebased on top of the current tip of main before landing. That ensured a linear history but didn't solve the "Is the whole pipeline still green when we merge this?" problem.

That's why the OpenStack community built Zuul on top of Gerrit: it added a real gating system that could speculatively test multiple commits in a queue and only merge them if CI passed together. In other words, Zuul was Gerrit's version of a merge queue.
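The speculative-gating idea can be sketched in a few lines (toy CI function and a hypothetical `gate` helper; real Zuul tests the queue in parallel and restarts the jobs behind an evicted change rather than working strictly one at a time):

```python
def gate(main_branch, queue, ci):
    """Speculatively test each queued change on top of everything ahead
    of it; land only changes whose combined result is green."""
    merged = list(main_branch)
    landed = []
    for change in queue:
        candidate = merged + [change]
        if ci(candidate):
            merged = candidate      # green together: this change lands
            landed.append(change)
        # red: evict the change; later changes re-test without it
    return merged, landed

# Toy CI: the build is red iff the history contains both "A" and "C".
ci = lambda history: not ({"A", "C"} <= set(history))

tip, landed = gate(["base"], ["A", "B", "C"], ci)
print(landed)  # ['A', 'B'] land; 'C' is evicted for conflicting with 'A'
```

The key property is that CI always runs against the exact state main will be in after the merge, not against the state the change was written on.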

wbl · 3 months ago
I thought Gerrit integrates with try bots and merge bots, though.
sfink · 3 months ago
This covers part of the problem, the part where your tests are enough to indicate whether the changes are good enough to keep. In that scenario, relying only on fast-forward merges is good enough.

One trickier problem is when you don't know until later that a past change was bad: perhaps slow-running performance tests show a regression, or flaky tests turn out to have been showing a real problem, or you just want the additional velocity of pipelined landings that don't wait for all the tests to finish. Or perhaps you don't want to test every change, but then when things break you need to go back and figure out which change(s) caused the issue. (My experience is at Mozilla where all these things are true, and often.) Then you have to deal with backouts: do you keep an always-green chain of commits by backing up and re-landing good commits to splice out the bad, and only fast-forwarding when everything is green? Or do you keep the backouts in the tree, which is more "accurate" in a way but unfortunate for bisecting and code archaeology?

oftenwrong · 3 months ago
I think there was an even earlier example of merge trains at Etsy mentioned here: https://pushtrain.club/

This blog post about choosing which commit to test is also relevant and may be of interest: https://sluongng.hashnode.dev/bazel-in-ci-part-1-commit-unde...

peterldowns · 3 months ago
No mention of Graphite.dev? Oh, it's written by Mergify, got it.
mlutsky1231 · 3 months ago
(Co-founder of Graphite here) Even better - they didn't mention that Shopify deprecated Shipit in favor of Graphite's merge queue for their new monorepo.
