hn_throwaway_99 · 5 years ago
Interested in how they handle DB updates/migrations (I don't know what Slack uses for data storage backend).

IMO those DB migrations are the most difficult and risk-prone part, because you need to ensure that every server version running mid-deploy can work with whatever state your DB is in at that moment.

derekperkins · 5 years ago
Mostly MySQL that is moving to Vitess (transparently sharded MySQL). I believe they use gh-ost for migrations.
switch007 · 5 years ago
It’s almost a running joke that if a big or well known company blogs about their deploys, they won’t go into detail about databases.
brycethornton · 5 years ago
It's always nice to see how other teams do it. Nothing too groundbreaking here but that's a good thing.

I did notice the screenshot of "Checkpoint", their deployment tracking UI. Are there solid open source or SaaS tools doing something similar? I've seen various companies build similar tools, but I'd think most deployment processes are consistent enough that a third-party tool could be useful for most teams.

thinkingkong · 5 years ago
I've built that tool 2-3 times now. The issue is really the deploy function and what controls it. It's always either a one-off or so tightly integrated into the hosting environment that reaching in with a SaaS product is somewhat difficult. That being said, the new lowest-common-denominator standards like K8s make it way easier. If anyone is interested in using a tool, just leave a comment and I'll reach out.
sciurus · 5 years ago
Please provide a way for people to reach you without commenting here.
rajatvijay · 5 years ago
Interested!
lolftw · 5 years ago
Interested!
Dissori · 5 years ago
Interested!
rutigs · 5 years ago
Interested!
matheussampaio · 5 years ago
interested.
alexeyindeev · 5 years ago
Interested
piroux · 5 years ago
Interested
kronin · 5 years ago
Interested
oxygen0211 · 5 years ago
Interested, especially in K8s based
IBCNU · 5 years ago
interested
broth · 5 years ago
interested
gkcgautam · 5 years ago
interested
bsima · 5 years ago
interested
kvz · 5 years ago
Interested
mrdonbrown · 5 years ago
Sleuth is a SaaS deployment tracker that pulls deployments from source repositories, feature flags, and other sources, in addition to pushes via curl. You can see Sleuth used to, well, track Sleuth at https://app.sleuth.io/sleuth

[Disclaimer: am a Sleuth co-founder]

lukax · 5 years ago
I can also recommend Sleuth. We use it at our company and the integration is very good. Their team is constantly working on new features, integrations and better UI.

Hi Don :)

ivanfon · 5 years ago
Is it possible to view the page you linked without creating an account? It redirects me to your landing page.
paxys · 5 years ago
> most deployment processes are consistent enough

Definitely disagree with this. I have never worked at two places with a similar enough deploy process that would benefit from a generic tool.

brycethornton · 5 years ago
Sure, I see your point. I'd just like to see a pattern that works for most that could gain some traction. At the end of the day we're all trying to do the same thing (deploy high quality software), just in different ways. Deployment strategy shouldn't need to be a main competency of most teams.
bob1029 · 5 years ago
I've never seen anything that could even remotely give us what we wanted. We ultimately decided to roll our own devops management platform in-house, 100% focused on our specific needs. We are now on generation 4 of this system. We just rewrote our principal devops management interface using Blazor w/ Bootstrap 4. The management system has fairly complete control over each environment - Build/Deploy/Tracing/Configuration/Reporting/etc. is all baked in. We can go from merging a PR in GitHub to a client system being updated with a fresh build of master in exactly 5 button clicks with our new system.

The central service behind the UI is a pure .NET Core solution which is responsible for executing the actual builds. The entire process is self-contained within the codebase itself. The contract enforcement you get when the application you are building and tracking is part of the same type system as the application building and tracking it is very powerful.

codenesium · 5 years ago
I'm curious what a Jenkins + Octopus system is missing that your system provides. Most companies would have a hard time justifying the expense to build a bespoke system just for devops.
jjeaff · 5 years ago
GitLab's pipelines and issues/merge requests UI are similar and open source.
taleodor · 5 years ago
This is part of what we're doing with Reliza Hub - https://relizahub.com (note, we're in a very early stage).

Apart from tracking deployments, we're really focused on tracking bills of materials and communication between Business and Tech teams.

Jestar342 · 5 years ago
I don't know if this will tick all of the boxes you need because it is primarily IAC, and is for k8s only afaik: https://www.pulumi.com
sandGorgon · 5 years ago
I think ArgoCD is close.
nathankunicki · 5 years ago
Fun to read, but there's a lack of detail here that I'd like to see. For example, this talks purely about code changes. However, sometimes a code change requires a database schema change (as mentioned above), different APIs to be used, etc. In the percentage-based rollout where multiple versions are in use at once, how are these differences handled?
navaati · 5 years ago
For database schema changes, here is the standard practice (rough code sketch below):

- You have version 1 of the software, supporting schema A.

- You deploy version 2, supporting both schema A and the new schema B. Both versions coexist until the deployment is complete and all version 1 instances are stopped. During all this time the database is still on schema A; this is fine because your instances, both version 1 and version 2, support schema A.

- Now you do the schema upgrade. This is fine because your instances, now all running version 2, support schema B.

- At last, if you wish, you can deploy a version 3 that drops support for schema A.
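A minimal sketch of what the version 2 "supports both schemas" step can look like in application code. Everything below is hypothetical - a users table where schema A has a single name column and schema B splits it into given_name/family_name - with sqlite3 standing in for whatever database is actually in use:

    import sqlite3

    def has_column(conn: sqlite3.Connection, table: str, column: str) -> bool:
        # Detect which schema is live. A real app would cache this or consult a
        # schema-version table rather than probing on every call.
        return column in [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

    def fetch_display_name(conn: sqlite3.Connection, user_id: int):
        # Version 2 reads whichever schema is currently live, so it is safe to run
        # before, during, and after the A -> B migration.
        if has_column(conn, "users", "given_name"):    # schema B
            query = "SELECT given_name || ' ' || family_name FROM users WHERE id = ?"
        else:                                          # schema A
            query = "SELECT name FROM users WHERE id = ?"
        row = conn.execute(query, (user_id,)).fetchone()
        return row[0] if row else None

Version 3 is then just this function with the else branch deleted.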
daigoba66 · 5 years ago
We do it the other way (and I’ve always seen it done this way): database change is compatible with current code and new code. So deploy the database change, then deploy the code change. It usually allows you to rollback code changes.
rockostrich · 5 years ago
My company uses HBase currently for things on premise and we're moving to a mix of psql and BigTable in GCP. This is how we do things except all of our "schemas" are defined by the client so we just have to make sure that serialization/deserialization works correctly. With psql we might have to figure out a migration strategy, but for now we'll just be using it to store raw bytes.
tantalor · 5 years ago
Easy: don't do that.

Always make your code compatible with the old and new schema. Migrate the database separately. Then after the migration, remove the code that supports the old schema.

aledalgrande · 5 years ago
I think every DB change should be done like you suggest. An example I worked on recently (rough code sketch after the list):

- migrate DB and create new field

- deploy code for writing into such field (not read yet), in parallel with old field

- backfill data migration for older records

- deploy code with feature flag to read new field in workflows, but still write to both fields

- switch read feature flag on

- make sure everything works for a few weeks

- switch write feature flag to only use new field
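A rough sketch of those stages, with sqlite3 standing in for the real database, module-level booleans standing in for the real feature-flag service, and a made-up full_name -> display_name column migration:

    import sqlite3

    WRITE_NEW_FIELD = True    # flipped on once the new column exists
    READ_NEW_FIELD = False    # flipped on later, after the backfill has run

    def save_name(conn: sqlite3.Connection, user_id: int, name: str) -> None:
        # Keep writing the old field; also write the new one behind the flag.
        conn.execute("UPDATE users SET full_name = ? WHERE id = ?", (name, user_id))
        if WRITE_NEW_FIELD:
            conn.execute("UPDATE users SET display_name = ? WHERE id = ?", (name, user_id))
        conn.commit()

    def load_name(conn: sqlite3.Connection, user_id: int):
        column = "display_name" if READ_NEW_FIELD else "full_name"
        row = conn.execute(f"SELECT {column} FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else None

    def backfill(conn: sqlite3.Connection) -> None:
        # One-off job run between "write both" and "read new".
        conn.execute("UPDATE users SET display_name = full_name WHERE display_name IS NULL")
        conn.commit()

Once the read flag has been on and stable for a while, the last steps are to stop writing full_name and drop the old column.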

yoloClin · 5 years ago
I'm more curious about how DB rollbacks occur in situations where a PR changes DB and is then reverted.
aledalgrande · 5 years ago
It would be good practice to first make a DB change alone that is compatible with both old and new code, so you don't need rollbacks. Then separately deploy a code change.

Edit: also suggested by Martin Fowler https://www.martinfowler.com/bliki/BlueGreenDeployment.html

RussianCow · 5 years ago
> Even strategies like parallel rsyncs had their limits.

They don't really go into detail as to what limitations they hit by pushing code to servers instead of pulling. Does anyone have any ideas as to what those might be? I can't think of any bottlenecks that wouldn't apply in both directions, and pushing is much simpler in my experience, but I've also never been involved with deployments at this scale.

rbtying · 5 years ago
I can't speak for Slack, but it's not unreasonable to believe that a single machine's available output bandwidth (~10-40Gbps) can be saturated during a deploy of ~GB to hundreds of machines. Pushing the package to S3 and fetching it back down lets the bandwidth get spread over more machines and over different network paths (e.g. in other data centers)
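Back-of-envelope math, with every number assumed rather than taken from the article:

    artifact_gb = 1.0    # size of the deploy artifact (assumed)
    hosts = 200          # machines to update (assumed)
    nic_gbps = 10        # the pushing machine's network bandwidth (assumed)

    # One machine pushing to everyone serializes the whole fan-out on its own NIC.
    push_seconds = artifact_gb * 8 * hosts / nic_gbps
    print(f"single pusher: ~{push_seconds:.0f}s of pure transfer time")   # ~160s

    # Publish once to S3, then every host pulls in parallel over its own NIC and
    # network path, so the wall-clock cost is roughly two transfers, not 200.
    pull_seconds = 2 * artifact_gb * 8 / nic_gbps
    print(f"publish + parallel pulls: ~{pull_seconds:.0f}s")              # ~2s

And that ignores everything else the pushing box has to do while its NIC is saturated.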
zerd · 5 years ago
We do it similarly except we push an image to a docker registry (backed by multi-region S3), then you can use e.g. ansible to pull it to 5, 10, 25, 100% of your machines. It "feels" like push though, except that you're staging the artifact somewhere. But when booting a new host it'll fetch it from the same place.
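Not claiming this is zerd's setup, but if you script the staging yourself it can be as simple as cutting the inventory into cumulative batches; the host names and the per-host deploy step below are placeholders:

    hosts = [f"app-{i:03d}" for i in range(200)]   # made-up inventory
    stages = [0.05, 0.10, 0.25, 1.00]              # cumulative fraction of the fleet

    def rollout(hosts, stages, deploy_one):
        done = 0
        for fraction in stages:
            target = int(len(hosts) * fraction)
            for host in hosts[done:target]:
                deploy_one(host)   # e.g. have the host pull and restart the new image
            done = target
            # in real life: pause here and watch error rates before widening the blast radius

    rollout(hosts, stages, deploy_one=lambda host: print("deploying", host))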
one2know · 5 years ago
Considering they are not bringing machines out of rotation or draining connections in the example given with the errors, I assume that deploying to more than 10 machines at a time either produces too many errors or leaves two versions of the code deployed for too long, and that wherever they pull from is not scalable. All of those problems can be solved easily, though.
Saaster · 5 years ago
I'm surprised at the 12 deployments per day, if that's truly to production. There's bugfixes etc., but feature wise Slack has been... let's say slow. Not Twitter slow, but still slow, in making any user visible changes.
onion2k · 5 years ago
Far too many people on HN seem to think the public facing code that we see is all that the engineering team in a large company works on. There's so much more to running a large SaaS business. If Slack is like all the other SaaS companies I've encountered they'll have dozens of internal apps for sales, comms, analytics, engineering, etc that they work on that people outside of the business never see[1]. Those all need developing and all need deploying.

[1] They might buy in solutions for some business functions like accounting, HR and support, but they'll still have tons of homegrown stuff. Every tech company does.

greglindahl · 5 years ago
Lots of places do a lot of deploys but hide significant new features behind A/B testing and feature flags. So the two things are disconnected from each other.
paxys · 5 years ago
User visible changes are dependent on the product development process rather than the rate of deploys. Whether you deploy 12 times a day or once a month, it's not like code is getting written any faster.
darkwater · 5 years ago
I wonder why they didn't at some point evaluate an immutable-infrastructure approach, using tools like Spinnaker to manage the deploy. They surely have the muscle and the numbers to use it, and even to contribute to it actively, no? I know that deploying your software is usually something pretty tied to a specific engineering team, but I really like the immutable approach, and I was wondering why a company the size of Slack, born and grown at the "right" time, didn't consider it.
truetuna · 5 years ago
I had similar thoughts when I read their article. Their atomic deploy problem would have completely disappeared had they gone with an immutable approach.
daenz · 5 years ago
I'm kind of surprised they don't have a branch-based staging. Every place I've worked at has evolved in the direction of needing the ability to spin up an isolated staging environment that was based on specific tags or branches.
sophiebits · 5 years ago
It’s become more common to eschew long-lived release branches for SaaS applications. For example: https://engineering.fb.com/web/rapid-release-at-massive-scal...
roadbeats · 5 years ago
It's cool to see how big organizations do their deployments, but it feels like there aren't enough resources on how one should set up a deployment system for a brand-new startup.

The setup I currently use is custom bash scripts setting up EC2 instances. Each instance installs a copy of the git repo(s) and runs a script that pulls updates from the production/staging branches, compiles a new build, replaces the binaries & frontend assets, then restarts the service and sends a Slack message with the list of changes just deployed.

It works well enough for a startup with 2 engineers. However, I'd like to know what could be better. What could save me the time of maintaining my own deployment system in the AWS world, without investing days of effort into K8s?

gtsteve · 5 years ago
You don't have to do a big-bang style Google thing. You can just invest in some continuous improvement over the next few years:

Iteration 0: What you have now.

Iteration 1: A build server builds your artifact, and your EC2 instances download the artifact from the build server.

Iteration 2: The build server builds the artifact and builds a container and pushes it to ECR. Your EC2 instances now pull the image into Docker and start it.

Iteration 3: You use ECS for basic container orchestration. Your build server instructs your ECS instances to download the image and run them, with blue-green deployments linked to your load balancer.

Iteration 4: You set up K8s and your build server instructs it to deploy.

I went on a similar trajectory, and I'm at iteration 3 right now, on the verge of moving to K8s.

It's your call on how long the timespan is here, and commercial pressures will drive it. It could be 6 months, it could be 3 years.

roadbeats · 5 years ago
Thanks a lot for the answer.
GordonS · 5 years ago
For me, it feels a bit "wrong" to be building on each production server.

Firstly, production servers are usually "hardened", and only have installed what they need to run, reducing the attack surface as much as possible.

Secondly, for proprietary code, I don't want it on production servers.

But most importantly, I want a single, consistent set of build artifacts that can be deployed across the server/container fleet.

You can do this with CI/CD tools, such as Azure DevOps (my personal favourite), Github Actions, CircleCI, Jenkins and Appveyor.

The way it works is you set up an automated build pipeline, so when you push new code, it's built once, centrally, and the build output is made available as "build artifacts". In another pipeline stage, you can then push out the artifacts to your servers using various means (rsync, FTP, build agent, whatever), or publish them somewhere (S3, Docker Registry, whatever) where your servers can pull them from. You can have more advanced workflows, but that's the basic version.
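A sketch of that basic version, not tied to any particular CI product; the make target, the bucket name and the deploy-agent command on the servers are all placeholders:

    import hashlib
    import subprocess

    def build() -> tuple[str, str]:
        # One central build; "make build" and the artifact path are placeholders.
        subprocess.run(["make", "build"], check=True)   # assumed to emit dist/app.tar.gz
        digest = hashlib.sha256(open("dist/app.tar.gz", "rb").read()).hexdigest()
        return "dist/app.tar.gz", digest

    def publish(path: str, digest: str) -> str:
        # Publish the artifact somewhere central, keyed by content hash, so every
        # server deploys byte-identical output of the same build.
        key = f"releases/app-{digest}.tar.gz"
        subprocess.run(["aws", "s3", "cp", path, f"s3://my-artifacts/{key}"], check=True)
        return key

    def deploy(servers: list[str], key: str) -> None:
        # Fan the same artifact out; "deploy-agent install" is a hypothetical stand-in
        # for whatever actually unpacks and restarts the service on each host.
        for host in servers:
            subprocess.run(["ssh", host, f"deploy-agent install {key}"], check=True)

    artifact, digest = build()
    deploy(["web-1", "web-2"], publish(artifact, digest))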

diamondo25 · 5 years ago
Automate compilation on a build server and run tests there; if everything is OK, push the artifacts to your servers. This way you can guarantee that the code is tested and that all running versions come from the same build environment.
roadbeats · 5 years ago
Thanks for the answer.
circular_logic · 5 years ago
If you make your application stateless and package it in a container, then there are many managed services out there that can do this for you. For example, in AWS there are Fargate and EKS.