It's always so enlightening to have articles like this one shed light on how companies at scale operate. It goes without saying that many of the problems Stripe faced with their monorepo aren't applicable to smaller businesses, but there are still bits and pieces that are applicable to many of us.
I've been working on an ephemeral/preview environment operator for Kubernetes (https://github.com/pier-oliviert/sequencer) and I found myself agreeing with a lot of what OP said.
I think dev boxes are really the way to go, especially with all the components that make up an application nowadays. But the latency/synchronization issue is a hard topic and it's full of tradeoffs.
A developer's laptop always ends up being a bespoke environment (yes, Nix/Docker can help with that), and so, there's always a confidence boost when you get your changes up on a standalone environment. It gives you the proof that "hey things are working like I expected them to".
My main gripe with the dev box approach is that a cloud instance with compute resources similar to a developer's MacBook is hella expensive. Even ignoring compute, a 1TB EBS volume with performance equivalent to a MacBook's will probably cost more than the MacBook every month.
Wouldn't this be a reasonable alternative? Asking because I don't have experience with this.
1. New shared builds update container images for applications that comprise the environment
2. Rather than a "devbox", devs use something like Docker Compose to utilize the images locally. Presumably this would be configured identically to the proposed devbox, except with something like a volume pointing to local code.
I'm interested in learning more about this. It seems like a way to get things done locally without involving too many cloud services. Is this how most people do it?
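For what it's worth, step 2 could be sketched roughly like this; the registry, service names, and paths below are all hypothetical, not anything Stripe or the article describes:

```yaml
# docker-compose.yml sketch: shared CI publishes the images,
# a volume mount overlays your local checkout for live editing.
services:
  api:
    image: registry.example.com/api:latest   # hypothetical image built by shared CI
    volumes:
      - ./api:/app                           # local code shadows the baked-in copy
    environment:
      DATABASE_URL: postgres://dev:dev@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: app
```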
It's about $250/month for a c6g.2xlarge with a 1TB EBS volume at on-demand pricing; reserved instances bring that down. Given they use AWS and are a major customer, you can expect excellent pricing relative to the public quote above.
Considering the cost of a developer's time, and that you can do shenanigans to drive the instance cost even lower, this all feels totally reasonable.
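As a sanity check on that figure, here's the back-of-envelope math; the hourly and per-GB rates are illustrative placeholders, not current AWS prices:

```python
# Rough devbox cost estimate. Rates below are assumed ballpark figures,
# not quoted AWS pricing; plug in real numbers from the pricing page.
HOURS_PER_MONTH = 730

on_demand_hourly = 0.272   # assumed c6g.2xlarge-ish on-demand rate, $/hr
ebs_gb_month = 0.08        # assumed gp3-ish storage rate, $/GB-month
ebs_gb = 1000

compute = on_demand_hourly * HOURS_PER_MONTH
storage = ebs_gb * ebs_gb_month
total = compute + storage
print(f"~${total:.0f}/month on demand")  # reserved/savings plans would be lower
```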
The article didn't actually say what "Stripe's cloud environment" was, besides "outside of the production environment". I assumed the company had their own hardware but your assumption is more probable.
I find the devbox approach very frustrating because the JetBrains IDEs are leaps and bounds ahead of everything else in terms of code intelligence, but only work well locally. VSCode is very slightly more capable than plain text editor + sync or terminal-based editor over SSH, but only slightly.
It's darkly amusing how we have all these black-magic LLM coding assistants but we can't be reasonably assured of even 2000s level type-aware autocomplete.
Right, dev boxes do not need to do double duty as a personal computer plus development target, which allows them to more closely resemble the machine your code will actually run on. They also can be replaced easily, which can be helpful if you ever suspect something is wrong with the box itself - if the new one acts the same way, it wasn't the dev box.
I don't recall latency being a big problem in practice. In an organization like this, it's best to keep branches up to date with respect to master anyway, so the diffs from switching between branches should be small. There was a lot of work done to make all this quite performant and nice to use. The slowest part was always CI.
I feel like we're not getting the right lessons from this. It feels like we're focusing on HOW we can do something versus pausing for a brief moment to consider if we SHOULD in the first place.
To me the root issue is that the complexity of production environments has expanded to the point of impacting the complexity of developer environments just to deploy or test. This is in conjunction with the expanding complexity of developer environments just to develop, e.g. webpack.
For very large well resourced organizations like Stripe that actually operate at scale that complexity may very well be unavoidable. But most organizations are not Stripe. They should consider decreasing complexity instead of investing in complex tooling to wrangle it.
I'd go as far as to suggest both monorepos and dev-boxes are complex toolchains that many organizations should consider avoiding.
Look into dev containers — if you set one up for your repo, you get pretty much the same experience as GitHub Codespaces, but the choice of running it locally.
Maybe a silly question, but why all this engineering effort when you could host the dev environment locally?
By running a Linux VM on your local machine you get a consistent environment that you can ssh to; you remove the latency issues, and you also remove all the complexity of syncing that they've created.
That’s a setup that’s worked well for me for 15 years but maybe I’m missing some other benefit?
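As a sketch, that kind of local-VM setup can be pinned down in a Vagrantfile; the box name, resource numbers, and provisioning script are just example choices:

```ruby
# Vagrantfile sketch for a consistent, ssh-able local dev VM.
# Box, memory/CPU, and scripts/bootstrap.sh are illustrative placeholders.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  config.vm.hostname = "devbox"
  # Share the repo into the VM so host-side edits are visible immediately
  config.vm.synced_folder ".", "/home/vagrant/app"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 8192
    vb.cpus = 4
  end
  # One-time provisioning keeps the environment reproducible
  config.vm.provision "shell", path: "scripts/bootstrap.sh"
end
```

`vagrant up && vagrant ssh` to get in; `vagrant destroy -f && vagrant up` doubles as the "reset button" others in the thread mention.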
I work on this at Stripe. There's a lot of reasons:
* Local dev has laptop-based state that is hard to keep in sync for everyone. Broken laptops are _really hard_ to debug as opposed to cloud servers I can deploy dev management software to. I can safely say the oldest version of software that's in my cloud; the laptops skew across literally years of versions of dev tools despite a talented corpeng team managing them.
* Our cloud servers have a lot more horsepower than a laptop, which is important if a dev's current task involves multiple services.
* With a server, I can get detailed telemetry out of how devs work and what they actually wait on that help me understand what to work on next; I have to have pretty invasive spyware on laptops to do the same.
* Servers in our QA environment can interact with QA services in a way that is hard for a laptop to do. Some of these are "real services", others are incredibly important to dev itself, such as bazel caches.
There's other things; this is an abbreviated list.
If a linux VM works for you, keep working! But we have not been able to scale a thousands-of-devs experience on laptops.
I want to double check we’re talking about the same thing here. I’m referring to running everything inside a single VM that you would have total access to. It could have telemetry, you’d know versions etc. I wonder if there’s some confusion around what I’m suggesting given your points above.
I’m sure there are a bunch of things that make it the right choice for Stripe. Obviously if you just have too many things to run at a time and a dev laptop can’t handle it then it’s a dealbreaker. What’s the size of the cloud instances you have to run on?
To provide historical context, 10 years ago there was a local dev infrastructure, but it was already so creaky as to be unreliable. Just getting the ruby dependencies updated was a problem.
The local dev was also already cheating: All the asynchronous work that was triggered via RabbitMQ/Kafka was getting hacked together, because trying to run everything that Infra/Queues did locally would have been very wasteful. So magic occurred in the calls to the message queue that instead triggered the crucial ruby code that would be hit in the end.
So if this was a problem back then, when the company had fewer than 1000 employees, I can't even imagine how hard it would be to get local dev working now.
The way these problems are stated might make it seem like they're unsolvable without a lot of effort. I just want to point out that I've worked at places that do use a local, supported environment, and it works well.
Not saying it's the wrong choice for you, but it's a choice, not a natural conclusion.
In my opinion the single most important feature of any development environment is a reliable “reset” button.
The amount of time companies lose to broken development environments is incredible. A developer can easily lose half a day (or more) of productive time.
With cloud environments it’s much easier to offer a “just give me a brand new environment that works” button somewhere. That’s incredibly valuable.
For sure, but, a VM has that feature too. They have to run some services directly on the laptop to handle the code syncing. So if you accept a certain amount of “need to do some dev machine setup” as a cost, installing Parallels and running a script to download an iso is a pretty small surface area that allows for a full reset.
I don't doubt that Stripe have a setup that works well for them, but I also bet they could have gone down a different path that also worked well, and I suspect that other path (local VMs) is a better fit for most other smaller teams.
From what I remember (I left Stripe in late 2022), much of Stripe's codebase was/is a tangled Ruby "big ball of mud" monorepo due to a lack of proper modules. Basically, a lot of the core modules all imported code from each other with little layering, so you couldn't deploy a lean service without pulling in almost all of the monorepo code. And due to the way imports worked, it would load a ton of this code at runtime. This meant that even a simple service would have extremely high memory usage and be unsuitable for a local dev environment where you have N of these bloated services running at the same time. There was a big refactoring effort to get "strict modules" in place to cut down on this bloat, which had some promising results. I'm not an expert in this area but I believe this was the gist of it.
You're limited by the resources available to you on your local laptop and when you close that laptop the dev environment stops running. Remote dev environments are more costly and complicated to maintain but they can be shared, can scale vertically (or horizontally) on demand, can persist when you exit them, and managing access to various internal services from dev environments can in some cases be simpler.
It also centralizes dev environment management to the platform team that owns them and provides them as a service which cuts down on support tickets related to broken dev environments. There are certainly some trade offs though and for most companies a local VM or docker compose file will be a better choice.
There also tend to be security advantages that help mitigate/manage dev risks. Typically hosts will have security tooling installed (AV, EDR, etc.) that may not be installed on local VMs; hosts are ephemeral, so they can be quickly created and destroyed; there are network restrictions; etc.
Not even once did I want to share my dev environment, nor did anyone else want to use mine. We are talking about 25-odd years of being a developer.
Never in my life did I want to scale my dev environment vertically or horizontally or in any other direction. Unless you work on a calculator, I don't know why you would need that.
I have no problems with my environment stopping when I close my laptop. Why is this a problem for anyone?
For the overwhelming majority of programming projects out there, they fit on a programmer's laptop just fine. The rare exceptions are projects which require very specialized equipment not available to the developers. In any case, a simulator would usually be the preferable way of dealing with this, and the actual equipment would only be accessed for testing, not for development. Definitely not as part of the routine development process.
Never in my life did I want the development process to be centralized. All developers have different habits, tastes and preferences. The last thing I want is centralized management of all environments, which would create unwanted uniformity. I've only once been at a company that tried to institute a centrally-managed development environment in the way you describe, and I just couldn't cope with it. I quit after a few months of misery. The most upsetting aspect of these efforts is the stupidity. They solve no problems, but add a lot of pain that is felt continuously, any time you have to do anything work-related.
Working in a configuration where your development environment isn't on your computer is always a huge downgrade. Work with a VM? Sooner or later you'll have problems forwarding your keyboard input to the VM. Work with containers? No good way to save state, no good way to guarantee all containers are in sync, etc. God forbid any sort of web browser-based solution. The number of times I accidentally closed the tab or did something else unintentionally because of key mappings that are impossible to modify...
However, in some situations you must endure the pain of doing this. For example, regulatory reasons. Some organizations will not allow you to access their data anywhere but on some cloud VM they give you very botched and very limited control over. While, technically, these are usually easy to side-step, you are legally required to not move the data outside of the boundaries defined for you by the IT. And so you are stuck in this miserable situation, trying to engineer some semblance of a decent utility set in a hostile environment.
Another example is when the infrastructure of your project is too vast to be meaningfully reduced to your laptop, and a lot of your work is exploratory in nature. I.e. instead of typical write-compile-upload-test you are mostly modifying stuff on the system you are working on to see how it responds. This is kind of how my day-to-day goes: someone reported they fail to install or use one of the utilities we provide in a particular AWS region with some specific network settings etc. They'd give me a tunnel to the affected cluster, and I'd have some hours to spend there investigating the problem and looking for possible immediate and long-term solutions. So, you are essentially working in a tech-support role, but you also have to write code, debug it, sometimes compile it etc.
The year of Linux on the laptop has yet to arrive for most of us. Windows and MacOS both offer better battery life, if for no other reason (and there are usually other reasons, like suspend/wake issues, graphics driver woes, etc.)
Agreed. It's so much simpler when people run Linux locally too. Most of our dev environment problems are from people who don't. When you run it locally you also get good at using it which, unsurprisingly, helps a lot when you have to figure out a problem with the deployed version. Learning MacOS/Windows is kinda pointless knowledge in the long run.
It (obviously) leverages Nix, which in turn means the environment is declarative and fully reproducible (not "reproducible" as in Docker). Now, you can use just Nix's devShells, but with devenv you have a middle ground between the bare Nix package manager and a full-fledged NixOS module system. Basically, write one line of code and you've got your Postgres; another one and you've got a full linter set up for whatever language you're using, etc.
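For illustration, a minimal devenv.nix along those lines might look like this; the language, service, and package picks are just example choices:

```nix
# devenv.nix sketch: one line each for a toolchain, a service, extra tools.
# The specific choices (Python, Postgres, ruff) are illustrative.
{ pkgs, ... }: {
  languages.python.enable = true;     # interpreter plus tooling on the PATH
  services.postgres.enable = true;    # local Postgres managed by devenv
  packages = [ pkgs.ruff pkgs.git ];  # additional tools for the shell
  env.DATABASE_URL = "postgres://localhost/dev";
}
```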
My small team uses devenv for all our development environments and we really like it. Local DX is really important to me and to our team, which is a big part of why we've chosen Nix and devenv.
As we've started to use it more extensively, we've also found that we want to add some enhancements, work out some bugs, and experiment with our own customizations out-of-tree, etc. I'm happy to report here on HN that devenv is well-documented and easy to extend for Nix users who have some experience with Nix module systems, and that Domen is really responsive to PRs. :)
I think for smaller companies, you can get a long way towards a lot of this with judicious use of docker-compose, and convenience scripts in a Makefile. As long as you don't do anything stupid like try and spin up 100 services when you're a team of 8, most laptops these days are sufficiently capable of handling a database, Redis, your codebase, and something like LocalStack.
I would say you can even go a looong way without any Docker at all.
And for the large majority of companies/projects: if your project is so complex and heavy on resources that it doesn't fit on a modern laptop, the problem is not the laptop, it's the whole project and the culture and cargo-cult around "modern" software development.
Containers/VMs are a nice way to isolate away machine configuration discrepancies. Conversely, they do encourage the use of non-hermetic, non-deterministic build systems, which come with other issues too (e.g. speed differences surfacing race conditions in the build).
>Some caveats: It’s been nearly five years, and I have no doubt that I have misremembered some of the specific details, even though I’m confident in the overall picture. I’m also certain that Stripe has continued evolving and I make no claim this document represents the developer experience at Stripe as of today.
Are there any more recently ex-Stripe folks here willing and able to comment on how Stripe's developer environment might have evolved since the OP left in 2019?
The biggest difference not mentioned in the article is that code is no longer kept on developer machines. The sync process described in the article was well-designed, but also was a fairly constant source of headaches. (For example, sometimes the file watcher would miss an update and the code on your remote machine would be broken in strange ways, and you'd have to recognize that it was a sync issue instead of an actual problem with your code.) As a result, the old devbox system was superseded by "remote devboxes", which also host the code. Engineers use VSCode remote development via SSH. It works shockingly well for a codebase the size of Stripe's.
There are actually several different monorepos at Stripe, which is a constant source of frustration. There have been lots of efforts to try to unify the codebase into a single git repo, but it was difficult for a lot of reasons, not the least of which was the "main" monorepo was already testing the limits of the solution used for git hosting.
Overall, maintaining good developer productivity is an extremely challenging problem. This is especially true for a company like Stripe, which is both too large to operate as a "small" company and too small to operate as a "big" company. Even with a well-funded team of lots of super talented people putting forth their best efforts, it's tough to keep all of the wheels fully greased.
Glad to see that they moved to code living with the execution environment. The code living separate from the execution environment seemed like too much overhead and complexity for not enough benefit.
Especially given VSCode, or Cursor ;), work so well via ssh.
To the engineers that don't want to use those IDEs it might suck temporarily, but that's it.
* Code is off of laptops and lives entirely on the dev server in many (but not all) cases. This has opened up a lot of use cases where devs can have multiple branches in flight at once.
* Big investments into bazel.
* Heavier investment into editor experiences. We find most developers are not as idiosyncratic in their editor choices as is commonly believed, and most want a pre-configured setup where jump-to-def and such all "just work".
That last point has long been a red flag when interviewing. A developer who doesn't care about their tooling also tends to not care about the quality of their work.
I'm glad to see that first bullet point. The code living separate from the execution environment seemed like too much overhead and complexity for not enough benefit.
Not ex-Stripe but in "close relationship" with them since its inception and there's a clear mark in my calendar circa end of 2018 when their decisions and output started to become... weird, or ill-designed.
I don't think it has to do with the dev environment itself, but I'd blame such things for allowing them to deliver "too fast" without thinking twice. Combine that with new blood in management and that's an accident waiting to happen (pure speculation on my part, based on very evident patterns).
They're the best in business still, but far from the well-designed easy-to-use API-first developer-friendly initial offering.
Though I am under the impression that things have gotten more sensical internally over the last year or so.
Note also that the devprod team has largely been shielded from the craziness, and may still be making good decisions (but I don't know what they are in this realm personally).
I was only there in 2022, but at that point there were in fact three or more monorepos (forked roughly based on toolchain: Go and Scala in one, primarily Ruby in the one detailed here, and one for the client Stripe API libs that was JS-only). There may have been more.
I use syncthing to manage the synchronization of files between my local laptop and a remote development server. The software code base is upwards of 20 years old and has runtime dependencies on Windows. I can run unit tests locally on a very fast MacBook Pro or run them much slower on a Windows VM. With syncthing I can easily edit files locally or remotely, and they are available locally for source control.
The worst problem is refining the ignore settings to ensure only code is synced, preventing conflicts on derived files, and making sure no rule accidentally matches code file names.
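For anyone curious, those rules live in syncthing's .stignore file; a sketch along these lines (the patterns are examples for a mixed codebase, not my actual rules):

```
// .stignore sketch: keep derived files out of the sync.
// The (?d) prefix lets syncthing delete these if they block a directory removal.
(?d).DS_Store
(?d)node_modules
build/
*.o
*.pyc
// Beware overly broad rules: a bare pattern like "target" would also
// hide any source directory that happens to be named target.
```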
I love this. I believe I might have even interfaced with your team around that time. I was leading Facebook's (now Meta) Developer Products team and we were building against super similar areas internally.
We ran a similar project back then that I coined "Developer On-Demand", tackling that same problem space. It's also what eventually led me to find the magic of Nix and then build Flox.
I also agree with a lot of what was shared in other comments: while we tackled these problems at large orgs such as Facebook, Shopify, Uber, and Google (to name a few teams I remember working with), and obviously also Stripe, certain areas of the pain are 100% universal regardless of team size.
On the Flox side, we're trying to help with a few of them today, and hopefully many more soon; very open to thoughts! Things like: making Nix simple to use for each of your projects, and keeping deps and config up to date across everyone's MacBooks and Linux boxes, even if you don't have a full AWS team and language-server team ready to support you.
> It's darkly amusing how we have all these black-magic LLM coding assistants but we can't be reasonably assured of even 2000s level type-aware autocomplete.
What? Which languages are you talking about? For Python, VSCode is leaps and bounds ahead of PyCharm if your project is well typed.
JetBrains offer a remote solution now though: https://www.jetbrains.com/remote-development/gateway/
It became clear to me that cloud-only is not the way to go, but instead a local-first, cloud-optional approach.
https://mootoday.com/blog/dev-environments-in-the-cloud-are-...
I should be able to launch a local VM using the GitHub Desktop App just as easily as I can an Azure-hosted instance.
And the dev environment stops running when you close the laptop, but you also don't need it since you're not developing.
Not saying it can work for absolutely all cases but it's definitely good enough for a lot of cases.
The idea here is that you use a VM (cloud or local) to run your compute. Most people can run it in the background without explicitly connecting to it.
Or just run Linux on your local machine as the OS. I don't get the obsession with Macs as dev workstations for companies whose products run on Linux.
Or Guix, which has the advantage of a more pleasant language.
https://www.cis.upenn.edu/~bcpierce/unison/
https://mutagen.io/