That work building “yet another build tool” could have gone into programmatically generating Bazel BUILD files instead. So there was an active choice here somewhere; we just don’t have all the information about why effort was diverted away from Bazel and toward building a new tool.
I trust them to make good decisions, so I would like to understand more. :)
Seems like Siso supports Starlark, so maybe it's a step in Bazel's direction after all.
This tool is substantially less complex than Bazel, and it isn't a reimplementation of Bazel either. Ninja's whole goal in life is to be a very fast local executor of the command DAG described by a ninja file, and Siso's only goal is to be a remote executor of that DAG.
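For anyone who hasn't looked inside one, a build.ninja file really is just that DAG written down: rules plus edges from inputs to outputs. A tiny hand-written illustration (not taken from Chromium's actual generated files):

    rule cc
      command = clang -c $in -o $out

    rule link
      command = clang $in -o $out

    build foo.o: cc foo.c
    build bar.o: cc bar.c
    build app: link foo.o bar.o

Ninja's job is to walk those edges locally as fast as possible; as I understand it, Siso consumes the same file but ships each command to a remote execution backend instead.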
This is overall less complex than their first stabs at remote execution, which involved standing up a proxy server locally and wrapping every ninja command in a "run the command locally, which forwards it to the proxy server, which forwards it to the remote backend" script.
I wonder if the end goal is to use Bazel for Chromium, with Siso as an incremental step to get there.
They absolutely address this in the linked article, so why are we even speculating here?
> Probably hard to quickly assemble a rust team within msft.
The same MSFT that is rewriting parts of Windows in Rust as we speak? I think you should stop commenting when you don't know anything about the subject.
The closest I've ever found to a real acknowledgement is this issue relating to GitHub Actions: https://github.com/actions/runner-images/issues/8755
Look up the difference between Dv5 and Ddv5 VMs, for instance, or anything discussing Azure VM temp disks, for more info.
It looks like the free CI runners have the C: drive pointing to a disk that is restored from a snapshot, but often that restore hasn't finished by the time your workflow runs, so I/O can be very slow even if you don't need to read from the still-frozen parts of the disk. Some software run inside workflows does heavy reads and writes on the C: drive, but it's better to move anything that will be written to disk, e.g. caches, to D: if possible (see the sketch below). This often leads to much better I/O performance and more predictable runtimes, particularly when there isn't a lot of actual compute to do.
(Also, this all applies to the v5 and earlier SKUs and changes slightly for the v6 SKUs, but whatever.)
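Concretely, "move it to D:" can be as small as one early workflow step that points temp paths at the temp disk. A minimal sketch, assuming a Windows runner that actually exposes a D: drive (D:\scratch is just an example path):

    - name: Redirect scratch space to D:
      shell: pwsh
      run: |
        # Point common temp locations at the ephemeral D: disk for later steps
        New-Item -ItemType Directory -Force -Path D:\scratch | Out-Null
        Add-Content -Path $env:GITHUB_ENV -Value "TMP=D:\scratch"
        Add-Content -Path $env:GITHUB_ENV -Value "TEMP=D:\scratch"

Tool-specific caches (pip, npm, Gradle, etc.) usually have their own environment variable or setting for the cache directory, so the same trick applies there.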
I started listening to a podcast called "The History of the Early Church" to learn a bit more about that, but unfortunately I think the target audience was Christians interested in theology rather than nerds interested in history. Recommendations for books etc. are welcome!