Though I will say that Temporal's use case probably doesn't map that well to CI/CD, though it could be used for it (which is why I didn't mention it). Its primary strength is robust, long-lived workflows with intelligent retries and the like. You typically want your CI/CD to be as fast as possible, and while you want retries and resilience, they're not as important as some other properties (like being hermetic, reproducible, and cached).
tl;dr: Temporal is more general-purpose: it's for reliable programming in general, not just data pipelines. It supports many languages (and combining languages), has features like querying and signaling, and can run at very high scale.
CI/CD is a common use case for Temporal—used by HashiCorp, Flightcontrol, Netflix: https://www.youtube.com/watch?v=LliBP7YMGyA
this is pretty epic. but it also means you need to keep track of what version of your code ran every past workflow because you need to run it the exact same way when you replay it, right? Is there an easy way to track in workflow metadata or something which version of a worker (commit sha or something) ran a workflow?
also i love the beginning section about history. It would be awesome if every article I read about some new technology started with a reference to how whatever-it-is really grew out of PARC or bell labs or some research paper written pre-1980.
You do need to run it on the same code version. There are different ways to deploy code changes. If you use one of our built-in versioning systems, the version is recorded in the workflow history (and you can either keep track of the version-to-sha mapping or use a sha as the version). Otherwise, you can add code that records the current code version as workflow metadata.
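For concreteness, here's a minimal sketch of both approaches using the Go SDK (MyWorkflow, the "use-new-step" change ID, and the build_sha memo key are all made up for illustration, and UpsertMemo assumes a reasonably recent SDK version):

    package app

    import "go.temporal.io/sdk/workflow"

    // BuildSHA is assumed to be injected at build time, e.g.
    //   go build -ldflags "-X <module>/app.BuildSHA=$(git rev-parse HEAD)"
    var BuildSHA = "dev"

    func MyWorkflow(ctx workflow.Context) error {
        // Approach 1: the SDK's built-in patch-based versioning. GetVersion
        // records a marker in the workflow history the first time it runs,
        // so replays of old executions deterministically take the old branch.
        if workflow.GetVersion(ctx, "use-new-step", workflow.DefaultVersion, 1) == workflow.DefaultVersion {
            // ... old code path ...
        } else {
            // ... new code path ...
        }

        // Approach 2: stamp the worker's code version onto the execution as
        // metadata, so you can later look up which sha ran a given workflow.
        return workflow.UpsertMemo(ctx, map[string]interface{}{
            "build_sha": BuildSHA,
        })
    }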
The situation is similar to the whole "database inside out" hype that drives application programmers to re-implement a proper DBMS.
I am pretty sure the IT hype wheel will turn around and someone will sell 2PC as a new shiny thing.
However, if all of the data you need to update is in a single database that supports atomic commits, I'd go with that over sagas.
Say you have three operations, each with a 1% chance of failing:

Step1
Step2
Step1Undo

then this has a 1% chance of needing manual repair (it's okay if Step1 fails, but if Step1 succeeds and Step2 fails, we need to repair):
do Step1
do Step2
and this has a .01% chance (we only need manual repair if both Step2 and Step1Undo fail, 1% * 1%):
do Step1
try {
  do Step2
} catch {
  do Step1Undo
}
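The same structure in Go, as a self-contained sketch (step1, step2, and undoStep1 are placeholders for the real operations; a production saga would usually retry each step before compensating):

    package main

    import (
        "errors"
        "fmt"
    )

    // Placeholder operations, each standing in for a real side effect
    // with some small chance of failure.
    func step1() error     { return nil }
    func step2() error     { return errors.New("step2 failed") }
    func undoStep1() error { return nil }

    func runSaga() error {
        if err := step1(); err != nil {
            return err // nothing to undo yet, so failing here is okay
        }
        if err := step2(); err != nil {
            if undoErr := undoStep1(); undoErr != nil {
                // Both Step2 and Step1Undo failed (~1% * 1% = .01%):
                // this is the only case that needs manual repair.
                return fmt.Errorf("manual repair needed: %v (undo also failed: %v)", err, undoErr)
            }
            return err // compensated cleanly, state is consistent again
        }
        return nil
    }

    func main() {
        fmt.Println(runSaga())
    }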
Workflows are not zero-cost; they have their own tradeoffs compared to microservices. State management and bootstrapping logic become non-trivial; execution order, though easier to visualize, is not fully deterministic; and workflows are not as well suited for request-response style replies due to the latency of total execution (though I think they are great alternatives to async / background workflows). Shared underlying infrastructure also means an increased chance of SPOFs.
The state of things must have improved a lot since then. Also, adopting anything new that requires remodelling your application into a different paradigm must be worth the value delivered. For example, modular monoliths became popular because they reduced operational complexity by reducing the number of pieces involved. At the time, that value prop vs. the effort involved was unclear to our teams, IMO.