This is on our radar! The primitives mentioned in this blog post are fairly general and allow us to support various types of artifact storage and caching protocols.
(The good news is that if the spikes are regular, a sufficiently advanced serverless platform can "prime the pump": since historical data suggests a spike is coming, it can provision and boot instances into surplus compute before the spike actually hits.)
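To make that concrete, here's a minimal Python sketch of what such a pre-warming reconcile loop could look like. Everything here is a hypothetical stand-in, not any provider's actual API: WarmPool, predict_demand, the one-week history window, and the SLACK constant are all illustrative.

    from collections import deque

    SLACK = 2  # extra idle VMs kept beyond the forecast

    class WarmPool:
        """Stand-in for a real VM pool; boot/retire would drive a VMM or cloud API."""
        def __init__(self):
            self.idle = 0
        def idle_count(self):
            return self.idle
        def boot_vm(self):
            self.idle += 1    # real impl: launch and boot a fresh microVM
        def retire_idle_vm(self):
            self.idle -= 1    # real impl: tear the VM down

    def predict_demand(history, horizon_min=5):
        """Naive seasonal forecast: assume the next few minutes look like
        the same window 24h ago. A real system would do better than this."""
        day = 24 * 60
        h = list(history)
        if len(h) < day + horizon_min:
            return max(h, default=0)
        return max(h[-day:-day + horizon_min])

    def reconcile(pool, history):
        """Run once a minute: keep enough warm VMs idle to absorb the
        predicted spike, shed the surplus once it passes."""
        target = predict_demand(history)
        while pool.idle_count() < target:
            pool.boot_vm()
        while pool.idle_count() > target + SLACK:
            pool.retire_idle_vm()

    # history holds per-minute job counts; feed it from your scheduler.
    history = deque(maxlen=7 * 24 * 60)
    pool = WarmPool()
    reconcile(pool, history)

The point of the seasonal forecast is just that the cost of an occasional over-provision (a few idle VMs) is much lower than the cost of a cold boot on the critical path of a job.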
[cofounder of blacksmith here]
This is exactly one of the symptoms of running CI on traditional hyperscalers that we're setting out to solve. The fundamental requirement in CI is that each job gets its own fresh VM (unlike traditional serverless workloads such as Lambdas). To provision an EC2 instance for a CI job:
- you're contending with general on-demand production workloads (which follow their own demand curve based on, say, the time of day). This typically means high variance in instance provisioning times (there's a sketch after this list for measuring it yourself).
- since AWS/GCP/Azure lease out spare capacity as spot instances with a guaranteed pre-emption warning, you're also waiting for those pre-emption windows to expire before a VM can be handed to you!
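If you want to see that provisioning variance firsthand, here's a minimal sketch using boto3. The AMI ID is a placeholder and the instance type is arbitrary; pick whatever your CI fleet actually uses.

    import time
    import boto3

    AMI_ID = "ami-xxxxxxxx"            # placeholder: any AMI in your region
    ec2 = boto3.client("ec2")

    t0 = time.monotonic()
    resp = ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType="c5.xlarge",      # arbitrary choice for illustration
        MinCount=1,
        MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]

    # Block until EC2 reports the instance "running" (note: this is
    # before the guest has actually booted and is reachable over SSH).
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    print(f"provisioned in {time.monotonic() - t0:.1f}s")

    ec2.terminate_instances(InstanceIds=[instance_id])  # don't leak the VM

Run it at a few different times of day and the spread in those numbers is exactly the variance we're talking about, and it sits on the critical path of every CI job.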
Fun fact: we decided to call it "sticky disks" as a result of this old HN comment thread: https://news.ycombinator.com/item?id=39957753
OOC, what are the main objections against it?
In contrast, other similar VMMs, like Cloud Hypervisor [1], seem to have better I/O performance. Why FC and not CH, then? (I've nothing against FC; I actually love it and have been using it, but it doesn't appear to be the best I/O-wise.)
Can you provide any sources for this claim? We're running Firecracker in production over at blacksmith dot sh and haven't been able to reproduce any perf regressions in Firecracker relative to CH in our internal benchmarking.
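This isn't our internal harness, but for anyone who wants to compare the two themselves, a crude guest-side probe like the one below, run inside both a Firecracker guest and a Cloud Hypervisor guest against the virtio-blk disk under test, will show the shape of the comparison. The path, block size, and iteration count are arbitrary, and for serious numbers you'd want fio with direct I/O rather than this.

    import os
    import time

    PATH = "/tmp/ioprobe.bin"   # put this on the virtio-blk disk under test
    BLOCK = b"\0" * 4096
    N = 2000

    # O_SYNC makes every write hit the (virtual) device, so the latency
    # distribution reflects the VMM's block path rather than the page cache.
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
    lat = []
    for _ in range(N):
        t0 = time.perf_counter_ns()
        os.write(fd, BLOCK)
        lat.append(time.perf_counter_ns() - t0)
    os.close(fd)
    os.unlink(PATH)

    lat.sort()
    print(f"p50 {lat[N // 2] / 1e3:.1f}us  p99 {lat[int(N * 0.99)] / 1e3:.1f}us")

Tail latency (p99) is usually the more interesting number for CI workloads than the median, which is why the probe reports both.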