I can see an image tag available in the cache in my project on cloud.google.com, but after attempting to pull from the cache (and failing) the image is deleted from GAR :(
If so, I'd love to see your measured distribution of boot times, because I've observed results similar to yours on EBS, with some long-tail outliers.
Thanks for the analysis and article!
A possibly better metric for your particular case (assuming you're interested in the fastest bootup achievable) comes from our self-managed github-actions runners. Those boot times are in the 40-50s range, which is consistent with what others see, as far as I know. A good blog post on this topic, including how the depot.dev folks got boot-to-ready times down to 5s, that you might be interested in: https://depot.dev/blog/github-actions-breaking-five-second-b...
Are you able to share the image you’re using with me, or a reproduction case? Even the base images would help.
I wasn't sure how to load the images back into docker at first. I tried `docker load` but I get this error:
$ (cd ci-repack && tar cfv - .) | docker load
./
./oci-layout
./index.json
./blobs/
./blobs/sha256/
./blobs/sha256/2ad6ec1b7ff57802445459ed00e36c2d8e556c5b3cad7f32512c9146909b8ef8
./blobs/sha256/9f3908db1ae67d2622a0e2052a0364ed1a3927c4cebf7e3cc521ba8fe7ca66f1
open /var/lib/docker/tmp/docker-import-1084022012/blobs/json: no such file or directory
Then I noticed the `skopeo copy` in one of the github actions workflows. That got me further: I was able to push the image to a registry. But I get this error when pulling the repacked image: `failed to register layer: duplicates of file paths not supported`

*EDIT*: dhcp6leased landed in base yesterday: https://www.undeadly.org/cgi?action=article;sid=202406040850...
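For anyone hitting the same `docker load` error above: as far as I can tell, `docker load` expects a docker-archive tarball, not an OCI image layout, and skopeo can convert between the two transports. A rough sketch, assuming the OCI layout is in `./ci-repack` (the `myimage:latest` name and tag are placeholders):

```shell
# Copy an OCI image layout directly into the local Docker daemon,
# skipping the tar-piping entirely:
skopeo copy oci:./ci-repack docker-daemon:myimage:latest

# Or convert it to a docker-archive tar that `docker load` understands:
skopeo copy oci:./ci-repack docker-archive:/tmp/myimage.tar:myimage:latest
docker load -i /tmp/myimage.tar
```

Note this only fixes the loading step; the `duplicates of file paths not supported` error during pull is a separate issue with the repacked layers themselves.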
The documentation is great but it can be hard to find examples of common patterns, although it's getting better with time and a growing audience.
My pro-tip has been to prefix your searches with "vector dev <query>" for best results on google. I think "vector" is/was just too generic.
A nice recent contribution added an alternative to prometheus pushgateway that handles counters better: https://github.com/vectordotdev/vector/issues/10304#issuecom...
I’ve been collecting bookmarks in Evernote and now obsidian for about a decade. I try to add as many tags as I hope will allow me to find an article again later. It’s often many months before I need to find something. My success rate at remembering a term from the title or the right tag is not great. I’ve been pretty impressed with zenfetch’s ability to search and find exactly what I was looking for. And this doesn’t even scratch the surface of what it can do when you want it to synthesize answers from many articles for you.
They’re also working on indexing your saved tweets which I’m excited about. It’s a giant pain to try to find liked tweets.
I built ChatKeeper because I wanted to treat my ChatGPT history like a local knowledge base, with local-first access to my data.
It’s a command-line tool (GUI in progress) that takes a full ChatGPT .zip export and syncs it with local Markdown files. You can move and rename them freely and they will stay in sync on future runs.
It pairs well with tools like Obsidian and lets you link your own notes to specific conversations or even points within them.
Revenue is modest but growing month over month. It’s a one-time purchase, not a subscription.
Most users so far are researchers and other ChatGPT power users who already live in Markdown or want to do things like curate and compress the context of very long-running conversations.