Aren't most SecOps teams pushing 48 hours as the absolute limit for critical vulns, or are ours just being extra pushy?
I've seen most auditors mandate 30 days for Critical, but you clearly want to move a lot quicker than that.
It's not too much work since we built on an existing set of tools (melange & apko). I've actually found that putting a Dockerfile into ChatGPT generates a really good first iteration.
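For anyone unfamiliar with apko: images are defined declaratively in a small YAML file rather than a Dockerfile. A minimal sketch might look something like the following (the repository URLs, package names, and entrypoint here are illustrative assumptions, not one of our actual catalog definitions):

```yaml
# Illustrative apko image definition; package names, repository URLs,
# and the entrypoint are assumptions made for the sake of the example.
contents:
  repositories:
    - https://packages.wolfi.dev/os
  keyring:
    - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
  packages:
    - wolfi-base
    - postgresql
entrypoint:
  command: /usr/bin/postgres
archs:
  - x86_64
  - aarch64
```

A first pass like this is roughly what you get back when you ask ChatGPT to translate an existing Dockerfile; the hand-editing is mostly picking the right packages and pinning versions.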
When video became an affordable medium, would people have said "this is the end of art; live performances are art, and now people will just watch the same recordings over and over"? Maybe, if the internet had existed. But in practice it's had the effect of creating and introducing new art forms.
AI-generated content won't replace art. It will evolve it into new creative forms.
OpenSea had insider trading too, but they were nowhere near as big as FTX; everyone knew FTX. OpenSea is “just” a marketplace, not anywhere near a live trading platform.
But there will probably be more information about it over the next 10 years, heh.
That's an unexpected view. Security teams are experts in security and help application developers think of ways the product could be exploited. Security teams run pen tests and bug bounty programs. Security teams manage compliance.
Separation of duties is a critical part of building a secure system, and you can't have proper separation of duties if app developers do it all.
Don't think of a security team as a punishment for when things didn't go as expected; a good security team can help increase velocity, confidence, and security all at the same time.
The lifecycle is: PoPs generate/gather data > send to PDX > compute in PDX > ship updates / data to PoPs.
If you take out PDX, then because so much depends on fresh data, things start going stale.
I doubt everything has changed since then, so this is unlikely to be just "API down" and more likely that a lot of things are now in a degraded state because they're running on stale information (no updates from PDX)... this includes things like load balancing, the tiered caching (Argo Smart Routing), Warp / Zero Trust, etc.
Even if it were only "API down", bear in mind that a lot of the automation customers have built will block attacks by calling the API... "API down" is a hell of a window of opportunity for attackers.
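To make the "degraded on stale data" point concrete, here's a rough sketch (purely illustrative Python, not Cloudflare's code; the threshold and field names are made up) of how an edge PoP keeps serving from the last dataset the core shipped it:

```python
import time

# Purely illustrative: an edge PoP keeps answering from the last dataset the
# core (PDX) shipped it, and only "knows" it is degraded by how old that is.
STALE_AFTER_SECONDS = 300  # assumed threshold, made up for this sketch


class PopState:
    def __init__(self):
        self.dataset = {}        # last computed state received from the core
        self.last_update = None  # when the core last shipped anything

    def apply_update(self, dataset):
        """Normal path: the core computes fresh state and ships it to every PoP."""
        self.dataset = dataset
        self.last_update = time.time()

    def is_stale(self):
        """With the core unreachable, nothing arrives and the data quietly ages out."""
        return self.last_update is None or (
            time.time() - self.last_update > STALE_AFTER_SECONDS
        )

    def serve(self, key, default=None):
        # The PoP never stops answering; it just answers from old data.
        # That is what "degraded, not down" looks like from the edge.
        return self.dataset.get(key, default)
```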
Note that just before I left they'd been investing in standing up AMS (I think), but had never successfully tested a significant failover, and the majority of services that needed fresh state didn't know how to do one.
PS: :scream: most of the observability was also based in PDX, so hugs to all the teams and SREs currently running blind.
It’s actually a vendor-agnostic replacement for the client side of DataDog, New Relic, or Azure App Insights.
It’s complicated because those tools are complicated.
It’s especially complicated because it needs to support the special needs of library vendors, third party plugins, and framework-level integrations.
So no, it’s never going to be “simple” in the same way there will never be a simple replacement for something as complex as a Word document.
No, ASCII won’t cut it. Yes it’s simple and lightweight, but not what people actually want.
The client sides of DataDog and New Relic aren't nearly as complicated as OTel.
I assume this is limited to CVEs in the underlying layers, and adding in the latest version of the primary package. Given that, how/are you testing the images after you fix the CVEs?
When we build the OCI image, we validate it with some custom tests that we've written. We identify the canonical image (e.g. on DockerHub, GHCR, etc.), and we confirm that our image has the same entrypoint, args, and env that the canonical image has. Then we run the OCI image through some generated scenarios to make sure it functions the same way the canonical image does.
For example, we have Postgres in the catalog today. When we rebuild, we have some tests that run with various configurations of the PG_DATABASE/PG_PASSWORD, etc. env vars. We run these with our image and with index.docker.io/library/postgres, and expect to see the same output from both.
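As a rough illustration of the entrypoint/args/env check described above (not our actual harness; the "ours" image name is a placeholder, and the behavioural scenario runs are omitted), the comparison against the canonical image boils down to something like:

```python
import json
import subprocess

# Rough sketch of the metadata check: pull both images and confirm that our
# rebuild advertises the same Entrypoint/Cmd/Env as the canonical image.
# The registry path for "ours" is a placeholder, not a real image.

def image_config(image):
    """Return the Config section (entrypoint, cmd, env) that Docker records."""
    subprocess.run(["docker", "pull", image], check=True, capture_output=True)
    out = subprocess.run(
        ["docker", "image", "inspect", image],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)[0]["Config"]


def assert_matches_canonical(ours, canonical):
    a, b = image_config(ours), image_config(canonical)
    for field in ("Entrypoint", "Cmd", "Env"):
        assert a.get(field) == b.get(field), (
            f"{field} differs: {a.get(field)!r} != {b.get(field)!r}"
        )


assert_matches_canonical(
    "registry.example.com/library/postgres:latest",  # placeholder for our rebuild
    "index.docker.io/library/postgres:latest",       # canonical upstream image
)
```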