Readit News
timbotron commented on XZ Utils Backdoor Still Lurking in Docker Images   binarly.io/blog/persisten... · Posted by u/torgoguys
DiabloD3 · 4 months ago
During my former stint as a hatrack, I made the choice to ban Docker from anywhere inside the company.

_Docker_ is a security hazard, and anything it touches is toxic.

Every single package, every single dependency, that has an actively exploited security flaw is being exploited in the Docker images you're using, unless you built them yourself, with brand new binaries. Do not trust anyone except official distro packages (unless you're on Ubuntu, then don't trust them either).

And if you're going to do that... just go to _actual_ orchestration. And if you're not going to do that, because orchestration is too big for your use case, then just roll normal actual long lived VMs the way we've done it for the past 15 years.
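The package-vulnerability claim above is at least checkable: image scanners will enumerate known CVEs in an image's installed packages. A hypothetical run with Trivy, an open-source scanner (the image name is just an example, and these commands assume Trivy and Docker are installed):

```shell
# Scan a public image for known CVEs in its installed packages.
trivy image python:3.12-slim

# Or rebuild the image yourself to pick up fresh distro packages,
# ignoring any stale layer cache:
docker build --pull --no-cache -t myapp:latest .
```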

timbotron · 4 months ago
I can understand criticism of Docker specifically from a "requires root and a daemon" perspective (rootless, daemonless container runtimes exist), but this is such an odd take: using outdated software is completely unrelated to whether or not you use containers. Why would long-lived VMs be better if they're also running old versions of software?
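For context, a minimal sketch of the rootless, daemonless alternative using Podman (one such runtime; the image and port are arbitrary examples, and the commands assume Podman is installed):

```shell
# Podman runs containers as an unprivileged user, with no long-lived daemon.
podman run --rm -p 8080:80 docker.io/library/nginx:latest

# Most of the CLI surface mirrors docker's:
podman ps
podman images
```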
timbotron commented on 7 Databases in 7 Weeks for 2025   matt.blwt.io/post/7-datab... · Posted by u/yarapavan
mble_ · a year ago
I lived through the MongoDB hype cycle.

For document databases, I'm more interested in things like PoloDB and SurrealDB.

timbotron · a year ago
I agree Mongo is overhyped and attracts a lot of web newbies who only know JavaScript and don't want to think through schemas. One interesting newer feature of Mongo, though, is time series collections -- unfortunately they're a bit buggy, but they're getting better and seem like a legitimate non-relational use case.
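For reference, time series collections landed in MongoDB 5.0; a minimal sketch in mongosh (the collection and field names here are made up, and this assumes a running MongoDB server):

```javascript
// Create a time series collection keyed on a timestamp field,
// grouping documents per sensor and bucketing at hour granularity.
db.createCollection("readings", {
  timeseries: {
    timeField: "timestamp",   // required: which field holds the time
    metaField: "sensorId",    // optional: per-series metadata
    granularity: "hours"
  }
})

db.readings.insertOne({
  timestamp: new Date(),
  sensorId: "s-42",
  temperature: 21.5
})
```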
timbotron commented on 7 Databases in 7 Weeks for 2025   matt.blwt.io/post/7-datab... · Posted by u/yarapavan
mble_ · a year ago
Author here.

Thanks for sharing! My choices are pretty coloured by personal experience, and I didn't want to re-tread anything from the book (Redis/Valkey, Neo4j etc) other than Postgres - mostly due to Postgres changing _a lot_ over the years.

I had considered an OSS Dynamo-like (Cassandra, ScyllaDB, kinda), or a Calvin-like (FaunaDB), but went with FoundationDB instead because to me, that was much more interesting.

After a decade of running DBaaS at massive scale, I'm also pretty biased towards easy-to-run.

timbotron · a year ago
I'm curious why you don't find MongoDB interesting?
timbotron commented on Developing with Docker   danielquinn.org/blog/deve... · Posted by u/bruh2
JohnMakin · a year ago
> I don't want to be negative, but if one of my engineers came to me saying they wanted to deploy images built from their machine, with all the dev niceties enabled, to go to prod, rather than proper CI/CD of prod optimized images, I'd have a hard time being sold on that.

ditto, "worked on local" is a meme for a reason.

timbotron · a year ago
But "works on my machine" is exactly the problem Docker solves -- if you still have those issues, you're not building your images right.
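"Building your images right" here mostly means removing sources of drift. A hedged sketch of what that looks like (the digest placeholder and package setup are illustrative, not a prescribed layout):

```dockerfile
# Pin the base image by digest so every build starts from identical bytes.
FROM python:3.12-slim@sha256:<digest>

WORKDIR /app

# Install from a lockfile so dependency versions can't drift between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```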
timbotron commented on Developing with Docker   danielquinn.org/blog/deve... · Posted by u/bruh2
d_watt · a year ago
I don't think I agree with this. Docker is an amazing tool, I've used it for everything I've done in the last 7 years, but this is not how I'd approach it.

1. I think the idea of local-equal-to-prod is noble, and getting them as close as possible should be the goal, but it's not possible. In the example, they're using a Dockerized Postgres; prod is probably a managed DB service. They're using docker compose; prod is likely ECS/K8S/DO/some other service that uses the image (with more complicated service definitions). Local is probably some VM Linux kernel; prod is some other kernel. Your local dev uses mounted code; prod is probably baked-in code. Maybe local is ARM64, and prod is AMD64.

I say this not because I want to take away from the idea of matching dev and prod as much as possible, but to highlight that they're inherently going to be very different. So deploying your code with linters, or in debug mode, and getting slower container start times at best and worse production performance at worst -- just to pretend envs which are wildly different aren't different -- seems silly. Moreover, if you test in CI, you're much more likely to get to prod-like infra than on a laptop.

2. Cost will also prohibit this. Do you have your APM service running on every dev node? Are you paying for that on every developer machine, for no benefit, just so things are the same? If you're integrating with Salesforce, do you pay for a sandbox for every dev so things are the same? Again, keeping things as similar as possible should be a critical goal, but there are cost realities that make perfect parity impossible.

3. In my experience if you actually want to achieve this, you need a remote dev setup. Have your code deployed in K8S / ECS / whatever with remote dev tooling in place. That way your DNS discovery is the same, kernels are the same, etc. Sometimes this is worth it, sometimes it isn't.

I don't want to be negative, but if one of my engineers came to me saying they wanted to deploy images built from their machine, with all the dev niceties enabled, to go to prod, rather than proper CI/CD of prod optimized images, I'd have a hard time being sold on that.

timbotron · a year ago
"Wildly different" seems like a stretch -- e.g. ECS is pretty directly translatable to docker compose, and if you do cross-platform builds with buildx, I don't see why building locally vs. on a cloud service matters much.
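On the buildx point: one command can produce a multi-arch manifest, so the ARM64-laptop vs. AMD64-prod mismatch stops mattering for the image itself. A sketch (registry and tag are placeholders; this assumes a buildx builder is configured):

```shell
# Build for both architectures and push a single multi-arch manifest.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .
```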
timbotron commented on How we migrated onto K8s in less than 12 months   figma.com/blog/migrating-... · Posted by u/ianvonseggern
WaxProlix · a year ago
People move to K8s (specifically from ECS) so that they can use cloud provider agnostic tooling and products. I suspect a lot of larger company K8s migrations are fueled by a desire to be multicloud or hybrid on-prem, mitigate cost, availability, and lock-in risk.
timbotron · a year ago
There's a pretty direct translation from an ECS task definition to a docker-compose file.
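To illustrate how direct that translation is, here's a toy converter for a few of the overlapping fields. The field names on the ECS side follow the task-definition JSON; the converter itself is a hypothetical sketch, not a complete tool:

```python
# Illustrative sketch: map the common fields of one ECS containerDefinition
# onto a docker-compose-style service dict.

def ecs_container_to_compose_service(container: dict) -> dict:
    """Translate an ECS container definition into a compose service."""
    service = {"image": container["image"]}
    if "portMappings" in container:
        service["ports"] = [
            f"{p['hostPort']}:{p['containerPort']}"
            for p in container["portMappings"]
        ]
    if "environment" in container:
        service["environment"] = {
            e["name"]: e["value"] for e in container["environment"]
        }
    if "command" in container:
        service["command"] = container["command"]
    return service

ecs = {
    "name": "web",
    "image": "nginx:1.25",
    "portMappings": [{"hostPort": 8080, "containerPort": 80}],
    "environment": [{"name": "ENV", "value": "prod"}],
}
print(ecs_container_to_compose_service(ecs))
```

Networking, IAM roles, and volumes need more thought, but the core service shape maps over almost field-for-field.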
timbotron commented on Is Slack Down?    · Posted by u/jchen42
timbotron · 2 years ago
At this point their 99.99% SLA is a total joke; make sure to claim your service credits: https://slack.com/terms/service-level-agreement
timbotron commented on Show HN: We scaled Git to support 1 TB repos   xethub.com/user/login... · Posted by u/reverius42
JZL003 · 3 years ago
I also have a lot of issues with versioning data. But look at git annex -- it's free, self-hosted, and has a very simple underlying data structure [1]. I don't even use the magic commands it has for remote data mounting/multi-device coordination; I just back up using basic S3 commands and can use rclone mounting. Very robust, open source, and useful.

[1] When you run `git annex add` it hashes the file and moves the original file to a `.git/annex/data` folder under the hash/content addressable file system, like git. Then it replaces the original file with a symlink to this hashed file path. The file is marked as read only, so any command in any language which tries to write to it will error (you can always `git annex unlock` so you can write to it). If you have duplicated files, they easily point to the same hashed location. As long as you git push normally and back up the `.git/annex/data` you're totally version controlled, and you can share the subset of files as needed
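The hash-then-symlink mechanism described in [1] can be sketched in a few lines. This is an illustrative toy, not git-annex itself (the store path and function name are made up), but it has the same shape: move content into a content-addressed store, mark it read-only, and leave a symlink behind:

```python
import hashlib
import os
import tempfile

def annex_add(path: str, store: str) -> str:
    """Toy version of `git annex add`: content-address a file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    os.makedirs(store, exist_ok=True)
    target = os.path.join(store, digest)
    os.replace(path, target)    # move the content into the store
    os.chmod(target, 0o444)     # read-only, so stray writes error out
    os.symlink(target, path)    # replace the original with a symlink
    return target

tmp = tempfile.mkdtemp()
f = os.path.join(tmp, "data.bin")
with open(f, "wb") as fh:
    fh.write(b"hello")
stored = annex_add(f, os.path.join(tmp, "store"))
print(os.path.islink(f), os.readlink(f))
```

Duplicate files hash to the same digest, so they naturally share one stored copy, just as the comment describes.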

timbotron · 3 years ago
If you like git annex check out [datalad](http://handbook.datalad.org/en/latest/), it provides some useful wrappers around git annex oriented towards scientific computing.
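A minimal sketch of the DataLad workflow (the dataset and file names are arbitrary examples; DataLad decides per file whether content goes into git or the annex):

```shell
# Create a dataset: a git repo with git-annex wired up underneath.
datalad create my-dataset
cd my-dataset

# Save changes; large/binary files are annexed automatically.
datalad save -m "add raw data"

# On another clone, fetch annexed content only when you need it.
datalad get data.bin
```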
