Readit News
serbuvlad commented on Will AI Replace Human Thinking? The Case for Writing and Coding Manually   ssp.sh/brain/will-ai-repl... · Posted by u/articsputnik
serbuvlad · 2 days ago
I think the whole AI vs. non-AI debate is a bit beside the point. Engineers are stuck in the old paradigm of "perfect" algorithms.

I think the image you post at the beginning basically sums it up for me: ChatGPT o3/5 Thinking can one-shot 75% of most reasonably sized tasks I give it without breaking a sweat, but struggles with the tweaks needed to get it to 100%. So I make those tweaks myself, and I've cut my code-writing time to a half or even a third of what it was.

ChatGPT also knows more idioms and useful libraries than I do so I generally end up with cleaner code this way.

Ferraris are still hand-assembled, but Ford's assembly lines and machines save human labor, even if the quality of a mass-produced item is lower than that of a hand-crafted one. But if everything were hand-crafted, we would have no computers to program at all.

Programming and writing will become niche, and humans will still be used where quality higher than what AI can produce is needed. But most code will be written by "minotaur" human-AI teams, where the human has a minimal but necessary contribution to keep the AI on track... I mean, it already is.

serbuvlad commented on Pkgbase Removes FreeBSD Base System Feature   lists.freebsd.org/archive... · Posted by u/vermaden
ender341341 · a month ago
It's treated as special because most shells expand undefined variables to empty strings, so `rm -rf "${base_path}/${sub_dir}"` can turn into `rm -rf '/'`, and users commonly don't expect that.

While that case may be simple to catch, the writers of GNU rm also recognize that scripts tend not to be well tested, and decided that "better than it currently is" beats "we didn't mitigate a common problem because the solution wasn't perfect".
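A minimal sketch of the failure mode and two stock shell mitigations (`set -u` and `${var:?}`); the variable names here are hypothetical, not from any real script:

```shell
#!/bin/sh
# Hypothetical variables, deliberately left unset.
unset base_path sub_dir

# Default POSIX behavior: unset variables expand to empty strings,
# so the path quietly collapses to "/".
target="${base_path}/${sub_dir}"
echo "unguarded target: $target"

# Mitigation 1: `set -u` makes any unset-variable expansion a fatal error.
( set -u; : "${base_path}/${sub_dir}" ) 2>/dev/null \
  || echo "set -u: aborted before any rm could run"

# Mitigation 2: ${var:?msg} fails that one expansion even without set -u,
# so the rm below never executes.
( rm -rf -- "${base_path:?is unset}/${sub_dir:?is unset}" ) 2>/dev/null \
  || echo "\${var:?}: expansion refused, rm never ran"
```

Both guards turn the silent `rm -rf /` into a loud error before any file is touched, which is the same class of protection `--preserve-root` gives at the rm layer.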

serbuvlad · a month ago
Ah, the beauties of the POSIX shell.
serbuvlad commented on Pkgbase Removes FreeBSD Base System Feature   lists.freebsd.org/archive... · Posted by u/vermaden
charcircuit · a month ago
rm should not have permission to break the operating system. If a program can break the operating system, that is a failure of the operating system's sandboxing or permissions. no-preserve-root tries to solve the issue at the wrong layer of the stack and only addresses one way to break the OS. Treating / as special doesn't make sense to me.
serbuvlad · a month ago
Why?

Obviously rm -rf / will only "destroy the operating system" if the user is root and we're in the root namespace. There is nothing stopping you from building a sandboxed OS that never gives your users real root (Android).

But what would be the point of that? Users care about their data, not about OS internals. If the OS blows up, that's just an OS reinstall. But if a non-backed-up /home blows up, that could be months of work. And any program that can delete a file in /home (as any program must, if the user is to get everyday work done) can also delete all of them.

serbuvlad commented on The future is not self-hosted   drewlyton.com/story/the-f... · Posted by u/drew_lytle
whartung · a month ago
> What costs? I run a self-hosted solution for ~5 people off a $150 N100 machine plus storage costs, and currently my bottleneck is Jellyfin transcoding speed. I want to scale out with a couple more $150 N100/N150 machines to ~20 users: my entire extended family and friends.

Support has costs.

Anyone here can grab an N100 off eBay, install "self-hosting stuff" (much like the guy in the post did), and put it back on eBay with a 50-100% markup. "Plug and play, self-hosted. Plug in a TB drive for more storage. Total solution!"

And it's still not enough. They need a domain, they need a tunnel, they need to hook up with Let's Encrypt. They need to leave the machine on, and they need a backup strategy. Much less now having to cope with all of the Fine People who inhabit the wild interwebs and will soon come knocking... and knocking... and knocking.

This all has to be explained to folks who don't know, have no aptitude for, and simply don't care about the mechanics of this process. They just want it to work.

It's not just a couple of dockers shoved onto a small Linux box. It's free like a puppy.

Self hosting is arcane, fiddly stuff. Fine for those comfortable with it, but a nightmare to those who are not.

serbuvlad · a month ago
Yeah, you need to have (I would argue basic) Linux sysadmin skills. If you don't have those skills, and aren't interested in learning them, then you shouldn't self-host just because it's the hot new trend.

The thing I like the most is the area of effect. I have those skills, so 5-20 people get a self-hosted experience managed by me. But even so, many people will be left outside any such area. This, too, is fine.

My dad knows how to do basic woodworking, so if I need a simple piece of wooden furniture, I go to him. I have a friend who knows how to 3D print stuff (I know nothing about it) and another who's in medical school and gives me medical advice (including "go to the doctor" when the problem is not minor). But I don't have any friends who are good at car mechanics, so I go to the shop (and get charged) for all problems related to that.

Now, I do not live in the US, so maybe these sorts of relationships spanning many fields are less common there. But the solution to rugged individualism doesn't seem to me to be "collectivism on a grand scale", be it corporations or the government. The solution seems to me to be "collectivism on a small scale": building friend-and-family groups that can solve the most common 80% of problems in most fields within themselves, and that reach out to professionals from the larger collective for the other 20%, or for problems in fields where they have no experience.

serbuvlad commented on The future is not self-hosted   drewlyton.com/story/the-f... · Posted by u/drew_lytle
stego-tech · a month ago
The author gets into a few issues I’ve talked at length about on my own blogs over the years, with the same gist: self-hosting is a better alternative than corporate cloud providers, but isn’t suitable for the everyman due to its complexity and associated costs. The grim reality is that most people and businesses still have such disdain for their own privacy, security, and/or sovereignty, and that’s not going to change absent a profound crisis in all of the above simultaneously (y’know, like what the USA is doing atm).

I do like that the author gets into alternatives, like the library storage idea (my similar concept involved the USPS giving citizens gratis space and a CDN). I think that’s a discussion we need a lot more of, including towns or states building publicly-owned datacenters and infrastructure to support more community efforts involving technology. We also need more engagement from FOSS projects in making their software as easy to deploy with security best practices as possible, by default, such that more people can get right to tinkering and building without having to understand how the proverbial sausage is made. That’s arguably the biggest gap at the moment, because solving the UX side (like Plex did) enables more people to self-host and more communities to consider offering compute services to their citizens.

I’m glad to see a stronger rejection of this notion that a handful of private corporations should control the bulk of technology and the associated industry running atop it, and I’m happy to see more folks discussing alternative futures to it.

serbuvlad · a month ago
> Self-hosting is a better alternative than corporate cloud providers, but isn’t suitable for the everyman due to its complexity and associated costs.

What costs? I run a self-hosted solution for ~5 people off a $150 N100 machine plus storage costs, and currently my bottleneck is Jellyfin transcoding speed. I want to scale out with a couple more $150 N100/N150 machines to ~20 users: my entire extended family and friends.

As a point of comparison an iPhone 16 non-pro starts at $799.

Those are the fixed costs; the running costs are extremely tiny. The N100 eats up electricity like an anorexic model chewing up red meat, the domain is $10/year, and the dynamic DNS is ~$3/month (and I didn't even go for a particularly cheap one).

serbuvlad commented on Generic Containers in C: Vec   uecker.codeberg.page/2025... · Posted by u/uecker
gsliepen · a month ago
It's amazing how many people try to write generic containers for C when there is already a perfect solution for that, called C++. It's impossible to write generic type-safe code in C, and this version resorts to using GCC extensions to the language (note the ({…}) statement expressions).

For those afraid of C++: you don't have to use all of it at once, and compilers have been great for the last few decades. You can easily port C code to C++ (often you don't have to do anything at all). Just try it out and reassess the objections you have.

serbuvlad · a month ago
My problem with C++, and maybe this is just me, is RAII.

Now, Resource Acquisition Is Initialization is correct, but the corollary is not generally true, which is to say: my variable going out of scope does not generally mean I want to release that resource.

So, sooner or later, everything gets wrapped in a reference-counting smart pointer. And reference counting has always seemed to me a primitive, last-resort memory-management strategy.

serbuvlad commented on When Is WebAssembly Going to Get DOM Support?   queue.acm.org/detail.cfm?... · Posted by u/jazzypants
jauntywundrkind · a month ago
> "Ah, but each container will run it's own separate runtime process." Sure, but the most valuable resource that probably wastes is a PID (and however many TIDs). Processes exec'ing the same program will share a .text and .rodata sections and the .data and .bss segments are COW'ed.

In terms of memory footprint, I can allow that the OS may be smart enough to share a lot of the exec'ed process if it's run multiple times. With Go and Rust and other statically compiled programs, that's going to scale to the number of instances of a service. With Node you might scale more, but then you need to start dynamically loading app code and that won't be shared.

With wasm hosts, you can just ship your app code, and ask the wasm host to provide libraries to you. So you can have vastly more memory sharing. Wasm allows a lot of what you term p_i to be shifted into k through this sharing.

But there are so many other reasons to have a shared runtime rather than many processes.

Context switching can be a huge cost, one that a wasm host can potentially avoid as it switches across different app workloads. Folks see similar wins from V8 isolates, which Cloudflare, for example, has used in their Workers platform to scale up to a massive number of ultra-light workers.

> Even for deploying wasm containers. Maybe there are certain technical reasons why they needed an alternate "container" runtime (wasi) to run wasm workloads with CRI orchestration, but size is not a legitimate reason. If you made a standard container image with the wasm runtime and all wasm applications simply base off that image and add the code, the wasm runtime will be shared between them, and only the code will be unique

The above speaks to some of the technical reasons why an alternate runtime enables such leaps and bounds versus the container world. Memory size is absolutely the core reason: you can fit lots of tiny micro-processes on a wasm host and have them all share the same set of libraries. Disk size is a win for the same reason: containers don't need to bundle their dependencies, just ask for them. There's a 2022 post on containers, isolates, and wasm that goes deeper into this architecture: https://notes.crmarsh.com/isolates-microvms-and-webassembly

serbuvlad · a month ago
If you want small function-style services then yea, that's valid, because p_i is really small.

The question is really if you want hundreds of big and medium-sized services on a server, or tens of thousands of tiny services. This is a design question. And while my personal preference would be for the former, probably because that's what I'm used to, I'll admit there could be certain advantages to the latter.

Good job, you've convinced me this can be valid.

serbuvlad commented on When Is WebAssembly Going to Get DOM Support?   queue.acm.org/detail.cfm?... · Posted by u/jazzypants
jauntywundrkind · a month ago
> Alpine container image is <5MB. Debian container image (if you really need glibc) is 30MB. wasmtime is 50MB.

That's not the deployment model of wasm. You don't ship the runtime and the code in a container.

If you look at crun, it can detect if your container is wasm and run it automatically, without your container bundling the runtime. I don't know what crun does, but in wasmcloud, for example, you're running multiple different wasm applications atop the same wasm runtime. https://github.com/containers/crun/blob/main/docs/wasm-wasi-...

serbuvlad · a month ago
My point is that that's exactly the deployment model of Docker. So if I have 20 apps that are a Go binary + config on top of Alpine, that Alpine layer will only exist once and be shared by all the containers.

If I have 20 apps that depend on a 300MB bundle of C++ libraries plus ~10MB for each app, then as long as the versions are the same, and I am halfway competent at writing containers, the storage usage won't be 20 * 310MB, but 300MB + 20 * 10MB.

Of course in practice each of the 20 different C++ apps will depend on a lot of random mutually exclusive stuff leading to huge sizes. But there's rarely any reason for 20 Go (or Rust) apps to base their containers on anything other than lean Alpine or Debian containers.
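A sketch of that layering; the image names, package choices, and sizes below are illustrative, not taken from any real deployment:

```dockerfile
# Hypothetical shared base: the heavy C++ library bundle lives here, once.
FROM debian:bookworm-slim AS cpp-base
RUN apt-get update && apt-get install -y --no-install-recommends \
        libboost-all-dev libopencv-dev \
    && rm -rf /var/lib/apt/lists/*
# ^ this ~300MB layer stack is stored once and reused by every image below

# Each app image adds only its own small binary on top of the shared base.
FROM cpp-base
COPY ./app-1 /usr/local/bin/app-1      # the only per-app layer (~10MB)
ENTRYPOINT ["/usr/local/bin/app-1"]
```

As long as all 20 images are built from the same `cpp-base` digest, the daemon stores the base layers once, so total disk is roughly 300MB + 20 * 10MB rather than 20 * 310MB.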

Even for deploying wasm containers: maybe there are certain technical reasons why they needed an alternate "container" runtime (wasi) to run wasm workloads under CRI orchestration, but size is not a legitimate reason. If you made a standard container image with the wasm runtime, and all wasm applications simply based off that image and added their code, the wasm runtime would be shared between them and only the code would be unique.

"Ah, but each container will run it's own separate runtime process." Sure, but the most valuable resource that probably wastes is a PID (and however many TIDs). Processes exec'ing the same program will share a .text and .rodata sections and the .data and .bss segments are COW'ed.

Assuming the memory usage of the wasm runtime (.data and .bss modifications + stack and heap usage) is vaguely k + sum(p_i) where p_i is some value associated with process i, then running a single runtime instead of running n runtimes saves (n - 1) * k memory. The question then becomes how much is k. If k is small (a couple megs), then there really isn't any significant advantage to it, unless you're running an order of magnitude more wasm processes than you would traditional containers. Or, in other words if p_i is typically small. Or, in other other words, if p_i/k is small.

If p_i/k is large (if your programs have a significant size), wasi provides no significant size advantage, on disk or in memory, over just running the wasm runtime in a traditional container. Maybe there are other advantages, but size isn't one of them.
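A back-of-the-envelope sketch of the accounting above; every number is invented purely for illustration:

```python
def total_memory(k, p, shared_runtime):
    """Memory model from above: n separate runtimes cost n*k + sum(p_i);
    one shared runtime costs k + sum(p_i). Here k is the runtime's fixed
    footprint and p_i the per-workload footprint (arbitrary MB units)."""
    n = len(p)
    fixed = k if shared_runtime else n * k
    return fixed + sum(p)

k = 5            # MB of fixed runtime overhead (made up)
p = [2] * 100    # 100 tiny function-style workloads, 2 MB each (made up)

separate = total_memory(k, p, shared_runtime=False)  # 100*5 + 200 = 700
shared = total_memory(k, p, shared_runtime=True)     # 5 + 200 = 205
print(separate - shared)                             # saves (n-1)*k = 495
```

Re-run it with a handful of large workloads (say `p = [500] * 10`) and the same `(n-1)*k` saving becomes noise, which is exactly the p_i/k argument.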

serbuvlad commented on You can now disable all AI features in Zed   zed.dev/blog/disable-ai-f... · Posted by u/meetpateltech
serbuvlad · a month ago
The one thing I love about VSCode is how trivially I can fire it up on a container or on a remote machine via SSH. If Zed had this I would switch tomorrow.

So my question for Zed users is: does it?

The UI is a tad idiosyncratic on Linux (can't speak for Macs), but DAMN is it fast. I love the generality of tasks.json (haven't played with debug.json yet); it's by far the best such system I've ever used, and everything just works well out of the gate.

serbuvlad commented on When Is WebAssembly Going to Get DOM Support?   queue.acm.org/detail.cfm?... · Posted by u/jazzypants
rkangel · a month ago
These numbers are true, but you'd be amazed at the number of organisations that have containers just based on ubuntu:latest, that don't strip the package cache, etc.
serbuvlad · a month ago
ubuntu:latest is also 30MB, like Debian.

Obviously an unoptimized C++/Python stack that depends on a billion .so's (specific versions only) and pip packages is going to waste space. The advantage of containers for these apps is that they "contain" the problem without requiring a rewrite.

The "modern" languages: Go and Rust produce apps that depend either only on glibc (Rust) or on nothing at all (Rust w/ musl and Go). You can plop these binaries on any Linux system and they will "just work" (provided the kernel isn't ancient). Sure, the binaries can be fat, but it's a few dozen megabytes at the worst. This is not an issue as long as you architect around it (prefer busybox-style everything-in-a-binary to coreutils-style many-binaries).

Moreover, a VM isn't really necessary, as these languages can be easily cross-compiled (especially Go, with which I have the most experience). Compared to C/C++, where cross-compiling is a massive pain (which led to Java and its VM dominating, because the VM made cross-compilation unnecessary), I can run `GOOS=windows GOARCH=arm64 go build` and build a native Windows arm64 binary from x86-64 Linux with nothing but the standard Go compiler.

The advantage of containers for Rust and Go lies in orchestration and separation of filesystem, user, ipc etc. namespaces. Especially orchestration in a distributed (cluster) environment. These containers need nothing more than the Alpine environment, configs, static data and the binary to run.

I fail to see what problem WASM is trying to solve in this space.

u/serbuvlad
