Readit News
snthpy commented on Why we built Lightpanda in Zig   lightpanda.io/blog/posts/... · Posted by u/ashvardanian
gorjusborg · 9 days ago
Years ago, when I initially picked up Rust, I loved it. It does a lot of things right. At the same time, though, I knew there was a possibility of it going wrong in two opposite directions:

1. Developers would balk at the cognitive load required for GC-less memory management

2. Developers would wear their ability to take on that cognitive load as a badge of honor, despite it not being in their best interest

I eventually came to the decision to stop developing in Rust, despite its popularity. It is really cool that its creators pulled it off; it was quite an achievement, given how different it was when it came out. I think that if I had to implement a critical library I would consider using Rust for it, but as a general-purpose language I want something that lets me focus my mental faculties on the complexities of the actual problem domain, and I felt that was too often too difficult with Rust.

snthpy · 9 days ago
So what do you use instead now?
snthpy commented on What's the deal with Euler's identity?   lcamtuf.substack.com/p/wh... · Posted by u/surprisetalk
yen223 · 10 days ago
Arguably, base-10 counting vs base-12 counting is one such example
snthpy · 10 days ago
Which one of those is preferable? It seems to me that they are both historically based. 10 x 10 is also 100 in base-12 (it's only in base-10 that it looks like 144).
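A quick illustration in Python:

```python
# "10" read in base b denotes b itself, so 10 x 10 = 100 holds in every base;
# only the decimal rendering of the product differs (100, 144, 256, ...).
for b in (10, 12, 16):
    ten = int("10", b)       # decimal 10, 12, 16
    hundred = int("100", b)  # decimal 100, 144, 256
    assert ten * ten == hundred
    print(f"base {b}: 10 x 10 = 100 (decimal: {ten} * {ten} = {hundred})")
```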

IMHO, in a modern setting base-16 would be the most convenient. Then maybe I wouldn't struggle to remember that the CIDR range C0.A8.0.0/18 (192.168.0.0/24 in decimal; note that 18 hex = 24 decimal) consists of 10 (16) blocks of size 10 (16).
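For example, sketched with Python's ipaddress module:

```python
import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")
print(hex(int(net.network_address)))  # 0xc0a80000 -> C0.A8.00.00
# A /24 leaves 8 host bits: 2**8 = 256 addresses, i.e. 0x10 blocks
# of size 0x10 -- "10 blocks of size 10" in hex.
blocks = list(net.subnets(new_prefix=28))
print(len(blocks), blocks[0].num_addresses)  # 16 16
```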

snthpy commented on Uncloud - Tool for deploying containerised apps across servers without k8s   uncloud.run/... · Posted by u/rgun
JohnMakin · 11 days ago
If you are a small operation, trying to self-host k3s, k8s, or any number of out-of-the-box installations (which are probably at least as complex as Docker Compose swarms) presents, for any non-trivial production case, similar problems in monitoring and availability to the ones you'd get with off-the-shelf cloud-provider managed services, except that the managed solutions come without the pain in the ass. Except here you don't have a control plane.

I have managed custom server clusters in a self-hosted situation. The problems are hard, but if you're small, why would you reach for such a solution in the first place? You'd be better off paying for a managed service. What situation forces so many people to reach for self-hosted Kubernetes?

snthpy · 10 days ago
Yes, and not everyone is allowed to use cloud services. There's also the cost: I haven't spent a cent on infrastructure in 5 years (other than my time). Using cloud services comes with extra costs and meetings to justify those costs. Plus, not everyone is in the USA or another first-world country where those costs are negligible. Necessity is the mother of invention.
snthpy commented on Uncloud - Tool for deploying containerised apps across servers without k8s   uncloud.run/... · Posted by u/rgun
JohnMakin · 11 days ago
Having spent most of my career in kubernetes (usually managed by cloud), I always wonder when I see things like this, what is the use case or benefit of not having a control plane?

To me, the control plane is the primary feature of kubernetes and one I would not want to go without.

I know this describes operational overhead as a reason, but how it relates to the control plane is not clear to me. Even managing a few hundred nodes and maybe 10,000 containers (relatively small), I update once a year and the managed cluster updates machine images and versions automatically. Are people trying to self-host Kubernetes for production cases, and that's where this pain comes from?

Sorry if it is a rude question.

snthpy · 10 days ago
For an SME with nonetheless critical workloads, 10,000 containers is not small; to me that's massive, in fact. I run fewer than 10, but I need those to be HA. Uncloud sounds great for my use case.
snthpy commented on Uncloud - Tool for deploying containerised apps across servers without k8s   uncloud.run/... · Posted by u/rgun
bluepuma77 · 10 days ago
It's not a k8s replacement. It's for the small dev team with no k8s experience. For people who might not use Docker Swarm because they see it's a pretty dead project. For people who think "everyone uses k8s, so we should too."

I need to run on-prem, so managed k8s is not an option. Experts tell me I should have 2 FTE to run k8s, which I don't have. k8s has so many components; how should I debug them in case of issues without k8s experience? k8s APIs change continuously; how should I manage that without k8s experience?

It's not a k8s replacement. But I do see a sweet spot for such a solution. We still run Docker Swarm on 5 servers, no hyperscalers, no API changes expected ;-)

snthpy · 10 days ago
I still run docker swarm on 3 servers. Haven't needed to update it much over the past 5 years.
snthpy commented on Uncloud - Tool for deploying containerised apps across servers without k8s   uncloud.run/... · Posted by u/rgun
psviderski · 11 days ago
This is exactly how it works now. The Compose file is the declarative specification of the services you want to run.

When you run the 'uc deploy' command:

- reads the spec from your compose.yaml

- inspects the current state of the services in the cluster

- computes the diff and deployment plan to reconcile it

- executes the plan after confirmation

Please see the docs and demo: https://uncloud.run/docs/guides/deployments/deploy-app

The main difference with Docker Swarm is that the reconciliation process is run on your local/CI machine as part of the 'uc deploy' CLI command execution, not on the control plane nodes in the cluster.

And it's not run in a loop automatically. If the command fails, you get instant feedback with the errors, which you can address before rerunning the command.

It should be pretty straightforward to wrap the CLI logic in a Terraform or Pulumi provider. The design principles are very similar, and it's written in Go.
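For readers unfamiliar with the pattern, here is a minimal, self-contained sketch of such a client-side diff step (hypothetical Python; Uncloud itself is written in Go, and none of these names are its actual API):

```python
# Hypothetical sketch of a compose-style diff: compare desired vs current
# services and emit a deployment plan. Not Uncloud's real implementation.
def plan(desired: dict, current: dict) -> list[tuple[str, str]]:
    steps = []
    for name, spec in desired.items():
        if name not in current:
            steps.append(("create", name))
        elif current[name] != spec:
            steps.append(("update", name))
    for name in current:
        if name not in desired:
            steps.append(("remove", name))
    return steps

desired = {"web": {"image": "nginx:1.27"}, "api": {"image": "api:v2"}}
current = {"web": {"image": "nginx:1.25"}, "worker": {"image": "worker:v1"}}
print(plan(desired, current))
# [('update', 'web'), ('create', 'api'), ('remove', 'worker')]
```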

snthpy · 10 days ago
That's really interesting and cool. In that case, calling it imperative rather than declarative is underselling it, IMHO. I haven't worked that much with Terraform, but in my usage from the CLI that is how it works too, and I consider that declarative.

I get that putting the declarative spec in the control plane and having the service auto-reconcile continuously is another layer, but this is great as a start.

In fact, could you not just cron the CLI deployment command on the nodes and get an effective poor man's declarative layer to guard against node failures, if you're OK with a one-minute or one-second recovery objective?
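Something like this hypothetical crontab entry (assuming 'uc deploy' has a non-interactive mode; the path and log file are placeholders):

```
# Hypothetical crontab entry: re-run the deploy every minute so the cluster
# converges back to the compose spec after a failure. Assumes uc deploy
# can run without interactive confirmation; check its flags first.
* * * * * cd /srv/myapp && uc deploy >> /var/log/uc-deploy.log 2>&1
```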

snthpy commented on Uncloud - Tool for deploying containerised apps across servers without k8s   uncloud.run/... · Posted by u/rgun
psviderski · 11 days ago
For a public cluster with multiple ingress (Caddy) nodes, you'd need a load balancer in front of them to properly handle routing and the outage of any one of them. You'd use the IP of the load balancer on the DNS side.

Note that a DNS A record with multiple IPs doesn't provide failover, only round robin. But you can use the Cloudflare DNS proxy feature as a poor man's LB. Just add 2+ proxied A records (orange cloud) pointing to different machines. If one goes down with a 52x error, Cloudflare automatically fails over to the healthy one.
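In zone-file terms, the multi-record setup is just this (hypothetical name and IPs):

```
; Two A records for the same name: plain DNS round-robins between them
; but performs no health checks. Behind Cloudflare's proxy (orange cloud),
; an origin returning 52x errors is skipped in favor of the healthy one.
app.example.com.   300   IN   A   203.0.113.10
app.example.com.   300   IN   A   203.0.113.20
```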

snthpy · 10 days ago
I looked into this yesterday for making Caddy HA on my Proxmox cluster and stumbled upon keepalived. It will provide you with a virtual IP and failover, but not load balancing, so you'd still need to point that at something like HAProxy.

Could be something interesting to integrate though.
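For reference, a minimal keepalived sketch along those lines (interface, router ID, and VIP are placeholders):

```
# Minimal keepalived VRRP sketch: advertises a virtual IP that moves to the
# BACKUP node if the MASTER stops advertising. Pair the VIP with HAProxy or
# Caddy for actual load balancing. All values below are placeholders.
vrrp_instance VI_CADDY {
    state MASTER            # use BACKUP plus a lower priority on the standby
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24    # the VIP your DNS record points at
    }
}
```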

snthpy commented on Uncloud - Tool for deploying containerised apps across servers without k8s   uncloud.run/... · Posted by u/rgun
psviderski · 11 days ago
Hey, creator here. Thanks for sharing this!

Uncloud[0] is a container orchestrator without a control plane. Think multi-machine Docker Compose with automatic WireGuard mesh, service discovery, and HTTPS via Caddy. Each machine just keeps a p2p-synced copy of cluster state (using Fly.io's Corrosion), so there's no quorum to maintain.

I’m building Uncloud after years of managing Kubernetes in small envs and at a unicorn. I keep seeing teams reach for K8s when they really just need to run a bunch of containers across a few machines with decent networking, rollouts, and HTTPS. The operational overhead of k8s is brutal for what they actually need.

A few things that make it unique:

- uses the familiar Docker Compose spec, no new DSL to learn

- builds and pushes your Docker images directly to your machines without an external registry (via my other project unregistry [1])

- imperative CLI (like Docker) rather than declarative reconciliation. Easier mental model and debugging

- works across cloud VMs, bare metal, even a Raspberry Pi at home behind NAT (all connected together)

- minimal resource footprint (<150 MB RAM)

[0]: https://github.com/psviderski/uncloud

[1]: https://github.com/psviderski/unregistry

snthpy · 10 days ago
Wow, this sounds very cool.

I share the same concern about security as the top comments, but I'm going to check it out in more detail.

I wonder whether, if you integrated some decentralized identity layer with DIDs, this could be turned into a distributed compute platform?

Also, what is your thinking on high availability and failover?

snthpy commented on What's the deal with Euler's identity?   lcamtuf.substack.com/p/wh... · Posted by u/surprisetalk
rmunn · 10 days ago
Personally, I prefer the version with tau (2 times pi) in it rather than the one with pi:

e^(i*tau) = 1

I won't reproduce https://www.tauday.com/tau-manifesto here, but I'll just mention one part of it. I very much prefer doing radian math using tau rather than pi: tau/4 radians is just one-fourth of a "turn", one-fourth of the way around the circle, i.e. 90°. Which is a lot easier to remember than pi/2, and would have made high-school trig so much easier for me. (I never had trouble with radians, and even so I would have had a much easier time grasping them had I been taught them using tau rather than pi as the key value).
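You can sanity-check the "turns" intuition numerically, e.g. with Python's cmath:

```python
import cmath, math

tau = 2 * math.pi                 # one full turn
print(cmath.exp(1j * tau))        # ~ (1+0j): e^(i*tau) = 1, up to float error
print(cmath.exp(1j * tau / 4))    # ~ 1j: a quarter turn lands at 90 degrees
```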

snthpy · 10 days ago
This!

I've been posting the manifesto to friends and colleagues every tau day for the past ten years. Let's keep chipping away at it and eventually we won't obfuscate radians for our kids anymore.

Friends don't let friends use pi!

snthpy commented on MinIO is now in maintenance-mode   github.com/minio/minio/co... · Posted by u/hajtom
victormy · 11 days ago
Big thanks to MinIO, RustFS, and Garage for their contributions. That said, MinIO closing the door on open source so abruptly definitely spooked the community. But honestly, fair play to them—open source projects eventually need a path to monetization.

I’ve evaluated both RustFS and Garage, and here’s the breakdown:

Release Cadence: Garage feels a bit slower, while RustFS is shipping updates almost weekly.

Licensing: Garage is on AGPLv3, but RustFS uses the Apache license (which is huge for enterprise adoption).

Stability: Garage currently has the edge in distributed environments.

With MinIO effectively bowing out of the OSS race, my money is on RustFS to take the lead.

snthpy · 11 days ago
Thanks. I hadn't heard of RustFS. I've been meaning to migrate off my MinIO deployment.

I recently learned that Ceph also has an object store and have been playing around with microceph. Ceph is also more flexible than Garage in terms of aggregating differently sized disks. Since it's already integrated into Proxmox and has over a decade of enterprise deployments, it's my top contender at the moment. I'm just not sure about the level of S3 API compatibility.
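One hedged way to probe that compatibility is to point a stock S3 client at the RADOS Gateway endpoint; a boto3 sketch (endpoint URL and credentials are placeholders):

```python
import boto3

# Point a standard S3 client at a Ceph RADOS Gateway endpoint
# (placeholder URL/credentials) and exercise a few basic calls.
s3 = boto3.client(
    "s3",
    endpoint_url="http://ceph-rgw.local:7480",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
s3.create_bucket(Bucket="compat-test")
s3.put_object(Bucket="compat-test", Key="hello.txt", Body=b"hi")
print(s3.get_object(Bucket="compat-test", Key="hello.txt")["Body"].read())
```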

Any opinions on Ceph vs RustFS?

u/snthpy

Karma: 844 · Cake day: November 29, 2011
About
snthpy.at.hn

Chief Excel Officer, MAD* Scientist, Pythonista, Rustacean, PRQL Core Contributor (*: ML, AI, Data)

* twitter: [@T0bias_Brandt](https://x.com/T0bias_Brandt) * github: [@snth](https://github.com/snth/) * [PRQL](https://prql-lang.org/)
