in my case the problem doesn't arise because the control plane and data plane are separated by design — metadata and signals never share a concurrency primitive with chunk writes. the data plane only sees chunks of a similar order of magnitude, so a fixed worker pool doesn't overprovision on small payloads or stall on large ones.
curious whether your control and data plane are mixed on the same path, or whether the variance is purely in the blob sizes themselves.
if it's the latter: I wonder if batching sub-1MB payloads upstream would have given you the same result without changing the concurrency primitive. did you have constraints that made that impractical?
https://github.com/php/frankenphp/pull/2016 if you want to see a “correctly behaving” implementation that hits 100% CPU usage under contention.
From my pov, the worker pool's job isn't to absorb saturation. it's to make capacity explicit so the layer above can route around it. a bounded queue that returns ErrQueueFull immediately is a signal, not a failure — it tells the load balancer to try another instance.
saturation on a single instance isn't a scheduler problem, it's a provisioning signal. the fix is horizontal, not vertical. once you're running N instances behind something that understands queue depth, the "unfair scheduler under contention" scenario stops being reachable in production — by design, not by luck.
the FrankenPHP case looks like a single-instance stress test pushed to the limit, which is a valid benchmark but not how you'd architect for HA.
If you fix N workers and control dispatch order yourself, the scheduler barely gets involved — no stealing, no surprises.
The inter-goroutine handoff is ~50-100ns anyway.
Isn't the real issue using `go f()` per request rather than something in the language itself?
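For contrast, a minimal sketch of the fixed-N pattern (illustrative only — the worker count and the squaring "work" are placeholders): N goroutines created once, with dispatch order decided by whoever feeds the channel rather than by per-request `go f()` and the scheduler:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const nWorkers = 4
	requests := make(chan int) // unbuffered: each send is a direct handoff to an idle worker

	var wg sync.WaitGroup
	results := make(chan int, 16)

	// Fixed pool: N goroutines created once, instead of `go handle(req)` per request.
	for w := 0; w < nWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for req := range requests {
				results <- req * req // stand-in for real work
			}
		}()
	}

	// Dispatch order is controlled here, at the send site.
	for i := 1; i <= 16; i++ {
		requests <- i
	}
	close(requests)
	wg.Wait()
	close(results)

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println(sum) // 1² + 2² + … + 16² = 1496
}
```

with this shape there's nothing to steal: the scheduler only ever sees N long-lived goroutines blocked on one channel.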
1. Perhaps I am misusing the term "blockchain". Access is granted with signed blocks. Each block can introduce changes, like granting or removing access for other users, or including the encryption key in an envelope. Each block links to the previous one via its hash. There is no consensus mechanism.
2. The vault is defined by a storage and the creator's public keys. A client must know the creator's keys in advance and uses them to verify signatures. The creator can then grant admin rights to other users with specific blocks. An access grant not signed by an admin will be rejected by the client. It is not really about data truth, because the target is information exchange. Does that answer the question?
3. Go is the implementation language, not really a binding. I use Python in the first example because it is more compact, but the guide shows samples for all supported languages. The primary targets are Go for the server side and Dart for mobile. Python is effective for samples and experiments.
A few thoughts after your answers:
The E2E file sync part has existing solutions, and your access rights system is really a signed append-only log rather than a blockchain (no consensus, no decentralization) — which is fine, but the term might create misleading expectations.
What I'm more curious about is the access model itself. How are access tokens created and transferred? Who consumes them, and how does authorization propagate? Have you considered a salted API where each user carries a unique identifier, so the whole grant/revoke/delegate flow goes through a single unified mechanism regardless of what's being accessed?
The SQL sync layer is what actually caught my eye — I've worked on similar problems for specific use cases, and encrypted database sync between peers is a genuinely hard problem. That feels like your real differentiator.
On that note: does the SQL layer reference the file content or file paths? I'm guessing you built both interfaces because they're correlated — the SQL holds structured data that points to the encrypted files. If so, that's worth making explicit, because right now they look like two unrelated features rather than two sides of the same system.
Tomorrow: datasets.
Keep publishing!
The LLM does not act on production. It builds scripts, and you take the greatest care with those scripts.
Clone your customer data and run everything against the copy.
Treat the LLM as a dangerous tool: assume it will fail every time it can.
Even with all these LLM-specific habits, you still get 100x productivity.
Because each of these practices can be implemented by LLMs, for LLMs, in many ways, it's almost free. Just plan for it.
OpenAI didn't object to anything.
They're all bad, but some are worse than others.