Even if that is true, in some areas companies' interests strongly misalign with those of citizens. An insurance company earns more by rejecting customer claims. An ISP earns more by giving customers less bandwidth for a higher price. A toll road company earns more by doing as little maintenance as it can get away with, while keeping prices high.
Yes, in a perfect market, customers will flock to competitors that are better for them. However, in many cases a perfect market is not attainable, for example because the cost of entry is too high (a competing ISP would have to put its own fiber in the ground) or because there are network effects that are nearly impossible to break.
This is why certain markets need strong regulation or government monopolies -- to protect people from pure profit seeking. Health care is a good example: health care in Western Europe is much better than in the US, while less money is spent on it. This is because health care is strongly regulated and insurance companies cannot f*ck over customers. The objective function of maximizing profits becomes a constrained optimization problem, which generally leads to other ways of increasing profits, like pressuring pharmaceutical companies to lower the prices of medicines.
The issue with healthcare is that providers have leverage over insurers, not that there is a lack of competition for insurance.
My general take is that you want relatively few control loops, in positions of high leverage.
Especially at a time when the gates have come crashing down amid pronouncements that "now anybody can learn to code just by using LLMs," there is a shocking tendency to oversimplify and then pontificate on what are actually bewilderingly complicated systems, wrapped up in interfaces, packages, and layers of abstraction that hide that underlying complexity.
It reminds me of those quantum woo people, or movies like What the Bleep Do We Know!?, where a bunch of quacks with no actual background in quantum physics or science reason from drastically oversimplified, mathematics-free models of those theories to utterly absurd conclusions.
For many, this is not going to be a practical problem yet, as real volumes will run out of usable space before exhausting 2^32 inodes. However, it is theoretically possible with a volume as small as ~18 TiB (using 16 TiB for 2^32 4096-byte or smaller files, 1-2 TiB for 2^32 256- or 512-byte inodes, plus file system overheads).
Anticipating this problem, most newer file systems use 64-bit inode numbers, and some older ones have been retrofitted (e.g. the inode64 mount option in XFS). I don't think ext4 is one of them, though.
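If you're curious how close an existing volume actually is, `df -i` shows inode usage, or you can pull the same numbers from statvfs. A minimal Python sketch (the path is just an example):

```python
import os

def inode_usage(path="/"):
    """Rough equivalent of `df -i` for the filesystem containing `path`."""
    st = os.statvfs(path)
    total = st.f_files      # inodes the filesystem was created with
    free = st.f_ffree       # inodes still unallocated
    used = total - free
    pct = used / total if total else 0.0
    print(f"{path}: {used:,} / {total:,} inodes used ({pct:.1%})")

# Note: ext4 fixes the inode count at mkfs time (the bytes-per-inode ratio),
# so the total reported here is the real ceiling for that particular volume.
inode_usage("/")
```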
"your type of company" sod off. Meta is only like this because its got a massive advertising revenue stream.
The sheer amount of engineering time wasted because we don't document stuff is astounding.
For example, how many message queue systems do we have?
How many half-arsed message queues have been created because people didn't know about FOQS?
The cloud counterpart had 600+ MongoDB databases split among 3 Mongo clusters.
The integration team usually took 2 weeks to set up the on-premises software, while the cloud stuff took about a minute. The entire cloud setup was a single form that the integration team filled in with data.
The point I'm trying to make is that if your customers require separate infra, they can wait a business day to be set up. Meanwhile they can play in a sandbox environment.
It's also doable in a fully automated fashion, but you will need strong identity and payment verification to avoid DoS, and in those cases contracts usually fly around.
That's for the B2B side.
For B2C, you usually rely on a single DB and filter by a tenant ID column or similar, which can easily be abstracted away.
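As a rough illustration of how thin that abstraction can be (table and column names are made up, and SQLite stands in for whatever DB you actually use):

```python
import sqlite3

class TenantScopedDB:
    """Wraps a connection so every query is filtered to one tenant."""

    def __init__(self, conn, tenant_id):
        self.conn = conn
        self.tenant_id = tenant_id

    def fetch_orders(self):
        # The tenant filter is applied here, so callers can't forget it.
        return self.conn.execute(
            "SELECT id, total FROM orders WHERE tenant_id = ?",
            (self.tenant_id,),
        ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, tenant_id TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "acme", 9.99), (2, "globex", 20.0), (3, "acme", 5.0)],
)

acme = TenantScopedDB(conn, "acme")
print(acme.fetch_orders())  # only acme's rows: [(1, 9.99), (3, 5.0)]
```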
I don't see the value proposition here. Let's take a couple of examples.
If I need totally separate infra for each tenant, I'm going to go for Terraform.
If I need a separate database on the same DB infra, I'm going to either have a DB initialization script that creates a usable DB, or clone a template database that's already present.
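The template-clone route is a single statement in Postgres, and the template can already contain the schema and seed data. A rough sketch, assuming psycopg2 and made-up connection details and names:

```python
import psycopg2
from psycopg2 import sql

# Connect to the maintenance DB; CREATE DATABASE can't run inside a transaction.
conn = psycopg2.connect(dbname="postgres", user="admin", host="localhost")
conn.autocommit = True

tenant = "acme"  # hypothetical tenant identifier
with conn.cursor() as cur:
    # tenant_template is a pre-built database holding schema + prefilled data.
    cur.execute(
        sql.SQL("CREATE DATABASE {} TEMPLATE tenant_template").format(
            sql.Identifier(f"tenant_{tenant}")
        )
    )
conn.close()
```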
So why do I need your SDK? To avoid a call to Postgres to execute a script, or a Terraform script?
How does that work with the need for prefilled data?
Maybe I'm missing something, but I do not understand this service.
I am really into astronomy, but dealing with this just so stars pass the local meridian exactly at 00:00:00.000 is simply not worth it!
And one funny note: astronomers still use the Julian calendar (the one made by Caesar, without the 16th-century Gregorian corrections) to avoid similar issues. They avoid their own inventions!