I just wanted something that works as promised and respects the user (open source, 70kb total payload size, no ads, no tracking, feature complete, etc)
I call this, "Sweeping Vertigo" https://doersino.github.io/tixyz/?code=tan%28i%2Bt%29*random...
This is why we anthropomorphize everything. Why we buy our dogs a little dog-house that looks like a people house. The dog doesn't care. Why we play such different roles - a cruel boss may be a loving father a few hours later. Likewise, why people who adopt a persona and ask "What would X do?" find courage, or get to act differently (a common trick in sales). It is why religions made god in man's image: gods that fit human archetypes, roles, or feelings (the father, the mother, lust, war), etc.
And of course, it is why people with multiple personality disorder can have such vast personality changes and ups and downs.
What is the evolutionary benefit? Empathy. If I can simulate what you feel, I get to understand what you are going through, and I may be kinder to you. And this way we get to cooperate and build a civilization that is not based on swarm mechanics (like ants or bees).
Consciousness will soon have its Galileo moment. And we will be shocked to discover there is nothing special about our current human-centric world of "consciousness".
For those curious, here's a very well done documentary on Francis by BBC - https://www.youtube.com/watch?v=MgrO5za0lSY (warning, graphic themes)
For our current stack, the answer has been to make the entire business application run as a single process. We also use a single (mono) repository because it is a natural fit with the grain of the software.
As far as I am aware, there is no reason a single process cannot exploit the full resources of any computer. Modern x86 servers are ridiculously fast, as long as you can get at them directly. AspNetCore + SQLite (properly tuned) running on a 64 core Epyc serving web clients using Kestrel will probably be sufficient for 99.9% of business applications today. You can handle millions of simultaneous clients without blinking. Who even has that many total customers right now?
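For anyone wondering what "properly tuned" SQLite might look like, here is a minimal sketch of commonly recommended server-side pragmas. This is illustrative, not the commenter's actual configuration; it uses Python's sqlite3 module for brevity, though the same pragmas apply from .NET.

```python
import sqlite3

# Hypothetical tuning sketch - these pragma values are common starting
# points for a write-heavy server workload, not a definitive recipe.
conn = sqlite3.connect("app.db")
conn.execute("PRAGMA journal_mode=WAL")    # readers no longer block the writer
conn.execute("PRAGMA synchronous=NORMAL")  # safe with WAL, far fewer fsyncs
conn.execute("PRAGMA cache_size=-64000")   # ~64 MB page cache (negative = KiB)
conn.execute("PRAGMA busy_timeout=5000")   # wait up to 5 s on a locked database

# Confirm WAL took effect (only persists for file-backed databases)
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # "wal"
```

WAL mode plus a relaxed synchronous setting is usually the single biggest win, because it lets many concurrent readers proceed while one writer commits.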
Horizontal scalability is simply a band-aid for poor engineering in most (not all) applications. The poor engineering, in my experience, typically comes from underestimating how fast a single x86 thread is and heading down the concurrent & distributed computing rabbit hole from there. It is a rabbit hole that should go unexplored, if at all possible.
Here's a quick trick if none of the above sticks: If one of your consultants or developers tells you they can make your application faster by adding a bunch of additional computers, you are almost certainly getting taken for a ride.
It's a somewhat odd solution to an all-too-common problem, but any solution is better than dealing with such an annoying problem. (Source: I made Docker the de facto cross-team communication standard in my company. "I'll just give you a Docker container, no need to fight trying to get the correct version of nvidia-smi to work on your machine" type of thing.)
It probably depends on the space and the types of software you're working on. If it's frontend applications, for example, then it's overkill. But if somebody wants you to, say, install multiple Elasticsearch versions + some global binaries for some reason + a bunch of different GPU drivers on your machine (you get the idea), then Docker is a big net positive. Both for getting something to compile without drama and for not polluting your host OS (or VM) with conflicting software packages.
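As a concrete sketch of the Elasticsearch case: instead of installing several versions side by side on the host, each team can hand over a Dockerfile (or image) with one pinned version. The image tag below is just an example choice, not anything from the original comment.

```dockerfile
# Hypothetical example: one pinned Elasticsearch version per container,
# so nothing conflicting is ever installed on the host OS.
FROM docker.elastic.co/elasticsearch/elasticsearch:7.17.0
ENV discovery.type=single-node
```

Building and running this gives a reproducible environment regardless of what is (or isn't) installed on the recipient's machine, which is exactly the "I'll just give you a container" workflow described above.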