My intuition is that a properly sharded database will perform as fast as, or faster than, a non-sharded one in all scenarios, whereas an automatically sharded database will actually perform worse until you reach traffic levels that a single instance can't handle no matter what.
Am I wrong?
You can get the expected "single shard" performance in CockroachDB by manually splitting the shards (called "ranges" in CockroachDB) along the lines of the expected single-shard queries (what you call a "properly sharded database"). This is easy to do with a single SQL command. (This is what we do today; we use CockroachDB for strongly consistent metadata.)
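As a sketch of what that single command looks like: CockroachDB supports an `ALTER TABLE ... SPLIT AT` statement for forcing range boundaries. The table and key values below are hypothetical, chosen only to illustrate splitting ranges along expected query boundaries (e.g. per tenant):

```sql
-- Hypothetical table keyed by tenant_id; force range boundaries
-- at tenant IDs 100 and 200 so each tenant's traffic stays in
-- its own range (and thus on its own set of replicas).
ALTER TABLE accounts SPLIT AT VALUES (100), (200);
```

After the split, queries scoped to a single tenant behave like queries against a manually sharded single instance, because they touch only one range.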
The difference between CockroachDB and a manually sharded database is that when you _do_ have to perform some cross-shard transactions (which you inevitably have to do at some point), CockroachDB can execute them, with a reasonable performance penalty, using strong consistency and 2PC between the shards. Whereas in your manually sharded database... good luck! I hope you implement 2PC correctly.
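To make "implementing 2PC correctly" concrete, here is a minimal sketch of the two-phase commit protocol in Python. All class and method names are hypothetical, and this deliberately ignores the hard parts a real implementation must handle (coordinator crashes, durable logging, timeouts, recovery):

```python
# Minimal two-phase commit sketch (illustrative only; names are
# hypothetical and no real database library's API is being shown).

class Participant:
    """One shard taking part in a cross-shard transaction."""

    def __init__(self, name):
        self.name = name
        self.staged = None
        self.prepared = False
        self.committed = False

    def prepare(self, writes):
        # Phase 1: durably stage the writes, then vote yes/no.
        self.staged = writes
        self.prepared = True
        return True  # vote "yes"

    def commit(self):
        # Phase 2: make the staged writes visible.
        assert self.prepared, "commit without a successful prepare"
        self.committed = True

    def abort(self):
        # Roll back anything staged during prepare.
        self.staged = None
        self.prepared = False


def two_phase_commit(participants, writes_by_shard):
    """Coordinator: commit everywhere only if every shard votes yes."""
    votes = [p.prepare(writes_by_shard[p.name]) for p in participants]
    if all(votes):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.abort()
    return "aborted"
```

Even this toy version shows why "good luck" is apt: the real difficulty is not the happy path above, but surviving a coordinator crash between the two phases without leaving shards stuck prepared forever.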
If Intuit spends (just) $3.5M to significantly influence decisions that are worth billions to them, and potentially hundreds of dollars to every taxpayer (!), then I'm frustrated that corrupt politicians aren't doing more to leverage their corruption.
This kind of illegal influence should cost... at least $100M? Selling everyone out for fractions of a penny on the dollar is frankly just embarrassing.
Crime harder, elected reps, if you're going to get out of bed.
You'll find a more recent discussion in "Why is There so Little Money in U.S. Politics?" (2003). [1]
[1] https://www.aeaweb.org/articles?id=10.1257/08953300332116497...