I expect that in this case, like in all cases, as the datasets become galactically large, the O(n) algorithm will start winning again.
One more fun fact: this is also the reason why Turing machines are a popular complexity model. The tape on a Turing machine does not allow random access, so it simulates the act of "going somewhere to get your data". And as you might expect, hash table operations are not O(1) on a Turing machine.
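You can see a shadow of this even on real hardware: a hash lookup is "O(1)" in the RAM model, but the cost of each random memory access grows as the table outgrows each cache level. Here is a rough sketch that times per-lookup cost for sets of increasing size; absolute numbers depend entirely on your machine, and Python interpreter overhead blunts the effect, but the upward trend is usually visible.

```python
import random
import time

def time_lookups(n, trials=50_000):
    """Average time per membership test in a set of n random integers."""
    s = set(random.sample(range(n * 10), n))
    keys = [random.randrange(n * 10) for _ in range(trials)]
    start = time.perf_counter()
    for k in keys:
        k in s  # each probe is a near-random memory access
    return (time.perf_counter() - start) / trials

# Per-lookup cost tends to climb as the table spills out of L1/L2/L3 cache,
# even though every lookup is "constant time" on paper.
for n in (1_000, 1_000_000, 5_000_000):
    print(f"n={n:>9}: {time_lookups(n) * 1e9:.0f} ns/lookup")
```

This is exactly the "going somewhere to get your data" cost that the Turing-machine tape makes explicit instead of hiding.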
More autonomy, but MUCH more expensive. Thousands or tens of thousands of dollars per use. The issue is indeed using mass-produced consumer drones. It's a bit like the widespread use of "technicals" in some conflicts: yes, a pickup truck with a .50cal in the back is inferior to tanks or armored cars, but it's also much, much cheaper.
There's a bit of a "Sherman vs. Tiger" thing that's been going on since the dawn of industrialised warfare. Is it better to have a more effective weapon that you can only afford a few of, or lots of cheaper ones?
The US doctrine approach to the problem would simply be a set of B-2 bunker-buster decapitation strikes on Russian military HQs, but of course that option is not available to Ukraine. They can't even manage an Iraq-war-style wave of SEAD strikes followed by unit-level CAS. The air war has kind of stalemated, with neither side having conventional air superiority and both being vulnerable to the other's anti-air.
This is less of an issue with systems where there is little monetary value attached (I don't know anyone whose mortgage is paid for by their Stack Overflow reputation). Now imagine that the future prospects of a national lab with a multi-million-dollar yearly budget are tied to a system that can be (relatively easily) gamed with a Chinese or Russian bot farm for a few thousand dollars.
There are already players trying hard to game the current system, and it sometimes sort of works, but not quite, precisely because of how hard it is to get into the "high reputation" club. (On the other hand, once you're in, you can often publish a lot of lower-quality stuff on reputation alone, so I'm not saying this is a perfect system either.)
In other words, I don't think anyone reasonable is seriously against making peer review more transparent, but for better or worse, the current system (with all of its other downsides) is relatively robust to outside interference.
So, unless we (a) make "being a scientist" much more financially accessible, or (b) untangle funding from this new "open" measure of "scientific achievement", the open system would probably not be very impactful. Of course, (a) is unlikely, at least in most high-impact fields; CS was an outlier for a long time, not so much today. And (b) would mean that funding agencies would still need something else to judge your research, which would most likely still be some closed, reputation-based system.
Edit TL;DR: Describe how the open science peer-review system should be used to distribute funding among researchers while being reasonably robust to coordinated attacks. Then we can talk :)
(a) [Name 2005] is much easier to mentally track if it appears repeatedly in longer text than [5] (at least for me). [5] is just [5]. [Name 2005] is "that paper by Name from twenty years ago".
(b) By using [Name 2005], I might not know which exact paper this is, but I get how recent it is w.r.t. what I am reading. In many cases, this is useful context. Saying "[5] proves X" could mean that this is a new result, or a well known fact. Saying "[Name 1967] proves X" clearly indicates that this is something that has been known for some time.
An error in manipulation leading to an external communication on something this high profile is sure to affect your career. It's like a biologist claiming to have found evidence of extraterrestrial life and then having to retract. I think I would consider hara-kiri...
Yes, retracting these is still shameful, but it's not a "we found extraterrestrial life" claim. It's a "we received weird signals from a nebula that we don't understand so far" claim.
And yes, a lot of supporting but inconclusive evidence is still supporting evidence. My point is not that (most) scientists would risk lying about replicating a superconductor, but rather that uncertain or inconclusive results with a solid chunk of plausible deniability in a rapidly evolving environment go a long way towards being "in the room where it happened".