No they don't. If we're talking about federal income tax, the vast majority of it is paid by the wealthy.
If you’re working on the same type of thing every day, you’ll likely remember how to reverse an array in JavaScript. The other day I was trying to remember how to reverse a string in JavaScript… that was fun.
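For anyone who hit the same wall: a minimal sketch of the usual trick, since strings don't have a `reverse()` of their own.

```javascript
// Arrays have a built-in reverse(), strings don't, so the common
// approach is split -> reverse -> join.
const reverseString = (s) => s.split('').reverse().join('');

reverseString('hello'); // 'olleh'

// Caveat: split('') splits on UTF-16 code units, so it mangles
// emoji and other characters outside the BMP. Spreading the string
// ([...s]) iterates by code points, which handles more cases.
const reverseByCodePoint = (s) => [...s].reverse().join('');
```

Even the code-point version won't keep multi-code-point grapheme clusters (like some emoji sequences) intact, which is part of why this is surprisingly easy to forget.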
Those are usually people who aren't changing languages or frameworks. Memory is mostly recency and repetition, so if you want better memory, narrowing your scope is a good strategy. I'd rather go broad so that I can better make connections between things, but then I always have to look up the specifics, especially now with LLMs right there.
This is the knowledge-in-the-head vs. knowledge-in-the-world thing from The Design of Everyday Things: if the knowledge is easily accessible in the world, you'll naturally keep it there, not in your head. Maybe Google and LLMs are now so fast that this is the result.
Don’t do what? Consider the primary cause of conflicts: simultaneous operations occurring on the same data on different nodes. That happens because data may not have distinct structural or regional boundaries, or because a single application instance is interacting with multiple nodes simultaneously without regard for transmission latency.
Thus the simplest way to avoid conflicts is to control write targets:
Use “sticky” sessions. Applications should only interact with a single write target at a time, and never “roam” within the cluster.
Assign app servers to specific (regional) nodes. Nodes in Mumbai shouldn’t write to databases in Chicago, and vice versa. It’s faster to write locally anyway.
Interact with specific (regional) data. Again, an account in Mumbai may physically exist in a globally distributed database, but multiple accessors increase the potential for conflicts.
Avoid unnecessary cross-node activity. Regional interaction also applies on a more local scale. If applications can silo or prefer certain data segments on specific database nodes, they should.
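The "sticky write target" idea above can be sketched in a few lines. This is purely illustrative: the node names, region keys, and session shape are all made up.

```javascript
// Hypothetical region -> write-node map; real deployments would
// get this from service discovery or config.
const WRITE_NODES = {
  'ap-south': 'db-mumbai-1',
  'us-east': 'db-chicago-1',
};

// Sticky resolution: pick a write target once per session and
// reuse it, so the application never "roams" across the cluster.
function writeTargetFor(session) {
  if (!session.writeNode) {
    session.writeNode = WRITE_NODES[session.region];
  }
  return session.writeNode;
}

const session = { region: 'ap-south' };
writeTargetFor(session); // 'db-mumbai-1'
writeTargetFor(session); // same node on every call, no roaming
```

The point is just that the routing decision is made once and cached on the session, which removes the "one app instance writing to many nodes" class of conflicts.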
As for updates on different database nodes modifying the same rows, there's a solution for that too: use a ledger instead.
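A toy sketch of the ledger idea, assuming a simple append-only log (the account IDs and node names are invented): instead of two nodes updating the same balance row in place and conflicting, each node only ever appends entries, and the current value is derived by folding the log.

```javascript
// Append-only log: concurrent writers on different nodes each add
// rows; nothing is ever updated in place, so there is no row-level
// write conflict to resolve.
const ledger = [];

function append(account, delta, node) {
  ledger.push({ account, delta, node, at: Date.now() });
}

// The balance is a pure function of the log.
function balance(account) {
  return ledger
    .filter((e) => e.account === account)
    .reduce((sum, e) => sum + e.delta, 0);
}

append('acct-1', 100, 'db-mumbai-1');
append('acct-1', -30, 'db-chicago-1'); // concurrent writer, no conflict
balance('acct-1'); // 70
```

Real systems still need an ordering/merge story for reads (and compaction of old entries), but the write path itself becomes conflict-free.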
The best points are in this summary near the end. IMO it's also better to allow for slower writes and do something simpler, rather than attempt complex distributed machinery just so writes are quick. Users seem to have a pretty long tolerance for something they understand as a write, even one taking many seconds.
When I write prompts, I've stopped thinking of LLMs as just predicting the next word, and instead think of them as a logical model built up by combining the logic of all the text they've seen. I think of the LLM as knowing that cats don't lay eggs, so when I ask it to finish the sentence "cats lay ...", it won't generate the word "eggs", even though "eggs" probably follows "lay" frequently in the training data.