Or what?
I'd say the better word for that is polarising rather than political, but they're synonyms these days.
LLMs are pattern-imitating machines with a random number generator added to try to keep them from repeating the same pattern, which is what they really "want" to do. It's a brilliant hack, because repeating the same pattern when it's not appropriate is a dead giveaway of machine-like behavior. (Adding a random number generator also makes LLMs that much harder to evaluate, since you need to repeat your queries and do statistics.)
Although zero-shot question-answering often works, a more reliable way to get useful results out of an LLM is to "lean into it" by giving it a pattern and asking it to repeat it. (Or if you don't want it to follow a pattern, make sure you don't give it one that will confuse it.)
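For example, "leaning into it" can be as simple as a few-shot prompt that demonstrates the shape of answer you want. A minimal sketch, where `complete` is a hypothetical stand-in for whatever LLM API you're actually calling:

    # Show the model the pattern, then ask it to continue the pattern.
    prompt = """\
    Review: "Great battery life, dim screen." -> mixed
    Review: "Broke after two days." -> negative
    Review: "Exactly what I needed." -> positive
    Review: "Fast shipping, but the manual is useless." ->"""

    # `complete` is hypothetical; substitute your client's call here.
    # temperature=0 also tamps down the random number generator mentioned above.
    label = complete(prompt, temperature=0)
    print(label)  # the examples steer the model toward one-word labels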
For the past decade-plus I have mostly searched only for user-facing strings. Those have the advantage of being longer, so they're more easily searched.
Honestly, posts like this sound like the author needs to invest some time in learning about better tools for his language. A good IDE alone will save you so much time.
And grep cuts right through that in a pretty universal way. What the post describes are just ways to not work against grep to optimize for something ephemeral.
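To make it concrete: a user-facing string is usually one grep away, no language tooling required (the string and path here are made up):

    # -r: recurse, -n: show line numbers, -F: treat the pattern as a literal string
    grep -rnF "Your session has expired" src/

The -F matters because user-facing strings often contain punctuation that grep would otherwise interpret as a regex.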
The position of the monolith people is "you should have one thing." Well, that's obviously wrong if you're doing anything even slightly complex.
The position of the microservice people is "you should have more than one thing", but it gets pretty fuzzy after that. It's so poorly defined it's not useful.
How about having enough things that all your codebases remain at a size where you don't dread digging into even the one your most prolifically incompetent coworker has gone to town on? Enough things that when the not-very-critical ones fail, it doesn't matter very much.
But only that many things. If you need to update more than one thing when you want to add a simple feature, if small (to medium) changes propagate across multiple codebases, well, ya done messed up.
If you're one of the people who believe monoliths are The Way, you're making a bizarre bet: there are N potential pieces a complex system could be split into, and you're saying the optimum is N == 1. What are the odds of that? Sometimes, maybe. But mostly N will be something like 7. Occasionally 1000. Occasionally 2. But usually 7. Or something.
This seems really obvious to me.
Plenty of people in the world live in much smaller places. See HK apartments or the "average" Tokyo apartment as examples.
I owned a car from when I was 19, but haven't for 14 of the last 20 years. I don't miss it. But I live somewhere where I don't really need one.
BS. If that was the case, then YC would be a non-profit and they would accept everyone.
There's nothing wrong with wanting to make money AND help people, but pretending like PG/Jessica are some benevolent leaders is disingenuous. You're falling victim to some of the cargo cult nature of YC.
As PG has said himself (paraphrased), "I think most tech entrepreneurs want to become incredibly wealthy, not to simply be wealthy, but to work on things they truly want to do." My interpretation is that he is no different, and that he gets personal enjoyment out of helping others (the money gives him the freedom to do this without ever having to worry about finances again).
At least that's what I take from it. Cultism is just something that follows success.
Just as a thought experiment: if the entire web moved from gzip to brotli, or if we introduced an even better compression algorithm that was widely deployed across the internet, how would that impact me? A 5% reduction in CPU usage for every computer loading every page? I'd be curious to hear what that would actually amount to, power-wise.
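If anyone wants a rough feel for one axis of that, it's easy to measure on a representative payload. A minimal sketch, assuming the third-party brotli package (pip install brotli) and some local page.html to test with:

    import gzip
    import time

    import brotli  # third-party: pip install brotli

    data = open("page.html", "rb").read()  # any representative payload

    t0 = time.perf_counter()
    gz = gzip.compress(data, compresslevel=9)
    t1 = time.perf_counter()
    br = brotli.compress(data, quality=11)
    t2 = time.perf_counter()
    print(f"gzip:   {len(gz)} bytes, compressed in {t1 - t0:.4f}s")
    print(f"brotli: {len(br)} bytes, compressed in {t2 - t1:.4f}s")

    # Decompression is the cost every client pays on every page load.
    t3 = time.perf_counter()
    gzip.decompress(gz)
    t4 = time.perf_counter()
    brotli.decompress(br)
    t5 = time.perf_counter()
    print(f"gunzip:   {t4 - t3:.6f}s")
    print(f"unbrotli: {t5 - t4:.6f}s")

My understanding is that brotli's win is mostly transfer size rather than decompression CPU, so the power question is probably more about bytes moved than cycles spent.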
I think my biggest concern with AI is that its biggest proponents have the least wisdom imaginable. I'm deeply concerned that our technocrats are running full speed at AGI with essentially zero plan for what happens if it "disrupts" 50% of jobs in a shockingly short period of time, or for worse outcomes. (There's some evidence the new tariff policies were generated with LLMs, so it's probably already making policy. But it could get worse: what happens when bad actors start using these things to intentionally gaslight the population?)
But I actually think AI (not AGI) as an assistant can be helpful.
At the individual level, AI is useful as a helper for your generative tasks. I'd argue against using it for analytic tasks, but YMMV.
At the societal level, you as an individual can no longer trust anything society has produced, because it's likely some AI-generated bullshit.
Not long ago, if you didn't trust a source, you could build your understanding by evaluating a plurality of sources and perspectives and get to the answer in a statistical manner. Now every possible argument can be stretched in any possible dimension, and your ability to build a conclusion has been ripped away.