It's not that people couldn't also diet, exercise, choose veggies, eat more fiber, etc.
Yeah, but that's the thing: when you're interviewing, you usually have some sort of access to talk to future potential colleagues, your boss and so on, and they're more open because you're not just "outside the company" but investigating whether you'd like to join them. You'll get different answers compared to someone 100% outside the company.
HN as a commenting community is markedly more hit-and-miss. We often comment without reading the articles, we are sometimes gratuitously negative for the sake of negativity, and there isn't any other place where I've seen so many people being confidently wrong about my areas of expertise. I think we'd be better off if we were more willing to say "this is okay and I don't need to have a strong opinion about it" or "I'm probably not an expert on X, even though I happen to be good with programming".
> The narrative synthesis presented negative associations between GPS use and performance in environmental knowledge and self-reported sense of direction measures and a positive association with wayfinding. When considering quantitative data, results revealed a negative effect of GPS use on environmental knowledge (r = −.18 [95% CI: −.28, −.08]) and sense of direction (r = −.25 [95% CI: −.39, −.12]) and a positive yet not significant effect on wayfinding (r = .07 [95% CI: −.28, .41]).
https://www.sciencedirect.com/science/article/pii/S027249442...
Keeping the analogy going: I'm worried we will soon have a world of developers who need GPS to drive literally anywhere.
But unless you have the actual numbers, I always find it a bit strange to assume that all the people involved, who deal with large amounts of money all the time, lost all ability to reason about this thing. Because right now that would mean, at minimum: all the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.
Of course, there is a lot of uncertainty — which, again, is nothing new for these people. It's just a weird thing to assume that.
Of course, at a basic level, if AI is indeed a "bubble", then the investors did not reason correctly. But this situation is more like poker than chess: you can't expect decisions that were rational given the information at the time to always turn out to be correct.
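To make the poker-vs-chess point concrete, here's a toy sketch (the numbers are invented and have nothing to do with actual AI investments): a positive-expected-value bet is the rational choice, yet it still loses a meaningful fraction of the time.

    import random

    # Toy bet (invented numbers): win $150 with probability 0.7,
    # lose $100 with probability 0.3.
    # Expected value = 0.7 * 150 - 0.3 * 100 = +$75, so taking the bet
    # is "rational", and it still loses about 30% of the time.
    def play_once(rng):
        return 150 if rng.random() < 0.7 else -100

    rng = random.Random(42)
    outcomes = [play_once(rng) for _ in range(10_000)]
    print(sum(outcomes) / len(outcomes))                  # ~75 (average gain)
    print(sum(o < 0 for o in outcomes) / len(outcomes))   # ~0.3 (share of losses)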
Philosopher John Rawls made this a key point of his thinking:
* https://en.wikipedia.org/wiki/Luck_egalitarianism
* https://en.wikipedia.org/wiki/John_Rawls#A_Theory_of_Justice
When I see this line of reasoning, it leads me down the road of determinism instead. Who is to say what determines the quality of choices people make? Does one's upbringing, circumstance, and genetics not determine the quality of one's mind and therefore whether or not they will make good choices in life? I don't understand how we can meaningfully distinguish between "things that happen to you" and "things you do" if the set of "things that happen to you" includes things like being born to specific people in a specific time and place. Surely every decision you make happens in your brain and your brain is shaped by things beyond your control.
Maybe this is an unprovable position, but it does lead me to think that for any individual, making a poor choice isn't really "their" fault in any strong sense.
Anthropic also recently said they think longer/compressed context can serve as an alternative (not sure of the exact wording/characterization they used) to continual/incremental learning. So context space is also going to be competing with model interaction history if you want to avoid Groundhog Day, i.e. continually having to tell/correct the model the same things over and over.
It seems we're now firmly in the productization phase of LLM development, as opposed to seeing much fundamental improvement (other than math olympiad etc. "benchmark" results, released to give the impression of progress). Yannic Kilcher is right that "AGI is not coming", at least not in the form of an enhanced LLM. Demis Hassabis's very recent estimate was a 50% chance of AGI by 2030, i.e. still years away.
While we're waiting for AGI, a better approach than needing everything in context would be to lean more heavily on tool use, perhaps more similar to how a human works: we don't memorize the entire code base (at least not in complete line-by-line detail, even though we may have a pretty clear overview of a 10K LOC codebase while we're in the middle of development), but rather rely on tools like grep and ctags to locate relevant parts of the source code on an as-needed basis.
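As a rough sketch of what that kind of tool use could look like (the function name and snippet format here are invented for illustration, not any particular agent framework's API): instead of stuffing the repository into the prompt, the model calls a grep-like search and only the matching snippets enter the context.

    import re
    from pathlib import Path

    # Hypothetical grep-style "code search" tool an agent could call,
    # returning only matching lines plus a little surrounding context.
    def search_code(root: str, pattern: str, context: int = 2, max_hits: int = 20):
        regex = re.compile(pattern)
        hits = []
        for path in Path(root).rglob("*.py"):
            lines = path.read_text(errors="ignore").splitlines()
            for i, line in enumerate(lines):
                if regex.search(line):
                    lo, hi = max(0, i - context), min(len(lines), i + context + 1)
                    hits.append(f"{path}:{i + 1}\n" + "\n".join(lines[lo:hi]))
                    if len(hits) >= max_hits:
                        return hits
        return hits

    # e.g. the model requests search_code("src", r"def load_config") and only
    # those few snippets get added to its context, not the whole repo.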
In your working mental model, you have a broad understanding of the domain and a broad understanding of the architecture. You summarize broad sections of the program into simpler ideas: module_a does x, module_b does y, insane file c does z, and so on. Then there is the part of the software you're actively working on, where you need more concrete context.
So as you move towards the central task, the context becomes more specific. But the vague outer context is still crucial to the task at hand. Now, you can certainly find ways to summarize this mental model in an input to an LLM, especially with increasing context windows. But we probably need to understand how we would better present these sorts of things to achieve performance similar to a human brain, because the mechanism is very different.
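One way to picture presenting that layered mental model to an LLM (a minimal sketch; the module names, summaries, and file path are invented): keep one-line summaries as the vague outer context and include full source only for the part being actively worked on.

    from pathlib import Path

    # Invented one-line summaries standing in for the "vague outer context".
    MODULE_SUMMARIES = {
        "module_a": "does x (auth, sessions)",
        "module_b": "does y (billing, invoices)",
        "insane_file_c": "does z (legacy import/export, handle with care)",
    }

    def build_context(focus_file: str) -> str:
        # Broad overview for everything, full detail only for the focus file.
        overview = "\n".join(f"- {name}: {summary}"
                             for name, summary in MODULE_SUMMARIES.items())
        detail = Path(focus_file).read_text()
        return ("Project overview (summarized):\n" + overview +
                f"\n\nCurrent focus ({focus_file}), full source:\n" + detail)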
You're throwing large amounts of equity away every 4 years.
Also electric cars get killed on the depreciation curve.
Leases can be better, but again they are usually better choices in high depreciation scenarios (like luxury vehicles or EVs, as you point out), not low depreciation scenarios.
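For a rough sense of the scale involved (purely illustrative numbers, assuming a $40k car and ~15%/year depreciation, not real market data): trading in every 4 years absorbs a new car's steepest depreciation three times over, versus once if you keep it.

    # Back-of-the-envelope comparison with assumed numbers.
    PRICE = 40_000
    RATE = 0.15  # assumed annual depreciation

    def value_after(years: int) -> float:
        return PRICE * (1 - RATE) ** years

    # Buy new and trade in every 4 years, three cycles over 12 years:
    swap_every_4 = 3 * (PRICE - value_after(4))

    # Buy new once and keep it for 12 years:
    keep_12 = PRICE - value_after(12)

    print(round(swap_every_4))  # ~57,000 of depreciation absorbed
    print(round(keep_12))       # ~34,000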