Readit News
km144 commented on Ford kills the All-Electric F-150   wired.com/story/ford-kill... · Posted by u/sacred-rat
behringer · 2 days ago
The math doesn't work when you run the same calculation for buying low-mileage used cars or leasing.

You're throwing large amounts of equity away every 4 years.

Also electric cars get killed on the depreciation curve.

km144 · 8 hours ago
Low-mileage used cars don't come with a warranty, or at best a more limited one if they're CPO.

Leases can be better, but again they are usually better choices in high depreciation scenarios (like luxury vehicles or EVs, as you point out), not low depreciation scenarios.

km144 commented on Ford kills the All-Electric F-150   wired.com/story/ford-kill... · Posted by u/sacred-rat
frumper · 2 days ago
Trading in your new car at 4 years old sounds like bad math no matter what car you buy.
km144 · 2 days ago
Have you seen the prices of pre-owned Honda/Toyota sedans that are less than 5 years old? There are absolutely cars out there where trading in your new car after 3-4 years can make sense depending on the cost of the car, the depreciation curve, and whether you want to always be driving a relatively new car. Of course it's almost always going to be a better value proposition to drive the car for 10 years if you can, but that can still depend on depreciation.
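
To make that concrete, here's a rough back-of-envelope sketch in Python; the $30k price, 12-year horizon, and depreciation rates are made-up illustrative assumptions, not market data:

  # Depreciation-only comparison of two ownership strategies.
  # All numbers are illustrative assumptions, not real market data.

  def resale_value(price, years, annual_depreciation):
      # Resale value assuming a constant annual depreciation rate.
      return price * (1 - annual_depreciation) ** years

  def depreciation_cost(price, rate, horizon, trade_every):
      # Total depreciation paid over `horizon` years, trading in every `trade_every` years.
      cycles = horizon // trade_every
      return cycles * (price - resale_value(price, trade_every, rate))

  # A slow-depreciating sedan (~8%/yr) vs. a fast-depreciating EV or luxury car (~20%/yr).
  for label, rate in [("slow, Honda/Toyota-like", 0.08), ("fast, EV/luxury-like", 0.20)]:
      trade_4 = depreciation_cost(30_000, rate, 12, 4)    # new car every 4 years
      keep_12 = depreciation_cost(30_000, rate, 12, 12)   # keep one car for 12 years
      print(f"{label}: trade every 4y = ${trade_4:,.0f}, keep 12y = ${keep_12:,.0f}")

With these assumed numbers, the 4-year trade-in habit costs about $6.5k extra over 12 years on the flat curve versus about $25k extra on the steep one, which is exactly the depreciation-curve dependence described above.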
km144 commented on GLP-1 drugs linked to lower death rates in colon cancer patients   today.ucsd.edu/story/glp-... · Posted by u/gmays
exabrial · a month ago
Private people invested a lot of money to develop this and get it through testing. Allowing them to reap the benefits from their investment for a limited time is just fine.

It's not like people couldn't also: diet, exercise, choose veggies, eat more fiber, etc.

km144 · a month ago
Those things also require more willpower than taking a medication. Willpower is generally determined by your particular psychology, which is in turn determined by genetics and environmental factors. People don't have as much choice in the matter as your comment seems to imply. Getting GLP-1s to everyone who could benefit from them is extremely important for overall health.
km144 commented on Computer science courses that don't exist, but should (2015)   prog21.dadgum.com/210.htm... · Posted by u/wonger_
globular-toast · 2 months ago
Banks as an example of "getting things done" is laughable. Real industry gets things done: manufacturing, construction, healthcare etc. We could do without the massive leech that is the finance sector.
km144 · 2 months ago
"Real industry" also has quite a hard time getting things done these days. If you look around at the software landscape, you'll notice that "getting things done" is much easier for companies whose software interfaces less with the real world. Banking, government, defense, healthcare etc. are all places where real-life regulation has a trickle-down effect on the actual speed of producing software. The rise of big tech companies as the dominant economic powerhouses of our time is only further evidence that it's easier to just do a lot of things over the internet and even preferred, because the market rewards it. We would do well to figure out how to get stuff done in the real world again.
km144 commented on Today is when the Amazon brain drain sent AWS down the spout   theregister.com/2025/10/2... · Posted by u/raw_anon_1111
CaptainOfCoit · 2 months ago
> By the time their golden age is known outside of the company

Yeah, but that's the thing: when you're interviewing, you usually have some sort of access to potential future colleagues, your boss, and so on, and they're more open because you're not just "outside the company" but investigating whether you'd like to join them. You'll get different answers than someone 100% outside the company would.

km144 · 2 months ago
I think the problem is false positives, not false negatives. The people you interact with during the interview process have all sorts of reasons to embellish the experience of working at their company.
km144 commented on Hacker News – The Good Parts   smartmic.bearblog.dev/why... · Posted by u/smartmic
MountDoom · 2 months ago
HN as an aggregator of geek news is exceptional. It's not the first of its kind - Slashdot was quite similar - but perhaps because it's associated with the SF Bay Area, it managed to stay relevant while Slashdot withered away.

HN as a commenting community is markedly more hit-and-miss. We often comment without reading the articles, we are sometimes gratuitously negative for the sake of negativity, and there isn't any other place where I've seen so many people being confidently wrong about my areas of expertise. I think we'd be better off if we were more willing to say "this is okay and I don't need to have a strong opinion about it" or "I'm probably not an expert on X, even though I happen to be good with programming".

km144 · 2 months ago
You hit the nail on the head. There is no place on the internet more broadly susceptible to the same kind of "founder brain" malaise that has afflicted so many in Silicon Valley: "I am good at software development, therefore I am confident I have a good understanding of (and opinion on) all sorts of intellectual topics."
km144 commented on Vibe engineering   simonwillison.net/2025/Oc... · Posted by u/janpio
anabis · 2 months ago
Yeah, it's like a GPS navigation system. Useless and annoying on home turf. Invaluable in unfamiliar territory.
km144 · 2 months ago
Maybe that's an apt analogy in more ways than one, given the recent research out of MIT on AI's impact on the brain, and previous findings that GPS use deteriorates navigation skills:

> The narrative synthesis presented negative associations between GPS use and performance in environmental knowledge and self-reported sense of direction measures and a positive association with wayfinding. When considering quantitative data, results revealed a negative effect of GPS use on environmental knowledge (r = −.18 [95% CI: −.28, −.08]) and sense of direction (r = −.25 [95% CI: −.39, −.12]) and a positive yet not significant effect on wayfinding (r = .07 [95% CI: −.28, .41]).

https://www.sciencedirect.com/science/article/pii/S027249442...

Keeping the analogy going: I'm worried we will soon have a world of developers who need GPS to drive literally anywhere.

km144 commented on How the AI Bubble Will Pop   derekthompson.org/p/this-... · Posted by u/hdvr
jstummbillig · 3 months ago
People are always so fidgety about this stuff (for super understandable reasons, to be clear). People not much smarter than anyone else try to reason about numbers that are hard to reason about.

But unless you have the actual numbers, I always find it a bit strange to assume that all the people involved, who deal with large amounts of money all the time, have lost all ability to reason about this thing. Because right now that would mean, at minimum: all the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.

Of course, there is a lot of uncertainty, which, again, is nothing new for these people. It's just a weird thing to assume.

km144 · 3 months ago
I think it's a bit fallacious to imply that the only way we could be in an AI investment bubble is if people are reasoning incorrectly about the thing. Or at least, it's a bit reductive. There are risks associated with AI investment. The important people at FAANG/AI companies are the ones who stand to gain from investments in AI. Therefore it is their job to downplay and minimize the apparent risks in order to maximize potential investment.

Of course at a basic level, if AI is indeed a "bubble", then the investors did not reason correctly. But this situation is more like poker than chess: you cannot expect that decisions which appear rational will in fact turn out to be correct.

km144 commented on Nine things I learned in ninety years   edwardpackard.com/wp-cont... · Posted by u/coderintherye
throw0101c · 3 months ago
> This speaks to me. So much of our life circumstances are beyond our control (parents, genetics, geography, society, wider economy, etc.)

Philosopher John Rawls made this a key point for this thinking:

* https://en.wikipedia.org/wiki/Luck_egalitarianism

* https://en.wikipedia.org/wiki/John_Rawls#A_Theory_of_Justice

km144 · 3 months ago
> According to this view, justice demands that variations in how well-off people are should be wholly determined by the responsible choices people make and not by differences in their unchosen circumstances. Luck egalitarianism expresses that it is a bad thing for some people to be worse off than others through no fault of their own.

When I see this line of reasoning, it leads me down the road of determinism instead. Who is to say what determines the quality of choices people make? Does one's upbringing, circumstance, and genetics not determine the quality of one's mind and therefore whether or not they will make good choices in life? I don't understand how we can meaningfully distinguish between "things that happen to you" and "things you do" if the set of "things that happen to you" includes things like being born to specific people in a specific time and place. Surely every decision you make happens in your brain and your brain is shaped by things beyond your control.

Maybe this is an unprovable position, but it does lead me to think that for any individual, making a poor choice isn't really "their" fault in any strong sense.

km144 commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
HarHarVeryFunny · 4 months ago
Even a 1M token context is only roughly 20K LOC, so pretty limiting, especially if you're also trying to fit API documentation or any other lengthy material into the context.

Anthropic also recently said that they think longer/compressed context can serve as an alternative (I'm not sure of the exact wording/characterization they used) to continual/incremental learning, so context space is also going to be competing with model interaction history if you want to avoid groundhog day and continually having to tell/correct the model the same things over and over.

It seems we're now firmly in the productization phase of LLM development, as opposed to seeing much fundamental improvement (other than math olympiad etc. "benchmark" results, released to give the impression of progress). Yannic Kilcher is right, "AGI is not coming", at least not in the form of an enhanced LLM. Demis Hassabis' very recent estimate was a 50% chance of AGI by 2030 (i.e. still about five years out).

While we're waiting for AGI, it seems a better approach to needing everything in context would be to lean more heavily on tool use, perhaps more similar to how a human works - we don't memorize the entire code base (at least not in terms of complete line-by-line detail, even though we may have a pretty clear overview of a 10K LOC codebase while we're in the middle of development) but rather rely on tools like grep and ctags to locate relevant parts of source code on an as-needed basis.
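
As a minimal sketch of that as-needed tool-use idea (the function name, the searched symbol, and the return format are assumptions for illustration, not any particular agent framework):

  # Instead of stuffing the whole codebase into context, expose a
  # grep-style tool the model can call to retrieve code on demand.
  import subprocess

  def grep_tool(pattern, path=".", context_lines=3):
      # Return matching lines plus surrounding context, like a developer running grep.
      result = subprocess.run(
          ["grep", "-rn", f"-C{context_lines}", pattern, path],
          capture_output=True, text=True,
      )
      return result.stdout or "(no matches)"

  # An agent loop would invoke this whenever the model asks for a lookup, e.g.:
  print(grep_tool("def parse_config"))  # hypothetical symbol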

km144 · 4 months ago
As you alluded to at the end of your post, I'm not really convinced 20K LOC is very limiting. How many lines of code can you fit in your working mental model of a program? Certainly fewer than 20K concrete lines of text at any given time.

In your working mental model, you have a broad understanding of the domain and a broad understanding of the architecture. You summarize whole sections of the program into simpler ideas: module_a does x, module_b does y, insane file c does z, and so on. Then there is the part of the software you're actively working on, where you need more concrete context.

So as you move toward the central task, the context becomes more specific, but the vague outer context is still crucial to the task at hand. You can certainly find ways to summarize this mental model in an input to an LLM, especially with increasing context windows. But we probably need a better understanding of how to present this kind of layered context to achieve performance similar to a human brain, because the mechanism is very different.
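
A minimal sketch of what that layered presentation might look like (the module names, summary format, and helper are hypothetical, echoing the module_a/module_b example above):

  # Layered context: coarse summaries for the whole codebase,
  # full source only for the files under active work.
  from pathlib import Path

  MODULE_SUMMARIES = {  # the vague outer context, maintained separately
      "module_a": "does x: parses incoming events into domain objects",
      "module_b": "does y: persistence layer over the database",
      "file_c": "does z: legacy report generator, handle with care",
  }

  def build_context(focus_files, root="."):
      # Assemble a prompt: broad strokes first, line-level detail last.
      parts = ["Codebase overview:"]
      parts += [f"- {name}: {summary}" for name, summary in MODULE_SUMMARIES.items()]
      parts.append("Files under active work (full text):")
      for name in focus_files:
          parts.append(f"=== {name} ===\n{Path(root, name).read_text()}")
      return "\n".join(parts)

  prompt = build_context(["module_a/parser.py"])  # hypothetical path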

u/km144

Karma: 116 · Cake day: March 22, 2024