Readit News
Fargren commented on Ask HN: How can I get better at using AI for programming?    · Posted by u/lemonlime227
dayjah · 5 days ago
The Attention algo does that, it has a recency bias. Your observation is not necessarily indicative of Claude not loading CLAUDE.md.

I think you may be observing context rot? How many back and forths are you into when you notice this?

Fargren · 5 days ago
That explains why it happens, but doesn't really help with the problem. The expectation I have, as a pretty naive user, is that what's in the .md file should stay permanently in the context. It's good to understand why that isn't the case, but it's unintuitive and can lead to frustration. It's bad UX, if you ask me.

I'm sure there are workarounds, such as resetting the context, but the point is that good UX would mean such tricks aren't needed.

Fargren commented on Voyager 1 is about to reach one light-day from Earth   scienceclock.com/voyager-... · Posted by u/ashishgupta2209
noduerme · 21 days ago
Not trying to oversimplify. But suppose 95% of the probe's mass was intended to be jettisoned ahead of it on arrival by an explosive charge, and would then serve as a reflector. That might give enough time for the probe to be captured by the star's gravity...?
Fargren · 21 days ago
It seems to me that building a recording device that can survive in space, that is very light, and that won't break apart after absorbing the impact of an explosive charge strong enough to decelerate it from the speeds needed to reach Alpha Centauri is... maybe impossible.

We're talking about roughly 4.4 light years. To reach it in 20 years, that's around 1/5th of the speed of light. The forces needed to decelerate from that are pretty high.

I did a quick napkin calculation (assuming the device weighs 1kg): that's close to 6,000 kiloNewtons if it has 10 seconds to decelerate. The thrust of an F100 jet engine is around 130 kN.

I am not an aeronautics engineer, so I could be totally wrong.
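The napkin arithmetic is easy to sanity-check in a few lines. The numbers below are illustrative assumptions (a 1 kg device, a 10-second braking window), and both 0.1c and 0.2c arrival speeds are tried:

```python
# Napkin deceleration math: F = m * a, with a = v / t.
# Illustrative assumptions: 1 kg device, 10 seconds to stop.
C = 299_792_458.0        # speed of light, m/s
mass_kg = 1.0
stop_time_s = 10.0

for fraction in (0.1, 0.2):
    speed = fraction * C                                   # arrival speed, m/s
    force_kN = mass_kg * (speed / stop_time_s) / 1000.0    # F = m*a, in kN
    print(f"{fraction:.1f}c -> {force_kN:,.0f} kN")
# For scale: an F100 jet engine's thrust is roughly 130 kN.
```

Even the slower case comes out three orders of magnitude beyond a jet engine's thrust, applied to a 1 kg object.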

Fargren commented on Voyager 1 is about to reach one light-day from Earth   scienceclock.com/voyager-... · Posted by u/ashishgupta2209
noduerme · 21 days ago
Could the probe just fire off some mass when it got there?
Fargren · 21 days ago
Any mass it fires off would start with the same velocity as the probe, and would need to be accelerated by an equal velocity in the opposite direction. Being a smaller mass, it would require less fuel than decelerating the whole probe, but it's still a hard problem.

Be careful with the word "just". It often makes something hard sound simple.
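To see why "just fire off some mass" is hard: the Tsiolkovsky rocket equation gives the propellant mass ratio needed for a given velocity change. Assuming an illustrative chemical-rocket exhaust velocity of ~4.5 km/s, shedding even 0.1% of light speed requires an absurd mass ratio:

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m1)
# => required mass ratio m0/m1 = exp(delta_v / v_e)
# Illustrative assumption: chemical propellant, exhaust velocity ~4.5 km/s.
v_exhaust = 4_500.0                  # m/s
delta_v = 0.001 * 299_792_458.0      # shed just 0.1% of light speed

mass_ratio = math.exp(delta_v / v_exhaust)
print(f"initial/final mass ratio: {mass_ratio:.2e}")
```

The ratio comes out around 10^29: the "fired off" reaction mass would have to outweigh the surviving probe by a factor larger than the number of stars in the observable universe.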

Fargren commented on Court filings allege Meta downplayed risks to children and misled the public   time.com/7336204/meta-law... · Posted by u/binning
armchairhacker · 25 days ago
It protects you and your friends+family from the negative effects of using Meta platforms.
Fargren · 25 days ago
It does not. Social media platforms have had massive societal impact. From language, to social movements, to election results, social media has had effects, positive or negative, that impact the lives of even those who do not use them.
Fargren commented on Ask HN: How are Markov chains so different from tiny LLMs?    · Posted by u/JPLeRouzic
vidarh · a month ago
To make a universal Turing machine out of an LLM only requires a loop and a model that can look up a 2x3 table of operations based on the context and write operations back to the context (the smallest universal Turing machine has 2 states and 3 symbols, or the inverse).

So, yes, you can.

Once you have a (2,3) Turing machine, you can from that build a model that models any larger Turing machine - it's just a question of allowing it enough computation and enough layers.

It is not guaranteed that any specific architecture can do it efficiently, but that is entirely beside the point.
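The "loop plus table lookup" structure can be sketched directly. The rule table below is purely illustrative (it is not the actual proven universal (2,3) machine), but it shows how little machinery one step of a Turing machine needs:

```python
# A Turing machine is just a loop plus a table lookup.
# This rule table is illustrative, NOT the proven universal (2,3) machine.
from collections import defaultdict

rules = {
    # (state, symbol) -> (write, move, next_state)
    ("A", 0): (1, +1, "B"),
    ("A", 1): (2, -1, "A"),
    ("A", 2): (1, -1, "A"),
    ("B", 0): (2, -1, "A"),
    ("B", 1): (2, +1, "B"),
    ("B", 2): (0, +1, "A"),
}

tape = defaultdict(int)    # unbounded blank tape of 0s
state, head = "A", 0

for _ in range(10):        # run a few steps
    write, move, state = rules[(state, tape[head])]
    tape[head] = write
    head += move

print(state, dict(tape))
```

The claim above is that an LLM can in principle play the role of the `rules` lookup, with the context window standing in for the tape, provided something external supplies the loop.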

Fargren · a month ago
LLMs cannot loop (unless you have a counterexample?), and I'm not even sure they can do a table lookup with 100% reliability. They also have finite context, while a Turing machine has an unbounded tape.
Fargren commented on Ask HN: How are Markov chains so different from tiny LLMs?    · Posted by u/JPLeRouzic
vidarh · a month ago
No, I am not assuming anything about the structure of the human brain.

The point of talking about Turing completeness is that any universal Turing machine can emulate any other (Turing equivalence). This is fundamental to the theory of computation.

And since we can easily show that both can be rigged up in ways that make the system Turing complete, for humans to be "special", we would need to be more than Turing complete.

There is no evidence to suggest we are, and no evidence to suggest that is even possible.

Fargren · a month ago
An LLM is not a universal Turing machine, though. It's a specific family of algorithms.

You can't build an LLM that will factorize arbitrarily large numbers, even given unbounded time. But a Turing machine can.

Fargren commented on Ask HN: How are Markov chains so different from tiny LLMs?    · Posted by u/JPLeRouzic
vidarh · a month ago
> However, LLMs will not be able to represent ideas that it has not encountered before. It won't be able to come up with truly novel concepts, or even ask questions about them. Humans (some at least) have that unbounded creativity that LLMs do not.

There's absolutely no evidence to support this claim. It'd require humans to exceed the Turing computable, and we have no evidence that is possible.

Fargren · a month ago
You are making a big assumption here, which is that an LLM is the main "algorithm" the human brain uses. The human brain could well be a Turing machine that's "running" something that is not an LLM. If that's the case, the fact that humans can come up with novel concepts does not imply that LLMs can do the same.
Fargren commented on Europe is scaling back GDPR and relaxing AI laws   theverge.com/news/823750/... · Posted by u/ksec
betaby · a month ago
That cookie thing should be a browser's default.
Fargren · a month ago
That would be fine, if there was a law that forced every browser to have this setting and every company to respect the setting.
Fargren commented on Spec-Driven Development: The Waterfall Strikes Back   marmelab.com/blog/2025/11... · Posted by u/vinhnx
midnitewarrior · a month ago
I see rapid, iterative Waterfall.

The downfall of Waterfall is that there are too many unproven assumptions in too long of a design cycle. You don't get to find out where you were wrong until testing.

If you break a waterfall project into multiple, smaller, iterative Waterfall processes (a sprint-like iteration), and limit the scope of each, you start to realize some of the benefits of Agile while providing a rich context for directing LLM use during development.

Comparing this to agile is missing the point a bit. The goal isn't to replace agile, it's to find a way that brings context and structure to vibe coding to keep the LLM focused.

Fargren · a month ago
"rapid, iterative Waterfall" is a contradiction. Waterfall means only one iteration. If you change the spec after implementation has started, then it's not waterfall. You can't change the requirements, you can't iterate.

Then again, Waterfall was never a real methodology; it was a straw man description of early software development. A hyperbole created only to highlight why we should iterate.

Fargren commented on A definition of AGI   arxiv.org/abs/2510.18212... · Posted by u/pegasus
xyzzy123 · 2 months ago
For _investment_ purposes the definition of AGI is very simple. It is: "to what extent can it replace human workers?".

From this perspective, "100% AGI" is achieved when AI can do any job that happens primarily on a computer. This can be extended to humanoid robots in the obvious way.

Fargren · 2 months ago
That's not what AGI used to mean a year or two ago. That's a corruption of the term, and using that definition of AGI is the mark of a con artist, in my experience.

u/Fargren
Karma: 2654 · Cake day: October 18, 2009