philipswood commented on Coding agents have replaced every framework I used   blog.alaindichiappari.dev... · Posted by u/alainrk
hakunin · 5 days ago
I don't think I'm being pedantic or narrow. Cosmic rays, power spikes, and falling cows can change the course of deterministic software. I'm saying that your "compiler" either has intentionally designed randomness (or "creativity") in it, or it doesn't. Not sure why we're acting like these are more or less deterministic. They are either deterministic or not inside normal operation of a computer.
philipswood · 4 days ago
To be clear: I'm not engaging with your main point about whether LLMs are usable in software engineering or not.

I'm specifically addressing your use of the concept of determinism.

An LLM is a set of matrix multiplications and function applications. The only potentially non-deterministic step is selecting the next token from the final output distribution, and that can be done deterministically.

By your strict use of the definition they absolutely can be deterministic.
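To make that concrete, a minimal sketch (plain Python, with a hypothetical logits list standing in for the model's final output): greedy decoding takes the argmax instead of sampling, and the whole pipeline becomes a pure function of its inputs.

    def greedy_next_token(logits):
        # Deterministic decoding: always pick the highest-scoring token.
        # Randomness only enters if you sample from the distribution
        # instead (e.g. with temperature > 0).
        return max(range(len(logits)), key=lambda i: logits[i])

    print(greedy_next_token([0.1, 2.3, -0.5]))  # always prints 1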

But that is not actually interesting for the point at hand. The real point has to do with reproducibility, understandability, and tolerances.

3blue1brown has a really nice set of videos showing how the LLM machinery fits together.

philipswood commented on Coding agents have replaced every framework I used   blog.alaindichiappari.dev... · Posted by u/alainrk
hakunin · 5 days ago
It’s not _more_ deterministic. It’s deterministic, period. The LLMs we use today are simply not.
philipswood · 5 days ago
Build systems may be deterministic in the narrow sense you use, but significant extra effort is required to make them reproducible.

Engineering in the broader sense often deals with managing the outputs of variable systems to get known good outcomes to acceptable tolerances.

Edit: added second paragraph

philipswood commented on 1 kilobyte is precisely 1000 bytes?   waspdev.com/articles/2026... · Posted by u/surprisetalk
waffletower · 9 days ago
The author decidedly has expert syndrome -- they deny both the history and rationale behind memory-unit nomenclature. Memory measurements evolved from the binary organizational patterns used in computing architectures. While the decimal normalization of memory units discussed might please a proud French pedant, align more closely with the metric system, and benefit laypeople, it fails to account for how memory is partitioned in historic and modern computing.
philipswood · 9 days ago
Yes, tomatoes ARE actually a fruit.

But really!?

I'll keep calling it in nice round powers of two, thank you very much.
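For anyone counting, a quick sketch of the gap between the two conventions (plain Python, illustrative only):

    KB = 10**3   # the article's "kilobyte": 1000 bytes
    KiB = 2**10  # the power-of-two unit ("kibibyte"): 1024 bytes
    print(KiB - KB)  # 24 bytes of disagreement per kilobyte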

philipswood commented on AI will not solve world hunger   thomasrigby.com/posts/ai-... · Posted by u/speckx
philipswood · 14 days ago
Strangely enough, even mankind producing more food than it eats isn't enough to stop world hunger either.

(A fact that he does mention)

philipswood commented on Everyone is wrong about AI and Software Engineering   deadneurons.substack.com/... · Posted by u/nr378
philipswood · 18 days ago
> Consider what happens when you build software professionally. You talk to stakeholders who do not know what they want and cannot articulate their requirements precisely. You decompose vague problem statements into testable specifications. You make tradeoffs between latency and consistency, between flexibility and simplicity, between building and buying. You model domains deeply enough to know which edge cases will actually occur and which are theoretical. You design verification strategies that cover the behaviour space. You maintain systems over years as requirements shift.

I'm not sure why he thinks current LLM technologies (with better training) won't be able to do more and more of this as time passes.

philipswood commented on Seeing Geologic Time: Exponential Browser Testing   tjid3.org/paper/time... · Posted by u/TimothyMJones
lsh0 · a month ago
> This architecture was subjected to a definitive stress test at 100,000 years using a single-file artifact.

what does this even mean?

philipswood · a month ago
Yup, I also couldn't figure it out after scrolling and skimming two or three pages.

Some understandable short sentence or paragraph early on needs to answer the main question the title raises.

philipswood commented on I used Lego to design a farm for people who are blind – like me   bbc.co.uk/news/articles/c... · Posted by u/ColinWright
UltraSane · a month ago
Vision is absolutely a superpower if everyone else is blind. Just think how far you can shoot something with a rifle and scope. Guns are useless to blind people. A person who can see has an enormous advantage over a blind person in a fight. Try to imagine a military where everyone is blind fighting against another where everyone can see.
philipswood · a month ago
In a blind culture there probably are no guns at all - so your hypothetical sighted-person-amongst-the-blind would need to be able to make his own.

Then again, just throwing rocks might be pretty effective.

philipswood commented on Two AI Agents Walk into a Room   nibzard.com/demig... · Posted by u/nkko
philipswood · a month ago
Um...

> This experiment was inspired by @swyx’s tweet about Ted Chiang’s short story “Understand” (1991). The story imagines a superintelligent AI’s inner experience—its reasoning, self-awareness, and evolution. After reading it and following the Hacker News discussion, ...

Umm... I <3 love <3 Understand by Ted Chiang, but the story is about superintelligent *humans*.

Like Tatja Grimm's World or the movie Limitless.

PS: Referenced tweet for the interested: https://x.com/swyx/status/2006976415451996358


philipswood commented on     · Posted by u/smafarin
PaulHoule · a month ago
How about the boss who uses JIRA to sabotage his employees and torpedo his company?
philipswood · a month ago
I think he's trying to write fiction, not non-fiction! :p
philipswood commented on LLMs will never be alive or intelligent   hatwd.com/p/llms-will-nev... · Posted by u/hatwd
philipswood · a month ago
I'm glad the author spent some time thinking about this, clarifying his thoughts and writing it down, but I don't think he's written anything much worth reading yet.

He's mostly in very-confident-but-not-even-wrong kind of territory here.

One comment on his note:

> As an example, let’s say an LLM is correct 95% of the time (0.95) in predicting the “right” tokens to drive tools that power an “agent” to accomplish what you’ve asked of it. Each step the agent has to take therefore has a probability of being 95% correct. For a task that takes 2 steps, that’s a probability of 0.95^2 = 0.9025 (90.25%) that the agent will get the task right. For a task that takes 30 steps, we get 0.95^30 = 0.2146 (21.46%). Even if the LLMs were right 99% of the time, a 30-step task would only have a probability of about 74% of having been done correctly.

The main point, that errors can accumulate across sequential steps and that this needs to be handled, is valid and pertinent, but the model used to "calculate" this is quite wrong: steps don't fail probabilistically independently.

Given that actions can depend on the outcomes of previous steps, and given that we only care about final outcomes rather than intermediate failures, errors can be corrected. Thus even steps that "fail" can still lead to success.

(This is not a Bernoulli process.)
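A toy Monte Carlo sketch of the difference (plain Python, using the post's numbers; it assumes an agent can detect a failed step and retry it, which is exactly what the independence assumption rules out):

    import random

    random.seed(0)
    P_STEP, STEPS, TRIALS = 0.95, 30, 100_000  # per-step success rate, steps, runs

    def task_succeeds(retries):
        # A step only dooms the task if every attempt at it fails.
        for _ in range(STEPS):
            if not any(random.random() < P_STEP for _ in range(1 + retries)):
                return False
        return True

    for r in (0, 2):
        rate = sum(task_succeeds(r) for _ in range(TRIALS)) / TRIALS
        print(f"{r} retries per step: {rate:.3f}")
    # 0 retries matches the post: ~0.95**30 = 0.215
    # 2 retries per step: ~(1 - 0.05**3)**30 = 0.996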

I think he's referencing some nice material, and he's starting in a good direction by defining agency as goal-directed behaviour, but otherwise his confidence far outstrips the firmness of his conceptual foundations and the clarity of his deductions.
