Readit News
aryehof commented on AI should only run as fast as we can catch up   higashi.blog/2025/12/07/a... · Posted by u/yuedongze
aryehof · 10 days ago
> “AI always thinks and learns faster than us, this is undeniable now”

No, it neither thinks nor learns. It can give an illusion of thinking, and an AI model itself learns nothing. Instead it can produce a result based on its training data and context.

I think it important that we do not ascribe human characteristics where not warranted. I also believe that understanding this can help us better utilize AI.

aryehof commented on Trillions spent and big software projects are still failing   spectrum.ieee.org/it-mana... · Posted by u/pseudolus
aryehof · 23 days ago
Failure typically comes from two directions: unknown and changing requirements, and management that relies on (often external) technical (engineering) leadership that is too often incompetent.

These projects are often characterized by very complex functional requirements, yet are undertaken by those who primarily only know (and endlessly argue about) non-functional requirements.

aryehof commented on Claude Advanced Tool Use   anthropic.com/engineering... · Posted by u/lebovic
aryehof · 24 days ago
This seems to derive from the “skills” feature: a set of “meta tools” that supports granular discovery of tools, except that whereas you write (optional) skill code yourself, here a second meta tool can do it for you, in conjunction with (optional) examples you provide.

Am I missing something else?

aryehof commented on Think in math, write in code (2019)   jmeiners.com/think-in-mat... · Posted by u/alabhyajindal
aryehof · a month ago
But many computer applications are models of systems, real or imagined. Those systems are not mathematical models. That everything is an “algorithm” is the mantra of programmers who haven’t been exposed to different types of software.
aryehof commented on The Case That A.I. Is Thinking   newyorker.com/magazine/20... · Posted by u/ascertain
mbesto · a month ago
I think the discrepancy is this:

1. We trained it on a fraction of the world's information (e.g. text and media that is explicitly online)

2. It carries all of the biases we humans have and, worse, the biases present in the information we chose to explicitly share online (which may or may not differ from the experiences humans have in everyday life)

aryehof · a month ago
I see this a lot in what LLMs know and promote in terms of software architecture.

All seem biased toward recent buzzwords and approaches. Discussions include the same hand-waving about DDD, event sourcing and hexagonal services, i.e. the current fashion. Nothing of worth apparently preceded them.

I fear that we are condemned to a future with no genuinely new progress, just a regurgitation of the current fashions and biases.

aryehof commented on Ask HN: How to deal with long vibe-coded PRs?    · Posted by u/philippta
aryehof · a month ago
This is effectively a product, not a feature (or bug). To start with, ask the submitter how you can determine whether it meets functional and non-functional requirements.
aryehof commented on Show HN: Why write code if the LLM can just do the thing? (web app experiment)   github.com/samrolken/noko... · Posted by u/samrolken
kennywinker · 2 months ago
If this were a good answer to mobility, people would prefer the bus over their car. It’s non-deterministic: when will it come? How quickly will I get there? Will I get to sit? And it’s operated by an intelligent agent (the driver).

Every reason people prefer a car or bike over the bus is a reason non-deterministic agents are a bad interface.

And that analogy works as a glimpse into the future: we’re looking at a fast-approaching world where LLMs are the interface to everything for most of us, except for the wealthy, who have access to more deterministic services or actual human agents. How long before the rich person’s car rental service is the only one with staff at the desk, and the cheaper options are all LLM-based agents? Poor people ride the bus; rich people get to drive.

aryehof · 2 months ago
Bus vs car hit home for me as a great example of non-deterministic vs deterministic.

It has always seemed to me that workflows and processes need to be deterministic, not decided by an LLM.
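
A minimal sketch of what I mean (call_llm is a hypothetical stand-in, not any particular API): the control flow — ordering, branching, routing rules — lives in plain code, and the model only fills in one bounded step.

    # Minimal sketch: deterministic workflow, LLM confined to one bounded step.

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for whatever model client you actually use.
        return "placeholder summary"

    def summarize_ticket(ticket_text: str) -> str:
        # The only non-deterministic step: ask the model for a summary.
        return call_llm(f"Summarize this support ticket in one sentence:\n{ticket_text}")

    def route_ticket(ticket: dict) -> str:
        # Deterministic control flow: routing rules are explicit code,
        # not something the model decides.
        if ticket["priority"] == "high":
            queue = "oncall"
        elif "refund" in ticket["tags"]:
            queue = "billing"
        else:
            queue = "general"
        ticket["summary"] = summarize_ticket(ticket["body"])
        return queue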

aryehof commented on OpenAI says over a million people talk to ChatGPT about suicide weekly   techcrunch.com/2025/10/27... · Posted by u/jnord
aryehof · 2 months ago
My first reaction is how do they know? Are these all people sharing their chats (willingly) with OpenAI, or is opting out of “helping improve the model” for privacy a farce?
aryehof commented on Computer science courses that don't exist, but should (2015)   prog21.dadgum.com/210.htm... · Posted by u/wonger_
aryehof · 2 months ago
CS102 Big Balls of Mud: Data buckets, functions, modules and namespaces

CS103 Methodologies: Advanced Hack at it ‘till it Works

CS104 History: Fashion, Buzzwords and Reinvention

CS105 AI Teaches Software Architecture (CS103 prerequisite)

aryehof commented on The Accountability Problem   jamesshore.com/v2/blog/20... · Posted by u/FrancoisBosun
aryehof · 2 months ago
This seems to assume that any endeavor in software is something established entirely from scratch. There are no patterns, experiences or reusable parts that can be relied on. A hack-at-it-until-it-works methodology.

Accordingly, it seems to imply that we as developers can’t be accountable for anything but effort. It’s a sad condemnation of our industry, and at odds with any (normal) commercial undertaking that has limited resources that must be allocated among competing alternatives.

Any real manager knows the basics of choosing among competing alternatives: project the cashflows and calculate the PV (present value) of each. But not for software - we’re too special.

(normal) - one that can sustain itself on a commercial basis, rather than just on injected capital or borrowed funds.
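
For what it’s worth, the arithmetic isn’t exotic. A rough sketch, with made-up cashflows and a made-up 10% discount rate:

    # Rough sketch: compare competing alternatives by present value.
    # The cashflows and the 10% discount rate are illustration values only.

    def present_value(cashflows, rate):
        # Discount each year's cashflow back to today: CF_t / (1 + r)^t
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

    alternatives = {
        "rewrite": [-500_000, 150_000, 250_000, 300_000],  # years 1..4
        "extend":  [-200_000, 120_000, 140_000, 150_000],
    }

    rate = 0.10
    for name, cashflows in alternatives.items():
        print(name, round(present_value(cashflows, rate)))

    # The alternative with the higher present value gets the budget.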

u/aryehof

Karma: 712 · Cake day: June 26, 2014
About
Email me …

hn638 [at] aryeh [dot] sent [dot] com
