Like, do people here really think making some bad decisions is incompetence?
If you do, your perfectionism is probably something you need to think about.
Or reply with your exact, perfect predictions of how AI will play out over the next 5, 10, and 20 years, and then tell us how you would run a trillion-dollar company. Oh, and please revisit your comment on those timeframes.
It’s not perfectionism; it’s a desire to dunk on what you don’t like whenever the opportunity arises.
Static type checking is nice and is certainly my preference, but dynamic type checking doesn’t mean no types. It means the types are checked at runtime.
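A quick sketch of the distinction, in Clojure since that’s what the thread is about (add-one is just a made-up example, not anything from the linked repo):

    ;; The value still has a type; the check just happens when the code runs,
    ;; not at compile time.
    (defn add-one [x]
      (+ x 1))

    (add-one 41)    ;; => 42
    (add-one "41")  ;; throws at runtime: ClassCastException,
                    ;; java.lang.String cannot be cast to java.lang.Number

A static checker would reject the second call before the program ever ran; a dynamic language waits until the bad value actually reaches the +.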
There are some epic-looking Clojure namespaces here, e.g. this JIT compiler https://github.com/argolab/dyna3/blob/master/src/clojure/dyn...
As you said, the very title of the article acknowledged that it didn’t produce a working product.
This is just outrage for the sake of outrage.
But it’s a memory based on what it’s trained on. Of course it doesn’t have a favorite ice cream. It’s not trained to have one. But that doesn’t mean it has no memory.
My argument is that humans have fallible memories too. Sometimes you say something wrong, or something you don’t really mean. Then you might or might not notice that you made a mistake.
The part LLMs don’t do great at is noticing the mistake. They have no filter and say whatever they’re thinking. They don’t run their thoughts through their head first to see whether they make any sense.
Of course, that’s part of what companies are trying to fix with reasoning models: giving them the ability to think before they speak.
Humans are better at noticing when their recollections are incorrect. But LLMs are quickly improving.