Readit News
brokencode commented on Mark Zuckerberg freezes AI hiring amid bubble fears   telegraph.co.uk/business/... · Posted by u/pera
butlike · 3 days ago
It's an entertainment tool. Like a television or PlayStation. Only a fool would think social media is anything more.
brokencode · 3 days ago
Sure, but society is full of fools. Plenty of people say social media is the primary way they get news. Social media platforms are super spreaders of lies and propaganda.
brokencode commented on Mark Zuckerberg freezes AI hiring amid bubble fears   telegraph.co.uk/business/... · Posted by u/pera
zpeti · 3 days ago
Maybe this time the top posters on HN should stop criticizing one of the top-performing founder CEOs of the last 20 years, who built an insane business, made many calls that were called stupid at the time (WhatsApp), and made some that actually were stupid.

Like do people here really think making some bad decisions is incompetence?

If you do, your perfectionism is probably something you need to think about.

Or please reply to me with your exact, perfect predictions of how AI will play out over the next 5, 10, and 20 years, and then tell us how you would run a trillion-dollar company. Oh, and please revisit your comment on those timeframes.

brokencode · 3 days ago
I think many people just really dislike Zuckerberg as a human being and Meta as a company. Social media has seriously damaged society in many ways.

It’s not perfectionism, it’s a desire to dunk on what you don’t like whenever the opportunity arises.

brokencode commented on Why Nim?   undefined.pyfy.ch/why-nim... · Posted by u/TheWiggles
GoblinSlayer · 7 days ago
I don't quite understand what the point of a type system is when it's going to be defenestrated by dynamic typing. In JavaScript, the typeof operator can be used to check a value's type.
brokencode · 7 days ago
The type system still allows organizing and abstracting code in standardized ways.

Static type checking is nice and is certainly my preference, but dynamic type checking doesn’t mean no types. It means the types are checked at runtime.
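
To make that distinction concrete, here is a minimal sketch in TypeScript (my own illustration, not from the thread; the function names are made up). The first function leans on typeof checks that only execute while the program runs; the second carries an annotation that the compiler verifies before anything runs at all.

    // Dynamic checking: the type test happens at runtime, plain-JavaScript style.
    function describeDynamic(value: unknown): string {
      if (typeof value === "number") {
        return `a number: ${value.toFixed(2)}`;      // branch chosen while running
      }
      if (typeof value === "string") {
        return `a string of length ${value.length}`;
      }
      return "something else";                       // mistakes surface only when this path runs
    }

    // Static checking: the annotation is verified before the program ever runs.
    function describeStatic(value: number): string {
      return `a number: ${value.toFixed(2)}`;
    }

    describeDynamic("hello");     // fine; checked during execution
    describeStatic(42);           // fine; checked by the compiler
    // describeStatic("hello");   // rejected at compile time, never reaches runtime

Either way the types exist and constrain the code; the difference is only when the checking happens.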

brokencode commented on Dyna – Logic Programming for Machine Learning   dyna.org/... · Posted by u/matteodelabre
refset · 8 days ago
> Dyna3 — A new implementation of the Dyna programming language written in Clojure

There are some epic-looking Clojure namespaces here, e.g. this JIT compiler: https://github.com/argolab/dyna3/blob/master/src/clojure/dyn...

brokencode · 8 days ago
I'd be fascinated to hear about the author's experience using Clojure for something as complex as a compiler. Was the lack of types an issue? Or was the simplicity and flexibility of the language worth the tradeoff?
brokencode commented on Toothpaste made with keratin may protect and repair damaged teeth: study   kcl.ac.uk/news/toothpaste... · Posted by u/sohkamyung
brokencode · 8 days ago
That is the most made-up-sounding fact I’ve heard in a long time.
brokencode commented on I let LLMs write an Elixir NIF in C; it mostly worked   overbring.com/blog/2025-0... · Posted by u/overbring_labs
flax · 9 days ago
"it mostly worked" is just a more nuanced way of saying "it didn't work". Apparently the author did eventually get something working, but it is false to say that the LLMs produced a working project.
brokencode · 9 days ago
Ok. But what are you even reacting to? Who is saying that it produced a working product?

As you said, the very title of the article acknowledged that it didn’t produce a working product.

This is just outrage for the sake of outrage.

brokencode commented on Man develops rare condition after ChatGPT query over stopping eating salt   theguardian.com/technolog... · Posted by u/vinni2
jleyank · 11 days ago
How about “man gets bromine poisoning after taking ChatGPT medical advice”?
brokencode · 11 days ago
We don’t know whether ChatGPT gave medical advice. Only that it suggested using sodium bromide instead of sodium chloride. For what purpose or in what context, we don’t know. It may even have recommended against using it and the man misunderstood.
brokencode commented on Outside of the top stocks, S&P 500 forward profits haven't grown in 3 years   insight-public.sgmarkets.... · Posted by u/Terretta
nemomarx · 12 days ago
A month old, but https://www.wheresyoured.at/the-haters-gui/ made this argument: that a lot of that profit was just recirculating between these companies.
brokencode · 12 days ago
What do you mean recirculation? Isn’t the money flowing mostly in one direction (to Nvidia)?
brokencode commented on GPT-5: Overdue, overhyped and underwhelming. And that's not the worst of it   garymarcus.substack.com/p... · Posted by u/kgwgk
arolihas · 13 days ago
So when I tell you I like vanilla ice cream, I am just hallucinating and calling it a memory? And when ChatGPT says they like vanilla ice cream, they are doing the same thing as me? Do I need to prove to you that they are different? Is it really baseless of me to insist otherwise? I have a body, millions of different receptors, a mouth with taste buds, a consciousness, a mind, a brain that interacts with the world directly, and it's all just words on a screen to you, interchangeable with a word pattern matcher?
brokencode · 13 days ago
I’m not calling what you’re doing a hallucination. I’m saying that what an LLM does is in fact memory.

But it’s a memory based on what it’s trained on. Of course it doesn’t have a favorite ice cream. It’s not trained to have one. But that doesn’t mean it has no memory.

My argument is that humans have fallible memories too. Sometimes you say something wrong or that you don’t really mean. Then you might or might not notice you made a mistake.

The part LLMs don’t do great at is noticing the mistake. They have no filter and say whatever they’re thinking. They don’t run through thoughts in their head first and see if they make any sense.

Of course, that’s part of what companies are trying to fix with reasoning models: giving them the ability to think before they speak.

brokencode commented on GPT-5: Overdue, overhyped and underwhelming. And that's not the worst of it   garymarcus.substack.com/p... · Posted by u/kgwgk
arolihas · 14 days ago
Ok, that is memory. I am talking about hallucination vs human or even animal intent in an embodied meaningful experience.
brokencode · 14 days ago
All you’re doing is calling the same thing hallucination when an LLM does it and memory when a human does it. You have provided no basis that the two are actually different.

Humans are better at noticing when their recollections are incorrect. But LLMs are quickly improving.

u/brokencode

Karma: 2463 · Cake day: March 26, 2015