Readit News
jibal commented on The oldest unopened bottle of wine in the world   openculture.com/2025/08/t... · Posted by u/bookofjoe
j1elo · 15 hours ago
Funny that we can know what the center of the Sun is made of, but who knows what is inside that bottle! :)
jibal · 13 hours ago
Unlike a bottle of wine, the Sun is an electromagnetic energy source. Without accessing the wine, its chemical composition is unknown. Consider medical diagnostics like MRIs and CT scans ... they detect density and shape, but for a biopsy you need tissue.
jibal commented on The oldest unopened bottle of wine in the world   openculture.com/2025/08/t... · Posted by u/bookofjoe
jibal · 13 hours ago
The best wine I ever tasted was from a bottle of Montrachet fetched from the cellar of friends of a new girlfriend, saved for a special occasion, which apparently was them meeting me; that added a nice glow to it.
jibal commented on Libre – An anonymous social experiment without likes, followers, or ads   libreantisocial.com... · Posted by u/rododecba
atoav · 21 hours ago
And I saw a ton of racist, genocidal, sexualized, and otherwise unhinged content that the worst people would think to call "freedom".

So again, tell me what you're unfree to write there?

jibal · 14 hours ago
I already said. And again, I'm not seeking that sort of freedom ... the one sort of freedom the site offers is freedom from moral consequence.
jibal commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
jibal · a day ago
How do we know when a newborn has achieved general intelligence? We don't need a definition amenable to proof.
jibal · 14 hours ago
P.S. The response is just an evasion.
jibal commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
germandiago · a day ago
Well, you should also show proof that it is possible, so it would be a draw.

I really think it is not possible to get that from a machine. You can improve and do much fancier things than now.

But AGI would be something entirely different. It is a system that can do everything better than a human, including creativity, which I believe to be exclusively human as of now.

It can combine, simulate, and reason. But think outside the box? I doubt it. That is different from being able to derive ideas from which a human would create. For that it can be useful, but that would not be AGI.

jibal · 14 hours ago
The burden of proof is on the person who makes a claim, especially an absolute existential claim like that. You have failed the burden of proof and of intellectual honesty. Over and out.
jibal commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
mattfrommars · a day ago
I don't know about you guys, but Sam Altman has said they have achieved AGI within OpenAI. That's big.
jibal · a day ago
How is it "big" that Altman told one of his many lies? He now says that AGI "is not a useful term".
jibal commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
signa11 · a day ago
Won't somebody please think about Mr. Gödel and the Incompleteness Theorem?
jibal · a day ago
They aren't relevant. Even if Penrose and Lucas were right (they aren't), a computational system can solve the vast majority of the problems we would want solved.
jibal commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
root_axis · a day ago
The suggested requirements are not engineering problems. Conceiving of a model architecture that can represent all the systems described in the blog is a monumental task of computer science research.
jibal · a day ago
It's software engineering.
jibal commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
levitate · a day ago
"AGI needs to update beliefs when contradicted by new evidence" is a great idea, however, the article's approach of building better memory databases (basically fancier RAG) doesn't seem enable this. Beliefs and facts are built into LLMs at a very low layer during training. I wonder how they think they can force an LLM to pull from the memory bank instead of the training data.
jibal · a day ago
LLMs are not the proposed solution.

(Also, LLMs don't have beliefs or other mental states. As for facts, it's trivially easy to get an LLM to say that it was previously wrong ... but multiple contradictory claims cannot all be facts.)

jibal commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
SalmoShalazar · a day ago
The foregone conclusion that LLMs are the key or even a major step towards AGI is frustrating. They are not, and we are fooling ourselves. They are incredible knowledge stores and statistical machines, but general intelligence is far more than these attributes.
jibal · a day ago
Right ... as the article lays out.

u/jibal

Karma: 264 · Cake day: November 17, 2010