Readit News
nfc commented on “Language and Image Minus Cognition”: An Interview with Leif Weatherby   jhiblog.org/2025/06/11/la... · Posted by u/Traces
dcre · 3 months ago
I don’t know what the guidelines are, but this is not helpful or accurate as a characterization of the interview. If anything, Weatherby is saying exactly what you say he gets wrong: “LLMs are not the total distribution, but they’re a far larger chunk of it than we’ve ever before been able to see or play with.” I am no anti-LLM guy but this is an embarrassing way to use them.
nfc · 3 months ago
Thank you for your reply, I may have misinterpreted what Weatherby was saying and I admit I did not spend enough time reading it. I've re-skimmed it and think you may be right.

Regarding the use of LLMs for my original comment: I still think this was a useful application for them. It started a conversation on an article that had no comments and helped at least one person (me, but hopefully others too) get a better understanding of what was said (thanks to your comment). But it's not a hill I'm willing to die on, especially after already having been wrong once in this thread :)


nfc commented on AI 2027   ai-2027.com/... · Posted by u/Tenoke
nfc · 5 months ago
Something I ponder in the context of AI alignment is how we approach agents with potentially multiple objectives. Much of the discussion seems focused on ensuring an AI pursues a single goal, which simplifies the problem, but I'm not sure how realistic that is when considering complex intelligences.

For example, human motivation often involves juggling several goals simultaneously. I might care about both my own happiness and my family's happiness. The way I navigate this isn't by picking one goal and maximizing it at the expense of the other; instead, I try to balance my efforts and find acceptable trade-offs.

I think this 'balancing act' between potentially competing objectives may be a really crucial aspect of complex agency, but I haven't seen it discussed as much in alignment circles. Maybe someone could point me to some discussions about this :)
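A minimal sketch of what this balancing act could look like as scalarization, with hypothetical utility functions (`own_happiness`, `family_happiness`, and the weight `w` are illustrative assumptions, not anything from the alignment literature):

```python
import math

# Toy illustration: an agent splits effort x in [0, 1] between two goals,
# each with diminishing returns on the effort it receives.
def own_happiness(x):
    return math.sqrt(x)        # effort spent on yourself

def family_happiness(x):
    return math.sqrt(1 - x)    # effort spent on your family

efforts = [i / 1000 for i in range(1001)]

# Maximizing a single objective pushes all effort to one goal,
# driving the other to zero.
best_single = max(efforts, key=own_happiness)

# A scalarized multi-objective agent maximizes a weighted sum,
# trading the goals off against each other.
w = 0.5  # assumed weight; choosing it is itself a value-laden decision
best_balanced = max(
    efforts, key=lambda x: w * own_happiness(x) + (1 - w) * family_happiness(x)
)

print(best_single)    # 1.0: family happiness driven to zero
print(best_balanced)  # 0.5: effort is split between the goals
```

Of course, scalarization just moves the problem into the choice of weights, which is arguably where the hard alignment question reappears.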

nfc commented on AI 2027   ai-2027.com/... · Posted by u/Tenoke
visarga · 5 months ago
The story is entertaining, but it has a big fallacy - progress is not a function of compute or model size alone. This kind of mistake is almost magical thinking. What matters most is the training set.

During the GPT-3 era there was plenty of organic text to scale into, and compute seemed to be the bottleneck. But we quickly exhausted it, and now we try other ideas - synthetic reasoning chains, or just plain synthetic text for example. But you can't do that fully in silico.

Creating new and valuable text requires exploration and validation. LLMs can ideate very well, so we are covered on that side. But validation can only be automated in math and code, not in other fields.

Real world validation thus becomes the bottleneck for progress. The world is jealously guarding its secrets and we need to spend exponentially more effort to pry them away, because the low hanging fruit has been picked long ago.

If I am right, this has implications for the speed of progress. The exponential friction of validation opposes the exponential scaling of compute. The story also says an AI could be created in secret, which runs against the validation principle: we validate faster together, and nobody can secretly out-validate humanity. It's like a blockchain; we depend on everyone else.

nfc · 5 months ago
I agree with your point about the validation bottleneck becoming dominant over raw compute and simple model scaling. However, I wonder if we're underestimating the potential headroom for sheer efficiency breakthroughs at our levels of intelligence.

Von Neumann for example was incredibly brilliant, yet his brain presumably ran on roughly the same power budget as anyone else's. I mean, did he have to eat mountains of food to fuel those thoughts? ;)

So it looks like massive gains in intelligence or capability might not require proportionally massive increases in fundamental inputs, at least up to the highest levels of intelligence a human can reach. And if that's true for the human brain, why not for other architectures of intelligence?

P.S. It's funny, I was talking about something along the lines of what you said with a friend just a few minutes before reading your comment so when I saw it I felt that I had to comment :)

nfc commented on The future of AI according to thousands of forecasters   metaculus.com/ai/... · Posted by u/ddp26
sdwr · 2 years ago
I think we're reaching a point where the Turing test is no longer useful. If you get into the nitty-gritty of it (instead of just handwaving "computer should act like person"), it's about roleplaying a fake identity. Which is a specific skill, not a general test of competence.
nfc · 2 years ago
The Turing test seems to be a product of an era when the nature and capabilities of artificial intelligence were still in the realm of the unknown. Because of that, it was difficult to conceive of a specific test that could measure its abilities. So the test ended up using human intelligence, the most advanced form of intelligence known at the time, as the benchmark for AI.

To illustrate, imagine if an extraterrestrial race created a Turing-style test, with their intelligence serving as the gold standard. Unless their cognitive processes closely mirrored ours, it's doubtful that humans would pass such an examination.

nfc commented on Yann LeCun and Andrew Ng: Why the 6-Month AI Pause Is a Bad Idea [video]   youtube.com/watch?v=BY9KV... · Posted by u/georgehill
incone123 · 2 years ago
The machine might not need to hack but could instead be given privileged access to the missile launch systems. I'm not being sarcastic when I say that War Games is one of my favourite films.
nfc · 2 years ago
I just thought of this scenario; there are probably more likely ones.

If some AI had access to the missile launch system, the best course of action for it would probably not be to launch immediately. Nowadays it is very unlikely that it would be able to repair itself, so launching immediately would ensure its own destruction (and self-destruction is probably not its goal).

If it were discovered, it could threaten humans with a launch unless they helped it reach a state in which it could repair itself (at which point humans would no longer be necessary).

nfc commented on It took me 10 years to understand entropy   cantorsparadise.com/it-to... · Posted by u/dil8
nfc · 3 years ago
I enjoyed the article but have a very minor nitpick. I didn't understand why the author added this sentence:

"However, the timescales involved in these calculation are so unreasonably large and abstract that one could wonder if these makes any sense at all."

Setting aside the fact that we could wonder about anything and everything, the author does not state what evidence we have to suspect that large enough timescales would change the laws of physics.

It could be the case, of course, and it would be great to discuss such evidence if it exists, but without further justification I feel that this sentence is an unjustified opinion in what is otherwise a very nice article that helps the reader better understand entropy.

nfc commented on Horse-riding astronaut is a milestone in AI’s journey to make sense of the world   technologyreview.com/2022... · Posted by u/nkurz
nfc · 3 years ago
“If we define understanding as human understanding, then AI systems are very far off,”

This took me down the following line of thought. If we wanted AGI, we should probably give these neural networks an overarching goal, the same way our intelligence evolved in the presence of overarching goals (survival, reproduction...). It's these less narrow goals that allowed us to evolve our "general intelligence". It's possible that by trying to construct AGI through the accumulation of narrow goals, we are taking the harder route.

At the same time, I think we should not pursue AGI in the way I'm suggesting is best: there are too many unknown risks (the paperclip problem...).

Of course, all this raises the question of what AGI is, how we would define a good overarching goal to prompt AGI, and much more...

nfc commented on Ukraine Warned over Danger of Russian Spying on Telegram   forbes.com/sites/thomasbr... · Posted by u/nfc
nfc · 4 years ago
I'm trying to help friends in Ukraine as much as possible, and advice on secure communications would be one way to do it. I know messaging-app security has been discussed on HN before, but I wanted to ask the community about it in the context of the conflict in Ukraine.

The end goal for me is to give the best advice to my friends, but I think it can also lead to the type of discussion HN is focused on.

u/nfc

Karma: 145 · Cake day: March 21, 2014
About
After a PhD in astrophysics, I moved to tech to try to have a better impact on the world. Amateur polyglot (speaking 7 languages), curious about whether it's possible to apply Bayes in a useful way in real life.

reverseWords(com dot inopinia at nestor)

https://www.linkedin.com/in/nestor-conde-86143389/
