zeknife commented on Tesla changes meaning of 'Full Self-Driving', gives up on promise of autonomy   electrek.co/2025/09/05/te... · Posted by u/MilnerRoute
dotancohen · 6 days ago
And yet deaf people regularly drive cars, as do blind-in-one-eye people, and I've never seen somebody leave their vehicle during active driving.
zeknife · 6 days ago
I didn't mean that a human driver needs to leave their vehicle to drive safely, I mean that we understand the world because we live in it. No amount of machine learning can give autonomous vehicles a complete enough world model to deal with novel situations, because you need to actually leave the road and interact with the world directly in order to understand it at that level.
zeknife commented on Tesla changes meaning of 'Full Self-Driving', gives up on promise of autonomy   electrek.co/2025/09/05/te... · Posted by u/MilnerRoute
formercoder · 6 days ago
Humans drive without LIDAR. Why can’t robots?
zeknife · 6 days ago
I wouldn't trust a human to drive a car if they had perfect vision but were otherwise deaf, had no proprioception and were unable to walk out of their car to observe and interact with the world.
zeknife commented on Gemini 2.5 Flash Image   developers.googleblog.com... · Posted by u/meetpateltech
mortsnort · 16 days ago
At $0.02 per image, it's prohibitively expensive for many use-cases. For comparison, the cheapest Flux model (Schnell) is $0.003 per image.
zeknife · 16 days ago
How many images do you need? What are the use-cases that need a bunch of artificial yet photoreal images produced or altered without human supervision?
zeknife commented on What happens when people don't understand how AI works   theatlantic.com/culture/a... · Posted by u/rmason
bsoles · 3 months ago
> Thinking in humans is prior to language.

I am sure philosophers must have debated this for millennia. But I can't seem to think without an inner voice (language), which makes me think that thinking may not be prior to (or possible without) language. The same thing happens to me when reading: there is an inner voice going constantly.

zeknife · 3 months ago
Thinking is subconscious when working on complex problems. Thinking is symbolic or spatial when working in relevant domains. And in my own experience, I often know what is going to come next in my internal monologues, without having to actually put words to the thoughts. That is, the thinking has already happened and the words are just narration.
zeknife commented on What happens when people don't understand how AI works   theatlantic.com/culture/a... · Posted by u/rmason
ufmace · 3 months ago
I've tended to agree with this line of argument, but on the other hand...

I expect that anybody you asked 10 years ago who was at least decently knowledgeable about tech and AI would have agreed that the Turing Test is a pretty decent way to determine if we have a "real" AI, that's actually "thinking" and is on the road to AGI etc.

Well, the current generation of LLMs blow away that Turing Test. So, what now? Were we all full of it before? Is there a new test to determine if something is "really" AI?

zeknife · 3 months ago
By what definition of the Turing test? LLMs are by no means capable of passing for human in a direct comparison under scrutiny; they don't even have enough perception to succeed in theory.
zeknife commented on Order Doesn’t Matter, But Reasoning Does   arxiv.org/abs/2502.19907... · Posted by u/spaintech
belter · 6 months ago
But John Carmack promised me AGI....
zeknife · 6 months ago
I haven't kept up with his tweets, but I got the impression he deliberately chose to not get involved in LLM hype in his own AI research?
zeknife commented on Sora is here   openai.com/index/sora-is-... · Posted by u/toomuchtodo
transformi · 9 months ago
Not impressive compared to the open-source video models out there. I anticipated some physics/VR capabilities, but it's basically just a marketing promotion to "stay in the game"...
zeknife · 9 months ago
Like with music generation models, the main thing that might make "open source" models better is most likely that they have no concerns about excluding copyrighted material from the training data, so they actually get a good starting point instead of using a dataset consisting of YouTube videos and stock footage.
zeknife commented on GitHub cuts AI deals with Google, Anthropic   bloomberg.com/news/articl... · Posted by u/jbredeche
logicchains · 10 months ago
I don't know how you can say they lack understanding of the world when they perform better than the average human on pretty much any standardised test designed to measure human intelligence. The only thing they don't understand is touch, because they're not trained on that, but they can already understand audio and video.
zeknife · 10 months ago
You said it: those tests are designed to measure human intelligence, because we know that there is a correspondence between test results and other, more general tasks - in humans. We do not know that such a correspondence exists for language models. I would actually argue that it demonstrably does not, since even an LLM that passes every IQ test you put in front of it can still trip up on trivial exceptions that wouldn't fool a child.
zeknife commented on GitHub cuts AI deals with Google, Anthropic   bloomberg.com/news/articl... · Posted by u/jbredeche
dageshi · 10 months ago
I think it was the switch from desktop search traffic being dominant to mobile traffic being dominant, that switch happened around the end of 2016.

Google used to prioritise big, comprehensive articles on subjects for desktop users, but mobile users just wanted quick answers, so that's what Google prioritised as mobile users became the biggest group.

But also, per your point, I think those smaller, simpler, less comprehensive posts are easier to fake/spam than the larger, more comprehensive posts that came before.

zeknife · 10 months ago
Ironically, I almost never see quick answers in the top results; mostly it's dragged-out pages of paragraph after paragraph with ads in between.
zeknife commented on Adaptation to high-altitude hypoxia on the Tibetan Plateau   sciencealert.com/humans-a... · Posted by u/amichail
sebgr · a year ago
Every time I see the word "delve" in an article, I can't help but assume it was written by an LLM, which this probably was?
zeknife · a year ago
I like to think I have a pretty sharp eye for LLM output, and nothing else in the article raised any alarms for me.

u/zeknife
Karma: 75 · Cake day: February 28, 2021