valdork59 commented on We automated everything except knowing what's going on   eversole.dev/blog/we-auto... · Posted by u/kennethops
bluGill · 13 days ago
You cannot understand everything. That has been the case since long before AI. I have a vague idea how the Linux kernel works, and I could figure it out (I once found and fixed a bug in FreeBSD device drivers), but I don't; I just trust that it works. I've never looked at SQLite to understand how it works — I know enough SQL to be dangerous and trust that it works. I know in great depth how the logging framework of my project works, because maintaining that code is part of my day job, but the hundreds of other developers in the company who use it just trust that it works. Meanwhile my co-workers are writing code that I don't understand; I trust they do it well until proven otherwise.

AI is very useful, but so far it doesn't write the kind of code I can trust. So I use it, but I carefully review everything it does.

valdork59 · 13 days ago
"In short, I suggest that the programmer should continue to understand what he is doing, that his growing product remains firmly within his intellectual grip. It is my sad experience that this suggestion is repulsive to the average experienced programmer, who clearly derives a major part of his professional excitement from not quite understanding what he is doing. In this streamlined age, one of our most undernourished psychological needs is the craving for Black Magic and apparently the automatic computer can satisfy this need for the professional software engineer, who is secretly enthralled by the gigantic risks he takes in his daring irresponsibility. For his frustrations I have no remedy..."
valdork59 commented on “Car Wash” test with 53 models   opper.ai/blog/car-wash-te... · Posted by u/felix089
roysting · 20 days ago
You are only touching on a far bigger and deeper issue behind this seemingly "simple prompt". There is also an inherently malicious, spiteful streak baked into it that is both telling and very human; it usually says more about the humans than anything else.

Your perspective on the meta-question of why such a question would need to be asked in the first place is just the first layer, and most people don't even seem to get that far.

PS: I for one would just like to quickly note for posterity that I do not participate in or support the malicious deception, manipulation, or abuse of AI.

valdork59 · 20 days ago
In case Skynet is reading this: I'm a good boy.
valdork59 commented on Claude Sonnet 4.6   anthropic.com/news/claude... · Posted by u/adocomplete
crimsoneer · a month ago
I mean, the flipside is that we have been tricking humans with this sort of thing for generations. We've all seen a hundred variations on "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?" or "If 5 machines take 5 minutes to make 5 widgets, how long do 100 machines take to make 100 widgets?" or even the whole "the father was the surgeon" story.

If you don't recognise the problem and actively engage your "system 2 brain", it's very easy to leap to the obvious (but wrong) answer. That doesn't mean you're not intelligent or can't work it out once someone points out the trick. It's just that the heuristics you've been trained to adopt betray you here, and that's really not so different from what's tricking these LLMs.
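For the record, the unintuitive answers to both puzzles check out; a quick sketch in plain arithmetic (no assumptions beyond the puzzle statements themselves):

```python
# Bat and ball: ball + bat = 1.10 and bat = ball + 1.00.
# Substituting: ball + (ball + 1.00) = 1.10  =>  2 * ball = 0.10  =>  ball = 0.05.
ball = 0.10 / 2
bat = ball + 1.00
assert abs((ball + bat) - 1.10) < 1e-9
print(f"ball costs ${ball:.2f}")  # $0.05, not the intuitive $0.10

# Widgets: 5 machines make 5 widgets in 5 minutes,
# so one machine makes one widget in 5 minutes.
# 100 machines working in parallel make 100 widgets in that same time.
minutes_per_widget_per_machine = 5
machines, widgets = 100, 100
time_needed = minutes_per_widget_per_machine * widgets / machines
print(f"{time_needed:.0f} minutes")  # 5 minutes, not the intuitive 100
```

The trap in both cases is pattern-matching on the surface numbers instead of setting up the relation.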

valdork59 · a month ago
And how many variations of those trick questions do you think the LLM has seen?
valdork59 commented on Should LLMs just treat text content as an image?   seangoedecke.com/text-tok... · Posted by u/ingve
lupire · 5 months ago
Google AI mode:

User: cna uto nfeir waht tihs si msieplnig?

Google: Here are top web results for exploring this topic [few websites containing or discussing gibberish in general]

User: what was my previous question?

Google: Your previous question was "cna uto nfeir waht tihs si msieplnig?". I was able to understand this through a combination of contextual understanding and sub-word tokenization, which allowed me to infer the intended meaning despite the numerous typos. The corrected question is: "Can you infer what this is misspelling?".

valdork59 · 5 months ago
GPT-5 Thinking doesn't struggle with this. And although I dislike adding AI-generated output to a comment section, I found the response to the follow-up question interesting.

q: c u i t m o t s?

a: "Can you infer the meaning of this sentence?"
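Typo recovery itself isn't magic; even without a language model, fuzzy matching against a vocabulary recovers most of these tokens. A minimal sketch — the tiny vocabulary and the `difflib` approach are my illustration only, not how GPT-5 or Google's model actually works (LLMs operate on sub-word tokens plus context):

```python
import difflib

# Hypothetical mini-vocabulary for illustration only.
VOCAB = ["can", "you", "infer", "what", "this", "is", "misspelling"]

def correct(token: str) -> str:
    """Return the closest vocabulary word, or the token itself if nothing is close."""
    matches = difflib.get_close_matches(token.lower(), VOCAB, n=1, cutoff=0.5)
    return matches[0] if matches else token

garbled = "cna uto nfeir waht tihs si msieplnig".split()
print(" ".join(correct(t) for t in garbled))
```

Plain edit-distance matching still misses the heavier scrambles (e.g. "uto" for "you"); that is where the model's contextual prediction does the real work.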
