Readit News
Vanit commented on     · Posted by u/dxs
Vanit · 3 months ago
Is this written by AI?

Also pretty funny to compare to peanut butter & jelly; it goes to show American self-centredness, and also the same bubble that surrounds AI tooling.

Vanit commented on Is It JavaScript?   blog.jim-nielsen.com/2025... · Posted by u/todsacerdoti
Vanit · 3 months ago
My favourite one was debugging a crash in an Electron app deployed to iOS. It turned out that throwing an exception from a pointer event callback (deep in our app's code) was bubbling up into the device's kernel code.
Vanit commented on The problem with "vibe coding"   dylanbeattie.net/2025/04/... · Posted by u/jmmv
Vanit · 5 months ago
I agree with the article, but that's not how the vibe coders see themselves. From their perspective they can't see the gap between programming and product, and in my experience are pretty hostile to feedback from real software engineers.
Vanit commented on Recent AI model progress feels mostly like bullshit   lesswrong.com/posts/4mvph... · Posted by u/paulpauper
fnordpiglet · 5 months ago
This is less an LLM thing than an information retrieval question. If you choose a model and tell it to “Search,” you find citation-based analysis that discusses that he indeed had problems with alcohol. I do find it interesting it quibbles whether he was an alcoholic or not - it seems pretty clear from the rest that he was - but regardless.

This is indicative of something crucial when placing LLMs into a toolkit. They are not omniscient, nor are they deductive reasoning tools. Information retrieval systems are excellent at information retrieval and should be used for information retrieval. Solvers are excellent at solving deductive problems. Use them.

The better they get at these tasks alone is cool, but is IMO a parlor trick, since we have nearly optimal or actually optimal techniques that don’t need an LLM. The LLM should use those tools. So, click search next time you have an information retrieval question. https://chatgpt.com/share/67f2dac0-3478-8000-9055-2ae5347037...
Vanit · 5 months ago
I realise your answer wasn't assertive, but if I heard this from someone actively defending AI it would be a cop-out. If the selling point is that you can ask these AIs anything, then one can't retroactively go "oh, but not that" when a particular query doesn't pan out.
Vanit commented on     · Posted by u/ingve
Vanit · 5 months ago
Ah, somewhere you can find all those terrible interviewees in one place!
Vanit commented on I Like and Use Global Variables   codestyleandtaste.com/i-l... · Posted by u/levodelellis
Vanit · 7 months ago
You may as well just use a singleton pattern if you're going to do this, and at least that's easier to maintain if your use cases change.
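For illustration (a minimal Python sketch, not from the original thread): the singleton gives you the same shared state as a global, but behind one access point you can later refactor or swap out if your use cases change.

```python
class Config:
    """Singleton holding app-wide settings (stands in for a bag of globals)."""
    _instance = None

    def __new__(cls):
        # First call creates the single instance; later calls return it.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}
        return cls._instance


a = Config()
b = Config()
a.settings["debug"] = True
# Both names refer to the same object, so state is shared like a global,
# but every access goes through one controlled construction point.
assert a is b
assert b.settings["debug"] is True
```

The point is not that singletons are good, but that the single access point is what makes a later migration (e.g. to dependency injection) tractable.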
Vanit commented on Building HTML in Go   templ.guide/... · Posted by u/martin360
Vanit · 9 months ago
I read through half the docs and couldn't get a definitive answer on whether nested custom components (à la React) were even possible.
Vanit commented on Message order in Matrix: right now, we are deliberately inconsistent   artificialworlds.net/blog... · Posted by u/whereistimbo
Vanit · 9 months ago
I'm throwing some shade here, but this reeks of backend engineers not caring about UX.
Vanit commented on World Labs: Generate 3D worlds from a single image   worldlabs.ai/blog... · Posted by u/dmarcos
Vanit · 9 months ago
I'm keen to drop in a few PSX-era Final Fantasy backgrounds to see what it does!
Vanit commented on NotebookLM's automatically generated podcasts are surprisingly effective   simonwillison.net/2024/Se... · Posted by u/simonw
GaggiX · a year ago
If it was vomit, why did you spend an hour on it? People complain about 2 minutes of audio sometimes, I cannot imagine a full hour of an unknown podcast, it must have been quite interesting.
Vanit · a year ago
I read some of your other replies and I can't quite get a read on your line of reasoning.

The issue is that we would give less attention to these things if it weren't for the social credit the humans gave the vomit. So we engage in good faith, and it turns out it was effectively a prank, and we have no choice but to value requests from those people less now, because it was clear they didn't care about our response.

u/Vanit

Karma: 782 · Cake day: May 28, 2014