Readit News
taylorlunt commented on Show HN: Why write code if the LLM can just do the thing? (web app experiment)   github.com/samrolken/noko... · Posted by u/samrolken
taylorlunt · 3 months ago
This reminds me of the recent Claude Imagine, which flew under most people's radar, but let you create web interfaces of any kind on the fly. No JS code was generated; instead, any time the user clicked a button, the AI itself would update the page accordingly. It was also slow and terrible, but a fun idea.
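For context, the pattern described above (no generated JS; the model re-renders the page on every interaction) is roughly the loop below. This is a minimal sketch, assuming a hypothetical `call_llm()` helper standing in for whatever LLM API is used; it is not Claude Imagine's actual implementation.

```python
# Illustrative only: every UI event is sent to the model, which returns fresh HTML.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return "<p>(model-generated HTML would appear here)</p>"

def handle_event(current_html: str, event: str) -> str:
    # Ask the model to re-render the whole page in response to one user action.
    prompt = (
        "You are rendering a web UI. Here is the current page HTML:\n"
        f"{current_html}\n\n"
        f"The user just performed this action: {event}\n"
        "Return the full updated HTML for the page, nothing else."
    )
    return call_llm(prompt)

# Each click replaces the page wholesale -- which is why the approach is slow.
page = "<button id='add'>Add item</button><ul></ul>"
page = handle_event(page, "clicked #add")
```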
taylorlunt commented on Vibe engineering   simonwillison.net/2025/Oc... · Posted by u/janpio
SeanAnderson · 4 months ago
They're so nice for prototyping ideas without becoming attached to the code due to sunk cost. I was playing around with generating intelligent diffs for changelogs for a game. I wasn't sure what approach to highlighting changes I wanted to take without being able to see the results.

Prior to vibe-coding, it would've been an arduous enough task that I would've done one implementation, looked at the time it took me and the output, and decided it was probably good enough. With vibe-coding, I was able to prototype three different approaches, each requiring some heavy lifting I really didn't want to logic out myself, and get a feel for whether any of the results were more compelling than the others. Then I felt fine throwing away a couple of approaches, because I had only spent a handful of minutes getting them working rather than a couple of hours.

taylorlunt · 4 months ago
I agree, prototyping seems like a great use case.
taylorlunt commented on Vibe engineering   simonwillison.net/2025/Oc... · Posted by u/janpio
taylorlunt · 4 months ago
These seem like a lot of great ways to work around the limitations of LLMs. But I'm curious what people here think. Do any career software engineers here see more than a 10% boost to their coding productivity with LLMs?

I see how, if you can't really code or you're new to a domain, it can make a huge difference getting you started. But if you know what you're doing, I find you hit a wall pretty quickly trying to get it to actually do things. Sometimes things go smoothly for a while, but you end up having to micromanage the agent's output too much to be worth the bother, or sacrifice code quality.

taylorlunt commented on Why do LLMs freak out over the seahorse emoji?   vgel.me/posts/seahorse/... · Posted by u/nyxt
Gigachad · 4 months ago
The fact that it's looking back and getting confused about what it just wrote is something I've never seen in LLMs before. I tried this on Gemma3 and it didn't get confused like this. It just said yes, there is one, and then sent a horse emoji.
taylorlunt · 4 months ago
I have a pet theory that LLMs being confused about what they just wrote is why they use so many em dashes. It's a good way to conceptually pivot at any point -- or not.

u/taylorlunt
Karma: 131 · Cake day: April 6, 2016