layer8 commented on AI makes the easy part easier and the hard part harder   blundergoat.com/articles/... · Posted by u/weaksauce
crazygringo · 8 hours ago
That doesn't make any sense to me.

When the code is written, it's all laid out nicely for the reader to understand quickly and verify. Everything is pre-organized, just for you the reader.

But in order to write the code, you might have to try 4 different top-level approaches until you figure out the one that works, try integrating with a function from 3 different packages until you find the one that works properly, hunt down documentation on another function you have to integrate with, and make a bunch of mistakes that you need to debug until it produces the correct result across unit test coverage.

There's so much time spent on false starts and plumbing and dead ends and looking up documentation and debugging when you code. In contrast, when you read code that already has passing tests... you skip all that stuff. You just ensure it does what it claims and is well-written and look for logic or engineering errors or missing tests or questionable judgment. Which is just so, so much faster.

layer8 · 8 hours ago
> But in order to write the code, you might have to try 4 different top-level approaches until you figure out the one that works, try integrating with a function from 3 different packages until you find the one that works properly

If you haven’t spent the time to try the different approaches yourself, tried the different packages, etc., you can’t really judge whether the code you’re reading is the appropriate thing. It may look superficially plausible and pass some existing tests, but you haven’t deeply thought it through, and you can’t judge how much of the relevant surface area the tests are actually covering. The devil tends to be in the details, and you have to work with the code and with the libraries for a while to gain familiarity and get a feeling for them. The false starts and dead ends, the reading of documentation, those teach you what is important; without them you can only guess. Without having explored the territory, it’s difficult to tell if the place you’ve been teleported to is really the one you want to be in.

layer8 commented on We mourn our craft   nolanlawson.com/2026/02/0... · Posted by u/ColinWright
orangecat · 19 hours ago
That’s like saying that the human brain is based on simple physical field equations, and therefore its behavior is easy to understand.

Right, which is the point: LLMs are much more like human coworkers than compilers in terms of how you interact with them. Nobody would say that there's no point to working with other people because you can't predict their behavior exactly.

layer8 · 19 hours ago
This thread is about what software developers like. It’s common knowledge that many programmers like working with computers because that’s different in specific ways from working with people. So saying that LLMs are just like people doesn’t help here.
layer8 commented on We mourn our craft   nolanlawson.com/2026/02/0... · Posted by u/ColinWright
cat_plus_plus · 20 hours ago
I have no idea what everyone is talking about. LLMs are based on relatively simple math; inference is much easier to learn and customize than, say, Android APIs. Once you do, you can apply familiar programming-style logic to messy concepts like language and images. Give your model a JSON schema like "warp_factor": Integer if you don't want chatter; that's way better than the Star Trek computer could do. Or have it write you a simple domain-specific library on top of the Android API that you can then program from memory, like old-style BASIC, rather than having to run to Stack Overflow for every new task.
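As a minimal sketch of that schema idea (call_model() below is a hypothetical stand-in for whatever inference client you use, and validation is done with the jsonschema package):

    import json
    from jsonschema import validate  # pip install jsonschema

    # Schema for the structured reply we want instead of free-form chatter.
    SCHEMA = {
        "type": "object",
        "properties": {"warp_factor": {"type": "integer"}},
        "required": ["warp_factor"],
        "additionalProperties": False,
    }

    def call_model(prompt: str) -> str:
        # Stand-in for a real LLM call; returns a canned reply so the example runs end to end.
        return '{"warp_factor": 7}'

    def get_warp_factor(command: str) -> int:
        # Ask for JSON only, then validate instead of trusting the model's output.
        prompt = f"{command}\nRespond only with JSON matching this schema: {json.dumps(SCHEMA)}"
        data = json.loads(call_model(prompt))
        validate(instance=data, schema=SCHEMA)  # rejects chatter or wrong types
        return data["warp_factor"]

    print(get_warp_factor("Set course for Vulcan."))  # -> 7
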
layer8 · 20 hours ago
You can’t reason about inference (or training) of LLMs on the semantic level. You can’t predict the output of an LLM for a specific input other than by running it. If you want the output to be different in a specific way, you can’t reason with precision that a particular modification of the input, or of the weights, will achieve the desired change (and only that change) in the output. Instead, it’s like a slot machine that you just have to try running again.

The fact that LLMs are based on a network of simple matrix multiplications doesn’t change that. That’s like saying that the human brain is based on simple physical field equations, and therefore its behavior is easy to understand.

layer8 commented on I am happier writing code by hand   abhinavomprakash.com/post... · Posted by u/lazyfolder
rf15 · 20 hours ago
This is pointing out one factor of vibecoding that is talked about too little: that it feels good, and that this feeling often clouds people's judgment on what is actually achieved (i.e. you lost control of the code and are running more and more frictionless on hopes and dreams)
layer8 · 20 hours ago
It feels good to some people. Personally, I have difficulty relating to that; it’s antithetical to important parts of what I value about software development. Feeling good, for me, comes from deeply understanding the problem and the code, and knowing how they match up.
layer8 commented on We mourn our craft   nolanlawson.com/2026/02/0... · Posted by u/ColinWright
maplethorpe · 20 hours ago
> I can't empathize with the complaint that we've "lost something" at all.

I agree! One criticism I've heard is that half my colleagues don't write their own words anymore. They use ChatGPT to do it for them. Does this mean we've "lost" something? On the contrary! Those people probably would have spoken far fewer words into existence in the pre-AI era. But AI has enabled them to put pages and pages of text out into the world each week: posts and articles where there were previously none. How can anyone say that's something we've lost? That's something we've gained!

It's not only the golden era of code. It's the golden era of content.

layer8 · 20 hours ago
I hope this is sarcasm. :)
layer8 commented on AI fatigue is real and nobody talks about it   siddhantkhare.com/writing... · Posted by u/sidk24
ted_bunny · 20 hours ago
I'm somewhat new to HN, but most of the time when I'm inclined to add an emoji to a comment, it turns out that neither the tone nor the content is up to community standards.

My other comments probably aren't any better, but those escape my notice!

layer8 · 20 hours ago
HN isn’t a singular hive-mind. There are different opinions on what kinds of humor have their place on it. At present the root comment has a good number of net upvotes, so there’s that.
layer8 commented on We mourn our craft   nolanlawson.com/2026/02/0... · Posted by u/ColinWright
sosomoxie · 2 days ago
I started programming over 40 years ago because it felt like computers were magic. They feel more magic today than ever before. We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality. I can't believe it's actually happening, and I've never had more fun computing.

I can't empathize with the complaint that we've "lost something" at all. We're on the precipice of something incredible. That's not to say there aren't downsides (WOPR almost killed everyone after all), but we're definitely in a golden age of computing.

layer8 · 20 hours ago
> I started programming over 40 years ago because it felt like computers were magic. They feel more magic today than ever before.

Maybe they made us feel magic, but actual magic is the opposite of what I want computers to be. The “magic” for me was that computers were completely scrutable and reason-able, and that you could leverage your reasoning abilities to create interesting things with them, because they were (after some learning effort) scrutable. True magic, on the other hand, is inscrutable: it’s a thing that escapes explanation, that can’t be reasoned about. LLMs are more like that latter magic, and that’s not what I seek in computers.

> We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality.

I always preferred the Star-Trek-style ship computers that didn’t exhibit personality, that were just neutral and matter-of-fact. Computers with personality tend to be exhausting and annoying. Please let me turn it off. Computers with personality can be entertaining characters in a story, but that doesn’t mean I want them around me as the tools I have to use.

layer8 commented on AI fatigue is real and nobody talks about it   siddhantkhare.com/writing... · Posted by u/sidk24
Kiro · 21 hours ago
That's not the type of fatigue the article is talking about.
layer8 · 21 hours ago
I know, hence the emoticon.
layer8 commented on LLMs as the new high level language   federicopereiro.com/llm-h... · Posted by u/swah
amelius · a day ago
Consider there are 100 upvotes and 100 downvotes. Net votes: 0. The submission would end up with a lower ranking than you wanted it to have.
layer8 · 21 hours ago
Submissions don’t have downvotes, only flagging.
layer8 commented on AI fatigue is real and nobody talks about it   siddhantkhare.com/writing... · Posted by u/sidk24
layer8 · 21 hours ago
There would be less AI fatigue if people stopped talking about AI. ;)

u/layer8

Karma: 36467 · Cake day: June 22, 2018