zephyrthenoble commented on Vibe coding creates fatigue?   tabulamag.com/p/too-fast-... · Posted by u/rom16384
Jeff_Brown · a day ago
Agreed. There are some strategies that seem to help, though. Write extensive tests before writing the code; they serve as guidance. Commit tests separately from library code, so you can tell the AI didn't change the tests. Specify the task with copious examples. Explain why you do things, not just what to do.
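To make the "tests first, committed separately" idea concrete, here is a minimal sketch of what such a pre-written test file might look like, assuming a Python project using pytest; the `slugify` function and `mylib.text` module are purely illustrative and not from the discussion above:

```python
# tests/test_slugify.py -- written (and committed) before any implementation exists.
# The assertions double as the specification the coding agent has to satisfy,
# and keeping them in a separate commit makes it easy to confirm the agent
# never edited the tests to make them pass.
import pytest

from mylib.text import slugify  # illustrative target module, not yet written


def test_basic_slug():
    assert slugify("Hello, World!") == "hello-world"


def test_collapses_whitespace():
    assert slugify("  too   many   spaces ") == "too-many-spaces"


def test_rejects_empty_input():
    with pytest.raises(ValueError):
        slugify("")
```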
zephyrthenoble · a day ago
Interesting, I haven't tried tests outside of the code base the LLM is working on.

I could see other elements of isolation being useful, but this kind of feels like a lot of extra work and complexity which is part of the issue...

zephyrthenoble commented on Vibe coding creates fatigue?   tabulamag.com/p/too-fast-... · Posted by u/rom16384
zephyrthenoble · a day ago
I've felt this too as a person with ADHD, specifically difficulty processing information. Caveat: I don't vibe code much, partially because of the mental fatigue symptoms.

I've found that if an LLM writes too much code, even if I specified what it should be doing, I still have to do a lot of validation myself that would have been done while writing the code by hand. This turns the process from "generative" (haha) to "processing", which I struggle a lot more with.

Unfortunately, the reason I have to do so much processing on vibe code or large generated chunks of code is simply because it doesn't work. There is almost always an issue that is either immediately obvious, like the code not working, or becomes obvious later, like poorly structured code that the LLM then jams into future code generation, creating a house of cards that easily falls apart.

Many people will tell me that I'm not using the right model or tools or whatever, but it's clear to me that the problem is that AI doesn't have any vision of where your code will need to head organically. It's great for one-shots and rewrites, but it always always always chokes eventually on larger/complicated projects, ESPECIALLY ones that are not written in common languages (like JavaScript) or with common packages/patterns, and then I have to go spelunking to find why things aren't working or why it can't generate code to do something I know is possible. It's almost always because the input for new code is my ask AND the poorly structured code, so the LLM will rarely clean up its own crap as it goes. If anything, it keeps writing shoddy wrappers around shoddy wrappers.

Anyways, still helpful for writing boilerplate and segments of code, but I like to know what is happening and have control over how my code is structured. I can't trust the LLMs right now.

zephyrthenoble commented on I'm Kenyan. I don't write like ChatGPT, ChatGPT writes like me   marcusolang.substack.com/... · Posted by u/florian_s
zephyrthenoble · 2 days ago
From my personal perspective, it's always interesting (in an informative way) to see people "defending" em-dashes. Before you get mad, let me explain: before ChatGPT, I only ever saw em-dashes when MS Word would sometimes turn a dash into a "longer dash," as I always thought of it. I have NEVER typed an em-dash, and I don't know how to do it on Windows or Android. I actually remember getting errors from a program that had em-dashes where I needed to subtract numbers, probably because younger me wrote the code in something other than an IDE. Em-dashes always seem very out of place to me.

Some things I've learned/realized from this thread:

1. You can make an em-dash on Macs using -- or a keyboard shortcut

2. On Windows you can do something like Alt + 0151 which shows why I have never done it on purpose... (my first ever —)

3. Other people might have em-dashes on their keyboard?

I still think it's a relatively good marker for ChatGPT-generated text iff you are looking at text that probably doesn't fall under the above situations (give me more if you think of them), but I will keep in mind in the future that it's not a guarantee and that people do not have the exact same computer setup as me. Always good to remember that. I still do the double space after the end of a sentence, after all.

zephyrthenoble commented on Effective harnesses for long-running agents   anthropic.com/engineering... · Posted by u/diwank
roughly · 19 days ago
One of the things that makes it very difficult to have reasonable conversations about what you can do with LLMs is the effort-to-outcome curve is basically exponential - with almost no effort, you can get 70% of the way there. This looks amazing, and so people (mostly executives) look at this and think, “this changes everything!”

The problem is the remaining 30% - the next 10-20% starts to require things like multi-agent judge setups, external memory, context management, and that gets you to something that’s probably working but you sure shouldn’t ship to production. As to the last 10% - I’ve seen agentic workflows with hundreds of different agents, multiple models, and fantastically complex evaluation frameworks to try to reduce the error rates past the ~10% mark. By a certain point, the amount of infrastructure and LLM calls are running into several hundred dollars per run, and you’re still not getting guaranteed reliable output.

If you know what you’re doing and you know where to fit the LLMs (they’re genuinely the best system we’ve ever devised for interpreting and categorizing unstructured human input), they can be immensely useful, but they sing a siren song of simplicity that will lure you to your doom if you believe it.
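As a rough illustration of what the "multi-agent judge setups" mentioned above boil down to, here is a heavily simplified sketch in Python; `generate` and `judge` are hypothetical placeholders for whatever model API you actually call, not functions from any real library:

```python
# Minimal generate-then-judge retry loop. Both model calls are placeholders;
# the point is only the control flow: a second model grades the first one's
# output and the loop retries with the critique until it passes or gives up.

def generate(task: str, feedback: str | None = None) -> str:
    """Call the 'worker' model (placeholder, not a real API)."""
    raise NotImplementedError

def judge(task: str, candidate: str) -> tuple[bool, str]:
    """Call the 'judge' model; returns (passes, critique). Placeholder."""
    raise NotImplementedError

def run_with_judge(task: str, max_attempts: int = 3) -> str:
    feedback = None
    for _ in range(max_attempts):
        candidate = generate(task, feedback)
        ok, critique = judge(task, candidate)
        if ok:
            return candidate
        feedback = critique  # feed the judge's critique into the next attempt
    raise RuntimeError("no candidate passed the judge after max_attempts")
```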

zephyrthenoble · 19 days ago
Yes, it's essentially the Pareto principle [0]. The LLM community has treated the 80% as difficult, complicated work, when it was essentially boilerplate. Allegedly LLMs have saved us from that drudgery, but I personally have found that (without the complicated setups you mention) the 80%-done project that gets one-shotted is in reality more like 50% done because it is built on an unstable foundation, and that final 20% involves a lot of complicated reworking of the code. There's still plenty of value, but I think it is less than proponents would want you to believe.

Anecdotally, I have found that even if you type out paragraph after paragraph describing everything you need the agent to take care of, it eventually feels like you could have written a lot of the code yourself with the help of a good IDE by the time you can finally send your prompt off.

- [0] https://en.wikipedia.org/wiki/Pareto_principle

zephyrthenoble commented on 210 IQ Is Not Enough   taylor.town/iq-not-enough... · Posted by u/surprisetalk
zephyrthenoble · a month ago
Reading the text of the article, and not just reacting to the title, I do think this article has a kernel of truth to it that resonates with me. It's not really talking about intelligence, but MEASURES, and how individuals contort themselves into what they believe is valuable.

But at the end of the day, we do not have an inherent value. I wonder if people who get hung up on these metrics and what value they seemingly hold forget that a person is a whole person, not just some measurement about them. The world's tallest man also has a favorite food, favorite color, and hobbies. He has friends and family. The metric you assigned to him is not the totality of the man.

I say this because recently I've been struggling with work, and I feel like I have to say to myself sometimes, I am more than just a source of income and health insurance to my family. To someone who isn't in my situation it might seem silly, but it has been scary and stressful, and in some ways I did say to myself, you have value because you provide. But we have money saved, we are in a stable situation, and I could always find a new job, yet my ego assigned value to the job regardless, despite my best efforts at pretending that I don't play games with corporations. The stress that keeping a 9 to 5 causes in my mind is entirely self-inflicted.

I guess what I'm saying is that I should value other things about myself more highly, or maybe even not value anything about myself, if that makes sense. What value is there in measuring my success, as long as I am honest about my efforts and happiness?

I will never conquer the entire world by 25, or have a billion dollars, so maybe I need to learn to measure less and focus on true personal accountability and happiness instead. Hopefully that's a simple task...

zephyrthenoble commented on Is Vibe Coding Dying?   garymarcus.substack.com/p... · Posted by u/spking
locknitpicker · a month ago
> I can get near miraculous results from vibe coding, but it often gets stuck in weird “bug loops” where it goes back and forth between broken states, and I have to understand either like bracket formatting, or be able to research library failures and conflicts.

In my experience this is mainly caused by a lack of investment in tests.

Vibe-coding excels when paired with test-driven development, because TDD approaches serve as validators and problem constraints. Often coding agents get stuck in bug loops because they have neither context nor feedback on what represents a broken state. Tests fix both problems, and if you stop to add them, your bug loops quickly vanish.

zephyrthenoble · a month ago
So vibe coders need to know how to write tests? I doubt that lowers the effective barrier of entry to coding very much.

I assume you can't trust the LLM to write these tests, since you are writing tests so the LLM will stop its bug loops...

zephyrthenoble commented on Some people can't see mental images   newyorker.com/magazine/20... · Posted by u/petalmind
roxolotl · 2 months ago
As someone who has aphantasia I did the same thing, but with motes of dust on the window. I’d stare at a single bit of dust or dirt and move my head up and down to make the dirt move with the landscape. It’s funny to read these stories because it solidifies my assumption that I have aphantasia. I did the same thing as a child just without the imagery.
zephyrthenoble · 2 months ago
This is super interesting to me. A lot of threads about aphantasia devolve into both sides being mildly incredulous that the other exists, I think partially because it's _hard_ for us to imagine experiences outside of our own.

But here, I feel like we have a clear delineation of the differences between experiences, in a non-abstract way... and that feels more valuable to me, somehow.

Thank you for sharing!

zephyrthenoble commented on Some people can't see mental images   newyorker.com/magazine/20... · Posted by u/petalmind
zephyrthenoble · 2 months ago
I think an interesting, different way to talk about aphantasia is not "Can you see an apple when you close your eyes?" but more along the lines of "Can you mentally edit the visual reality you see?"

A common exercise when I was young, sitting in the back seat of a car, was to imagine someone on a skateboard riding along the power lines on the side of the road, keeping pace with our car.

It's not literally overriding my vision, it's almost like a thin layer, less than transparent, over reality. But specifically, it's entirely in my mind. I would never confuse that imagery with reality...

Having said that, I think that is related to the way our brains process visual information. I've had an experience while driving where, when I recognize where I am after coming from a new location I'm not familiar with, I feel like my peripheral vision suddenly expands. I think this is because my brain offloads processing to a faster mental model of the road because I'm familiar with it. I wonder if that extra "vision" is actually as ephemeral as my imagined skateboarder.

zephyrthenoble commented on Voronoi map generation in Civilization VII   civilization.2k.com/civ-v... · Posted by u/Areibman
zephyrthenoble · 3 months ago
I've been trying to generate my own maps using Voronoi diagrams as well. I was using Lloyd's algorithm [0] to make strangely shaped regions "fit" better, but I like the insight of generating larger regions to define islands, and then smaller regions on top to define terrain.

One of the things I like about algorithms like this is the peculiarities created by the algorithm, and trying to remove that seems to take some of the interesting novelty away.

- [0] https://en.m.wikipedia.org/wiki/Lloyd%27s_algorithm
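As a rough sketch of the Lloyd's relaxation step mentioned above (not the commenter's or the article's actual code, and using a grid-sampling approximation of the Voronoi cell centroids rather than an exact geometric computation), assuming only numpy:

```python
import numpy as np

def lloyd_relaxation(seeds, iterations=3, resolution=200):
    """Approximate Lloyd's algorithm on the unit square.

    Rather than computing exact Voronoi cell centroids, assign a dense grid
    of sample points to their nearest seed and move each seed to the mean of
    its assigned samples. Repeating this evens out the region sizes, which is
    the "fit better" effect described above.
    """
    xs, ys = np.meshgrid(np.linspace(0, 1, resolution),
                         np.linspace(0, 1, resolution))
    samples = np.column_stack([xs.ravel(), ys.ravel()])

    seeds = np.asarray(seeds, dtype=float).copy()
    for _ in range(iterations):
        # Nearest-seed assignment is exactly membership in that seed's Voronoi cell.
        dists = np.linalg.norm(samples[:, None, :] - seeds[None, :, :], axis=2)
        nearest = np.argmin(dists, axis=1)
        for i in range(len(seeds)):
            cell = samples[nearest == i]
            if len(cell):
                seeds[i] = cell.mean(axis=0)  # move the seed to its cell's centroid
    return seeds

# Example: relax 50 random seed points into more evenly sized regions.
rng = np.random.default_rng(42)
relaxed_seeds = lloyd_relaxation(rng.random((50, 2)))
```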

zephyrthenoble commented on Permeable materials in homes act as sponges for harmful chemicals: study   news.uci.edu/2025/09/22/i... · Posted by u/XzetaU8
coder543 · 3 months ago
CO2 and VOCs, but what about PM2.5 and PM10? What about pollen? What about humidity control?

Cracking a window is also costly, since it directly raises your heating and cooling bills. It's just an "invisible" cost that's easy for some people to ignore since it's hard to directly measure. An ERV pays for itself over time, so it's more a question of whether you can afford to just crack a window?

Living in an apartment makes this difficult because your landlord may not let you improve this situation, but just ignoring the cost of opening a window doesn't make the cost go away.

zephyrthenoble · 3 months ago
I live in the DC area, and whenever I hear people say "just crack a window" I think: that brings in all of the pollen I'm allergic to in every season except winter, plus humidity and 95°F heat if it's summer... I've been looking into getting an ERV for a while.
