Readit News
Posted by u/acqbu 5 months ago
Ask HN: Am I the only one not using AI?
I've tried using various AI tools and models over the past couple of years, but honestly it feels like it gives me a false sense of confidence. Plus, the time I supposedly save building things gets eaten up debugging, correcting, improving the AI-generated slop.

Am I using the tools wrong or are others finding the same thing?

softwaredoug · 5 months ago
AI ruins your flow. That's the biggest problem. I sit here and wait for Claude to do something. Then I get distracted by social media.

No, these things don't actually work if you study human psychology:

* Switching to another work task (what, for like a minute?)

* Playing chess or something (sure, it's better than social media, but still a distraction)

But I do like AI tools that don't interfere with my flow, like GitHub Copilot, or even chatting with Claude / ChatGPT about a task I'm doing.

digital_sawzall · 5 months ago
I started doing pushups between Claude Code responses. I started with 10 but now I rip ~50 like nothing. I'm getting a pull-up bar and trying to do the same. Pull-ups until it completes, then prompt and again; squats, pushups, etc. I'm getting stronger and better at code.
pmbauer · 4 months ago
You are getting stronger. I very much doubt you are getting better at code.
objcts · 5 months ago
stare out the window. look at clouds. wonder how they take the shapes they do. think about water and how it moves through time and space. how those water molecules were once in a bowl of rice or loaf of bread. how many other things has this water been in? what about the water in my body, right now? holy shit, i’ve been a cloud before…

oh, claude’s done now. how does this thing work?

arresin · 5 months ago
E-e-e-xactly. It took an embarrassingly long time for me to come to this conclusion also. There's something hypnotising about watching it work, which is also distracting.

I wonder if I’ve actually saved time overall, or whether, in an uninterrupted flow state, I would have done not just a better but also a quicker job.

_jsmh · 5 months ago
"allowing AI actually increases completion time by 19%--AI tooling slowed developers down."

https://arxiv.org/abs/2507.09089

galaxy_gas · 5 months ago
I just asked Cursor (GPT) where a service was being called from; it's been over 10 minutes and it hasn't come up with an answer. Just constant grepping and reading and planning next moves.

So aggravating

joss82 · 4 months ago
You are not alone.

After falling in love and hacking away with Claude for a few weeks, I'm now in the hangover phase, and barely using any AI at all.

AI works well to build boilerplate code and solve easy problems, while confidently driving full-speed into the wall as soon as complexity increases.

I also noticed that it makes me subtly lazier and dumber. I started thinking like a manager, at a higher-level, believing I could ignore the details. It turns out I cannot, and details came back to bite me quickly.

So, no AI for me right now, but I'm keeping an eye out for the next gens.

nicohayes · 5 months ago
You're definitely not alone. Social media amplifies the "AI is everywhere" narrative, but in reality? Most people are still shipping code the old-fashioned way.

I'd estimate maybe 20% of devs have actually integrated AI into their daily workflow beyond occasional ChatGPT queries. The other 80% either tried it and bounced off the friction, or are waiting to see which tools actually stick.

Not using AI doesn't mean you're falling behind - it means you're avoiding cargo-culting. The real skill is knowing when it's worth the context-switching cost and when grep + your brain is faster.

antfarm · 4 months ago
Whenever I try to use Claude for my programming work, it surprises me with how confidently it states wrong facts or analysis results. I spend a lot of my token quota just correcting it and having it generate an apology.
tstrimple · 5 months ago
Just started a Claude Code experiment this week. I'm building a new NAS, but instead of using an appropriate off-the-shelf distro like TrueNAS, I just installed NixOS and I'm having Claude Code fully manage the entire operating system. It's going pretty well so far. Initially it would reach for tools like dig that weren't available on the install, but after a "# memorize We're on NixOS, you need to try to do things the NixOS way first. Including temporarily installing tools via nix-shell to run individual commands." those issues went away and it's doing NixOS things.

From a clean NixOS command line install, we've got containers and vms handled. Reverse proxy with cloudflare tunnels with all endpoints automatically getting and renewing SSL certs. All the *arr stack tools and other homelab stuff you'd expect. Split horizon DNS with unbound and pihole running internally. All of my configurations backed up in github. I didn't even create the cloudflare tunnels or the github repos. I had claude code handle that via API and cli tools. The last piece I'm waiting on to tie it all together are my actual data drives which should be here tomorrow.
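To give the flavor of what "the NixOS way" means here, this is a rough, hypothetical `configuration.nix` fragment (not my actual config; the container names and ports are made up for illustration):

```nix
{ config, pkgs, ... }:

{
  # Declarative OCI container instead of an ad-hoc `docker run`;
  # a rebuild recreates it, a rollback removes it.
  virtualisation.oci-containers.containers.pihole = {
    image = "pihole/pihole:latest";
    ports = [ "53:53/udp" "8080:80/tcp" ];
  };

  # Tools baked into the system declaratively rather than installed
  # imperatively; one-off commands can still use something like
  # `nix-shell -p dnsutils --run "dig example.com"`.
  environment.systemPackages = with pkgs; [ dnsutils unbound ];
}
```

The point is that everything lives in one file tree that can be committed to git, which is what makes letting an agent drive it less scary than it sounds.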

Is this a smart thing to do? Absolutely not. Tons of things could go wrong. But NixOS is fairly resilient and rollbacks are easy. I don't actually have anything running on the NAS in use yet and I've got my synology limping along until I finish building this replacement. It's still an open question whether I'll use Claude Code like this to manage the NAS once I've migrated my data and my family has switched over. But I've had a very good experience so far.

prossercj · 5 months ago
I don't use it for large-scale code generation, but I do find it useful for small code snippets. For example, asking how to initialize a widget in Kendo UI with specific behavior. With snippets, I can just run the code and verify that it works with minimal effort. It's often more about reminding me of something I already knew than discovering something novel. I wouldn't trust it with anything novel.

In general, I think of it as a better kind of search. The knowledge available on the internet is enormous, and LLMs are pretty good at finding and synthesizing it relative to a prompt. But that's a different task than generating its own ideas. I think of it like a highly efficient secretary. I wouldn't ask my secretary how to solve a problem, but I absolutely would ask if we have any records pertaining to the problem, and perhaps would also ask for a summary of those records.

overvale · 5 months ago
“I've come up with a set of rules that describe our reactions to technologies:

1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

3. Anything invented after you're thirty-five is against the natural order of things.”

― Douglas Adams

jzb · 4 months ago
While I love Douglas Adams, and there's a hint of truth there, I don't think it works well here. By that logic, anything invented after 2005 would be abhorrent to me. Yet there's tons of tech invented since then that I find (or found) "new and exciting."

My dislike of GenAI stuff is based on practical, ethical, and economic concerns.

Practical: GenAI output is bland and untrustworthy. It also discourages thought and learning, IMO. Lest folks line up to tell me how wonderful it is for their learning, that may be true, but my observation is that is not how the majority uses it. Once upon a time I thought the Internet/Web would be a revolution in learning for people. Fool me once...

Ethical: So many problems here, from the training data sets, to people unleashing scraper bots that have been effectively DDoS'ing sites for going on a year (at least) now. If these are the kind of people who make up the industry building these tools, I want nothing to do with the tools.

Economic: Related to ethics, but somewhat separate. GenAI and other LLM/AI tools could benefit people. I acknowledge, for example, there's real promise in using various related tech to do better medical diagnostics. That would be wonderful. But, the primary motivation of the companies pushing AI right now is to 1) get people hooked on the tools and jack up prices, 2) sell tech that can be used to lower wages or reduce employment, and 3) create another hype technology so they can stuff their pockets, and the coming crash be damned.

Again, what is driving AI/LLM is not well intentioned. Ignore that at your own peril. Probably everybody else's peril, too.

Adams no doubt knew people who were aghast at PCs or mobile phones because they were not around when they were younger. I get it. But, well, I wonder how Adams would feel about GenAI tools that spit out "write blah in the style of Douglas Adams" after being trained on all of his work.

overvale · 4 months ago
I think you could make identical arguments about any new technology. Amazingly expensive computers were once not very practical; it's easy to point at any massive, world-changing tech and call it unethical (you can find these arguments about oil, railroads, 24-hour news, etc.), and the same is true for economic incentives. Robber barons and railroad tycoons were not well intentioned from this point of view. I don't think there's anything about AI that is inherently different from previous tech.

And isn’t that the entire point of the quote?

I’m not trying to be dismissive of your point, just to pose a counterpoint.

dasefx · 5 months ago
My workflow is simple:

1. THINK hard about the problem by yourself.

2. Define rough sketches of function names, params, flow, etc. Adapt to your problem.

3. Iterate with any LLM and create an action plan; this is where you correct everything, before any code is written.

4. Send the plan to one of the CLI LLM thingies and attack the points one by one so you don't run out of context.

So far it has been working beautifully for real work stuff. Sometimes the models do drift, but if you are actually paying attention to the responses, you should be able to catch it early.