kassner · 5 months ago
> I've never been more productive

Maybe it’s because my approach is much closer to a Product Engineer than a Software Engineer, but code output is rarely the reason why projects I worked on were delayed. All my productivity issues can be attributed to poor specifications, or to problems that someone just threw over the wall. Every time I’m blocked, it’s because someone didn’t make a decision on something, or no one thought far enough ahead to see that the decision was needed.

It irks me so much when I see the managers of adjacent teams pushing for AI coding tools when the only thing the developers know about the project is what was written in the current JIRA ticket.

pards · 5 months ago
> code output is rarely the reason why projects that I worked on are delayed

This is very true at large enterprises. The pre-coding tasks [0] and the post-coding tasks [1] account for the majority of elapsed time that it takes for a feature to go from inception to production.

The theory of constraints says that optimizations made to a step that's not the bottleneck will only make the actual bottleneck worse.

AI is no match for a well-established bureaucracy.

[0]: architecture reviews, requirements gathering, story-writing

[1]: infrastructure, multiple phases of testing, ops docs, sign-offs

xen2xen1 · 5 months ago
Interesting point. Does that mean AI will favor startups or startup-like places? New tools often seem to favor less established and smaller places.
mountainriver · 5 months ago
Disagree. It’s normally the integration and alignment of systems that takes a long time, e.g. you are forced to use X product, but they’re missing a feature you need, so you have to wait on them.

api · 5 months ago
For most software jobs, knowing what to build is harder than building it.

I’m working hard on building something right now that I’ve had several false starts on, mostly because it’s taken years for us to totally get our heads around what to build. Code output isn’t the problem.

CM30 · 5 months ago
Yeah, something like 95% of project issues are management and planning issues, not programming or tech ones. So often projects start out without anyone on the team researching the original problem or what their users would actually need, and then the whole thing has to be hastily rejigged midway through development to fix that.
inerte · 5 months ago
aka https://en.wikipedia.org/wiki/No_Silver_Bullet

And it's also interesting to think that PMs are also using AI. In my company, for example, we allow users to submit feedback, and an AI summary report is then sent to the PMs. They then put the report into ChatGPT along with the organizational goals, the key players, and previous meeting transcripts, and ask the AI to weave everything together into a PRD, or even a 10-slide presentation.

doug_durham · 5 months ago
I agree with you that traditionally that is the bottleneck. Think about why poor specifications are a problem. It's a problem because software is so costly and time-consuming to create. Many times the stakeholders don't know that something isn't right until they can actually use it. What if it takes 50% less time to create code? Code becomes less precious. Throwing away failed ideas isn't as big an issue. Of course, it is trivially easy to think of cases where this could also lead to never shipping your code.
d0liver · 5 months ago
I feel this. As a dev, most of my time is spent thinking and asking questions.
hedgew · 5 months ago
>Why bother playing when I knew there was an easier way to win? This is the exact same feeling I’m left with after a few days of using Claude Code. I don’t enjoy using the tool as much as I enjoy writing code.

My experience has been the opposite. I've enjoyed working on hobby projects more than ever, because so many of the boring and often blocking aspects of programming are sped up. You get to focus more on higher-level choices and overall design and code quality, rather than searching for specific usages of libraries or applying other minutiae. Learning is accelerated, and the loop of making choices and seeing code generated for them is a bit addictive.

I'm mostly worried that it might not take long for me to be a hindrance in the loop more than anything. For now I still have better overall design sense than AI, but it's already much better than I am at producing code for many common tasks. If AI develops more overall insight and sense, and the ability to handle larger code bases, it's not hard to imagine a world where I no longer even look at or know what code is written.

siffin · 5 months ago
Everyone has different objective and subjective experiences, and I suspect some form of selection will promote those who more often feel excited and relieved by using AI over those who more often experience it as a negative, as though it challenges some core aspect of self.

It might challenge us, and maybe those of us who feel challenged in that way need to rise to it, for there are always harder problems to solve.

If this new tool seems to make things so easy it's like "cheating", then make the game harder. Can't cheat reality.

palata · 5 months ago
Without AI, I have been in a company where the general mentality was to "ship bad software but quickly". Without going into the debate of whether it was profitable in the long term or not (spoiler: it was not), my problem was the following:

I would try to build something "good" (not "perfect", just "good", like modular or future-proof or just not downright malpractice). But while I was doing this, others would build crap. They would do it so fast I couldn't keep up. So they would "solve" the problems much faster. Except that over the years, they just accumulated legacy and had to redo stuff over and over again (at some point you can't throw crap on top of crap, so you just rebuild from scratch and start with new crap, right?).

All that to say, I don't think that AIs will help with that. If anything, AIs will help more people behave like this and produce a lot of crap very quickly.

palata · 5 months ago
The calculator made it less important to be relatively good with arithmetic. Many people just cannot add or subtract two numbers without one. And it feels like they lose intuition, somehow: if numbers don't "speak" to you at all, can you ever realize that 17 is roughly a third of 50? The only way you realise it with a calculator is if you actually look for it. Whereas if you can count, it just appears to you.

Similar with GPS and navigation. When you read a map, you learn how to localise yourself based on landmarks you see. You tend to get an understanding of where you are, where you want to go and how to go there. But if you follow the navigation system that tells you "turn right", "continue straight", "turn right", then again you lose intuition. I have seen people following their navigation system around two blocks to finally end up right next to where they started. The navigation system was inefficient, and with some intuition they could have said "oh actually it's right behind us, this navigation is bad".

Back to coding: if you have a deep understanding of your codebases and dependencies, you may end up finding that you could actually extract some part of one codebase into a library and reuse it in another codebase. Or that instead of writing a complex task in your codebase, you could contribute a patch to a dependency and it would make it much simpler (e.g. because the dependency already has this logic internally and you could just expose it instead of rewriting it). But it requires an understanding of those dependencies: do you have access to their code in the first place (either because they are open source or belong to your company)?

Those AIs obviously help with writing code. But do they help you build an understanding of the codebase, to the point where you develop intuition that can be leveraged to improve the project? Not sure.

Is it necessary, though? I don't think so: the tendency is that software becomes more and more profitable by becoming worse and worse. AI may just help writing more profitable worse code, but faster. If we can screw the consumers faster and get more money from them, that's a win, I guess.

nthingtohide · 5 months ago
> Back to coding: if you have a deep understanding of your codebases and dependencies, you may end up finding that you could actually extract some part of one codebase into a library and reuse it in another codebase.

I understand the point you are making. But what makes you think refactoring won't be AI's forte? Maybe you could explicitly ask for it. Maybe you could ask it to minify while staying human-understandable, and that would achieve the refactoring objectives you have in mind.

palata · 5 months ago
I don't get why you're being downvoted here.

I don't know that AI won't be able to do that, just like I don't know that AGI won't be a thing.

It just feels like it's harder to have the AI detect your dependencies, maybe browse the web for the sources (?) and offer to make a contribution upstream. Or would you envision downloading all the sources of all the dependencies (transitive included) and telling the AI where to find them? And to give it access to all the private repositories of your company?

And then, upstreaming something is a bit "strategic", I would say: you have to be able to say "I think it makes sense to have this logic in the dependency instead of in my project". Not sure if AIs can do that at all.

To me, it feels like it's at the same level of abstraction as something like "I will go with CMake because my coworkers are familiar with it", or "I will use C++ instead of Rust because the community in this field is bigger". Does an AI know that?

fallingknife · 5 months ago
Perhaps it will, but right now I find it much better at generating code from scratch than refactoring.
vertnerd · 5 months ago
I'm a little older now, over 60. I'm writing a spaceflight simulator for fun and (possible) profit. From game assets to coding, it seems like AI could help. But every time I try it out, I just end up feeling drained by the process of guiding it to good outcomes. It's like I have an assistant to work for me, who gets to have all the fun, but needs constant hand holding and guidance. It isn't fun at all, and for me, coding and designing a system architecture is tremendously satisfying.

I also have a large collection of handwritten family letters going back over 100 years. I've scanned many of them, but I want to transcribe them to text. The job is daunting, so I ran them through some GPT apps for handwriting recognition. GPT did an astonishing job and at first blush, I thought the problem was solved. But on deeper inspection I found that while the transcriptions sounded reasonable and accurate, significant portions were hallucinated or missing. Ok, I said, I just have to review each transcription for accuracy. Well, reading two documents side by side while looking for errors is much more draining than just reading the original letter and typing it in. I'm a very fast typist and the process doesn't take long. Plus, I get to read every letter from beginning to end while I'm working. It's fun.

So after several years of periodically experimenting with the latest LLM tools, I still haven't found a use for them in my personal life and hobbies. I'm not sure what the future world of engineering and art will look like, but I suspect it will be very different.

My wife spins wool to make yarn, then knits it into clothing. She doesn't worry much about how the clothing is styled because it's the physical process of working intimately with her hands and the raw materials that she finds satisfying. She is staying close to the fundamental process of building clothing. Now that there are machines for manufacturing fibers, fabrics and garments, her skill isn't required, but our society has grown dependent on the machines and the infrastructure needed to keep them operating. We would be helpless and naked if those were lost.

Likewise, with LLM coding, developers will no longer develop the skills needed to design or "architect" complex information processing systems, just as no one bothers to learn assembly language anymore. But those are things that someone or something must still know about. Relegating that essential role to a LLM seems like a risky move for the future of our technological civilization.

palata · 5 months ago
I can relate to that.

Personally, right now I find it difficult to imagine saying "I made this" if I got an AI to generate all the code of a project. If I go to a bookstore, ask for some kind of book ("I want it to be with a hard cover, and talk about X, and be written in language Y, ..."), I don't think that at the end I will feel like I "made the book". I merely chose it, someone else made it (actually it's multiple jobs, between whoever wrote it and whoever actually printed and distributed it).

Now if I can describe a program to an AI and it results in a functioning program, can I say that I made it?

Of course it's more efficient to use knitting machines, but if I actually knit a piece of clothing, then I can say I made it. And that's what I like: I like to make things.

6510 · 5 months ago
I accidentally questioned out loud if the daughter created the video. I assure you, you've made it! If you bring into existence the proverbial PalataOS in a 6 word prompt we should blame and praise you for it.
thwarted · 5 months ago
Editing and proofreading, of code and prose, are work in themselves, and that work is often not appreciated enough to be recognized as work. I think this is the basis for the perspective that if you can get the LLMs to do the coding/writing, then all you need to do is proof the result, as if that's somehow easier because proofing is not the "real" work.
musicale · 5 months ago
> reading two documents side by side while looking for errors is much more draining than just reading the original letter and typing it in

Validating LLM-generated text seems to be a hard problem, because it requires a human-quality reader.

OgsyedIE · 5 months ago
I think this particular anxiety was explored rather well in the anonymous short story 'The End of Creative Scarcity':

https://www.fictionpress.com/s/3353977/1/The-End-of-Creative...

Some existential objections occur; how sure are we that there isn't an infinite regress of ever deeper games to explore? Can we claim that every game has an enjoyment-nullifying hack yet to discover with no exceptions? If pampered pet animals don't appear to experience the boredom we anticipate is coming for us, is the expectation completely wrong?

nemo1618 · 5 months ago
Thank you for sharing this :)
01HNNWZ0MV43FF · 5 months ago
Loved it, thank you for sharing
bogrollben · 5 months ago
This was great - thank you!
zem · 5 months ago
thanks, that was wonderful
xg15 · 5 months ago
As far as hobby projects are concerned, I'd agree: a bit more "thinking like your boss" could be helpful. You can now focus more on the things you want your project to be able to do instead of the specific details of its code structure. (In the end, nothing keeps you from still manually writing/editing parts of the code if you want some things specifically done in a certain way. There are also projects where the code structure legitimately is the feature, i.e., if you want to explore some new style of API or architecture design for its own sake.)

The one part that I believe will still be essential is understanding the code. It's one thing to use Claude as a (self-driving) car, where you delegate the actual driving but still understand the roads being taken. (Both for learning and for validating that the route is in fact correct)

It's another thing to treat it like a teleporter, where you tell it a destination and then are magically beamed to a location that sort of looks like that destination, with no way to understand how you got there or if this is really the right place.

mjburgess · 5 months ago
All articles of this class, whether positive or negative, begin "I was working on a hobby project" or some variation thereof.

The purpose of hobbies is to be a hobby, archetypical tech projects are about self-mastery. You cannot improve your mastery with a "tool" that robs you of most of the minor and major creative and technical decisions of the task. Building IKEA furniture will not make you a better carpenter.

Why be a better carpenter? Because software engineering is not about hobby projects. It's about research and development at the fringes of a business's (or org's, or project's...) requirements, to evolve their software towards solving them.

Carpentry ("programming craft") will always (modulo 100+ years) be essential here. Powertools do not reduce the essential craft, they increase the "time to craft being required" -- they mean we run into walls of required expertise faster.

AI as applied to non-hobby projects, i.e., R&D programming in the large, where requirements aren't already specified as prior-art programs (of the functional and non-functional variety, etc.), just accelerates the time to hitting the wall where you're going to shoot yourself in the foot if you're not an expert.

I have not seen a single "sky is falling" take from an experienced software engineer, i.e., those operating at typical "in the large" programming scales, on typical R&D projects (revisions to legacy, or greenfield where just the reqs are new).

mnky9800n · 5 months ago
I think it also misses the way you can automate non-trivial tasks. For example, I am working on a project where there are tens of thousands of different datasets, each with its own metadata and structure, but the underlying data is mostly the same. Because the metadata and structure are all different, it’s really impossible to combine all this data into one big dataset without a team of engineers going through each dataset and meticulously restructuring and conforming its metadata to a new monolithic schema. However, I don’t have any money to hire that team of engineers. But I can massage LLMs to do that work for me. These are ideal tasks for AI-type algorithms to solve. It makes me quite excited for the future, as many tasks of this kind could be given to AI agents that would otherwise be impossible to do yourself.
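The workflow described above can be sketched in a small way: have an LLM propose a mapping from each dataset's idiosyncratic field names to the monolithic schema, then apply and validate that mapping with plain deterministic code so a bad mapping fails loudly instead of silently corrupting the merged dataset. The schema, the field names, and the `propose_mapping` stub below are all illustrative assumptions, with the stub standing in for an actual LLM call.

```python
# Minimal sketch: LLM proposes a field mapping; deterministic code applies
# and validates it against one monolithic target schema.

MONOLITHIC_SCHEMA = {"station_id": str, "timestamp": str, "value": float}

def propose_mapping(source_fields):
    """Stand-in for an LLM call that maps source field names to schema fields.
    In practice you would prompt a model with the source metadata, ask for
    JSON like {"site": "station_id", ...}, and then verify it as below."""
    aliases = {"site": "station_id", "stn": "station_id",
               "time": "timestamp", "date": "timestamp",
               "reading": "value", "measurement": "value"}
    return {f: aliases[f] for f in source_fields if f in aliases}

def conform_record(record, mapping, schema=MONOLITHIC_SCHEMA):
    """Rename fields per the proposed mapping and coerce types; raise if the
    result doesn't satisfy the schema, so bad LLM mappings fail loudly."""
    out = {mapping[k]: v for k, v in record.items() if k in mapping}
    missing = set(schema) - set(out)
    if missing:
        raise ValueError(f"mapping left schema fields unfilled: {missing}")
    return {key: schema[key](v) for key, v in out.items()}

# One dataset's quirky metadata, conformed to the monolithic schema:
mapping = propose_mapping(["site", "date", "reading", "notes"])
row = conform_record({"site": "A7", "date": "2024-01-02",
                      "reading": "3.5", "notes": "ok"}, mapping)
```

The point of the split is that only the mapping proposal is probabilistic; the application and validation are ordinary code you can test, which keeps misclassifications detectable on an ongoing basis.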
MattJ100 · 5 months ago
I agree, but only for situations where the probabilistic nature is acceptable. It would be the same if you had a large team of humans doing the same work. Inevitably misclassifications would occur on an ongoing basis.

Compare this to the situation where you have a team develop schemas for your datasets which can be tested and verified, and fixed in the event of errors. You can't really "fix" an LLM or human agent in that way.

So I feel like traditionally computing excelled at many tasks that humans couldn't do - computers are crazy fast and don't make mistakes, as a rule. LLMs remove this speed and accuracy, becoming something more like scalable humans (their "intelligence" is debatable, but possibly a moving target - I've yet to see an LLM that I would trust more than a very junior developer). LLMs (and ML generally) will always have higher error margins; it's how they can do what they do.

xg15 · 5 months ago
I'm reminded of the game Factorio: Essentially the entire game loop is "Do a thing manually, then automate it, then do the higher-level thing the automation enables you to do manually, then automate that, etc etc"

So if you want to translate that: there is value in doing a processing step manually to learn how it works, but once you've understood it, automation can actually benefit you, because only then are you even able to do larger, higher-level processing steps "manually" that would otherwise take an infeasible amount of time and energy.

Where I'd agree though is that you should never lose the basic understanding and transparency of the lower-level steps if you can avoid that in any way.

skerit · 5 months ago
I've used Claude-Code & Roo-Code plenty of times with my hobby projects.

I understand what the article means, but sometimes I've got the broad scopes of a feature in my head, and I just want it to work. Sometimes programming isn't like "solving a puzzle", sometimes it's just a huge grind. And if I can let an LLM do it 10 times faster, I'm quite happy with that.

I've always had to fix up the code one way or another though. And most of the times, the code is quite bad (even from Claude Sonnet 3.7 or Gemini Pro 2.5), but it _did_ point me in the right direction.

About the cost: I'm only using Gemini Pro 2.5 Experimental the past few weeks. I get to retry things so many times for free, it's great. But if I had to actually pay for all the millions upon millions of used tokens, it would have cost me *a lot* of money, and I don't want to pay that. (Though I think token usage can be improved a lot, tools like Roo-Code seem very wasteful on that front)

fhd2 · 5 months ago
> I have not seen a single take by an experienced software engineer have a "sky is falling" take,

Let me save everybody some time:

1. They're not saying it because they don't want to think of themselves as obsolete.

2. You're not using AI right, programmers who do will take your job.

3. What model/version/prompt did you use? Works For Me.

But seriously: It does not matter _that_ much what experienced engineers think. If the end result looks good enough for laymen and there are no short-term negative outcomes, the most idiotic things can build up steam for a long time. There is usually an inevitable correction, but it can take decades. I personally accept that; the world is a bit mad sometimes, but we deal with it.

My personal opinion is pretty chill: I don't know if what I can do will still be needed n years from now. It might be that I need to change my approach, learn something new, or whatever. But I don't spend all that much time worrying about what was, or what will be. I have problems to solve right now, and I solve them with the best options available to me right now.

People spending their days solving problems probably generally don't have much time to create science fiction.

mjburgess · 5 months ago
> You're not using AI right

I use AI heavily, it's my field.

davidanekstein · 5 months ago
I think AI is posing a challenge to people like the person in TFA because programming is their hobby and one that they’re good at. They aren’t used to knowing someone or something can do it better, and knowing that now makes them wonder what the point is. I argue that amateur artists and musicians have dealt with this feeling of “someone can always do it better” for a very long time. You can have fun while knowing someone else can make it better than you, faster, without as much struggle. Programmers aren’t as used to this feeling because, even though we know people like John Carmack exist, it doesn’t fly in your face quite like a beautiful live performance or painted masterpiece does. Learning to enjoy your own process is what I think is key to continuing what you love. Or, use it as an opportunity to try something else — but you’ll eventually discover the same thing no matter what you do. It’s very rare to be the best at something.
palata · 5 months ago
> can make it better than you, faster, without as much struggle

Still need to prove that AI-generated code is "better", though.

"More profitable", in a world where software generally becomes worse (for the consumers) and more profitable (for the companies), sure.

doug_durham · 5 months ago
I don't see that as a likely outcome. I think it will make software better for consumers. There can be more bespoke interfaces, instead of making consumers cram into the solution space dictated by today's expensive-to-change software.
dbalatero · 5 months ago
I'm both relatively experienced as a musician and software engineer so I kinda see both sides. If musicians want to get better, they have to go to the practice room and work. There's a satisfaction to doing this work and coming out the other side with that hard-won growth.

Prior to AI, this was also true with software engineering. Now, at least for the time being, programmers can increase productivity and output, which seems good on the surface. However, with AI, one trades the hard work and brain cells created by actively practicing and struggling with craft for this productivity gain. In the long run, is this worth it?

To me, this is the bummer.

davidanekstein · 5 months ago
I think in the workplace this is true, and a bummer, because the workplace demands the benefits that AI augmented programming offers. As a hobby, though, like music, the need for productivity isn’t as high and you can go to the proverbial practice room and program.

Overall I think you have a good point and the bummer for me is that the practice room isn’t as available for the day job.