I started out very sceptical. When Claude Code landed, I got completely seduced — borderline addicted, slot machine-style — by what initially felt like a superpower. Then I actually read the code. It was shockingly bad. I swung back hard to my earlier scepticism, probably even more entrenched than before.
Then something shifted. I started experimenting. I stopped giving it orders and began using it more like a virtual rubber duck. That made a huge difference.
It’s still absolute rubbish if you just let it run wild, which is why I think “vibe coding” is basically just “vibe debt” — because it just doesn’t do what most (possibly uninformed) people think it does.
But if you treat it as a collaborator — more like an idiot savant with a massive brain but no instinct or nous — or better yet, as a mech suit [0] that needs firm control — then something interesting happens.
I’m now at a point where working with Claude Code is not just productive, it actually produces pretty good code, with the right guidance. I’ve got tests, lots of them. I’ve also developed a way of getting Claude to document intent as we go, which helps me, any future human reader, and, crucially, the model itself when revisiting old code.
What fascinates me is how negative these comments are — how many people seem closed off to the possibility that this could be a net positive for software engineers rather than some kind of doomsday.
Did Photoshop kill graphic artists? Did film kill theatre? Not really. Things changed, sure. Was it “better”? There’s no counterfactual, so who knows? But change was inevitable.
What’s clear is this tech is here now, and complaining about it feels a bit like mourning the loss of punch cards when terminals showed up.
[0]: https://matthewsinclair.com/blog/0178-why-llm-powered-progra...
Desktop publishing software killed many jobs. I worked for a publication where I had colleagues who used to typeset, place images, and use a camera to build pages by hand. That required a team of people. Once QuarkXPress and the like hit the scene, one person could do it all, faster.
In terms of illustration, the tools moved from pen and paper to Adobe Illustrator and Aldus / Macromedia Freehand. Which I'd argue was more of a sideways move. You still needed an illustrator's skillset to use these tools.
The difference between what I just described and LLM image generation is that the tooling changed to streamline an existing skillset. LLMs replace all of it. Just type something and here's your picture. No art / design skill necessary. Obviously, there's no guarantee that the LLM-generated image will be any good. So, I'm not sure the Photoshop analogy works here.
There was a network of sites (like those mentioned above) that had feeds of interesting work done on the web. Much of it was purely an exercise in creativity. The single 1024x768 resolution target let folks go wild without the constraints of responsiveness that we see today.
While I realize that the web had to evolve, I have a lot of nostalgia for web design from those days. The "design" part of it was really centered around artistic expression, and still had a lot of influence from graphic design.
These are leather dress shoes though. As far as I know, this doesn't exist in the athletic shoe world. Considering the materials used in athletic shoes, I don't know how a "repairable" athletic shoe could exist without some serious re-engineering.
Like the author, I've been doing frontend in one way or another for 20 years. The ecosystem, the churn, and the absolute juggling act of syncing state between the frontend and backend is batshit crazy.
I recently started a proof-of-concept project using Go templates and HTMX. I'm trying to approximate building a UI with "components" like I would with React. There are still a lot of rough edges, but it's promising. I'm still not sure I need HTMX, tbh. I've started managing event listeners myself, and I think I prefer it.
Interestingly enough, managing complex UI state that's based on user roles and permissions is so much easier on the server. Just send the HTML that the user is allowed to see. Done.
That said, React, Vue, et al. have sooo much steam. I don't know how a collective shift in thinking would even begin. Especially considering all the developers who have never known anything but frontend frameworks as a way to build a UI.
edit: Otherwise I don't see any value in all these internet dependent AI features. Performance is more than enough even on older phones (4a for example). Google's camera is the main feature that piques my interest.
Right, I think it's less that they "didn't see it coming" and more a basic lack of understanding of how anything works, plus a complete trust in perceived wealth and authority.
The vast majority of people I know who fell into the camp of voting this into power follow a basic syllogism: "he is a (presumably) wealthy businessman, so he must know how to manage money, therefore he can fix the economy." That's literally the entire depth to which they go. There's zero investigation into the validity of the premises, no question of the (ridiculous) assumption that governance must be exactly like running a business, no concern over the kind of business the person has a background in... etc.
I'm sure there are exceptions to this, but I'd also be confident conjecturing that, for a huge number of people, this basic, shallow, completely flawed argument is precisely why they made the choice that they made. That and a pervasive inability to recall what things were like a mere four or so years prior.
I remember when this happened to RadioShack. I went from being able to purchase just the resistor I needed to a $15 pack of 1,000 resistors I'll never use.
I'm not religious, but if Pope Francis survives, then perhaps we can convince him to add profligacy to the list of deadly sins.
The last time I bought resistors from Radio Shack, which was well over a decade ago, they were $1 apiece. Apiece! While I get your sentiment, you can now buy resistors in packs of 100 for roughly the same price you used to pay for 5.
List doesn't include "the same color should look the same on all renderers/browsers". One of the problems with CMYK (intended for print, not screen) is that screens render in RGB, and each browser has its own algorithm for converting from CMYK to RGB. So if you have an image in CMYK and use it on the web, it looks different in different browsers. And people complain, and then you need to explain to them: "convert it to RGB and use that, because this isn't a problem with the site, it's a problem with the encoding not being meant for the site."
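One common uncalibrated conversion (the kind a renderer might fall back on when it ignores the image's ICC profile) is shown below as a sketch in Go; the `naiveCMYKToRGB` helper is made up for illustration, while `color.CMYKToRGB` is Go's stdlib version of the same uncalibrated math. Renderers that apply real color profiles will produce different numbers, which is exactly why the same CMYK image can look different in different browsers:

```go
package main

import (
	"fmt"
	"image/color"
)

// naiveCMYKToRGB applies the simple profile-free formula:
// channel = 255 * (1 - ink) * (1 - k).
func naiveCMYKToRGB(c, m, y, k float64) (r, g, b uint8) {
	r = uint8(255 * (1 - c) * (1 - k))
	g = uint8(255 * (1 - m) * (1 - k))
	b = uint8(255 * (1 - y) * (1 - k))
	return
}

func main() {
	// "Rich black" (40C 30M 30Y 100K): with K at 100%, the naive formula
	// discards the C/M/Y components entirely and yields plain (0,0,0),
	// whereas a profile-aware converter would produce a slightly
	// different dark value.
	r, g, b := naiveCMYKToRGB(0.4, 0.3, 0.3, 1.0)
	fmt.Println(r, g, b) // 0 0 0

	// Go's stdlib uses the same uncalibrated arithmetic on uint8 inks.
	r2, g2, b2 := color.CMYKToRGB(102, 77, 77, 255)
	fmt.Println(r2, g2, b2) // 0 0 0
}
```

The takeaway: "CMYK to RGB" isn't one function; it depends on the profiles and math each renderer chooses, so converting to RGB yourself before publishing is the only way to pin the result down.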