Readit News
_dwt commented on We might all be AI engineers now   yasint.dev/we-might-all-b... · Posted by u/sn0wflak3s
johnfn · 6 days ago
Time and time again I observe that it is the AI skeptic who is not reacting with curiosity. This is almost true by definition: in order to understand a new technology you need to be curious about it, and AI will naturally draw people who are curious, because you have to be curious to learn something new.

When I engage with AI skeptics and I "ask these people what they're really thinking, and listen", they say something totally absurd, like GPT 3.5-turbo and Opus 4.6 are interchangeable, or they call into question my ability as an engineer, or call me a "liar" for claiming that an agent can work for an hour unprompted (something I do virtually every day). This isn't even me picking the worst of it; this is a typical conversation for me on HN, and you can go through my comment history to verify I am not exaggerating.

_dwt · 6 days ago
I'm sorry you've had that experience, and I agree there are a good share of "skeptics" who have latched on to anecdata or outdated experience or theorycrafting. I know it must feel like the goalposts are moving, too, when someone who was against AI on technical grounds last year has now discovered ethical qualms previously unevidenced. I spend a lot of time wondering if I've driven myself to my particular views exclusively out of motivated reasoning. (For what it's worth, I also think "motivated reasoning" is underrated - I am not obliged to kick my own ass out of deference to "The Truth"!)

That said, I _did_ read your comment history (only because you asked!) and - well, I don't know, you seem very reasonable, but I notice you're upset with people talking about "hallucinations" in code generation from Opus 4.6. Now, I have actually spent some time trying to understand these models (as tool or threat), and that means using them in realistic circumstances. I don't like the "H word" very much, because I am an orthodox Dijkstraist and I hold that anthropomorphizing computers and algorithms is always a mistake. But I will say that, like you, I have found that with appropriate context (types, tests) I don't get calls to non-existent functions, etc. However, I have seen: incorrect descriptions of numerical algorithms or their parameters, gaslighting and "failed fix loops" due to missing a "copy the compiled artifact to the testing directory" step, and other things which I consider at least "hallucination-adjacent".

I am personally much more concerned about "hallucinations" and bad assumptions smuggled into the explanations provided, the choice of algorithms and modeling strategies, etc., because I deal with some fairly subtle domain-specific calculations and (mathematical) models. The should-be domain experts a) aren't always and b) tend to be "enthusiasts" who will implicitly trust the talking genius computer.

For what it's worth, my personal concerns don't entirely overlap the questions I raised way above. I think there are a whole host of reasons people might be reluctant or skeptical, especially given the level of vitriol and FUD being thrown around and the fairly explicit push to automate jobs away. I have a lot of aesthetic objections to the entire LLM-generated corpus, but de gustibus...

_dwt commented on We might all be AI engineers now   yasint.dev/we-might-all-b... · Posted by u/sn0wflak3s
prescriptivist · 6 days ago
I don't think that people who don't want to use these tools, or who cling to old ways, are incurious. But I think these developers should face the fact that the skills and ways of working they are reluctant to give up are more or less obviated at this point. Not in the future, but now. It's just that the adoption of these tools isn't evenly distributed yet.

I think there's a place for thoughtful dialogue around what this means for software engineering, but I don't think that's going to change anything at this point. If developers just don't want to participate in this new world, for whatever reason, I'm not judging them, but I also don't think the genie is going back in the bottle. There will be no movement to organize labor to protect us, and there will be no deus ex machina that is going to reverse course on this stuff.

_dwt · 6 days ago
Well, no, not with that attitude there won’t! I am not trying to insinuate that there is a conspiracy, or that posts like yours are part of it, but there has been a huge wave of posts and comments since February which narrow the Overton window to the distance between “it’s here and it’s great” and “I’m sad but it’s inevitable”.

Humanity has possessed nuclear weapons for 80 years and has used them exactly twice in anger, at the very beginning of that span. We can in fact just NOT do things! Not every world-beating technology takes off, for one reason or another. Supersonic airliners. Eugenics. Betamax.

The best time to air concerns was yesterday. The next best time is today. I think we technologists wildly overestimate public understanding and underestimate public distrust of our work and of "AI" specifically. We've got CEOs stating that LLMs are a bigger deal than nuclear weapons or fire(!) and yet getting upset that the government wants control of their use. We've got giddy thinkpieces from people (real example from LinkedIn!) who believe we'll hit 100% white collar unemployment in 5 years and wrap up by saying they're "5% nervous and 95% excited". If that's what they really think, and how they really feel, it's psychopathic! Those numbers get you a social scene that'll make the French Revolution look like a tea party. ("And honestly? I'm here for it.")

So no, while I _think_ you’re correct, I don’t accept the inevitability of it all. There are possibilities I don’t want to see closed off (maybe data finally really is the new oil, and that’s the basis for a planetary sovereign wealth fund. Maybe every man, woman, and child who ever wrote a book or a program or an internet comment deserves a royalty check in the mail each month!) just yet.

_dwt commented on We might all be AI engineers now   yasint.dev/we-might-all-b... · Posted by u/sn0wflak3s
noemit · 7 days ago
Not a day goes by that a fellow engineer doesn't text me a screenshot of something stupid an AI did in their codebase. But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.

The catch about the "guided" piece is that it requires an already-good engineer. I work with engineers around the world and the skill level varies a lot - AI has not been able to bridge the gap. I am generalizing, but I can see how AI can 10x the work of the typical engineer working at startups in California. Even your comment about curiosity highlights this. It's the beginning of an even more K-shaped engineering workforce.

Even people who were previously not great engineers, if they are curious and always enjoyed the learning part - they are now supercharged to learn new ways of building, and they are able to try it out, learn from their mistakes at an accelerated pace.

Unfortunately, this group, the curious ones, IMHO is a minority.

_dwt · 6 days ago
I am going to try to put this kindly: it is very glib, and people will find it offensive and obnoxious, to implicitly round off all resistance or skepticism to incuriosity. Perhaps alienating AI critics even further is the goal, in which case - carry on.

But if you are genuinely confused by the attitudes of your peers, try asking not "what do I have that they lack" ("curiosity"?) but "what do they see that I don't" or "what do they care about that I don't"? Is it possible that they are not enthusiastic for the change in the nature of the work? Is it possible they are concerned about "automation complacency" setting in, precisely _because_ of the ratio of "hundreds of times" writing decent code to the one time writing "something stupid", and fear that every once in a while that "something stupid" will slip past them in a way that wipes the entire net gain of AI use? Is it possible that they _don't_ feel that the typical code is "better than most engineers can write"? Is it possible they feel that the "learning" is mostly ephemera - how much "prompt engineering" advice from a year ago still holds today?

You have a choice, and it's easy to label them (us?) as Luddites clinging to the old ways out of fear, stupidity, or "incuriosity". If you really want to understand, or even change some minds, though, please try to ask these people what they're really thinking, and listen.

_dwt commented on We might all be AI engineers now   yasint.dev/we-might-all-b... · Posted by u/sn0wflak3s
v3xro · 7 days ago
The only way I see out of this crisis (yes I'm not on the token-using side of this) is strict liability for companies making software products (just like in the physical world). Then it doesn't matter if the token-generator spits out code or a software engineer spits out code - the company's incentives are aligned such that if something breaks it's on them to fix it and sort out any externalities caused. This will probably mean no vibe-coded side hustles but I personally am OK with that.
_dwt · 6 days ago
I think this is coming, alongside professional licensure for "software engineers". Every public-facing project will need someone to put a literal stamp of approval on the code, and regardless of whether Claude or Codex wrote the bulk of it, it'll be that person's head on a pike when something goes wrong.

This isn't what many of us probably would have wanted, but I think the public blowback when "AI-coded" systems start failing is going to drive us there. (Note to passing hype-men: I did not say they will fail at higher rates than human-coded systems! I happen to believe this, but it is not germane to the argument - only the public perception matters here.)

_dwt commented on We might all be AI engineers now   yasint.dev/we-might-all-b... · Posted by u/sn0wflak3s
jjmarr · 6 days ago
I vibe coded a Kubernetes cluster in 2 days for a distributed compilation setup. I've never touched half this stuff before. Now I have a proof of concept that'll change my whole organization.

That would've taken me 3 months a year ago, just to learn the syntax and evaluate competing options. Now I can get sccache working in a day, find it doesn't scale well, and replace it with recc + buildbarn. And ask the AI questions like whether we should be sharding the CAS storage.

The downside is that the AI is always pushing me towards half-assed solutions that don't solve the problem - like setting up distributed caching instead of distributed compilation. It also keeps lying, which requires me to redirect and audit its work. But I'm also learning much more than I ever could without AI.

_dwt · 6 days ago
I hope we get a follow-up in six months or a year as to how this all went.
_dwt commented on Relicensing with AI-Assisted Rewrite   tuananh.net/2026/03/05/re... · Posted by u/tuananh
mfabbri77 · 8 days ago
This has the potential to kill open source, or at least the most restrictive licenses (GPL, AGPL, ...): if a license no longer protects software from unwanted use, the only possible strategy is to make the development closed source.
_dwt · 8 days ago
Yes, this is the reason I've completely stopped releasing any open-source projects. I'm discovering that newer models are somewhat capable of reverse-engineering even compiled WebAssembly, etc. too, so I can feel a sort of "dark forest theory" taking hold. Why publish anything - open or closed - to be ripped off at negligible marginal cost?
_dwt commented on ChatGPT Health fails to recognise medical emergencies – study   theguardian.com/technolog... · Posted by u/simonebrunozzi
simonebrunozzi · 13 days ago
> but even certain governments which shall not be named

Why can't you name them, and give us some context? Is this based on public info, or not?

_dwt · 13 days ago
Not the original commenter, but you may have noticed a wee kerfuffle between a large nation-state's "Secretary of War" and a frontier model provider over whether the model's licensing would permit autonomous lethal weapon systems operated by said - and I cannot emphasize the middle word enough - large _language_ model.
_dwt commented on I asked Claude for 37,500 random names, and it can't stop saying Marcus   github.com/benjismith/ai-... · Posted by u/benjismith
_dwt · 15 days ago
Gary Marcus is living in Claude's head rent-free?
_dwt commented on Child's Play: Tech's new generation and the end of thinking   harpers.org/archive/2026/... · Posted by u/ramimac
advisedwang · 20 days ago
> Not long before I arrived in the Bay Area, I’d been involved in a minor but intense dispute with the rationalist community over a piece of fiction I’d written that I’d failed to properly label as fiction

Anyone familiar with what work this is referring to?

_dwt · 20 days ago
This one IIRC: https://samkriss.substack.com/p/the-law-that-can-be-named-is... He writes about it here, a little: https://samkriss.substack.com/p/against-truth

In general, long meandering semi-factual pieces like this, with odd historical excursions, are one of his things, and I don't know anyone else who does it quite the same. (Hmm... oddly enough Scott Alexander, whom he cites here, also does some similarly Borgesian stuff, but with a different bent.) One of my favorite writers, and I recommend pretty much everything he's done since the early 2010s.

_dwt commented on Descent, ported to the web   mrdoob.github.io/three-de... · Posted by u/memalign
_dwt · a month ago
Descent was a huge part of my childhood (and surprisingly my little kids are now big fans as well)! Unfortunately this stutters pretty badly for me, with audio issues as well, on Firefox on Linux. As a huge fan of three.js and other past work... I guess I'll blame Claude?

u/_dwt

Karma: 218 · Cake day: June 18, 2020
About
Consultant, programmer, recovering "real" engineer.

I like sunsets, long walks on the beach, functional programming, type systems, interpreters and compilers, and many other things which don't pay the bills.

blog: https://usethe.computer
