Days that I’d normally feel overwhelmed from requests by management are just Claude Code and chill days now.
Takes like this are utterly insane to me
Document how to use and install your tool in the readme.
Document how to compile and test, plus architecture decisions, coding standards, repository structure, etc. in the agents doc.
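As a sketch of what that split might look like, here's a minimal agents doc skeleton (the file name `AGENTS.md` and section headings are illustrative, not prescribed by the comment):

```
# AGENTS.md (example outline)

## Build & test
- `make build` / `make test` (replace with your project's actual commands)

## Repository structure
- `src/` — application code
- `tests/` — unit and integration tests

## Coding standards
- Formatter, linter, and naming conventions go here

## Architecture decisions
- Link to ADRs or summarize the key ones inline
```

The point is separation of audience: the README serves human users installing the tool; the agents doc gives a coding agent the operational context it can't infer from code alone.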
The problem is this is classic Gell-Mann Amnesia. I can have it restyle my website with zero work, even adding StarCraft 2 or NBA Jam themes, but ask it to work on a planning or estimation problem and I'm annoyed by its quality. It's probably bad at both but I don't notice. If we have 10 specializations required on an app, I'm only mad about 10%. If I want to make an app entirely outside my domain, yeah sure it's the best ever.
It's good at stuff that most competent engineers can get right while also having the sort of knowledge breadth an average engineer would lack. You really need to be a domain expert to accurately judge its output in specific areas.
You’ve got people foaming at the mouth anytime someone mentions AGI, like it’s some kind of cult prophecy. “Oh it’s poorly defined, it’s not around the corner, everyone talking about it is selling snake oil.” Give me a break. You don’t need a perfect definition to recognize that something big is happening. You just need eyes, ears, and a functioning brain stem.
Who cares if AGI isn’t five minutes away. That’s not the point. The point is we’ve built the closest thing to a machine that actually gets what we’re saying. That alone is insane. You type in a paragraph about your childhood trauma and it gives you back something more coherent than your therapist. You ask it to summarize a court ruling and it doesn’t need to check Wikipedia first. It remembers context. It adjusts to tone. It knows when you’re being sarcastic. You think that’s just “autocomplete”? That’s not autocomplete, that’s comprehension.
And the logic complaints, yeah, it screws up sometimes. So do you. So does your GPS, your doctor, your brain when you’re tired. You want flawless logic? Go build a calculator and stay out of adult conversations. This thing is learning from trillions of words and still does better than half the blowhards on HN. It doesn’t need to be perfect. It needs to be useful, and it already is.
And don’t give me that “it sounds profound but it’s really just crap” line. That’s 90 percent of academia. That’s every self-help book, every political speech, every guy with a podcast and a ring light. If sounding smarter than you while being wrong disqualifies a thing, then we better shut down half the planet.
Look, you’re not mad because it’s dumb. You’re mad because it’s not that dumb. It’s close. Close enough to feel threatening. Close enough to replace people who’ve been coasting on sounding smart instead of actually being smart. That’s what this is really about. Ego. Fear. Control.
So yeah, maybe it’s not AGI yet. But it’s smarter than the guy next to you at work. And he’s got a pension.
> Who cares if AGI isn’t five minutes away. That’s not the point. The point is we’ve built the closest thing to a machine that actually gets what we’re saying. That alone is insane. You type in a paragraph about your childhood trauma and it gives you back something more coherent than your therapist. You ask it to summarize a court ruling and it doesn’t need to check Wikipedia first. It remembers context. It adjusts to tone. It knows when you’re being sarcastic. You think that’s just “autocomplete”? That’s not autocomplete, that’s comprehension
My experience with LLMs has been all over the place. They're insanely good at comprehending language. As a side effect, they're also decent at comprehending complicated concepts like math or programming, since most of human knowledge is embedded in language. This does not mean they have a thorough understanding of those concepts. It is very easy to trip them up. They also fail in ways that are not obvious to people who aren't experts in whatever is the subject of their output.
> And the logic complaints, yeah, it screws up sometimes. So do you. So does your GPS, your doctor, your brain when you’re tired. You want flawless logic? Go build a calculator and stay out of adult conversations. This thing is learning from trillions of words and still does better than half the blowhards on HN. It doesn’t need to be perfect. It needs to be useful, and it already is.
I feel like this is handwaving away the shortcomings a bit too much. It does not screw up in the same way humans do. Not even close. Besides, I think computers should rightfully be held to a higher standard. We already have programs that can automate tasks that human brains would find challenging and tedious to do. Surely the next frontier is something with the speed and accuracy of a computer while also having the adaptability of human reasoning.
I don't feel threatened by LLMs. I definitely feel threatened by some of the absurd amount of money being put into them though. I think most of us here will be feeling some pain if a correction happens.
The "learn to master it or become obsolete" sentiment also doesn't make a lot of sense to me. Isn't the whole point of AI as a technology that people shouldn't need to spend years mastering a craft to do something well? It's literally trying to automate intelligence.