Readit News logoReadit News
coltonv commented on Things that helped me get out of the AI 10x engineer imposter syndrome   colton.dev/blog/curing-yo... · Posted by u/coltonv
simoncion · 20 days ago
> I'm not being defensive, I'm rebutting a (false) factual claim.

You're rebutting a claim about your rant that -if it ever did exist- has been backed away from and disowned several times.

From [0]

> > Wait, now you're saying I set the 10x bar? No, I did not.

>

> I distinctly did not say that. I said your article was one of the ones that made me feel anxious. And it's one of the ones that spurred me to write this article.

and from [1]

> I'm trying to write a piece to comfort those that feel anxious about the wave of articles telling them they aren't good enough, that they are "standing still", as you say in your article. That they are crazy. Your article may not say the word 10x, but it makes something extremely clear: you believe some developers are sitting still and others are sipping rocket fuel. You believe AI skeptics are crazy. Thus, your article is extremely natural to cite when talking about the origin of this post.

[0] <https://news.ycombinator.com/item?id=44799049>

[1] <https://news.ycombinator.com/item?id=44804434>

coltonv · 19 days ago
Thanks for this. The guy really wants to pin me on the 10x thing coming from him but I keep saying it's not and he keeps ignoring me. The claims of his article are extremely plain and clear: AI-loving engineers are going "rocket fuel" fast, AI skeptical engineers are crazy (literally the title!) and are sitting still.

My post is about how those types of claims are unfounded and make people feel anxious unnecessarily. He just doesn't want to confront that he wrote an article that directly says these words and that those words have an effect. He wants to use strong language without any consequences. So he's trying to nitpick the things I say and ignore my requests for further information. It's kinda sad to watch, honestly.

coltonv commented on Things that helped me get out of the AI 10x engineer imposter syndrome   colton.dev/blog/curing-yo... · Posted by u/coltonv
ameyv · 20 days ago
Thanks colton. Man, you just made me feel 10x better :) And ahh yes I said 10x. :P
coltonv · 20 days ago
I'm happy to hear that! A lot of people posting their hot takes here about how AI is actually great or actually awful, but I was hoping to have more conversations like this in the comments. I'm glad I can help people feel better.
coltonv commented on Open models by OpenAI   openai.com/open-models/... · Posted by u/lackoftactics
hrpnk · 20 days ago
With LM Studio you can configure context window freely. Max is 131072 for gpt-oss-20b.
coltonv · 20 days ago
Yes but if I set it above ~16K on my 32gb laptop it just OOMs. Am I doing something wrong?
coltonv commented on Open models by OpenAI   openai.com/open-models/... · Posted by u/lackoftactics
simonw · 20 days ago
Just posted my initial impressions, took a couple of hours to write them up because there's a lot in this release! https://simonwillison.net/2025/Aug/5/gpt-oss/

TLDR: I think OpenAI may have taken the medal for best available open weight model back from the Chinese AI labs. Will be interesting to see if independent benchmarks resolve in that direction as well.

The 20B model runs on my Mac laptop using less than 15GB of RAM.

coltonv · 20 days ago
What did you set the context window to? That's been my main issue with models on my macbook, you have to set the context window so short that they are way less useful than the hosted models. Is there something I'm misisng there?
coltonv commented on Things that helped me get out of the AI 10x engineer imposter syndrome   colton.dev/blog/curing-yo... · Posted by u/coltonv
kasey_junk · 20 days ago
I think maybe this is another disconnect. A lot of the advantage I get does not come from the agent doing things faster than me, though for most tasks it certainly can.

A lot of the advantage is that it can make forward progress when I can’t. I can check to see if an agent is stuck, and sometimes reprompt it, in the downtime between meetings or after lunch before I start whatever deep thinking session I need to do. That’s pure time recovered for me. I wouldn’t have finished _any_ work with that time previously.

I don’t need to optimize my time around babysitting the agent. I can do that in the margins. Watching the agents is low context work. That adds the capability to generate working solutions during times that was previously barred from that.

coltonv · 20 days ago
I've done a few of these types of hands off and go to a meeting style interactions. It has worked a few times, but I tend to just find that they over do it or cause issues. Like you ask them to fix an error and they add a try catch, swallow the error, and call it a day. Or the PR has 1000 line changes when it should have two.

Either way, I'm happy that you are getting so much out of the tools. Perhaps I need to prompt harder, or the codebase I work on has just deviated too much from the stuff the LLMs like and simply isn't a good candidate. Either way, appreciate talking to you!

coltonv commented on Things that helped me get out of the AI 10x engineer imposter syndrome   colton.dev/blog/curing-yo... · Posted by u/coltonv
tptacek · 20 days ago
I asked for an example of one of the articles you'd read that said that LLMs were turning ordinary developers into 10x developers. You cited my article. My article says nothing of the sort; I find the notion of "10x developers" repellant.
coltonv · 20 days ago
If you really need some, there are some links in another comment. Another one that was made me really wonder if I was missing the bus and makes 10x claims repeatedly is this YC podcast episode[1]. But again, I'm not trying to write a point by point counter of a specific article or video but a general narrative. If you want that for your article, Ludicity does a better job eviscerating your post than I ever could: https://ludic.mataroa.blog/blog/contra-ptaceks-terrible-arti...

I'm trying to write a piece to comfort those that feel anxious about the wave of articles telling them they aren't good enough, that they are "standing still", as you say in your article. That they are crazy. Your article may not say the word 10x, but it makes something extremely clear: you believe some developers are sitting still and others are sipping rocket fuel. You believe AI skeptics are crazy. Thus, your article is extremely natural to cite when talking about the origin of this post.

You can keep being mad at me for not providing a detailed target list, I said several times that that's not what the point of this is. You can keep refusing to actually elaborate on how you use AI day to day and solve its problems. That's fine. I don't care. I care a lot more to talk about the people who are actually engaging with me (such as your friend) and helping me to understand what they are doing. Right now, if you're going to keep not actually contributing to the conversation, you're just kinda being a salty guy with an almost unfathomable 408,000 karma going through every HN thread every single day and making hot takes.

[1] https://www.youtube.com/watch?v=IACHfKmZMr8

coltonv commented on Things that helped me get out of the AI 10x engineer imposter syndrome   colton.dev/blog/curing-yo... · Posted by u/coltonv
voxleone · 20 days ago
There’s something ironic here. For decades, we dreamed of semi-automating software development. CASE tools, UML, and IDEs all promised higher-level abstractions that would "let us focus on the real logic."

Now that LLMs have actually fulfilled that dream — albeit by totally different means — many devs feel anxious, even threatened. Why? Because LLMs don’t just autocomplete. They generate. And in doing so, they challenge our identity, not just our workflows.

I think Colton’s article nails the emotional side of this: imposter syndrome isn’t about the actual 10x productivity (which mostly isn't real), it’s about the perception that you’re falling behind. Meanwhile, this perception is fueled by a shift in what “software engineering” looks like.

LLMs are effectively the ultimate CASE tools — but they arrived faster, messier, and more disruptively than expected. They don’t require formal models or diagrams. They leap straight from natural language to executable code. That’s exciting and unnerving. It collapses the old rites of passage. It gives power to people who don’t speak the “sacred language” of software. And it forces a lot of engineers to ask: What am I actually doing now?

coltonv · 20 days ago
Very interesting perspective. Thanks for sharing!
coltonv commented on Things that helped me get out of the AI 10x engineer imposter syndrome   colton.dev/blog/curing-yo... · Posted by u/coltonv
quaintdev · 20 days ago
Interesting that title of this post was changed. I think I have seen this happening 2nd time now. It seems Hacker News does not favor AI negative narratives.
coltonv · 20 days ago
Has happened to me before. It seems they change anything that has a negative connotation to try to take something more positive out of it. I don't love that they do that without asking or confirming with the author. But this title is also fine with me. I actually thought about naming it "Curing your AI 10x Imposter Syndrome", but it felt like a stretch that someone would understand what the content would be about.
coltonv commented on Things that helped me get out of the AI 10x engineer imposter syndrome   colton.dev/blog/curing-yo... · Posted by u/coltonv
kasey_junk · 20 days ago
> but getting corrected after tests are run, whereas my agents typically get stuck in these write-and-test loops

This maybe a definition problem then. I don’t think “the agent did a dumb thing that it can’t reason out of” is a hallucination. To me a hallucination is a pretty specific failure mode, it invents something that doesn’t exist. Models still do that for me but the build test loop sets them aright on that nearly perfectly. So I guess the model is still hallucinating but the agent isn’t so the output is unimpacted. So I don’t care.

For the agent is dumb scenario, I aggressively delete and reprompt. This is something I’ve actually gotten much better at with time and experience, both so it doesn’t happen often and I can course correct quickly. I find it works nearly as well for teaching me about the problem domain as my own mistakes do but is much faster to get to.

But if I were going to be pithy. Aggressively deleting work output from an agent is part of their value proposition. They don’t get offended and they don’t need explanations why. Of course they don’t learn well either, that’s on you.

coltonv · 20 days ago
What I'm saying is that the model will get into one of these loops where it needs to be killed, and I'll look at some of the intermediate states and the reasons for failure and they are because it hallucinated things, ran tests, got an error. Does that make sense?

Deleting and re-prompting is fine. I do that too. But even one cycle of that often means the whole prompting exercise takes me longer than if I just wrote the code myself.

coltonv commented on Things that helped me get out of the AI 10x engineer imposter syndrome   colton.dev/blog/curing-yo... · Posted by u/coltonv
jonas21 · 20 days ago
> You can't compress the back and forth of 3 months of code review into 1.5 weeks.

If your organization is routinely spending 3 months on a code review, it sounds like there's probably a 10 to 100x improvement you can extract from fixing your process before you even start using AI.

coltonv · 20 days ago
I think I may have worded this poorly. I mean the total amount of code review time that goes into 3 months of work (likely on hundreds of PRs) can't be compressed into 1.5 weeks at the same portion of time being allocated to code review. Each code review has a "floor" time, a minimum amount of time loss due to context switching, reading, writing, etc.

u/coltonv

KarmaCake day456March 27, 2015View Original