The more time you spend on guidelines and guardrails, the better the LLM does at following your prompt. That's why I created a wizard to get it right from the beginning, simplifying the process and "guiding" you into thinking through what you want to achieve.
I've got some users, and what I can get done each time I start vibecoding is astounding. Obviously 50% of the work is just fixing what the AI misunderstood or over-imagined, but having a good AGENTS.md is key (along with patience on my part). That's why I'm building LynxPrompt: to have an easy way to own a good AGENTS.md file for my next projects... and hopefully for yours too.
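For context, here's a minimal sketch of the kind of AGENTS.md I mean. AGENTS.md is just free-form markdown that coding agents read for project-specific instructions; the section names, commands, and rules below are illustrative assumptions on my part, not LynxPrompt's actual output:

```markdown
# AGENTS.md — guidance for coding agents (illustrative sketch)

## Setup
- Install dependencies with `npm install`; start the dev server with `npm run dev`.

## Code style
- TypeScript strict mode; no `any` without a comment justifying it.
- Prefer small, pure functions; colocate tests next to source files.

## Testing
- Run `npm test` before proposing a diff; new code needs new tests.

## Boundaries
- Never touch files under `migrations/` without asking first.
- If unsure about intent, stop and ask instead of guessing.
```

The "Boundaries" section is the part that saves me the most fixing time, since it targets exactly the "imagined too much" failures.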
LLM-assisted coding feels like the next step in that same pattern. The difference is that this abstraction layer can confidently make stuff up: hallucinated APIs, wrong assumptions, edge cases it didn’t consider. So the work doesn’t disappear; it shifts. The valuable skill becomes guiding it: specifying the task clearly, constraining the solution, reviewing diffs, insisting on tests, and catching the “looks right but isn’t” failures. In practice it’s like having a very fast junior dev who never gets tired and also never says “I’m not sure”.
That’s why I don’t buy the extremes on either side. It’s not magic, and it’s not useless. Used carelessly, it absolutely accelerates tech debt and produces bloated code. Used well, it can take a lot of the grunt work off your plate (refactors, migrations, scaffolding tests, boilerplate, docs drafts) and leave you with more time for the parts that actually require engineering judgement.
On the “will it make me dumber” worry: only if you outsource judgement. If you treat it as a typing/lookup/refactor accelerator and keep ownership of architecture, correctness, and debugging, you’re not getting worse—you’re just moving your attention up the stack. And if you really care about maintaining raw coding chops, you can do what we already do in other areas: occasionally turn it off and do reps, the same way people still practice mental math even though Excel exists.
Privacy/ethics are real concerns, but that’s a separate discussion; there are mitigations and alternatives depending on your threat model.
At the end of the day, the job title might stay “software engineer”, but the day-to-day shifts toward “AI guide + reviewer + responsible adult.” And like every other tooling jump, you don’t have to love it, but you probably do have to learn it—because you’ll end up maintaining and reviewing AI-shaped code either way.
Basically, I think the author hit the nail on the head.
It also doesn't show apps from other Spaces, which I'd like it to do. The Mac's default Cmd+Tab does that.
I have been following AlignTrue (https://aligntrue.ai/docs/about), but I think I prefer your way of doing accountability: acting on the thinking process instead of staying passive. On top of that, your way is a more down-to-earth, practical approach.
Great live demo. However, I would have liked a more in-depth showcase of AAP and AIP, even within this multi-agent interaction scenario, to understand the full picture better. Or perhaps simply prepare a separate showcase for AAP and AIP. Just my two cents.
PS. I'm the creator of LynxPrompt, which honestly falls very short for the cases we're discussing today, but my point is that I stay engaged with the topic of trust/accountability: how to organize agents and guide them properly without supervision.