Readit News
vladsh commented on Your job is to deliver code you have proven to work   simonwillison.net/2025/De... · Posted by u/simonw
vladsh · a day ago
We should get back to the basic definition of the engineering job. An engineer understands requirements, translates them into logical flows that can be automated, communicates tradeoffs across the organization, and makes the calls on maintainability, extensibility, readability, and security. Most importantly, they're accountable for the outcome, because many tradeoffs only reveal their cost once they hit reality.

None of this is covered by code generation, nor by juniors submitting random PRs. Those are symptoms of juniors (and not only juniors) missing the fundamentals. When we forget what the job actually is, we create misalignment with junior engineers and end up with weird ideas like "spec-driven development".

If anything, coding agents are a wake-up call that clarifies what the engineering profession is really about.

vladsh commented on Skills for organizations, partners, the ecosystem   claude.com/blog/organizat... · Posted by u/adocomplete
vladsh · a day ago
Skills are a pretty awkward abstraction. They emerged to patch a real problem: generic models require fine-tuning via context, which quickly leads to bloated context files and context dilution (i.e. more hallucinations).
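To illustrate what I mean (a toy sketch with made-up names, not how any vendor actually implements it): the pattern being standardized is essentially "don't prepend one ever-growing context file to every prompt, pull in only the instructions relevant to the current task."

```typescript
// Toy illustration of the trade-off: a single bloated context file dilutes
// every prompt, while selecting only task-relevant snippets keeps it focused.
const skills: Record<string, string> = {
  "release": "Steps for cutting a release: bump version, tag, changelog...",
  "db-migration": "How to write and review schema migrations safely...",
  "frontend": "Component conventions, styling rules, accessibility checks...",
};

// Naive approach: everything goes into every prompt, relevant or not.
const bloatedContext = Object.values(skills).join("\n\n");

// The "skill" approach: load only what the current task actually mentions.
function contextFor(task: string): string {
  return Object.entries(skills)
    .filter(([name]) => task.toLowerCase().includes(name))
    .map(([, body]) => body)
    .join("\n\n");
}

console.log(contextFor("Prepare the db-migration for the orders table"));
```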

But skills don't really solve the problem. Turning that workaround into a standard feels strange. Standardizing a patch isn't something I'd expect from Anthropic; it's unclear what their endgame is here.

vladsh commented on Sycophancy is the first LLM "dark pattern"   seangoedecke.com/ai-sycop... · Posted by u/jxmorris12
vladsh · 18 days ago
LLMs get over-analyzed. They're predictive text models trained to match patterns in their data: statistical algorithms, not brains, not systems with "psychology" in any human sense.

Agents, however, are products. They should have clear UX boundaries: show what context they’re using, communicate uncertainty, validate outputs where possible, and expose performance so users can understand when and why they fail.
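To make that concrete, here's a rough sketch of the kind of response shape I'd expect such a product to expose (my own hypothetical schema, not any vendor's API):

```typescript
// Hypothetical response shape for an agent product that surfaces its own
// context, uncertainty, and validation status instead of hiding them.
interface AgentResponse {
  output: string;                         // the generated answer or patch
  contextUsed: string[];                  // files, docs, or tool results the agent read
  confidence: "low" | "medium" | "high";  // uncertainty communicated, not implied
  validation: {
    checked: boolean;                     // were tests / linters / schema checks run?
    passed?: boolean;                     // result, if a check was possible
    details?: string;                     // e.g. "3/3 unit tests passed"
  };
}
```

Even something this simple gives users a basis for deciding when to trust the output and when to double-check it themselves.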

IMO the real issue is that raw, general-purpose models were released directly to consumers. That normalized under-specified consumer products and created the expectation that users would interpret model behavior, define their own success criteria, and manually handle edge cases, sometimes with severe real-world consequences.

I'm sure the market will fix itself with time, but I hope more people learn when not to use these half-baked AGI "products".

vladsh commented on Writing a good Claude.md   humanlayer.dev/blog/writi... · Posted by u/objcts
vladsh · 19 days ago
What is a good Claude.md?
vladsh commented on How I use every Claude Code feature   blog.sshh.io/p/how-i-use-... · Posted by u/sshh12
vladsh · 2 months ago
Or use this solution and get fine-tuned context generated for each and every task, just in time: devly.ai

Please stop expecting every engineer on the team to be an AI engineer just to get started with coding agents.

u/vladsh

Karma: 40 · Cake day: September 16, 2020