I’m a SWE with ~5.5 years of professional experience now, and anecdotally I see AI used primarily by juniors who lean on it as a crutch. Moreover, the vast majority of the “best” engineers I know do not use any AI-assisted coding tools (e.g. Copilot). They do, however, occasionally use LLMs as a search engine for unqualified questions (i.e. where they suspect there are unknown unknowns). Is my anecdotal experience representative of reality? If not, I’d love to hear people’s workflows (especially from the historical high performers).
1. Exploration: LLM first, docs second—cuts discovery time by ~3×.
2. Boilerplate: AI generates, I refactor on the spot; never merged blindly.
3. CR: bot leaves a first-pass checklist, humans focus on architecture.
4. Legacy spelunking: 200k-context summary + mermaid call-graph.
5. Rule of three: AI writes glue, I write core, tests cover both.
Result: 30-40% more features shipped per quarter without quality drop.
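The legacy-spelunking step (point 4) can be approximated locally before handing anything to a model. A minimal sketch of module-local call-graph extraction using Python's stdlib `ast` (the `demo` module below is hypothetical; the original workflow renders the result as mermaid):

```python
import ast

def call_graph(source: str) -> dict[str, set[str]]:
    """Map each function defined in `source` to the plain names it calls.

    This is a module-local view: attribute calls (obj.method) and imports
    are ignored, which is enough for a first-pass orientation of legacy code.
    """
    tree = ast.parse(source)
    graph: dict[str, set[str]] = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = set()
            for sub in ast.walk(node):
                if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                    calls.add(sub.func.id)
            graph[node.name] = calls
    return graph

# Hypothetical demo module to show the shape of the output.
demo = """
def handler(req):
    data = parse(req)
    return render(data)

def parse(req):
    return req
"""

print(call_graph(demo))
```

Feeding the resulting edges into a mermaid `graph TD` block (one `A --> B` line per edge) gives the call-graph diagram mentioned above.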
Code review & refactoring assistant: I use AI to sanity-check my design or spot potential edge cases.
Exploration & learning: When evaluating a new library or framework, I ask AI for comparisons or best practices.
Docs summarization: LLMs help me parse long RFCs or documentation quickly.
Prototyping / boilerplate: For scaffolding boring repetitive code.
But not for actual algorithmic thinking or critical code — those still rely on human judgment.
In short: top engineers do use AI, but they use it like they use Stack Overflow — a tool for leverage, not a crutch.
I've tried it to generate HTML/CSS for an email, and it kind of works, but despite being asked to, it doesn't render consistently across all versions of Outlook and Gmail.
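For what it's worth, the usual manual workaround here is table-based layout with fully inlined styles, since desktop Outlook renders HTML with Word's engine and ignores much of modern CSS. A minimal sketch using Python's stdlib `email` module (content and subject are placeholders):

```python
from email.mime.text import MIMEText

# Table layout + inline styles: the classic compatibility baseline for
# Outlook/Gmail, instead of flexbox, grid, or external stylesheets.
html = """\
<table role="presentation" width="600" cellpadding="0" cellspacing="0" border="0">
  <tr>
    <td style="font-family: Arial, sans-serif; font-size: 16px; padding: 20px;">
      Hello! This cell uses only inline styles and table layout.
    </td>
  </tr>
</table>
"""

msg = MIMEText(html, "html")
msg["Subject"] = "Layout test"  # sender/recipient omitted in this sketch
print(msg.get_content_type())
```

Even with this pattern, real email builds usually still need testing in the actual clients; the point is that the constraint set is narrow and old, which models don't reliably respect.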
I'm overly cautious about what I paste in. Just as you can find PII in logs, I think the amount of PII being pasted into AI tools will be crazy.
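One mitigation is scrubbing text before it leaves your machine. A hypothetical pre-paste scrubber, with illustrative (not exhaustive) regexes for a few common PII shapes:

```python
import re

# Illustrative patterns only: real PII detection needs far broader coverage
# (names, addresses, international formats, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace each matched PII pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

log_line = "user jane.doe@example.com called from 555-867-5309"
print(scrub(log_line))  # user <EMAIL> called from <PHONE>
```

Regex scrubbing is a floor, not a ceiling, but it catches the obvious cases before a log excerpt ends up in a prompt.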
I see some people using it to vibe-code whole features, which often leads to code that works on the surface, but when you dig into what it actually does, it's catastrophic.