I'm Per from Scrimba (YC S20), the code-learning platform.
There's been a lot of talk lately about whether AI tools are causing skill atrophy amongst developers. We get a front-row seat to this, and we see more and more students struggling with basic concepts and with building apps on their own. This is almost always a consequence of relying too heavily on ChatGPT and vibe coding tools.
So we built a small side project: https://devatrophy.com
It's a test of your core web dev knowledge — no handholding, no back rubs, no AI autocomplete. Just you, your brain, and 10 questions. There are three levels (Noobie, Le Chad, Hardcore), and the questions cover HTML, CSS, JavaScript, databases, and Node.
You’ll get a score at the end, plus a downloadable certificate for bragging rights (or public shaming).
Would love for you to try it and tell us what you think. We'd also be curious to hear if you're noticing any signs of "dev atrophy" yourself, or in your team.
PS: We decided to build it by vibe coding on V0. Oh, the irony.
if your code doesn't work it doesn't work
you can't bullshit a computer
for people who are doing social science it's an issue
but they were way past the point of no return already so it doesn't really matter
Code can definitely only sort of work: it only works on the happy path, only works on the computer it was developed on, only works for some versions of some dependencies, only works when single-threaded, only works when the network is fast enough, only works for a single user at a time, and so on.
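As a small illustration of "only works on the happy path" (function and inputs invented here, not from the thread), a parser can return the right answer on the one input it was tested with and silently wrong answers everywhere else:

```javascript
// Hypothetical example: a price parser that "works" only on the
// exact shape of input it was developed against.
function parsePrice(label) {
  // Assumes a leading "$" followed by a plain decimal number.
  return parseFloat(label.slice(1));
}

console.log(parsePrice("$4.99"));  // 4.99 — the happy path
console.log(parsePrice("4.99"));   // 0.99 — no "$", so slice(1) silently drops a digit
console.log(parsePrice("$1,299")); // 1 — parseFloat stops at the comma, silently wrong
```

No input here throws; the failures are silent, which is exactly why "it ran once" is not the same as "it works".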
Why is the code like that? How are people likely to use an API? How does code change over time? How can we work effectively on a codebase that's too big for any single person to understand? How can we steer the direction of a codebase over a long timescale when it's constantly changing every day?
you can't bullshit a computer
this is wrong. I would argue the difference between a junior dev/intern and a senior engineer is that while both can write code that works, juniors find local maxima: solutions that work but can't scale, or won't be easy to integrate, extend, or maintain.
This happens in maths, biology, and every scientific field. Experience is partly the ability to make decisions between options that both work.
This is why coding assistants are amazing at executing things when you're clear on what you want to do, but can't help (yet) with big-picture tweaks.
languages are subject to change
hire people who are good at finding information
not someone who is good at blindly memorizing details of a specific instance of a language or system
someone who memorized every single detail of COBOL will be a worse coder than someone who spent that time on abstract reasoning and problem solving
you'll want to double-check everything anyway
this shows a fundamental lack of insight into what it means to be a good developer
it's like someone who thinks they are smarter than everyone else because they spent thousands of hours playing chess
this student who has memorized the full specification of HTML, CSS and JavaScript will be useless if you ask them a question about, let's say, Erlang, and is easily replaced by a book
Your example correct answer to "Write a function that returns the sum of two parameters" is
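The site's sample answer isn't quoted above; for context, a minimal plain-JS answer to that prompt would look something like this (illustrative only, not necessarily the quiz's exact sample answer):

```javascript
// Minimal answer to "write a function that returns the sum of two parameters".
function sum(a, b) {
  return a + b;
}

console.log(sum(2, 3)); // 5
```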
Is the atrophy coming from inside the house?
This worked, but AI wants to rewrite whole files all the time, so it broke. Our designer has fixed the issue now.
Y'all might want to switch from V0 to Claude Code.
I still feel more like the LLMs are the ones who need the handholding.