We need to get market share by going fast!
Then she said: "I know nobody that comments on online forums. Nobody would ever comment to strangers on the internet. It's too dangerous."
Took me a while to grasp what she meant by that, but I think she's right. Trust has eroded so much over the last two decades that most forums are either full of bots or full of annoyed and toxic people. It's very rare to find communities that are welcoming to newbies, and most of the ones I have discovered came through offline connections.
She also mentioned that all of her friends use private profiles only, since a public profile is too dangerous because of stalkers.
To me this sounded a bit absurd at first, but maybe it's a different perception of "how to use" the internet from a younger generation that grew up post-social-media? My first contact with the internet was MIT OpenCourseWare; her first contact was receiving dick pics at the age of 10 from assholes on the other side of the planet.
I miss the old phpBB forum days, when the most toxic comment was someone being snarky and derailing the discussion into "did you use the search function?"
No idea how to fix the internet, maybe it's time to move to gopher or another protocol :-/
Understanding is not just doing. Understanding is being able to build something up from first principles. The author of this post will better understand the difference when he hits a non-trivial bug or the project grows past a certain size.
Claude Code will change your life when you learn how to program with it. However, if you are a programmer without much appetite for automated tests and specs/designs, you are probably not going to be successful with it.
The art of coding has become a commodity. Validation and verification are the new art.
I have found that extra context, comments, and info damage quality on hard problems.
For a long time now I've actually kept two views of my code.
1. The raw code with no empty space or comments. 2. The code with comments.
I never give the second to my LLM. The more context you give, the lower its upper end of quality becomes. This is a habit I've picked up using LLMs every day, hours a day, since GPT-3.5, and it allows me to reach farther into extreme complexity.
I suppose I don't know what most people are using LLMs for, but the higher the complexity of your work, the less noise you should inject into it. It's tempting to add massive amounts of context, but I've routinely found that fails at the higher levels of coding complexity and uniqueness. It was more apparent in earlier models; newer ones will handle tons of context, you just won't be able to get those upper ends of quality.
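The workflow described above, keeping a comment-free view of the code to feed the LLM, could be sketched roughly like this. This is a hypothetical helper (`strip_for_llm` is my name, not the commenter's), and it is deliberately crude: a line-based filter that drops blank lines and full-line `#` comments, ignoring inline comments and `#` inside strings.

```python
def strip_for_llm(source: str) -> str:
    """Drop blank lines and full-line comments before sending code to an LLM.

    Hypothetical sketch of the two-views habit described above: a crude
    line-based filter that does NOT handle inline comments or '#' inside
    string literals.
    """
    kept = []
    for line in source.splitlines():
        stripped = line.strip()
        # skip blank lines and lines that are only a comment
        if not stripped or stripped.startswith("#"):
            continue
        kept.append(line)
    return "\n".join(kept)


code = """\
# compute factorial
def fact(n):

    # base case
    if n <= 1:
        return 1
    return n * fact(n - 1)
"""

print(strip_for_llm(code))
```

For a more robust version you would want a real tokenizer (e.g. Python's `tokenize` module) so that strings containing `#` and inline comments are handled correctly.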
Compute-to-information ratio is all that matters. Compute is capped.
Look at it the way a human does: the comments are there to speed up understanding of the code.
Skills are modular capabilities that extend Claude’s functionality through organized folders containing instructions, scripts, and resources.
And
Extend Claude’s capabilities for your specific workflows
E.g. building your project is definitely a workflow.
It also makes sense to put as much as you can into a skill, as this is an optimized mechanism for Claude Code to retrieve relevant information based on the skill's frontmatter.
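For illustration, a skill is a folder containing a SKILL.md whose YAML frontmatter (`name`, `description`) is what Claude uses to decide when to load the skill. The skill name and build steps below are made up; only the frontmatter shape follows the docs quoted above.

```markdown
---
name: build-project
description: Use when the user asks to build, compile, or package this project.
---

# Building the project

1. Run the build from the repo root.
2. If it fails, check the build log for missing dependencies before retrying.
```

Because only the frontmatter is loaded up front, the body (and any bundled scripts or resources in the folder) costs nothing until the skill is actually triggered, which is why packing detail into the skill body is cheap.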