An example: I made a report_polisher skill that cleans up some markdown formatting, checks image links, and then uses pandoc to convert the result to HTML. I asked the tool itself to create the skill, then I just tweaked it.
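A minimal sketch of what a helper script inside such a skill might look like. This is my reconstruction, not the commenter's actual code: the `missing_images`/`polish` names are made up, the link check only covers local paths, and pandoc is invoked only if it's on PATH (its `-s`/`-o` flags are standard).

```python
import re
import shutil
import subprocess
from pathlib import Path

# Matches ![alt](target ...) and captures the target path/URL.
IMAGE_LINK = re.compile(r"!\[[^\]]*\]\(([^)\s]+)[^)]*\)")

def missing_images(md_text: str, base_dir: Path) -> list[str]:
    """Return local image targets that don't exist on disk.
    http(s) URLs are skipped -- checking those is out of scope here."""
    missing = []
    for target in IMAGE_LINK.findall(md_text):
        if target.startswith(("http://", "https://")):
            continue
        if not (base_dir / target).is_file():
            missing.append(target)
    return missing

def polish(md_path: Path) -> None:
    """Report broken image links, then convert to HTML via pandoc if present."""
    broken = missing_images(md_path.read_text(), md_path.parent)
    if broken:
        print("broken image links:", ", ".join(broken))
    if shutil.which("pandoc"):
        subprocess.run(
            ["pandoc", "-s", str(md_path), "-o", str(md_path.with_suffix(".html"))],
            check=True,
        )
```

The point of a skill like this is that the SKILL.md just tells Claude when to run the script; the deterministic work lives in plain code.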
Very much this. All of my skills/subagents are highly tailored to my codebases and workflows, usually by asking Claude Code to write them and resuming the conversation any time I see some behavior I don't like. All the skills I've seen on GitHub are way too generic to be of any use.
Would strongly encourage you to open-source/write blog posts on some concrete examples from your experience to bridge this gap.
`/plugin marketplace add anthropics/skills`
https://github.com/anthropics/skills
2 days ago I built a skill to automate a manual workflow I was using: after Claude writes and commits some code, have Codex review that code, and then have Claude go back and address what Codex finds. I used this process to implement a fairly complete Docusign-like service, and it did a startlingly good job right out of the gate; the bugs were fairly shallow. In my manual review of the Codex findings, it seems to be producing good results.
Claude Code largely built that skill for me.
Implemented as a skill and I've been using it for the last 2 days to implement a "retrospective meeting runner" web app. Having it as a skill completely automates the code->review->rework step.
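The loop described above can be sketched as a small orchestrator. To keep this runnable, the three commands are injectable argv lists rather than hard-coded `claude`/`codex` invocations (whose exact flags I won't guess at); the structure, write once then alternate review/rework until the reviewer comes back clean, is the part being illustrated.

```python
import subprocess

def review_loop(write_cmd, review_cmd, rework_cmd, max_rounds=3):
    """Run the writer once, then alternate reviewer/reworker until the
    reviewer reports nothing (empty stdout) or max_rounds is reached.
    Returns the number of rework rounds that were needed."""
    subprocess.run(write_cmd, check=True)
    for round_no in range(max_rounds):
        findings = subprocess.run(
            review_cmd, capture_output=True, text=True, check=True
        ).stdout.strip()
        if not findings:
            return round_no  # reviewer found nothing; done
        # Hand the findings back to the reworker as an extra argument.
        subprocess.run(rework_cmd + [findings], check=True)
    return max_rounds
```

In the actual skill the commands would be agent invocations and the exit condition would likely be judged by the model rather than by empty output, but the control flow is the same.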
I would encourage you to write up a blog post of your experience and share a version of the skill you have built. And then follow up with a blog post after 3 months with analysis like how well the skill generalized for your daily use, whether you had to make some changes, what didn't work etc. This is the sort of content we need more of here.
3 months ago, Anthropic and Simon claimed that Skills were the next big thing and going to completely change the game. So far, from my exploration, I don't see any good examples out there, nor is there a big, growing, active community of users.
Today, we are talking about Cowork. My prediction is that 3 months from now, there will be yet another new Anthropic positioning, followed up with a detailed blog from Simon, followed by HN discussing possibilities. Rinse and Repeat.
This is something I have experienced firsthand while participating in the Vim/Emacs/ricing communities. The newbie spends hours installing and tuning workflows with the mental justification of long-term savings, only to throw it all away in a few weeks when they see a new, shinier thing. I have been there and done that. For many, many years.
The mature user configures and installs 1 or 2 shiny new things, possibly spending several hours even. Then he goes back to work. 6 months later, he reviews his workflow, decides what has worked well and what hasn't, and looks for the new shiny things in the market. Because you need to use your tools in anger, in the ups and downs, to truly evaluate them in various real scenarios. Scenarios that won't show up until serious use.
My point is that Anthropic is incentivized to continuously move the goalposts. Simon is incentivized to write new blog posts every other day. But none of that is healthy for you and me.
They work so well because they're also baked into the training run of the model. The concept is simple, but training the model to actually use it unlocks the "wow" factor. (Using Claude Code with other models, which weren't trained specifically for this, reveals how big a factor the training is.)
They also highlight this in their docs: you don't need to teach Claude about skills, or how to write new ones. It "knows" what skills are and how to use and edit them.
I love org for all its bells and whistles and use them in various ways. But most of the time I need a small subset of org in a form-factor that allows ease of use.
They were only announced in October and they've already been ported to Codex and Gemini CLI and VS Code agents and ChatGPT itself (albeit still not publicly acknowledged there by OpenAI). They're also used in Cowork and are part of the internals in Fly's new Sprites. They're doing extremely well for an idea that's only three months old!
This particular post on Cowork isn't some of my best work - it was a first impression I posted within a couple of hours of release (I didn't have preview access to Cowork) just to try and explain what the thing was to people who don't have a $100+/month Claude Max subscription.
I don't think it's "unhealthy" for me to post things like this though! Did you see better coverage of Cowork than mine on day one?
I want more blogs/discussion from the community about the existing tools.
In 3/6 months, how many skills have you written? How many times have you used each skill? Did you have to edit skills later due to unseen corner cases, or did they generalize? Are skills being used predominantly at the individual level, or are entire teams/orgs able to use a skill as is? What are the use cases skills are not good at? What are the shortcomings?
(You being the metaphorical HN reader here of course.)
HN has always been a place of greater technical depth than other internet sites and I would like to see more of this sort of thing on the front page along with day one calls.