> I add to my team’s CLAUDE.md multiple times a week.
How big is that file now? How big is too big?

I am currently working on a new slash command, /investigate <service>, that runs triage for an active or past incident. I've had Claude write tools to interact with all of our partner services (AWS, JIRA, CI/CD pipelines, GitLab, Datadog), so when an incident occurs it can quickly put together an early analysis: the right people to involve (not just owners, but the people who last touched the service) and potential root causes, including investigation of service dependencies.
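For anyone curious, a custom slash command in Claude Code is just a markdown file under .claude/commands/. A stripped-down, hypothetical sketch of what /investigate could look like (the frontmatter fields are real Claude Code options, but the body, tool list, and workflow steps here are illustrative, not our actual command):

```markdown
---
description: First-pass triage for an active or past incident affecting a service
argument-hint: <service>
allowed-tools: Bash, Read, Grep
---

Run incident triage for the service: $ARGUMENTS

1. Pull recent deploys and pipeline runs for the service (GitLab / CI/CD tools).
2. Check Datadog monitors and recent alerts for the service and its upstream/downstream dependencies.
3. Identify people to involve: service owners plus recent committers.
4. Draft an early analysis: timeline, likely root causes, open questions, and a JIRA-ready summary.
```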
I am putting this through its paces now, but early results are VERY good!
Often the challenge is that users aren't interacting with Claude Code about their rules file. If Claude Code doesn't seem to be working with you, ask it why it ignored a rule. Oftentimes it provides very useful feedback on how to adjust the rules so it no longer violates them.
Another piece of advice I can give is to clear your context window often! Early on I was letting the context window auto-compact, but this is bad! The model is at its freshest and "smartest" when it has a fresh context window.
Now, SKILL.md can have references to more fine-grained behaviors or capabilities of the skill. My skills generally tend to have references/{workflows,tools,standards,testing-guide,routing,api-integration}.md. These references are what then gets "progressively loaded" into context.
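For illustration, the layout I'm describing looks roughly like this (the directory and file names are just my convention; the skill format itself only mandates SKILL.md):

```
wireframe-skill/
  SKILL.md                    # name + description frontmatter, high-level instructions
  references/
    workflows.md              # step-by-step procedures
    tools.md                  # scripts/CLIs the skill can call
    standards.md              # style and review rules
    testing-guide.md
    routing.md
    api-integration.md        # endpoints, auth, response shapes
```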
Say I asked claude to use the wireframe-skill to create a profileView mockup. While creating the wireframe, claude will need to figure out which API endpoints are available/relevant for the profileView, the response types, etc. It's at this point that claude reads the references/api-integration.md file from the wireframe skill.
After a while I found I didn't like the progressive loading, so I usually direct claude to load all references in the skill before proceeding. This usually takes up maybe 20k to 30k tokens, but the accuracy and precision (imagined or otherwise, ha!) is worth it for my use cases.
You shouldn't do this; it's generally considered bad practice.
You should be optimizing your skill description. Oftentimes, if I am working with Claude Code and it doesn't load a skill, I ask it why it missed the skill. It will guide me toward improving the skill description so that it is picked up properly next time.
This iteration on skill descriptions has kept skills out of context until they are needed, and it has worked rather predictably for me so far.
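As a made-up example of the kind of change that iteration produces (the wording is hypothetical, but the pattern is to name the task, the inputs, and the trigger phrases in the description frontmatter):

```yaml
# Before: vague, so the skill rarely gets picked up
description: Helps with wireframes.
---
# After: explicit about what the skill does and when to use it
description: >-
  Create wireframe mockups for app views (e.g. profileView), including relevant
  API endpoints and response types. Use when asked for a mockup, wireframe, or
  screen layout.
```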
So what do we care about? If you care about being untrackable, then you have a couple of options: rotate VPNs, or cycle your public-facing IP often. Additionally, every request you make MUST change up the request headers; you could cycle between 50 different sets of headers. Combine these two and you will likely be very hard to fingerprint.
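A minimal sketch of the header-cycling part in Python, assuming the requests library (the header sets and URL are placeholders; VPN/IP rotation would happen outside this code):

```python
import random

import requests

# A small pool of internally consistent header sets; in practice you'd build
# out ~50 of these with varied User-Agent, Accept-Language, Accept, etc.
HEADER_SETS = [
    {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:130.0) Gecko/20100101 Firefox/130.0",
        "Accept-Language": "en-US,en;q=0.9",
    },
    {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 14_5) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15",
        "Accept-Language": "en-GB,en;q=0.8",
    },
]


def fetch(url: str) -> requests.Response:
    # Use a different header set for every request so no single header
    # fingerprint accumulates across your traffic.
    headers = random.choice(HEADER_SETS)
    return requests.get(url, headers=headers, timeout=10)


if __name__ == "__main__":
    print(fetch("https://example.com").status_code)
```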
If you only care about not being identified, use Tor + the Tor Browser, which makes A LOT of traffic look identical.
One thing I can't get a good answer to is whether the "prewash" step is universal or not. I have a good Bosch dishwasher and there's no compartment for a bit of pre-wash detergent. I don't even know whether my dishwasher's cycle has a pre-wash step. I would assume the dishwasher manufacturer knows what's best.
The owner's manual gives advice about not pre-rinsing the dishes because the food bits actually help the wash cycle, so I'm wondering if it works differently from the two-step process in this video.
The manual is likely referring to not hand-rinsing dishes before loading them, which was very common 30 or 40 years ago. I had to train my mother to stop doing that.
JWP Connatix is the most comprehensive independent video technology and monetization platform, helping broadcasters, publishers, and advertisers deliver premium streaming and online video experiences while maximizing video revenue across all screens. The company offers an end-to-end platform that streamlines live and on-demand video with hybrid monetization models, unique data and insights, unmatched customer service, and the largest independent premium video marketplace, providing the entire media ecosystem with enhanced scale, transparency, and revenue.
We are looking for a skilled and adaptable AI Engineer to join our AI Proof of Concepts team at JWP Connatix. You'll be responsible for implementing AI-First development methodologies, integrating sophisticated AI tools into our software pipeline, and rapidly building MVP prototypes that demonstrate innovative solutions. This role offers the opportunity to work at the cutting edge of AI-integrated development while delivering high-impact prototypes in a fast-paced, iterative environment. The ideal candidate thrives in rapid prototyping environments, has hands-on experience with AI tool integration, and enjoys the challenge of quickly turning concepts into working demonstrations for stakeholder validation. Candidates should also know and work with AI code generation tools (Claude Code, Cursor, Copilot, etc.).
If this sounds like something you'd be interested in, please apply!
Hosting your services on AWS while also hosting your status page on AWS, so that it goes down during an AWS outage too, is an easily avoidable problem.