We added /brand.json and /brand.txt to our website - structured files that define how we sound, which words we use and which we avoid, what colors to use, and where to get the logos. Now AI tools have context instead of guessing.
Feels like this should be standard. Curious what others think.
When we rebranded BrainGrid, I wanted a simple, repeatable way to tell any LLM or coding agent what the brand is, without re-explaining it in prompts every time.
I ended up creating two files:
https://www.braingrid.ai/brand.json
https://www.braingrid.ai/brand.txt
Together, they describe tone, voice, terminology, naming conventions, and visual guidelines in a way that is easy for both humans and LLMs to consume.
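To make the idea concrete, a /brand.json along these lines might look roughly like the sketch below. The field names and values are simplified placeholders for illustration only; the real files at the URLs above are the source of truth.

  {
    "name": "BrainGrid",
    "voice": {
      "tone": ["direct", "technical", "plainspoken"],
      "avoid": ["hype", "buzzwords", "exclamation points"]
    },
    "terminology": {
      "preferred": ["coding agent", "spec"],
      "banned": ["revolutionary", "game-changing"]
    },
    "naming": {
      "product": "BrainGrid",
      "notes": "One word, capital B and G"
    },
    "visual": {
      "colors": { "primary": "#112233", "accent": "#AABBCC" },
      "logoUrl": "https://example.com/logo.svg"
    }
  }

The /brand.txt version carries the same information as plain prose, for tools that work better with unstructured context.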
I tested this by having Claude Code update the branding across our docs site (https://docs.braingrid.ai/). The experience was smooth and required very little back-and-forth; the agent had the context it needed up front.
This made me wonder if we should treat brand context the same way we treat things like README files or API specs.
Would it make sense to standardize something like /brand.json or /brand.txt as a common convention for LLM-assisted development?
Curious if others have run into the same issue, or are solving brand consistency with AI in a different way.
The interesting part of our tooling is the review layer: AI traces each acceptance criterion to specific lines in the PR diff, catching semantic gaps that lint and type-check miss.
We also run browser tests against the live deployment with database-level verification. Happy to answer questions about the tooling or where it breaks down.
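To give a rough picture of what that tracing produces, a single traced criterion might be represented something like the sketch below. The structure and file names are invented purely for illustration, not our actual output format.

  {
    "criterion": "Docs navigation uses the new wordmark and primary color",
    "status": "partially_satisfied",
    "evidence": [
      { "file": "components/Header.tsx", "diffLines": "14-27" },
      { "file": "styles/theme.css", "diffLines": "3-9" }
    ],
    "gaps": ["Footer still links to the old logo asset"]
  }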