Claude Code sometimes issues Bash commands for things it could easily do with builtin tools (e.g., shelling out to grep when it has a dedicated Grep tool). A hook that catches those and nudges the agent back — "you already have a tool for this" — could improve session quality without blocking anything.
I suspect there's a lot of overlap with what you've built: parse the command into tokens, run it against rules, decide. The difference is the output is "redirect" instead of "deny." Have you thought about non-blocking rules that warn or suggest rather than reject?
TBH, I am very hesitant to upload my CC logs to a third-party service.
The system is broken. We shouldn't be so vulnerable because of foundational infrastructure.
AI is more iPhone than ATM IMO.
What are the sharp edges if someone wanted to do this themselves? I assume there are official sources for this data but it's not trivial to "throw my agent at it".
Every time I try to seriously track metrics of my life, the excitement of the insight gets worn away by the friction of recording and managing. I expect LLMs can cut that cost by an order of magnitude, but then, as you mention, the question is: what do you actually do, change, or learn because of the data?
I recently started tracking nutrition macros with MacroFactor, an iOS app I really like. For the first time, stepping on the scale doesn't feel like an IDK-shrug moment, and I can actually map my food intake to my weight.
Finances are probably the other highly actionable data source, but they're such high friction to manage (downloading CSVs and OFX files, every month...) that they've always been a false start for me. I finally wrote a service to talk to Plaid directly, and I successfully used it to categorize my business expenses at tax time. I finally have programmatic access to my bank account data!
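The shape of such a service is roughly: pull raw transactions from Plaid's `/transactions/get` endpoint, then run them through your own categorizer. A sketch below, assuming the sandbox host and stdlib HTTP only; the keyword rules and function names are made up for illustration, and real credentials come from your Plaid dashboard.

```python
import json
import urllib.request

PLAID_HOST = "https://sandbox.plaid.com"  # Plaid's sandbox environment

def fetch_transactions(client_id, secret, access_token, start_date, end_date):
    """Pull raw transactions from Plaid's /transactions/get endpoint."""
    body = json.dumps({
        "client_id": client_id,
        "secret": secret,
        "access_token": access_token,
        "start_date": start_date,  # "YYYY-MM-DD"
        "end_date": end_date,
    }).encode()
    req = urllib.request.Request(
        f"{PLAID_HOST}/transactions/get",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["transactions"]

# Hypothetical keyword -> category rules for business expenses.
CATEGORY_RULES = {
    "aws": "cloud",
    "github": "software",
    "united": "travel",
}

def categorize(txn_name: str) -> str:
    """Match a transaction description against the keyword rules."""
    name = txn_name.lower()
    for keyword, category in CATEGORY_RULES.items():
        if keyword in name:
            return category
    return "uncategorized"
```

Once the fetch works, categorization is the easy part; the win is never touching a CSV export again.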
Your conclusion is definitely a cautionary take:

> the main conclusion is that it is not worth building your own solution, and investing this much time.
But perhaps you'll find a subset of that data useful.
It might not have been faster this year... but I expect it will be next year.