- Claude (via the HubSpot MCP) paginated over contacts at ~40s and ~150k tokens per 800 contacts (enough to trigger compaction each time); the full run was 120 of these loops, ~80 minutes and ~18M tokens
- Claude + Max was one `max search hubspot --filter` command piped to `sort | uniq -c`, plus one `max search gdrive` query for each result of the first query, also piped to `sort | uniq -c`. The rest of the tokens were spent producing an output from ~20 words and ~20 numbers
(Both of these calculations ignore cached tokens)
It works by schematising the upstream source, keeping the data synchronised locally, and exposing a common query language on top. The longer-term goals are about avoiding API rate limits and escaping the confines of an MCP's query feature set, i.e. token savings on reading the data itself (in many cases, savings can be upwards of thousands of times fewer tokens)
Looking forward to trying this out!