I'd prefer it just specify a number of tokens rather than be variable on demand - I see that lets them be more generous during low periods, but the opacity of it all sucks. I have 5-minute time-of-use pricing on my electricity and can look up the current rate on my phone in an instant - why not simply provide an API to look up the current "demand factor" for Claude (along with the rules for how the demand factor can change - min and max values, for example) and let it be fully transparent?
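Even something this simple would do (a sketch only - the endpoint, fields, and bounds are all hypothetical, just to show the shape):

```python
import requests

# Hypothetical endpoint and response shape - no such API exists today.
resp = requests.get("https://api.anthropic.com/v1/usage/demand_factor")
resp.raise_for_status()
data = resp.json()
# e.g. {"demand_factor": 1.25, "min": 0.5, "max": 2.0, "updated_at": "..."}

print(f"Current demand factor: {data['demand_factor']} "
      f"(allowed range [{data['min']}, {data['max']}])")
```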
But I think you might be picturing ADRs as heavier than they need to be. Most changes seem like routine updates ("We decided Postgres" → "We're migrating to Aurora") that go through normal code review. The big architectural shifts that would trigger Security/Compliance review probably should get that review anyway, ADR or not.
The key insight is that ADRs document decisions; they don't create them. If a change is big enough to need committee review, it likely needs that scrutiny regardless of whether there's documentation.
The alternative, undocumented drift, often creates much longer delays when people rediscover decisions during incidents. Curious to hear from folks who've actually implemented this at scale, though.
What’s still missing is structured error semantics. Right now, when a tool explodes, the LLM only sees a raw stack trace, so it can’t decide whether to retry or bail. Today I saw another Show HN post on MCP here: https://news.ycombinator.com/item?id=44778968 which converts exceptions into a small JSON envelope (error, details, isRetryable, etc.) that agents can interpret. Dropping that into mcp-use’s client loop would give you observability + smarter retries for almost free.
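The core of it is tiny (a minimal sketch in Python; the field names mirror that post, but the retry classification is my own placeholder):

```python
import json
import traceback

# Placeholder policy: which exceptions count as retryable is app-specific.
RETRYABLE = (TimeoutError, ConnectionError)

def to_error_envelope(exc: Exception) -> str:
    """Turn a raw exception into a small JSON envelope an agent can act on."""
    return json.dumps({
        "error": type(exc).__name__,
        "details": str(exc),
        "isRetryable": isinstance(exc, RETRYABLE),
        # Keep the trace for humans/observability, not the model's decision.
        "trace": traceback.format_exception_only(type(exc), exc),
    })
```

An agent loop can then branch on `isRetryable` instead of pattern-matching stack traces.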
I might be missing some important nuances here between client-side and server-side MCP error handling, which I’ll leave to the authors to discuss. Just thought there might be a good open source collab opportunity here.
What prompted this post was trying to rewrite the initial parse logic for my project PdfPig[0]. I had originally ported the Java PDFBox code but felt like it should be 'simple' to rewrite it more performantly. The new logic falls back to a brute-force scan of the entire file if even a single xref table or stream is missed, and the recovery path just relies on the offsets found by that scan (sketched below).
However it is considerably slower than the code it replaces, and it's hard to have confidence in the changes. I'm currently running through a 10,000-file test set trying to identify edge cases.
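For anyone curious, the brute-force pass is conceptually just this (a Python sketch of the idea, not the actual C# in the PR):

```python
import re

def rebuild_offsets(data: bytes) -> dict:
    """Scan the whole file for 'N G obj' headers and map
    (object number, generation) to byte offset."""
    offsets = {}
    for m in re.finditer(rb"(\d+)\s+(\d+)\s+obj\b", data):
        num, gen = int(m.group(1)), int(m.group(2))
        # Later duplicates win: incremental updates append newer versions.
        offsets[(num, gen)] = m.start()
    return offsets
```

A real implementation also has to avoid false positives inside streams and strings, which is part of why this path is slower than trusting the xref offsets.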
The 10k-file test set sounds great for confidence-building. Are the failures clustering around certain producer apps like Word, InDesign, scanners, etc.? Or is it just long-tail randomness?
Reading the PR, I like the recovery-first mindset. If the common real-world case is that offsets lie, treating salvage as the default is arguably the most spec-conformant thing you can do. Slow-and-correct beats fast-and-brittle for PDFs any day.
"That’s it. That’s the Advice Process in its entirety." (speak to everyone involved).
Presumably anyone with the term Managing as a prefix in their job title is expected to glaze over at roughly this point.
Then we get to the meat: "The four supporting Elements". So I try to find out about ADRs:
I follow the first link:
https://www.thoughtworks.com/radar/techniques/lightweight-ar...
and eventually end up with this beauty (big download button at the bottom of the page linked above):
https://www.thoughtworks.com/content/dam/thoughtworks/docume...
Would you mind pointing us at an actual definition of ADRs, please?
There Harmel-Law defines ADRs as "lightweight documents, frequently stored in source code repositories alongside the artefacts they describe." He also provides a handy "Elements of an ADR" table. Let me know if you're still having problems finding it.
Here’s a link to Harmel-Law’s post if anyone's interested: https://martinfowler.com/articles/scaling-architecture-conve...
What you wrote is great. Can I copy/paste it into the blog post? (Crediting you, of course.)