AMeckes commented on Show HN: mcpd – manage MCP servers with a single config file   github.com/mozilla-ai/mcp... · Posted by u/AMeckes
AMeckes · 5 days ago
We built mcpd to make MCP servers easier to work with. It's a daemon that runs MCP servers as subprocesses and exposes a unified HTTP API. Write your config once in .mcpd.toml and it works everywhere. Let us know what you think!
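Rough sketch of what talking to the daemon looks like from an app. The port and endpoint paths below are assumptions for illustration, not mcpd's documented API; check the mcpd docs for the real surface:

    # Hypothetical client-side sketch: every MCP server mcpd manages as a
    # subprocess is reachable through one unified HTTP surface.
    # Port and paths are assumed for illustration only.
    import requests

    MCPD = "http://localhost:8090"  # assumed default address

    # List the managed servers (assumed endpoint).
    print(requests.get(f"{MCPD}/api/v1/servers").json())

    # Call a tool on one of them through the same API (assumed endpoint).
    resp = requests.post(
        f"{MCPD}/api/v1/servers/fetch/tools/fetch",
        json={"url": "https://example.com"},
    )
    print(resp.json())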
AMeckes commented on Show HN: Any-LLM – Lightweight router to access any LLM Provider   github.com/mozilla-ai/any... · Posted by u/AMeckes
swyx · a month ago
> LiteLLM: While popular, it reimplements provider interfaces rather than leveraging official SDKs, which can lead to compatibility issues and unexpected behavior modifications

with no vested interest in litellm, i'll challenge you on this one. what compatibility issues have come up? (i expect text to have the least, and probably voice etc have more but for text i've had no issues)

you -want- to reimplement interfaces because you have to normalize APIs. in fact without looking at any-llm code deeply i question how you do ANY router without reimplementing interfaces. that's basically the whole job of the router.

AMeckes · a month ago
Both approaches work well for standard text completion. Issues tend to be around edge cases like streaming behavior, timeout handling, or new features rolling out.

You're absolutely right that any router reimplements interfaces for normalization. The difference is the layer we reimplement at: we use official SDKs where available for HTTP, auth, and retries, and reimplement only the normalization on top.

Bottom line is we both reimplement interfaces, just at different layers. Our bet on SDKs is mostly about maintenance preferences, not some fundamental flaw in LiteLLM's approach.
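A minimal sketch of that layering (not any-llm's actual internals, just the idea, using the official openai SDK):

    # The official SDK owns HTTP, auth, and retries; the router only
    # reimplements the normalization step on top of it.
    from dataclasses import dataclass

    from openai import OpenAI  # official SDK handles transport/auth/retries

    @dataclass
    class NormalizedResponse:
        text: str
        model: str

    def complete(prompt: str, model: str = "gpt-4o-mini") -> NormalizedResponse:
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        # Normalization: map the provider-specific shape to a common one.
        return NormalizedResponse(
            text=resp.choices[0].message.content,
            model=resp.model,
        )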

AMeckes commented on Show HN: Any-LLM – Lightweight router to access any LLM Provider   github.com/mozilla-ai/any... · Posted by u/AMeckes
t_minus_100 · a month ago
https://xkcd.com/927/. LiteLLM rocks!
AMeckes · a month ago
I didn't even need to click the link to know what this comic was. LiteLLM is great, we just needed something slightly different for our use case.
AMeckes commented on Show HN: Any-LLM – Lightweight router to access any LLM Provider   github.com/mozilla-ai/any... · Posted by u/AMeckes
klntsky · a month ago
Anything like this, but in TypeScript?
AMeckes · a month ago
Python only for now. Most providers have official TypeScript SDKs though, so the same approach (wrapping official SDKs) would work well in TS too.
AMeckes commented on Show HN: Any-LLM – Lightweight router to access any LLM Provider   github.com/mozilla-ai/any... · Posted by u/AMeckes
chuckhend · a month ago
LiteLLM is quite battle tested at this point as well.

> it reimplements provider interfaces rather than leveraging official SDKs, which can lead to compatibility issues and unexpected behavior modifications

Leveraging official SDKs also does not solve compatibility issues. any_llm would still need to maintain compatibility with those official SDKs. I don't think one way is clearly better than the other here.

AMeckes · a month ago
That's true. We traded API compatibility work for SDK compatibility work. Our bet is that providers are better at maintaining their own SDKs than we are at reimplementing their APIs. SDKs break less often and more predictably than APIs, plus we get provider-implemented features (retries, auth refresh, etc.) "for free." Not zero maintenance, but definitely less. We use this in production at Mozilla.ai, so it'll stay actively maintained.
AMeckes commented on Show HN: Any-LLM – Lightweight router to access any LLM Provider   github.com/mozilla-ai/any... · Posted by u/AMeckes
honorable_coder · a month ago
How do I put this behind a proxy? You mean run the module as a containerized service?

But provider switching is built into some of these - and the folks behind Envoy built https://github.com/katanemo/archgw - developers can use an OpenAI client to call any model, it offers preference-aligned intelligent routing to LLMs based on usage scenarios that developers can define, and it acts as an edge proxy too.

AMeckes · a month ago
To clarify: any-llm is just a Python library you import, not a service to run. When I said "put it behind a proxy," I meant your app (which imports any-llm) can run behind a normal proxy setup.

You're right that archgw handles routing at the infrastructure level, which is perfect for centralized control. any-llm simply gives you the option to handle routing in your application code when that makes sense (for example, premium users get Opus-4). We leave the architectural choice to you, whether that's adding a proxy, keeping routing in your app, using both, or just using any-llm directly.
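Here's roughly what that looks like in application code (model identifiers are illustrative):

    # Routing decision lives in your app, not in infrastructure.
    from any_llm import completion

    def ask(prompt: str, premium: bool) -> str:
        # e.g. premium users get the larger model (names illustrative).
        model = "anthropic/claude-opus-4" if premium else "openai/gpt-4o-mini"
        response = completion(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content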

AMeckes commented on Show HN: Any-LLM – Lightweight router to access any LLM Provider   github.com/mozilla-ai/any... · Posted by u/AMeckes
honorable_coder · a month ago
a proxy means you offload observability, filtering, caching rules, and global rate limiters to a specialized piece of software - pushing this into application code means you _cannot_ do things centrally, and it doesn't scale as more copies of your application code get deployed. You can bounce a single proxy server neatly vs. updating a fleet of your application servers just to monkey patch some proxy functionality.
AMeckes · a month ago
Good points! any-llm handles the LLM routing, but you can still put it behind your own proxy for centralized control. We just don't force that architectural decision on you. Think of it as composable: use any-llm for provider switching, add nginx/envoy/whatever for rate limiting if you need it.
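Sketch of that composition (the proxy address is illustrative, and whether a given provider SDK honors these proxy env vars is something to verify per provider; most httpx/requests-based SDKs do):

    import os

    from any_llm import completion

    # Send SDK egress through the shared proxy that owns rate limiting,
    # caching, and observability (address is illustrative).
    os.environ["HTTPS_PROXY"] = "http://llm-egress.internal:3128"

    response = completion(
        model="openai/gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello through the proxy."}],
    )
    print(response.choices[0].message.content)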
