Hey HN! We built Lunon to make LLM development way less of a headache. Ever wanted to see how different models handle the same prompt without all the setup hassle? That's what we fixed.
Our API lets you compare Claude, GPT, Mistral, and others in real time with just a few lines of code. No more complex infrastructure or juggling multiple API connections; we handle all that boring stuff behind the scenes.
Plus, you can cut costs by intelligently routing requests to the right model for each task. Use the powerful (expensive) models only when you really need them.
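To make the "same prompt, several models" idea concrete, here's a minimal sketch of what fanning a prompt out to multiple providers looks like. Everything here is hypothetical: the provider functions are stubs standing in for real SDK calls, and none of the names come from Lunon's actual API.

```python
# Hypothetical fan-out: send one prompt to several models and collect
# the responses side by side. Each "ask_*" function is a stub standing
# in for a real provider SDK call (names are made up for illustration).

def ask_claude(prompt: str) -> str:
    return f"claude: {prompt.upper()}"

def ask_gpt(prompt: str) -> str:
    return f"gpt: {prompt.lower()}"

def ask_mistral(prompt: str) -> str:
    return f"mistral: {prompt[::-1]}"

MODELS = {"claude": ask_claude, "gpt": ask_gpt, "mistral": ask_mistral}

def compare(prompt: str) -> dict[str, str]:
    """Send the same prompt to every registered model; return name -> response."""
    return {name: fn(prompt) for name, fn in MODELS.items()}
```

In a real version the stubs would be replaced with authenticated API calls, which is exactly the plumbing a service like this would hide.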
If you're building with LLMs and are tired of the integration headaches, we'd love to hear your feedback!
No docs or info without signing up, confusing Grok and Groq, and claiming access to o4 models that haven't been released all make this look like an incredibly unserious offering.
Consider enhancing your privacy policy to match industry standards, similar to OpenRouter. Focus on addressing significant questions, like how your product stands out from established competitors. Ensure there’s no confusion between Grok and Groq. Also, verify the availability of features like access to o4 models to avoid any misunderstandings.
More constructively, this is the kind of code AI is great at writing in my codebase, and once it's there locally as a library, it's free. It's not clear to me why I'd want it behind an API instead, at least as a solo-ish dev. I'd recommend looking at the similar OpenRouter, which seems to have some traction, and thinking about why users are using them. You might also think about deeper agent or eval stuff you could add that's beyond the scope of the little backend switching lib I could have Claude write. Anyway, good luck and thanks for sharing!
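For a sense of scale, the "little backend switching lib" mentioned above could be as small as this sketch: a heuristic that picks a model per prompt and dispatches to the matching client. The heuristic, model names, and providers are all invented for illustration; real code would call each vendor's SDK.

```python
# A minimal, hypothetical "backend switching lib": choose a model by a
# simple heuristic, then dispatch to the matching provider client.
# Provider calls are stubbed; model names are placeholders, not real IDs.

def pick_model(prompt: str) -> str:
    """Route long or analysis-heavy prompts to a pricey model, the rest to a cheap one."""
    if len(prompt) > 500 or "analyze" in prompt.lower():
        return "big-model"      # placeholder for an expensive model
    return "small-model"        # placeholder for a cheap model

PROVIDERS = {
    "big-model": lambda prompt: f"[big-model] {prompt[:20]}",
    "small-model": lambda prompt: f"[small-model] {prompt[:20]}",
}

def complete(prompt: str) -> str:
    """Pick a model for this prompt and return its (stubbed) response."""
    model = pick_model(prompt)
    return PROVIDERS[model](prompt)
```

The hard parts a hosted service would add on top of this are the ones that don't fit in twenty lines: auth, rate limits, retries, and cross-provider analytics.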
On your point about local code vs. API:
- We handle all the authentication, rate limiting, and API differences between providers, which becomes complex when working with multiple models. Our platform allows smart switching between different models depending on the message context.
- Our switching is optimized for performance. We provide detailed usage analytics and cost management across models so you can optimize which model to use where, and see which of your users may be incurring high costs or misbehaving.
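The per-user, per-model cost accounting described above can be sketched as a small tracker. The per-1K-token prices here are made up for the example; real prices vary by provider and change frequently.

```python
from collections import defaultdict

# Hypothetical per-1K-token prices (illustrative only, not real rates).
PRICE_PER_1K = {"big-model": 0.03, "small-model": 0.0005}

class UsageTracker:
    """Accumulate token usage and estimated cost per (user, model) pair."""

    def __init__(self):
        self.tokens = defaultdict(int)

    def record(self, user: str, model: str, tokens: int) -> None:
        self.tokens[(user, model)] += tokens

    def cost(self, user: str) -> float:
        """Estimated spend for one user, summed across all models they used."""
        return sum(
            n / 1000 * PRICE_PER_1K[model]
            for (u, model), n in self.tokens.items()
            if u == user
        )

tracker = UsageTracker()
tracker.record("alice", "big-model", 2000)
tracker.record("alice", "small-model", 4000)
# alice's estimated spend: 2 * 0.03 + 4 * 0.0005 = 0.062
```

A breakdown like this is what lets you spot a single user (or a single model choice) dominating the bill.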
We're building features that go beyond what's easily replicable locally, including some of the agent and evaluation capabilities you mentioned (rolling out over the next two weeks). OpenRouter is definitely doing great things in this space. We're taking a slightly different approach by focusing on customization within the dashboard, allowing on-the-fly updates without pushing code changes.
Appreciate the feedback and hopefully you get a chance to try out the tool!
Some people may also find the Grok/Groq section confusing.
Will explore some more ideas for the Groq & Grok part too.
A. A pricing link that takes you to something that is NOT pricing. B. Listing Grok and Groq in the same block, as if they have anything to do with one another, is a bad choice.
A few others have also mentioned the Grok & Groq confusion. Will think through some ideas here and update.
Appreciate the feedback!