I don't see you justify this with an explanation of the ROI anywhere.
To be reductionist, it seems the claimed product value is "better RAG for code."
The difficulties with RAG are at least:
1. Chunking: how large should a chunk be, and how are its beginning and end determined?
2. Given the above quote, how many RAG results are put into the context? It seems that the API caller makes this decision, but how?
I'm curious about your approach and how you evaluated it.
No manual chunking. We index with multiple strategies (hierarchical document structure, symbol boundaries, semantic splitting) so the agent can jump into the right part without guessing chunk edges.
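As a rough illustration of the symbol-boundary idea (a minimal sketch, not Nia's implementation), here is what splitting a Python file at top-level function and class definitions can look like, instead of cutting at fixed character windows:

```python
import ast
import textwrap

def symbol_chunks(source: str) -> list[dict]:
    """Split Python source at top-level symbol boundaries (functions/classes)
    so no chunk cuts a definition in half."""
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            start = node.lineno - 1      # ast line numbers are 1-based
            end = node.end_lineno        # inclusive end line of the definition
            chunks.append({
                "symbol": node.name,
                "text": "\n".join(lines[start:end]),
            })
    return chunks

# Tiny usage example with a toy source file.
example = textwrap.dedent("""
    def add(a, b):
        return a + b

    class Greeter:
        def hello(self, name):
            return f"hi {name}"
""")

for chunk in symbol_chunks(example):
    print(chunk["symbol"], "->", len(chunk["text"]), "chars")
```

A real indexer layers other strategies on top (document hierarchy, semantic splitting), but the principle is the same: chunk edges follow the structure of the content rather than a byte count.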
Context is selective. The agent retrieves minimal snippets and can fetch more iteratively as it reasons, rather than preloading large chunks. We benchmark this with exact-match evaluations on real agent tasks, measuring correctness, reduced hallucination, and fewer round trips.
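For illustration only, a minimal sketch of selective, iterative retrieval plus an exact-match scorer; `search_index` and `enough_context` are hypothetical placeholders for the index lookup and the agent's sufficiency check, not Nia's API:

```python
from typing import Callable

def exact_match(predictions: list[str], references: list[str]) -> float:
    """Fraction of answers that match the reference exactly; crude, but hard to game."""
    assert len(predictions) == len(references)
    return sum(p.strip() == r.strip() for p, r in zip(predictions, references)) / max(len(references), 1)

def retrieve_iteratively(
    query: str,
    search_index: Callable[[str, int], list[str]],   # hypothetical: (query, top_k) -> snippets
    enough_context: Callable[[list[str]], bool],     # hypothetical: agent's sufficiency check
    step: int = 3,
    max_snippets: int = 12,
) -> list[str]:
    """Fetch a few snippets at a time and stop as soon as the context is judged
    sufficient, instead of preloading one large chunk up front."""
    snippets: list[str] = []
    while len(snippets) < max_snippets:
        batch = search_index(query, len(snippets) + step)  # widen top-k a little each round
        new = [s for s in batch if s not in snippets]
        if not new:
            break                      # index has nothing further to offer
        snippets.extend(new)
        if enough_context(snippets):
            break                      # agent decides it can answer now
    return snippets
```

The point of the loop is the trade-off the question raises: rather than the API caller fixing a chunk count up front, the agent keeps pulling small batches until it judges the context sufficient, which is what the round-trip and correctness evals measure.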
Select your coding agent: Cursor. Installation method: Local or Remote. Local runs on your machine, is more stable, and requires Python & pipx.
Create API Key ("test"). Create Organization: required to create API keys.
I can't create an API key. The Create button is greyed out and can't be pressed.
How does Nia handle project-specific patterns? Like if I always use a certain folder structure or naming convention, does it learn that?