Readit News
alexellman commented on Track Your Cursor and Claude Code Usage – Open-Source, Live Global Stats   pricepertoken.com/coding-... · Posted by u/alexellman
alexellman · 4 months ago
Cursor has gotten more expensive, and Claude Code is about to impose new rate limits. To help you keep track of your spending and usage, I built an open-source tracker that covers both together.

It works in your command line and updates a live dashboard on my website (optional). I also aggregate everyone's token usage and break down the models people are using.

The repo is here: https://github.com/ellmanalex/pricepertoken-ai-coding-tracke...

alexellman commented on Show HN: Price Per Token – LLM API Pricing Data   pricepertoken.com/... · Posted by u/alexellman
GaggiX · 5 months ago
The input price is wrong, though.

Your website reports $0.30 for input, and that wouldn't make any sense, as it would be priced the same as the bigger Flash model.

alexellman · 5 months ago
Ok, yeah, fixed that one, sorry...
alexellman commented on Show HN: Price Per Token – LLM API Pricing Data   pricepertoken.com/... · Posted by u/alexellman
iambateman · 5 months ago
This is cool! Two requests:

- Filter by model "power" or price class. I want to compare the mini models, the medium models, etc.

- I'd like to see a "blended" cost which does 80% input + 20% output, so I can quickly compare the overall cost.

Great work on this!
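The blended metric requested above is just a weighted average of the two per-token prices. A minimal sketch of the 80/20 weighting; the prices below are made up for illustration and are not from the site:

```python
def blended_cost(input_price: float, output_price: float,
                 input_share: float = 0.8) -> float:
    """Weighted average of per-million-token prices.

    input_share is the assumed fraction of traffic that is input
    tokens; the request above uses 80% input / 20% output.
    """
    return input_price * input_share + output_price * (1 - input_share)

# Hypothetical prices in $ per million tokens, for illustration only.
blended = blended_cost(0.40, 1.60)   # 0.8 * 0.40 + 0.2 * 1.60
print(f"${blended:.2f}/mtok blended")
```

Different workloads have very different input/output ratios, so the 80/20 split is only a default; it would be worth exposing as a parameter or slider.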

alexellman · 5 months ago
thanks for the feedback!
alexellman commented on Show HN: Price Per Token – LLM API Pricing Data   pricepertoken.com/... · Posted by u/alexellman
peterspath · 5 months ago
I am missing Grok
alexellman · 5 months ago
added
alexellman commented on Show HN: Price Per Token – LLM API Pricing Data   pricepertoken.com/... · Posted by u/alexellman
Fanofilm · 5 months ago
They should add grok. I use grok.
alexellman · 5 months ago
I just added grok
alexellman commented on Show HN: Price Per Token – LLM API Pricing Data   pricepertoken.com/... · Posted by u/alexellman
sophia01 · 5 months ago
But the data is... wrong? Google Gemini 2.5 Flash-Lite costs $0.10/mtok input [1] but is shown here as $0.40/mtok?

[1] https://ai.google.dev/gemini-api/docs/pricing#gemini-2.5-fla...

alexellman · 5 months ago
The data is not wrong, you are reading my table wrong.

edit: my bad, I was wrong and shouldn't have responded like this

alexellman commented on Show HN: Price Per Token – LLM API Pricing Data   pricepertoken.com/... · Posted by u/alexellman
dust42 · 5 months ago
As user murshudoff mentioned elsewhere in the discussion, OpenRouter has an endpoint to get the prices. It takes a minute to get them.
alexellman · 5 months ago
Then use OpenRouter, totally fine by me. I thought a dedicated website just for this would be useful.
alexellman commented on Show HN: Price Per Token – LLM API Pricing Data   pricepertoken.com/... · Posted by u/alexellman
awongh · 5 months ago
This is great, but as others have mentioned the UX problem is more complicated than this:

- for other models there are providers that serve the same model with different prices

- each provider optimizes for different parameters: speed, cost, etc.

- the same model can still be different quantizations

- some providers offer batch pricing (the Grok API, for example, does not)

And there are plenty of other parameters to filter on: thinking vs. non-thinking, multi-modal or not, etc., not to mention benchmark rankings.

https://artificialanalysis.ai gives a blended cost number, which helps with sorting a bit, but a blended model for input/output costs is going to change depending on what you're doing.

I'm still holding my breath for a site that has a really nice comparison UI.

Someone please build it!

alexellman · 5 months ago
Would a column for "provider", meaning the place you are actually making the call to, solve this?
alexellman commented on Show HN: Price Per Token – LLM API Pricing Data   pricepertoken.com/... · Posted by u/alexellman
pierre · 5 months ago
The main issue is that tokens are not equivalent across providers/models, with huge disparities within a single provider beyond the tokenizer model:

- An image will take 10x the tokens on gpt-4o-mini vs. gpt-4.

- On Gemini 2.5 Pro, output tokens are just tokens, except when you use structured output; then every character is counted as a token for billing.

- ...

Having the price per token is nice, but what is really needed is to know how much a given query/answer will cost you, as not all tokens are equal.

alexellman · 5 months ago
Yeah, I am going to add an experiment that runs every day, and its cost will be a column on the table. It will be something like "summarize this article in 200 words", where every model gets the same prompt + article.
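The planned experiment boils down to pricing the same prompt's token usage under each model's rates. A rough sketch, with hypothetical model names, prices, and token counts (none of these numbers come from the site):

```python
# $ per million tokens as (input, output) -- placeholder values.
PRICES = {
    "model-a": (0.40, 1.60),
    "model-b": (3.00, 15.00),
}

def query_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at a model's per-mtok rates."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Same prompt + article for every model: say ~1,500 input tokens
# and a ~260-token summary.
for model in PRICES:
    print(model, f"${query_cost(model, 1500, 260):.6f}")
```

As pierre points out above, tokenizers differ across models, so in practice the input/output token counts would have to come from each provider's usage report rather than a single shared count.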

u/alexellman

Karma: 163 · Cake day: July 20, 2020