tripplyons · a month ago
For those who aren't aware, OpenAI has a very similar batch mode (50% discount if you wait up to 24 hours): https://platform.openai.com/docs/api-reference/batch

It's nice to see competition in this space. AI is getting cheaper and cheaper!
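
The flow is simple: upload a .jsonl of requests, then create a batch job against it. A minimal sketch with the Python SDK (file and model names are just examples):

```python
from openai import OpenAI

client = OpenAI()

# batch.jsonl holds one request per line, e.g.:
# {"custom_id": "req-1", "method": "POST", "url": "/v1/chat/completions",
#  "body": {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hi"}]}}
batch_file = client.files.create(file=open("batch.jsonl", "rb"), purpose="batch")

job = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",  # the only window offered; results often land much sooner
)
print(job.id, job.status)
```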

bayesianbot · a month ago
DeepSeek has gone a slightly different route: they give an automatic 75% discount between UTC 16:30 and 00:30.

https://api-docs.deepseek.com/quick_start/pricing

fantispug · a month ago
Yes, this seems to be a common capability: Anthropic and Mistral have something very similar, as do resellers like AWS Bedrock.

I guess it lets them better utilise their hardware in quiet times throughout the day. It's interesting they all picked a 50% discount.

calaphos · a month ago
Inference throughput scales really well with larger batch sizes (at the cost of latency) due to rising arithmetic intensity and the fact that decoding is almost always memory-bandwidth limited.
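
A back-of-the-envelope illustration (made-up but plausible numbers): at batch size 1 a decode step streams every weight from memory to produce a single token, while the same pass can serve a whole batch:

```python
# Toy roofline model for decode throughput (illustrative numbers only;
# ignores KV-cache traffic and compute limits at large batch sizes).
params = 70e9        # model parameters
bytes_per_param = 2  # fp16/bf16 weights
mem_bw = 3.0e12      # accelerator memory bandwidth in bytes/s

weight_bytes = params * bytes_per_param  # streamed once per decode step

for batch in (1, 8, 64):
    # One pass over the weights serves every sequence in the batch,
    # so total throughput grows ~linearly until another limit kicks in.
    steps_per_s = mem_bw / weight_bytes
    print(f"batch={batch:3d}: ~{steps_per_s * batch:,.0f} tokens/s total")
```
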
qrian · a month ago
Bedrock has a batch mode, but only for Claude 3.5, which is about a year old, so it isn't very useful.
briangriffinfan · a month ago
50% is my personal threshold for a discount going from not worth it to worth it.
laborcontract · a month ago
One open secret is that batch mode generations often take much less than 24 hours. I've done a lot of generations where I get my results within 5ish minutes.
ridgewell · a month ago
To my understanding, it can depend a lot on the shape of your batch. A small batch job can be scheduled much more quickly than a large batch job that has to wait for just the right moment when capacity fits.
dlvhdr · a month ago
The latest price increases beg to differ
dist-epoch · a month ago
Only because Flash was mispriced to start with. It was set too cheap compared with its capabilities. They didn't raise the price of Pro.
dmos62 · a month ago
What price increases?
lopuhin · a month ago
I find OpenAI's new flex processing more attractive: it has the same 50% discount but uses the same API as regular chat mode, so you can still do things the Batch API can't handle (e.g. evaluating agents). In practice I found it to work well enough when paired with client-side request caching: https://platform.openai.com/docs/guides/flex-processing?api-...
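
Switching to flex is just a request parameter; here's a rough sketch of pairing it with a naive disk cache (the shelve cache is a stand-in for whatever caching layer you actually use):

```python
import hashlib, json, shelve
from openai import OpenAI

client = OpenAI()

def cached_completion(cache, **kwargs):
    """Same chat API as usual, just service_tier='flex' plus a disk cache."""
    key = hashlib.sha256(json.dumps(kwargs, sort_keys=True).encode()).hexdigest()
    if key not in cache:
        resp = client.chat.completions.create(service_tier="flex", **kwargs)
        cache[key] = resp.model_dump()  # store a plain dict, easy to pickle
    return cache[key]

with shelve.open("llm_cache") as cache:
    out = cached_completion(
        cache,
        model="o4-mini",  # flex is only offered on certain models
        messages=[{"role": "user", "content": "Evaluate this agent step..."}],
    )
```
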
irthomasthomas · a month ago
It's nice that they stack the batch pricing and caching discounts. I asked the Google guy if they did the same but got no reply, so probably not.

Edit: Anthropic also stacks the batching and caching discounts.

dsjoerg · a month ago
We used the previous version of this batch mode, which went through BigQuery. It didn't work well for us at the time because we were in development mode and we needed faster cycle time to iterate and learn. Sometimes the response would come back much faster than 24 hours, but sometimes not. There was no visibility offered into what response time you would get; just submit and wait.

You have to be pretty darn sure that your job is going to do exactly what you want to be able to wait 24 hours for a response. It's like going back to the punched-card era. If I could get even 1% of the batch in a quicker response and then the rest more slowly, that would have made a big difference.

Jensson · a month ago
> If I could get even 1% of the batch in a quicker response and then the rest more slowly, that would have made a big difference.

You can do this, just send 1% using the regular API.
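
Something along these lines (run_sync and submit_batch are placeholders for your real API calls):

```python
def probe_then_batch(requests, run_sync, submit_batch, probe_frac=0.01):
    """Run ~1% of requests synchronously for fast feedback, batch the rest.

    run_sync and submit_batch are placeholders for your real API calls.
    """
    n = max(1, int(len(requests) * probe_frac))
    probe_results = [run_sync(r) for r in requests[:n]]
    # Inspect probe_results here and only submit the batch once they look right.
    batch_job = submit_batch(requests[n:])
    return probe_results, batch_job
```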

Implicated · a month ago
I was also rather puzzled by this comment - why not dev against the real-time endpoints and switch to batch once you've got things where you need them?
cpard · a month ago
It seems that the 24h SLA is standard for batch inference among the vendors, and I wonder how useful it can be when you have no visibility into when the job will be delivered.

I wonder why they do that and who is actually getting value out of these batch APIs.

Thanks for sharing your experience!

vineyardmike · a month ago
It's like most batch processes: it's not useful if you don't know what the response will be and you're iterating interactively. But for data pipelines, analytics workloads, etc., you can handle that delay because no one is waiting on the response.

I'm a developer working on a product that lets users upload content. This upload is not time sensitive. We pass the content through a review pipeline that does moderation, analysis, and some business-specific checks that the user uploaded relevant content. We're migrating some of that to an LLM-based approach because (in testing) the results are just as good, and tweaking a prompt is easier than updating code. We'll probably use a batch API for this and accept that content can take 24 hours to be audited.

3eb7988a1663 · a month ago
Think of it like you have a large queue of work to be done (eg summarize N decades of historical documents). There is little urgency to the outcome because the bolus is so large. You just want to maintain steady progress on the backlog where cost optimization is more important than timing.
jampa · a month ago
> who is actually getting value out of these batch APIs

I used the batch API extensively for my side project, where I wanted to ingest a large number of images, extract descriptions, and create tags for searching. Once you've got the right prompt and the output is good, you can just use the Batch API for your pipeline. For any non-time-sensitive operation, it is excellent.

YetAnotherNick · a month ago
Contrary to other comments, it's likely not because of queues or general batch-processing reasons. I think it's because LLMs are unique in that they require a lot of fixed nodes due to VRAM requirements, which makes them hard to autoscale. So the batch jobs are likely executed whenever there are free resources on the interactive servers.
dist-epoch · a month ago
> you have no visibility on when the job will be delivered

You do have visibility: it's within 24 hours. So don't submit requests you need back in 10 hours.

lazharichir · a month ago
You can also run Gemini Flash-Lite for a subset and then batch the rest with Flash or Pro.
serjester · a month ago
We've submitted tens of millions of requests at a time and never had it take longer than a couple hours - I think the zone you submit to plays a role.
segalord · a month ago
Man, Google's offerings are so inconsistent. Batch processing has been available on Vertex for a while now; I don't really get why they have two different offerings in Vertex and Gemini, both equally inaccessible.
rockwotj · a month ago
It's because Vertex is the "enterprise" offering that is HIPAA compliant, etc. That is why Vertex only has explicit prompt caching and not implicit, etc. Vertex usage is never used for training or model feedback, but Gemini API usage is. Basically, the Gemini API is Google's way of being able to move fast like OpenAI and the other foundation-model providers while still having an enterprise offering. Go check Anthropic's documentation: they even say that if you have enterprise or regulatory needs, you should use Bedrock or Vertex.
Deathmax · a month ago
Vertex's offering of Gemini very much does do implicit caching, and that has always been the case [1]. The recent addition of implicit cache-hit discounts also works on Vertex, as long as you don't use the `global` endpoint and instead hit one of the regional endpoints.

[1]: http://web.archive.org/web/20240517173258/https://cloud.goog..., "By default Google caches a customer's inputs and outputs for Gemini models to accelerate responses to subsequent prompts from the customer. Cached contents are stored for up to 24 hours."
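
With the google-genai SDK that just means pinning a region when you build the client (project and location below are placeholders):

```python
from google import genai

# Pin a regional Vertex endpoint instead of "global"
# (project and location are placeholders).
client = genai.Client(
    vertexai=True,
    project="my-gcp-project",
    location="us-central1",
)
resp = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Long shared prefix goes here...",
)
print(resp.text)
```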

nikolayasdf123 · a month ago
omg I realized this is not Vertex AI face-palm
kerisi · a month ago
I've been using this and have nothing notable to mention, except that there seems to be a common bug where you receive an empty text response.

https://discuss.ai.google.dev/t/gemini-2-5-pro-with-empty-re...

pugio · a month ago
Hah, I've been wrestling with this ALL DAY. Another example of Phenomenal Cosmic Powers (AI) combined with itty bitty docs (typical of Google). The main endpoint ("https://generativelanguage.googleapis.com/v1beta/models/gemi...") doesn't even have actual REST documentation. The Python API has 3 different versions of the same types, and one of the main ones (`GenerateContentRequest`) isn't available in the newest path (`google.genai.types`), so you need to find it in an older version; then you start getting version mismatch errors, then pydantic errors, until you finally decide to just cross your fingers and submit raw JSON, only to get opaque API errors.

So, if anybody else is frustrated and not finding anything online about this, here are a few things I learned, specifically for structured output generation (a main use case for batching). The individual request JSON should resolve to this:

```json
{
  "request": {
    "contents": [
      { "parts": [ { "text": "Give me the main output please" } ] }
    ],
    "system_instruction": {
      "parts": [ { "text": "You are a main output maker." } ]
    },
    "generation_config": {
      "response_mime_type": "application/json",
      "response_json_schema": {
        "type": "object",
        "properties": {
          "output1": { "type": "string" },
          "output2": { "type": "string" }
        },
        "required": ["output1", "output2"]
      }
    }
  },
  "metadata": { "key": "my_id" }
}
```

To get actual structured output, don't just set `generation_config.response_schema`: you need to include the mime type, and the schema key should be `response_json_schema`. Any other combination will either throw opaque errors or won't trigger structured output (and the responses will contain the usual LLM intros: "I'm happy to do this for you...").

So you upload a .jsonl file with the above JSON and then try to submit it for a batch job. If something is wrong with your file, you'll get a 400 and no other info. If something is wrong with the request submission, you'll get a 400 with "Invalid JSON payload received. Unknown name \"file_name\" at 'batch.input_config.requests': Cannot find field."

I got the above error endless times when trying their exact sample code:

```
BATCH_INPUT_FILE='files/123456' # File ID
curl https://generativelanguage.googleapis.com/v1beta/models/gemi... \
  -X POST \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H "Content-Type:application/json" \
  -d "{
        'batch': {
          'display_name': 'my-batch-requests',
          'input_config': {
            'requests': {
              'file_name': ${BATCH_INPUT_FILE}
            }
          }
        }
      }"
```

I finally got the job submission working via the Python API (`file_batch_job = client.batches.create()`), but remember: if something is wrong with the file you're submitting, they won't tell you what, or how.
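
For reference, the flow that finally worked for me looked roughly like this (display names are placeholders, and the SDK may have changed since):

```python
from google import genai
from google.genai import types

client = genai.Client()  # expects GEMINI_API_KEY in the environment

uploaded = client.files.upload(
    file="my-batch-requests.jsonl",
    config=types.UploadFileConfig(
        display_name="my-batch-requests",
        mime_type="jsonl",
    ),
)

file_batch_job = client.batches.create(
    model="models/gemini-2.5-flash",
    src=uploaded.name,
    config={"display_name": "my-batch-job"},
)
print(file_batch_job.name, file_batch_job.state)
```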

nacholibrev · a month ago
> So you upload a .jsonl file with the above JSON, and then you try to submit it for a batch job. If something is wrong with your file, you'll get a "400" and no other info. If something is wrong with the request submission you'll get a 400 with "Invalid JSON payload received. Unknown name \"file_name\" at 'batch.input_config.requests': Cannot find field."

Thanks for your post, I've stumbled upon the same issue as you.

So should I interpret the "Unknown name \"file_name\" at 'batch.input_config.requests'" error as a problem with the .jsonl file and not with the payload itself?

I'm trying to submit a batch with a .jsonl file, but I'm always getting the "Unknown name \"file_name\" at 'batch.input_config.requests'" error.

TheTaytay · a month ago
Thank you for posting this! (When I run into errors with posted sample code, I spend WAY too long assuming it’s my fault.)
druskacik · a month ago
I used OpenAI's batch API for some time, then replaced it with Mistral's batch API because it was cheaper (Mistral Small at $0.10 / $0.20 per million tokens was perfect for my use case). This makes me rethink my choice; e.g. Gemini 2.5 Flash-Lite seems to be a better model[0] at only a slight price increase.

[0] https://artificialanalysis.ai/leaderboards/models

nnx · a month ago
It would be nice if OpenRouter supported batch mode too, sending a batch and letting OpenRouter find the best provider for the batch within given price and response time.