LLMs are multilingual without really trying, assuming the languages in question are sufficiently well represented in their training corpus.
I presume their ability to translate comes from the fact that there are lots of human-translated passages in their corpus: the same work in multiple languages, which lets them figure out the necessary mappings between semantic points (words).
But I wonder about the translation capability of a model trained on multiple languages but with completely disjoint documents (no documents that are translations of one another, no dictionaries, etc.).
Could the emerging latent "concept space" of two completely different human languages be similar enough that the model could translate well, even without ever seeing examples of how a multilingual human would do a translation?
I don't have a strong intuition here, but it seems plausible. And if so, that's remarkable, because it's basically a science-fiction babelfish or universal translator.
Languages encode similar human experiences, so their conceptual spaces probably have natural alignments even without translation examples. Words for common objects or emotions might cluster similarly.
But without seeing actual translations, a model would miss nuances, idioms, and how languages carve up meaning differently. It might grasp that "dog" and "perro" relate to similar concepts without knowing they're direct translations.
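The "similar concept space" intuition above is the idea behind cross-lingual embedding alignment research (e.g., Procrustes-style methods): if two languages' embedding spaces are near-isometric, a single rotation can map one onto the other, and translation becomes nearest-neighbour lookup. A minimal synthetic sketch, assuming (hypothetically) two spaces that really are rotations of each other and a known set of anchor concept pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "language A" embeddings: 50 concepts in 5 dimensions.
X = rng.normal(size=(50, 5))

# Synthetic "language B" embeddings: the same concepts, but the space is
# an arbitrary rotation of X (disjoint corpora, shared geometry).
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
Y = X @ Q

# Orthogonal Procrustes: find the rotation W minimizing ||XW - Y||_F
# from the anchor pairs, via the SVD of the cross-covariance matrix.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# "Translate" by mapping A's vectors into B's space and taking the
# nearest neighbour among B's vectors.
aligned = X @ W
dists = np.linalg.norm(aligned[:, None, :] - Y[None, :, :], axis=2)
pred = np.argmin(dists, axis=1)
accuracy = (pred == np.arange(50)).mean()
print(accuracy)  # → 1.0 on this idealized synthetic data
```

Real monolingual embedding spaces are only approximately isometric, and fully unsupervised methods have to discover the anchor pairs too (e.g., adversarially), so accuracy is far from perfect in practice; this just illustrates why shared geometry makes translation without parallel text conceivable at all.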
Even if you were right (which I don't think you are), this is not how one should run or think about a business.
Still, it's funny to see numerous hyped GenAI start-ups with poor monetary traction jump on the bandwagon and proclaim MCP the latest revolution (after RAG, agents, you name it)... All of these are simply tools that add zero value by themselves. Looking forward to the VC wake-up calls.