This is what I'd consider doing if I were a small AI lab. Don't try to build a frontier LLM that beats all benchmarks. Try to make the world's best LLM at one programming language. Build an RL pipeline that puts all your resources into making the LLM the best at that language. Even better if there's a dearth of human-created training data on GitHub, since all your competitors will be bad at it.
Google somewhat did this with JavaScript in their latest Gemini 2.5 Pro release. But what about doing it for a smaller language? Google isn't going to do that, but there is still a lot of demand.
I'm not saying this is a bad idea, but it does sound like a rather risky prospect. You're basically proposing a bet against the ability of LLMs to generalize across programming languages and to embed concepts at a deeper level than syntax.
Many people do think this, but I'm not sure many of them are running AI labs.
From my experience with less-used languages (Clojure on one hand and Code_Aster's Python on the other), LLMs may be able to generalize syntax, but the availability of APIs, functions, etc. is something you can't solve by generalizing. Or more precisely, you can generalize, but that means hallucinating non-existent tools.
Meta synthetically generated lots of PHP from Python for Llama 3 for training purposes. Meta writes a crazy amount of PHP internally.
Translation tends to be way easier than unconstrained generation for LLMs. But if you can translate and filter a large amount of code, you can learn to generate. If you also translate and run the unit tests, you get another layer of error checking.
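A sketch of that translate-and-filter loop, with hypothetical stubs standing in for the model call and the sandboxed test runner (none of these names are a real API):

```typescript
// Hypothetical translate-and-filter pipeline; `translateWithLLM` and
// `runTestsInSandbox` are assumed stubs, not a real library API.
interface Sample {
  source: string; // e.g. the original Python implementation
  tests: string;  // its unit tests
}

async function translateWithLLM(s: Sample, targetLang: string): Promise<Sample> {
  throw new Error("stub: ask the model to translate code and tests");
}

async function runTestsInSandbox(s: Sample): Promise<boolean> {
  throw new Error("stub: execute the translated tests in isolation");
}

// Keep only translations whose translated tests still pass; the survivors
// become synthetic training data for the low-resource target language.
async function buildCorpus(samples: Sample[], targetLang: string): Promise<Sample[]> {
  const kept: Sample[] = [];
  for (const s of samples) {
    try {
      const t = await translateWithLLM(s, targetLang);
      if (await runTestsInSandbox(t)) kept.push(t);
    } catch {
      // Discard samples that fail to translate or crash the sandbox.
    }
  }
  return kept;
}
```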
It feels to me that most of the real usage of AI right now is in coding, so a small lab that decided to go all in on just code-gen would at least have the differentiator of a narrower field in which to beat the bigger incumbents doing it all?
I dunno tho.
Big AI labs also have their own agendas and would rather keep scaling and growing than serve a rather smaller real market?
Once you're into real-usage territory, you can no longer use made-up numbers to justify future growth.
It makes sense to specialize on one programming language, dedicating all of the LLM's intellectual space to that one domain, but on the flip side I wonder how much the LLM's sharpness and reasoning capabilities are increased by having more data to train on, even if it's the wrong programming language.
As a developer, I certainly think my programming skills in a specific language were improved by knowing other languages, so I can contrast and compare.
You could just have specialized fine-tunes for each programming language that are only called when writing code; a more general, bigger model could pass the plan/pseudocode to them.
Using the language itself isn't the challenge for LLMs; they do that with a very high success rate. I haven't seen an LLM make syntax errors for several months. Calling the right functions with correct parameters is the challenge your hypothetical AI lab will have to solve (or half-ass it and show great benchmark results).
This was an obvious next step. Most current products can only restrict the token prediction to valid JSON or a specific JSON schema at best. There's no reason that this should be the only grammar available for constrained output mode.
The real challenge will be to make this detect and switch languages automatically. For example, a snippet of code could include a LaTeX formula in a comment and SQL in a string literal. There are many more examples, such as regex inside a shell script, and so on.
The obvious next step after that is backtracking. It's possible to emit a token that is valid, but then allows no further completions that are valid. In other words, the model can paint itself into a corner. To my knowledge, no current online LLM service uses any kind of backtracking; they run in append ("forwards") mode only.
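A minimal sketch of that failure mode, assuming a hypothetical model interface and a prefix checker for the target grammar:

```typescript
// Greedy constrained decoding: mask tokens that would break the grammar.
// Without a prefix property (or backtracking), this can still dead-end.
interface Model {
  // Candidate next tokens, best first. Hypothetical interface.
  nextTokenCandidates(prefix: string): string[];
}

function decodeConstrained(
  model: Model,
  isValidPrefix: (s: string) => boolean,
  isComplete: (s: string) => boolean,
  maxSteps = 1000,
): string {
  let out = "";
  for (let step = 0; step < maxSteps && !isComplete(out); step++) {
    const token = model
      .nextTokenCandidates(out)
      .find((t) => isValidPrefix(out + t));
    if (token === undefined) {
      // Every candidate is invalid even though `out` itself was valid:
      // the model has painted itself into a corner, and an append-only
      // decoder has no way out except starting over or backtracking.
      throw new Error(`dead end after: ${out}`);
    }
    out += token;
  }
  return out;
}
```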
Re detecting and switching languages: you could run several constraint systems in parallel and switch as soon as one of them rejects the input and another accepts it.
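A sketch of that idea, with a hypothetical checker interface: each language keeps a prefix checker, and the languages whose checkers still accept the text are the current candidates.

```typescript
// Run several prefix checkers in parallel; a language stays "active"
// while its checker still accepts the text. Hypothetical interface.
interface PrefixChecker {
  language: string;
  acceptsPrefix(text: string): boolean;
}

function activeLanguages(checkers: PrefixChecker[], text: string): string[] {
  return checkers
    .filter((c) => c.acceptsPrefix(text))
    .map((c) => c.language);
}
```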
Re backtracking: a core part of this paper is ensuring a prefix property, i.e., there is always a legitimate completion, so the model cannot "corner" itself!
Research still needs to be done on which kinds of languages and language features this prefix property can be ensured for.
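For regular languages, the prefix property can be enforced by construction: prune DFA states from which no accepting state is reachable, then only offer tokens that stay inside the pruned automaton. A toy sketch of that (a real type system, as in the paper, is far harder than a DFA):

```typescript
interface Dfa {
  start: number;
  accepting: Set<number>;
  delta: Map<number, Map<string, number>>; // state -> symbol -> next state
}

// States from which an accepting state is still reachable ("live" states).
function liveStates(dfa: Dfa): Set<number> {
  const reverse = new Map<number, number[]>();
  for (const [from, edges] of dfa.delta) {
    for (const to of edges.values()) {
      if (!reverse.has(to)) reverse.set(to, []);
      reverse.get(to)!.push(from);
    }
  }
  const live = new Set(dfa.accepting);
  const stack = [...dfa.accepting];
  while (stack.length > 0) {
    const state = stack.pop()!;
    for (const pred of reverse.get(state) ?? []) {
      if (!live.has(pred)) {
        live.add(pred);
        stack.push(pred);
      }
    }
  }
  return live;
}

// Only offer symbols that lead to a live state: every prefix emitted this
// way is guaranteed to have at least one valid completion.
function allowedSymbols(dfa: Dfa, live: Set<number>, state: number): string[] {
  return [...(dfa.delta.get(state) ?? new Map<string, number>())]
    .filter(([, next]) => live.has(next))
    .map(([symbol]) => symbol);
}
```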
Used in multiple similar publications, including "Guiding Language Models of Code with Global Context using Monitors" (https://arxiv.org/abs/2306.10763), which uses static analysis beyond the type system to filter out e.g. invalid variable names, invalid control flow etc.
Yes, this work is super cool too! Note that LSPs cannot guarantee resolving the necessary types that we use to ensure the prefix property, which we leverage to avoid backtracking and generation loops.
As an author of this paper, I am very excited to see the great discussion here!
Several people mentioned the generation-compilation-fixing loop. Just want to remind you that our approach works not only for the generation step but also for the fixing step. This is because fixing is essentially asking LLMs to generate a new version of the code. The paper actually has a "repair" experiment to demonstrate this, and our approach achieves a significant gain in this experiment, i.e., a 37% relative improvement in functional correctness.
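For readers following along, the loop being discussed looks roughly like this sketch; `generate`, `repair`, and `typecheck` are assumed stubs, and the constrained decoding would sit inside both LLM calls:

```typescript
interface CheckResult {
  ok: boolean;
  errors: string;
}

// Hypothetical stubs: the two LLM calls and a compiler invocation.
async function generate(spec: string): Promise<string> {
  throw new Error("stub: constrained generation goes here");
}
async function repair(code: string, errors: string): Promise<string> {
  throw new Error("stub: constrained repair goes here");
}
async function typecheck(code: string): Promise<CheckResult> {
  throw new Error("stub: shell out to a compiler, e.g. tsc");
}

// Generate, check, and repair until the code compiles or we give up.
async function generateChecked(spec: string, maxRounds = 3): Promise<string> {
  let code = await generate(spec);
  for (let round = 0; round < maxRounds; round++) {
    const result = await typecheck(code);
    if (result.ok) return code;
    code = await repair(code, result.errors); // repair is generation too
  }
  throw new Error("still failing to typecheck after repair rounds");
}
```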
I think TypeScript is uniquely positioned to be the optimal language for LLMs. Tons of training data (benefiting from all the JS examples as well) plus the structure of types for LLMs to follow and tools to enforce.
LLMs work well with any static analysis tool. I frequently instruct Claude to use stuff like “go vet” and “deadcode” when it goes on a tear and writes a bunch of broken trash and declares mission accomplished.
tsc error messages are so bad that every time my LLM sees one of those "SomeType is not assignable to SomeLongAssTypeDontEvenTryToUnderstandWhatsGoingOnHere<<<<>>>>>>>>>>>>>>>>>>>>" it just gives up and casts to any. Goes for Python too.
And unlike many other languages, Typescript types are extremely expressive.
For example, you can write a function that takes an object received from an API that uses snake_cased keys, and returns that same object, but with camelCased keys instead. This is not some "special case" in the Typescript compiler, the ability to do this emerges naturally from Typescript's features. I don't know any other language that can do this.
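A sketch of that transform, using template literal types and key remapping (available in standard TypeScript; the runtime body is omitted):

```typescript
// Recursively turn "snake_case" into "snakeCase" at the type level.
type CamelCase<S extends string> =
  S extends `${infer Head}_${infer Tail}`
    ? `${Head}${Capitalize<CamelCase<Tail>>}`
    : S;

// Remap every key of T through CamelCase, keeping the value types.
type CamelCased<T> = {
  [K in keyof T as K extends string ? CamelCase<K> : K]: T[K];
};

declare function camelize<T>(obj: T): CamelCased<T>; // runtime body omitted

// Inferred as { userId: number; createdAt: string }
type Example = CamelCased<{ user_id: number; created_at: string }>;
```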
Most people don't know enough TS to use these things effectively, but I think one could train an LLM to be very good at them. The combination of LLMs placing such advanced constraints on themselves, and then generating code based on those constraints, seems extremely powerful.
I believe that the rutabaga is the perfect material to make sausages out of as it has proven as excellent swine fodder with widespread adoption!
(Please forgive me the extreme disrespect put forth in the above statement! It is not the intention to show disrespect; I… am quite the rutabaga enjoyer in all respects, you know? I certainly include myself within the absurdity and it is with love.)
There are languages that constrain types a lot more tightly than TypeScript, e.g. Kotlin, Rust, and Haskell. The more constrained the types, the more correct the program could be.
Yep, and Rust famously goes beyond this by modelling memory ownership at compile time.
In fact, the more behaviour we can model at compile time, the better when it comes to LLMs - there are some cool ideas here, like transpiling Rust into languages for formal verification. See https://github.com/formal-land/coq-of-rust as an example.
Formal verification was one of those things that was previously so annoying to do that it rarely made it past academic use cases or extremely important libraries, but I think LLMs take the tedium out of it. Perhaps formal verification will have a "test driven development" type of moment in the sun thanks to this.
The program won’t be “more” correct. What would that even mean? Writing correct programs might be easier (or not) with more “constrained” (ill-defined) typing.
It’s better, sure, but as a power TS user it still sucks at generating better code, and it consistently fucks up with generics (or doesn’t use them) or simple types sometimes.
Completely agree. Even with the basic LLMs in the $20/month Cursor plan, I can work 10x faster on TypeScript codebases than I could otherwise, while for Python that multiple feels more like 2-3x. The autocompletions are especially impressive when there is a well-organized type system.
https://youtu.be/10qowKUW82U?t=3186
> Meta synthetically generated lots of PHP from Python for Llama 3 for training purposes.

https://arxiv.org/abs/2407.21783

See figure 8.
> To my knowledge, no current online LLM service uses any kind of backtracking; they run in append ("forwards") mode only.
https://arxiv.org/abs/2504.00532

IterGen: Iterative Semantic-aware Structured LLM Generation with Backtracking
https://arxiv.org/abs/2410.07295

ROCODE: Integrating Backtracking Mechanism and Program Analysis in Large Language Models for Code Generation
https://arxiv.org/abs/2411.07112v1
There was also an hn thread: https://news.ycombinator.com/item?id=36425375
> This is not some "special case" in the Typescript compiler, the ability to do this emerges naturally from Typescript's features. I don't know any other language that can do this.

More != better.
> Formal verification was one of those things that was previously so annoying to do that it rarely made it past academic use cases or extremely important libraries, but I think LLMs take the tedium out of it.

https://infoscience.epfl.ch/entities/publication/6c6bb09d-a4...
Also, in response to adjacent commenters: many mission-critical TS codebases will disable the use of an explicit "any" with eslint - https://typescript-eslint.io/rules/no-explicit-any/.
That this research comes out of universities, and not large AI labs, makes me think those labs believe that larger models are still the way to go.