I have a fundamental question: what happened to the idea of specifying the syntax of a language with a "formal" grammar, however flawed and imperfect that practice has been for decades, and still is? It seems that needing a grammar, e.g., to write a programming-language tool, is now considered "old school." If a grammar is published at all, it's a secondary consideration, because it's usually out of sync with the implementation (e.g., C#, soon to be version 10, whose latest "spec"--a draft at that--covers version 6). Some don't even bother publishing a grammar, e.g., Julia. Why would the authors of Julia not publish a "formal" grammar of any kind, and instead just offer a link to the 2500+ lines of Scheme-like parser code? Are we now expected to write tools that machine-scrape the grammar from the hand-written code, or to derive the grammar from an ML model of the compiler?
Antlr-ng is Mike Lischke's port of Antlr4, which he likely undertook because ANTLR is used at Oracle in a MySQL product. It's not "official ANTLR," but Terence Parr granted him use of the "ANTLR" name and allowed a fork to port the existing Antlr4 code to TypeScript.
Mike's Antlr-ng port of the Antlr4 code began with a Java-to-TypeScript translator he wrote. Along the way, he made some improvements to the TypeScript target.
But Antlr-ng still uses ALL(*), so it shares the same performance issues as Antlr4. I'm not sure how Mike plans to address that in Antlr-ng.
ANTLR is presented as a generator for small, fast parsers, and ALL(*) probably can't deliver that; many grammars people write are pathological for it. So people hand-write parsers, reverse-engineer an EBNF from the implementation as an afterthought, drop the critical semantic predicates in the process, and then refactor the result into something else entirely. The Java Language Spec is an example.
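To make the last point concrete, here is an illustrative ANTLR4-style fragment (the rule names and the `isTypeName` helper are hypothetical, not from any real grammar): the `{...}?` semantic predicate is what actually disambiguates the alternatives, and a plain-EBNF rendering of the rule has nowhere to express it.

```antlr
// Hypothetical cast-vs-grouping ambiguity, resolved in ANTLR4 with a
// semantic predicate. "isTypeName" is an assumed helper in the parser's
// action code; getCurrentToken() is the ANTLR4 runtime accessor.
primary
    : {isTypeName(getCurrentToken().getText())}? '(' typeName ')' primary  // cast
    | '(' expression ')'                                                   // grouping
    ;

// The same rule "reverse-engineered" into EBNF silently loses the
// predicate, and with it the disambiguation -- the alternatives now overlap:
//
//   primary = "(" typeName ")" primary | "(" expression ")" ;
```

A reader of the EBNF sees an ambiguous grammar and has no way to recover the resolution strategy the implementation actually uses.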