The way that I heard it, it was the fact that Lisp environments on Sun workstations were able to outperform Lisp machines at a much better price point. And just like that, a significant AI-specific industry collapsed, and its other promises came into question.
That said, all three versions are consistent. The fact that researchers thought that they were closer than they were caused them to overpromise and underdeliver. Then when the visible bleeding edge of their efforts publicly lost to a far cheaper architecture, their failure became very visible.
Which we call cause and which effect almost doesn't matter. All of these things happened, and led to an AI winter. And we continued to get incremental progress until the unexpected success of Google Translate, whose success was not welcomed by people who had been trying to get rule-based AI systems to work.
If all languages changed their reference parsers to tree-sitter, this would be moot, but that seems unlikely. Language parsers are often optimized beyond what is possible in a general-purpose parser generator like tree-sitter, and/or have ambiguities that cannot be resolved with the tree-sitter DSL.
What feels perhaps likely in the future is that a standard parse tree API emerges, analogous to LSP, and then language parsers could emit trees traversable by this API. Maybe it's just the tree-sitter C API with an alternate front end? Hard to say, but I suspect either something better than (but likely at least partially inspired by) tree-sitter will emerge or we will get stuck in a local minimum with tooling based on slightly incorrect language parsers.
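To make the idea concrete, here's a hypothetical sketch in Python of what such a standard parse-tree API might look like. The names (`Node`, `kind`, `walk`) are invented for illustration, loosely modeled on tree-sitter's node/cursor concepts; a real language's own parser would emit this shape instead of a tree-sitter grammar producing it.

```python
from dataclasses import dataclass, field
from typing import Iterator

# Hypothetical node shape for a standard parse-tree API. Field names
# are invented, roughly mirroring tree-sitter's node accessors.
@dataclass
class Node:
    kind: str          # grammar rule name, e.g. "function_definition"
    start_byte: int    # byte offset where the node begins
    end_byte: int      # byte offset just past the node's end
    children: list["Node"] = field(default_factory=list)

    def walk(self) -> Iterator["Node"]:
        """Pre-order traversal, analogous to a tree-sitter cursor."""
        yield self
        for child in self.children:
            yield from child.walk()

# A language's reference parser would emit this tree; here we build one
# by hand for the 13-byte source "def f(): pass".
tree = Node("module", 0, 13, [
    Node("function_definition", 0, 13, [
        Node("identifier", 4, 5),
        Node("parameters", 5, 7),
        Node("block", 9, 13, [Node("pass_statement", 9, 13)]),
    ]),
])

kinds = [n.kind for n in tree.walk()]
```

The point of the sketch: any tool that can consume this interface (highlighter, structural editor, linter) no longer cares whether the tree came from tree-sitter or from the language's own optimized parser.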
I would say most modern editors (Helix, Neovim) handle tree-sitter and LSP better than Emacs does today, and probably will for many years to come.