Hypothesis: LLMs are actually text models. They can't properly babble with the feel of a language because they don't model that; in that sense they model it even less than a Markov chain does.
What they do is post-structural text analysis and synthesis.
Increasingly it's become clear to me that the "hallucinating" chatbots are now causing humans themselves to hallucinate: believing they are extracting real value and real productivity gains from these tools when, in the vast majority of cases, it's the opposite. We're actually losing value, and the productivity gains are a mirage.
I used to roll my eyes when people would joke "the Internet was a mistake". Now I'm not so sure…
I recognize that a language model is not a living entity, but it excels at translating human language into computer programs. This capability allows users, from novices to expert developers, to interact with computers more effectively and extract greater performance than they could without such a tool, despite the model's occasional errors and somewhat blunt nature. It’s more than just a party trick.
TFA is another iteration on the wonderful semantics debate about intelligence.
a rock is, a typewriter is, a computer is, a human is. all on some varying levels. TFA takes it as a given that "no thinking" happens. I take it as a given that thinking happens unless we prove it cannot be happening.
once again, both sides of this argument are making equally valid/invalid baseless claims that are all unfalsifiable. it is completely impossible to determine who is "metaphysically" correct. we can only judge outcomes
Is this your website?
Now listening to Dutch - not so much.
They claimed of LLMs:
> It’s really good at making us feel like it’s intelligent, but that’s no more real than a good VR headset convincing us to walk into a physical wall.
https://www.thornewolf.com/its-not-intelligent/