Rather, "hallucinations" are spurious replacements of factual knowledge with fictional material, caused by the use of a statistical process (the pseudo-random number generator used with the "temperature" parameter of neural transformers): token prediction without meaning representation.
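The sampling step referred to here can be sketched in a few lines. This is a minimal illustration, not any particular model's implementation: the function name and example logits are made up, but the mechanism (divide logits by temperature, softmax, then draw from the resulting distribution with a pseudo-random number) is the standard one.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Pick a token index from raw logits after temperature scaling.

    Low temperature sharpens the distribution (near-deterministic,
    argmax-like); high temperature flattens it, giving otherwise
    unlikely tokens a real chance of being sampled.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # The pseudo-random draw: this is where the "statistical process"
    # enters -- the same logits can yield different tokens.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Illustrative logits for a 3-token vocabulary; at a very low
# temperature the highest-logit token (index 1) is chosen essentially
# every time, while temperature 1.0 leaves genuine randomness.
```

Note that nothing in this loop consults a representation of meaning; it only turns scores into a weighted coin flip, which is the point the comment is making.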
I follow the R&D and progress in this space, and I haven't heard anyone make a fuss about it. They are all LLMs, or transformers, or neural nets, but they can be trained or optimized to do different things. Sure, there are terms like reasoning models, chat models, or instruct models, and yes, they're all LLMs.
But you can now start combining them into hybrid models too. Are omni models that handle audio and visual data still "language" models? That question is interesting in its own right for many reasons, but not as grounds to justify or bemoan the use of the term LLM.
LLM is a good term, it's a cultural term too. If you start getting pedantic, you'll miss the bigger picture and possibly even the singularity ;)
Alternatively, I have seen debates over what counts as a "small language model" that are probably nonsensical -- particularly because, in my personal language war, the term "small language model" shouldn't even exist (no one knows what the threshold is, and our "small" language models are bigger than the "large" language models from just a few years ago).
This is fairly typical of new technology. Marketing departments will constantly come up with new terms, or try to take over existing ones, to push agendas. Terms with defined meanings will get abused by casual participants and lose all real meaning. Newcomers to the field will latch on to popular misuses of terms as they try to figure out what everyone is talking about, perpetuating definition creep. Old hands will overly focus on hair-splitting exercises that no one else really cares about and sigh in dismay as their carefully cultivated taxonomies collapse under the expansion of interest in their field.
It will all work itself out in 10 years or so.
1. It's still perceived as an issue of competitive advantage.
2. There is a serious concern about backlash. The public's response to finding out that companies have used AI has often not been good (or even reasonable) -- particularly if worker replacement was involved.
It's a bit more complicated with "agents" as there are 4 or 5 competing definitions for what that actually means. No one is really sure what an 'agentic' system is right now.
They may already implement this technique, we can't know.
I have been testing the model for the last few hours, and it does seem to be an improvement on Llama 3.1, upon which it is based. I have not tried comparing it to Claude or GPT-4o, because I don't expect a 70B model to outperform models of that class no matter how good it is. I'd be happy to be wrong, though...
So I haven't found any reason to use Nvidia's installers.
The pleasant surprise has been games. I thought I would have to abandon gaming or keep a second Windows partition, but so far every game I have tried has run -- even if it took some minimal tweaking in a few cases. V-Rising, Elder Scrolls Online, New World, RimWorld... all work as well as on Windows, thanks to Steam and Proton. (RimWorld required one change in the config .ini to support my super-ultrawide monitor, ESO had to be manually imported into Steam, and V-Rising required installing the Proton-GE version to address a problem with cutscenes.) It's a bit tedious to have to address small problems like that, but more than worth it to get rid of an OS that I feel is constantly trying to attack me.
I am moving my wife to Zorin next. I can't recommend Debian to most people who just want to use a desktop. It was difficult for me, and I have decades of experience running Linux servers. I will probably stick with Debian since it's working great now, but too many things were too hard for it to be an option for most desktop users, I would imagine. I can recommend ditching Windows for some flavor of Linux.