The generated text is wrong, false, inaccurate. I don't think we really need a jargon-y term for it.
But since we're talking about an article in Fortune, maybe it's fair to think of this strictly from a lay-person perspective.
Also, to the extent that it matters, I'm not necessarily saying "build something (completely) different from an LLM". I can picture a world with systems that include LLMs interoperating with other components that do more of the "world model" and "commonsense reasoning" parts, yielding an aggregate system that does more, and better, than any of the parts could do in isolation.
When one confabulates, one inserts words that sound smooth, grammatical, and potentially acceptable to a listener (and to the speaker), but does so shamelessly, with no hesitation or correction in pursuit of actual truth.
I suspect there needs to be some layering on of reflection and inhibition to avoid this.
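The "reflection and inhibition" idea above could take many forms; one minimal sketch is a propose-then-veto loop, where a fluent generator emits candidates and a separate checker suppresses those it cannot support. Everything here (the fact store, both functions) is a hypothetical toy stand-in, not any real system's API:

```python
# Toy sketch of "generate, then reflect/inhibit": a proposer emits
# fluent candidate statements; a separate checker vetoes those the
# "world model" cannot support. All names here are hypothetical.

KNOWN_FACTS = {"water boils at 100 C at sea level"}  # toy world model

def propose(prompt):
    # stand-in for an LLM: returns fluent candidates, true or not
    return [
        "water boils at 100 C at sea level",
        "water boils at 50 C at sea level",
    ]

def inhibit(candidates):
    # reflection step: pass only statements the world model supports
    return [c for c in candidates if c in KNOWN_FACTS]

answers = inhibit(propose("At what temperature does water boil?"))
print(answers)  # only the supported statement survives
```

The point of the sketch is the architecture, not the components: the inhibitory check sits outside the generator, so fluency alone can no longer carry a false statement through.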
On the chance that there is, I will do my part:
In mice.
I'm not sure whether higher currents are all that reasonable. Cooling the precision amp is an option to reduce the noise figure, but at that point you'll want optical isolation and battery supply too.
https://www.analog.com/media/en/training-seminars/tutorials/...
All my instincts say they should be using the AC 4-wire option, given any reason to question the results, any issues with contact effects, etc.
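The advantage of 4-wire (Kelvin) sensing over a plain 2-wire measurement can be illustrated with rough arithmetic; the resistance and current values below are hypothetical, chosen only to make the lead-resistance error obvious:

```python
# Why 4-wire (Kelvin) sensing matters when lead/contact resistance
# is comparable to the device under test. Values are hypothetical.

R_dut = 0.100    # device under test, ohms
R_lead = 0.050   # resistance of EACH lead/contact, ohms
I_force = 0.010  # 10 mA test current

# 2-wire: the same pair of leads carries the current and senses the
# voltage, so both lead drops are included in the reading.
V_2wire = I_force * (R_dut + 2 * R_lead)
R_2wire = V_2wire / I_force   # reads 0.200 ohms: 100% error

# 4-wire: separate sense leads carry (almost) no current, so the
# drop across the current-carrying leads is excluded.
V_4wire = I_force * R_dut
R_4wire = V_4wire / I_force   # reads 0.100 ohms: the true value

print(R_2wire, R_4wire)
```

Driving the force leads with AC rather than DC additionally rejects thermal-EMF offsets at the contacts, which is presumably the motivation for the AC variant mentioned above.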