We are experiencing a global assault on truth because truth provides a foundation on which to judge those in power.
Social media is the powerful's most potent weapon against truth. When social media combines with privatized intelligence companies, it creates a tool that can be used to divide and conquer societies, turning one half of the people against the other, deputizing the ignorant or vulnerable to fight for despots.
I always thought of the interbreeding of Homo sapiens and Neanderthals as a sort of “one-off event” or something. I mean, obviously not literally just once, but maybe some little era, part of the process of us wiping them out.
But, 7000 years is a while. I mean, how long has our current civilization lasted? I guess it depends on how you define it. But certainly that coexistence, whatever it was, lasted longer than any countries or other institutions have…
edit: using an alt bc I use antiprocrast on my username and wanted to answer. hey, it's a saturday!
If it's the same author writing these pieces of fiction, then by definition the author's opinion is more special: as the creator of those works, they can write fiction that reinforces their headcanon (which is why it's called canon). So I think calling the author's opinion less special is wrong, because the author's tie to the work will always be more special than the consumer's, by virtue of their being the creator.
user-land drivers are a thing, heck, they are the standard for modern microkernel architectures
and even with hybrid kernels, pushing part of the code out of the kernel into something like "user-land co-processes" is more than doable. Now, it's not trivial to retrofit in a performant and flexible way, but it is possible
macOS has somewhat done that (but I don't know the details).
On Linux it's also possible, though with BPF it's a bit of an in-between hybrid (leaving some parts of the drivers in the kernel, but as BPF programs, which are much less likely to cause such issues compared to "normal" drivers).
A good example of that is how graphics drivers have developed on Linux, with most code now being in a) the user-land part of the driver and b) on the GPU itself, leaving the in-kernel part to be mostly just memory management.
And the thing is, Windows has not enforced such a direction, or even pushed hard for it AFAIK, and that is something you can very well blame them for. You in general should not have a complicated config file parser in a kernel driver; that's just a terrible idea, some would say negligent, and Windows shouldn't have certified drivers like that. (But then, given that CrowdStrike insists that it _must_ be loaded on start (outside of recovery mode), I guess it would still have hung all systems even if the parsing had been outsourced, because it can't start if it can't parse its config.)
Even here it's pretty hard to blame them, due to antitrust concerns. Just google the word PatchGuard.
But with zero, this idea converges on the same thing. No matter what things you were counting, if you have zero of them, you have the same idea. And so you take a step towards the idea of a number being a concept in its own right, rather than existing purely for the purpose of counting or measurement.
It is the same sort of conceptual freedom that allows you to do things like add a number to a square. To deal with an equation like x + x^2 = 0. If you're stuck with numbers "meaning" something beyond themselves, then you'll never add x to x^2. One is a length, the other an area. They are different objects.
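And once x is allowed to be just a number, that mixed equation solves routinely by factoring:

```latex
x + x^2 = 0 \;\Longrightarrow\; x(1 + x) = 0 \;\Longrightarrow\; x = 0 \text{ or } x = -1
```

Geometrically, a length plus an area is nonsense; algebraically, it's just two roots.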
This intellectual leap is one that all students of mathematics must make - and many young people never do.
So it's not at all surprising to me to see ARC already being mostly solved using existing models, just with different prompting techniques and some tool usage. At some point, the naysayers about LLMs are going to have to confront the problem that, if they are right about LLMs not really thinking/understanding/being sentient, then a very large percentage of people living today are also not thinking/understanding/sentient!
I don't think it's like that; rather, Chollet wants to see stronger neuroplasticity in these models. I think there is a divide between the effectiveness of existing AI models and their ability to be autonomous, robust, and able to consistently learn from unanticipated problems.
My guess is Chollet wants to see something more similar to biological organisms, especially mammals or birds, in their level of autonomy. I think people underestimate the sheer number of novel problems birds and mammals face in simply navigating their environment, and it is by this comparison that LLMs, for now at least, seem lacking.
So when he says LLMs are not sentient, he's asking us to consider the novel problems animals, let alone humans, have to face in navigating their environment. This is especially apparent in young children, but it declines as we age and gain experience/lose a sense of novelty.