https://www.professorwatchlist.org/
https://www.mediamatters.org/charlie-kirk/charlie-kirk-makes...
https://www.azregents.edu/news-releases/abor-chair-statement
Most of the other issues in the article can be solved by wrapping things in more messages. Not great, not terrible.
As with the tightly-coupled issues in Go, I'll keep waiting for a better approach any decade now. In the meantime, both tools (for all their glaring imperfections) work well enough, solve real business use cases, and have a massive ecosystem moat that makes them easy to work with.
They did it essentially the way a linked list, C-strings, or UTF-8 characters do: "here's the current datum, and is there more?" (next pointer, non-null byte, continuation bit set). They also noted that it could have these semantics without necessarily following this encoding in the implementation, though that seems like a dodge to me; a length-prefixed array is a perfectly fine primitive to have, and shouldn't have to be inferred from something that merely maps to it.
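To make the contrast concrete, here's a minimal Python sketch, entirely my own illustration (none of these names come from the discussion): a "payload plus more-follows flag" stream, in the style of UTF-8 continuation bytes and varints, versus a plain length prefix.

# "Payload plus more-follows flag", the UTF-8/varint shape: each byte
# carries 7 bits of payload and a high bit saying whether more follows.
def encode_more_flag(items: list[int]) -> bytes:
    out = bytearray()
    for i, item in enumerate(items):
        assert 0 <= item < 0x80, "payload must fit in 7 bits"
        more = 0x80 if i < len(items) - 1 else 0x00
        out.append(more | item)
    return bytes(out)

def decode_more_flag(data: bytes) -> list[int]:
    items = []
    for byte in data:
        items.append(byte & 0x7F)
        if not byte & 0x80:  # flag clear: that was the last element
            break
    return items

# The length-prefixed shape: the count is a primitive, stated up front.
def encode_length_prefixed(items: list[int]) -> bytes:
    return bytes([len(items), *items])

def decode_length_prefixed(data: bytes) -> list[int]:
    n = data[0]
    return list(data[1 : 1 + n])

assert decode_more_flag(encode_more_flag([1, 2, 3])) == [1, 2, 3]
assert decode_length_prefixed(encode_length_prefixed([1, 2, 3])) == [1, 2, 3]

The decoders make the difference visible: the first has to walk the data to discover where the list ends, while the second knows the length before touching a single element.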
The Laplace transform shines in having nicer convergence properties in some specific cases. While those are extremely valuable for control problems, it really is a much more specialized theory, not nearly as widely applicable. (You can come up with n-d versions. The obvious thing to do is to copy the Fourier case and iteratively Laplace-transform along each coordinate; but the special role of one direction, directly in the unilateral case or indirectly via growth properties in the bilateral case, makes it hard to argue that this can develop into something more unifying: the domain isn't preserved under rotation.)
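To make that last remark concrete, here is the standard unilateral n-d definition you get by iterating coordinate-by-coordinate (my transcription of the textbook formula, with \(\sigma_k\) the per-coordinate abscissa of convergence):

\[
F(s_1,\dots,s_n) = \int_0^{\infty}\!\cdots\!\int_0^{\infty} f(t_1,\dots,t_n)\, e^{-(s_1 t_1 + \cdots + s_n t_n)}\, dt_1 \cdots dt_n,
\]

which converges on a product of half-planes \(\operatorname{Re} s_k > \sigma_k\). That product region, unlike the Fourier domain \(\mathbb{R}^n\), is not preserved under rotations of the coordinates, so the rotation-friendly machinery of the Fourier case doesn't carry over.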
No human has ever managed to read out their own connectome without external instrumentation. There were entire human civilizations that thought the seat of consciousness was the heart, which, for creatures that claim to know how their own minds work, is a baffling error to make.
LLMs are quite similar to humans in that respect. They, too, have no idea what their hidden size is, how many weights they have, how exactly the extra modalities are integrated into them, or whether they're MoE or dense. They're incredibly ignorant of their own neural architecture. And if you press them on it, they'll guess, and they'll often be wrong.
The difference between humans and LLMs comes down to the training data. Humans learn continuously: they remember what they've seen and what they haven't, they try things, they remember the outcomes, and they get something of a grasp (and no, it's not anything more than "something of a grasp") of how solid or shaky their capabilities are. LLMs split training and inference into separate phases, and their trial-and-error doesn't extend beyond a context window. So LLMs don't get much of that "awareness of their own capabilities" by default.
So the obvious answer is to train that awareness in. Easier said than done. You need to, essentially, use a training system to evaluate an LLM's knowledge systematically, and then wire the awareness of the discovered limits back into the LLM.
OpenAI has a limited-scope version of this in use for GPT-5 right now.
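For a sense of what "wire the awareness back in" could mean mechanically, here's a crude sketch. Everything in it (the probe format, the model_answer and is_correct callables, the refusal string) is hypothetical, my own invention rather than anything OpenAI has described.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Probe:
    question: str
    reference_answer: str

def build_calibration_data(
    model_answer: Callable[[str], str],      # hypothetical: query the current model
    is_correct: Callable[[str, str], bool],  # hypothetical: grade answer vs reference
    probes: list[Probe],
) -> list[dict]:
    """Probe the model, and turn each discovered failure into a training
    example mapping the question to an explicit admission of ignorance."""
    examples = []
    for p in probes:
        answer = model_answer(p.question)
        if is_correct(answer, p.reference_answer):
            target = answer           # reinforce what the model already gets right
        else:
            target = "I don't know."  # teach the model where its limit is
        examples.append({"prompt": p.question, "completion": target})
    return examples

The resulting examples would then feed an ordinary fine-tuning run, so the awareness of the discovered limits ends up baked into the weights rather than rediscovered anew in each context window.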
(To be sure, there are plenty of cases where it is clear that we are only making up stories after the fact about why we said or did something. But sometimes we do actually know and that reconstruction is accurate.)
As a "Free Speech Absolutionist", I think as much material as possible should be in public libraries, including material that some people object to. I also think that school libraries should be curated to what is appropriate for the audience. The rub here is defining what is "appropriate". Silencing minority literature is bad. Also allowing my elementary school kids to check out "the turner diaries" is bad. There needs to be a balance.
A 7-year-old doesn't need to read about nearly any topic; by that standard, excluding every mention of every "unneeded" subject leaves a nearly empty school library.
For that heavy-handed a response to be _legally mandated_ requires not just "no need", but strong evidence of harm. Mentions of sex, oral or otherwise, don't actually have much evidence of harm behind them. Certain treatments of the subject might, but that's not what the law targets, nor what it can effectively target: it covers mere mentions or small bits of explicit language, even where those are necessary for the effect of the book. These can and do make parents profoundly uncomfortable, though, and that is worth taking into consideration.
I would think the usual approach far better: professional librarians curating based on their own judgement, subject to some oversight from the local school boards to take those discomforts into account, which are valid as feelings but largely baseless as fears.
ls -lah /usr/bin/say
-rwxr-xr-x 1 root wheel 193K 15 Nov 2024 /usr/bin/say
M1-Mac-mini ~ % say "hello world this is the kitten TTS model speaking"

Nothing is perfect; everything has its failure conditions. The question is where you choose to place the bar. Do you want your component to work at 60, 80, or 100 °C? Do you want it to work in high-radiation environments? Do you want it to withstand pathological access patterns?
So, in other words, there isn't a big enough market for GPUs that cost double the $/GB of RAM but resist rowhammer attacks to justify manufacturing them.
The president absolutely is accountable. The problem is that Congress, for its own reasons, refuses to hold him to account. Congress could remove any president in less than 24 hours, for no reason whatsoever: a simple majority in the House to impeach, then two-thirds of the Senate to convict.
District courts do not create precedent. Precedent comes from appellate courts.
Or the commissioners who are appointed by the democratically elected heads of the member state governments?