wnoise commented on EU Council Approves New "Chat Control" Mandate Pushing Mass Surveillance   reclaimthenet.org/eu-coun... · Posted by u/fragebogen
iso1631 · a month ago
Which ones are unelected - the democratically elected heads of the member state governments? Or the democratically elected members of the EU parliament?

Or the commissioners that are appointed by the democratically elected heads of the member state governments?

wnoise · a month ago
The commissioners?
wnoise commented on User ban controversy reveals Bluesky’s decentralized aspiration isn’t reality   plus.flux.community/p/ban... · Posted by u/gregsadetsky
felixgallo · 3 months ago
wnoise · 3 months ago
That's for Charlie Kirk et al, not Jesse Singal.
wnoise commented on Protobuffers Are Wrong (2018)   reasonablypolymorphic.com... · Posted by u/b-man
ericpauley · 4 months ago
I lost the plot here when the author argued that repeated fields should be implemented as in the pure lambda calculus...

Most of the other issues in the article can be solved by wrapping things in more messages. Not great, not terrible.

As with the tightly-coupled issues with Go, I'll keep waiting for a better approach any decade now. In the meantime, both tools (for all their glaring imperfections) work well enough, solve real business use cases, and have a massive ecosystem moat that makes them easy to work with.

wnoise · 4 months ago
They didn't. Pure lambda calculus would have been "a function that, when applied to a number encoded as a function, extracts that value".

They did it essentially the way linked lists, C strings, or UTF-8 characters do: "current data, and is there more (next pointer, non-null byte, continuation bit set)?" They also noted that it could have this semantics without necessarily following this implementation encoding, though that seems like a dodge to me; a length-prefixed array is a perfectly fine primitive to have, and shouldn't be inferred from something that can map to it.
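
To make the contrast concrete, here's a minimal Python sketch (hypothetical encoders, not protobuf's actual wire format) of the two shapes being discussed:

  def encode_chained(items):
      # The linked-list / C-string / continuation-bit shape described
      # above: each element carries an "is there more?" flag.
      return [(x, i < len(items) - 1) for i, x in enumerate(items)]

  def encode_length_prefixed(items):
      # The alternative: announce the count up front, then the payloads.
      return (len(items), list(items))

  print(encode_chained([10, 20, 30]))          # [(10, True), (20, True), (30, False)]
  print(encode_length_prefixed([10, 20, 30]))  # (3, [10, 20, 30])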

wnoise commented on What Is the Fourier Transform?   quantamagazine.org/what-i... · Posted by u/rbanffy
idiotsecant · 4 months ago
Everyone loves the Fourier transform because it's easy to understand, but everyone ignores the Laplace transform, which is much more beautiful, imo, and quite related.
wnoise · 4 months ago
They are quite related, but the Fourier transform seems far more beautiful and generalizable: you can do 2-d, 3-d, etc. transforms, and they automatically respect the symmetries of the problems (e.g. rotating the coordinate system rotates the Fourier transform in a corresponding way; frequencies and wave-vectors have meanings). This fully extends to any "nice" abelian group satisfying minor technical conditions, where the mapping is to its dual group. It even mostly extends to non-abelian groups (representation theory), though some nice properties are lost.
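
A minimal sketch of that abelian-group view, for the cyclic group Z_n (whose dual is again Z_n): projecting onto the group's characters reproduces the ordinary DFT.

  import numpy as np

  def character_transform(f):
      # Fourier transform on Z_n: project f onto the characters
      # chi_k(x) = exp(-2*pi*i*k*x/n), one per element of the dual group.
      n = len(f)
      x = np.arange(n)
      return np.array([np.sum(f * np.exp(-2j * np.pi * k * x / n))
                       for k in range(n)])

  f = np.array([1.0, 2.0, 0.0, -1.0])
  print(np.allclose(character_transform(f), np.fft.fft(f)))  # True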

The Laplace transform shines in having nicer convergence properties in some specific cases. While those are extremely valuable for control problems, it really is a much more specialized theory, not nearly as widely applicable. (You can come up with n-d versions; the obvious thing to do is copy the Fourier case and iteratively Laplace transform on each coordinate. But the special role of one direction, directly in the unilateral case or indirectly via growth properties in the bilateral case, makes it hard to argue that this can develop into something more unifying; the domain isn't preserved under rotation.)
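
For a feel of that distinguished direction, here's a quick numeric sanity check (a sketch using a plain Riemann sum) that the unilateral transform of f(t) = e^{-at} is 1/(s+a); note the integral runs over t >= 0 only.

  import numpy as np

  # Unilateral Laplace transform L{f}(s) = integral over t >= 0 of
  # f(t) * exp(-s*t) dt, evaluated numerically for f(t) = exp(-a*t).
  a, s = 1.0, 2.0
  t, dt = np.linspace(0.0, 40.0, 400001, retstep=True)
  numeric = np.sum(np.exp(-(a + s) * t)) * dt
  print(numeric, 1.0 / (s + a))  # both ~ 0.3333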

wnoise commented on Why language models hallucinate   openai.com/index/why-lang... · Posted by u/simianwords
ACCount37 · 4 months ago
Humans can't "inspect their own weights and examine the contents" either.

No human has ever managed to read out his connectome without external instrumentation. There were entire human civilizations that thought that the seat of consciousness was the heart - which, for creatures that claim to know how their own minds work, is a baffling error to make.

LLMs are quite similar to humans in that. They, too, have no idea what their hidden size is, or how many weights they have, or how exactly the extra modalities are integrated into them, or whether they're MoE or dense. They're incredibly ignorant of their own neural architecture. And if you press them on it, they'll guess, and they'll often be wrong.

The difference between humans and LLMs comes down to the training data. Humans learn continuously - they remember what they've seen and what they haven't, they try things, they remember the outcomes, and get something of a grasp (and no, it's not anything more than "something of a grasp") of how solid or shaky their capabilities are. LLMs split training and inference in two, and their trial-and-error doesn't extend beyond a context window. So LLMs don't get much of that "awareness of their own capabilities" by default.

So the obvious answer is to train that awareness in. Easier said than done. You need to, essentially, use a training system to evaluate an LLM's knowledge systematically, and then wire the awareness of the discovered limits back into the LLM.
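
As a toy illustration of that loop (purely hypothetical; nothing here is OpenAI's actual pipeline): probe the model on questions with known answers, then turn the misses into abstention training pairs.

  class StubModel:
      """Stand-in for a real LLM: answers from a fixed lookup table,
      guessing confidently when it doesn't know."""
      def __init__(self, beliefs):
          self.beliefs = beliefs

      def generate(self, question):
          return self.beliefs.get(question, "Paris")  # confident wrong guess

  def build_awareness_data(model, probes):
      examples = []
      for question, truth in probes:
          answer = model.generate(question)
          if answer.strip().lower() == truth.strip().lower():
              examples.append((question, answer))           # keep correct behavior
          else:
              examples.append((question, "I don't know."))  # train toward abstention
      return examples

  model = StubModel({"Capital of France?": "Paris"})
  probes = [("Capital of France?", "Paris"),
            ("Capital of Freedonia?", "Fredonia City")]
  print(build_awareness_data(model, probes))
  # [('Capital of France?', 'Paris'), ('Capital of Freedonia?', "I don't know.")]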

OpenAI has a limited-scope version of this in use for GPT-5 right now.

wnoise · 4 months ago
No, humans can't inspect their own weights either -- but we're not LLMs, and we don't store all knowledge implicitly as probabilities for the next output token. It's pretty clear that we also store some knowledge explicitly, and can recall the context of that knowledge.

(To be sure, there are plenty of cases where it is clear that we are only making up stories after the fact about why we said or did something. But sometimes we do actually know and that reconstruction is accurate.)

wnoise commented on "None of These Books Are Obscene": Judge Strikes Down Much of FL's Book Ban Bill   bookriot.com/penguin-rand... · Posted by u/healsdata
bigfishrunning · 4 months ago
I think the problem with these laws is that they're too general. I think we can all agree that there are topics that should not be in elementary school libraries -- I don't think my 7 year old needs to be reading about oral sex, for instance, regardless of the gender or sexuality of the participants. The real problem is the wording of "pornographic", which is poorly defined ("I know it when I see it") and stretched by disingenuous people with an agenda.

As a "Free Speech Absolutionist", I think as much material as possible should be in public libraries, including material that some people object to. I also think that school libraries should be curated to what is appropriate for the audience. The rub here is defining what is "appropriate". Silencing minority literature is bad. Also allowing my elementary school kids to check out "the turner diaries" is bad. There needs to be a balance.

wnoise · 4 months ago
Topics? No, I don't agree with that. Almost any subject can be treated in an age-appropriate manner.

There's hardly any topic a 7-year-old needs to read about; exclude every subject that fails that test and you're left with a nearly empty library.

For that heavy-handed a response to be _legally mandated_ requires not just "no need", but some strong evidence of harm. Mentions of sex, oral or otherwise, don't actually have much evidence of harm behind them. Certain treatments of the subject might -- but that's not what the law targets, nor what it can effectively target. It covers mere mentions or small bits of explicit language, even where that is necessary for the effect of the book. These can and do make parents profoundly uncomfortable, though, and that is worth taking into consideration.

I would think the usual approach would work far better: professional librarians curating based on their own judgment, subject to some oversight from local school boards to take into account these discomforts, which are valid even if the fears behind them are largely baseless.

wnoise commented on Show HN: Kitten TTS – 25MB CPU-Only, Open-Source TTS Model   github.com/KittenML/Kitte... · Posted by u/divamgupta
wewewedxfgdf · 5 months ago
say is only 193K on macOS

  ls -lah /usr/bin/say
  -rwxr-xr-x  1 root  wheel   193K 15 Nov  2024 /usr/bin/say
Usage:

  M1-Mac-mini ~ % say "hello world this is the kitten TTS model speaking"

wnoise · 5 months ago
And what dynamic libraries is it linked to? And what other data are those pulling in?
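
One way to actually check the first question, assuming macOS's otool developer tool is installed (a sketch via Python's subprocess):

  import subprocess

  # List the dynamic libraries /usr/bin/say links against; any speech
  # frameworks it pulls in are where the real bulk would live.
  out = subprocess.run(["otool", "-L", "/usr/bin/say"],
                       capture_output=True, text=True)
  print(out.stdout)
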
wnoise commented on GPUHammer: Rowhammer attacks on GPU memories are practical   gpuhammer.com/... · Posted by u/jonbaer
MadnessASAP · 5 months ago
Given that I wasn't surprised by the headline, I have to imagine that Nvidia engineers were also well aware.

Nothing is perfect, everything has its failure conditions. The question is where do you choose to place the bar? Do you want your component to work at 60, 80, or 100C? Do you want it to work in high radiation environments? Do you want it to withstand pathological access patterns?

So in other words, there isn't a sufficient market for GPUs that are resilient to rowhammer attacks at double the $/GB of RAM to justify manufacturing them.

wnoise · 5 months ago
The idea of pathological RAM access patterns is as ridiculous as the idea of pathological division of floating point numbers (https://en.wikipedia.org/wiki/Pentium_FDIV_bug). The spec of RAM is to be able to store anything in any order, reliably. They failed the spec.
wnoise commented on US Supreme Court limits federal judges' power to block Trump orders   theguardian.com/us-news/2... · Posted by u/leotravis10
ReptileMan · 6 months ago
>According to the Supreme Court, that's exactly what it does. The President simply isn't accountable.

The president absolutely is accountable. The problem is that Congress, for its own reasons, refuses to hold him to account. Congress could remove any president in less than 24 hours with a simple majority, for no reason whatsoever.

wnoise · 6 months ago
They cannot. It requires a simple majority in the House followed by a 2/3 majority in the Senate. This is basically impossible to achieve.
wnoise commented on US Supreme Court limits federal judges' power to block Trump orders   theguardian.com/us-news/2... · Posted by u/leotravis10
tzs · 6 months ago
> Any judge in the country based on their own subjective politics can also create a precedent by ruling a certain way, and that single precedent might be used even a hundred years later.

District courts do not create precedent. Precedent comes from appellate courts.

wnoise · 6 months ago
They do not create binding precedent. They can create so-called persuasive precedent -- something other courts (and the same district in subsequent opinions) can and do cite when they don't disagree.
