Readit News
wnoise commented on I beg you to follow Crocker's Rules, even if you will be rude to me   lr0.org/blog/p/crocker/... · Posted by u/ghd_
d-us-vb · a day ago
I agree with the sentiment, and with the effect its adherents want to have, but...

Why not just

"Communicate clearly"?

- Don't add fluff

- write as plainly as possible

- write as precisely as is reasonable

- Only make reasonable assumptions about the reader

- Do your best to anticipate ambiguity and proactively disambiguate. (Because your readers may assume that if they don't understand you, what you wrote isn't for them.)

- Don't be selfish or self-centered; pay attention to the other humans because a significant amount of communication happens in nuance no matter how hard we try to minimize it.

wnoise · a day ago
Because those are far more general than what he is asking for, and what he is asking for will usually not be seen as covered by your generalization.
wnoise commented on “This is not the computer for you”   samhenri.gold/blog/202603... · Posted by u/MBCook
pjerem · 2 days ago
The same Asahi developers also wrote about how Apple didn’t document anything and, especially, never talked in public about this. Apple being Apple, if they had cared a single second about this, they would have called it Bootcamp 2.

Honestly, I’m pretty convinced that this « open » bootloader was just there to avoid criticism and bad press from specialized outlets when they presented the M1, because, for once, they needed specialized outlets to benchmark the M1’s performance and not have anything bad to say about anything else.

They constantly break everything year after year without documenting any change, which effectively makes Asahi unusable on anything recent.

I’m betting that they are just patiently waiting for Asahi to die by falling several years behind (which is already the case), so they can announce « The most secure Mac ever », silently shipping with a closed bootloader once nobody, and especially the press, cares anymore.

Don’t get me wrong, I love Asahi and I even have it installed on my M2 Air, the project is doing incredible quality work. But I don’t believe it will last long. Hope I’m wrong, though.

wnoise · 2 days ago
To be clear, "Apple" is a group, not a unified thing with one will.

That doesn't mean that the engineers will necessarily ship something more flexible than what the PMs asked for. Often not.

But sometimes they will.

wnoise commented on EU Council Approves New "Chat Control" Mandate Pushing Mass Surveillance   reclaimthenet.org/eu-coun... · Posted by u/fragebogen
iso1631 · 3 months ago
Which ones are unelected - the democratically elected heads of the member state governments? Or the democratically elected members of the EU parliament?

Or the commissioners that are appointed by the democratically elected heads of the member state governments?

wnoise · 3 months ago
The commissioners?
wnoise commented on User ban controversy reveals Bluesky’s decentralized aspiration isn’t reality   plus.flux.community/p/ban... · Posted by u/gregsadetsky
felixgallo · 5 months ago
wnoise · 5 months ago
That's for Charlie Kirk et al, not Jesse Singal.
wnoise commented on Protobuffers Are Wrong (2018)   reasonablypolymorphic.com... · Posted by u/b-man
ericpauley · 6 months ago
I lost the plot here when the author argued that repeated fields should be implemented as in the pure lambda calculus...

Most of the other issues in the article can be solved be wrapping things in more messages. Not great, not terrible.

As with the tightly-coupled issues with Go, I'll keep waiting for a better approach any decade now. In the meantime, both tools (for their glaring imperfections) work well enough, solve real business use cases, and have a massive ecosystem moat that makes them easy to work with.

wnoise · 6 months ago
They didn't. Pure lambda calculus would have been "a function that when applied to a number encoded as a function, extracts that value".

They did it essentially the way linked lists, C strings, and UTF-8 characters do: "current data, plus 'is there more?' (next pointer, non-null byte, continuation bit set)". They also noted that it could have these semantics without necessarily following this implementation encoding, though that seems like a dodge to me; a length-prefixed array is a perfectly fine primitive to have, and shouldn't be inferred from something that merely can map to it.
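The continuation-bit scheme described above is the same one protobuf uses for its base-128 varints. A minimal sketch in Python (hypothetical helper names, not protobuf library code):

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative int: low 7 bits per byte, high bit = 'more follows'."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit set: more bytes follow
        else:
            out.append(byte)         # high bit clear: this is the last byte
            return bytes(out)

def decode_varint(data: bytes) -> int:
    """Accumulate 7-bit groups until a byte with the continuation bit clear."""
    result = 0
    for shift, byte in enumerate(data):
        result |= (byte & 0x7F) << (7 * shift)
        if not byte & 0x80:
            break
    return result
```

For example, 300 encodes to the two bytes `AC 02`: the low 7 bits (0x2C) with the continuation bit set, then the remaining bits (0x02) with it clear.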

wnoise commented on What Is the Fourier Transform?   quantamagazine.org/what-i... · Posted by u/rbanffy
idiotsecant · 6 months ago
Everyone loves the fourier transform because it's easy to understand but everyone ignores the laplace transform, which is much more beautiful, imo, and quite related.
wnoise · 6 months ago
They are quite related, but the Fourier transform seems far more beautiful and generalizable: you can do 2-d, 3-d, etc. transforms, and they automatically respect the symmetries of the problems (e.g. rotating the coordinate system rotates the Fourier transform in a corresponding way; frequencies and wave-vectors have meanings). This fully extends to any "nice" abelian group satisfying minor technical conditions, where the mapping is to its dual group. It even mostly extends to non-abelian groups (representation theory), though some nice properties are lost.
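The rotation property mentioned above can be written out explicitly. With the convention $\hat{f}(\mathbf{k}) = \int f(\mathbf{x})\, e^{-i\mathbf{k}\cdot\mathbf{x}}\, d^n x$, an orthogonal $R$ has $\mathbf{k}\cdot R^{-1}\mathbf{y} = (R\mathbf{k})\cdot\mathbf{y}$ and unit Jacobian, so substituting $\mathbf{y} = R\mathbf{x}$ gives:

```latex
\widehat{f \circ R}(\mathbf{k})
  = \int f(R\mathbf{x})\, e^{-i\mathbf{k}\cdot\mathbf{x}}\, d^n x
  = \int f(\mathbf{y})\, e^{-i(R\mathbf{k})\cdot\mathbf{y}}\, d^n y
  = \hat{f}(R\mathbf{k})
```

That is, rotating the input rotates the transform by the same rotation.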

The Laplace transform shines in having nicer convergence properties in some specific cases. While those are extremely valuable for control problems, it really is a much more specialized theory, not nearly as widely applicable. (You can come up with n-d versions: the obvious thing is to copy the Fourier case and iteratively Laplace transform on each coordinate. But the special role of one direction, directly in the unilateral case or indirectly via growth properties in the bilateral case, makes it hard to argue that this can develop into something more unifying; the domain isn't preserved under rotation.)

wnoise commented on Why language models hallucinate   openai.com/index/why-lang... · Posted by u/simianwords
ACCount37 · 6 months ago
Humans can't "inspect their own weights and examine the contents" either.

No human has ever managed to read out his connectome without external instrumentation. There were entire human civilizations that thought that the seat of consciousness was the heart - which, for creatures that claim to know how their own minds work, is a baffling error to make.

LLMs are quite similar to humans in that. They, too, have no idea what their hidden size is, how many weights they have, how exactly the extra modalities are integrated into them, or whether they're MoE or dense. They're incredibly ignorant of their own neural architecture. And if you press them on it, they'll guess, and they'll often be wrong.

The difference between humans and LLMs comes down to the training data. Humans learn continuously - they remember what they've seen and what they haven't, they try things, they remember the outcomes, and get something of a grasp (and no, it's not anything more than "something of a grasp") of how solid or shaky their capabilities are. LLMs split training and inference in two, and their trial-and-error doesn't extend beyond a context window. So LLMs don't get much of that "awareness of their own capabilities" by default.

So the obvious answer is to train that awareness in. Easier said than done. You need to, essentially, use a training system to evaluate an LLM's knowledge systematically, and then wire the awareness of the discovered limits back into the LLM.

OpenAI has a limited-scope version of this in use for GPT-5 right now.

wnoise · 6 months ago
No, humans can't inspect their own weights either -- but we're not LLMs, and we don't store all knowledge implicitly as next-token output probabilities. It's pretty clear that we also store some knowledge explicitly, and can include context about that knowledge.

(To be sure, there are plenty of cases where it is clear that we are only making up stories after the fact about why we said or did something. But sometimes we do actually know and that reconstruction is accurate.)

wnoise commented on "None of These Books Are Obscene": Judge Strikes Down Much of FL's Book Ban Bill   bookriot.com/penguin-rand... · Posted by u/healsdata
bigfishrunning · 7 months ago
I think the problem with these laws is that they're too general. I think we can all agree that there are topics that should not be in elementary school libraries -- I don't think my 7 year old needs to be reading about oral sex for instance, regardless of the gender or sexuality of the participants. The real problem is the nature of the wording of "pornographic", which is poorly defined as "I know it when i see it", and stretched by disingenuous people with an agenda.

As a "Free Speech Absolutist", I think as much material as possible should be in public libraries, including material that some people object to. I also think that school libraries should be curated to what is appropriate for the audience. The rub here is defining what is "appropriate". Silencing minority literature is bad. Also allowing my elementary school kids to check out "the turner diaries" is bad. There needs to be a balance.

wnoise · 7 months ago
Topics? No, I don't agree with that. Almost any subject can be treated in an age-appropriate manner.

A 7-year-old doesn't need to read about nearly any topic. Excluding any mention of all of those subjects from the school library leaves a nearly empty library.

For that heavy-handed of a response to be _legally mandated_ requires not just "no need", but some strong evidence of harm. Mentions of sex, oral or otherwise, don't actually have much evidence of harm. Certain treatments of it might -- but that's not what the law targets, nor can it effectively target. It covers mere mentions or small bits of explicit language, even where that is necessary for the effect of the book. These can and do make parents profoundly uncomfortable, though, and that is worth taking into consideration.

I would think the usual approach would be far better: professional librarians curating based on their own judgement, subject to some oversight from local school boards to take these discomforts into account -- valid feelings, though largely baseless fears.

wnoise commented on Show HN: Kitten TTS – 25MB CPU-Only, Open-Source TTS Model   github.com/KittenML/Kitte... · Posted by u/divamgupta
wewewedxfgdf · 7 months ago
say is only 193K on macOS

  ls -lah /usr/bin/say
  -rwxr-xr-x  1 root  wheel   193K 15 Nov  2024 /usr/bin/say
Usage:

  M1-Mac-mini ~ % say "hello world this is the kitten TTS model speaking"

wnoise · 7 months ago
And what dynamic libraries is it linked to? And what other data are they pulling in?
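One way to check (a sketch: `otool -L` ships with Apple's command line developer tools; `ldd` is the rough Linux analogue, shown against `/bin/ls` since `say` doesn't exist there):

```shell
#!/bin/sh
# List the dynamic libraries a binary is linked against.
# On macOS the speech frameworks pulled in this way won't show up
# in the 193K size of /usr/bin/say itself.
if [ "$(uname)" = "Darwin" ]; then
  otool -L /usr/bin/say
else
  ldd /bin/ls
fi
```

Each line of output names a shared library resolved at load time; the on-disk size of the executable excludes all of them, plus any data files those frameworks load.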
wnoise commented on GPUHammer: Rowhammer attacks on GPU memories are practical   gpuhammer.com/... · Posted by u/jonbaer
MadnessASAP · 8 months ago
Given that I wasn't surprised by the headline, I have to imagine that Nvidia engineers were also well aware.

Nothing is perfect, everything has its failure conditions. The question is where do you choose to place the bar? Do you want your component to work at 60, 80, or 100C? Do you want it to work in high radiation environments? Do you want it to withstand pathological access patterns?

So in other words, there isn't a sufficient market for GPUs that are resilient to rowhammer attacks at double the $/GB of RAM to justify manufacturing them.

wnoise · 8 months ago
The idea of pathological RAM access patterns is as ridiculous as the idea of pathological division of floating point numbers. ( https://en.wikipedia.org/wiki/Pentium_FDIV_bug ). The spec of RAM is to be able to store anything in any order, reliably. They failed the spec.
