Readit News
wahern commented on The original vi is a product of its time (and its time has passed)   utcc.utoronto.ca/~cks/spa... · Posted by u/ingve
bpye · 11 hours ago
This article is arguing against vi, but most systems install vim by default.
wahern · 10 hours ago
Most systems meaning Linux and macOS. FreeBSD, NetBSD, and OpenBSD use nvi, and BusyBox (Alpine Linux core userland) uses tiny vi.c.
wahern commented on Experts Have World Models. LLMs Have Word Models   latent.space/p/adversaria... · Posted by u/aaronng91
pixl97 · a day ago
Really all you're saying is the human world model is very complex, which is expected as humans are the most intelligent animal.

At no point have I seen anyone here ask the question "What is the minimum viable state of a world model?"

We as humans with our ego seem to state that because we are complex, any introspective intelligence must be as complex as us to be as intelligent as us. Which doesn't seem too dissimilar to saying a plane must flap its wings to fly.

wahern · 20 hours ago
Has any generative AI been demonstrated to exhibit the generalized intelligence (e.g. achieving in a non-simulated environment complex tasks or simple tasks in novel environments) of a vertebrate, or even a higher-order non-vertebrate? Serious question--I don't know either way. I've had trouble finding a clear answer; what little I have found is highly qualified and caveated once you get past the abstract, much like attempts in prior AI eras.
wahern commented on Thoughts on Generating C   wingolog.org/archives/202... · Posted by u/ingve
yxhuvud · a day ago
"static inline", the best way of getting people doing bindings in other languages to dislike your library (macros are just as bad, FWIW).

I really wish someone on the C language/compiler/linker level took a real look at the problem and actually tried to solve it in a way that isn't a pain to deal with for people that integrate with the code.

wahern · 20 hours ago
> I really wish someone on the C language/compiler/linker level took a real look at the problem and actually tried to solve it in a way that isn't a pain to deal with for people that integrate with the code.

It exists as "inline" and "extern inline".[1] Few people make use of them, though, partly because the semantics standardized by C99 were the complete opposite of the GCC extensions at the time, so for years people were warned to avoid them; and partly because linkage matters are a kind of black magic people avoid whenever possible--"static inline" neatly avoids needing to think about it.

[1] See C23 6.7.5 at https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3220.pdf#p...

wahern commented on Experts Have World Models. LLMs Have Word Models   latent.space/p/adversaria... · Posted by u/aaronng91
kenjackson · a day ago
I think the commenter is saying that they will combine a world model with the word model. The resulting combination may be sufficient for very solid results.

Note humans generate their own non-complete world model. For example there are sounds and colors we don’t hear or see. Odors we don’t smell. Etc…. We have an incomplete model of the world, but we still have a model that proves useful for us.

wahern · a day ago
> they will combine a world model with the word model.

This takes "world model" far too literally. Audio-visual generative AI models that create non-textual "spaces" are not world models in the sense the previous poster meant. I think what they meant by world model is that the vast majority of the knowledge we rely upon to make decisions is tacit, not something that has been digitized, and not something we even know how to meaningfully digitize and model. And even describing it as tacit knowledge falls short; a substantial part of our world model is rooted in our modes of actions, motivations, etc, and not coupled together in simple recursive input -> output chains. There are dimensions to our reality that, before generative AI, didn't see much systematic introspection. Afterall, we're still mired in endless nature v. nurture debates; we have a very poor understanding about ourselves. In particular, we have extremely poor understanding of how we and our constructed social worlds evolve dynamically, and it's that aspect of our behavior that drives the frontier of exploration and discovery.

OTOH, the "world model" contention feels tautological, so I'm not sure how convincing it can be for people on the other side of the debate.

wahern commented on Claude’s C Compiler vs. GCC   harshanu.space/en/tech/cc... · Posted by u/unchar1
mahmoudimus · a day ago
I think you're referring to this one: https://github.com/jhjourdan/C11parser
wahern · a day ago
What I had specifically in mind definitely wasn't using OCaml or Menhir, but that's a very useful resource, as is the associated paper, "A simple, possibly correct LR parser for C11", https://jhjourdan.mketjh.fr/pdf/jourdan2017simple.pdf

This is closer to what I remember, but I'm not convinced it's what I had in mind, either: https://github.com/edubart/lpegrex/blob/main/parsers/c11.lua It uses LPeg's match-time capture feature (not a pure PEG construct) to dynamically record typedefs and condition subsequent matches. In fact, it's effectively identical to what C11Parser is doing, down to the two dynamically invoked helper functions: declare_typedefname/is_typedefname vs set_typedef/is_typedef. C11Parser and the paper are older, so maybe the lpegrex parser is derivative. (And probably what I had in mind, if not lpegrex, was derivative, too.)

wahern commented on Claude’s C Compiler vs. GCC   harshanu.space/en/tech/cc... · Posted by u/unchar1
hackyhacky · 2 days ago
What is the typedef problem?
wahern · 2 days ago
Lexically, parsing C is simple, except that typedefs technically make it non-context-free. See https://en.wikipedia.org/wiki/Lexer_hack When handwriting a parser, it's no big deal, but it's often a stumbling block for parser generators or other formal approaches. Though, I recall there's a PEG-based parser for C99/C11 floating around that was supposed to be compliant. But I'm having trouble finding a link, and maybe it was using something like LPeg, which has features beyond pure PEG that help with context-dependent parsing.
wahern commented on Running Your Own AS: BGP on FreeBSD with FRR, GRE Tunnels, and Policy Routing   blog.hofstede.it/running-... · Posted by u/todsacerdoti
rnhmjoj · 2 days ago
> MSS clamping is non-negotiable with tunnels. Every layer of encapsulation eats into the MTU.

Can this tunnel be avoided somehow? If I have to choose between owning my prefix and having 1500 MTU, I'd probably take the latter: MTU issues are so annoying to deal with, and MSS-clamping doesn't solve all of them.

wahern · 2 days ago
Yes, this can be avoided. All the standard advice and examples are tailored toward avoiding IP packet fragmentation entirely even when the tunnel transport can encapsulate and transmit packets larger than the underlying path MTU. Mostly this is justified for performance reasons, but it also tends to avoid even more difficult to debug situations where there's an MTU or ICMP issue between tunnel endpoints.

I haven't used WireGuard before, but I believe if you force the wg interface MTU to 1500, things will just work. I use IPsec, where the solution would be to use something like link-layer tunneling that, ironically, adds another layer of encapsulation to the equation. Most tunnel solutions don't directly support fragmentation as part of their protocol, but you get it for free if they use, e.g., UDP or another disjoint IP protocol for transport and don't explicitly disable fragmentation (e.g. by setting the Don't Fragment (DF) flag).

If I were to do this (and I keep meaning to try), I might still lower the MSS on my server(s) just for performance reasons, but at least the tunnel would otherwise appear seamless externally.

wahern commented on Omega-3 is inversely related to risk of early-onset dementia   pubmed.ncbi.nlm.nih.gov/4... · Posted by u/brandonb
ch4s3 · 2 days ago
Not in the US it’s illegal under the Genetic Information Nondiscrimination Act (GINA) of 2008.
wahern · 2 days ago
GINA prohibits genetic discrimination in pricing common health care insurance, but not for products like life insurance, disability insurance, or even long-term care insurance. Some states have statutes that address the latter types of insurance, though.

wahern commented on Recreating Epstein PDFs from raw encoded attachments   neosmart.net/blog/recreat... · Posted by u/ComputerGuru
pimlottc · 5 days ago
Why not just try every permutation of (1,l)? Let’s see, 76 pages, approx 69 lines per page, say there’s one instance of [1l] per line, that’s only… uh… 2^5244 possibilities…

Hmm. Anyone got some spare CPU time?

wahern · 5 days ago
It should be much easier than that. You should be able to serially test whether each edit decodes to a sane PDF structure, reducing the cost similarly to how you can crack passwords when the server doesn't use a constant-time memcmp. Are PDFs typically compressed by default? If so, that makes it even easier, given the built-in checksums. But it's just not something you can do by throwing data at existing tools. You'll need to build a testing harness with instrumentation deep in the bowels of the decoders. This kind of work is the polar opposite of what AI code generators or naive scripting can accomplish.
