Readit News
orlp commented on Why is the sky blue?   explainers.blog/posts/why... · Posted by u/udit99
munificent · 4 days ago
Really cool article! Tangential:

> “Scattering” is the scientific term of art for molecules deflecting photons. Linguistically, it’s used somewhat inconsistently. You’ll hear both “blue light scatters more” (the subject is the light) and “atmospheric molecules scatter blue light more” (the subject is the molecule). In any case, they mean the same thing.

There's nothing ambiguous or inconsistent about this. In English a verb is transitive if it takes one or more objects in addition to the subject. In "Anna carries a book", "carries" is transitive. A verb is intransitive if it takes no object, as with "jumps" in "The frog jumps."

Many verbs in English are "ambitransitive" where they can either take an object or not, and the meaning often shifts depending on how it's used. There is a whole category of verbs called "labile verbs" where the subject of the intransitive form becomes the object of the transitive form:

* Intransitive: The bell rang.

* Transitive: John rang the bell.

"Scatter" is simply a labile verb:

* Intransitive: Blue light scatters.

* Transitive: Atmospheric molecules scatter blue light more.

orlp · 3 days ago
In modern usage (e.g. in gaming communities) "carries" has become not only ambitransitive but also a noun.

If something "carries" or is "a carry", it means it is so strong it metaphorically carries the rest of the setup with it. For example:

> This card carries.

> These two are the carries of the team.

orlp commented on Attention at Constant Cost per Token via Symmetry-Aware Taylor Approximation   arxiv.org/abs/2602.00294... · Posted by u/fheinsen
logicchains · 9 days ago
It can't be successful at that any more than 1+1 can equal 3. Fundamentally, if every token wants to be able to look at every previous token without loss of information, it must be O(n^2); N tokens looking at N tokens is quadratic. Any sub-quadratic attention must hence necessarily lose some information and be unable to support perfect recall on longer sequences.
orlp · 9 days ago
> N tokens looking at N tokens is quadratic

Convolving two arrays can be done perfectly accurately in O(n log n), despite every element being combined with every other element.
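To make the convolution point concrete, here is a pure-Python Cooley-Tukey FFT sketch of my own (not from the thread; a real program would use `numpy.convolve` or `scipy.signal.fftconvolve`). Every output coefficient depends on many (i, j) pairs, yet the whole thing runs in O(n log n):

```python
import cmath

def fft(a):
    # Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def ifft(a):
    # Inverse FFT via the conjugation trick.
    n = len(a)
    conj = fft([x.conjugate() for x in a])
    return [x.conjugate() / n for x in conj]

def convolve(a, b):
    # Pad to a power of two, multiply pointwise in the frequency domain.
    need = len(a) + len(b) - 1
    n = 1
    while n < need:
        n *= 2
    fa = fft(a + [0] * (n - len(a)))
    fb = fft(b + [0] * (n - len(b)))
    prod = [x * y for x, y in zip(fa, fb)]
    # Integer inputs give integer outputs; rounding removes float noise.
    return [round(x.real) for x in ifft(prod)[:need]]

assert convolve([1, 2, 3, 4], [5, 6, 7]) == [5, 16, 34, 52, 45, 28]
```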

Or consider the even more basic sum of products a[i] * b[j] for all possible i, j:

    total = 0
    for i in range(len(a)):
        for j in range(len(b)):
            total += a[i] * b[j]
This can be computed in linear time as sum(a) * sum(b).
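A quick sanity check of that identity on random data (my own sketch, not part of the original comment):

```python
import random

a = [random.randint(-9, 9) for _ in range(100)]
b = [random.randint(-9, 9) for _ in range(100)]

# Quadratic: explicitly combine every (i, j) pair.
quadratic = sum(a[i] * b[j] for i in range(len(a)) for j in range(len(b)))

# Linear: the same value by the distributive law.
linear = sum(a) * sum(b)

assert quadratic == linear
```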

Your logic that 'the result contains terms of all pairs, therefore the algorithm must be quadratic' simply doesn't hold.

orlp commented on The largest number representable in 64 bits   tromp.github.io/blog/2026... · Posted by u/tromp
tromp · 11 days ago
Please, no more comments to the effect of "I can define a much larger number in only 1 bit". What makes my blog post (hopefully) interesting is that I consider tiny programs for computing huge numbers in non-cheating languages that are not specifically equipped for doing so.
orlp · 11 days ago
An interesting follow-up question is: what is the smallest number that cannot be encoded in 64 bits of binary lambda calculus?
orlp commented on Binary fuse filters: Fast and smaller than xor filters (2022)   arxiv.org/abs/2201.01174... · Posted by u/redbell
Sesse__ · 22 days ago
As far as I can see, these classes of filters (including xor filters) have some practical issues for many applications: they can become full (refuse new entries altogether), and they need to know all the elements up front (no incremental inserts). Is there anything more modern than Bloom filters that doesn't have these restrictions?

I'm especially fond of tiny filters; a well-placed 32- or 64-bit Bloom filter can be surprisingly effective in the right context!

orlp · 21 days ago
Bloom filters also become full.

As a Bloom filter fills up, its false positive rate goes up. Once the false positive rate crosses your threshold of unacceptability, the Bloom filter is full and you can no longer insert into it.

That most interfaces still let you do something that looks like an insert is an interface failure, not a Bloom filter feature.

If you find this controversial and want to reply "I don't have a threshold of unacceptability", I'll counter that a false positive rate of 100% will be reached eventually. And if you still find that acceptable, you can trivially modify any probabilistic filter to "never become full" by replacing the "is full" error condition with setting a flag that makes all future queries return a false positive.
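The fill-up effect is easy to demonstrate. Below is a hypothetical 64-bit Bloom filter of my own (`TinyBloom` is an illustration, not a library API): the more items go in, the more never-inserted probes come back as false positives.

```python
import hashlib

class TinyBloom:
    """A tiny Bloom filter: m bits, k hash positions per item (illustrative only)."""

    def __init__(self, m=64, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k bit positions from one SHA-256 digest.
        h = hashlib.sha256(str(item).encode()).digest()
        return [int.from_bytes(h[4 * i : 4 * i + 4], "big") % self.m
                for i in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def contains(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

def false_positive_rate(filt, probes):
    # Query items that were never inserted and count how many "hit".
    return sum(filt.contains(p) for p in probes) / len(probes)

few, many = TinyBloom(), TinyBloom()
for i in range(8):
    few.add(i)
for i in range(64):
    many.add(i)

probes = range(10_000, 10_500)  # never inserted into either filter
assert false_positive_rate(few, probes) < false_positive_rate(many, probes)
```

After 64 insertions nearly all 64 bits are set, so almost every query "hits": the filter is full in every practical sense, even though `add` still happily accepts items.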

orlp commented on Binary fuse filters: Fast and smaller than xor filters (2022)   arxiv.org/abs/2201.01174... · Posted by u/redbell
djmips · 22 days ago
This feels like a better xor filter implementation.
orlp · 22 days ago
Same author.
orlp commented on Scaling long-running autonomous coding   cursor.com/blog/scaling-a... · Posted by u/samwillis
trjordan · a month ago
This is going to sound sarcastic, but I mean this fully: why haven't they merged that PR?

The implied future here is _unreal cool_. Swarms of coding agents that can build anything, with little oversight. Long-running projects that converge on high-quality, complex projects.

But the examples feel thin. Web browsers, Excel, and Windows 7 exist, and they specifically exist in the LLM's training sets. The closest to real code is what they've done with Cursor's codebase... but it's not merged yet.

I don't want to say "call me when it's merged". But I'm not worried about agents' ability to produce millions of lines of code. I'm worried about their ability to intersect with the humans in the real world, both as users of that code and as developers who want to build on top of it.

orlp · a month ago
> Long-running projects that converge on high-quality, complex projects

In my experience agents don't converge on anything. They diverge into low-quality monstrosities which at some point become entirely unusable.

orlp commented on Rue: Higher level than Rust, lower level than Go   rue-lang.dev/... · Posted by u/ingve
wswin · 2 months ago
Rust gets harder with codebase size because of the borrow checker. Not to mention that most of the communication libraries decided to be async-only, which adds another layer of complexity.
orlp · 2 months ago
I work in a 400k+ LOC codebase in Rust for my day job. Besides compile times being suboptimal, Rust makes working in a large codebase a breeze with good tooling and strong typechecking.

I almost never even think about the borrow checker. If you have a long-lived shared reference, you just Arc it. If it's a circular ownership structure like a graph, you use a SlotMap. It is by no means harder in this codebase than in small ones.

orlp commented on Ireland’s Diarmuid Early wins world Microsoft Excel title   bbc.com/news/articles/cj4... · Posted by u/1659447091
lysace · 2 months ago
Programming efficiency isn’t about typing/editing fast - it’s about great decision-making. Although I have seen the combo of both working out very well.

If you focus on fast typing/editing skills to level up, but still have bad decision-making skills, you'll just end up burying yourself (and possibly your team) faster and more decisively. (I have seen that, too.)

orlp · 2 months ago
The person you replied to stated:

> how productive power users in different [fields] can be with their tools

There are a lot more tools in programming than your text editor. Linters, debuggers, AI assistants, version control, continuous integration, etc.

I personally know I'm terrible at using debuggers. Is this a shortcoming of mine? Probably. But I also feel debuggers could be a lot, lot better than they are right now.

I think for a lot of us, reflecting on our workflow and spotting things we do that could be done more efficiently with better tooling (or better use of it) could pay off.

orlp commented on Slowness is a virtue   blog.jakobschwichtenberg.... · Posted by u/jakobgreenfeld
isolli · 2 months ago
This reminds me of a question I had when I played chess for a couple of years. I was a lot better (as evidenced by my ELO score on chess.com) when playing long games (1 turn per day) than short games (say, half an hour total).

At the time, I read that everybody is better at "slow" chess. But does that explanation make sense? If everybody is better, shouldn't my ELO score have stayed the same?

orlp · 2 months ago
When "everybody is better", you can still increase your relative rank to other people if you benefit even more.

For example if I were to give $1 to every person on earth, but $100 million to you, everyone would be richer but you would be a lot richer still.

u/orlp

Karma: 8504 · Cake day: March 27, 2013
About
Developer at https://pola.rs/.

I publish a blog at https://orlp.net/blog/.

Other socials:

    http://github.com/orlp/  
    https://stackoverflow.com/users/565635/orlp  
    https://linkedin.com/in/orson-peters/
