Readit News
Chinjut commented on How many paths of length K are there between A and B? (2021)   horace.io/walks... · Posted by u/jxmorris12
shiandow · 11 hours ago
I think it could generate the minimal polynomial instead. Though it is curious that this would still make it faster for almost all matrices, just not guaranteed to be correct.
Chinjut · 3 hours ago
Note that the article describes this Berlekamp-Massey approach as involving a step of complexity on the order of EV, which is V^3 in the worst case. So this is only beneficial for sparse matrices. It does seem that Berlekamp-Massey is used to compute determinants of sparse matrices efficiently, though without a correctness guarantee, as described at https://en.wikipedia.org/wiki/Block_Wiedemann_algorithm
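To make the EV step concrete, here is a minimal sketch (mine, not from the article; `edges` as an adjacency list is an assumption) of generating the 2V sequence terms that Berlekamp-Massey consumes, one sparse matrix-vector product per term:

    // Generate s_0 .. s_{2V-1}, where s_k = (A^k)[start][end], by
    // repeated sparse mat-vec products. Each pass over the edge lists
    // is O(E + V), so all 2V terms cost on the order of EV.
    function walkCountSequence(edges, start, end) {
      const V = edges.length;
      let vec = new Array(V).fill(0);
      vec[start] = 1;                        // row vector e_start
      const seq = [];
      for (let k = 0; k < 2 * V; k++) {
        seq.push(vec[end]);                  // s_k = (A^k)[start][end]
        const next = new Array(V).fill(0);   // next = vec * A
        for (let u = 0; u < V; u++) {
          for (const w of edges[u]) next[w] += vec[u];
        }
        vec = next;
      }
      return seq; // Berlekamp-Massey recovers the recurrence from these
    }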
Chinjut commented on How many paths of length K are there between A and B? (2021)   horace.io/walks... · Posted by u/jxmorris12
Labo333 · 16 hours ago
CH gives you a recurrence on the matrix. You want a recurrence on an individual element (indexed by [start][end]).
Chinjut · 3 hours ago
Any recurrence that holds on the matrix also holds on each individual element (and vice versa: a recurrence holds on the matrix just in case it holds on every individual element). For example, if A^3 = 2A^2 + A, then (A^3)[i][j] = 2(A^2)[i][j] + A[i][j] for every i and j.
Chinjut commented on How many paths of length K are there between A and B? (2021)   horace.io/walks... · Posted by u/jxmorris12
Chinjut · 21 hours ago
Odd to use Berlekamp-Massey to recover a linear recurrence, when Cayley-Hamilton already directly gives you a linear recurrence: the one given by the characteristic polynomial of the matrix.
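To make this concrete in the smallest case (a sketch of mine, not from the thread): for a 2x2 matrix the characteristic polynomial is x^2 - tr(A)x + det(A), so Cayley-Hamilton gives A^2 = tr(A)A - det(A)I, and every entry sequence s_k = (A^k)[i][j] satisfies the same two-term recurrence:

    // 2x2 illustration: s_{k+2} = tr(A)*s_{k+1} - det(A)*s_k holds
    // for each entry of the matrix powers, by Cayley-Hamilton.
    function walkCount2x2(A, i, j, K) {
      const tr = A[0][0] + A[1][1];
      const det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
      let s0 = (i === j) ? 1 : 0;  // (A^0)[i][j]
      let s1 = A[i][j];            // (A^1)[i][j]
      for (let k = 0; k < K; k++) {
        [s0, s1] = [s1, tr * s1 - det * s0];
      }
      return s0; // (A^K)[i][j]
    }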
Chinjut commented on Prime Number Grid   susam.net/primegrid.html... · Posted by u/todsacerdoti
Chinjut · 6 days ago
Your description here does not quite match your linked code, in that it is not the N-th pack that contains integers spaced out by N. Rather, packs on the N-th row contain integers spaced out by N. For example, the third pack does not contain "every third integer", but rather draws alternating integers just like the second pack, because it is on the second row. The second pack (first cell of the second row) contains {101, 103, 105, ..., 299} and the third pack (second cell of the second row) contains {102, 104, 106, ..., 300}.
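Here is a small sketch of the layout as I read the linked code (the function and its indexing are mine, not from that code): row N holds N packs of 100 integers spaced N apart, interleaved so that the whole row covers 100*N consecutive integers:

    // Pack at row `row`, 1-based column `col` (1 <= col <= row).
    function pack(row, col) {
      const base = 100 * row * (row - 1) / 2; // integers used by earlier rows
      const nums = [];
      for (let k = 0; k < 100; k++) nums.push(base + col + k * row);
      return nums;
    }
    // pack(2, 1) -> [101, 103, ..., 299]
    // pack(2, 2) -> [102, 104, ..., 300]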

With this in mind, the seeming patterns of the figure you link to are explained by https://news.ycombinator.com/item?id=17106193

Chinjut · 6 days ago
My one quibble with the comment I linked is about asymptotics. By the Prime Number Theorem, the density of black squares should asymptotically approach zero and the density of red squares should approach 100%. This includes the left diagonal, which is entirely black in the displayed window, and the regularly appearing rows that are entirely black except for their last cell. Both of these black line patterns are small-number phenomena, caused by (1 - 1/ln(R))^100 being nearly zero for small R; for large R this stops and goes the other way.
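As a rough back-of-envelope check (my numbers, using the standard 1/ln(R) density heuristic rather than anything exact): the chance that a pack of 100 integers near R is prime-free is about (1 - 1/ln(R))^100, which is tiny in the displayed window but tends toward 1 as R grows:

    // Heuristic probability that a 100-number pack near R contains no
    // prime, treating each integer as prime with probability 1/ln(R).
    for (const R of [1e3, 1e6, 1e12, 1e100]) {
      const p = Math.pow(1 - 1 / Math.log(R), 100);
      console.log(R, p.toExponential(2)); // ~1.6e-7, 5.5e-4, 2.5e-2, 6.5e-1
    }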
Chinjut commented on Prime Number Grid   susam.net/primegrid.html... · Posted by u/todsacerdoti
mg · 6 days ago
Here is a strange one:

You look at integers in "packs" of 100. If a pack contains a prime number, you color it black, otherwise you color it red.

The first pack contains 100 consecutive integers. The second every second integer. The third every third integer and so on.

Every pack starts where the last one stopped.

On the first row, you draw 1 pack, on the second 2, on the third 3 and so on:

https://www.gibney.org/parallax_primes

It looks like hieroglyphs from another universe.

I'm still not sure why it looks the way it looks.

If you want to compare it to a random distribution, you can change this line:

    if (isPrime(myNum)) return 1;
To this:

    if (Math.random()>0.99) return 1;
Very different. I wonder where the symmetry and all the other properties of the pattern come from when using primes.

Chinjut · 6 days ago
Your description here does not quite match your linked code, in that it is not the N-th pack that contains integers spaced out by N. Rather, packs on the N-th row contain integers spaced out by N. For example, the third pack does not contain "every third integer", but rather draws alternating integers just like the second pack, because it is on the second row. The second pack (first cell of the second row) contains {101, 103, 105, ..., 299} and the third pack (second cell of the second row) contains {102, 104, 106, ..., 300}.

With this in mind, the seeming patterns of the figure you link to are explained by https://news.ycombinator.com/item?id=17106193

Chinjut commented on AI's changed (is changing) college education   theatlantic.com/technolog... · Posted by u/LAsteNERD
LAsteNERD · 7 days ago
The class of 2026 has had generative AI for their entire college career. What started as a novelty in 2022 has become second nature: surveys show >90% of undergrads now use AI for schoolwork, from drafting essays to summarizing readings.

For students, the motivation is pragmatic: AI saves time, reduces stress, and helps balance overwhelming academic and extracurricular demands. It’s less about “cheating” and more about survival in a system that prizes productivity and credentials. Professors, meanwhile, are scrambling—reverting to handwritten exams, shifting grading toward tests, or trying moral appeals. Yet many remain unaware of just how normalized AI has become on campus.

The result: higher ed has been fundamentally reshaped in just three years. Students expect project-based, real-world assignments that resist AI shortcuts. But with faculty stretched thin by budget cuts, research demands, and political headwinds, systemic redesign feels unlikely. For now, both students and professors face the same reality: a college education is what you make of it—AI included.

If you're wondering--yes, I used AI for the synopsis. The big question for me is: what does the future of education look like? How do kids get the skills they need to use AI, while still getting the skills they need to be skeptical of it?

Chinjut · 7 days ago
If I wanted to read an LLM-generated comment, I'd go to ChatGPT myself.
Chinjut commented on Derivatives, Gradients, Jacobians and Hessians   blog.demofox.org/2025/08/... · Posted by u/ibobev
whatever1 · 8 days ago
I can look around me and find the minimum of anything without tracing its surface and following the gradient. I can also identify immediately global minima instead of local ones.

We all can do it in 2-3D. But our algorithms don’t do it. Even in 2D.

Sure if I was blindfolded, feeling the surface and looking for minimization direction would be the way to go. But when I see, I don’t have to.

What are we missing?

Chinjut · 8 days ago
You're thinking of situations where you are able to see a whole object at once. If you were dealing with an object too large to see all of, you'd have to start making decisions about how to explore it.
Chinjut commented on Why LLMs can't really build software   zed.dev/blog/why-llms-can... · Posted by u/srid
IshKebab · 10 days ago
Yeah it's also kind of funny people discovering all the LLM failure modes and saying "see! humans would never do that! it's not really intelligent!". None of those people have children...
Chinjut · 10 days ago
I don't want a computer that's as unreliable as a child. This is not what originally interested me about computers.
Chinjut commented on GPT-5: "How many times does the letter b appear in blueberry?"   bsky.app/profile/kjhealy.... · Posted by u/minimaxir
seanhunter · 16 days ago
Do you think “b l u e b e r r y” is not tokenized somehow? Everything the model operates on is a token. Tokenization explains all the miscounts. It baffles me that people think getting a model to count letters is interesting but there we are.

Fun fact: if you ask someone with French, Italian, or Spanish as a first language to count the letter “e” in an English sentence with a lot of “e”s at the end of small words like “the”, they will often miscount as well, because the way we learn language is very strongly influenced by how we learned our first language, and those languages often elide e’s at the end of words.[1] It doesn’t mean those people are any less smart than people who succeed at this task — it’s simply an artefact of how we learned our first language, meaning their brain sometimes literally does not process those letters even when they are looking out for them specifically.

[1] I have personally seen a French maths PhD fail at this task and be unbelievably frustrated by having got something so simple incorrect.

Chinjut · 15 days ago
One can use https://platform.openai.com/tokenizer to directly confirm that the tokenization of "b l u e b e r r y" is not significantly different from simply breaking it down into its letters. The excuse often given, "It cannot count letters in words because it cannot see the individual letters", would not apply here.
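For instance, with the js-tiktoken package (I'm assuming its getEncoding/encode/decode API here; the web tokenizer linked above shows the same breakdown), the spaced-out string comes apart into roughly one token per letter:

    // Sketch using js-tiktoken (API assumed, not verified against the
    // exact GPT-5 tokenizer); each token decodes to a single letter,
    // optionally preceded by a space.
    import { getEncoding } from "js-tiktoken";

    const enc = getEncoding("cl100k_base");
    const tokens = enc.encode("b l u e b e r r y");
    console.log(tokens.map(t => enc.decode([t])));
    // Expected: ["b", " l", " u", " e", " b", " e", " r", " r", " y"]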

u/Chinjut · Karma: 2781 · Cake day: June 23, 2014