2. The result is an "almost surely" result, i.e. in the collection of all possible infinite samples, the set of samples for which it fails has measure 0. In non-technical terms, this means it works for typical random samples but may fail for handpicked counterexamples.
In our particular case let f = SHA-256. Then X must be discrete, i.e. a natural number. Now the particulars depend on the distribution of X, but the general idea is that since the values are discrete, the probability of getting an infinite sample in which the values tend to infinity is 0. So in a typical sample there will be infinitely many ties among the x values, and furthermore most x values aren't too large (in a way you can make precise), so the tie factors l_i dominate, since there just aren't that many distinct values encountered in total. And so the coefficient tends to 1. A rough simulation of this is sketched below.
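To make the "ties dominate" point concrete, here's a minimal Python sketch. The original doesn't pin down the distribution of X, so I'm assuming a geometric distribution on the naturals purely for illustration; any distribution whose samples don't escape to infinity shows the same pattern. The number of distinct values grows far slower than the sample size, so almost every draw is a tie with an earlier one, and hashing with SHA-256 doesn't change that (equal inputs give equal digests, and collisions between unequal inputs are negligible).

```python
# Rough simulation: ties dominate when sampling a discrete random variable.
# The geometric distribution here is an assumption for illustration only.
import hashlib
import random
from collections import Counter

def sample_geometric(p=0.5):
    """Draw a natural number k with P(X = k) = p * (1 - p)**k."""
    k = 0
    while random.random() > p:
        k += 1
    return k

def tie_fraction(n, p=0.5):
    """Sample n values, hash each with SHA-256, and measure how often we see repeats."""
    digests = Counter()
    for _ in range(n):
        x = sample_geometric(p)
        h = hashlib.sha256(str(x).encode()).hexdigest()
        digests[h] += 1
    distinct = len(digests)
    # Every draw beyond the first occurrence of its value counts as a "tie".
    ties = n - distinct
    return distinct, ties / n

for n in (100, 10_000, 1_000_000):
    distinct, frac = tie_fraction(n)
    print(f"n={n:>9}: distinct values={distinct:>4}, tie fraction={frac:.4f}")
```

The tie fraction climbs toward 1 as n grows, which is the informal content of "the coefficient tends to 1" above.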
Some examples of insanely powerful and ubiquitous concepts which do not have easy examples:
- algebraic stacks
- QFT
- _general_ relativity
There's this idea that everything should somehow be explainable to my grandmother, but this idea is never presented with any justification, and it seems to me that there are counterexamples aplenty. And something irks me about the idea. I feel like it comes from the same place as people who've done a five-minute Google search feeling like they can take on experts who've spent decades studying a subject.
But claiming a technique is a bridge between disparate areas of mathematics and then failing to give concrete examples of such bridging is a bit odd, don't you think?
For what it's worth, the book "Seven Sketches in Compositionality" (https://arxiv.org/abs/1803.05316) has a chapter on topos theory which provides a good introduction with some simple examples!