Readit News
kqr commented on The ROI of Exercise   herman.bearblog.dev/exerc... · Posted by u/ingve
almost_usual · a day ago
Nope. I get 7.5-8hrs every night. I’m asleep within 15 minutes, zero screen time.
kqr · a day ago
I believe it is uncommon to have a schedule that allows a bedtime of 8.15 pm. Maybe I'm wrong.
kqr commented on The ROI of Exercise   herman.bearblog.dev/exerc... · Posted by u/ingve
ceejayoz · 2 days ago
Honest question: Why?

There's a free court near me, and both balls and racquets can be gotten for peanuts.

kqr · a day ago
Tennis requires a certain proficiency to have fun with. Beginners tend to have trouble getting the ball reliably across the net onto the other player. This proficiency takes time to build. Thus, unless one makes a big up-front time investment, tennis is not particularly good exercise. Up-front time investments are expensive.

Also, one cannot play tennis alone. Anything one must practise with a partner is more expensive due to scheduling requirements.

kqr commented on The ROI of Exercise   herman.bearblog.dev/exerc... · Posted by u/ingve
almost_usual · a day ago
There are plenty of wealthy people who are unhealthy.

Wake up at 4:30am and go for a run. You’re already accomplishing more at that point in the day than most wealthy people who are comfortably lying in bed.

The hard thing is doing the thing. Just do, that’s it.

kqr · a day ago
You seem to be forgetting that insufficient sleep is also unhealthy.
kqr commented on All managers make mistakes; good managers acknowledge and repair   terriblesoftware.org/2025... · Posted by u/matheusml
kqr · 2 days ago
After reading just the title I was going to make a crack that the skills of parents and managers largely overlap.[1] Turns out the author also made that connection, and that's what spawned the article!

[1]: I even have a draft article on similarly transferred learning: https://entropicthoughts.com/transparent-leadership-beats-se...

kqr commented on Everything is correlated (2014–23)   gwern.net/everything... · Posted by u/gmays
apples_oranges · 3 days ago
People didn't always use statistics to discover truths about the world.

This, once developed, just happened to be a useful method. But given the abuse of those methods, and the proliferation of stupidity disguised as intelligence, it's always fitting to question them, this time with this observation about correlation noise.

Logic, fundamental knowledge about domains, you need that first. Just counting things without understanding them in at least one or two other ways is a tempting invitation to misleading conclusions.

kqr · 2 days ago
> People didn't always use statistics to discover truths about the world.

And they were much, much worse off for it. Logic does not let you learn anything new. All logic allows you to do is restate what you already know. Fundamental knowledge comes from experience or experiments, which need to be interpreted through a statistical lens because observations are never perfect.

Before statistics, our alternatives for understanding the world were (a) rich people sitting down and thinking deeply about how things could be, (b) charismatic people standing up and giving sermons on how they would like things to be, or (c) clever people guessing things right every now and then.

With statistics, we have to a large degree mechanised the process of learning how the world works, and anyone sensible can participate, and they can know with reasonable certainty whether they are right or wrong. It was impossible to prove a philosopher or a clergyman wrong!

That said, I think I agree with your overall point. One of the strengths of statistical reasoning is what's sometimes called intercomparison, the fact that we can draw conclusions from differences between processes without understanding anything about those processes. This is also a weakness because it makes it easy to accidentally or intentionally manipulate results.

kqr commented on Everything is correlated (2014–23)   gwern.net/everything... · Posted by u/gmays
taneq · 3 days ago
I’d say rather that “statistical significance” is a measure of surprise. It’s saying “If this default (the null hypothesis) is true, how surprised would I be to make these observations?”
kqr · 2 days ago
Maybe you can think of it as saying "should I be surprised" but certainly not "how surprised should I be". The magnitude of the p-value is a function of sample size. It is not an odds ratio for updating your beliefs.
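A minimal sketch of that point, assuming a one-sample z-test with known standard deviation (pure Python; the effect size and numbers are illustrative, not from the thread). The true effect is held fixed and tiny; the p-value shrinks anyway, purely because the sample grows:

```python
import math

def z_test_p(effect, sd, n):
    """Two-sided p-value for a one-sample z-test of a fixed true effect."""
    z = effect * math.sqrt(n) / sd
    # 2 * (1 - Phi(z)) expressed via the complementary error function
    return math.erfc(z / math.sqrt(2))

# Same tiny effect (0.01 standard deviations) at growing sample sizes.
for n in (100, 10_000, 1_000_000):
    print(n, z_test_p(effect=0.01, sd=1.0, n=n))
# n = 10_000 gives p ≈ 0.317; n = 1_000_000 gives p far below 0.001.
```

With enough data, even a negligible effect becomes "significant", which is why the p-value cannot serve as a measure of how much to update your beliefs.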
kqr commented on Everything is correlated (2014–23)   gwern.net/everything... · Posted by u/gmays
simsla · 3 days ago
This relates to one of my biggest pet peeves.

People interpret "statistically significant" to mean "notable"/"meaningful". I detected a difference, and statistics say that it matters. That's the wrong way to think about things.

Significance testing only tells you the probability that the measured difference is a "good measurement". With a certain degree of confidence, you can say "the difference exists as measured".

Whether the measured difference is significant in the sense of "meaningful" is a value judgement that we / stakeholders should impose on top of that, usually based on the magnitude of the measured difference, not the statistical significance.

It sounds obvious, but this is one of the most common fallacies I observe in industry and a lot of science.

For example: "This intervention causes an uplift in [metric] with p<0.001. High statistical significance! The uplift: 0.000001%." Meaningful? Probably not.

kqr · 2 days ago
To add nuance, it is not that bad. Given reasonable levels of statistical power, experiments cannot show meaningless effect sizes with statistical significance. Of course, some people design experiments at power levels way beyond what's useful – perhaps even more so where big data is available (like website analytics) – but I would argue the problem is the unreasonable power level, not statistical significance itself.

When wielded correctly, statistical significance is a useful guide to which signals are worth further investigation, and it filters out meaningless effect sizes.

A bigger problem even when statistical significance is used right is publication bias. If, out of 100 experiments, we only get to see the 7 that were significant, we already have a false:true ratio of 5:2 in the results we see – even though all are presented as true.
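A back-of-envelope sketch of that 5:2 ratio. The comment does not specify how the 100 experiments split, so these inputs are assumptions chosen to reproduce it: 95 of 100 tested hypotheses are null, tests run at α = 0.05, and real effects are detected with power 0.4.

```python
# Assumed inputs (not from the comment): how 100 experiments might split.
n_null = 95     # hypotheses with no real effect
n_real = 5      # hypotheses with a real effect
alpha = 0.05    # false-positive rate per null experiment
power = 0.40    # chance of detecting a real effect

false_positives = n_null * alpha   # 95 * 0.05 = 4.75, roughly 5
true_positives = n_real * power    # 5 * 0.40 = 2.0

# Only the ~7 significant results get published, at a false:true
# ratio of about 5:2 - even though all 7 are presented as true.
print(false_positives, true_positives)
```

The point survives other input choices: whenever true effects are rare and only significant results are published, false positives can dominate the published record.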

kqr commented on Everything is correlated (2014–23)   gwern.net/everything... · Posted by u/gmays
ants_everywhere · 3 days ago
Which means that statistical significance is really a measure of whether N is big enough
kqr · 2 days ago
This has been known ever since the beginning of frequentist hypothesis testing. Fisher warned us not to place too much emphasis on the p-value he asked us to calculate, specifically because it is mainly a measure of sample size, not clinical significance.
kqr commented on Everything is correlated (2014–23)   gwern.net/everything... · Posted by u/gmays
Evidlo · 3 days ago
This is such a massive article. I wish I had the ability to grind out treatises like that. Looking at other content on the guy's website, he must be like a machine.
kqr · 3 days ago
IIRC Gwern lives extremely frugally somewhere remote and is thus able to spend a lot of time on private research.

u/kqr

Karma: 17144 · Cake day: January 2, 2015
About
Quant, systems thinker, anarchist.

I write at https://entropicthoughts.com

My inbox is hn[at]xkqr.org
