There's a free court near me, and both balls and racquets can be gotten for peanuts.
Also, one cannot play tennis alone. Anything one must practise with a partner is more expensive because of the scheduling requirements.
Wake up at 4:30am and go for a run. You’re already accomplishing more at that point in the day than most wealthy people who are comfortably lying in bed.
The hard thing is doing the thing. Just do, that’s it.
[1]: I even have a draft article on similarly transferred learning: https://entropicthoughts.com/transparent-leadership-beats-se...
This, once developed, just happened to be a useful method. But given the abuse of those methods, and the proliferation of stupidity disguised as intelligence, it's always fitting to question them, this time with this observation about correlation noise.
Logic and fundamental knowledge about a domain, you need those first. Just counting things, without understanding them in at least one or two other ways, is a tempting invitation to misleading conclusions.
And they were much, much worse off for it. Logic does not let you learn anything new. All logic allows you to do is restate what you already know. Fundamental knowledge comes from experience or experiments, which need to be interpreted through a statistical lens because observations are never perfect.
Before statistics, our alternatives for understanding the world were (a) rich people sitting down and thinking deeply about how things could be, (b) charismatic people standing up and giving sermons on how they would like things to be, or (c) clever people guessing things right every now and then.
With statistics, we have to a large degree mechanised the process of learning how the world works. Anyone sensible can participate, and they can know with reasonable certainty whether they are right or wrong. It used to be impossible to prove a philosopher or a clergyman wrong!
That said, I think I agree with your overall point. One of the strengths of statistical reasoning is what's sometimes called intercomparison, the fact that we can draw conclusions from differences between processes without understanding anything about those processes. This is also a weakness because it makes it easy to accidentally or intentionally manipulate results.
People interpret "statistically significant" to mean "notable" or "meaningful": "I detected a difference, and statistics say that it matters." That's the wrong way to think about it.
Significance testing only tells you how unlikely the measured difference would be if there were no real effect. With a certain degree of confidence, you can say "the difference exists as measured".
Whether the measured difference is significant in the sense of "meaningful" is a value judgement that we / stakeholders should impose on top of that, usually based on the magnitude of the measured difference, not the statistical significance.
It sounds obvious, but this is one of the most common fallacies I observe in industry and a lot of science.
For example: "This intervention causes an uplift in [metric] with p<0.001. High statistical significance! The uplift: 0.000001%." Meaningful? Probably not.
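Here's a minimal sketch of that failure mode, using made-up numbers of my own (not the figures from the example above) and scipy's two-sample t-test: with a million observations per group, a 0.01% uplift comes out wildly "significant" even though nobody would act on it.

    # Minimal sketch with invented numbers: a huge A/B test where the true
    # uplift is a practically meaningless 0.01%, yet the p-value is tiny.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 1_000_000                                             # observations per group

    control   = rng.normal(loc=100.00, scale=1.0, size=n)
    treatment = rng.normal(loc=100.01, scale=1.0, size=n)     # +0.01% true uplift

    t_stat, p_value = stats.ttest_ind(treatment, control)
    uplift = (treatment.mean() - control.mean()) / control.mean() * 100

    print(f"p-value: {p_value:.1e}")    # far below 0.001 -> "highly significant"
    print(f"uplift:  {uplift:.3f}%")    # ...but far too small to matter in practice

The sample size is doing all the work: the test correctly reports that the difference is real, and says nothing about whether it is worth caring about.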
When wielded correctly, statistical significance is a useful guide both to spotting real signals worth further investigation and to filtering out meaningless effect sizes.
A bigger problem, even when statistical significance is used right, is publication bias. If, out of 100 experiments, we only get to see the 7 that came out significant, and roughly 5 of those are the false positives we'd expect by chance at a 5% significance level, we already have a false:true ratio of 5:2 in the results we see – even though all are presented as true.
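To see where that 5:2 figure comes from, here's a rough simulation with toy numbers of my own, assuming a 5% significance level and that only 2 of the 100 experiments test a real effect: by chance alone, about 5 of the 98 null experiments come out significant, and those false positives get "published" right next to the 2 true ones.

    # Rough simulation of publication bias: only significant results get "published".
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_experiments, n_samples, alpha = 100, 200, 0.05

    published_true, published_false = 0, 0
    for i in range(n_experiments):
        real_effect = i < 2                     # assume only 2 of 100 test a real effect
        shift = 0.5 if real_effect else 0.0     # real effect chosen large enough to detect
        control   = rng.normal(0.0,   1.0, n_samples)
        treatment = rng.normal(shift, 1.0, n_samples)
        _, p = stats.ttest_ind(treatment, control)
        if p < alpha:                           # the only results we ever get to see
            if real_effect:
                published_true += 1
            else:
                published_false += 1

    print("published true positives: ", published_true)
    print("published false positives:", published_false)
    # In expectation: ~2 true positives and ~98 * 0.05 = ~5 false positives,
    # which is where the 5:2 ratio above comes from.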