Another good one to read is Statistical Rethinking (https://xcelab.net/rm/statistical-rethinking/). It's a bit easier to understand than Gelman's book, but together they give you an amazing foundation in modern Bayesian analysis.
Cam's book, mentioned also in the comments, is also wonderful.
At the risk of sounding quite silly, how do people read these textbooks? Do people (who are not in graduate studies) actually work through entire books, or just particular chapters?
I ground through textbooks during my graduate studies, but I had to in order to complete the homework and pass the courses.
But since joining industry I've not been able to actually work through a textbook - when I try to attempt the problems, I'll find a couple of weeks have passed and only one or two problems have been completed. I simply find it a challenge to find the time to work through book exercises.
For something in my field, I "speed read" it. (In quotes because I have no idea if this is how actual speed reading works.) I.e. set aside one or two blocks of 4 hours or so and commit to finishing the book in that window.
I usually don't retain a ton, but the big benefit is that I know where to find the relevant sections when I need them in the future, and have some sort of big-picture view of how they fit together.
I read these kinds of books front to back while trying to program the examples and graphs for myself. It's slow going, but I've gotten great rewards from only a few books. (That's 4 books in 14 (!) years: Strang's linear algebra, Greene's econometric analysis, Gelman's Bayesian statistics, and MacKay's information theory.) For me, "build once, never forget" just works. The thing is, I never feel the need to refresh earlier work after this. Even if I look at my code from a decade ago it just clicks. I did some MOOCs on the side, and while I passed those quite well, I didn't retain as much as from the struggle above.
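For what it's worth, "programming the examples for myself" can start as small as this: a grid approximation of a coin-flip posterior, the kind of figure the early chapters of these books tend to open with. A minimal sketch with made-up data:

```python
import numpy as np

# Made-up exercise: posterior over a coin's bias theta after 6 heads in 9 flips,
# computed on a grid with a uniform prior (no conjugacy or MCMC needed).
theta = np.linspace(0, 1, 1001)            # candidate biases
heads, flips = 6, 9

prior = np.ones_like(theta)                # uniform prior
likelihood = theta**heads * (1 - theta)**(flips - heads)
posterior = prior * likelihood
posterior /= posterior.sum()               # normalize over the grid

print(theta[np.argmax(posterior)])         # mode lands near 6/9 ~ 0.667
```

From there it's one matplotlib call to reproduce the book's plot, and getting your version to match the printed one is where the retention seems to come from.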
Possibly a controversial opinion but my belief is that you can effectively measure the usefulness of a non-fiction work (text book, academic paper, article, tutorial, etc) by asking if you learnt anything from it.
One. Simple. Thing. Is. Enough.
Freedom from the pressure to fully understand everything from a book (or even from a single chapter) has allowed me to learn a lot more in recent years, and in a far more enjoyable manner.
So for me, I read the book or academic paper quickly, almost as if it were fiction. Generally not going back or pausing. Sometimes even faster. (I have found this gives me a better overview of the entire content, compared to meticulously starting slowly at the first chapter and eventually getting stuck.)
I do this until a particular section stands out and really piques my interest. This is typically either because:
(i) it is coincidentally relevant to something I have been recently working on, or
(ii) the author's description of a topic is written in just the right way that things 'just click' and I have a newer or deeper understanding of the topic.
Often these scenarios give me the intrinsic motivation to spend a longer time on that topic.
In this way, I often read the same book/paper many times over a period of a few months, and each time I learn something new. In some ways, this strategy is similar to what some people call the wedge approach, which strikes a balance in the debate between studying wide and studying deep. That is, study a lot of things broadly, several things moderately, and a few things very deeply.
The corollary to this idea comes from my teaching experience. Teachers know that the best learning comes when the difficulty is just at the edge of a student's ability. Not too easy. Not too hard. This is the power of incremental learning. So it makes sense that you only have to find one thing in the textbook that is just at the edge of what you already know, and learn about that.
So often when I need to do something for work, I apply this +1 technique. I learn what I need to do for my project and then just 'one bit more'. All those 'one bit more' explorations add up to quite a lot over time.
For mathematics: the most important part is the problems. You can actually start at the problems and read the sections until you can answer them. Most people avoid the problems because they take time and require actually thinking about the content (it's painful). So my advice: pick the problems you find interesting, see if you can work out solutions from the content of the book, and learn that material well. That would be the most optimised form of studying mathematics. Note: you do not need solution sets to do this.
>At the risk of sounding quite silly, how do people read these textbooks
Literally just front to back. I did maths in university, and when I started to work I didn't get to do much of it for a few years, so I got into the habit of putting half an hour a day aside to work through whatever books I find interesting.
I actually enjoy it much more now than I did in uni given that I can do it at my own pace now and for fun.
Depends on the person, their interest and the textbook. I've read a few college textbooks over the years in areas I'm interested in. Some, I just read specific sections/chapters. Others, the whole thing. I don't generally do the exercises unless I'm not comfortable that I've got the material down. I usually find out pretty quickly if that's the case, since I typically apply the knowledge shortly after reading.
I read textbooks recreationally, in addition to genre fiction. I purposely try to read broadly across multiple topics, but have a horrible habit of getting hooked and doing deep dives in particular areas. In my case, I think it stems from enjoying reading college history textbooks (from my mother) and music theory textbooks (father) starting at about 10 years old. I enjoy the dry non-narrative and technical writing.
From my experience, the solution is to find a book that matches your current knowledge and goal, which usually means finding a combination of books. Also, if you do not have the necessary prerequisites, you'll need to find a mentor who can fill you in.
About the exercises: I do not think you are meant to solve them all. The most important thing is to get the main idea from each chapter.
I think a good conceptual foundation is the difference between someone who throws buzzword solutions at a problem to see what sticks, and someone who can make a good tweak to get a standard-ish idea working when it doesn't quite work out of the box. Without understanding these concepts it is easy to get stuck spinning in loops on a complicated problem — where tweaking to improve one thing messes with another thing.
I won’t claim Bayesian is the only conceptual framework, but I found it particularly intuitive and straightforward — and it gives you a lot of flexibility. See an earlier discussion on Bayesian approaches from a few days ago.
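As a concrete instance of that "every assumption is an explicit, tweakable number" flexibility, here's the classic diagnostic-test calculation via Bayes' rule (all the rates below are invented for illustration):

```python
def prob_disease_given_positive(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' rule."""
    # Total probability of testing positive: true positives + false positives.
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A 99%-sensitive test with a 5% false-positive rate on a rare (1%) condition:
print(prob_disease_given_positive(0.01, 0.99, 0.05))  # 1/6 ~ 0.167
```

Tweaking one input (say, the prior) changes exactly one term in the calculation, which is the kind of well-understood tweak the comment above is describing.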
We had this book for our cross-listed (undergrad and grad) Applied Bayesian class. It is a very easy book to read and uses little to no math. Highly recommended if you do not have rigorous training in statistical inference.
I don't get this fear of math. You're talking about probability theory here. It literally _is_ mathematics. Why would someone want to study applied probability theory but try and dodge the foundation of the entire discipline? Sigh.
Came here to rave about McElreath. The book is great, as are the attached materials and YouTube lectures. Thumbs way up, especially when Gelman is giving you trouble.
If someone wants a more interactive companion-book targeted more towards Python developers, check out "Probabilistic Programming & Bayesian Methods for Hackers":
This is a _fantastic_ book. If anyone's worried it's too technical, I'd say that it's not as dry as it might look at first glance. There's lots of practical advice and there's actually not that much heavy maths.
I don't think this is true, having read some of the later chapters without knowing measure theory. And if you don't trust me, Gelman doesn't know measure theory either (https://statmodeling.stat.columbia.edu/2008/01/14/what_to_le...) and he wrote the book...
There's little that can be said about Dirichlet processes without measure theory; fundamentally speaking they are random measures. That said, the vast majority of the book presupposes no measure theory.
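To make the "random measure" point concrete without any measure theory: a draw from a Dirichlet process can be simulated with the stick-breaking construction using nothing but Beta draws. A truncated sketch, where the concentration `alpha` and the standard-normal base measure are arbitrary choices for illustration:

```python
import random

random.seed(1)

# Stick-breaking sketch of one draw from DP(alpha, H): a discrete random
# measure with (in principle) infinitely many atoms, truncated at 100 here.
alpha = 2.0
weights, remaining = [], 1.0
for _ in range(100):
    piece = random.betavariate(1, alpha)   # break off a Beta(1, alpha) fraction
    weights.append(remaining * piece)
    remaining *= 1 - piece

atoms = [random.gauss(0, 1) for _ in weights]  # atom locations drawn from H
print(sum(weights))   # close to 1: the truncation loses almost no mass
```

Each (weight, atom) pair is one point mass of the sampled measure; that built-in discreteness is exactly why DP draws show up in nonparametric clustering.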
It's a great book if you want to understand Bayesian modeling in detail. It's not 'dry' as in boring - it's an interesting read.
If you want something less technical then read Gelman and Hill 'Data Analysis Using Regression and Multilevel/Hierarchical Models', which is also great. More for scientists than statisticians, I'd say.
> Would you recommend it as an introduction to topic?
No. I would not recommend it unless you have a strong foundation in statistics.
If you want an introduction, I like "Doing Bayesian Data Analysis: A Tutorial with R and BUGS" by John K. Kruschke. It's basically the Bayesian counterpart to Wackerly's intro (frequentist) statistics book.
I feel like I won't be able to answer with satisfaction till I have a good foundation...
http://camdavidsonpilon.github.io/Probabilistic-Programming-...
Relevant quote:
> "I ... read this book ... I like it!" - Andrew Gelman
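The book itself builds everything on PyMC, but the core idea it teaches — sample from the posterior instead of deriving it — fits in a few lines of plain Python. A minimal random-walk Metropolis sketch, with invented coin-flip data and an arbitrary step size:

```python
import math
import random

random.seed(0)
heads, flips = 6, 9                               # invented data

def log_post(theta):
    """log p(theta | data) up to a constant; uniform prior on (0, 1)."""
    if not 0 < theta < 1:
        return -math.inf
    return heads * math.log(theta) + (flips - heads) * math.log(1 - theta)

samples, theta = [], 0.5
for _ in range(20000):
    proposal = theta + random.gauss(0, 0.1)       # random-walk proposal
    if math.log(random.random()) < log_post(proposal) - log_post(theta):
        theta = proposal                          # accept; otherwise keep theta
    samples.append(theta)

burned = samples[2000:]                           # discard burn-in
print(sum(burned) / len(burned))                  # near the exact posterior mean 7/11 ~ 0.636
```

The exact posterior here is Beta(7, 4), so you can check the sampler against the known mean — the same sanity-check workflow the book walks through with PyMC.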
Uh... sure if you know measure theory.
The later chapters, especially the ones on Dirichlet processes, assume you know measure theory.
I've always heard that it's a bit on the dry side of things, but haven't actually read it myself.