The book assumes limited prior knowledge (similar to what's required for Pattern Recognition, I'd say) and gives good intuition for the foundational principles of machine learning (e.g. the bias/variance tradeoff) before delving into more recent research problems. Part I is great if you simply want to know what the core tenets of learning theory are!
Much of the old theory is barely applicable, and people are, understandably, bewildered and in denial.
If someone is inclined toward theory, I'd just recommend reading papers that don't try to oversimplify the domain:
https://arxiv.org/abs/2006.15191
https://arxiv.org/abs/2210.10749
https://arxiv.org/abs/2205.10343
https://arxiv.org/abs/2105.04026
Anyways, I still believe that learning foundational stuff such as the bias-variance tradeoff is useful before diving into more advanced material. I even think that tackling recent research questions with old tools is insightful too. But that's only my opinion, and perhaps I'm in denial :)
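
For what it's worth, the bias-variance tradeoff is something you can see for yourself in a dozen lines. Here's a minimal sketch (the toy data and polynomial degrees are my own choices, not taken from the book or the papers above): fit polynomials of increasing degree to a small noisy sample and watch training error keep falling while test error eventually climbs.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(n):
        # Noisy sine wave as a stand-in regression task (my own choice of toy data).
        x = rng.uniform(-3, 3, n)
        y = np.sin(x) + rng.normal(scale=0.3, size=n)
        return x, y

    x_train, y_train = make_data(15)
    x_test, y_test = make_data(500)

    # Sweep model complexity: low degrees underfit (high bias), high degrees
    # chase the noise in the 15 training points (high variance).
    for degree in (1, 3, 5, 9):
        coefs = np.polyfit(x_train, y_train, degree)
        train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
        print(f"degree={degree}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")

The exact numbers depend on the seed and sample size, but the pattern is the point: the low-degree fits miss the curvature on both sets, and the high-degree fit looks great on the training points while doing worse on fresh data.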