If you're interested, here is a (still in-progress) simulator I wrote where you can try out Define-Combine on a simple grid. https://mpalmer.shinyapps.io/DefineCombine/
You are approaching this like an established natural-science field, where the old classics are still the best starting point. That is not true for ML, which is developing and evolving quickly.
I suggest taking a look at Kevin Murphy's series for the foundational knowledge, and Sutton and Barto for reinforcement learning. MacKay's Information Theory, Inference, and Learning Algorithms is also excellent.
Kochenderfer's ML series is also excellent if you like control theory and cybernetics:
https://algorithmsbook.com/
https://mitpress.mit.edu/9780262039420/algorithms-for-optimi...
https://mitpress.mit.edu/9780262029254/decision-making-under...
For applied deep learning texts beyond the basics, I recommend picking up some books or review papers on LLMs, Transformers, and GANs. For classic NLP, Jurafsky is the go-to.
Seminal deep learning papers: https://github.com/anubhavshrimal/Machine-Learning-Research-...
Data engineering/science: https://github.com/eugeneyan/applied-ml
For speculation: https://en.m.wikipedia.org/wiki/Possible_Minds
While having a strong mathematical foundation is useful, I think developing intuition is even more important. For that, I recommend Andrew Ng's Coursera courses before you dive too deep.