You really need a certain context before deciding you ought to be pushing ifs up.
I mean, generally you should consider pushing an if up. But you should also consider pushing it down, or leaving it where it is. That is, you're thinking about whether you have a good structure for your code as you write it... aka programming.
I suppose you might say: push common/general/high-level things up, and push implementation and low-level details down. It seems almost too obvious to say, but I guess it doesn't hurt to step back once in a while and think more broadly about your general approach. I guess the author feels that ifs are usually about a higher-level concern and loops about a lower-level concern? Maybe that's true? I just don't think it matters, though, because why wouldn't you think about any given if in terms of whether it specifically ought to move up or down?
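For what it's worth, here is a minimal Rust sketch of what I take "pushing an if up" to mean; the names (frobnicate, walrus) are placeholders of my own, not something taken from the article:

```rust
// The if is "down": the callee quietly handles the missing case,
// and every call site hides whether anything actually happened.
fn frobnicate_down(walrus: Option<&str>) {
    if let Some(w) = walrus {
        println!("frobnicating {w}");
    }
}

// The if is "up": the callee states its real precondition in its
// signature, and the caller has to decide what to do about None.
fn frobnicate_up(walrus: &str) {
    println!("frobnicating {walrus}");
}

fn main() {
    let maybe_walrus: Option<&str> = Some("walrus");

    frobnicate_down(maybe_walrus);

    if let Some(w) = maybe_walrus {
        frobnicate_up(w);
    }
}
```

Which version is better depends entirely on how many callers there are and whether the None case is meaningful to them, which is exactly the context point.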
This is not to say we shouldn't be having conversations about good practices, but it's really important to also understand and talk about the context that makes them good. Those who have read The Innovator's Solution will be familiar with a parallel concept. The author introduces the topic by suggesting that humanity achieved powered flight not by blindly replicating the wing of a bird (and we know how many such attempts failed because they applied a good idea in the wrong context) but by understanding the underlying principle and how it manifests in a given context.
The recommendations in the article smell a bit of premature optimisation if applied universally, though I can think of contexts in which they can be excellent advice. In other contexts they can add a lot of redundancy and be error prone when refactoring, all for little gain.
Fundamentally, clear programming is often about abstracting code into "human brain sized" pieces. What I mean is that it's worth understanding how the brain is optimised, how it sees the world. For example, human short-term memory can hold about 7±2 objects at once, so write code that takes advantage of that, maintaining a balance without going to extremes. Holy wars about whether OO or functional style is always better, for example, often miss the point that everything can have its place depending on the constraints.
From what I can tell, he views Rust for Linux as a social project as much as a technical one: attract new (and younger) developers into a kernel community that has become pretty "senior", force the "unwritten rules" to be written down, and break down some of the knowledge siloing and fiefdom behaviour that has built up over the years (by sharing that knowledge specifically with that younger group of developers).
If you think about it, Rust for Linux is in some ways an auditing committee reviewing all of the API interactions between kernel subsystems and forcing them to be properly defined and documented. Even if the Rust part were to fail (exceedingly unlikely at this point), it's a useful endeavour in those other respects.
But of course, if Rust succeeds at encoding all of those rules in the type system, that unlocks a lot of value in the kernel too. Most of the kernel code is in drivers, which are typically written by less experienced kernel developers and are harder to review for maintainers who don't have background on, or access to, the hardware. So if you can get drivers written in Rust against APIs that are harder to misuse, that takes a big load off the maintainers.
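As a rough illustration of what "APIs that are harder to misuse" can look like, here's a made-up sketch using the typestate pattern (my own example, not anything from the actual kernel bindings): a rule like "you must enable the device before reading from it" moves from documentation into the type system, so violating it fails to compile rather than slipping past review.

```rust
use std::marker::PhantomData;

// Hypothetical device states, not real kernel types.
struct Disabled;
struct Enabled;

struct Device<State> {
    mmio_base: usize,
    _state: PhantomData<State>,
}

impl Device<Disabled> {
    fn new(mmio_base: usize) -> Self {
        Device { mmio_base, _state: PhantomData }
    }

    // Consumes the disabled handle and returns an enabled one, so
    // there is no way to keep using the device in its old state.
    fn enable(self) -> Device<Enabled> {
        // ... the real enable sequence would poke hardware here ...
        Device { mmio_base: self.mmio_base, _state: PhantomData }
    }
}

impl Device<Enabled> {
    fn read(&self) -> u32 {
        // ... a real driver would read a register at mmio_base ...
        0xdead_beef
    }
}

fn main() {
    let dev: Device<Disabled> = Device::new(0x1000_0000);
    // dev.read();  // does not compile: Device<Disabled> has no read()
    let dev = dev.enable();
    println!("read: {:#x}", dev.read());
}
```

A driver author who forgets the enable step gets a compile error instead of a subtle bug that a maintainer without the hardware would struggle to catch in review.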