However, most of the content in the latter half of the book consists of naming and describing what seemed like obvious strategies for refactoring and rewriting code. I would squint at the introduction to a new term, try to parse its definition, look at the code example, and when it clicked I would think, "well, that just seems like what you would naturally choose to do in that situation, no?" At that point the rest of the chapter describing the pattern felt redundant.
It didn't occur to me that committing the terms themselves to memory would be particularly useful, so getting through the rest of the content became a slog I'm not sure was worth it. Curious whether anyone else had that experience.
If we're going to go with economic/strategy models, I think the Laffer Curve is more relevant. Seriously extrapolating here: AI is well suited to many tasks, and using it in those contexts maximizes productivity; over-using it on unsuitable tasks destroys productivity.
Applied to AI, I think it would be something like: ease of development increases the complexity attempted.