We kinda-sorta already know this is true. The lottery-ticket hypothesis [0] says, in its strong form, that every sufficiently large randomly initialized network contains a small subnetwork that already performs about as well as the trained overall network, and over the past eight years or so researchers have indeed found such small subnetworks inside large networks of many different architectures.
Nobody talks much about the lottery-ticket hypothesis these days because it isn’t practically useful at the moment. (With the pruning algorithms and hardware we have, pruning is more costly than just training a big network.) But the basic idea does suggest that there may be hope for interpretability, at least in the odd application here or there.
That is, the (strong) lottery-ticket hypothesis suggests that the training process is a search through a large parameter space for a small network that already (by virtue of its random initialization) exhibits the desired behavior of the overall network; updating parameters during training is mostly about turning off the irrelevant parts of the network.
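Concretely, the usual way such tickets are found is iterative magnitude pruning: train, prune the smallest-magnitude weights, rewind the survivors to their initial random values, and repeat. Here is a minimal sketch of that loop in PyTorch; model and train_for_a_while are hypothetical stand-ins, not code from [0], and a full implementation would also keep pruned weights zeroed during training:

    import copy
    import torch

    def find_winning_ticket(model, train_for_a_while, rounds=5, prune_frac=0.2):
        init_state = copy.deepcopy(model.state_dict())   # remember the random init
        # one boolean mask per weight matrix; biases are left unpruned
        masks = {name: torch.ones_like(p, dtype=torch.bool)
                 for name, p in model.named_parameters() if p.dim() > 1}

        for _ in range(rounds):
            train_for_a_while(model)                     # ordinary training
            for name, p in model.named_parameters():
                if name not in masks:
                    continue
                alive = p.data.abs()[masks[name]]        # surviving weight magnitudes
                k = max(1, int(prune_frac * alive.numel()))
                cutoff = alive.kthvalue(k).values        # k-th smallest survivor
                masks[name] &= p.data.abs() > cutoff     # drop the smallest weights
            # "rewind": restore surviving weights to their initial random values
            model.load_state_dict(init_state)
            for name, p in model.named_parameters():
                if name in masks:
                    p.data *= masks[name]                # zero out the pruned weights
            # (a full implementation would also mask gradients at every
            #  optimizer step so pruned weights stay zero during training)
        return masks

The masks that survive this loop pick out the "relevant parts" of the network; everything else has been turned off.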
For some applications, one would think the small sub-network hiding in there somewhere might be small enough to be interpretable. I won't be surprised if, some day not too far in the future, scientists investigating neural networks start to identify good interpretable models of phenomena of intermediate complexity: phenomena too complex to be amenable to classical scientific techniques, but simple enough that networks trained to exhibit them yield unusually small active sub-networks.
The problem with these is that they're also problems where actors profit from the failure to fix the system - the issue isn't that we don't understand the complex nature of the domain, it's that components of the system actively and agentically resist changes to it. George Soros called this reflexivity: the fact that the system responds to your manipulations means you can't treat yourself and the system as separate agents, and you can't treat the system as a purely mechanistic, passive recipient of your changes. It's maybe the biggest blind spot for people who want to apply the rules and methods of physics to social issues - the universe may be indifferent, but your neighbors are not.
I think what you're saying is that poverty is actually simple, and the solution is to stop the bad actors causing it? But at the same time, you're correctly recognizing that attempts to stop bad actors from causing poverty trigger reflexive responses and cascading repercussions. Which sounds mighty like a complex system?