The uncomfortable truth is that "the right thing" depends a lot on the point of view and narrative at hand. In large organizations political capital is inherently limited, even in very senior positions. It's especially challenging in large-scale software development because ground-level expertise really is needed to determine "the right thing", but human communication inherently has limits. I would say most people, and especially most software engineers, have strong opinions about how things "should" be, but if they were put in charge they would quickly realize that when they describe that vision to a hundred-person org, they get a hundred different interpretations. It's hard to grok the difficulty of aligning smart, independent thinkers at scale. When goals and roles are clear (like Apollo), that's easy mode for organizational politics. When you're building arbitrary software for humans, each with their own needs and perspective, it's infinitely harder. That's what leads to saccharine corporate comms, tone-deaf leaders, and the "moral mazes" Robert Jackall described 30+ years ago.
And yet here we are, able to talk to a computer that writes Pytorch code that orchestrates the complexity below it. And it even talks back coherently sometimes.
It writes something that's almost, but not quite, entirely unlike Pytorch. You're putting a little too much value on a simulacrum of a programmer.