There is also the counter-intuitive phenomenon where training a model on a wider variety of content than the task apparently requires improves its performance. For example, models trained only on English content exhibit measurably worse performance at writing sensible English than those trained on a handful of languages, even when controlling for the size of the training set. It doesn't make sense to me, but it probably does to credentialed AI researchers who know what's going on under the hood.
I.e., there is a lot of commonality between programming languages, just as there is between human languages, so training on one language would benefit competency in other languages.
The post documents issues like some assembly workers stuffing so much wire into the post that not enough protruded to make a connection. I would hope that in the US the workers are paid enough that they notice, and care, whether the result can actually be connected. Or the managers.
Do you want documented accounts of Chinese manufacturers repeatedly attempting to cut corners? Like substituting inferior components to increase their profit margin even after the initial product line is running smoothly.
For this not to be a problem, a worker would have to notice it, put two and two together, investigate further, and then persuade their supervisor to raise it with the customer and get the spec changed.
While I enjoy your faith in the rigour and attention to detail of the US assembly-line worker, I think this example tells exactly the story the article says it does: that you have to specify everything.