Further, "enough" training on another model's outputs – de facto 'distillation' – is likely to have effects similar to starting from a common base model, just "from the other direction".
(Finally: some of the more nationalistic-paranoid observers seem to think Chinese labs have relied on exfiltrated weights from US entities. I don't personally think that's a likely or necessary contributor to Z.ai & others' successes, but the mere appearance of these occasional "I am Claude" answers is sure to fuel further armchair belief in those theories.)
This simply hasn't been my experience.
It's too shallow. The deeper I go, the less useful it seems, and that point arrives quickly for me.
Also, god forbid you're researching a complex, possibly controversial subject and want it to find reputable sources, especially academic ones.
The quality varies wildly across models & versions.
With humans, the statements "my tutor was great" and "my tutor was awful" say very little about "tutoring" in general, and are barely even responses to each other without more specificity about the quality of the tutor involved.
Same with AI models.