edit: should have mentioned that the low-level stuff I work on is mature code and, a lot of the time, novel.
I ended up shoehorned into backend dev in Ruby/Py/Java and don't find it improves my day-to-day much.
Specifically in C, it can bang out complicated but mostly common data structures without fault, where I would surely make off-by-one errors. I guess since I do C as a hobby I tend to solve more interesting and complicated problems, like generating a whole array of dynamic C dispatchers from a UI-library spec in JSON, which allows parsing and rendering a UI specified in YAML. Gemini Pro even spat out a YAML-dialect parser after a few attempts/fixes.
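
To give a rough idea of the dispatcher pattern I mean (all names here are illustrative, not the actual generated code): a table of function pointers keyed by widget type, where the entries would be generated from the JSON spec, and the YAML renderer just looks up the type at runtime.

    /* Minimal sketch, not the generated code: dispatch table mapping
       widget-type names (as they'd appear in the JSON spec) to handlers. */
    #include <stdio.h>
    #include <string.h>

    typedef struct ui_node { const char *type; const char *text; } ui_node;
    typedef void (*render_fn)(const ui_node *);

    static void render_label(const ui_node *n)  { printf("label: %s\n", n->text); }
    static void render_button(const ui_node *n) { printf("[ %s ]\n", n->text); }

    /* In the real thing these entries are generated from the JSON spec. */
    static const struct { const char *type; render_fn fn; } dispatch[] = {
        { "label",  render_label  },
        { "button", render_button },
    };

    static void render_node(const ui_node *n) {
        for (size_t i = 0; i < sizeof dispatch / sizeof dispatch[0]; i++)
            if (strcmp(dispatch[i].type, n->type) == 0) { dispatch[i].fn(n); return; }
        fprintf(stderr, "unknown widget type: %s\n", n->type);
    }

    int main(void) {
        /* Stand-in for nodes parsed out of the YAML UI description. */
        ui_node parsed_from_yaml[] = { { "label", "Hello" }, { "button", "OK" } };
        for (size_t i = 0; i < 2; i++) render_node(&parsed_from_yaml[i]);
        return 0;
    }
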
Maybe it's a function of familiarity and of the problems you end up using the AI for.
Not sure that's exactly what that means. It's already likely the case that these models contained IMO problems and solutions from pretraining. It's possible this means they were present in the system prompt or something similar.