As computing environments became more homogeneous, and as other languages gained that "one weird trick" of cross-platform support, the shortcomings of both Perl and its community came to the fore.
Also, I have it work the same way I do: I first develop the data model until it "works" in my head, before writing any "code" to deal with it. Again, clear instructions.
Oh, another thing: one of my "golden rules" is that it needs to keep a block comment at the top of each file describing what's going on in that file. It acts as a second "prompt" when I restart a session.
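To make the tip concrete, here is a minimal sketch of what such a header block might look like. The file name, data model, and invariants are all hypothetical, invented for illustration; the point is that the header restates the design so a fresh session can pick it up.

```python
"""
inventory.py -- tracks stock levels per warehouse. (Hypothetical example.)

Data model:
  - Item: sku (str), name (str), quantity (int, never negative)
  - Warehouse: id (str), items (dict mapping sku -> Item)

Invariant: every sku key in Warehouse.items maps to an Item whose
.sku field equals that key.

Re-read this header before modifying the file; it doubles as the prompt
when a session is restarted.
"""

from dataclasses import dataclass, field


@dataclass
class Item:
    sku: str
    name: str
    quantity: int = 0


@dataclass
class Warehouse:
    id: str
    # Maps sku -> Item; see the invariant in the module docstring.
    items: dict = field(default_factory=dict)
```

The docstring carries the "what and why" that an assistant (or a human) would otherwise have to reconstruct from the code.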
It works pretty well. It isn't as "magic" as the "make it so!" approach people think they can get away with, but it works for me.
But yes, I still spend maybe 30% of the time cleaning up, renaming things, and doing more general rework before the code is "presentable." Even so, it lets me work pretty quickly, a lot quicker than if I were to do it all by hand.
We found it mostly starts to abandon instructions when the context gets too polluted. Subagents really help address that by keeping the contents of all your files out of the top-level context.
Another tip: give it feedback as PR comments and have it read them with the gh CLI. A lot of the time this is faster than hand-editing the code yourself, and while it cleans up its own work you can be doing something else.
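One way this might look in practice, as a sketch: the PR number here is made up, and the comment says only that the gh CLI is used, so the specific subcommands below are my assumption about how to fetch the feedback.

```shell
# Inline review-thread comments on a (hypothetical) PR #123,
# fetched via the GitHub REST API through gh:
gh api "repos/{owner}/{repo}/pulls/123/comments" --jq '.[].body'

# Top-level conversation comments on the same PR:
gh pr view 123 --comments
```

Either command's output can be pasted into (or read directly by) the agent so it can address each comment in turn.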
Only if Word formats remain dominant. With the EU moving off Word, there is some hope that an alternative, real standard might take root.
It raises additional questions, and plenty are already unanswered. It seems likely it's been a shitshow.
If there were anything like proper processes in place, controls would have made that very difficult.
Then there are the weird questions about why there are such obvious close ties to xAI here....
What do you mean?