This is the pattern I settled on about a year ago. I use it as a rubber-duck / conversation partner for bigger-picture issues. I'll run my code through it as a sanity "pre-check" before a PR review. And I mapped autocomplete to ctrl-; in vim so I only bring it up when I need it.
Otherwise, I write everything myself. AI-written code has never felt safe to me. It adds velocity, but velocity early on always steals speed from the future. That's been the case for languages, for frameworks, and for libraries; it's no different for AI.
In other words, you get better at using AI for programming by recognizing where its strengths lie and going all in on those strengths. Don't tie yourself in knots trying to get it to do decently what you can already do well yourself.
I personally haven't hit the wall where using LLMs slows me down in the long run.
It has been smooth sailing most of the time, and it keeps getting better with newer models.
For me it comes down to "know what you are being paid for".
I'm not a library maintainer. My code will not be scrutinized by thousands of peers. My customer will be happy with faster delivery that does the same thing as the more perfect, hand-crafted version would.
Welcome to the industrial revolution in programming. This is the way of things.
I tend to have lots of uncommitted files and changes that I want to keep around in that state while I move between branches, along with multiple changelists (the JetBrains implementation) that I will commit at some point.
This loose, flexible way of using git seems hard to do in jj.
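The closest jj analogue I've found treats every in-progress pile of edits as its own mutable change, since jj has no "uncommitted" state: the working copy is always a commit you can hop away from and come back to. A rough sketch, assuming a recent jj CLI and an existing main bookmark (the change ID near the end is a placeholder you'd copy from jj log output):

    # Park one set of work-in-progress edits as its own change
    jj new -m "wip: refactor config loader"
    # ...edit files; jj snapshots them into this change automatically

    # Start a second, unrelated pile of edits on top of main
    jj new main -m "wip: fix flaky integration test"

    # See every in-progress change, then jump back to the first one
    jj log
    jj edit qpvuntsm   # placeholder change ID taken from jj log

Whether that feels as loose as uncommitted files plus changelists is debatable, but it's the usual way that workflow gets approximated.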