If there were such a thing, you would just check your prompts into your repo, and CI would build your final application from the prompts and deploy it.
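To make the hypothetical concrete, a "prompts as source" CI step might look something like the sketch below. Everything in it is invented for illustration: the `prompts/` directory, the `generate_code` stub standing in for whatever model call you would trust, and the test command. It's only the shape such a pipeline would take, not anyone's actual setup.

```python
# Hypothetical "prompts as source" CI step, for illustration only.
# Assumes a prompts/ directory of *.md files checked into the repo,
# with generate_code() standing in for a real LLM API call.
import pathlib
import subprocess
import sys

def generate_code(prompt: str) -> str:
    # Stand-in for a call to a model good enough to write the code unattended.
    # The point of this thread is that no such model exists yet.
    raise NotImplementedError("a sufficiently good model")

def main() -> int:
    out_dir = pathlib.Path("src_generated")
    out_dir.mkdir(exist_ok=True)
    # Treat each checked-in prompt as a source file and "compile" it to code.
    for prompt_file in sorted(pathlib.Path("prompts").glob("*.md")):
        code = generate_code(prompt_file.read_text())
        (out_dir / prompt_file.with_suffix(".py").name).write_text(code)
    # If you trusted the output unconditionally, CI would just test and ship it.
    return subprocess.run([sys.executable, "-m", "pytest", "tests/"]).returncode

if __name__ == "__main__":
    sys.exit(main())
```

Nobody's pipeline actually looks like this, which is the point: the human review loop is still doing real work.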
So it follows that if you are accepting 95% of the output you are given, you are either doing something really mundane and straightforward, or you don't care much about the shape of the output (not to be confused with its quality).
Like in this case, you were also the Product Owner who had the final say about what's acceptable.
I am not, and I don't expect to be able to for many years. The models aren't that good yet.
I would estimate that I accepted perhaps 25% of the initial code output from the LLM. For the other 75% that I wasn't satisfied with, I either unapplied it and retried with a different prompt, or refactored or mutated it using a follow-up prompt.
In the final project, 95% of the committed lines of code in the published version were written by AI; however, there was probably 4x as much AI-generated code that was discarded along the way. Often the first take wasn't good enough, so I modified or refactored it, also using AI. Over the course of the project I got better at writing more precise prompts that generated good code the first time, but I still rarely accepted the first draft of code back from Kiro without making follow-up prompts.
A lot of people have the misguided idea that using AI means you just accept the first draft it returns. That's not the case. You absolutely should be reading the code and iterating on it with follow-up prompts.
> in line with what they would have written,
The point I am making is that they didn't know what they would have written. They had a rough overall idea, but the details were being accepted on the fly. They were trying out a bunch of things and seeing what looked good based on a rough idea of what the output should be.
In a real-world project you are not both the product owner and the coder.