I think AI for science is a good idea at its core, but academia really needs a way to "mark" AI-generated papers. Maybe a better application would be ML literature, where experiments can actually be run with something like Codex or Claude Code?
There are no best practices that are universally applicable. Some people don't even like the concept of an MVP. But putting a product in front of users - however clunky it is - comes as close to a universal best practice as anything.
Get early feedback, prioritize, iterate - rinse and repeat. That's all there is to it.