Even if nobody cheated or massaged data, we would still have studies that don't replicate on new data. A 95% confidence level means that even where there is no real effect, roughly one in twenty tests will come out "significant" purely by chance. Reporting failed hypothesis tests (null results) would really help to identify these cases.
So pre-registration helps, and it would also help to establish the standard that everything needed to replicate must be published, if not in the article itself, then in an accompanying repository.
But in the brutal fight for promotion and resources, labs of course won't share all their tricks and process knowledge. The same problem arises when there is an interest in exploiting the results commercially: in EE, for example, the method is often described in general terms, but crucial parts of the code or circuit design are held back.
Current work has been improving boot time. It was nearly two minutes because of one board, and that's a long time for the lights to be out if you have to reboot during a show. I'd wanted to use Buildroot to get a custom kernel that should boot much more quickly, but the Buildroot learning curve was steep for me, particularly as I have no expectation of ever needing the knowledge again.
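For context, one quick way to see where boot time actually goes on a Linux board is to rank the gaps between kernel log timestamps; the biggest gaps usually point straight at the slow driver or wait loop. A minimal sketch, assuming you've saved dmesg output (with timestamps enabled) to a file; the filename and the top-10 cutoff are just placeholders:

    #!/usr/bin/env python3
    """Rank the largest gaps between kernel log timestamps to spot slow boot steps.

    Assumes dmesg output was saved with timestamps enabled, e.g.:
        dmesg > boot.log
    Lines look like: [    3.141593] some driver message
    """
    import re
    import sys

    TIMESTAMP = re.compile(r"^\[\s*(\d+\.\d+)\]\s*(.*)")

    def largest_gaps(path, top=10):
        entries = []  # (timestamp in seconds since boot, message)
        with open(path) as f:
            for line in f:
                m = TIMESTAMP.match(line)
                if m:
                    entries.append((float(m.group(1)), m.group(2).strip()))

        # Gap between consecutive messages, attributed to the later message.
        gaps = [
            (entries[i][0] - entries[i - 1][0], entries[i][0], entries[i][1])
            for i in range(1, len(entries))
        ]
        gaps.sort(reverse=True)
        return gaps[:top]

    if __name__ == "__main__":
        log = sys.argv[1] if len(sys.argv) > 1 else "boot.log"
        for delta, ts, msg in largest_gaps(log):
            print(f"+{delta:7.3f}s at [{ts:10.3f}] {msg}")

From there, the usual Buildroot-style wins come from dropping drivers and services the board doesn't need, but the gap ranking at least tells you which culprit to chase first.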
Independently but concurrently, I decided I really ought to understand what all this AI stuff was about, for fear of getting left behind. That coincided with the release of Opus 4.5, and holy heck has it made a difference! With a little guidance from me, Claude got the Buildroot environment working and the boot time down to less than 10 seconds. I've been _really_ impressed. I've also had Claude write a few boring utilities that I could easily have done myself, but that Claude managed much faster and with less boredom on my part. Fortunately for my AI revolution, I think I'm a better Business Analyst/writer than I am a coder, so it fits with my temperament.