The article is well put together and nicely illustrated, though :)
https://github.com/mrc-ide/covid-sim/blob/e8f7864ad150f40022...
This was used by Imperial College for its COVID-19 predictions. It has race conditions and seeds the model multiple times, so its results are completely non-deterministic[0]. Also, this is the cleaned-up repo; the original is not available[1].
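To see why that matters, here's a minimal sketch in C++ (illustrative only, not taken from covid-sim) of how a single unsynchronized shared variable makes a "result" differ from run to run:

    // g++ -O2 -pthread race.cpp
    // Four threads add into a shared total with no synchronization.
    // Lost updates and varying accumulation order make the printed
    // value change between runs, even with identical inputs.
    #include <iostream>
    #include <thread>
    #include <vector>

    int main() {
        double total = 0.0;  // shared and unguarded: a data race
        std::vector<std::thread> workers;
        for (int t = 0; t < 4; ++t) {
            workers.emplace_back([&total, t] {
                for (int i = 0; i < 100000; ++i)
                    total += 1e-6 * (t + 1);  // racy read-modify-write
            });
        }
        for (auto& w : workers) w.join();
        std::cout << total << '\n';  // typically differs on every run
    }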
A lot of my homework from over 10 years ago still runs (some of it requires the right Docker container: https://github.com/sumdog/assignments/). If journals really care about the reproducibility crisis, artifact reviews need to be part of the editorial process. Scientific code needs tests and a minimal amount of test coverage, and the code and data used really need to be published and run by volunteers/editors in the same way papers are reviewed, even for non-computer-science journals.
[0] https://lockdownsceptics.org/code-review-of-fergusons-model/
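The kind of check that catches the problem above is cheap to write, too. A minimal sketch, where run_model() is a hypothetical stand-in for the real simulation: seed once with a fixed value and assert the output matches.

    // Determinism/regression test a reviewer or CI job could run.
    #include <cassert>
    #include <cstdint>
    #include <random>

    // Hypothetical stand-in for the real simulation.
    std::uint64_t run_model(std::mt19937_64& rng) {
        std::uint64_t acc = 0;
        for (int i = 0; i < 1000; ++i)
            acc ^= rng();  // raw mt19937_64 output is standardized by C++,
        return acc;        // so this value is reproducible across platforms
    }

    int main() {
        std::mt19937_64 a(42), b(42);          // one fixed seed, set once
        assert(run_model(a) == run_model(b));  // same seed => same answer
        // A real suite would pin the exact value published with the paper:
        // assert(run_model(a) == /* recorded value */);
        return 0;
    }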
After a paper has been accepted, authors can submit a repository containing a script which automatically replicates results shown in the paper. After a reviewer confirms that the results were indeed replicable, the paper gets a small badge next to its title.
While there could certainly be improvements, I think it's a step in the right direction.
For something a bit more modern, I'd recommend [0], but one might argue that old OpenGL is easier to learn since you don't have to set up your own shaders.
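To show what "no shaders" buys you: in old fixed-function OpenGL, a colored triangle is just a window plus glBegin/glEnd. A minimal sketch, assuming GLFW for the window and context (any context library would do):

    // g++ triangle.cpp -lglfw -lGL
    #include <GLFW/glfw3.h>

    int main() {
        if (!glfwInit()) return 1;
        GLFWwindow* win = glfwCreateWindow(640, 480, "triangle", nullptr, nullptr);
        if (!win) return 1;
        glfwMakeContextCurrent(win);

        while (!glfwWindowShouldClose(win)) {
            glClear(GL_COLOR_BUFFER_BIT);
            glBegin(GL_TRIANGLES);  // immediate mode: no shaders, no buffers
            glColor3f(1, 0, 0); glVertex2f(-0.5f, -0.5f);
            glColor3f(0, 1, 0); glVertex2f( 0.5f, -0.5f);
            glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.5f);
            glEnd();
            glfwSwapBuffers(win);
            glfwPollEvents();
        }
        glfwTerminate();
    }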
GDPR should be resisted by as many companies and startups as possible.
Instead, I wrote a wrapper around OpenGL that provided functions like "drawRectangle()" or "drawImage()", and they have used it to build all sorts of things. They also constantly want to try other stuff. I don't think they would have done this if they didn't enjoy the process of writing code.
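Sketched roughly, such a wrapper can be tiny. This is a simplified illustration rather than the exact code, using immediate-mode GL; the orthographic projection setup and a loadImage() texture loader are assumed to live elsewhere in the wrapper:

    #include <GL/gl.h>

    // Fill a rectangle in wrapper coordinates with a solid color.
    void drawRectangle(float x, float y, float w, float h,
                       float r, float g, float b) {
        glColor3f(r, g, b);
        glBegin(GL_QUADS);
        glVertex2f(x,     y);
        glVertex2f(x + w, y);
        glVertex2f(x + w, y + h);
        glVertex2f(x,     y + h);
        glEnd();
    }

    // Draw a texture previously created by a (hypothetical) loadImage().
    void drawImage(GLuint texture, float x, float y, float w, float h) {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, texture);
        glColor3f(1, 1, 1);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(x,     y);
        glTexCoord2f(1, 0); glVertex2f(x + w, y);
        glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
        glTexCoord2f(0, 1); glVertex2f(x,     y + h);
        glEnd();
        glDisable(GL_TEXTURE_2D);
    }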