It seems like many from this group now pursue open-endedness in AI and view evolution as a path towards this goal (or lack thereof).
A very interesting evolution (ha!) of these ideas was presented in POET[0], which evolves agents in tandem with the evolving environments they are trained in.
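For intuition, here is a toy sketch of a POET-style co-evolution loop (my own abstraction for illustration, not the paper's implementation): environments and agents are reduced to single numbers, and the loop alternates optimization, environment mutation, and agent transfer.

```python
import random

# Toy POET-style co-evolution loop (illustrative abstraction only).
# An environment is reduced to a difficulty number, an agent to a
# skill number; an agent "solves" an environment when its skill
# reaches the difficulty.

def optimize(agent_skill, difficulty, steps=10):
    """Toy inner loop: skill drifts upward until it meets the difficulty."""
    for _ in range(steps):
        if agent_skill < difficulty:
            agent_skill += random.uniform(0.0, 0.05)
    return agent_skill

def poet_loop(generations=20, max_pairs=5):
    # Each entry pairs an environment with the agent being trained in it.
    pairs = [(0.1, 0.0)]  # (difficulty, skill)
    for _ in range(generations):
        # 1. Optimize every agent within its paired environment.
        pairs = [(d, optimize(s, d)) for d, s in pairs]
        # 2. Solved environments spawn harder mutated children.
        children = [(d + random.uniform(0.0, 0.2), s)
                    for d, s in pairs
                    if s >= d and len(pairs) < max_pairs]
        pairs = (pairs + children)[:max_pairs]
        # 3. Transfer: since higher skill is always better in this toy,
        #    the best agent overall replaces weaker incumbents everywhere.
        best = max(s for _, s in pairs)
        pairs = [(d, max(s, best)) for d, s in pairs]
    return pairs
```

The real system evolves obstacle-course environments and neural controllers, and transfer is conditional on actually performing better in the target environment; the scalar version above only keeps the overall loop structure.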
There is also an interesting paper on Generative Teaching Networks[1], which accelerate neural architecture search by generating synthetic training data.
Lastly, a paper that I find very interesting, though it might not be as relevant, is 'First return, then explore'[2].
[0] : https://eng.uber.com/poet-open-ended-deep-learning/
It's hard to just do it to test the waters when you have a technical project and multiple interviewing rounds (which also include live coding, which for me is very stressful).