Shameless plug: for anyone interested in "self-improving" agents, check out StreamBench[1], where we benchmark what's essential for improvement in online settings. In short, we find the feedback signal is vital: the stronger the signal, the more improvement you can get, provided you can feed it back to the agent as weight updates (e.g. LoRA) or as in-context examples.
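To make the in-context-example route concrete, here's a minimal sketch (not the actual StreamBench code; the `solve` stand-in and exact-match feedback are my own simplifications) of an agent that keeps verified answers in a growing buffer and reuses them on later tasks:

```python
def solve(task, examples):
    # Stand-in for an LLM call conditioned on the example buffer;
    # here it's a simple lookup so the sketch runs on its own.
    known = dict(examples)
    return known.get(task, task.upper())

def run_stream(tasks, feedback):
    examples = []  # growing in-context memory of verified (task, answer) pairs
    results = []
    for task in tasks:
        answer = solve(task, examples)
        # Feedback signal (e.g. exact match against a reference):
        # only verified answers are fed back as in-context examples.
        if feedback(task, answer) and (task, answer) not in examples:
            examples.append((task, answer))
        results.append(answer)
    return results, examples
```

The point is that the loop only needs a feedback function, not gradients, which is why the signal's quality directly bounds how much the agent can improve.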
Do beware: on some reasoning tasks, our recent work[0] found that forcing JSON output may cause performance degradation and weaken the model's reasoning. I really hope they fix this in the latest GPT-4o version.
[1] https://arxiv.org/abs/2406.08747