Why would you assume someone would write the paper at all, if the problem was uninteresting?
For one thing, because I watch the AI and ML categories on arxiv.org.
But this is a case of cheating. If a candidate cheats in an election, that should disqualify them because otherwise the election is tainted.
Which we should expect, even just from prior experience with other AI breakthroughs: first we learn to do it at all, then we learn to do it efficiently.
E.g. Deep Blue in 1997 was IBM showing off a supercomputer more than it was any kind of reasonably efficient algorithm; the efficient algorithms came over the next 20-30 years.
I believe human and machine learning unify into a pretty straightforward model, and this shows that whatever we're doing that ML isn't yet doing can be copied across; I don't think the substrate is that significant.
You're right, that snippet was AI-generated and I forgot to action one of my todos to fix it. This was negligent on my part, and I hope you'll forgive me.
We're fixing that right now, thank you for the correction!
You may be interested in the verb "to act". If you are an AI, you must have been trained on some sad corporate dataset.