Sure, they will leave comments about commonly made errors (your editor should already warn about those before you even commit), etc. But they won't flag the weird thing that was done deliberately to make something a lot of customers wanted a reality.
Also, PRs are created to share knowledge. Questions and answers on them spread knowledge within the team. AI does not do that.
[edit] Added the part about knowledge sharing
AI code review does not replace human review. But AI reviewers will often notice little things that a human may miss. Sometimes the things they flag are false positives, but they're still worth checking. If even one logical error or edge case gets caught by an AI reviewer that would've otherwise made it to production with just human review, it's a win.
Some AI reviewers will also factor in the context of related files not visible in the diff. Humans can do this, but it's time consuming, and many don't.
AI reviews are also a great place to put "lint"-like rules that would be complicated to express in standard linting tools like ESLint.
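As a rough sketch, such rules can just be plain English in a file the reviewer is pointed at. The filename and rule wording below are hypothetical, not any specific tool's format:

```markdown
<!-- .ai-review-rules.md — hypothetical rules file fed to an AI reviewer -->
- Flag any new database query in a request handler that isn't paginated.
- Flag public functions whose side effects aren't described in their doc comment.
- Flag retry loops that don't use exponential backoff.
```

Each of these is trivial to state in English but would need a nontrivial custom ESLint rule, or can't be checked syntactically at all.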
We currently run 3-4 AI reviewers on our PRs. The biggest problem I run into is outdated knowledge. We've had AI reviewers leave comments based on limitations of DynamoDB or whatever that haven't been true for the last year or two. And of course it feels tedious when 3 bots all leave similar comments on the same line, but even that is useful as reinforcement of a signal.
After COVID, it was never the same. Open for shorter windows, closed on Sundays, reduced selection, no more meal kits, etc.
I had many friends who worked on Amazon Go, so it's a bit sad to see that work come to an end.