Because lying about your use of AI is a good way to get kicked out of the open source community once you're caught. That's like asking 'why should you bother with anti-cheating measures for speedruns'. Why have any guidelines or regulations at all if people are just going to bypass them? The answer, I hope, is obvious.
> high quality PRs with AI will get the "AI slop" label. At this point, why even disclose if the AI-assisted high-quality PR is indistinguishable from having been manually written (which it should be)? No point.
Then obviously the repository in question doesn't want people using AI and you should go elsewhere. This repo isn't even against LLM tooling, but people are freaking out over "how dare you ask me to disclose what tools I'm using".
There was a blog post about mixing different agents into the same conversation, taking turns at responses to improve results/correctness. But it takes a lot of effort to build your own claude-code clone with the correct API for each provider, prompts tuned for each model, tool use integrated, etc. And there's no incentive for Anthropic/OpenAI/Google to write that tool for us.
OTOH it would be relatively easy to write a bash loop that calls Claude Code, the Codex CLI, etc. in turn and get much of the same benefit. If one iteration of one tool gets stuck, perhaps another LLM will take a different approach and get things back on track.
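Roughly, as a sketch: the non-interactive invocations (`claude -p`, `codex exec`) and the `make test` success check are assumptions here; you'd swap in whatever your installed CLIs and project actually use.

```bash
#!/usr/bin/env bash
# Sketch only: alternate two agent CLIs on the same working tree until the tests pass.
# The flags (claude -p, codex exec) and the test command are assumptions to adapt.

TASK="Make the failing tests in this repo pass. Continue from the current working tree."

for i in $(seq 1 6); do
  if (( i % 2 )); then
    claude -p "$TASK" || true      # odd iterations: Claude Code
  else
    codex exec "$TASK" || true     # even iterations: Codex CLI
  fi

  # Stop as soon as the suite is green; otherwise hand off to the other agent.
  if make test; then
    echo "Green after $i iteration(s)."
    break
  fi
done
```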
Just a thought.
It most definitely can. It's insane how well just telling Claude to ask Gemini for help works in practice.
https://github.com/raine/consult-llm-mcp
Disclaimer: I made it.
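For what it's worth, wiring it up is just registering the MCP server with Claude Code. The server command below is a placeholder, not the project's actual package name; the repo's README has the real install instructions.

```bash
# Sketch only: register the MCP server so Claude can consult another model.
# "npx -y consult-llm-mcp" is a placeholder command; see the repo README for the real one.
claude mcp add consult-llm -- npx -y consult-llm-mcp
```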