Why is what showing up? It just looks like they're a new GitHub user trying to make a tool that basically asks a bunch of different questions about Singapore to various LLMs.
Skimming the article, this seems like another case of the explainability problem, no? The conversation with the LLM makes the results "easier to understand" (which is a requirement for real use cases) but loses accuracy. Still, it's good to have more studies confirming this tradeoff.