Here in the Netherlands everyone uses "buienradar", which is limited to the Netherlands, has very poor privacy practices, and is also not super great at predicting rainfall.
So not seeing them means either lying or incompetence. I always try to attribute to stupidity rather than malice (Hanlon's razor).
The big problem with LLMs is that they optimize for human preference. This means they optimize for errors that humans don't notice, i.e. hidden errors.
Personally, I'm really cautious about using tools that have stealthy failure modes. They lead to many problems and lots of wasted hours debugging, even when failure rates are low. Everything slows down for me, because I'm double-checking everything and need to be much more meticulous once I know the failures are hard to see.

It's like having a line of Python indented with an inconsistent whitespace character: impossible to see. Now imagine you didn't have the interpreter telling you which line failed, or the ability to search for and highlight those odd characters. At least with the interpreter you'd know there's an error. It's hard enough dealing with human-generated invisible errors; this just seems to empower the LGTM crowd.
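To make the whitespace analogy concrete, here's a minimal sketch of the kind of checker you'd want for that failure mode (the character list and function name are my own illustration, not any standard tool): it scans source text for whitespace characters that render like ordinary spaces but that the Python parser treats differently.

```python
# Illustrative sketch: flag "invisible" whitespace characters that an
# editor renders identically (or not at all) but that break parsing.
SUSPECT = {
    "\u00a0": "NO-BREAK SPACE",
    "\u2009": "THIN SPACE",
    "\u200b": "ZERO WIDTH SPACE",
}

def find_invisible_whitespace(source: str):
    """Return (line_no, col, name) for each suspect character found."""
    hits = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in SUSPECT:
                hits.append((line_no, col, SUSPECT[ch]))
    return hits

# A no-break space starts the indentation on line 2 -- it looks like a
# normal space in most editors, but CPython rejects it as a syntax error.
code = "def f():\n\u00a0   return 1\n"
print(find_invisible_whitespace(code))  # [(2, 1, 'NO-BREAK SPACE')]
```

The point of the analogy: with the interpreter, this error at least announces itself with a line number. A stealthy tool failure gives you no such traceback.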
How do you value your Saturday grocery run? The multiple hours spent at kids' sports practices? The time spent on home improvements? Those are the hours that are more difficult to model accurately.