However, as an app ecosystem, it's not tough at all. For example, there is not a single open source email app on iOS that supports GPG. In the iOS app ecosystem, privacy and FLOSS are an afterthought, since iOS users are more likely to pay for proprietary software. On Android there are a lot more options, including things like F-Droid, which is full of FLOSS apps graded based on their patterns and anti-patterns.
Seems like the same idea would apply to regional differences in health, including mental health. An obvious one: if the land is high in lead, the people living on it might be more violent. Oil wells likely affect air and food quality in the areas around them. Maybe areas with a lot of lithium produce happier people. Western medicine has largely ignored environmental effects. A map of background radiation levels across the world might have an interesting health story to tell, as could multi-spectral satellite imagery. Oritain might be sitting on a wealth of very useful health data without knowing it.
We keep hearing about these giant models like GPT-3 with 175 billion parameters. Parameters are the things that change when we train a model; you can think of them as degrees of freedom. Classical theory led us to believe that a model with that many parameters would just "overfit" the training data, i.e. memorize it. That's bad, because when new data comes in in production we'd expect the model to be unable to "generalize", i.e. make accurate predictions on data it hasn't seen before, because it memorized the training data instead of uncovering the "guiding principles" of the data, so to speak.
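You can see the classical overfitting story in miniature with polynomials. This is a minimal sketch (my own toy example, not from the paper): fit a low-degree and a high-degree polynomial to a few noisy samples of sin(x). The high-degree fit has enough parameters to memorize the training points (near-zero training error) but does worse between them.

```python
import numpy as np

# Toy overfitting demo: 8 noisy training points from y = sin(x).
rng = np.random.default_rng(0)
x_train = np.linspace(0, 3, 8)
y_train = np.sin(x_train) + rng.normal(0, 0.1, size=x_train.shape)

# A denser, noise-free grid stands in for "unseen production data".
x_test = np.linspace(0, 3, 50)
y_test = np.sin(x_test)

results = {}
for degree in (2, 7):  # degree + 1 parameters each
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    results[degree] = (train_err, test_err)
    print(f"degree {degree}: train MSE {train_err:.5f}, test MSE {test_err:.5f}")
```

With degree 7 and 8 points, the polynomial can pass through every training point exactly, so its training error collapses toward zero while its test error stays dominated by the memorized noise. The surprise the paper discusses is that huge neural nets sit in an even more extreme version of this regime and yet don't behave this way.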
In practice, these huge models are, in layman's terms, fucking awesome and work really well, i.e. they generalize and work in production. No one understands why.
This paper is a survey, or overview, of what "too many parameters" means, and of all the research into why these models work even though they shouldn't.