https://en.wikipedia.org/wiki/Pangaea_Proxima
Life might very well exist on earth even through those conditions, but not to the extent we have today.
Here's the first paragraph in English:
> The town studio of Signor Jacobelli faced the west. It was situated on the top floor of an old eight-storied building in the West Fifties. Thirty years ago this had been given over entirely to studios, but now it was broken up into a more profitable mêlée of semi-commercial establishments and light-housekeeping apartments.
Here's the first paragraph in the Swedish translation:
> Signor Jacobelli hade en ateljé högst upp i ett gammalt hus med åtta våningar. För trettio år sedan var huset fullt av konstnärer, men nu fanns där både butiker och lägenheter.
I get that the translation is into a 'simplified' version of Swedish, and that translations of fiction often restructure the original, but this goes so far that one not only needs to know what the Swedish words mean, but also has to interpret them through a vast restructuring of the original text.
Compare with a Kagi (DeepL) translation of the text:
> Signor Jacobellis ateljé i staden vette mot väster. Den låg högst upp i ett gammalt åttavåningshus på West Fifties. För trettio år sedan hade detta uteslutande varit ateljéer, men nu var det uppdelat i en mer lönsam blandning av halvkommersiella etablissemang och lägenheter med enklare hushållning.
Kagi preserves the original structure, which makes it far easier to compare the translation with the original word by word.
I could be wrong but to me it seems far easier to learn a language when a translation doesn't come with a vast restructure of the original content.
The reason for both things is that the best models perform, at best, on the level of a recent graduate.
When would you hire a recent graduate in either role, if you could afford better?
These models are essentially the same models for both science and art, and it was a surprise to everyone that GPT-3 was able to turn into ChatGPT, or that Stable Diffusion was able to generalise so well with relatively few issues (even despite the occasional Cronenberg anatomy study). The flaws with the LLMs that prevent accurate science are the same flaws that cause object impermanence in written stories; the flaws that prevent image and video models from being physically plausible are the same flaws — incorrect world model — that cause them to be wrong about weather forecasts, chemistry, etc.
In both cases, increasing quality of AI raises the metaphorical water level, and in this example rising tides don't lift all boats, but instead drown (the careers of) people who can't swim. I don't have a fix for that, and I'm deeply skeptical that any of the suggestions from the AI firms will work — they're not economists, and even if they were (or even if they hired loads), if the AI companies are right, this change will be at least as big as the industrial revolution, which upended old economic models.
https://www.scientificamerican.com/article/cats-kill-a-stagg...
Do you think preserving your privacy in this one aspect of your life will have a greater net benefit to your life than driving a safer car (under the assumption that newer cars are safer)? Especially given that presumably there's still data being collected on you even in an old car (cameras on the road, other people's cars, your phone, etc).
By analogy, what's the marginal benefit of not eating any food packaged in plastic if your water supply is full of (unavoidable, for the sake of argument) microplastics? Is doing so worth the cost (no food for you, buddy!)?
I guess this is just another round of principle duking it out with pragmatism.
That sounds like marketing BS, especially since these functions most likely just call into, or are implemented nearly identically to, the old C functions, which presumably already "offer the best possible performance".
> I did some benchmarks, and the new routines are blazing fast![...]around 4.5x faster than stoi, 2.2x faster than atoi and almost 50x faster than istringstream
Are you sure that wasn't because the compiler decided to optimise away the function directly? I can believe it being faster than istringstream, since that has a ton of additional overhead.
After all, the source is here if you want to hear it straight from the horse's mouth:
https://raw.githubusercontent.com/gcc-mirror/gcc/master/libs...
Not surprisingly, under all those layers of abstraction-hell, there's just a regular accumulation loop.
https://github.com/fastfloat/fast_float
For more historical context:
https://lemire.me/blog/2020/03/10/fast-float-parsing-in-prac...
> One drink per day more dangerous than smoking a pack per day?

How does this insane number go unnoticed for so long? I really find it hard to believe.
Edit: OK, I looked into the reference and it's a bit more subtle, though I can't find numbers for people who don't consume anything at all, although one would think they'd have a 0% rate of alcohol-related cancers.
"For example, a study of 226,162 individuals reported that the absolute risk of developing any alcohol-related cancer over the lifespan of a woman increases from approximately 16.5% (about 17 out of every 100 individuals) for those who consume less than one drink per week, to 19.0% (19 out of every 100 individuals) for those who consume one drink daily on average to approximately 21.8% (about 22 out of every 100 individuals) for those who consume two drinks daily on average (Figure 5). That is about five more women out of 100 who would have developed cancer due to a higher level of alcohol consumption."
Pretty significant: 21.8% − 16.5% ≈ 5 percentage points, the "five more women out of 100" from the quote. Although "less than one drink per week" is a bit vague as a baseline.
I assume "alcohol related" in this context means that alcohol consumption increases the risk for those types of cancers, but you might still get those types of cancers even if you have never consumed any alcohol. And "less than one drink per week" is assumed to be almost the same as never consuming any alcohol at all, so roughly 17% is the baseline risk even for women who never consume alcohol.