But isn't that accounted for by the fact that the floating point number distribution is non-uniform? Half of all floating point numbers are between -1 and 1.
My reasoning is about how much information can be encoded in the format.
The IEEE-754 double format has 11 bits to encode the exponent and 52 bits to encode the fraction.
Therefore, the multiplying factor of a double ranges from 2^-1022 to 2^1023. To give an idea of how large this is, scientists estimate there are about 10^80 atoms in the universe; in base 2 that is a "little" less than 2^266.
Most applications simply don't work with numbers of this magnitude. And the ones that do don't care that much about precision.
Let me know if there is something wrong with my logic.
The nano mineral sunscreens are bad, but they can be coated, which sounds like it makes them less harmful.
The most common concerns specific to the nano versions are skin absorption and free-radical creation when exposed to UV radiation -- not exclusive to mineral sunscreens though; avobenzone is also highly unstable.
We often hear about studies done with the nano crystals because they are worse: experiments done with them are more likely to produce a negative result. But this doesn't mean the "macro" version isn't affected as well.
The aspiration concern is a very serious one. Some countries ban spray and powder mineral sunscreens of any kind.
Just to be clear, I'm not advocating against using them. Every sunscreen has its place; saying one option is better than another is reductive.
For instance, I think an avobenzone + antioxidant only chemical sunscreen is a good option to minimize UVA damage while still letting the body produce vitamin D. But it wouldn't be my choice when going to the beach.
Similarly, mineral sunscreens don't degrade in the sun and block a wide range of UV radiation, but they are bad for marine wildlife. They are a good choice for a daily facial sunscreen, especially for skin sensitized by "anti-aging" treatments.
Again: there is no such thing as the best sunscreen. Each is best in different use cases.
There are two classes of sunscreen, chemical and mineral/physical.
The mineral/physical ones are based on two minerals: zinc oxide and titanium dioxide.
You get a white cast using mineral sunscreens, but you can offset it by using a tinted version.
Mineral sunscreens are better in that they stop both UVB and UVA, and they don't require a waiting period after applying. Zinc oxide provides a wider spectrum of protection than titanium dioxide.
My personal choice to allow vitamin D synthesis is avobenzone stabilized with ubiquinone.
[1] https://oceanservice.noaa.gov/news/sunscreen-corals.html [2] https://www.sciencedirect.com/science/article/pii/S0041008X1...
I simply stay out of the sun, wear long sleeves and a hat if I can't avoid it, and ... Well I have ugly teeth.
Not wearing sunscreen isn't a pass for exposure; there are still precautions that can be taken.
https://www.ewg.org/sunscreen/report/the-trouble-with-sunscr...
Demo:

#include <stddef.h>
#include <stdio.h>
int
main ()
{
volatile float v;
float acc = 0;
float den = 1.40129846432e-45;
for (size_t i = 0; i < (1ul<<33); i++) {
acc += den;
}
v = acc;
return 0;
}
With -O1:

$ gcc float.c -o float -O1 && time ./float
./float  8.93s user 0.00s system 99% cpu 8.933 total

With -O0:

$ gcc float.c -o float -O0 && time ./float
./float  20.60s user 0.00s system 99% cpu 20.610 total
I've been fighting the compiler to generate a minimal working example of the subnormals, but didn't have any success.
Some things need to be taken into account (off the top of my head):
- Rounding. You don't want to get stuck at the same number.
- The FPU has accumulator registers that are larger than the floating point registers.
- Using more registers than the architecture has is not trivial because of register renaming and code reordering. The CPU might optimize in a way that the data never leaves those registers.
Trying to make a MWE, I found this code:
#include <stddef.h>
#include <stdio.h>
int
main ()
{
double x = 5e-324;
double acc = x;
for (size_t i = 0; i < (1ul<<46); i++) {
acc += x;
}
printf ("%e\n", acc);
return 0;
}
Runs in a fraction of a second with -O0: gcc double.c -o double -O0
But takes forever (killed after 5 minutes) with -O1: gcc double.c -o double -O1
I'm using gcc (Arch Linux 9.3.0-1) 9.3.0 on an i7-8700.

I also managed to create code that would sometimes run in 1s, but other times take 30s. It didn't matter if I recompiled.
Floating point is hard.
1. -0 now encodes NAN.
2. +inf/-inf are all Fs with sign: 0x7FFFFFFF, 0xFFFFFFFF.
3. 0 is the only denorm.
Which does four good things:
1. Gets rid of the utter insanity which is -0.
2. Gets rid of all the redundant NANs.
3. Makes INF "look like" INF.
4. Gets rid of "hard" mixed denorm/norm math.
And one seriously bad thing:
1. Loses a bunch of underflow values in the denorm range.
However, as to the latter: who the fuck cares! Getting down to that range using anything other than divide-by-two completely trashes the error rate anyways, so why bother?
The rest of Gustafson's stuff always sounds like crazy-people talk, to me.
When working with numbers that exceed the posit representation, you use the quire to accumulate. At the end of the computation you convert back to posit to store in memory, or store the quire itself in memory.
In C, it would look something like:
posit32_t a, b;
quire_t q;
q = a; // load posit into quire
q = q + b; // accumulate in quire
a = q; // load quire into posit
> The rest of Gustafson's stuff always sounds like crazy-people talk, to me.

I've read all his papers on posits and agree. But I do believe the idea of encoding the exponent with Golomb-Rice coding is actually very good and suits most users. The normalization hardware (used in the subtraction operation) can be easily repurposed to decode the exponent and perform the shift.
But the quire logic (fixed-point arithmetic) might use more area than a larger floating point unit. Maybe it pays off in power usage, though.