Readit News
piadodjanho commented on Floating Point Visually Explained (2017)   fabiensanglard.net/floati... · Posted by u/bmease
sigjuice · 6 years ago
Compiler warnings are not straightforward and depend on many things: compiler version, optimization settings, warning settings.

  $ gcc-9 -Wuninitialized -O0 float.c  # NO WARNINGS!!!

  $ gcc-9 -O1 float.c  # NO WARNINGS!!!
  
  $ gcc-9 -Wuninitialized -O1 float.c
  float.c: In function 'main':
  float.c:9:5: warning: 'i' is used uninitialized in this function [-Wuninitialized]
      9 |     for (size_t i; i < (1ul << 33); i++) {
        |     ^~~

piadodjanho · 6 years ago
Weirdly, these two pieces of code behave differently:

    $ gcc hn.c -O0 -o hn

    for (size_t i; i < (1ul<<46); i++) {
        printf ("%zd\n",i);
        acc += x;
        break;
    }
Without break:

    $ gcc hn.c -O0 -o hn

    for (size_t i; i < (1ul<<46); i++) {
        printf ("%zd\n",i);
        acc += x;
    }

piadodjanho commented on Floating Point Visually Explained (2017)   fabiensanglard.net/floati... · Posted by u/bmease
BeetleB · 6 years ago
> If that wasn't bad enough, since the magnitude of the numbers follows a normal distribution (someone whose name I forgot's law), the most significant bits of the exponent field are very rarely used. The IEEE-754 encoding is suboptimal.

But isn't that accounted for by the fact the floating point number distribution is non-uniform? Half of all floating point numbers are between -1 and 1.

piadodjanho · 6 years ago
Hm. I don't know.

My reasoning is about how much information can be encoded in the format.

The IEEE-754 double format has 11 bits to encode the exponent and 52 bits to encode the fraction.

Therefore, the scaling factor of a double ranges from 2^-1022 to 2^1023. To give an idea how large this is, scientists estimate there are about 10^80 atoms in the universe; in base 2 that is a little less than 2^266.

Most applications don't work with numbers of this magnitude. And the ones that do don't care so much about precision.

Let me know if there is something wrong with my logic.

piadodjanho commented on Floating Point Visually Explained (2017)   fabiensanglard.net/floati... · Posted by u/bmease
sigjuice · 6 years ago
Shouldn't i be initialized?
piadodjanho · 6 years ago
lol, how did the compiler not warn about the uninitialized variable?
piadodjanho commented on Is Sunscreen the New Margarine? (2019)   outsideonline.com/2380751... · Posted by u/plessthanpt05
01100011 · 6 years ago
Non-nano zinc oxide, the traditional kind that is white, like OP is talking about, is fine as far as I know(https://www.vogue.com/article/reef-safe-sunscreens-oxybenzon...).

The nano mineral sunscreens are bad, but they can be coated, which sounds like it makes them less harmful.

piadodjanho · 6 years ago
The issues I enumerated affect both nano and non-nano sunscreens to different degrees.

The most common concerns specific to the nano version are skin absorption and free-radical creation when exposed to UV radiation -- not exclusive to mineral sunscreens though; avobenzone is also highly unstable.

We often hear about studies done with the nano crystals because they are worse. Experiments done with them are more likely to produce a negative result. But this doesn't mean the "macro" version isn't affected as well.

The aspiration concern is a very serious one. Some countries ban spray and powder mineral sunscreens of any kind.

Just to be clear: I'm not advocating against using them. Every sunscreen has its place; saying one option is better than another is reductive.

For instance, I think an avobenzone + antioxidant chemical-only sunscreen is a good option to minimize UVA damage while still enabling the body to produce vitamin D. But it wouldn't be my choice when going to the beach.

Similarly, mineral sunscreens don't degrade in the sun and block a wide range of UV radiation, but they are bad for marine wildlife. They are a good choice for a daily facial sunscreen, especially for skin sensitized by "anti-aging" treatments.

Again: there is no such thing as the best sunscreen. Each is best in different use cases.

Again. There is no such thing as the best sunscreen. Each is best in different use cases.

piadodjanho commented on Is Sunscreen the New Margarine? (2019)   outsideonline.com/2380751... · Posted by u/plessthanpt05
anthony_doan · 6 years ago
Best sunscreen is zinc oxide btw.

There are two classes of sunscreen: chemical and mineral/physical.

The mineral/physical ones are grouped into two minerals: zinc oxide and titanium dioxide.

You get a ghostly white cast using mineral sunscreen, but you can offset it by using a tinted/colored version.

Mineral sunscreens are better because they stop both UVB and UVA. They also don't require a waiting period after applying. Zinc provides a wider spectrum of protection than titanium.

piadodjanho · 6 years ago
A sunscreen that blocks UVB, harms marine life [1], and can cause allergic airway inflammation [2] when breathed in (inevitable once it dries on your skin) is hardly the best sunscreen in my opinion.

My personal choice for allowing vitamin D synthesis is avobenzone stabilized with ubiquinone.

[1] https://oceanservice.noaa.gov/news/sunscreen-corals.html [2] https://www.sciencedirect.com/science/article/pii/S0041008X1...

piadodjanho commented on Is Sunscreen the New Margarine? (2019)   outsideonline.com/2380751... · Posted by u/plessthanpt05
dleslie · 6 years ago
I'm one of those weird people who gets a rash when using sunscreen, and am also one of those folks who has dental fluorosis.

I simply stay out of the sun, wear long sleeves and a hat if I can't avoid it, and ... Well I have ugly teeth.

Not wearing sunscreen isn't a pass for exposure; there are still precautions that can be taken.

piadodjanho · 6 years ago
The most commonly used sunscreen chemicals have hormone-like activity [1].

[1] https://www.ewg.org/sunscreen/report/the-trouble-with-sunscr...

piadodjanho commented on Floating Point Visually Explained (2017)   fabiensanglard.net/floati... · Posted by u/bmease
thechao · 6 years ago
0 is unsigned. I would reject 1/inf — it would be NAN. If the user wants to play silly games with derivatives, computer algebra systems are that way: —>.
piadodjanho · 6 years ago
NaN is NaR (not a real) in posit notation.
piadodjanho commented on Floating Point Visually Explained (2017)   fabiensanglard.net/floati... · Posted by u/bmease
thechao · 6 years ago
I come from GPU-land, and a quire always brings a chuckle from the fp HW folk. They like the rest of Gustafson’s stuff, though.
piadodjanho · 6 years ago
Yeah. Gustafson is like: memory is cheap, just use a few hundred KB to store the quire. To be fair, he is not a HW guy.
piadodjanho commented on Floating Point Visually Explained (2017)   fabiensanglard.net/floati... · Posted by u/bmease
piadodjanho · 6 years ago
In practice, subnormals are very rarely used. Most compilers disable subnormals when compiling with fast-math-style flags (e.g. gcc's -ffast-math). An operation on subnormals can take over a hundred cycles to complete.

Demo:

    #include <stdio.h>

    int
    main ()
    {
        volatile float v;
        float acc = 0;
        float den = 1.40129846432e-45;

        for (size_t i; i < (1ul<<33); i++) {
            acc += den;
        }

        v = acc;

        return 0;
    }
With -O1:

    $ gcc float.c -o float -O1 && time ./float
    ./float  8.93s user 0.00s system 99% cpu 8.933 total

With -O0:

    $ gcc float.c -o float -O0 && time ./float
    ./float  20.60s user 0.00s system 99% cpu 20.610 total

piadodjanho · 6 years ago
I looked at the asm generated from my original example: they produce very different code; gcc applies other optimizations when compiling with -O1.

I've been fighting the compiler to generate a minimal working example of the subnormals, but didn't have any success.

Some things need to be taken into account (off the top of my head):

- Rounding: you don't want to get stuck at the same number.

- The FPU has accumulator registers that are larger than the floating-point registers.

- Using more registers than the architecture has is not trivial because of register renaming and code reordering. The CPU might schedule things so that the data never leaves those registers.

Trying to make an MWE, I found this code:

    #include <stdio.h>
    
    int
    main ()
    {
        double x = 5e-324;
        double acc = x;
    
        for (size_t i; i < (1ul<<46); i++) {
                acc += x;
    
        }
    
        printf ("%e\n", acc);
    
        return 0;
    }

Runs in a fraction of a second with -O0:

    gcc double.c -o double -O0
But takes forever (killed after 5 minutes) with -O1:

    gcc double.c -o double -O1
I'm using gcc (Arch Linux 9.3.0-1) 9.3.0 on an i7-8700.

I also managed to create code that sometimes runs in 1 s but other times takes 30 s. It didn't matter if I recompiled.

Floating point is hard.

piadodjanho commented on Floating Point Visually Explained (2017)   fabiensanglard.net/floati... · Posted by u/bmease
thechao · 6 years ago
I'm mixed on Gustafson's posit stuff. For me, the only thing I'd change for fp would be:

1. -0 now encodes NAN.

2. +inf/-inf are all Fs with sign: 0x7FFFFFFF, 0xFFFFFFFF.

3. 0 is the only denorm.

Which does four good things:

1. Gets rid of the utter insanity which is -0.

2. Gets rid of all the redundant NANs.

3. Makes INF "look like" INF.

4. Gets rid of "hard" mixed denorm/norm math.

And one seriously bad thing:

1. Lose a bunch of underflow values in the denorm range.

However, as to the latter: who the fuck cares! Getting down to that range using anything other than divide-by-two completely trashes the error rate anyways, so why bother?

The rest of Gustafson's stuff always sounds like crazy-people talk, to me.

piadodjanho · 6 years ago
He also proposes the use of an opaque register for accumulation (the quire), in contrast to the transparent float registers (it's a mess; each compiler does what it thinks is best).

When working with numbers that exceed the posit representation, you use the quire to accumulate. At the end of the computation you convert back to posit to store in memory, or store the quire itself in memory.

In C, it would look like something like:

posit32_t a, b;
    quire_t q;
    
    q = a; // load posit into quire
    
    q = q + b; // accumulate in quire
    
    a = q; // load quire into posit

> The rest of Gustafson's stuff always sounds like crazy-people talk, to me.

I've read all his papers on posits and agree. But I do believe the idea of encoding the exponent with a Golomb-Rice code is actually very good and suits most users. The normalization hardware (used in the subtraction operation) can easily be repurposed to decode the exponent and do the shifting.

But the quire logic (fixed-point arithmetic) might use more area than a larger floating-point unit. Maybe it pays off in power usage, though.

u/piadodjanho

Karma: 154 · Cake day: November 24, 2018