Readit News
sparkie commented on OCaml as my primary language   xvw.lol/en/articles/why-o... · Posted by u/nukifw
sparkie · 13 days ago
> At present, I don’t know anyone who has seriously used languages like OCaml or Haskell and was happy to return to languages with less sophisticated type systems (though an interesting project can sometimes justify such a technological regression).

Recovered typeaholic here. I still occasionally use OCaml, and I primarily wrote F# and Haskell for years. I've been quite deep down the typing rabbit hole, and I used to scoff at dynamically typed languages.

Now I love dynamic typing - but not the Python kind - I prefer the Scheme kind - latent typing. More specifically, the Kernel[1] kind, which is incredibly powerful.

> I think the negative reputation of static type checking usually stems from a bad experience.

I think this goes two ways. Most people's experience with dynamic typing is the Python kind, and not the Kernel kind.

To be clear, I am not against static typing, and I love OCaml - but there are clear cases where static typing is the wrong tool - or rather, no static typing system is sufficient to express problems that are trivial to write correctly with the right dynamic types.

Moreover, some problems are inherently dynamic. Take for example object-capabilities (aka, security done right). Capabilities can be revoked at any time. It makes no sense to try and encode capabilities into a static type system - but I had such silly thoughts when I was a typeaholic, and I regularly see people making the same mistake. Wouldn't it be better to have a type system which can express things which are dynamic by nature?

And this is my issue with purely statically typed systems: They erase the types! I don't want to erase the types - I want the types to be available at runtime so that I can do things with them that I couldn't do at compile time - without me having to write a whole new interpreter.

My preference is for Gradual Typing[2], which lets us use both worlds. Gradual typing is static typing with a `dynamic` type in the type system, and sensible rules for converting between dynamic and static types - in particular, the consistency relation is not transitive (otherwise every type would be consistent with every other via `dynamic`).

People often mistake gradual typing for "optional typing" - the kind that Erlang, Python and TypeScript have - but that's not correct. Those are dynamic-first, with some static support. Gradual typing is static-first, with dynamic support.

Haskell could be seen as Gradual due to the presence of `Data.Dynamic`, but Haskell's type system, while a good static type system, doesn't make a very good dynamic type system.

Aside, my primary language now is C, which was the first language I learned ~25 years ago. I regressed! I came back to C because I was implementing a gradually typed language and F#/OCaml/Haskell were simply too slow to make it practical, C++/Rust were too opinionated and incompatible with what I want to achieve, and C (GNU dialect) let me have almost complete control over the CPU, which I need to make my own language good enough for practical use. After writing C for a while I learned to love it again. Manually micro-optimizing with inline assembly and SIMD is fun!

[1]:https://web.cs.wpi.edu/~jshutt/kernel.html

[2]:https://jsiek.github.io/home/WhatIsGradualTyping.html

sparkie commented on F-Droid build servers can't build modern Android apps due to outdated CPUs    · Posted by u/nativeforks
johnklos · 13 days ago
> It makes no sense for developers to keep maintaining software for 20 year old CPUs when they can't test it.

This is horribly inaccurate. You can compile software for 20 year old CPUs and run that software on a modern CPU. You can run that software inside of qemu.

FYI, there are plenty of methods of selecting code at run time, too.
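
For instance, GCC's function multi-versioning will compile several variants of one function and pick between them at load time via CPUID. A minimal sketch (the function and target list are just illustrative):

    // GCC emits an AVX2 clone, an SSE4.2 clone and a baseline clone,
    // plus an ifunc resolver that picks one at load time via CPUID.
    __attribute__((target_clones("avx2", "sse4.2", "default")))
    int sum(const int *xs, int n) {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += xs[i];   // may be auto-vectorized differently per clone
        return total;
    }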

If we take what you're saying at face value, then we should give up on portable software, because nobody can possibly test code on all those non-x86 and/or non-modern processors. A bit ridiculous, don't you think?

sparkie · 13 days ago
> You can compile software for 20 year old CPUs and run that software on a modern CPU.

That's testing it on the new CPU, not the old one.

> You can run that software inside of qemu.

Sure you can. Go ahead. Why should the maintainer be expected to do that?

> A bit ridiculous, don't you think?

Not at all. It's ridiculous to expect a software developer to attach any significance to compatibility with obsolete platforms. I'm not saying we shouldn't try - x86 has good backward compatibility, and if it still works, that's good.

But if I implement an algorithm in AVX2, should I also be expected to implement a slower version of the same algorithm using SSE3 so that a 20 year old machine can run my software?

You can always run an old version of the software, and you can always do the work yourself to backport it. It's not my job as a software developer to be concerned about ancient hardware unless someone pays me specifically for that.

Would you expect Microsoft to ship Windows 12 with baseline x86_64 compatibility? I don't know whether it would be, but I'm pretty certain that if you tried running it on a 2005 CPU it would be pretty much non-functional, as performance would be dire. I doubt it would even boot anyway, given UEFI requirements that wouldn't be met on a machine running such a CPU.

sparkie commented on F-Droid build servers can't build modern Android apps due to outdated CPUs    · Posted by u/nativeforks
Arech · 14 days ago
In most cases (and this was the case with the Mozilla issue I referred to) it's only a matter of compiling code that already has all the support necessary. They are using some upstream component that works perfectly fine on my architecture. They just decided to drop it, because they could.
sparkie · 14 days ago
It's not only your own software, but also its dependencies. The link above is for glibc, and is specifically addressing incompatibility issues between different software. Unless you are going to compile your own glibc (for example, doing Linux From Scratch), you're going to depend on features shipped by someone else. In this case that means either baseline (plain x86_64, which only guarantees SSE2), or level A, which adds SSE4.1. It makes no sense for developers to keep maintaining software for 20 year old CPUs when they can't test it.
sparkie commented on F-Droid build servers can't build modern Android apps due to outdated CPUs    · Posted by u/nativeforks
Arech · 14 days ago
It's super annoying how SW vendors forcefully deprecate good-enough hardware.

Genuinely hate that, as Mozilla has deprived me of Firefox's translation feature because of this.

sparkie · 14 days ago
OTOH, if software wants to take advantage of modern features, it becomes hell to maintain if you have to have flags for every possible feature reported by CPUID. It's also unreasonable to expect maintainers to package dozens of builds, most of which are unlikely to be used.

There are some guidelines[1][2] for developers to follow for a reasonable set of features, where they only need to manage ~4 variants. In this proposal the lowest level of features includes SSE4.1, which covers nearly any x86_64 CPU from the past 15 years. In theory we could use a modern CPU to compile the 4 variants and ship them all in a FatELF, so we only need to distribute one set of binaries. This of course would be completely impractical if we had to support every possible CPU's distinct features - the binaries would be huge.
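
Something close to this already exists for shared libraries: glibc's hwcaps mechanism (glibc >= 2.33) lets one package carry several builds of a library, and the dynamic loader picks the best match at load time. Roughly (the library name is just illustrative):

    /usr/lib64/libfoo.so.1                          # baseline x86_64
    /usr/lib64/glibc-hwcaps/x86-64-v2/libfoo.so.1   # SSE4.2 and friends
    /usr/lib64/glibc-hwcaps/x86-64-v3/libfoo.so.1   # AVX2 and friends
    /usr/lib64/glibc-hwcaps/x86-64-v4/libfoo.so.1   # AVX-512 subset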

[1]:https://lists.llvm.org/pipermail/llvm-dev/2020-July/143289.h...

[2]:https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...

sparkie commented on UK government advises deleting emails to save water   gov.uk/government/news/na... · Posted by u/bifftastic
drcongo · 15 days ago
The UK government is a punchline.
sparkie · 14 days ago
They're the entire circus.
sparkie commented on Ask HN: How is it possible to get -0.0 in a sum?    · Posted by u/gus_massa
gus_massa · 22 days ago
So, if I use 0x00450000 I can swap -inf.0 and +inf.0 without modifying any other value? (I don't expect this swap operation to be useful, but I'm trying to understand the details.)

---

Thanks again, it's very interesting. I used assembler a long time ago, for the Z80 and 80?86, when the coprocessor was like 2 inches away :) . The problem is that Chez Scheme emits its own assembler, and supports many platforms. So after going down the rabbit hole, you get to asm-fpt https://github.com/search?q=repo%3Acisco%2FChezScheme+asm-fp... (expand and look for "define asm-fpt" near lines 1300-2000)

This is like 2 or 3 layers below the level I usually modify, so I'm not sure about the details and quirks in that layer. I'll link to this discussion on GitHub in case some of the maintainers want to add something like this. My particular case is a very small corner case and I'm not sure they'd like to add more complexity, but it's nice to have this info in case there are similar cases, because once you notice them, they start to appear everywhere.

You can tag yourself in case someone wants to ask more questions or just get updates, but I expect that I'll go in the opposite direction.

sparkie · 21 days ago
> So, if I use 0x00450000 I can swap -inf.0 and +inf.0 without modifying any other value?

Yeah, you got it. See test: https://ce.billsun.dev/#g:!((g:!((g:!((h:codeEditor,i:(filen...

If you are going to implement something like this you basically need a fallback for when it is not supported. In C you write:

    #ifdef __AVX512F__
        // optimized code
    #else
        // fallback code
    #endif
The optimized code will only be emitted if -mavx512f is passed to the compiler. This flag is implied by `-march=native` when the host compiling the code supports it, or by `-march=specificarch` when specificarch supports it. Otherwise the fallback code will be used.

If using the custom assembler you would need to test whether AVX512F is available by using the CPUID instruction.
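
The check itself is simple - here's a sketch in GNU C of what the assembler would need to replicate (leaf and bit numbers per Intel's documentation):

    #include <cpuid.h>
    #include <stdio.h>

    // CPUID leaf 7, subleaf 0: EBX bit 16 is the AVX-512F feature flag.
    static int has_avx512f(void) {
        unsigned eax, ebx, ecx, edx;
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            return 0;   // CPU doesn't report leaf 7 at all
        return (ebx >> 16) & 1;
    }

    int main(void) {
        puts(has_avx512f() ? "AVX-512F available" : "use fallback");
        return 0;
    }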

sparkie commented on Twenty Eighth International Obfuscated C Code Contest   ioccc.org/2024/index.html... · Posted by u/mdl_principle
maxmcd · 24 days ago
sparkie · 23 days ago
Indeed. Even when you see the hidden Unicode it's still not obvious what's going on.

The "salmon" string is mostly Unicode TAG characters[1], which contain the printed string, followed by 2 En Quads[2]. Not so obviously, `putchar` returns 0 when an En Quad is given as its argument (putchar truncates the code point 0x2000 to an unsigned char, i.e. 0). Then the defines aren't what they seem, and the body of main is never executed, because it's wrapped in a while loop whose condition is 0 (due to the putchar of an En Quad).
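
To see the putchar trick in isolation (a minimal sketch, not the contest entry itself):

    #include <stdio.h>

    int main(void) {
        // putchar converts its argument to unsigned char, so the En Quad
        // code point 0x2000 truncates to 0x00 and putchar returns 0.
        while (putchar(0x2000)) {
            // never executed: the condition is always 0
        }
        return 0;
    }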

[1]:https://unicodeplus.com/block/E0000

[2]:https://unicodeplus.com/U+2000

sparkie commented on Ask HN: How is it possible to get -0.0 in a sum?    · Posted by u/gus_massa
gus_massa · 23 days ago
Thanks! [Sorry for the delay.]

---

FYI: For more context, I'm trying to send a PR to Chez Scheme (and indirectly to Racket) https://github.com/cisco/ChezScheme/pull/959 to reduce expressions like

  (+ 1.0 (length L))  ;  ==>  (+ 1.0 (fixnum->flonum (length L)))
where the "fixnums" are small integers and "flonums" are doubles.

It's fine, unless you have the case

  (+ -0.0 (length L))  ;  =wrong=>  (+ -0.0 (fixnum->flonum (length L)))
because if the length is 0, it gets transformed into 0.0 instead of -0.0

There are a few corner cases, in particular because it's possible to have

   (+ 1.0 x (length L))
and I really want to avoid the runtime check of (length L) == 0 if possible.

So I took a look, asked there, and now your opinion confirms what I got so far. My C is not very good, so it's nice to have an example of how the rounding directions are used. Luckily Chez Scheme only uses the default rounding, so it's probably correct to cut a few corners. I'll keep looking for a few days in case there is some surprise.

sparkie · 23 days ago
I'm not sure you can avoid the check, but you can avoid a branch.

An AVX-512 extension has a `vfixupimm` instruction[1] which can adjust special floating point values. You could use this to adjust all zeroes to -0 but leave any non-zeroes untouched. It isn't very obvious how to use, though.

    vfixupimmsd dst, src, fixup, flag

 * The `flag` is for error reporting - we can set it to zero to ignore errors.

 * `dst` and `src` are floating point values - they can be the same register.

 * The instruction first checks `src` and turns any denormals into zero if the MXCSR.DAZ flag is set.

 * It then categorizes `src` as one of {QNAN, SNAN, ZERO, ONE, NEG_INF, POS_INF, NEG_VALUE, POS_VALUE}

 * `fixup` is an array of 8 nybbles (a 32-bit int) and is looked up based on the categorization of `src` {QNAN = 0 ... POS_VALUE = 7}

 * The values of each nybble denote which value to place into `dst`:

    0x0 : dst (unchanged)
    0x1 : src (with denormals as zero if MXCSR.DAZ is set)
    0x2 : QNaN(src)
    0x3 : QNAN_Indefinite
    0x4 : -INF
    0x5 : +INF
    0x6 : src < 0 ? -INF : +INF
    0x7 : -0
    0x8 : +0
    0x9 : -1
    0xA : +1
    0xB : 1/2
    0xC : 90.0
    0xD : PI/2
    0xE : MAX_FLOAT
    0xF : -MAX_FLOAT
You want to set the nybble for categorization ZERO (bits 11..8) to 0x7 (-0) in `fixup`. This would mean you want `fixup` to be equal to `0x00000700`. So usage would be:

    #include <stdint.h>
    #include <immintrin.h>

    // Nybble 2 (category ZERO) = 0x7 (-0); every other nybble is 0x0 (keep dst).
    static const __m128i zerofixup = { 0x700 };

    double fixnum_to_flonum(int64_t fixnum) {
        __m128d flonum = { (double)fixnum };
        // A zero result is rewritten to -0.0; anything else passes through.
        return _mm_cvtsd_f64(_mm_fixupimm_sd(flonum, flonum, zerofixup, 0));
    }
Which compiles to just 4 instructions, with no branches:

    .FIXUP:
        .long   1792                            # 0x700
        .long   0                               # 0x0
        .long   0                               # 0x0
        .long   0                               # 0x0
    fixnum_to_flonum:
        vcvtsi2sd       xmm0, xmm0, rdi
        vmovq           xmm0, xmm0
        vfixupimmsd     xmm0, xmm0, qword ptr [rip + .FIXUP], 0
        ret
It can be extended to operate on 8 int64->double at a time (__m512d) with little extra cost.
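
For example, a rough sketch of the 8-wide version (this assumes AVX512DQ for the packed int64 -> double conversion; the function name is just illustrative):

    #include <stdint.h>
    #include <immintrin.h>

    void fixnums_to_flonums(const int64_t in[8], double out[8]) {
        __m512i fixup = _mm512_set1_epi32(0x700);  // ZERO -> -0.0, per lane
        __m512d v = _mm512_cvtepi64_pd(_mm512_loadu_si512(in));
        v = _mm512_fixupimm_pd(v, v, fixup, 0);
        _mm512_storeu_pd(out, v);
    }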

You could maybe use this optimization where the instruction is available and just stick with a branch version otherwise, or figure out some other way to make it branchless - though I can't think of any other way which would be any faster than a branch.

[1]:https://www.intel.com/content/www/us/en/docs/intrinsics-guid...

sparkie commented on Ask HN: How is it possible to get -0.0 in a sum?    · Posted by u/gus_massa
sparkie · 25 days ago
It depends on the FP rounding mode. If rounding mode is FE_TOWARDZERO/FE_UPWARD/FE_TONEAREST then the case you gave is the only one I'm aware of. If rounding mode is FE_DOWNWARD (towards negative infinity) then other calculations that result in a zero will give a -0.0.

Here's an example of -1.0f + 1.0f resulting in -0.0: https://godbolt.org/z/5qvqsdh9P
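
The same thing as a standalone program (a minimal sketch; compile with -frounding-math so the compiler doesn't fold the sum at compile time):

    #include <fenv.h>
    #include <stdio.h>

    int main(void) {
        // Round toward negative infinity: an exactly-zero sum of two
        // nonzero operands takes a negative sign under this mode (IEEE 754).
        fesetround(FE_DOWNWARD);
        volatile float a = -1.0f, b = 1.0f;
        printf("%g\n", (double)(a + b));  // prints -0
        return 0;
    }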

sparkie commented on Why I write recursive descent parsers, despite their issues (2020)   utcc.utoronto.ca/~cks/spa... · Posted by u/blobcode
johnwbyrd · a month ago
I'm surprised, and a little disappointed, that no one in this thread has mentioned parsing expression grammars (https://en.wikipedia.org/wiki/Parsing_expression_grammar) which are a much more human-friendly form of grammar for real-world parsing tasks.
sparkie · a month ago
PEGs are closely related to recursive descent, and have some of the same problems.

A PEG is always unambiguous because it commits to the first alternative that matches - but whether that first alternative gives the parse you intended is not necessarily obvious. In practice these problems don't usually show up, so they're fine to work with.
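
A classic illustration, with a hypothetical two-rule choice:

    # Ordered choice: 'a' is tried first and always succeeds,
    # so the 'ab' alternative can never match here.
    A <- 'a' / 'ab'

Against the input "ab", A consumes only the 'a' and leaves 'b' unconsumed - the grammar silently means something different from the CFG with the same two productions, which accepts both strings.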

The advantage LR gives you is that it produces a parser where there are no ambiguities and every successful parse is the one intended. An LR grammar is a proof, as well as a means of producing a parser. A decent LR parser generator is like a simple proof assistant - it will find problems with your language before you do, so you can fix your syntax before putting it into production.

In "real-world" parsing tasks, as you put it, the problem with LR parser generators is that they're not best suited to parsing languages that have ambiguities, like C, C++ and many others. Some of the complaints about LR are really about the workarounds needed to parse these languages, where it's obviously the wrong tool for the job, because those languages aren't described by proper LR grammars.

But if you're designing a new language from scratch, surely it's better to not repeat those mistakes? If you carefully design your language to be parsed by an LR grammar then other developers who come to parse your language won't encounter those issues. They won't need lexical tie-ins and other nonsense that complicates the process.
