Readit News
PeCaN commented on A Ray of Hope: Array Programming for the 21st Century [video]   youtube.com/watch?v=x1FoT... · Posted by u/akkartik
Gravityloss · 5 years ago
I understand that. I still don't want to spend the effort to learn APL.

It's like when digital cameras came along. Many users already knew how to use film cameras, so digital cameras were made to work mostly like film cameras, even though the digital medium would have enabled a very different, much better camera straight out of the box. The market had invested so much time in learning how to work with film that you had to do it that way. Path dependency is not just about rigid thinking; it's about using what you have, because that saves a lot of resources.

Regarding .map being all wrong: in Ruby it's not a property of arrays, it's a method on enumerables. Array is one kind of enumerable, but map also works on hashmaps etc. https://ruby-doc.org/core-2.6.5/Enumerable.html So it's not that non-general. It is noisy (and weird with the pipes) precisely because it's general.

PeCaN · 5 years ago
To be honest, I don't really see people who don't want to learn APL being that interested in putting in the effort to completely upend how they think about programming and algorithms in order to use other array languages, regardless of syntax. (After all, this is by far the hardest part of learning APL; the symbols are easy enough, and easy to look up anyway.)

map is general in kind of the wrong way. You could, after all, add a #map method to Object for scalars, make a Matrix class that also implements it, and then just call map everywhere. But you still run into the problem, mentioned in the video, that it doesn't easily generalize to x + y where both x and y are arrays; you have to use zip or map2 or something (and you still have to figure out how to do vector + matrix). Yes, you can kind of do explicit "array programming" in Ruby if for some reason you're really compelled to, but it will look awful. And that's just what array languages do for you implicitly. As a paradigm there's a bit more to it than "just call map everywhere": there are still all the functions for expressing algorithms as computations on arrays.
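
To make that concrete, here's a rough Julia sketch of the contrast (variable names are just illustrative):

    x = [1, 2, 3]
    m = [1 2 3; 4 5 6]

    # the explicit style: fine for two vectors...
    map(+, x, [10, 20, 30])                      # => [11, 22, 33]
    # ...but vector + matrix makes you spell out the iteration yourself
    mapslices(row -> map(+, row, x), m; dims=2)  # => [2 4 6; 5 7 9]

    # roughly what an array language does implicitly (Julia's broadcast comes close)
    x' .+ m                                      # => [2 4 6; 5 7 9]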

PeCaN commented on A Ray of Hope: Array Programming for the 21st Century [video]   youtube.com/watch?v=x1FoT... · Posted by u/akkartik
Gravityloss · 5 years ago
Can it be done in some easier syntax than APL?

Easier means less effort to learn for someone coming from a mainstream language like Java.

The idea is not limited to APL. I don't like crafting for loops or maintaining indexes. Fortran has something similar. In Matlab, many operators work in an intuitive way on vectors and matrices, though it breaks down quite quickly if you try to do something more complex. This somewhat extends to Julia. Ruby also has .map and .each.

Julia:

    x=10
    v=[1 2 3 4]
    x.*v
    #1×4 Array{Int64,2}:
    # 10  20  30  40

PeCaN · 5 years ago
If you watch the video it looks like their proposed syntax is not APL-like but closer to mainstream languages.

I'm honestly not sure whether this is a good thing. You asked for an "easier" syntax than APL, but APL is actually a very easy syntax for working with arrays. That's a significant part of APL's advantage: it makes it very easy to come up with, talk about, and maintain array algorithms.

Matlab, Julia, and other languages aimed at scientific computing have some array-language traits, but they lack a lot of the functions that make APL more generally applicable. And .map is all wrong: it's extra noise, and it doesn't generalize down to scalars or up to matrices. The defining feature of array languages is that operations are implicitly polymorphic over the rank of the input.
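
Julia's broadcasting dots give a flavor of what I mean, although in a real array language the lifting is implicit and you don't write the dots at all (values here are just illustrative):

    v = [1, 2, 3]
    m = [1 2 3; 4 5 6]

    10 .+ v    # scalar + vector => [11, 12, 13]
    10 .+ m    # scalar + matrix => [11 12 13; 14 15 16]
    v' .+ m    # row vector against each row => [2 4 6; 5 7 9]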

PeCaN commented on A Ray of Hope: Array Programming for the 21st Century [video]   youtube.com/watch?v=x1FoT... · Posted by u/akkartik
PeCaN · 5 years ago
I've been working on something like this on and off for the past 4 years or so, although with something more like generators than streams.

I think it's a very, very promising idea (I admit to being heavily biased towards anything APL-influenced), although it's surprisingly difficult to get right. Gilad Bracha is obviously way smarter than me, so I'm definitely curious where he goes with this.

One additional idea that I keep trying to make work is integrating variations of (constraint) logic programming and treating the solutions to a predicate as a generator or stream that operations can be lifted to rank-polymorphically. As a simple example, a range function could be defined and used like this (imaginary illustrative syntax):

    range(n,m,x) :- n <= x, x <= m
    
    primesUpto(n) = range(2,n,r),               # create a generator containing all solutions wrt r
      mask(not(contains(outerProduct(*, r, r), r)), r)  # as in the video
    
I've never really gotten this to work nicely; it always feels like there's a sort of separation between the logic world and the array world. However, it also feels incredibly powerful, especially as part of some sort of database, so I keep returning to it even though I'm not really sure it goes anywhere super useful in the end.
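
The array half of the example stands on its own, though; here's a rough Julia version of the outer-product prime filter from the video (the function name is mine, purely for illustration):

    # keep the x in 2:n that never appear in the outer product of 2:n with itself
    function primes_upto(n)
        r = 2:n
        composites = vec(r .* r')   # outer product, flattened
        [x for x in r if x ∉ composites]
    end

    primes_upto(20)   # => [2, 3, 5, 7, 11, 13, 17, 19]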

PeCaN commented on Servo’s new home   blog.servo.org/2020/11/17... · Posted by u/g0xA52A2A
trevyn · 5 years ago
Is there anything specific GCC does for performance that Clang/LLVM could adopt? How is this race expected to evolve over time?
PeCaN · 5 years ago
it's not that gcc does anything specific so much as that LLVM is just really, really inefficient. they don't track compilation time at all, so it's easy for releases to regress; half the stuff in there is academics implementing their PhD theses, aiming for algorithmic accuracy with little regard for efficiency; and LLVM's design itself is somewhat inefficient (multiple IRs, lots and lots of pointers in the IR representation, etc)

that said, all of this makes LLVM an excellent testbed, but compilation time will keep getting slower every release until they start caring about it

PeCaN commented on Servo’s new home   blog.servo.org/2020/11/17... · Posted by u/g0xA52A2A
pmarin · 5 years ago
Not answering your question, but in my experience reading the assembly output of the two compilers, Clang's unoptimized output is atrocious while GCC's is closer to what a human could have written. Clang seems to have to do more in its optimization passes to achieve similar results. Clang was usually the faster compiler at -O1, but I don't think that's true anymore.
PeCaN · 5 years ago
that's just because gcc has certain optimization passes that can't be disabled

(that said gcc -O0 is still absolutely nothing like what a human would write)

PeCaN commented on Apple Silicon M1 Emulating x86 Is Still Faster Than Every Other Mac   macrumors.com/2020/11/15/... · Posted by u/syrusakbary
fooker · 5 years ago
Two things afaik:

* Decoding an x86 instruction takes a ridiculous amount of resources. Can't be optimized away because of backwards compatibility.

* Limited to small pages without jumping through weird OS hoops.

PeCaN · 5 years ago
- it takes some die space, sure, but no x86 processors are actually limited by instruction decoding (except, iirc, the first-generation xeon phi in some cases)

- huge pages don't exactly require weird OS hoops, although i agree the 4KB→2MB→1GB page-size steps are inconvenient

PeCaN commented on XuanTie C906 based Allwinner RISC-V processor to power $12 Linux SBC's   cnx-software.com/2020/11/... · Posted by u/todsacerdoti
throwaway4good · 5 years ago
Can you give an example of a chip with a hidden backdoor?
PeCaN · 5 years ago
All the Intel CPUs with RDRAND, although I guess that's not exactly hidden anymore.
PeCaN commented on A First Look at the JIT   blog.erlang.org/a-first-l... · Posted by u/lelf
PeCaN · 5 years ago
I sort of wonder if this approach to JITing is worth it over just writing a faster interpreter. It's basically what V8's baseline JIT used to be, and they switched to an interpreter without much of a performance hit (and there are still a lot of potential optimizations for their interpreter). LuaJIT 1's compiler was similar, although somewhat more elaborate, and yet it was still routinely beaten by LuaJIT 2's interpreter (to be fair, LuaJIT 2's interpreter is an insane feat of engineering).
PeCaN commented on An ex-ARM engineer critiques RISC-V   gist.github.com/erincande... · Posted by u/ducktective
tom_mellior · 5 years ago
> Executing more instructions for a (really) common operation doesn't mean an ISA is somehow better designed or "more RISC", it means it executes more instructions.

True. But as bonzini points out (or rather, hints at) in https://news.ycombinator.com/item?id=24958644, the really common operation for array indexing is inside a counted loop, and there the compiler will optimize the address computation and not shift-and-add on every iteration.

See https://gcc.godbolt.org/z/x5Mr66 for an example:

    double sum_array(const double *p, int n) {  /* enclosing function assumed */
        double sum = 0;
        for (int i = 0; i < n; i++)
            sum += p[i];
        return sum;
    }
compiles to a four-instruction loop on x86-64 (if you convince GCC not to unroll the loop):

    .L3:
        addsd   xmm0, QWORD PTR [rdi]
        add     rdi, 8
        cmp     rax, rdi
        jne     .L3
and also to a four-instruction loop on RISC-V:

    .L3:
        fld     fa5,0(a0)
        addi    a0,a0,8
        fadd.d  fa0,fa0,fa5
        bne     a5,a0,.L3
This isn't a complete refutation of the author's point, but it does mitigate the impact somewhat.

PeCaN · 5 years ago
That's fair. It's definitely not a killer (or even, in my opinion, the worst thing about RISC-V), just another one of those random little omissions where I'm not really sure why RISC-V leaves the instruction out.
PeCaN commented on An ex-ARM engineer critiques RISC-V   gist.github.com/erincande... · Posted by u/ducktective
fulafel · 5 years ago
We were talking ISAs so let's focus on that.

The quantifiability comes from measuring the results when you give compilers new instructions, versus the implementation complexity you pay (time, money, and the baggage of supporting the insn forever). The upsides and downsides come in different units, so it's still tricky.

Lots of instructions can be proposed with impressive qualitative speeches convincing you how dandy they are, but in the end it comes down to the real-world speedup yield versus the price you pay in complexity and the resulting second-order effects.

(In rarer cases instructions might be added not for performance but to reduce complexity and cost; that's where qualitative arguments still have a place.)

It's fine if we don't have the evidence in this thread - I was just asking on the off chance that someone can point to a reference.

PeCaN · 5 years ago
It's not like someone is proposing some crazy new instruction to do vector math on binary coded decimals while also calculating CRC32 values as a byproduct. It's conditional move. Every ISA I can think of has that.
