pklausler commented on Show HN: A zoomable, searchable archive of BYTE magazine   byte.tsundoku.io... · Posted by u/chromy
ghaff · 18 hours ago
Computing sort of got too big. Early on, Byte could be eclectic, with lots of different architectural discussions and hardware projects like Circuit Cellar. But from my perspective they got to a point where they were, for lack of a better word, pretty random. I was at an event where they were trying to reboot it sometime in the 2000s, but I'm not sure what the market was for a popular computing magazine trying to cover all the bases at that point.
pklausler · 18 hours ago
BYTE was best before Jerry Pournelle showed up.
pklausler commented on How to Think About GPUs   jax-ml.github.io/scaling-... · Posted by u/alphabetting
einpoklum · 7 days ago
> that can itself perform actual SIMD instructions?

Mostly, no; it can't really perform actual SIMD instructions itself. If you look at the SASS (the assembly language used on NVIDIA GPUs) I don't believe you'll see anything like that.

In high-level code, you do have expressions involving "vectorized types", which look like they would translate into SIMD instructions, but they 'serialize' at the single-thread level.

There are exceptions to this, though, like FP16 operations, which might work on two FP16 values packed into a 32-bit register, and other cases. But that is not the rule.

pklausler commented on How to Think About GPUs   jax-ml.github.io/scaling-... · Posted by u/alphabetting
einpoklum · 7 days ago
> It's not clear from the above what a "CUDA core" (singular) _is_

A CUDA core is basically a SIMD lane on an actual core on an NVIDIA GPU.

For a longer version of this answer: https://stackoverflow.com/a/48130362/1593077

pklausler · 7 days ago
So it's a "SIMD lane" that can itself perform actual SIMD instructions?

I think you want a metaphor that doesn't also depend on its literal meaning.

pklausler commented on I made a real-time C/C++/Rust build visualizer   danielchasehooper.com/pos... · Posted by u/dhooper
Night_Thastus · 13 days ago
I am extremely interested in this.

I am stuck in an environment with CMake, GCC and Unix Make (no clang, no ninja) and getting detailed information about WHY the build is taking so long is nearly impossible.

It's also a bit of a messy build, with steps like copying a bunch of files from the source tree into the build folder. Multiple languages (C, C++, Fortran, Python), custom CMake steps, etc.

If this tool can handle that kind of mess, I'll be very interested to see what I can learn.

pklausler · 13 days ago
strace might help, if you have it.
pklausler commented on Why tail-recursive functions are loops   kmicinski.com/functional-... · Posted by u/speckx
eru · 13 days ago
> If you iterate by const reference over a const container, and you make every assign-once variable in the loop body const (or in Rust: just not mut), is there any advantage to tail recursion except someone on the internet said it's the proper functional style?

Function calls can express all kinds of useful and interesting control flow. They are so useful that even people who love imperative programming use functions in their language. (Early and primitive imperative programming languages, like very early Fortran and underpowered dialects of BASIC, didn't have user-defined functions.)

So we've established that you want functions in your language anyway. And once you properly optimise function calls, which is what's known as tail call optimisation, you notice that you don't need special-purpose loops (nor goto) built into your language. You can define these constructs as syntactic sugar over function calls, just like you can define other combinators like map or filter or tree traversals.
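
A rough Python sketch of that correspondence (Python doesn't actually eliminate tail calls, so this is only to make the shape of the rewrite visible; the function names are made up for illustration):

    def total_loop(xs):
        acc = 0
        for x in xs:
            acc += x
        return acc

    def total_tail(xs, acc=0):
        # The recursive call is in tail position: nothing happens after it,
        # so a TCO-capable compiler could reuse the current stack frame and
        # turn this into exactly the loop above.
        if not xs:
            return acc
        return total_tail(xs[1:], acc + xs[0])

    assert total_loop([1, 2, 3, 4]) == total_tail([1, 2, 3, 4]) == 10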

See how in the bad old days, Go had a handful of generic functions and data structures built-in (like arrays), but didn't allow users to define their own. But once you add the ability for users to define their own, you can remove the special case handling.

And that's also one thing C++ does well: as much as possible, it tries to put the user of the language on the same footing as the designers.

When 'map' or 'filter' is the best construct to express what you want to say, you should use it. When a 'for'-loop is the best construct, you should use that. (And that for-loop could be defined under the hood as syntactic sugar on top of function calls.) The scenario you concocted is exactly one where a foreach-loop shines.

Though to be a bit contrarian: depending on what your loop does, it might be useful to pick an ever more constrained tool. Eg if all you do is run one action for each item, with no early return, and you are not constructing a value, you can use something like Rust's 'foreach' (https://docs.rs/foreach/latest/foreach/). If you transform a container into another container (and there's no early return etc.), you can use 'map'. Etc.

The idea is to show the reader as much as possible what to expect without forcing them to dive deep into the logic. The transformation in a 'map' might be very complicated, but you know the shape of the result immediately from just spying that it's a 'map'.

When you see the for-loop version of the above, you have to wade through the (complicated) body of the loop just to convince yourself that there's no early return and that we are producing a new container with exactly the same shape as the input container.
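
A tiny Python illustration of that contrast (the numbers and names here are made up, just to show the two shapes side by side):

    prices = [3.50, 12.00, 0.99]

    # With 'map', the shape of the result is visible up front: same length
    # as the input, one output per input, no early return possible.
    with_tax = list(map(lambda p: round(p * 1.08, 2), prices))

    # The loop computes the same thing, but you have to read the whole body
    # to rule out breaks, skips, or a differently shaped result.
    with_tax_loop = []
    for p in prices:
        with_tax_loop.append(round(p * 1.08, 2))

    assert with_tax == with_tax_loop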

> I think functional programming contains some great ideas to keep state under control, but there is no reason to ape certain superficial aspects. E.g. the praise of currying in Haskell tutorials really grinds my gears, I think it's a "clever" but not insightful idea and it really weirds function signatures.

Yes, that's mixing up two separate things. Haskell doesn't really need currying. All you need for Haskell to work as a language is a convenient way to do partial application. So if Haskell (like OCaml) used tuples as the standard way to pass multiple arguments, and you had a syntactically convenient way to transform the function (a, b, c) -> d into (b, c) -> d by fixing the first argument, that would get you virtually all of the benefits Haskell gets from pervasive currying, without the weird function signatures.

In practice, people tend to get used to the weird function signatures pretty quickly, so there's not much pressure to change the system.
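
For a concrete feel of "partial application without currying" in Python terms (clamp is a made-up example function; functools.partial plays the role of "fix the first argument"):

    from functools import partial

    # An uncurried three-argument function, like the (a, b, c) -> d above.
    def clamp(lo, hi, x):
        return max(lo, min(hi, x))

    # Partial application fixes the first argument without any currying,
    # giving roughly (b, c) -> d.
    clamp_nonneg = partial(clamp, 0.0)

    assert clamp_nonneg(1.0, 2.5) == 1.0
    assert clamp_nonneg(1.0, -3.0) == 0.0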

pklausler · 13 days ago
Even the first release of FORTRAN had statement functions.
pklausler commented on Myths About Floating-Point Numbers (2021)   asawicki.info/news_1741_m... · Posted by u/Bogdanp
ForceBru · 14 days ago
Sure, but it makes sense, doesn't it? Even `inf-inf == NaN` and `inf/inf == NaN`, which is true in calculus: limits like these are undefined, unless you use l'Hôpital's rule or something. (I know NaN isn't equal to itself, it's just for illustration purposes) But then again, you usually don't want these popping up in your code.
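
A quick check in Python, for anyone who wants to poke at it:

    import math

    inf = float("inf")

    print(inf - inf)                   # nan: an indeterminate limit form
    print(inf / inf)                   # nan: likewise indeterminate
    print((inf - inf) == (inf - inf))  # False: NaN compares unequal to itself
    print(math.isnan(inf - inf))       # True: how you actually test for it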
pklausler · 14 days ago
In practice, though, I can't recall any HPC codes that want to use IEEE-754 infinities as valid data.
pklausler commented on Myths About Floating-Point Numbers (2021)   asawicki.info/news_1741_m... · Posted by u/Bogdanp
ivankra · 14 days ago
You can put stuff into the sign bit too; that makes 53. Yeah, the lower 52 bits can't all be zero (that'd be ±Inf), but the other 2^53 - 2 values are all yours to use.
pklausler · 14 days ago
It's possible for the sign bit of a NaN to be changed by a "non-arithmetic" operation that doesn't trap on the NaN, so don't put anything precious in there.
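
For example, in Python (the bits() helper is just for illustration; copysign and negation are sign-bit-only operations that never signal on a NaN):

    import math, struct

    def bits(x):
        return struct.unpack("<Q", struct.pack("<d", x))[0]

    nan = float("nan")
    flipped = math.copysign(nan, -1.0)   # non-arithmetic: no trap, sign bit flips

    print(hex(bits(nan)))       # typically 0x7ff8000000000000
    print(hex(bits(flipped)))   # 0xfff8000000000000: whatever you stored in the
                                # sign bit is gone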
pklausler commented on Myths About Floating-Point Numbers (2021)   asawicki.info/news_1741_m... · Posted by u/Bogdanp
ForceBru · 14 days ago
The article's point 3 says that this is a myth. Indeed, the _limit_ of `1/x` as `x` approaches zero from the right is positive infinity. What's more, division by _negative zero_ (which, perhaps surprisingly, is a thing) yields negative infinity, which is also the value of the corresponding limit. If you divide a finite float by infinity, you get zero, because `lim_{x\to\infty} c/x=0`. In many cases you can treat division by zero or infinity as the appropriate limit.
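
A quick numpy check of those limit-like cases (plain Python raises on float division by zero, as noted elsewhere in this thread):

    import numpy as np

    with np.errstate(divide="ignore"):             # silence the divide-by-zero warning
        print(np.float64(1.0) / np.float64(0.0))   # inf
        print(np.float64(1.0) / np.float64(-0.0))  # -inf
    print(np.float64(1.0) / np.inf)                # 0.0, matching lim c/x as x -> inf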
pklausler · 14 days ago
I am allowed to disagree with the article.
pklausler commented on Myths About Floating-Point Numbers (2021)   asawicki.info/news_1741_m... · Posted by u/Bogdanp
ivankra · 14 days ago
My favorite trick: NaN boxing. NaNs aren't just for errors, but also for smuggling other data inside. For a double, you have a whopping 53 bits of payload, enough to cram in a pointer and maybe a type tag, and many JavaScript engines do exactly that (since JS numbers are doubles, after all).

https://wingolog.org/archives/2011/05/18/value-representatio...

https://piotrduperas.com/posts/nan-boxing

pklausler · 14 days ago
52 bits of payload, and at least one bit must be set.
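
A bare-bones Python sketch of the boxing itself (keeping the quiet bit set is one easy way to satisfy that "at least one bit" rule, which leaves 51 payload bits here; box/unbox are made-up names):

    import struct

    QNAN = 0x7ff8 << 48     # exponent all ones, quiet bit set: definitely a NaN

    def box(payload):
        assert 0 <= payload < (1 << 51)
        return struct.unpack("<d", struct.pack("<Q", QNAN | payload))[0]

    def unbox(x):
        return struct.unpack("<Q", struct.pack("<d", x))[0] & ((1 << 51) - 1)

    boxed = box(0xDEADBEEF)
    print(boxed)                # nan: still a NaN as far as the FP machinery cares
    print(hex(unbox(boxed)))    # 0xdeadbeef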
pklausler commented on Myths About Floating-Point Numbers (2021)   asawicki.info/news_1741_m... · Posted by u/Bogdanp
kccqzy · 14 days ago
This is also my favorite thing about floating point numbers. Unfortunately languages like Python try to be smart and prevent me from doing it. Compare:

    >>> 1.0/0.0
    ZeroDivisionError
    >>> np.float64(1)/np.float64(0)
    inf
I'm so used to writing such zero division in other languages like C/C++ that this Python quirk still trips me up.

pklausler · 14 days ago
Division by zero is an error and it should be treated as such. "Infinity" is an error indication from overflow and division by zero, nothing more.
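
For what it's worth, numpy can be told to treat it that way too (a small sketch using np.errstate):

    import numpy as np

    with np.errstate(divide="raise"):
        try:
            np.float64(1.0) / np.float64(0.0)
        except FloatingPointError as e:
            print("trapped:", e)   # e.g. "divide by zero encountered ..."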

u/pklausler

Karma: 3408 · Cake day: December 30, 2015
About
Sperry-Univac 1981-83, Cray Research 1983-89, Cray Computer 1989-91, Cray/SGI/Cray 1991-2008, Google Platforms 2008-14, NVIDIA 2014-present. Started the flang-new Fortran compiler and wrote the plurality of it.