prestonbriggs commented on Donald Knuth’s 2023 Christmas Lecture: Making the Cells Dance   thenewstack.io/donald-knu... · Posted by u/howrude
velo_aprx · 2 years ago
Why is he wearing what looks to me like traditional Sami clothing?
prestonbriggs · 2 years ago
If you look at some of the other Christmas lectures, you'll see he's often (always?) wearing the same outfit. So, tradition!

prestonbriggs commented on A whirlwind tour of the LLVM optimizer   slideshare.net/nikita_ppv... · Posted by u/matt_d
NavinF · 2 years ago
> ask ChatGPT and trust its outputs

If anyone's wondering, its response to the GP comment is 100% correct as far as I can tell. Only problem is that it's missing technical details and has a hard time giving specific examples.

---

There are several reasons why programming languages might choose to rely on LLVM or other external compilers and optimization frameworks rather than implementing optimizations on their own:

Focus on language design: Many programming languages prioritize language design and functionality over compiler optimization. Implementing optimizations can be a complex and time-consuming process, and language designers may prefer to focus on other aspects of the language.

Difficulty of optimization: While some optimizations may seem simple in theory, they can be difficult to implement in practice, especially for complex programs. Additionally, different optimizations may interact with each other in unpredictable ways, making it challenging to optimize code effectively.

Targeting multiple platforms: Many programming languages are designed to be portable and run on multiple platforms. Relying on external compilers and optimization frameworks like LLVM can help ensure that code is optimized for each platform without requiring extensive platform-specific optimizations.

Access to expertise: Compiler optimization is a specialized field that requires a deep understanding of computer architecture, algorithms, and programming languages. By relying on external frameworks like LLVM, programming languages can leverage the expertise of experienced optimization engineers and researchers.

As for the implementation of the top 20 optimizations, the specific details will depend on the optimization and the programming language being used. However, many optimizations involve analyzing the code to identify opportunities for improvement and then applying transformations to the code to make it more efficient. Some optimizations, like constant propagation and loop unrolling, may involve relatively simple transformations, while others, like register allocation and instruction scheduling, can be more complex.
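
For a concrete taste of the "relatively simple" end of that spectrum, here is a minimal constant-folding sketch in Python over a toy expression tree; the tuple representation is invented for illustration and doesn't correspond to any real compiler's IR:

    import operator

    OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

    def fold(node):
        # Leaves are ints (constants) or strings (variable names).
        if not isinstance(node, tuple):
            return node
        op, lhs, rhs = node
        lhs, rhs = fold(lhs), fold(rhs)
        if isinstance(lhs, int) and isinstance(rhs, int):
            return OPS[op](lhs, rhs)   # both operands known: fold now
        return (op, lhs, rhs)          # otherwise rebuild the node

    print(fold(('+', ('*', 2, 3), 'x')))   # ('+', 6, 'x')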

--- snippets from "Use highly technical details and examples to answer this question.":

Memory reuse involves optimizing the use of memory to reduce the number of accesses and minimize cache misses. This can involve techniques such as loop-invariant code motion, which moves code that does not depend on loop variables outside of the loop, or loop fusion, which combines multiple loops that iterate over the same data into a single loop. However, these optimizations may also depend on the specific architecture being targeted, as the optimal memory access patterns may differ depending on the cache size and associativity.
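
As a hand-worked, source-level illustration of those two transformations (written out by hand, not actual compiler output; the function names are made up):

    import math

    def before(xs, k):
        scaled = []
        for x in xs:
            s = math.sqrt(k)       # loop-invariant: recomputed every iteration
            scaled.append(x * s)
        total = 0.0
        for y in scaled:           # second loop over the same data
            total += y
        return total

    def after(xs, k):
        s = math.sqrt(k)           # hoisted out of the loop (LICM)
        total = 0.0
        for x in xs:               # the two loops fused into a single pass
            total += x * s
        return total

The fused version touches each element once, which is exactly the cache-miss argument above.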

Writing a high-quality compiler that can handle all of the edge cases and produce optimized code for a wide range of architectures requires a significant amount of expertise and resources. Additionally, using an external compiler can provide a degree of portability, as the same code can be compiled for multiple architectures without requiring changes to the code itself.

--- snippet from "Why would constant propagation need a PhD level proficiency to implement? It’s basically running part of your code and no different from parsing.":

Constant propagation is indeed a relatively simple optimization technique in theory, and it does not require a PhD level proficiency to implement it. In fact, most compilers include some form of constant propagation as part of their optimization pipeline, and many of these implementations are relatively straightforward.

However, there are several factors that can make constant propagation more challenging to implement in practice. Some of these factors include:

Handling interactions with other optimizations: Constant propagation can interact with other optimizations in non-trivial ways, such as dead code elimination and loop unrolling. Ensuring that these optimizations work together correctly can require careful analysis and testing (see the sketch after this list).

Dealing with complex data structures: In some cases, constant propagation may need to deal with complex data structures, such as arrays or pointers. Ensuring that constant propagation works correctly in these cases can require more advanced techniques, such as dataflow analysis or points-to analysis.

Supporting language features: Different programming languages may have different rules for when constant propagation is allowed or required. For example, some languages may allow constants to be defined using complex expressions or function calls, while others may require constants to be literal values. Supporting these language features can require additional analysis and code generation logic.
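
To make the first of those points concrete, here is a hand-worked, source-level sketch of constant propagation interacting with dead code elimination; expensive_trace is a hypothetical helper, and the before/after pair is written by hand, not produced by a real pass:

    DEBUG = False

    def before(x):
        if DEBUG:                      # constant propagation proves this False
            return expensive_trace(x)  # hypothetical helper, never reached
        return x + 1

    def after(x):      # dead code elimination then deletes the untaken branch,
        return x + 1   # which may in turn expose further constants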

prestonbriggs · 2 years ago
Doing a good job of constant prop means doing a thorough job of implementing the best algorithm in the best paper. And to do that, you need to build a form of SSA. And to do that you need to compute dominators. And to do that, you need to build an appropriate form of the CFG. Don't need a PhD to do all those things (not inventing anything new here, just using other people's beautiful results), but it's still plenty of work to build a fast & thorough implementation, with lots of testing required.

Can't possibly be economical; but of course, the experience will teach you a lot.
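
For the flavor of the middle of that pipeline, here is a minimal iterative dominator computation in Python over a made-up diamond-shaped CFG; real implementations iterate in reverse postorder and use faster formulations (e.g. Cooper, Harvey & Kennedy's), but the fixed-point idea is the same:

    # A branches to B and C, which rejoin at D.
    succs = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
    preds = {n: [] for n in succs}
    for n, ss in succs.items():
        for s in ss:
            preds[s].append(n)

    entry = 'A'
    dom = {n: set(succs) for n in succs}   # initially "everything dominates"
    dom[entry] = {entry}

    changed = True
    while changed:
        changed = False
        for n in succs:
            if n == entry:
                continue
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n], changed = new, True

    print(sorted(dom['D']))   # ['A', 'D']: only the entry dominates the join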

prestonbriggs commented on An ultimate guide to memory palaces   metacognitive.me/memory-p... · Posted by u/scher
tpoacher · 4 years ago
As someone who, back when I played semi-professionally, preferred to memorise all my music, I would say that memorising pieces was "effortless" for me, and thus needed no technique, provided that when I first studied a piece the intent was specifically to memorise it, rather than, say, to sight-read or analyse it.

I always found it extremely odd that, if the memorisation intent was not there, no amount of exposure would lead to effective memorisation. E.g., I could sight-read an accompaniment 50 times and not remember most of it; once I decided to study specifically with the intent of memorising the piece, typically a handful of passes would be enough.

Scott Adams recently said something similar in one of his coffee podcasts. He ran an experiment on himself: he has a short "skit" he always starts his show with, and he made a bet with himself about how long it would take, if ever, to memorise that skit simply by reading it, with no intentional effort to memorise it.

So far it's been over a year afaik :)

prestonbriggs · 4 years ago
Yes, memorizing everything (all the music) helped hugely in practicing. I used to believe it was because I didn't have to read the music, and could focus on fingers, etc. But maybe the memorization effort (and it wasn't hard, as you mentioned) was enough to establish a framework, and the bulk of my practicing was hanging new details onto the framework.
prestonbriggs commented on An ultimate guide to memory palaces   metacognitive.me/memory-p... · Posted by u/scher
prestonbriggs · 4 years ago
Has anyone used this approach to memorize music? E.g., for playing an instrument.

I used to be able to remember plenty when I was young and practicing a lot. And certainly professionals can remember significant amounts (and not just the music, but performance details, fingerings, etc).

Seems like a different mechanism, relying less on visual and spatial associations, more on hearing. But perhaps it's similar in that you learn a structure and attach details to it.

Or, having developed a good memory for music, can we use it to help memorize random facts? Perhaps associating them with places in a song.

prestonbriggs commented on Parser generators vs. handwritten parsers: surveying major languages in 2021   notes.eatonphil.com/parse... · Posted by u/eatonphil
eatonphil · 4 years ago
That's right! But HN has a title length limit. See the (actual) blog post title and the blog post contents. :)
prestonbriggs · 4 years ago
Indeed. But consider the 2nd sentence, the conclusion, and the Twitter post.

On the other hand, I certainly agree with your idea about teaching/using handwritten recursive-descent parsers. Here's an old book that presents it pretty clearly, along with a nice approach for error handling. Maybe you can find it in a library. https://www.springer.com/gp/book/9783540082408
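
In that spirit, here is a minimal recursive-descent sketch in Python for the usual expression grammar, one function per nonterminal; the error handling is the bare minimum, not the book's recovery scheme:

    import re

    def tokenize(src):
        # Numbers and single-character operators/parens; whitespace is skipped.
        return re.findall(r'\d+|[()+\-*/]', src)

    def parse(src):
        toks, pos = tokenize(src), [0]

        def peek():
            return toks[pos[0]] if pos[0] < len(toks) else None

        def eat(expected=None):
            t = peek()
            if t is None or (expected is not None and t != expected):
                raise SyntaxError(f'expected {expected!r}, got {t!r}')
            pos[0] += 1
            return t

        def expr():                    # expr := term (('+'|'-') term)*
            v = term()
            while peek() in ('+', '-'):
                v = v + term() if eat() == '+' else v - term()
            return v

        def term():                    # term := factor (('*'|'/') factor)*
            v = factor()
            while peek() in ('*', '/'):
                v = v * factor() if eat() == '*' else v / factor()
            return v

        def factor():                  # factor := NUMBER | '(' expr ')'
            if peek() == '(':
                eat('(')
                v = expr()
                eat(')')
                return v
            if peek() is None or not peek().isdigit():
                raise SyntaxError(f'expected a number, got {peek()!r}')
            return int(eat())

        v = expr()
        if peek() is not None:
            raise SyntaxError(f'trailing input: {peek()!r}')
        return v

    print(parse('2*(3+4)-5'))   # 9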

prestonbriggs commented on Parser generators vs. handwritten parsers: surveying major languages in 2021   notes.eatonphil.com/parse... · Posted by u/eatonphil
prestonbriggs · 4 years ago
Compilers have parsers, not languages. Many languages have more than one compiler (e.g., C), and the various compilers may use different techniques for parsing.
prestonbriggs commented on Why most programming language performance comparisons are most likely wrong   nibblestew.blogspot.com/2... · Posted by u/Sindisil
prestonbriggs · 4 years ago
I argued for years in favor of keeping the minimum runtime of a number of samples; never found much support.
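
A minimal sketch of the idea: noise from scheduling, caches, and the like only ever adds time, so the minimum over several samples is arguably the best estimate of the true cost (Python's timeit documentation makes the same argument for taking the min of repeats):

    import time

    def min_time(fn, samples=7, reps=10_000):
        best = float('inf')
        for _ in range(samples):
            t0 = time.perf_counter()
            for _ in range(reps):
                fn()
            best = min(best, (time.perf_counter() - t0) / reps)
        return best

    print(f'{min_time(lambda: sum(range(100))) * 1e9:.0f} ns per call')
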
prestonbriggs commented on Conway's Game of Life on FPGA   k155la3.blog/2020/10/09/c... · Posted by u/petrohi
darzu · 4 years ago
I wonder how this would compare to a GPU accelerated GoL.
prestonbriggs · 4 years ago
Golly would crush them all.
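
For reference, here is the naive one-generation step that FPGA and GPU versions parallelize; Golly wins by changing the algorithm instead (Hashlife, which memoizes repeated regions of space-time). A set-of-live-cells sketch:

    from collections import Counter

    def step(live):
        # Count, for every cell, how many of its 8 neighbors are alive.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth on exactly 3 neighbors; survival on 2 or 3.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))   # the same glider, translated by (1, 1)
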
prestonbriggs commented on The Feynman Lectures on Physics Audio Collection   feynmanlectures.caltech.e... · Posted by u/sohkamyung
beagle3 · 4 years ago
My theory is something I call "The Feynman effect". Feynman has a talent for making the listener believe that they (the listener) understand everything at a very deep level. So it gives the feeling that he's an amazing teacher -- and if you don't actually try to apply that knowledge, you might never notice that you're wrong.

I realized that after reading his lecture on "the principle of least action" and coming away with the feeling that I deeply understood (among other things) the calculus of variations -- a field I didn't even know existed until I read that. So I tried to use it -- and realized that, other than recreating Feynman's example, I couldn't really use it for anything.

I shared the sentiment with others over lunch the next day (a couple of other undergrads and two graduate students), and they were all familiar with that feeling....

prestonbriggs · 4 years ago
Happens in other fields too. When I worked at Tera, I (and plenty of others) would talk to Burton Smith in the halls. Whatever the topic, he always made us feel included and smart. Then, after the conversation, as we moved apart, our IQ would drop and our understanding would fail.

Except maybe some of it stuck. Hope so.
