Joky commented on Nvidia Warp: A Python framework for high performance GPU simulation and graphics   github.com/NVIDIA/warp... · Posted by u/jarmitage
raytopia · a year ago
I love how many Python-to-native/GPU code projects there are now. It's nice to see a lot of competition in the space. An alternative to this one could be Taichi Lang [0], which can use your GPU through Vulkan so you don't have to own Nvidia hardware. Numba [1] is another alternative that's very popular (see the sketch after the links). I'm still waiting on a Python project that compiles to pure C (unlike Cython [2], whose output is hard to port) so you can write homebrew games or other embedded applications.

[0] https://www.taichi-lang.org/

[1] http://numba.pydata.org/

[2] https://cython.readthedocs.io/en/stable/
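For a concrete sense of what Numba looks like in use, here is a minimal sketch (illustrative function and data, not from the comment above): the @njit decorator compiles a plain Python loop to native machine code on the first call.

```python
import numpy as np
from numba import njit

@njit  # compiled to native machine code on first call
def dot(a, b):
    total = 0.0
    for i in range(a.shape[0]):
        total += a[i] * b[i]
    return total

x = np.arange(1_000_000, dtype=np.float64)
print(dot(x, x))
```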

Joky · a year ago
> I'm still waiting on a Python project that compiles to pure C

In case you haven't tried it yet, Pythran is an interesting one to play with: https://pythran.readthedocs.io

Also not compiling to C, but still to native code, there is Mojo: https://www.modular.com/max/mojo
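For reference, Pythran works from an export comment and compiles the module ahead of time to a native extension (via generated C++ rather than plain C). A minimal sketch with an illustrative function:

```python
#pythran export dot(float64[], float64[])
def dot(a, b):
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total
```

Running `pythran dot.py` then produces a native module that can be imported from regular Python in place of the original file.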

Joky commented on Reorient GitHub pull requests around changesets   mitchellh.com/writing/git... · Posted by u/jamesog
jen20 · 2 years ago
This would explain something else too, probably: a few years ago I did a call with some GH folks talking about the idea of making the commit message applied to a squash merge a part of the review itself.

Apparently this was very common feedback, and I know that at least five other people who were maintaining large-scale open source at the time gave the same feedback that week. It’s never gone anywhere though, and as a result I have to disable all workflows except “rebase and merge” for every repo…

Joky · 2 years ago
You can configure "squash&merge" to use the PR title and description for the commit message now, which makes it reviewable!
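For anyone looking for the knob, this lives in the repository settings under the default commit message for squash merging, and it can also be set through GitHub's REST API. A sketch, assuming the `squash_merge_commit_title`/`squash_merge_commit_message` fields of the "update a repository" endpoint and placeholder owner, repo, and token:

```python
import requests

# Placeholder OWNER/REPO and token; the field names assume GitHub's
# "update a repository" REST endpoint.
resp = requests.patch(
    "https://api.github.com/repos/OWNER/REPO",
    headers={
        "Authorization": "Bearer GITHUB_TOKEN",
        "Accept": "application/vnd.github+json",
    },
    json={
        "squash_merge_commit_title": "PR_TITLE",
        "squash_merge_commit_message": "PR_BODY",
    },
)
resp.raise_for_status()
```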
Joky commented on What I have changed my mind about in software development   henrikwarne.com/2023/09/1... · Posted by u/henrik_w
mvdtnz · 2 years ago
I don't necessarily agree. I think it's situational, but the dev community as a group has swung too far to the "only test public interfaces" side lately. It ignores some important realities in favour of ideological purity.

Sometimes there are well-defined processes for performing a task, and that task is performed in only one place in the system, so the details of the process can be kept class-private inside the only consumer. That doesn't mean the process should never be tested. If the task is cumbersome to set up or runs slowly, there is good reason to test the internal parts of the process directly: they can be exercised with dozens, hundreds or even thousands of permutations of input data cheaply and efficiently. Always relying on the large-scale tests to hit every combination of inputs for a well-understood subroutine can be inefficient.

You could make the argument that this well-understood process could be broken out into its own class/package/module and tested with its own public interface, but if there really is only one consumer then that's kind of a strange trade-off to make in many cases.

Joky · 2 years ago
> You could make the argument that this well-understood process could be broken out into its own class/package/module and tested with its own public interface, but if there really is only one consumer then that's kind of a strange trade-off to make in many cases.

That's how I develop in general: a "component" does not exist because it has multiple clients, but because it is a conceptual piece of logic that makes sense to document and test in isolation. It lets you define what the public API of this component is and what isn't. This is how software scales and stays maintainable over time IMO.
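To make that trade-off concrete, here is a small sketch with hypothetical names: even with a single consumer, a broken-out component gets its own public function that can be exercised cheaply with many permutations of inputs.

```python
# pricing.py -- hypothetical single-consumer component with its own public API
def discounted_total(prices, rate):
    """Apply a flat discount rate to a list of prices."""
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return sum(p * (1.0 - rate) for p in prices)


# test_pricing.py -- cheap coverage of many input permutations
import pytest

@pytest.mark.parametrize("prices", [[], [1.0], [2.5, 7.5], [0.0] * 100])
@pytest.mark.parametrize("rate", [0.0, 0.1, 0.5, 1.0])
def test_discounted_total(prices, rate):
    expected = sum(prices) * (1.0 - rate)
    assert discounted_total(prices, rate) == pytest.approx(expected)
```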

Joky commented on Amazon corporate workers plan walkout next week over return-to-office policies   cnn.com/2023/05/23/tech/a... · Posted by u/GiorgioG
kcplate · 2 years ago
It’s my opinion that people feel like they are more productive with WFH, but it’s a hard metric to actually measure. I get the sense that the freedom and flexibility might create a feeling of personal productivity, because they can accomplish many things (not just career-related) they couldn’t before WFH, but as for job productivity…I bet it’s probably a wash overall.

I say this as a remote worker who wouldn’t want to RTO, because I feel like I am more productive working from home. However, I wouldn’t bet the farm on that if there were a reliable way to measure it.

Joky · 2 years ago
There is something to be said about individual productivity (whatever that means in a very innovative/creative environment) vs. team/company output. Just today I saw this in my feed: https://flocrivello.com/changing-my-mind-on-remote-about-bei... And that's coming from someone who actually tried to build a business out of remote work (TeamFlow was the product).

I can be much more productive at home when it is about my individual contribution (me coding to deliver something unambiguous), but xxx individuals doing this does not necessarily add up to a great product: that does not scale.

Joky commented on Mojo – a new programming language for AI developers   modular.com/mojo... · Posted by u/lairv
int_19h · 2 years ago
Why is "no GC" an advantage for something aimed at such high-level tasks?
Joky · 2 years ago
They claim they are writing the actual kernel code (as in the implementation of a matmul) with it, and it was presented as a "system programming language": this goes far beyond "high-level tasks" it seems.
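For a sense of what "kernel code" means here, the textbook matmul such a kernel implements (and then heavily optimizes with tiling, vectorization, and parallelism) looks like this naive sketch, written in plain Python rather than Mojo:

```python
def matmul(a, b):
    """Naive O(n^3) matrix multiply: c[i][j] = sum over p of a[i][p] * b[p][j]."""
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                c[i][j] += a[i][p] * b[p][j]
    return c
```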
Joky commented on Building a Compiler with Multi-Level Intermediate Representation (MLIR) (2020) [pdf]   llvm.org/devmtg/2020-09/s... · Posted by u/informalo
mathisfun123 · 2 years ago
>As I understand it, MLIR is the new subsystem that the LLVM project is transitioning to in the long term, and LLVM IR is the old.

This is very much not a foregone conclusion, and many people in LLVM would boo vociferously at the idea (see last year's LLVM US meeting, where Johannes Doerfert actually argued the exact opposite - extending LLVM IR to do some/many of the things that MLIR does).

Joky · 2 years ago
It depends what you mean by "new subsystem" and "transitioning to": what seems like a given is that the notion of a "one size fits all" LLVM IR is behind us and the need for multi-level IR is embraced. LLVM IR is evolving to accommodate this better, within reason (that is: it stays organized around a pretty well-defined core instruction set and type system), and MLIR is the fully extensible framework beyond this. It remains to be seen whether anyone would have the appetite to port LLVM IR (and the LLVM framework) to be a dialect; I think there are challenges there.
Joky commented on OpenXLA Is Available Now   opensource.googleblog.com... · Posted by u/alphabetting
mathisfun123 · 2 years ago
basically all correct but

>You can then hypothetically lower (compiler terminology) it to a TensorRT MLIR dialect that then in turn runs on the Nvidia GPU.

there's no tensorrt dialect (there are nvgpu and nvvm dialects), nor would there be, as TensorRT is primarily a runtime (although arguably dialects like omp and spirv basically model runtime calls).

Joky · 2 years ago
TensorFlow is also a runtime, yet we model its dataflow graph (the input to the runtime) as a dialect; the same goes for ONNX. TensorRT isn't that different, actually.
Joky commented on OpenXLA Is Available Now   opensource.googleblog.com... · Posted by u/alphabetting
londons_explore · 2 years ago
OpenXLA is an optimizing compiler... Its main purpose is to optimize stuff...

So why do there seem to be no published metrics showing the performance of various common ML models on common hardware with OpenXLA vs other frameworks/compilers?

Joky · 2 years ago
All of Google's TPU stack is powered by the XLA compiler, so any MLPerf benchmark result from Google is an XLA result. Anything JAX is also built on top of XLA, so you can take JAX performance as a point of comparison as well if you'd like.
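To make the JAX connection concrete, here is a tiny sketch (illustrative shapes and function): anything wrapped in jax.jit is traced and then compiled and executed by XLA.

```python
import jax
import jax.numpy as jnp

@jax.jit  # traced once, then compiled and run by XLA
def predict(w, x):
    return jnp.tanh(x @ w)

x = jnp.ones((8, 128))
w = jnp.ones((128, 16))
print(predict(w, x).shape)  # (8, 16)
```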
Joky commented on The industry has been sizing kayak paddles wrong   paddlingmag.com/skills/bu... · Posted by u/troydavis
tormeh · 3 years ago
What I don't get is why one oar is rotated 90 degrees off the other one. It means whenever I use these double-sided paddles I have to rotate them ever so slightly whenever I make a new stroke. It's annoying and I don't understand why it's done. Does anyone know?
Joky · 3 years ago
The movement of paddling naturally rotates the shaft when you raise the fixed hand for a stroke on the other side; it's quite straightforward to figure out by sitting down and mimicking the movement.

During this movement, if the blades aren't feathered at all, you have to compensate with some bending of the wrist. The amount of shaft rotation induced depends on how much you raise the hand/elbow, and so is fairly dependent on your style of stroke. This is how I think feathering should be approached: how vertically do you intend to paddle? From there, the angle should follow to optimize for the least amount of wrist twisting.

In general, paddling very vertically will come with more angle between the blades. I practice slalom and used to have a 70-80 degree crossing, but I tend to paddle less vertically now (aging? Lack of training?) and I'm comfortable at 60 degrees.

Joky commented on ISO C became unusable for operating systems development   arxiv.org/abs/2201.07845... · Posted by u/pcw888
GoblinSlayer · 4 years ago
Joky · 4 years ago
But these are options; it's not a big deal to me that the compiler offers special options for special use cases. It's not clear to me if you are saying that the *default* for Clang and GCC differs: aren't they both using `-fno-wrapv` by default?

u/Joky

Karma: 850 · Cake day: October 17, 2014