arc619 commented on Rust running on every GPU   rust-gpu.github.io/blog/2... · Posted by u/littlestymaar
omnicognate · 7 months ago
Zig can also compile to SPIR-V. Not sure about the others.

(And I haven't tried the SPIR-V compilation yet, just came across it yesterday.)

arc619 · 7 months ago
Nim too, as it can use Zig as a compiler.

There's also https://github.com/treeform/shady to compile Nim to GLSL.

Also, more generally, there's an LLVM-IR->SPIR-V compiler that you can use for any language that has an LLVM back end (Nim has nlvm, for example): https://github.com/KhronosGroup/SPIRV-LLVM-Translator

That's not to say this project isn't cool, though. As usual with Rust projects, it's a bit breathy with hype (eg "sophisticated conditional compilation patterns" for cfg(feature)), but it seems well developed, focused, and most importantly, well documented.

It also shows some positive signs of being dog-fooded, and the author(s) clearly intend to use it.

Unifying GPU back ends is a noble goal, and I wish the author(s) luck.

arc619 commented on Java Language Update – a look at where the language is going by Brian Goetz   youtube.com/watch?v=bKwzO... · Posted by u/belter
belter · 2 years ago
> Data wants to be pure, and code should be able to act on this freeform data independently, not architecturally chained to it.

If behaviors are decoupled from the data they operate on, you risk a procedural programming style lacking the benefits of encapsulation. This can increase the risk of data corruption and reduce data integrity...

arc619 · 2 years ago
Behaviours don't have to be decoupled from the data they operate on. If I write a procedure that takes a particular data type as a parameter, it's a form of coupling.

However, there's no need to fuse data and code together as a single "unit" conceptually as OOP does, where you must have particular data structures to use particular behaviours.

For example, let's say I have a "movement" process that adds a velocity type to a position type. This process is one line of code. I can also use the same position type independently for, say, UI.

To do this in an OOP style, you end up with an "Entity" superclass, a "Positional" subclass with X and Y, and a further subclass "Moves" with velocity data. These data types are now strongly coupled, and everything that uses them must know about this hierarchy.

UI in this case would likely have a "UIElement" superclass and different subclass structures with different couplings. Now UI needs a separate type to represent the same position data. If you want a UI element to track your entity, you'd need adapter code to "convert" the position data to the right container to be used for UI. More code, more complexity, less code sharing.

Alternatively, maybe I could add position data to "Entity" and base UI from the "Positional" type.

Now throw in a "Render" class. Does that have its own position data? Does it inherit from "Entity", or "Positional"? So how do we share the code for rendering a graphic with "Entity" and "UIElement"?

Thus begins the inevitable march to God objects. You want a banana, you get a gorilla holding a banana and the entire jungle.

Meanwhile, I could have just written a render procedure that takes a position type and graphic type, used it in both scenarios, and moved on.
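To make the contrast concrete, here's a minimal Python sketch of that procedural approach (the type and procedure names are mine, purely illustrative):

```python
from dataclasses import dataclass

# Plain data types: no behaviour attached, no hierarchy.
@dataclass
class Position:
    x: float
    y: float

@dataclass
class Velocity:
    dx: float
    dy: float

# Free procedures that act on whatever data they're given.
def move(pos: Position, vel: Velocity) -> Position:
    return Position(pos.x + vel.dx, pos.y + vel.dy)

def render(pos: Position, graphic: str) -> str:
    return f"{graphic} at ({pos.x}, {pos.y})"

# The same Position type and render procedure serve both a game
# entity and a UI element -- no superclass, no adapter code.
player = move(Position(0.0, 0.0), Velocity(1.0, 2.0))
print(render(player, "player.png"))                 # game entity
print(render(Position(10.0, 5.0), "button.png"))    # UI element
```

The "movement" process really is one line, and nothing about `Position` knows or cares whether it's being used for gameplay or UI.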

What do I gain by the OOP approach? I've increased the complexity and made everything worse. Are you now thinking about better hierarchies that could solve this particular issue? How would you future-proof it for unexpected changes? That thinking process becomes a huge burden, and it produces brittle code.

> you risk a procedural programming style lacking the benefits of encapsulation. This can increase the risk of data corruption and reduce data integrity...

You can use data encapsulation fine without taking on the mantle of OOP. I'm not sure why you think this would introduce data corruption/affect integrity.

There are plenty of compositional and/or functional patterns to use beyond both OOP and procedural programming, but I'd hardly consider using procedural programming a "risk". Badly written code is bad regardless of the pattern you use.

That's not to say procedural programming is all you need, but at the end of the day, the computer only sees procedural code. Wrapping things in objects doesn't make the code better, just more baroque.

arc619 commented on Java Language Update – a look at where the language is going by Brian Goetz   youtube.com/watch?v=bKwzO... · Posted by u/belter
belter · 2 years ago
Or more about boundaries: fewer monoliths, more microservices and container deployments, including cloud functions like AWS Lambda and Azure Functions. And "coupled by less strongly typed schemas" is more of a factual statement, but is it really a good thing?

Software Engineering will not progress into real Engineering until it starts building on the past instead of throwing away past lessons. OO was about many things, but particularly about code reuse. Is that also a bad thing?

arc619 · 2 years ago
Unfortunately, while OOP promises code reuse, it usually makes it worse by introducing boundaries as static architecture.

OOP's core tenet of "speciating" processing via inheritance in the hope of sharing subprocesses does precisely the opposite; defining "is-a" relationships, by definition, excludes sharing similar processing in a different context, and subclassing only makes it worse by further increasing specialisation. So we have adapters, factories, dependency injection, and so on to cope with the coupling of data and code. A big enough OOP system inevitably converges towards "God objects" where all potential states are superimposed.

On top of this, OOP requires you to carefully consider ontological categories to group your processing in the guise of "organising" your solution. Sometimes this is harder than actually solving the problem, as this static architecture has to somehow be both flexible yet predict potential future requirements without being overengineered. That's necessary because the cost to change OOP architectures is proportional to the amount of it you have.

Of course, these days most people say not to use deep inheritance stacks. So, what is OOP left with? Organising code in classes? Sounds good in theory, but again this is another artificial constraint that bakes present and future assumptions into the code. A simple parsing rule like UFCS does the job better IMHO without imposing structural assumptions.

Data wants to be pure, and code should be able to act on this free-form data independently, not architecturally chained to it.

Separating code and data lets you take advantage of compositional patterns much more easily, whilst also reducing structural coupling and thus allowing design flexibility going forward.

That's not to say we should throw out typing - quite the opposite, typing is important for data integrity. You can have strong typing without coupled relationships.
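As a rough sketch of "strong typing without coupled relationships" (my own illustrative names, not from the talk): structural typing, e.g. Python's `typing.Protocol`, lets a procedure demand a *shape* of data without the data types inheriting from anything shared.

```python
from dataclasses import dataclass
from typing import Protocol

# A structural type: anything with x and y qualifies.
# No base class, no "is-a" relationship required.
class HasPosition(Protocol):
    x: float
    y: float

@dataclass
class Entity:
    x: float
    y: float
    health: int

@dataclass
class UIElement:
    x: float
    y: float
    label: str

def distance_from_origin(p: HasPosition) -> float:
    return (p.x ** 2 + p.y ** 2) ** 0.5

# Both types share the procedure without sharing a hierarchy.
print(distance_from_origin(Entity(3.0, 4.0, health=100)))     # 5.0
print(distance_from_origin(UIElement(6.0, 8.0, label="ok")))  # 10.0
```

The type checker still enforces data integrity at the call site, but `Entity` and `UIElement` remain completely independent pieces of data.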

Personally, I think that grouping code and data types together as a "thing" is the issue.

arc619 commented on Rusty.hpp: A Borrow Checker and Memory Ownership System for C++20   github.com/Jaysmito101/ru... · Posted by u/stanulilic
j-james · 2 years ago
It will not emit warnings saying it did that. The static analysis is not very transparent. (If you can get the right incantation of flags to make it do so, let me know! The last time I tried, it was quite bugged.)

Writing an equivalent program is a bit weird because: 1) Nim does not distinguish between owned and borrowed types in the parameters (except wrt. lent which is bugged and only for optimizations), 2) Nim copies all structures smaller than $THRESHOLD regardless (the threshold is only slightly larger than a pointer but definitely includes all integer types - it's somewhere in the manual) and 3) similarly, not having a way to explicitly return borrows cuts out much of the complexity of lifetimes regardless, since it'll just fall back on reference counting. The TL;DR here though is no, unless I'm mistaken, Nim will fall back on reference counting here (were points 1 and 2 changed).

For clarity as to Nim's memory model: it can be thought of as ownership-optimized reference counting. It's basically the same model as Koka (a research language from Microsoft). If you want to learn more about it, because it is very neat and an exceptionally good tradeoff between performance/ease of use/determinism IMO, I would suggest reading the papers on Perseus as the Nim implementation is not very well-documented. (IIRC the main difference between Koka and Nim's implementation is that Nim frees at the end of scope while Koka frees at the point of last use.)

arc619 · 2 years ago
> It will not emit warnings saying it did that.

You're right. I was sure I'd read that it would announce when it copies over a sink, but now that I look for it I can't find it!

> The static analysis is not very transparent.

There is `--expandArc`, which shows the compile-time transformations performed, but that's a bit more in-depth.

arc619 commented on Rusty.hpp: A Borrow Checker and Memory Ownership System for C++20   github.com/Jaysmito101/ru... · Posted by u/stanulilic
steveklabnik · 2 years ago
Nim has a garbage collector.

That said, you're right on some level that it's truly semantics that matter, not syntax, but you need syntax to control the semantics.

arc619 · 2 years ago
Nim allocates on the stack unless you specifically mark a type as a reference, and it "does not use classical GC algorithms anymore but is based on destructors and move semantics": https://nim-lang.org/docs/destructors.html

Where Rust won't compile when a lifetime can't be determined, IIRC Nim's static analysis will make a copy (and tell you), so it's more of a performance optimisation than a correctness mechanism.

Regardless of the details and extent of the borrow checking, however, it shows that it's possible in principle to infer lifetimes without explicit annotation. So, perhaps C++ could support it.

As you say, it's the semantics of the syntax that matter. I'm not familiar with C++'s compiler internals though so it could be impractical.

arc619 commented on Rusty.hpp: A Borrow Checker and Memory Ownership System for C++20   github.com/Jaysmito101/ru... · Posted by u/stanulilic
steveklabnik · 2 years ago
C++ cannot because it does not have the necessary information present in its syntax. It’s really that simple. C++ could add such syntax, but outside of what Circle is doing, I’m not aware of any real proposal to add it.

Also, Google (more specifically, the Chrome folks) tried to make it work via templates, but found that it was not possible. There’s a limit to template magic, even.

arc619 · 2 years ago
Although it's not as extensive as Rust's lifetime management, Nim manages to infer lifetimes without specific syntax, so is it really a syntax issue? As you say, though, C++ template magic definitely has its limits.
arc619 commented on Leaving Rust gamedev after 3 years   loglog.games/blog/leaving... · Posted by u/darthdeus
pcwalton · 2 years ago
No, the original article said that you don't get parallelism from Bevy in practice:

> Unfortunately, after all the work that one has to put into ordering their systems it's not like there is going to be much left to parallelize. And in practice, what little one might gain from this will amount to parallelizing a purely data driven system that could've been done trivially with data parallelism using rayon.

It's not saying "yes, you get parallelism, but I don't need the performance"; it's claiming that in practice you don't get (system-level) parallelism at all. That's at odds with my experience.

arc619 · 2 years ago
To be fair, you've posted a toy example. Real games are often chains of dependent systems, and as complexity increases, clean threading opportunities decrease.

So, while yes it's nice in theory, in practice it often doesn't add as much performance as you'd expect.

arc619 commented on Arraymancer – Deep learning Nim library   github.com/mratsim/Arraym... · Posted by u/archargelod
FireInsight · 2 years ago
Nim as a language is a good place to go. The ecosystem is another story entirely. I suggest you search for the kinds of libraries you'd need and check their maintenance status, maybe do some example project to get a feel for the compiler and `nimble`.
arc619 · 2 years ago
Native Nim libs are definitely nicer, but being able to output C/C++/JS/LLVM-IR with nice FFI means you can access those ecosystems natively too. It's one reason the language has been so great for me, as I can write shared Nim code that uses both C and JS libs (even Node) in the same project.
arc619 commented on Fortran vs Python: The counter-intuitive rise of Python in scientific computing (2020)   fortran-lang.discourse.gr... · Posted by u/zaikunzhang
arc619 · 2 years ago
Personally, I think Python's success is down to the productivity of its pseudocode-like syntax letting you hack out prototypes quickly and easily. In turn, that makes building libraries more attractive, and these things build on each other. FORTRAN is very fast, but it has a less forgiving syntax, especially coming from Python.

In that regard, I'm surprised Nim hasn't taken off for scientific computing. It has a similar syntax to Python with good Python interop (eg Nimpy), but is competitive with FORTRAN in both performance and bit twiddling. I would have thought it'd be an easier move to Nim than to FORTRAN (or Rust/C/C++). Does anyone working in SciComp have any input on this - is it just a lack of exposure/PR, or something else?
