Let's say you're building a game where you want a sphere to stick to whatever player you throw it at. How would you do that with a scene graph/OOP model? It'd be awkward, removing objects from one parent and adding them to another. Even more awkward if it's a complex object and you only want a part of that complex object to stick to the player. ECS + a constraint or physics system does a decent job (not perfect) of handling this in a relatively elegant and performant way.
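For illustration, here is a minimal sketch of that pattern in TypeScript; every name in it (AttachedTo, constraintSystem, the Map-based stores) is invented for the example rather than taken from any real engine:

    type Entity = number;
    type Vec3 = [number, number, number];

    // Component stores: plain maps from entity id to component data.
    const positions = new Map<Entity, Vec3>();
    interface AttachedTo { target: Entity; localOffset: Vec3 }
    const attachments = new Map<Entity, AttachedTo>();

    // "Sticking" the sphere to a player is just adding a component at
    // runtime; nothing gets reparented in a scene graph.
    function onSphereHitsPlayer(sphere: Entity, player: Entity) {
      const s = positions.get(sphere)!;
      const p = positions.get(player)!;
      attachments.set(sphere, {
        target: player,
        localOffset: [s[0] - p[0], s[1] - p[1], s[2] - p[2]],
      });
    }

    // A constraint system runs once per frame and keeps attached
    // entities glued to their targets.
    function constraintSystem() {
      for (const [entity, { target, localOffset }] of attachments) {
        const t = positions.get(target)!;
        positions.set(entity, [
          t[0] + localOffset[0],
          t[1] + localOffset[1],
          t[2] + localOffset[2],
        ]);
      }
    }

Unsticking it later is just attachments.delete(sphere), and the same trick works for a sub-part of a complex object, because that part is its own entity.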
I've used Three.js enough--built my portfolio[1] out of it, and then switched to Babylon when I realized how little I liked Three.js. For the record, I also dislike Babylon.
Could you go into more detail about what you mean by "anything non-trivial"? Is there a real example of something that would not be possible to create in, say, threejs?
<cube id="myCube" x="0" y="0" z="0" width="10px" height="10px" length="10px" onClick="someFoo()" onMouseOver="someOtherFoo()" onTouchStart="someBar()">
<video src="blah.mp4" rotateX="45deg" rotateY="30deg" autoplay="true" loop="true" onClick="document.querySelector('#myCube').rotateX(45);">
...
No reason the above can't be done. But many people will come up with lame excuses about why we shouldn't have nice things.
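As a rough sketch of how something like that could be wired up today: a hypothetical x-cube custom element backed by three.js. The tag name, the module-level scene, and the rotateX method are all invented for illustration, and onClick/onMouseOver handlers would additionally need a raycaster to map pointer events to meshes.

    import * as THREE from 'three';

    // Assume a scene created elsewhere; a fuller version would look up a
    // containing <x-scene> element instead of a module-level singleton.
    const scene = new THREE.Scene();

    class XCube extends HTMLElement {
      private mesh!: THREE.Mesh;

      connectedCallback() {
        const num = (name: string, fallback = 1) =>
          parseFloat(this.getAttribute(name) ?? String(fallback));
        this.mesh = new THREE.Mesh(
          new THREE.BoxGeometry(num('width'), num('height'), num('length')),
          new THREE.MeshStandardMaterial({ color: 'hotpink' }),
        );
        this.mesh.position.set(num('x', 0), num('y', 0), num('z', 0));
        scene.add(this.mesh);
      }

      disconnectedCallback() {
        scene.remove(this.mesh);
      }

      // Mirrors the imperative call in the markup above, e.g.
      // document.querySelector('#myCube').rotateX(45)
      rotateX(degrees: number) {
        this.mesh.rotation.x += THREE.MathUtils.degToRad(degrees);
      }
    }

    // Custom element names must contain a hyphen, hence x-cube rather than cube.
    customElements.define('x-cube', XCube);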
As for React, it merely expresses threejs, and adds something which three doesn't have: self-contained components that are now shareable. Something like this: https://twitter.com/0xca0a/status/1394697847556149250 just didn't exist on the web previously.
Have we gotten so bloated with framework on top of framework that even this is deemed an impossible load?
If you write straight-up WebGL code, 2000 cubes at 60fps should be a walk in the park for any modern PC.
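For reference, you don't even have to drop to raw WebGL: instancing gets you 2000 cubes in a single draw call. A sketch using three.js (InstancedMesh is real API; the scene setup is just boilerplate for the example):

    import * as THREE from 'three';

    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(
      60, window.innerWidth / window.innerHeight, 0.1, 1000,
    );
    camera.position.z = 60;

    // 2000 cubes, one geometry, one material, one draw call.
    const COUNT = 2000;
    const cubes = new THREE.InstancedMesh(
      new THREE.BoxGeometry(1, 1, 1),
      new THREE.MeshNormalMaterial(),
      COUNT,
    );
    const dummy = new THREE.Object3D();
    for (let i = 0; i < COUNT; i++) {
      dummy.position.set(
        (Math.random() - 0.5) * 80,
        (Math.random() - 0.5) * 80,
        (Math.random() - 0.5) * 80,
      );
      dummy.updateMatrix();
      cubes.setMatrixAt(i, dummy.matrix);
    }
    scene.add(cubes);

    renderer.setAnimationLoop(() => {
      cubes.rotation.y += 0.002;
      renderer.render(scene, camera);
    });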
If you have a coffee cup on a table in VR, is that coffee cup a child of the table? How do you move the coffee cup off the table and put it onto another table? Is it now a child of that other table? What about the coffee in the cup? Is that a child of the cup? How do you change properties of the coffee without necessarily accessing the table and the cup?
Developers working on 3D systems have developed much better paradigms than the DOM for dealing with this problem. An Entity-Component-System architecture with "constraints" is the current best solution. In that architecture, you would create a coffee cup "entity" with a mesh "component" and a "constraint" component constraining that coffee cup to the table (or, better yet, a mass component acted on by a physics system). Then you simply remove the constraint component when taking the cup off one table, and re-add it when placing the cup on the other.
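Sketched out with a hypothetical component store (nothing engine-specific), the cup, the coffee, and the tables are all peer entities, and "moving" something is just a component swap:

    type Entity = number;

    // A constraint pins one entity to another at an offset.
    interface Constraint { attachedTo: Entity; offset: [number, number, number] }
    const constraints = new Map<Entity, Constraint>();

    // Moving the cup between tables is not reparenting a scene-graph node;
    // it is replacing one constraint component with another.
    function moveCup(cup: Entity, toTable: Entity) {
      constraints.delete(cup);  // lift off the old table
      constraints.set(cup, { attachedTo: toTable, offset: [0, 0.75, 0] });
    }

    // The coffee is its own entity constrained to the cup, so its properties
    // can be changed without ever touching the cup or the table.
    const cup: Entity = 2;
    const coffee: Entity = 3;
    constraints.set(coffee, { attachedTo: cup, offset: [0, 0.05, 0] });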
Overall, I think web developers are in for some intense learning and paradigm shifts if 3D becomes the norm.
The underlying abstraction model of having a tree of components and re-rendering only the parts that have changed between renders doesn’t map to the hardware at all, meaning you’ll waste most of the HW performance just on maintaining the abstraction.
You’ll also get zero benefit from the third-party libraries - there’s nothing in them that helps with the stuff that matters, like minimizing the number of GPU state transitions or the number of GPU/CPU syncs.
It will be scenegraphs all over again, and the graphics industry ditched those long ago in favor of simpler models, for good reasons.
Long story short, the happy path in graphics programming is very narrow and fragile, and you typically want to structure your abstraction around it.
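To make that "narrow happy path" concrete: a typical renderer sorts its draw calls so the expensive state transitions happen as rarely as possible, which is exactly the kind of thing a generic diffing abstraction knows nothing about. An illustrative sketch, with all names invented:

    interface DrawCall {
      pipelineId: number;  // shader program / pipeline state (most expensive to switch)
      materialId: number;  // textures and uniforms
      meshId: number;      // vertex/index buffers
    }

    // Sort by the most expensive state first so each kind of bind happens
    // as few times as possible per frame.
    function sortForMinimalStateChanges(calls: DrawCall[]): DrawCall[] {
      return [...calls].sort(
        (a, b) =>
          a.pipelineId - b.pipelineId ||
          a.materialId - b.materialId ||
          a.meshId - b.meshId,
      );
    }

    function submit(calls: DrawCall[]) {
      let boundPipeline = -1;
      let boundMaterial = -1;
      for (const call of sortForMinimalStateChanges(calls)) {
        if (call.pipelineId !== boundPipeline) {
          // bindPipeline(call.pipelineId)  <- the costly transition
          boundPipeline = call.pipelineId;
          boundMaterial = -1;
        }
        if (call.materialId !== boundMaterial) {
          // bindMaterial(call.materialId)
          boundMaterial = call.materialId;
        }
        // draw(call.meshId)
      }
    }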
I recently played around with using https://reactpixi.org/ to build a simple ant farm simulation (https://meomix.github.io/antfarm/) and was left disappointed regarding rendering performance, but not surprised. It's a real challenge to get strong numbers when stacking a declarative wrapper on an imperative base. I found myself fighting React's reconciler for performance almost immediately.
I was talking to my coworkers about the issue and one of them suggested trying to use react-three-fiber and just force it to render 2D, but, as the author notes, the problem feels intractable with competing layers of abstraction.
I'm really excited to learn about this library. I feel that I was between a rock and a hard place with my web-first, declarative graphics tooling. I was pushing myself to learn Rust, to use Bevy, to have a well-supported, declarative framework, but I felt I would prototype quicker if I stuck with JavaScript. I considered A-Frame, but it's really not about 2D rendering at all with its VR-first approach.
There's really quite a desert of modern, active, declarative, web-first graphics rendering frameworks. Stoked for UseGPU to hit 1.0!
In Fiber, to animate is to use useFrame, which runs outside of React and bears no extra cost or overhead, nor does it conflict with reactive prop updates. useFrame allows any individual component to tie its mutations into the render loop.
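For anyone who hasn't seen it, the pattern looks like this (useFrame is the real react-three-fiber hook; the component itself is just an example):

    import { useRef } from 'react';
    import { useFrame } from '@react-three/fiber';
    import * as THREE from 'three';

    function SpinningBox() {
      const ref = useRef<THREE.Mesh>(null!);
      // Runs every frame, outside of React's render cycle; the delta keeps
      // the animation refresh-rate independent.
      useFrame((_, delta) => {
        ref.current.rotation.y += delta;
      });
      return (
        <mesh ref={ref}>
          <boxGeometry />
          <meshStandardMaterial color="orange" />
        </mesh>
      );
    }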
The reactivity debate is next to irrelevant in WebGL because it is a loop-driven system, not event-driven like the DOM. If you doubt that, observe Svelte/Threlte, which can update the view directly yet still relies on a useFrame modeled after Fiber's. That is because you need deltas (to be refresh-rate independent), flags (needsUpdate), and imperative calls (updateProjectionMatrix, etc.); it is never just a prop update.
That said, I don't know much about React-Pixi/Konva and so on; if these do not have a loop outlet, then yes, I agree. But the whole premise of that article just falls flat.