Readit News
elabajaba commented on Unsafe and Unpredictable: My Volvo EX90 Experience   myvolvoex90.com/... · Posted by u/prova_modena
everybodyknows · a month ago
Was told by a mechanic a few months back that continuously-variable transmissions are standard in gas cars now, but have reliability problems. Old-tech automatics can (could?) still be had from Toyota and Mazda.
elabajaba · a month ago
E-CVTs are extremely reliable and are different from CVTs (a CVT uses a belt running between two cone-shaped pulleys; an e-CVT is just a single planetary gear set), but a lot of car guys and even some mechanics don't realize they're completely different.
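If you want to see why a single planetary set acts like a continuously variable ratio, here's a rough sketch of the kinematics (tooth counts are made up; engine on the carrier, generator on the sun, wheels/motor on the ring is the Toyota-style layout):

    // Willis equation for a planetary gear set:
    //   sun_rpm * S + ring_rpm * R = carrier_rpm * (S + R)
    // S and R are the sun and ring tooth counts. With the engine on the
    // carrier and the wheels on the ring, solving for the sun gives the
    // generator (MG1) speed for any engine/wheel combination, which is how
    // the effective ratio varies continuously with no belt or cones.
    fn generator_rpm(engine_rpm: f64, ring_rpm: f64, sun_teeth: f64, ring_teeth: f64) -> f64 {
        (engine_rpm * (sun_teeth + ring_teeth) - ring_rpm * ring_teeth) / sun_teeth
    }

    fn main() {
        let (s, r) = (30.0, 78.0); // hypothetical tooth counts
        // Hold the engine at an efficient 2000 rpm while the ring does 1500 rpm:
        println!("MG1 spins at {:.0} rpm", generator_rpm(2000.0, 1500.0, s, r));
    }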
elabajaba commented on Alberta separatism push roils Canada   nytimes.com/2025/05/22/wo... · Posted by u/paulpauper
bdcravens · 3 months ago
A lot of this may hinge on whether the US's drive to end subsidies for solar and EVs sticks, and whether similar cuts take hold elsewhere.
elabajaba · 3 months ago
Gas car sales peaked in 2018 globally. EVs are already >20% of new car sales worldwide, and the US is a joke when it comes to EV sales compared to Europe or China.

https://ourworldindata.org/electric-car-sales

elabajaba commented on Rust’s dependencies are starting to worry me   vincents.dev/blog/rust-de... · Posted by u/chaosprint
palata · 4 months ago
> some of the difference here is just perception due to dependencies in C/C++ being less immediately visible since they're dynamically loaded.

Not in my case. I manually compile all the dependencies (either because I need to cross-compile, or because I may need to patch them, etc). So I clearly see all the transitive dependencies I need in C++. And I need far fewer than in Rust, by a long shot.

elabajaba · 4 months ago
Part of the Rust dependency issue is that the compiler currently only multithreads at the crate level (this is slowly being improved on nightly, but there are still some bugs to work out before they can roll out the parallel compiler), so most libraries split themselves into a ton of small crates because otherwise they just take too long to compile.

edit: Also, `cargo-vet` is useful for distributed auditing of crates. There's also `cargo-crev`, but afaik it doesn't have buy-in from the megacorps the way cargo-vet does, and last I checked it didn't have as many or as consistent reviews.

https://github.com/mozilla/cargo-vet

https://github.com/crev-dev/cargo-crev

elabajaba commented on Migrating away from Rust   deadmoney.gg/news/article... · Posted by u/rc00
palata · 4 months ago
I didn't mean that the OP should use Java. BTW the OP does not use C++, but Rust.

This said, they moved to Unity, which is C#, which is garbage collected, right?

elabajaba · 4 months ago
The core Unity engine is C++ that you can't access, but all Unity games are written in C#.
elabajaba commented on Framework's first desktop is a strange–but unique–mini ITX gaming PC   arstechnica.com/gadgets/2... · Posted by u/perihelions
znpy · 6 months ago
They are a fairly small company, and going for amd/intel means reaching the widest audience.

Linux on ARM is very mature, but Windows on ARM is not quite there yet.

That being said, other companies could very well develop and sell boards for the Framework laptop. So much so that iirc DeepComputing released a RISC-V mainboard to use in the Framework laptop case.

elabajaba · 6 months ago
Linux on ARM is actually pretty terrible outside of the server space, due to the integrated GPUs (Qualcomm, Imagination, and ARM Mali) being bad and having terrible drivers.
elabajaba commented on OpenWrt 24.10.0 – First Stable Release   openwrt.org/releases/24.1... · Posted by u/pm2222
ndsipa_pomu · 7 months ago
Oh great - I don't need another nanopi and yet I'm now going to check out the spec/price for that one.

NVMe is great for adding swap and frequently updating containers.

Edit: just checked and it's only got 2 or 4GB of RAM, so I'm less interested in it.

elabajaba · 6 months ago
The R5S has a garbage CPU that probably won't be able to handle QoS above ~250Mbps.

I'd avoid anything ARM-based that doesn't have A7x cores (ideally A76/A78 or newer, though I don't think there are any SBC SoCs using the A710/A715/A720 yet). A55 cores are old, stupidly slow efficiency cores (area-efficient, not power-efficient).

elabajaba commented on FLAC 1.5 Delivers Multi-Threaded Encoding   phoronix.com/news/FLAC-1.... · Posted by u/mikece
arp242 · 7 months ago
> a modern audio codec at 320kbps bitrate is more than good enough.

MP3 V0 should already be, and is typically smaller.

That said, it does depend on the quality of the encoder; back in the day a lot of MP3 encoders were not very good, even at high quality settings. These days LAME is the de-facto standard and it's pretty good, but maybe some others aren't(?)

elabajaba · 6 months ago
Hell, modern audio codecs (Opus and AAC, though not ffmpeg's built-in Opus/AAC encoders) are transparent at ~160-192kbps. MP3 is a legacy codec these days, and generally needs ~30% more bitrate for similar quality.
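For scale, here's what that gap works out to as a quick sketch (pure arithmetic; the 30% figure is the rough one from above):

    // Encoded size in MB for a given bitrate and duration.
    fn size_mb(bitrate_kbps: f64, seconds: f64) -> f64 {
        bitrate_kbps * 1000.0 / 8.0 * seconds / 1_000_000.0
    }

    fn main() {
        let hour = 3600.0;
        println!("{:.0} MB", size_mb(192.0, hour));       // ~86 MB/hour of Opus
        println!("{:.0} MB", size_mb(192.0 * 1.3, hour)); // ~112 MB/hour of MP3 for similar quality
    }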
elabajaba commented on Intel's Battlemage Architecture   chipsandcheese.com/p/inte... · Posted by u/ksec
bee_rider · 7 months ago
I have always associated Intel iGPUs with good drivers but people seem to often complain about their Linux dGPU drivers in these threads. I hope it is just an issue of them trying to break into a new field, rather than a slipping of their GPU drivers in general…
elabajaba · 6 months ago
Intel GPU drivers have always been terrible. There are so many features that are just broken if you try to actually use them, on top of the drivers generally being extremely slow.

Hell, the B580 is CPU-bottlenecked on everything that isn't a 7800X3D or 9800X3D, which is insane for a low-midrange GPU.

elabajaba commented on Intel's Battlemage Architecture   chipsandcheese.com/p/inte... · Posted by u/ksec
myrmidon · 7 months ago
Loosely related question:

What prevents manufacturers from taking some existing mid/toprange consumer GPU design, and just slapping like 256GB VRAM onto it? (enabling consumers to run big-LLM inference locally).

Would that be useless for some reason? What am I missing?

elabajaba · 7 months ago
The amount of memory you can put on a GPU is mainly constrained by the GPU's memory bus width (which is both expensive and power hungry to expand) and the available GDDR chips (each chip generally takes 32 bits of the bus). We've been using 16Gbit (2GB) chips for a while, and 24Gbit (3GB) GDDR7 modules are just starting to roll out, but they're expensive and in limited supply. You also have to account for VRAM being somewhat power hungry (~1.5-2.5W per module under load).

Once you've filled all the slots, your only real option is a clamshell setup, which doubles VRAM capacity by putting chips on the back of the PCB in the same spots as the ones on the front (for timing reasons the traces all have to be the same length). Clamshell designs then need to figure out how to cool those chips on the back (~1.5-2.5W per module depending on speed and whether it's GDDR6/6X/7, meaning you could have up to 40W on the back).

Some basic math puts us at 16 modules for a 512-bit bus (only the 5090; you have to go back a decade+ to find the previous 512-bit GPU), 12 with a 384-bit bus (4090, 7900 XTX), or 8 with a 256-bit bus (5080, 4080, 7800 XT).

A clamshell 5090 with 2GB modules tops out at 64GB, or 96GB with (currently expensive and limited) 3GB modules (you'll be able to buy that at some point as the RTX 6000 Blackwell, at stupid prices).
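Here's that arithmetic as a quick sketch (just the 32-bits-per-module constraint described above, nothing vendor-specific):

    // Max VRAM = (bus width / 32 bits per GDDR module) * module capacity,
    // doubled if the board uses a clamshell (front + back) layout.
    fn max_vram_gb(bus_width_bits: u32, module_gbit: u32, clamshell: bool) -> u32 {
        let modules = (bus_width_bits / 32) * if clamshell { 2 } else { 1 };
        modules * module_gbit / 8 // Gbit -> GB
    }

    fn main() {
        assert_eq!(max_vram_gb(512, 16, false), 32); // 512-bit bus, 2GB modules
        assert_eq!(max_vram_gb(512, 16, true), 64);  // clamshell doubles it
        assert_eq!(max_vram_gb(512, 24, true), 96);  // clamshell + 3GB modules
        assert_eq!(max_vram_gb(384, 16, false), 24); // 4090 / 7900 XTX class
    }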

HBM can get you higher capacities, but it's extremely expensive to buy (you're competing against H100s, MI300Xs, etc.), supply-limited (AI hardware companies are buying all of it and want even more), requires a different memory controller (meaning you'd still have to partially redesign the GPU), and requires expensive packaging to assemble.

elabajaba commented on Intel announces Arc B-series "Battlemage" discrete graphics with Linux support   phoronix.com/review/intel... · Posted by u/rbanffy
AnthonyMouse · 9 months ago
> They can't just slap more memory on the board

Why not? It doesn't have to be balanced. RAM is cheap. You would get an affordable card that can hold a large model and still do inference e.g. 4x faster than a CPU. The 128GB card doesn't have to do inference on a 128GB model as fast as a 16GB card does on a 16GB model, it can be slower than that and still faster than any cost-competitive alternative at that size.

The extra RAM also lets you do things like load a sparse mixture of experts model entirely into the GPU, which will perform well even on lower end GPUs with less bandwidth because you don't have to stream the whole model for each token, but you do need enough RAM for the whole model because you don't know ahead of time which parts you'll need.

elabajaba · 9 months ago
To get 128GB of RAM on a GPU you'd need at least a 1024-bit bus. GDDR6X is 16Gbit with a 32-bit interface, so you'd need 64 chips, and good luck fitting those around the GPU die, since the traces all need to be the same length and you want to keep them as short as possible. There's also a good chance you couldn't run a clamshell setup, because the 32 chips on the back would kick off way too much heat to be cooled there, so you'd have to double the bus width to 2048 bits. Such a ridiculous setup would obviously be extremely expensive and would use way too much power.
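Running the same constraint backwards shows where the 1024/2048-bit numbers come from (a sketch, assuming 16Gbit modules at 32 bits each):

    // Minimum bus width to hit a target capacity with a given module size.
    fn bus_width_for(target_gb: u32, module_gbit: u32, clamshell: bool) -> u32 {
        let modules = target_gb * 8 / module_gbit; // chips needed
        let per_side = if clamshell { modules / 2 } else { modules };
        per_side * 32 // each chip takes 32 bits of the bus
    }

    fn main() {
        assert_eq!(bus_width_for(128, 16, true), 1024);  // 64 chips, 32 per side
        assert_eq!(bus_width_for(128, 16, false), 2048); // all 64 on the front
    }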

A more sensible alternative would be HBM, except good luck getting any capacity, since it's all going to the extremely high-margin data center GPUs. HBM is also extremely expensive, both in the cost of buying the chips and due to its advanced packaging requirements.
