semessier commented on Croatian freediver held breath for 29 minutes   divernet.com/scuba-news/f... · Posted by u/toomanyrichies
semessier · 8 days ago
Wondering how newborns in respiratory distress can survive hypoxia for almost 30 minutes post-birth with measured SpO2 values of 50% and 57%.
semessier commented on Why has Linux stopped innovating?   cocz.net/why-is-linux-not... · Posted by u/melchizedek6809
semessier · 11 days ago
IBM hasn't had a hardware winner in more than 30 years, so by that standard this is still a pretty good record.
semessier commented on Jan – Ollama alternative with local UI   github.com/menloresearch/... · Posted by u/maxloh
semessier · 17 days ago
Still waiting for vLLM to support Metal GPUs on Apple Silicon Macs.
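For context, a minimal sketch of the Metal path that already exists on Apple Silicon: PyTorch's MPS backend, which is the layer a Metal port of vLLM would presumably have to target. The matrix sizes here are arbitrary.

    import torch

    # PyTorch's MPS backend is the Metal path on Apple Silicon GPUs;
    # this is the layer a Metal port of vLLM would have to target.
    device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

    a = torch.randn(1024, 1024, device=device)
    b = torch.randn(1024, 1024, device=device)
    c = a @ b  # runs on the Metal GPU when MPS is available
    print(c.device)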
semessier commented on Getting into Flow State with Agentic Coding   kau.sh/blog/agentic-codin... · Posted by u/vortex_ape
semessier · a month ago
Separate contexts for development, test-case development, and documentation?
semessier commented on Multiplatform Matrix Multiplication Kernels   burn.dev/blog/sota-multip... · Posted by u/homarp
semessier · a month ago
Two years ago I would have bet that matmul would move into transformer-optimized hardware costing a fraction of GPUs, with first-class support in torch and no reason to use GPUs any more. Wrong.
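The abstraction side of that bet does hold up: torch's matmul is device-agnostic, so a dedicated matmul accelerator would mostly need to register as another device. A minimal sketch; the device-preference order is my own illustration.

    import torch

    # torch.matmul is device-agnostic: the same expression is dispatched
    # to whichever backend's kernel the tensors live on, so a
    # transformer-optimized matmul chip would only need to register as
    # another device string.
    def pick_device() -> torch.device:
        if torch.cuda.is_available():
            return torch.device("cuda")
        if torch.backends.mps.is_available():
            return torch.device("mps")
        return torch.device("cpu")

    device = pick_device()
    x = torch.randn(8, 512, 768, device=device)  # (batch, seq, hidden)
    w = torch.randn(768, 768, device=device)
    y = x @ w  # dispatched to the backend's matmul kernel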
semessier commented on Apple vs the Law   formularsumo.co.uk/blog/2... · Posted by u/tempodox
semessier · 2 months ago
What was Apple's effective tax rate in Europe last year?
semessier commented on Ask HN: What are you working on? (May 2025)    · Posted by u/david927
semessier · 3 months ago
legal tech apps via AI
semessier commented on 'I paid for the whole GPU, I am going to use the whole GPU'   modal.com/blog/gpu-utiliz... · Posted by u/mooreds
semessier · 4 months ago
Well, related: fractional GPUs for multiplexing workloads to raise aggregate utilization have been a topic for some time, with no definitive (NVIDIA) solution yet: https://vhpc.org
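As a rough illustration of the building blocks that do exist, here is a minimal sketch of multiplexing two workloads on one GPU with CUDA streams, assuming a CUDA-capable torch install; note that streams, unlike a true fractional-GPU scheme, enforce no resource cap.

    import torch

    # Two independent workloads time-share one GPU via CUDA streams, one
    # of the partial building blocks (alongside MPS and MIG) behind
    # fractional-GPU schemes; nothing here enforces a resource cap.
    assert torch.cuda.is_available(), "needs a CUDA GPU"

    s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")

    with torch.cuda.stream(s1):
        out1 = a @ a  # workload 1
    with torch.cuda.stream(s2):
        out2 = b @ b  # workload 2

    torch.cuda.synchronize()  # wait for both streams to finish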
semessier commented on Bolt Graphics Zeus a New GPU Architecture with Up to 2.25TB of Memory and 800GbE   servethehome.com/bolt-gra... · Posted by u/Teever
dinfinity · 5 months ago
I believe this is mainly due to everything ML/AI optimizing for CUDA, with even AMD cards (which are very similar to Nvidia cards) unable to compete due to lack of proper support for CUDA.
semessier · 5 months ago
This was, and still is, the chip opportunity of the century: hardware even more specialized than the still general-purpose NVIDIA cards. And no, you don't need CUDA; matrix multiplication has been abstracted away for decades. Such a chip would likely be much, much easier to build than a mixed-signal design, with the Apple C1 at the nightmare end of that scale by comparison.
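The decades-old abstraction here is essentially the BLAS interface; a minimal sketch, assuming NumPy is linked against some BLAS (OpenBLAS, MKL, Accelerate, whichever the install picked):

    import numpy as np

    # a @ b dispatches to whatever gemm implementation NumPy was built
    # against (OpenBLAS, MKL, Accelerate, ...); the caller never names
    # the hardware, which is why a matmul chip would not need CUDA.
    a = np.random.rand(2048, 2048).astype(np.float32)
    b = np.random.rand(2048, 2048).astype(np.float32)
    c = a @ b

    np.__config__.show()  # reports the BLAS the build bound to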

u/semessier

Karma: 34 · Cake day: August 14, 2016