Readit News
atty commented on Cloudflare outage on February 20, 2026   blog.cloudflare.com/cloud... · Posted by u/nomaxx117
atty · 20 days ago
I do not work in the space at all, but it seems like Cloudflare has been having more network disruptions lately than they used to. To anyone who deals with this sort of thing, is that just recency bias?
atty commented on Claude is good at assembling blocks, but still falls apart at creating them   approachwithalacrity.com/... · Posted by u/bblcla
TeMPOraL · 2 months ago
> The technology cannot create new concepts/abstractions, and so fails at abstraction. Reliably.

That statement is way too strong, as it implies either that humans cannot create new concepts/abstractions, or that magic exists.

atty · 2 months ago
I think both your statement and their statement are too strong. There is no reason to think LLMs can do everything a human can do, which seems to be your implication. On the other hand, the technology is still improving, so maybe it’ll get there.
atty commented on Someone at YouTube Needs Glasses: The Prophecy Has Been Fulfilled   jayd.ml/2025/11/10/someon... · Posted by u/jaydenmilne
gessha · 4 months ago
I guess they just hate their users lmao
atty · 4 months ago
Never ascribe to malice what can be sufficiently explained by incompetence. And I think it’s fair to say the best and brightest at Google aren’t turning their attention to YouTube lately. Except maybe to make training datasets for Gemini N+1 :)
atty commented on Uv is the best thing to happen to the Python ecosystem in a decade   emily.space/posts/251023-... · Posted by u/todsacerdoti
adastra22 · 4 months ago
I wish the Python ecosystem would just switch to Rust. Things are nice over here… please port your packages to crates.
atty · 4 months ago
The unspoken assertion that Rust and Python are interchangeable is pretty wild and needs significant defense, I think. I know a lot of scientists who would see their first borrow checker error, immediately move back to Python/C++/Matlab/Fortran/Julia, and never consider Rust again.
atty commented on Apple Silicon GPU Support in Mojo   forum.modular.com/t/apple... · Posted by u/mpweiher
lqstuart · 6 months ago
I like Chris Lattner but the ship sailed for a deep learning DSL in like 2012. Mojo is never going to be anything but a vanity project.
atty · 6 months ago
To be fair, Triton is in active use, and this should be even more ergonomic for Python users than Triton. I don’t think it’s a sure thing, but I wouldn’t say it has zero chance either.
atty commented on Chat Control Must Be Stopped   privacyguides.org/article... · Posted by u/_p2zi
rhizome · 6 months ago
>I wonder why there has been such silence on this

Some combination of cowardice, conflict of interest, and fear of ICE.

atty · 6 months ago
Which ICE are you referring to? This is an EU law.
atty commented on Vijaye Raji to become CTO of Applications with acquisition of Statsig   openai.com/index/vijaye-r... · Posted by u/tosh
nerdsniper · 6 months ago
“CTO” makes sense as a signal that “the buck stops here” for technical issues. They are the highest-ranking authority on technical decisions for their silo, with no one above them (but two CEOs above them for business decisions).

If Mira Murati (CTO of OpenAI) has authority over their technical decisions, then it’s an odd title. If I was talking with a CTO, I wouldn't expect another CTO to outrank or be able to overrule them.

atty · 6 months ago
It would be quite strange indeed for Mira Murati to have a say over their technical decisions, considering she does not work for OpenAI :)
atty commented on Ask HN: How can ChatGPT serve 700M users when I can't run one GPT-4 locally?    · Posted by u/superasn
rythie · 7 months ago
First off I’d say you can run models locally at good speed: llama3.1:8b runs fine on a MacBook Air M2 with 16GB RAM, and much better on an Nvidia RTX3050, which is fairly affordable.

For OpenAI, I’d assume that a GPU is dedicated to your task from the point you press enter to the point it finishes writing. I would think most of the 700 million barely use ChatGPT, and a small proportion use it a lot and likely would need to pay due to the limits. Most of the time you have the website/app open, I’d think you are either reading what it has written, writing something, or it’s just open in the background, so ChatGPT isn’t doing anything in that time. If we assume 20 queries a week taking 25 seconds each, that’s 8.33 minutes a week. That would mean a single GPU could serve up to 1209 users, meaning for 700 million users you’d need at least 578,703 GPUs. Sam Altman has said OpenAI is due to have over a million GPUs by the end of the year.
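The arithmetic in that estimate checks out; a quick sketch under the same assumptions (20 queries per user per week at 25 seconds each, one GPU fully dedicated per in-flight query):

```python
# Back-of-envelope capacity check (illustrative; same assumptions as above,
# not actual OpenAI figures).
queries_per_week = 20
seconds_per_query = 25

busy_seconds = queries_per_week * seconds_per_query  # 500 s ~= 8.33 min/week
seconds_per_week = 7 * 24 * 3600                     # 604,800 s

users_per_gpu = seconds_per_week / busy_seconds      # ~1209 users per GPU
gpus_needed = 700_000_000 / users_per_gpu            # ~578,703 GPUs

print(int(users_per_gpu), int(gpus_needed))          # 1209 578703
```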

I’ve found that the inference speed on newer GPUs is barely faster than older ones (perhaps it’s memory speed limited?). They could be using older clusters of V100, A100 or even H100 GPUs for inference if they can get the model to fit or multiple GPUs if it doesn’t fit. A100s were available in 40GB and 80GB versions.

I would think they use a queuing system to allocate your message to a GPU. Slurm is widely used in HPC compute clusters, so they might use that, though they have likely rolled their own system for inference.

atty · 7 months ago
The idea that a GPU is dedicated to a single inference task is just generally incorrect. Inputs are batched, and it’s not a single GPU handling a single request; it’s a handful of GPUs in various parallelism schemes processing a batch of requests at once. There’s a latency vs. throughput trade-off that operators make: the larger the batch size, the greater the latency, but the better the overall cluster throughput.
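A toy model makes that trade-off concrete (the fixed-overhead and per-request costs below are made-up illustrative numbers, not measured figures from any real deployment):

```python
# Toy model of batched inference: each forward pass pays a fixed cost
# (kernel launches, reading the weights) plus a small per-request cost,
# so batching amortizes the fixed part. Numbers are invented for illustration.
def step_time_ms(batch_size, fixed_overhead_ms=20.0, per_item_ms=1.5):
    return fixed_overhead_ms + per_item_ms * batch_size

for b in (1, 8, 64):
    t = step_time_ms(b)
    # Per-request latency rises with batch size, but requests/second rises too.
    print(f"batch={b:3d}  latency={t:6.1f} ms  throughput={1000 * b / t:7.1f} req/s")
```

Running this shows latency growing from ~21.5 ms to ~116 ms while throughput grows roughly tenfold, which is the trade operators are making.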
atty commented on Multiplatform Matrix Multiplication Kernels   burn.dev/blog/sota-multip... · Posted by u/homarp
almostgotcaught · 8 months ago
> matmult would be in transformer-optimized hardware

It is... it's in GPUs lol

> first class in torch

It is

> costing a fraction of GPUs

Why would anyone give you this for cheaper than GPUs lol?

atty · 8 months ago
I think they’re referring to hardware like TPUs and other ASICs. Which also exist, of course :)
atty commented on How to solve computational science problems with AI: PINNs   mertkavi.com/how-to-solve... · Posted by u/mertkavi
atty · a year ago
I work on a team that has actually deployed NN-based surrogate models into production in industry. We don’t use PINNs for the simple reason that many industrial-scale solvers are solving significantly more complex systems than a single global PDE (at least in CFD; perhaps other areas are simpler). For instance, close to the boundaries, the solver our engineers rely on uses an approximation that does not satisfy conservation of mass and momentum, so when we try to impose physical constraints, our accuracy goes down. Even in the cases where we could technically use PINNs, we find they are underwhelming, and spending time on crafting better training data sets has always been a better option for us.
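For readers unfamiliar with PINNs, the “physical constraint” being discussed is an extra loss term penalizing violation of a governing equation. A minimal sketch with a hypothetical finite-difference conservation residual (not the commenter’s production setup, where the solver itself breaks conservation near boundaries):

```python
import numpy as np

# Hedged illustration: penalize violation of a 1D conservation law
# du/dt + d(flux)/dx = 0 on gridded samples of a field u. In a real PINN
# this penalty is added to the data-fitting loss during training.
def physics_penalty(u, flux, dx, dt):
    du_dt = (u[1:, :] - u[:-1, :]) / dt            # forward difference in time
    dflux_dx = (flux[:, 1:] - flux[:, :-1]) / dx   # forward difference in space
    residual = du_dt[:, :-1] + dflux_dx[:-1, :]    # conservation residual
    return float((residual ** 2).mean())

u = np.ones((10, 10))                              # a constant field...
penalty = physics_penalty(u, u.copy(), dx=0.1, dt=0.1)
print(penalty)                                     # ...trivially conserves: 0.0
```

The commenter’s point is that when the reference solver itself violates such a law (e.g. near boundaries), minimizing a penalty like this pulls the surrogate away from the data it is supposed to match.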

u/atty

Karma: 2068 · Cake day: September 29, 2019