Readit News
InCom-0 commented on The maths you need to start understanding LLMs   gilesthomas.com/2025/09/m... · Posted by u/gpjt
saagarjha · 15 hours ago
If you're just piecing together a bunch of libraries, sure. But anyone who is adjacent to ML research should know how these work.
InCom-0 · 14 hours ago
Anyone actually physically doing ML research knows it ... but doesn't write the actual code for this stuff (or, god forbid, write some byzantine math notation somewhere), and doesn't even think about this stuff except through X levels of higher-level abstractions.

Also, those people understand LLMs already :-).

InCom-0 commented on The maths you need to start understanding LLMs   gilesthomas.com/2025/09/m... · Posted by u/gpjt
antegamisou · 17 hours ago
> attempt to derail people into low level math when that is not the crux of the question at all.

Is the barrier to entry to the ML/AI field really that low? I think no one seasoned would consider fundamental linear algebra 'low level' math.

InCom-0 · 15 hours ago
What do you mean 'low'? :-)

The barrier to entry is probably epically high, because to be actually useful you need to understand how to actually train a model in practice, how it is actually designed, and how existing practices (i.e. at OpenAI or wherever) can be built upon further ... and you need to be cutting edge at all of those things. This is not taught anywhere; you can't read about it in some book. This has absolutely nothing to do with linear algebra ... or, more accurately, you don't get better at those things by understanding linear algebra (or any math) better than the next guy. It is not as if 'If I were better at math, I would have been a better AI researcher or programmer or whatever' :-). This is just not what these people do or how that process works. Even the foundational research that sparked rapid LLM development (the 'Attention Is All You Need' paper) is not some math-heavy stuff. The whole thing is a conceptual idea that was tested and turned out to be spectacular.

InCom-0 commented on The maths you need to start understanding LLMs   gilesthomas.com/2025/09/m... · Posted by u/gpjt
jasode · 21 hours ago
>This is as if you started explaining how an ICE car works by diving into chemical properties of petrol.

But wouldn't explaining the chemistry actually be acceptable if the title was, "The chemistry you need to start understanding Internal Combustion Engines"

That's analogous to what the author did. The title was "The maths ..." -- and then the body of the article fulfills the title by explaining the math relevant to LLMs.

It seems like you wished the author wrote a different article that doesn't match the title.

InCom-0 · 21 hours ago
'The maths you need to start understanding LLMs'.

You don't need that math to start understanding LLMs. In fact, I'd argue it's harmful to start there, unless your goal is to 'take me on an epic journey of all the things mankind needed to figure out to make LLMs work, from the absolute basics'.

InCom-0 commented on The maths you need to start understanding LLMs   gilesthomas.com/2025/09/m... · Posted by u/gpjt
49pctber · 21 hours ago
Anyone who would like to run an LLM would need to perform their computations on hardware. So picking hardware that is good at matrix multiplication is important for them, even if they didn't develop their LLM from scratch. Knowing the basic math also explains some of the rush to purchase GPUs and TPUs in recent years.

All that is kind of missing the point though. I think people being curious and sharpening their mental models of technology is generally a good thing. If you didn't know an LLM was a bunch of linear algebra, you might have some distorted views of what it can or can't accomplish.

InCom-0 · 21 hours ago
Being curious is good ... nothing wrong with that. What I took issue with above is (what I see as) an attempt to derail people into low-level math when that is not the crux of the question at all.

Also: nobody who wants to run LLMs will write their own matrix multiplications. Nobody doing ML/AI comes close to that stuff ... it's all abstracted and not something anyone actually thinks about (except the few people who actually write the underlying libraries, e.g. at Nvidia).
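
A minimal sketch of that abstraction point, assuming PyTorch is available (the 768-wide layer and the variable names are illustrative, not taken from any particular model):

    # The matrix multiply at the heart of an LLM layer is something you call, not something you write.
    import torch

    x = torch.randn(1, 8, 768)           # a batch holding 8 token embeddings
    proj = torch.nn.Linear(768, 768)     # one projection like those inside an attention block

    y = proj(x)                          # the actual matmul happens in here, dispatched
                                         # to vendor kernels (cuBLAS, MKL, ...) under the hood

    # Writing it out by hand is possible, but nobody building on top of the library does:
    y_manual = x @ proj.weight.T + proj.bias
    print(torch.allclose(y, y_manual, atol=1e-5))    # True

The linear algebra gets executed, not authored, by the people using these libraries.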

InCom-0 commented on The maths you need to start understanding LLMs   gilesthomas.com/2025/09/m... · Posted by u/gpjt
InCom-0 · a day ago
These are technical details of computations that are performed as part of LLMs.

Completely pointless to anyone who is not writing the lowest-level ML libraries (so basically everyone). This does not help anyone understand how LLMs actually work.

This is as if you started explaining how an ICE car works by diving into the chemical properties of petrol. Yeah, that really is the basis of it all, but no, it is not where you start when explaining how a car works.

InCom-0 commented on Anthropic agrees to pay $1.5B to settle lawsuit with book authors   nytimes.com/2025/09/05/te... · Posted by u/acomjean
NovemberWhiskey · a day ago
Classic indications of a cartel (in the economic sense) are deliberate limitations of supply and fixing of prices through collusion. I don’t know about other cities, but NYC absolutely had a taxi cartel.
InCom-0 · a day ago
This is true ... except that it is a simplistically naive way of looking at things, because this is just one form (out of many) of anti-competitive practice. It is essentially the high-school-level basics of antitrust. In actual reality there is quite a bit more to it than that.

For instance: monopolies often don't actually limit supply. They simply make it so customers can't choose an alternative, and set prices accordingly (that is, higher than they would have been if there were real alternatives). Big-tech companies do this all the time. Collusion is also not required; it is only one form (today virtually unheard of, or very rare) of how this may happen.

For instance: big-tech companies often don't actually encroach on the core parts of each other's businesses. Google, Microsoft, Apple and Uber are all totally different businesses with little competitive overlap. They are not doing this because of outright collusion. It's live and let live. Why compete with them when they are leaving us alone in our corner? Also: trying to compete is expensive (for them), it's risky, and it may hurt them in other ways. This is one of the dirty little secrets: established companies don't (really) want to compete with other big companies. They all just want to protect what's theirs and keep it that way. If you don't believe me, have a look at the (publicly available) emails from execs that are part of the public record. Anti-competitive thinking through and through.

InCom-0 commented on Anthropic agrees to pay $1.5B to settle lawsuit with book authors   nytimes.com/2025/09/05/te... · Posted by u/acomjean
jimmaswell · a day ago
> It was faster to just put unlicensed taxis on the streets and use investor money to pay fines and lobby for favorable legislation

And thank god they did. There was no perfectly legal channel to fix the taxi cartel. Now you don't even have to use Uber in many of these places, because taxis had to compete - they otherwise never would have stopped pulling the "credit card reader is broken" scam or taking long routes on purpose, and they never would have started using tech that made them more accountable for these things and made it harder for them to racially profile passengers. (They would infamously pretend not to see you if they didn't want to give you service, back when you had to hail them with an IRL gesture instead of an app...)

InCom-0 · a day ago
The supposed 'taxi cartel' was just (some) scummy operators ... not really a cartel. Fast forward to today => you are paying more for what is essentially a very similar service (because it literally turned into a monopoly, because of network effects) and the money ends up in the pocket of some corporate douche, not even the people doing the actual work.

This is the business model: get more money out of customers (because no real alternative) and the drivers (because zero negotiating power). Not to mention that they actually got to that position by literally operating at a loss for over a decade (because venture money). Textbook anti-competitive practices.

However, the idea itself (that is, having an app to order a taxi) is spectacular. It is also something a high-school kid could make in a month in his garage. The actual strength of the business model is the network effects and the anti-competitive practices, not the app or anything having to do with service quality.

InCom-0 commented on Where's the shovelware? Why AI coding claims don't add up   mikelovesrobots.substack.... · Posted by u/dbalatero
InCom-0 · 3 days ago
On one hand I don't understand what all the fuss is about. LLMs are great at all kinds of things: searching for (good) information, summarizing existing text, conceptual discussions where they point you in the right direction very quickly, etc. ... They are just not great (some might say harmful) at straight-up non-trivial code generation or the design of complex systems, with the added peculiarity that on the surface the models seem almost capable of doing it, but never quite are ... which is sort of their central feature: producing text so that it seems correct from a statistical perspective, but without actual reasoning.

On the other hand, I do understand that the things LLMs are really great at are not actually all that spectacular to monetize ... and so as a result we have all these snake oil salesmen on every corner boasting about nonsensical vibecoding achievements, because that's where the real money would be ... if it were really true ... but it is not.

InCom-0 commented on Where's the shovelware? Why AI coding claims don't add up   mikelovesrobots.substack.... · Posted by u/dbalatero
falcor84 · 3 days ago
That's a fantastic piece of short fiction, but it is fiction. In practice though, I've seen so many copy&pasted unsourced open source snippets in proprietary code that I've lost all ability to be surprised by it, and I can't think of any one time where the company was sued about that, let alone anyone facing any personal repercussions, not even those junior devs. And if anything, by being "lossy encyclopedias" rather than copy-pasters, LLMs significantly reduce this ostensible legal liability.

Oh, and then you have the actual tech giants offering a legal commitment to protect you against any copyright claims:

https://blogs.microsoft.com/on-the-issues/2023/09/07/copilot...

https://cloud.google.com/blog/products/ai-machine-learning/p...

InCom-0 · 3 days ago
You can copy-paste unsourced open source snippets just fine, ain't nothing wrong with that (usually). It is another story whether anyone should do that for other reasons, which have nothing to do with open source or licensing.
InCom-0 commented on What to do with C++ modules?   nibblestew.blogspot.com/2... · Posted by u/ingve
feelamee · 6 days ago
sure, but don't forget that rust also gives us nice tooling, functional syntax sugar like pattern matching, enums and monads, and other more or less useful things like explicit lifetimes
InCom-0 · 6 days ago
Of course. This is kind of to be expected, as it has the benefit of 25+ years of hindsight. It is infinitely easier to design better things with all the accumulated experience and know-how of what works and what doesn't under your belt. It would have been truly horrifying if that had not been the case.

That being said, Rust is really about lifetimes. That's the big-ticket selling point. My point above was that 1) it isn't a silver bullet and 2) it can be a real hindrance in many applications.
