Y of X providing Z - Windows Subsystem for Linux.
Y providing X on Z - Linux Subsystem for Windows.
The former is "for [having]", the latter is "for [use on]".
While "Windows Subsystem for Linux" is fine in the intended sense, it could just as well mean a subsystem on Linux that runs Windows (like Wine, I guess). Parallel examples might be Brake Pads for Chevys or Oven Cleaner for Microwaves.
As further examples of the weirdness of compound nouns in English, consider Atomic Scientist, which does not mean a scientist who is atomic, but rather an 'ist' (= person) who does atomic science. Likewise Nuclear Physicist, Artificial Intelligence Researcher (at least for now, since AI systems aren't researchers :)).
So anyway, should we be calling this "hairpaste for teeth", or "toothpaste from hair"?
Generative AI is a lot of things. LLMs in particular (a subset of generative AI) are somewhat useful, but nowhere near as useful as Sam claims. And I guess LLMs specifically, if we focus on ChatGPT, will not be solving cancer lol.
So we agree that Sam is selling snake oil. :)
Just wanted to point out that a lot of the fundamental “tech” is being used for genuinely useful things!
Careful. I too am pessimistic about the generative AI hype, but you seem even more so, to the point where it’s making you biased and possibly uninformed.
Today’s news from the BBC, 6 hours ago: “AI designs antibiotics for gonorrhoea and MRSA superbugs”
https://www.bbc.com/news/articles/cgr94xxye2lo
> Now, the MIT team have gone one step further by using *generative AI* to design antibiotics in the first place for the sexually transmitted infection gonorrhoea and for potentially-deadly MRSA (methicillin-resistant Staphylococcus aureus).
…
> "We're excited because we show that generative AI can be used to design completely new antibiotics," Prof James Collins, from MIT, tells the BBC.
> One of those algorithms, known as chemically reasonable mutations (CReM), works by starting with a particular molecule containing F1 and then generating new molecules by adding, replacing, or deleting atoms and chemical groups. The second algorithm, F-VAE (fragment-based variational autoencoder), takes a chemical fragment and builds it into a complete molecule. It does so by learning patterns of how fragments are commonly modified, based on its pretraining on more than 1 million molecules from the ChEMBL database.
(The technical article about the MIT work is here: https://www.cell.com/cell/abstract/S0092-8674(25)00855-4)
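To make the "adding, replacing, or deleting atoms" idea concrete, here's a toy sketch of the mutate-and-filter loop in Python using RDKit. To be clear, this is my own illustration, not the MIT or CReM code: the function name `mutate_replace_atom`, the aspirin seed molecule, and the element choices are all assumptions for demonstration, and it only shows the single-atom "replace" case. The real CReM swaps whole chemical fragments mined from a database, but the core pattern is the same: propose an edit, then keep it only if the result is chemically valid.

```python
# Toy sketch of CReM-style mutate-and-filter generation (NOT the published
# CReM implementation): start from a seed molecule, apply small atom-level
# edits, and keep only the chemically valid results.
# Requires RDKit (pip install rdkit).
from rdkit import Chem

def mutate_replace_atom(smiles, atom_idx, new_atomic_num):
    """Replace one atom in the molecule; return new SMILES, or None if invalid."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None or atom_idx >= mol.GetNumAtoms():
        return None
    rw = Chem.RWMol(mol)  # editable copy of the molecule
    rw.ReplaceAtom(atom_idx, Chem.Atom(new_atomic_num))
    try:
        Chem.SanitizeMol(rw)  # reject chemically nonsensical edits
    except Exception:
        return None
    return Chem.MolToSmiles(rw)

seed = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, as an arbitrary seed molecule
candidates = set()
for idx in range(Chem.MolFromSmiles(seed).GetNumAtoms()):
    for z in (7, 8, 16):  # try swapping in nitrogen, oxygen, sulfur
        out = mutate_replace_atom(seed, idx, z)
        if out and out != seed:
            candidates.add(out)

print(f"{len(candidates)} valid single-atom mutants of the seed")
```

In the actual pipeline, the candidates surviving the validity filter would then be scored by a trained model for predicted antibacterial activity; the generation step itself, as the quote says, is just systematic structural editing.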
Both the MIT programs and GPT-5 use "generative AI", but with entirely different training sets and perhaps very different training methods, architectures, etc. Indeed, the AI systems used in the MIT work were described in conference papers in 2018 and 2020 (citations in the Cell paper), so they predate the current generation of LLMs like GPT-5 by quite a bit. In sum, the fact that the MIT model (reportedly) works well in developing antibiotics does not in any way imply that GPT-5 is a "scientific breakthrough", much less that LLMs will lead to AI that is able to "rewrite and improve itself, or to cure cancer, unify physics or any kind of scientific or technological breakthrough" (quoting the OP).
I bet it could generate assembly instructions, list each part, and help diagnose or tune. And that's remarkable, and demonstrates enough fake understanding to be useful.