Readit News
mcswell commented on Toothpaste made with keratin may protect and repair damaged teeth: study   kcl.ac.uk/news/toothpaste... · Posted by u/sohkamyung
aidenn0 · 8 days ago
"Windows Subsystem" is a noun-phrase here though. If you want an X providing Y on Z, then it would be "Windows Subsystem for Linux on Windows"
mcswell · 6 days ago
"Windows Subsystem" is a compound noun. It could serve as a noun phrase in a sentence like "Windows Subsystem is nice" or "I like Windows Subsystem", although without some article or other determiner (like "the" or "this") it doesn't sound very grammatical. Generally only mass nouns, like "dirt" or "food", or plural nouns, like "people" or "subsystems", can be noun phrases without an article or determiner. "the Windows Subsystem" or "a Windows Subsystem" (or "a Windows subsystem") would be (complete) noun phrases.
mcswell commented on Toothpaste made with keratin may protect and repair damaged teeth: study   kcl.ac.uk/news/toothpaste... · Posted by u/sohkamyung
card_zero · 8 days ago
No it wouldn't. Following the scheme a couple of comments above, we have:

Y of X providing Z - Windows Subsystem for Linux.

Y providing X on Z - Linux Subsystem for Windows.

The former is "for [having]", the latter is "for [use on]".

mcswell · 7 days ago
I wrote the comment you're referring to, but it wasn't intended as a complete schema, rather as a way of saying two nouns in a compound can be related in most any way. The interpretation is pragmatic and conventional, not syntactic. (And while [W S] is a compound, [W S for L] isn't, it's a (compound) noun plus a prepositional phrase.)

While W S for L is fine in the intended sense, it could just as well mean a subsystem on Linux that runs Windows (like Wine, I guess). Parallel examples might be Brake Pads for Chevys or Oven Cleaner for Microwaves.

As further examples of the weirdness of compound nouns in English, consider Atomic Scientist, which does not mean a scientist who is atomic, but rather an 'ist' (= person) who does atomic science. Likewise Nuclear Physicist, Artificial Intelligence Researcher (at least for now, since AI systems aren't researchers :)).

mcswell commented on AI is different   antirez.com/news/155... · Posted by u/grep_it
mcswell · 8 days ago
Reads like it was written by an AI.
mcswell commented on Toothpaste made with keratin may protect and repair damaged teeth: study   kcl.ac.uk/news/toothpaste... · Posted by u/sohkamyung
swayvil · 8 days ago
Correlation causation etc. I'm gonna start eating whole mice. It's a nice compromise. They're velvety.
mcswell · 8 days ago
There's a scene in the 1983 movie "Never Cry Wolf" about that. Apparently they taste better with ketchup.
mcswell commented on Toothpaste made with keratin may protect and repair damaged teeth: study   kcl.ac.uk/news/toothpaste... · Posted by u/sohkamyung
ben_w · 8 days ago
Was thinking about oddities of language recently (happens a lot since moving to Germany), specifically how "toothpaste" isn't made from teeth and "tomato paste" isn't something you rub onto a tomato.

So anyway, should we be calling this "hairpaste for teeth", or "toothpaste from hair"?

mcswell · 8 days ago
This semantic variability in the relation between the two nouns of a compound is pretty common: "Y made of X", like "tomato paste"; "Y used (somehow) for X", like "toothpaste", "paintbrush", "electrical outlet" (here an adjective, but still a lexicalized phrase); "Y in X" ("treehouse"); "Y for X" ("doghouse"); "Y containing X" ("paint can"); not to mention metaphorical uses, with some etymological relation between X and Y ("moon shot", "crapshoot", "greenhouse"), and so on. Not to mention multi-word compounds, like "greenhouse gas"--but I'm sure you've seen lots of those in Germany :).
mcswell commented on Sam Altman is in damage-control mode after latest ChatGPT release   cnn.com/2025/08/14/busine... · Posted by u/reconnecting
lurking_swe · 10 days ago
I agree with you. I’m eager to see the details once MIT releases it.

Generative AI is a lot of things. LLMs in particular (a subset of generative AI) are somewhat useful, but nowhere near as useful as what Sam claims. And I guess LLMs specifically, if we focus on ChatGPT, will not be solving cancer lol.

So we agree that Sam is selling snake oil. :)

Just wanted to point out that a lot of the fundamental “tech” is being used for genuinely useful things!

mcswell · 9 days ago
The details were released previously in the Cell paper I link to a couple posts above this. It is behind a paywall (my university allowed me access).
mcswell commented on Sam Altman is in damage-control mode after latest ChatGPT release   cnn.com/2025/08/14/busine... · Posted by u/reconnecting
lurking_swe · 10 days ago
“no scientific breakthrough”

careful. I too am pessimistic on the generative AI hype, but you seem even more so, to the point where it’s making you biased and possibly uninformed.

Today’s news from BBC, 6 hours ago. “AI designs antibiotics for gonorrhoea and MRSA superbugs”

https://www.bbc.com/news/articles/cgr94xxye2lo

> Now, the MIT team have gone one step further by using *generative AI* to design antibiotics in the first place for the sexually transmitted infection gonorrhoea and for potentially-deadly MRSA (methicillin-resistant Staphylococcus aureus).

> "We're excited because we show that generative AI can be used to design completely new antibiotics," Prof James Collins, from MIT, tells the BBC.

mcswell · 9 days ago
Hold on, you're talking about something entirely different from what stephc (the person you are responding to) was talking about. He (or she) was talking about LLMs, and GPT5 in particular. The MIT article you're referring to is talking about two generative AI programs which are not LLMs. From the MIT article (https://news.mit.edu/2025/using-generative-ai-researchers-de...), of which the BBC article you reference appears to be an excerpt:

> One of those algorithms, known as chemically reasonable mutations (CReM), works by starting with a particular molecule containing F1 and then generating new molecules by adding, replacing, or deleting atoms and chemical groups. The second algorithm, F-VAE (fragment-based variational autoencoder), takes a chemical fragment and builds it into a complete molecule. It does so by learning patterns of how fragments are commonly modified, based on its pretraining on more than 1 million molecules from the ChEMBL database.

(The technical article about the MIT work is here: https://www.cell.com/cell/abstract/S0092-8674(25)00855-4)

Both the MIT programs and GPT-5 use "generative AI", but with entirely different training sets and perhaps very different training methods, architectures, etc. Indeed, the AI systems used in the MIT work were described in conference papers in 2018 and 2020 (citations in the Cell paper), meaning that they preceded the current generation of GPT models by quite a bit. In sum, the fact that the MIT model (reportedly) works well in developing antibiotics does not in any way imply that GPT-5 is a "scientific breakthrough", much less that LLMs will lead to AI that is able to "rewrite and improve itself, or to cure cancer, unify physics or any kind of scientific or technological breakthrough" (quoting the OP).

mcswell commented on New treatment eliminates bladder cancer in 82% of patients   news.keckmedicine.org/new... · Posted by u/geox
tiahura · 11 days ago
“almost half the patients were cancer-free a year later.”
mcswell · 11 days ago
More than half would be nice, but: these tests were run on "individuals with high-risk non-muscle-invasive bladder cancer whose cancer had previously resisted treatment." One could expect that it would be even more effective on patients whose cancers were not resistant to treatment.
mcswell commented on GPT-5: Overdue, overhyped and underwhelming. And that's not the worst of it   garymarcus.substack.com/p... · Posted by u/kgwgk
jvanderbot · 14 days ago
It's true if you expect general intelligence. It's forgivable to expect general intelligence given the hype. But there's no real reason we should have expected a language model to create an image that is for some reason a perfect bicycle schematic (other than hype). And I'm not sure that imagery is actually a required format to demonstrate intelligence.

I bet it could generate assembly instructions and list each part and help diagnose or tune. And that's remarkable and demonstrates enough fake understanding to be useful.

mcswell · 13 days ago
"fake understanding" is exactly the right term. And the image is just fine, it's the labeling that's bonkers. What it illustrates is that the LLM can repeat words, but it has no idea what it's saying. Whereas any pre-1817 engineer, reading descriptions of bicycles (which GPT-5 obviously has access to), could easily have labeled a picture of one. (1817 is the date the first real bicycle is believed to have been invented, but driven by the rider's feet on the ground. Bicycles with chain drives weren't invented until decades later. But an engineer would have understood the principle.)
mcswell commented on GPT-5: Overdue, overhyped and underwhelming. And that's not the worst of it   garymarcus.substack.com/p... · Posted by u/kgwgk
jvanderbot · 14 days ago
I will never understand the "bad diagram" kind of critique. Yes maybe it can't build and label a perfect image of a bicycle, but could it list and explain the major components of a bike? Schematics are a whole different skill, and do we need to remind everyone what the L is?
mcswell · 14 days ago
Listing and explaining is essentially repeating what someone else has said somewhere. Labeling a schematic requires understanding what you're saying (or copy-pasting a schematic, so I guess we can be happy that GPT-5 doesn't do that). No one who actually understood the function of the major components would mislabel the schematic like that, unless they were blind.

u/mcswell · Karma: 1057 · Cake day: September 15, 2018