Readit News
lebek commented on The baffling intelligence of a single cell: The story of E. coli chemotaxis   jsomers.net/e-coli-chemot... · Posted by u/jsomers
jjk166 · 2 years ago
Evolution doesn't produce the first part of the flagellum, then the second part, then the third part.

It produces a shitty flagellum, then a better flagellum, then a good flagellum.

But the problem is we don't see the intermediate forms. So right now you might see a complicated flagellum that has a lot of highly specialized parts that all need each other, but that is merely a refinement that took place after all the pieces were already there. Like once an arch is complete, all the scaffolding that was holding it up is now vestigial, and if it is removed the arch will remain standing.

lebek · 2 years ago
I understand that, but it seems like even the MVP "shitty" flagellum would require many mutations that individually have no benefit. But I suppose with enough generations/parallelism you get enough stacking of useless mutations to reach the useful ones.
lebek commented on The baffling intelligence of a single cell: The story of E. coli chemotaxis   jsomers.net/e-coli-chemot... · Posted by u/jsomers
jjk166 · 2 years ago
Flagella only exist as components of something, they do not need to and shouldn't exist by themselves. If flagella spontaneously popped into existence and cells picked them up, that would be quite difficult to explain without design, but cells producing flagella because they are useful components makes perfect sense on its own.
lebek · 2 years ago
I think he's saying that random mutation wouldn't produce all the required components at once. One mutation gives you a bit of a flagellum, another gives you a bit of a nose, but how does the flagellum mutation survive long enough to coexist with the nose mutation that makes it useful?

I suspect the answer is that having flagella without a nose is still better than having no flagella. If so, it suggests evolution isn't good at accessing groups of mutations that aren't individually beneficial.
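The "stacking of useless mutations" idea from the earlier reply can be made concrete with a toy simulation. This is my own sketch, not anything from the thread: two loci are each neutral on their own but (hypothetically) beneficial together, mutation flips each locus independently, and reproduction is uniform resampling, i.e. pure genetic drift until the combination first appears. The function, parameters, and rates are all arbitrary illustrative choices.

```python
import random

def first_combo_generation(pop_size=200, mut_rate=0.05,
                           max_generations=500, seed=0):
    """Toy Wright-Fisher-style model: each individual is a pair of bits
    (two loci), each bit flips with probability mut_rate per generation,
    and the next generation is sampled uniformly (pure drift, since no
    one is fitter until the combo exists). Returns the first generation
    in which some individual carries both mutations, or None."""
    rng = random.Random(seed)
    pop = [[0, 0] for _ in range(pop_size)]
    for gen in range(1, max_generations + 1):
        # Mutation: each locus flips independently.
        for ind in pop:
            for i in range(2):
                if rng.random() < mut_rate:
                    ind[i] ^= 1
        # The moment both individually neutral mutations co-occur in one
        # individual, selection could start acting on the combination.
        if any(ind[0] and ind[1] for ind in pop):
            return gen
        # Drift: resample the next generation uniformly at random.
        pop = [list(rng.choice(pop)) for _ in range(pop_size)]
    return None
```

With a reasonably large population and nonzero mutation rate, the pairing tends to show up within a handful of generations even though neither locus is ever selected for on its own; with `mut_rate=0` it never appears. That's the generations/parallelism point: enough independent trials make the jointly useful combination reachable through individually useless steps.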

lebek commented on The False Promise of Imitating Proprietary LLMs   arxiv.org/abs/2305.15717... · Posted by u/lebek
blazespin · 3 years ago
To be fair, this paper has been made obsolete in its entirety with recent research. It's not really their fault, but folks need to start publishing faster as posters or something if they want to provide something relevant.

A better title, knowing what we know now, might be "To outperform GPT-4, do more than imitate"

lebek · 3 years ago
Link to said research?
lebek commented on The False Promise of Imitating Proprietary LLMs   arxiv.org/abs/2305.15717... · Posted by u/lebek
evrydayhustling · 3 years ago
This is a very weird type of paper. They take a specific approach, then make arguments about a broad class of approaches that are under constant development. The finding that distilled LLMs must be more specialized than the giant LLMs that train them is unsurprising; nobody at this point expects a 13B parameter model to succeed with the same accuracy at the broad range of tasks supported by what may be a 1T parameter model.
lebek · 3 years ago
> nobody at this point expects a 13B parameter model to succeed with the same accuracy at the broad range of tasks supported by what may be a 1T parameter model

I think a lot of people believe exactly that. To take one example from the "We Have No Moat" essay:

"It doesn’t take long before the cumulative effect of all of these fine-tunings overcomes starting off at a size disadvantage. Indeed, in terms of engineer-hours, the pace of improvement from these models vastly outstrips what we can do with our largest variants, and the best are already largely indistinguishable from ChatGPT." - https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...

u/lebek

Karma: 1298 · Cake day: March 16, 2013

About: https://twitter.com/_lebek