Readit News

Aqueous commented on New research on anesthesia and microtubules gives new clues about consciousness   sciencedaily.com/releases... · Posted by u/isaacfrond
IWeldMelons · a year ago
Very verbose, could you please tldr?
Aqueous · a year ago
sorry you can’t keep up
Aqueous commented on New research on anesthesia and microtubules gives new clues about consciousness   sciencedaily.com/releases... · Posted by u/isaacfrond
IWeldMelons · a year ago
You are delusional. Every LLM is, by design, incapable of holding an arbitrarily long conversation, since it has a finite context window, and it hallucinates left and right. But that is all irrelevant, as Penrose's point is not about that.

In fact, what Penrose is saying is that LLMs are Searle's Chinese Rooms: they lack qualia, and he offers quantum processes as the basis for qualia, however vague that sounds.

So the point is not intelligence, nor consciousness; cats arguably have less intelligence than an LLM, but they clearly have emotions and are conscious.

Aqueous · a year ago
Anyone who thinks LLMs have not come a long way in approximating human linguistic capabilities (and the associated thinking) is, in fact, engaging in delusional wishful thinking about human exceptionalism.

With respect to consciousness, you are doing nothing more than asserting a special domain inside the brain that, unlike the rest of its mechanisms, has special "magic" that creates qualia where classical mechanisms cannot. You are saying there is possibly a different explanation for consciousness than for intelligence, when it would be much simpler to say the same mechanisms explain both.

Furthermore, you have no explanation for why this quantum "magic", even if it were there, would solve the hard problem of consciousness; you are just asserting that it does. Why should quanta lend themselves any more to the possibility of subjective experience/qualia than classical systems?

Finally, a brain operates at 98.6 °F, and we can't even sustain verifiable quantum computing effects except near absolute zero, the only regime where theory and experiment both agree that quantum effects start to dominate. The burden of proof is on you and Penrose, since what you are both saying is wildly at odds with physics, experimental and theoretical, and with recent advances in computing. Penrose is a very smart guy, but I fear that on these questions he has gone pretty rogue scientifically.

Aqueous commented on New research on anesthesia and microtubules gives new clues about consciousness   sciencedaily.com/releases... · Posted by u/isaacfrond
mrbgty · a year ago
> it seems there is conclusive evidence (LLMs) that quantum explanations are not necessary to explain at the very least linguistic intelligence as advanced linguistic intelligence is possible in a purely classical computing domain

Any reference explaining this? It isn't clear to me that LLMs have proven advanced linguistic intelligence.

Aqueous · a year ago
In just 2-3 years we've gone from primitive LLMs to LLMs reaching graduate/PhD-level knowledge and intelligence in multiple domains. LLMs can complete almost any code I write with high accuracy, given sufficient context. I can have a naturalistic dialog with an LLM that goes on for hours, in multiple languages. Frankly (and humblingly, and frighteningly), they have already surpassed my own knowledge and intelligence in many, probably most, domains. Obviously they aren't perfect and make a lot of errors, but so do most humans.
Aqueous commented on New research on anesthesia and microtubules gives new clues about consciousness   sciencedaily.com/releases... · Posted by u/isaacfrond
Aqueous · a year ago
What's odd about the current moment is that in the very same era in which there seems to be conclusive evidence (LLMs) that quantum explanations are not necessary to explain at least linguistic intelligence, since advanced linguistic intelligence is possible in a purely classical computing domain, there is at the same time an insistence elsewhere that consciousness must be a quantum phenomenon. Frankly, I am increasingly skeptical that this is the case. LLMs show that intelligence is at least mostly algorithmic, and the brain is far too warm and wet for quantum effects to dominate. Why should intelligence be purely classical but consciousness (another brain phenomenon) be quantum? It lacks parsimony.
Aqueous commented on A useful productivity measure?   jamesshore.com/v2/blog/20... · Posted by u/rglullis
simonw · 2 years ago
That's why it's celebrated as a valuable skill - it's hard!

Did you read this? https://lethain.com/migrations/

I have decades of experience and I'm just about reaching the point now where a migration like this doesn't intimidate me.

Aqueous · 2 years ago
The fact that it takes decades to master such a mundane task may mean the entire approach is wrong. The article hand-waves away much of the complexity of "automating as much as possible."

In my opinion, the solution lies in append-only software as dependencies. Append-only means you never break an existing contract in a new version. If you need to make a traditional "breaking change," you instead add a new API, but keep shipping all the old APIs with the software. In other words: enable teams to upgrade to the latest version of anything without risking breakage, and then update their API contracts as necessary. This creates the least friction. Of course, it's a long road before every dependency, and every transitive dependency, adopts such a model.
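The append-only idea above can be sketched in a few lines. This is a hypothetical illustration (the function names are made up, not from any real library): old contracts are kept forever, and new behavior ships under a new name instead of replacing the old one.

```python
# Minimal sketch of an "append-only" library surface (hypothetical names).
# Existing contracts are never changed or removed; new behavior gets a new API.

def parse_config(text):
    """v1 contract: returns a dict of string values. Kept forever."""
    return {k.strip(): v.strip() for k, v in
            (line.split("=", 1) for line in text.splitlines() if "=" in line)}

def parse_config_v2(text):
    """v2 contract: also coerces integer-looking values to int.
    Added alongside v1, not swapped in as a breaking change."""
    raw = parse_config(text)
    return {k: int(v) if v.isdigit() else v for k, v in raw.items()}
```

Callers pinned to the v1 contract keep working after upgrading the dependency; they migrate to `parse_config_v2` on their own schedule rather than being forced by the upgrade itself.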

Aqueous commented on A useful productivity measure?   jamesshore.com/v2/blog/20... · Posted by u/rglullis
simonw · 2 years ago
I'm with Charity Majors on this one: https://twitter.com/mipsytipsy/status/1778534529298489428

"Migrations are not something you can do rarely, or put off, or avoid; not if you are a growing company. Migrations are an ordinary fact of life.

Doing them swiftly, efficiently, and -- most of all -- completely is one of the most critical skills you can develop as a team."

See also this, one of my favourite essays on software engineering: https://lethain.com/migrations/

Aqueous · 2 years ago
"Doing them swiftly, efficiently, and -- most of all -- completely is one of the most critical skills you can develop as a team."

That all sounds great. However, I'd like to know which teams are actually able to do this, because it seems like a complete fantasy. Nobody I've seen is doing migrations swiftly and efficiently. They are giant time-sucks at every company I've ever worked for, and at every company anyone I know has ever worked for.

Aqueous commented on Show HN: You don't need to adopt new tools for LLM observability   github.com/traceloop/open... · Posted by u/tomerf2
nirga · 2 years ago
I wouldn't call it misleading marketing - it is what it is, similar to what you can get today from tools like Langsmith, etc - Observability for the LLM part of your system, but using your existing tools. You can further extend that to monitor specific LLM outputs - but that's just another layer on top of that.
Aqueous · 2 years ago
I'm not talking about just monitoring outputs, though. I'm talking about monitoring the internals of the model as it reaches its output. Interpretability/observability inside the LLM itself is the hard problem, one to which considerable resources are being dedicated; hooking the public-facing APIs up to observability tools like any other service API is not that. It's just conventional telemetry. Calling this "LLM observability" implies there is something special about it, unique to LLMs in particular, that enhances introspection into the AI model itself, which is not true. The title is highly misleading, classic startup-bro fake-it-til-you-make-it hustle, and deserves to be called out.
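To make the "just conventional telemetry" point concrete, here is a minimal sketch (hypothetical names, no real vendor SDK assumed) of what wrapping a provider call looks like: it records latency, status, and request attributes, and never touches the model's internals.

```python
import time

def with_telemetry(call, **attrs):
    """Wrap any provider call and record ordinary service metrics:
    latency, success/failure, plus caller-supplied attributes.
    Nothing here inspects model weights or activations."""
    span = {"attrs": attrs}
    start = time.perf_counter()
    try:
        span["output"] = call()
        span["status"] = "ok"
    except Exception as e:
        span["status"] = "error"
        span["error"] = repr(e)
        raise
    finally:
        span["latency_s"] = time.perf_counter() - start
    return span

# Stand-in for a third-party LLM API call (hypothetical):
span = with_telemetry(lambda: "generated text",
                      model="some-model", prompt_tokens=12)
```

The same wrapper would work for a payments API or a weather API; nothing about it is specific to LLMs, which is the point of the comment above.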
Aqueous commented on Show HN: You don't need to adopt new tools for LLM observability   github.com/traceloop/open... · Posted by u/tomerf2
tracerbulletx · 2 years ago
Pretty sure this just structures logs for requests to common 3rd party LLM providers. Which I guess is useful, but it's not some kind of problem unique to LLMs.
Aqueous · 2 years ago
Correct: the summary is misleading marketing. This is just normal system/service observability. What people mean by "observability" in the LLM context is something more specific.

u/Aqueous · Karma: 3659 · Cake day: February 23, 2012