Readit News

deltaninenine commented on Crypto collapse? Get in loser, we’re pivoting to AI   davidgerard.co.uk/blockch... · Posted by u/Al0neStar
gregjor · 3 years ago
I gave my anecdotal evidence, and the evidence of numerous posts on HN and elsewhere you can easily search for. Or just look at the votes on our comments.

Getting one person to post here with one opinion or another doesn't constitute useful data. It just adds one more anecdote. It looks like no one besides the two of us pays attention to this thread.

In any case I engaged to express my opinion, not to prove you or myself right or wrong in our opinions. Time will tell.

deltaninenine · 3 years ago
>Or just look at the votes on our comments.

Votes are a popularity contest. I have a lot of downvotes. So you win the popularity contest. It's fine. I'm OK with that.

I'm more going for the correctness contest here. Who's actually right? That's all I care about here.

>Getting one person to post here with one opinion or another doesn't constitute useful data

This isn't true. Finding even one person would lend data to your case. Why? Because my claim is that nearly all people on HN aren't fooled by ChatGPT. So if you say it's so common, then just find one.

My claim is that it's so uncommon you can't even find one.

>I gave my anecdotal evidence, and the evidence of numerous posts on HN and elsewhere you can easily search for

I searched for this. I could not find one. You claim it's easily found, so you can win this debate by simply finding one comment that proves your point and linking it here. If it's as common as you say, then at least one person can be found. This makes sense.

deltaninenine commented on Crypto collapse? Get in loser, we’re pivoting to AI   davidgerard.co.uk/blockch... · Posted by u/Al0neStar
gregjor · 3 years ago
I think you misread my first sentence.
deltaninenine · 3 years ago
No. You just mis-expressed your point with a logical mistake.

You wanted to explain why I can't find evidence for the ELIZA effect on HN, but you didn't realize that it contradicts your overall point about the effect.

I exploited the flaw to point out the contradiction in your thinking. Your ideas are not logically coherent; you're following a sort of bias here, trying to construct ideas that support what you already believe.

deltaninenine commented on Crypto collapse? Get in loser, we’re pivoting to AI   davidgerard.co.uk/blockch... · Posted by u/Al0neStar
davidgerard · 3 years ago
co-author here: I warn you, I've got a paperclip and I'm not afraid to use it.
deltaninenine · 3 years ago
I don't actually understand your joke here.

deltaninenine commented on Crypto collapse? Get in loser, we’re pivoting to AI   davidgerard.co.uk/blockch... · Posted by u/Al0neStar
JdeBP · 3 years ago
It is ironic that you just anthropomorphized it yourself by using "it forgets".
deltaninenine · 3 years ago
When you delete things from your hard drive, that fits the definition of making your computer forget something.

Look up the definition of forget. It is not a human-exclusive action. Therefore it is not "anthropomorphizing": https://www.merriam-webster.com/dictionary/forget

deltaninenine commented on Crypto collapse? Get in loser, we’re pivoting to AI   davidgerard.co.uk/blockch... · Posted by u/Al0neStar
gregjor · 3 years ago
> Nobody fully understands these networks. Not even the people who build them. The "experts" admit this.

That's repeated a lot but it's not entirely accurate. No one can explain how an LLM gives the answers it does (not even the LLM). LLMs have a vast search space of tokens and use probabilities to make their responses non-deterministic. But the people who build and train LLMs do know how they work -- obviously since quite a few people know how to make one. By analogy, if I give a pile of Legos to a six-year-old I don't know what they will make, though I do know the constraints and limits (imposed by how Legos work and what was in the pile). It's not correct to say "I don't understand how Legos work" when I really mean "I can't predict what a six-year-old will make from a pile of Legos."
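To make the "use probabilities" part concrete, here is a toy sketch (made-up numbers, nothing like a real LLM's internals): each step samples the next token from a probability distribution, which is why the same prompt can come back with different continuations.

    import random

    # Toy next-token sampling. A real LLM computes these probabilities with a
    # neural network over a huge vocabulary; these numbers are invented.
    probs = {"blue": 0.55, "grey": 0.25, "overcast": 0.15, "purple": 0.05}

    def next_token(p):
        return random.choices(list(p), weights=list(p.values()), k=1)[0]

    print("The sky is", next_token(probs))
    print("The sky is", next_token(probs))  # may differ from the first call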

> You misunderstand how trivial it is to use this technology and figure out the limitations ... Hallucinations aren't technical level concepts.

I get that. But when I have discussed ChatGPT hallucinations, with examples from my own chats, I'm surprised when people don't even recognize the hallucinations until I point them out. Anecdotally, people seem to defend the "AI" by accusing me of misleading it or giving it unclear information. They have anthropomorphized it, and then want to impose human notions of fairness and give the "AI" the benefit of the doubt, even when they know that a person would not have made the mistake, or would have answered "I don't know" rather than confidently making stuff up like ChatGPT will. I think people don't believe computers can lie or make a mistake -- they imagine an intelligence like Mr. Spock at the other end, not a stochastic parrot.

I got ChatGPT to tell me -- in its confident and authoritative tone -- that no even number is also evenly divisible by 3. When I gave it contradictory examples -- 6, 12, 24 -- it then apologized but maintained that no even number was divisible by both 3 and 5 (um, 30, 60, 90...). I was just trying to get it to solve FizzBuzz with some variations. I could feed my younger children that same misinformation and they wouldn't question it. I could tell my parents that their Alexa listens to everything they say and records it forever on a big disk on a satellite in orbit, and they would believe me. Elon Musk can tell the world Teslas can drive from SF to New York without human intervention and get a whole TED Talk audience and media "experts" to believe him. P.T. Barnum had some quips about that tendency.
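A quick sanity check (my own sketch, not part of the actual transcript) shows how trivially both of its claims fall apart:

    # "No even number is evenly divisible by 3" -- false:
    print([n for n in range(1, 31) if n % 2 == 0 and n % 3 == 0])    # [6, 12, 18, 24, 30]

    # "No even number is divisible by both 3 and 5" -- also false:
    print([n for n in range(1, 100) if n % 2 == 0 and n % 15 == 0])  # [30, 60, 90]

    # The FizzBuzz variant I was trying to get it to solve:
    for n in range(1, 16):
        print("Fizz" * (n % 3 == 0) + "Buzz" * (n % 5 == 0) or n)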

> Find one guy on this entire thread who is honestly fooled by chatGPT and I'll concede to your argument. If you can't find even one guy... Then if you're rational you'll see that my argument is true: nearly no one is fooled by chatGPT.

That's not proof of anything. Maybe no one will chime in on this thread but you can easily find posts and comments on HN with people claiming LLMs are sentient. A Google researcher said that publicly (he got fired), and much discussion took place here. If I was more interested in the topic I could poll people, but I'm pretty sure I would find that quite a lot of people think current "AI" (LLMs like ChatGPT) are sentient, or they will be in the next couple of years. And then I could ask them what "sentient" means and never stop face-palming.

deltaninenine · 3 years ago
>That's repeated a lot but it's not entirely accurate. No one can explain how an LLM gives the answers it does (not even the LLM).

Uh, I literally said no one fully understands these networks. And you go on to say that my statement isn't accurate, then confirm my statement by saying:

>No one can explain how an LLM gives the answers it does (not even the LLM).

I mean, this is exactly what I said. We can't explain it... because we don't fully understand it.

>But the people who build and train LLMs do know how they work -

No, they actually don't. The surprisingly accurate responses of ChatGPT were not predicted. Many experts literally do not fully understand what's going on. This is categorically true, and I can quote them if you need it, but it's easily googleable.

>I could feed my younger children that same misinformation and they wouldn't question it. I could tell my parents that their Alexa listens to everything they say and records it forever on a big disk on a satellite in orbit, and they would believe me.

>A Google researcher said that publicly (he got fired), and much discussion took place here. If I was more interested in the topic I could poll people,

I already polled people and did a Google search of HN. My other post was a poll. And a Google search yielded nothing. This is actually quite strong proof. HN has multitudes of users; not being able to find one puts the ratio at nearly zero.

The researcher who got fired by Google is an interesting case. The reason is that he's not referring to GPT-4, GPT-3.5, or Bard. In subsequent interviews he has said that he's referring to LaMDA, an internal Google LLM that hasn't been released. He said that one is "awake" and specified directly that it's different from the LLMs the public currently plays with.

Nobody can confirm or deny that statement, because we can't directly interact with the LaMDA AI; Google has it locked down pretty hard.

deltaninenine commented on Crypto collapse? Get in loser, we’re pivoting to AI   davidgerard.co.uk/blockch... · Posted by u/Al0neStar
gregjor · 3 years ago
> There is no layer of indirection. There is no room for someone to lie to themselves. Additionally, the bias for chatGPT is actually in the other direction.

Bing search and customer service chatbots, for example, give a layer of indirection. Spam emails, LLM-generated legal briefs, and term papers have indirection when the recipients (judge, professor) don't interact directly with the LLM. Since interacting directly with ChatGPT takes some skill and doesn't seem immediately useful, most people will interact with it through things like search engines, friendly chat widgets, and word processor plugins, just like programmers already interact with an LLM indirectly through GitHub Copilot.

> Nobody wants to believe that an AI can trivialize their skill set. People would rather believe chatGPT is garbage because that is what they prefer to believe.

They may not want to believe it, but you must have seen the numerous articles -- many of them posted on HN -- about exactly that happening. Not a day goes by that HN doesn't get multiple posts expressing fear and worry about "AI" taking over people's jobs soon, or making those jobs redundant. And people may simultaneously believe "ChatGPT is garbage" and worry that they will lose their job, or get killed by a robot drone.

I argue that too many people already have a bias towards believing ChatGPT/LLMs equal AGI, because the media has primed them to believe that. The term "artificial intelligence" itself gives it away. If no one used "AI" to refer to ChatGPT et al. and instead called them large language models, that might help people realistically evaluate LLMs as tools rather than as a true artificial intelligence. The term AI has been applied to so many ideas, fantasies, experiments, and now products that it means everything and nothing, and every individual can and will interpret it according to their own biases and knowledge. Of course "AI" sells a lot better than "LLMs", and we're seeing the self-serving hype in full swing already, as numerous companies and VCs try to capitalize and recoup their losses from the last hype cycles that people got wise to (crypto) or never got interested in to begin with (Web3 and the metaverse).

I'm old enough to remember when scientists successfully cloned a sheep, and immediately the media, popular and specialized, cranked out story after story about how cloning would reshape humanity in just a few years. We were told that human clones were just around the corner, with all the attendant hand-wringing. Of course that never happened, but I wouldn't find it at all surprising to poll random people and find that they believe human cloning happens all the time, because the hype didn't get followed by a correction or apology.

deltaninenine · 3 years ago
>Bing search and customer service chatbots, for example, give a layer of indirection

There is no layer of indirection; you are directly chatting with the AI. You are not having a third party describe their experience with the AI to you.

>I argue that too many people already have a bias towards believing ChatGPT/LLMs equals AGI, because the media has primed them to believe that.

No point in arguing if you don't have some form of evidence. My evidence is that there isn't a single person on this thread who is fooled by AI or unaware of the limitations of current-gen AIs.

You just need to find one person in this entire thread who fits your description and link it here, and you'll be right, since you'll have falsified my statement. That is the data-driven conclusion.

Let's use data to get to the bottom of this. Seriously.

deltaninenine commented on Crypto collapse? Get in loser, we’re pivoting to AI   davidgerard.co.uk/blockch... · Posted by u/Al0neStar
gregjor · 3 years ago
You're arguing that something observed so often and consistently that it has had a name for decades -- the ELIZA Effect [1] -- doesn't actually happen often enough to care about.

I have brought up ChatGPT hallucinations with multiple friends and family members, some in tech and some not (like my parents and my kids), and with one exception none of them knew what I was talking about. Like most people they think computers can't make mistakes, so it follows logically (for them) that an (apparently) intelligent machine can't make mistakes, i.e. hallucinate. I have a couple of my own ChatGPT transcripts that include hallucinations, and when I show those to people they say that I deliberately misled the AI, because how could it make a mistake?

In my own experience, which includes people who work in the software field and people who don't, including a couple of friends who work with neural networks and LLMs, almost no one understands how LLMs work, or what limitations they might have, or what "hallucinate" means in the context of ChatGPT. Almost everyone I know is much more likely to believe AIs have already or will soon put them out of a job and start turning us into slaves or launching nuclear strikes, because that's the nonsense they get fed by the media.

[1] https://en.wikipedia.org/wiki/ELIZA_effect

deltaninenine · 3 years ago
>doesn't actually happen often enough to care about.

That's my entire point. It doesn't happen often enough to care about.

Sounds like you have some anecdotal experience of it happening to your entire family and a lot of your friends.

I experience the opposite. It has happened to exactly none of my friends and family.

We do live in contradictory universes, where you experience one thing and I experience another. Given the contradiction, let's refer to the shared experience: nobody on this entire HN thread has experienced the ELIZA effect. The shared experience proves my point of view.

>Almost everyone I know is much more likely to believe AIs have already or will soon put them out of a job

The first part of your sentence has a higher likelihood of being true. The reason is that instances of it are already happening. It's limited given the limitations of LLMs, but we are at a point where, if the hallucinations are fixed, it could very much replace many jobs.

Nuclear strikes and slavery are a bit far-fetched.
