Readit News
stuckinhell · 3 years ago
No, it really isn't the same as crypto.

Crypto never solved so many problems in such an immediately visible way.

AI has some immediate and fully practical uses; it's completely different. Stable Diffusion with ControlNet and other art AIs are game changers for art creation. AI artwork is already winning awards.

Generative AIs are evolving rapidly. ElevenLabs' voice AI is absolutely amazing; we are planning to use it instead of hiring voice actors for internal presentations going forward.

AI-generated Seinfeld was watched by millions, and I thought it was pretty damn good.

The flexibility and immediate usefulness of neural-net AIs is just astounding, and to think we are still at the beginning of the paradigm shift.

vannevar · 3 years ago
Agreed, the two are virtual opposites: no one is making money from AI (yet), but everyone is using it, while in crypto, a bunch of people were making money from it and no one was actually using it.
rg111 · 3 years ago
At my last company, we saved our business customers US$2-3mn/year using AI products. And we made good money selling those products.

Those weren't products sold from techies to techies (the way blockchain made some people money), either. Those products were end-user facing; they solved problems for the users, too.

And I know a lot of people who are making money solving real problems for people outside of tech.

Equating AI and blockchain is a common HN commentariat fallacy.

lostlogin · 3 years ago
Siemens are selling AI for large sums from their medical division. I use it all day every day to denoise then upscale medical images and it’s fantastic.

https://www.siemens-healthineers.com/magnetic-resonance-imag...

Kalanos · 3 years ago
Did you mean no one is making money from "generative" AI yet? Let's not forget the decade of supervised AI successes.
ElijahLynn · 3 years ago
Agreed. AI has a TON of real-world use cases, right now. I just wrote a letter to an insurance company asking for a settlement for a car crash with the help of ChatGPT. My friend couldn't afford a lawyer and didn't know how much to ask for. ChatGPT suggested some amounts for minor whiplash and whipped out a letter (pun intended) to send off for negotiation. This is distributing equity to people who couldn't afford it otherwise.
textninja · 3 years ago
This criticism, like many others, attacks the mechanics of how LLMs think, apparently dismissing the models for not doing so using the same process, faculties, and background life experience as a human. It does not contain any compelling arguments to refute the notion that LLMs think. We may not have consumer-level AGI just yet, but suggesting an LLM is just a dumb anything (stochastic parrot or otherwise) is a rather extraordinary claim to make about something that basically patterns the whole internet.

We’ve been here before. Our sense of place in the universe was first upset by the heliocentric model (we’re not as special as we thought), then the theory of general relativity (not as correct as we thought), then quantum mechanics (not even living in a deterministic universe, really). With all these fantastic discoveries behind us, now seems like the right time to learn that we’re not as smart as we thought either.

thom · 3 years ago
The stochastic parrot claim seems to crash people’s brains because they assume this is all about words. The fact that these models very clearly learn not just patterns of words, but patterns of ideas, and ideas about ideas, seems a very profound result. Personally I do remain unimpressed by how the current models ‘think’ on top of this knowledge, but I’d have to work very hard to be as cynical as to imply this was all a scam, or somehow not progress (whatever you think of the destination).
AndrewDucker · 3 years ago
"these models very clearly learn not just patterns of words, but patterns of ideas, and ideas about ideas"

This is not at all clear to me.

masswerk · 3 years ago
Hum, as someone who grew up in the times of the linguistic turn, I'm somewhat missing the groundbreaking revolution here. The entire idea of "text" was based on this.
jjtheblunt · 3 years ago
> The fact...

Proof or at least citation somewhere?

dusted · 3 years ago
They do not think.. They're attacking the mechanics of what it does.
bambax · 3 years ago
We're certainly not as smart, or special, as we think, and I often wonder if other species like dolphins or whales -- or crows! -- see the universe as made especially for them.

But comparing the current AI situation, and its parroting LLM models, to the Copernican revolution is so over the top that it's absurd.

Sam Bankman, meet Sam Altman?

textninja · 3 years ago
Is it really so over the top, though? We have machines that have basically internalized the Internet and can easily pass a Turing test, and yet we still have people trivializing these marvels as mere “parrots”. I believe this is partly a defence mechanism (we humans are inherently hubristic), and if a stubborn attachment to humanity’s “specialness” is the cause, then that certainly mirrors the psychology of Copernicus’s detractors.

In a way, though, the “parrot” moniker is apt. The (long aspirational) Turing Test was originally called the “imitation game”, and what’s better at imitation than a parrot? Apparently, it’s ChatGPT - I never did see a parrot write code.

flangola7 · 3 years ago
I've read that there is evidence that some ocean mammals have stronger emotional intelligence and richer experiences than humans do, and social relationships that are more complicated.

The experience of losing a mate might very well be a deeper kind of pain for them than humans are capable of feeling.

akhosravian · 3 years ago
“It’s just auto complete” is probably one of the worst takes out there on AI.

It’s also clear the author doesn’t actually know how LLMs work and is parroting information, e.g. “…it tries to get you to finish your sentence with the statistically median thing that everyone would type next, on average.” is just not correct, and, frankly, suggests the author hasn’t even observed what autocomplete does.

I completely respect the view that there’s more to being human than pattern matching text, but I also am open to the possibility consciousness may not actually be that much more than stochastic parrots.

textninja · 3 years ago
It’s a bad take for several reasons.

- It’s reductive. The model may be architecturally relatively simple, at least insofar as humans are involved, but the data that was used to train it is anything but. Code is data, data is code. The model is more than its architecture; it’s all the weights and biases within.

- It’s qualitatively wrong. The temperature settings change it from a predictive system (auto complete) to a generative one (digital assistant).

- It’s quantitatively ignorant. How do you eat an elephant? One bite at a time. For that matter, how do you speak if not one word at a time? It happens that the model generates one token at a time as well, but the million-dollar question remains: what chain of reasoning did it have to follow to come up with that token? Just because it’s only outputting a single token doesn’t mean it’s not “thinking” about the structure of the sentence, paragraph, or the entire document, several tokens ahead, to say the least. Indeed, having been trained on such a large internet corpus, it has the capacity to model entire personalities and all the quirks and neuroses that go on behind the scenes.

So, yes, it is a bad take. That being said, it’s still a useful one because it encourages people to think about the models as tools, which is exactly correct given how they’re structured. So I still describe them as such.
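(Editor's note: the temperature point above can be sketched in a few lines. This is a toy sampler written for illustration, not OpenAI's actual decoding code; the function name and the bare-list "logits" are made up. At temperature 0 it reduces to greedy argmax, i.e. plain autocomplete; raising the temperature flattens the softmax distribution and makes the output generative.)

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Pick a token index from raw model scores (logits).

    temperature == 0 -> greedy argmax ("autocomplete");
    higher temperatures flatten the distribution, producing
    varied, "generative" output.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample from the resulting categorical distribution.
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

At very low temperatures the sampler almost always picks the top-scoring token; at high temperatures even unlikely tokens get a real chance, which is the qualitative shift from predictive to generative behavior.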

anileated · 3 years ago
> “thinking” about the structure of the sentence, paragraph, or the entire document - several tokens ahead, to say the least

You put thinking in quotes, so what exactly do you mean by thinking? Where does it do it? When?

AI as autocomplete is a perfect summarization of both what it does and how it does it. Anything more, including models “thinking”, is strategic PR aimed to make the public happy with their mouths gaping while they take all intellectual property ever published online and monetize it without paying the author.

pokeypokes · 3 years ago
"It's just a parrot" parroted repeatedly to suggest a lack of deeper understanding, while revealing the same.. So meta.

The general reaction of people to all this has been one of the most interesting things about it imo. Earlier versions were generally just fun and sometimes amazing but suddenly so many are going out of their ways to play it down. Of course there are valid criticisms and worthy discussions but some of it feels like more than that.

I think one side of it is that for the first time many people are genuinely comparing bots to humans, which by itself is kind of mind blowing.

Another side seems to be more about "controlling" something new and scary. Maybe that thing is tangible, like the tools themselves, or maybe it's just the idea that we're not that special.

ieee2 · 3 years ago
Agreed
dmurko · 3 years ago
With all the arguments how ChatGPT is "just autocomplete", I wonder if these people ever used it. I know it is technically autocomplete, but the end results are so much more than that.
padjo · 3 years ago
Indeed, I was quite sceptical of all of this until I actually tried Co-pilot. Sure you can characterise it as “just autocomplete” but that autocomplete has saved me a bunch of typing and thinking.
isaacremuant · 3 years ago
I'm not saying it's wrong but usually thinking is the most important part for programming and relevant to long term success.

Typing is pretty easy and not the bottleneck of software development. That's why readable variables are better than abbreviations.

Judicious use of abstractions helps with that as well.

I think where things like copilot might shine is being a JIT educator of coding practices but they should be part of your thinking process and not replace it.

The risk with over relying on crutches is substituting the knowledge and intentionality of development.

There's a balance, and one that needs to be sought.

I'm not sceptical about the power, but I am about the claims of people who want to cash in or simply say "it's awesome" and allow no discussion. That means you get projects trying to use GPT models as some sort of authority that doesn't need to justify why it's better than some alternative. If you argue against it you're "not seeing the opportunity".

It's not a new battle, fighting hype and separating the actual capabilities of a tech from the lies and misconceptions. It's hard to get people to see short vs long term.

nuancebydefault · 3 years ago
If you think about it, autocomplete based on heuristics already has quite some intelligence. At least a lot more intelligence than just the word output of a parrot. The fact that neural nets are influenced by the same architecture does not mean they could not be manyfold more intelligent. I'm pretty convinced that the next level, within 10 years from now, will be indistinguishable from human intelligence in a lot of areas. Think about diagnosis of illnesses, predictions of weather and climate, logistics, teaching, driving cars, traffic regulation, prediction of natural disasters, migration,... We people tend to put the bar for intelligence higher after each breakthrough.
plutonorm · 3 years ago
If you haven't spent more than an hour investigating ChatGPT or Bing Chat, then you really should. They are astonishing. I would say that in certain ways they are smarter than me. In a generation or two, with a motivational layer on top, they will be smarter than me. Do not scoff until you have honest to god sat down and interrogated these things.
somenameforme · 3 years ago
I don't think the argument is that "it's just autocomplete" by itself, but rather the implications of such. The current product is absolutely useful for all sorts of little things, but I think we're all looking to the future - whether consciously or not. The idea is not that ChatGPT, in its current state, will change the world - but the exciting possibility that ChatGPT shows we're on the verge of artificial general intelligence. OpenAI themselves are happy to play into this with regularly repeated claims of AGI coming within 10 years.

So the question is, is it coming? And I think this is where "it's just autocomplete" comes into play. Can you get from a really sophisticated autocomplete system to AGI? Look to the past of humanity. Our intelligence drove human knowledge from 'bash stone, poke with pointy part' to putting a man on the moon. Now imagine we were able to seed a ChatGPT-style program with all of the expressed knowledge of humanity from that former time. Where would it lead us? Your answer here is going to be driven largely by whether you think what we're seeing today is "just autocomplete."

textninja · 3 years ago
From where I’m sitting, ChatGPT in its current nascent state is absolutely changing the world. It’s not only showing what’s possible but also making a powerful and highly disruptive tech available to the masses to hack and extend.

Of course, we won’t stop here, but as one of the seeds from which AGI will spring, I truly believe it will be seen as a historical innovation in the same league as early search engines, crypto (yes, unironically), or even the internet itself.

Mike_12345 · 3 years ago
What they don't understand is that it's technically not an autocomplete. It is a hierarchical model of concepts within a 96 layer neural network.
plutonorm · 3 years ago
He can't have done so in any serious way. All these negative opinions just show you how much people make stuff up out of thin air rather than bothering to spend time with the actual real world. It's all emotion-driven, and it drives me wild with rage that people are so unprincipled.
time0ut · 3 years ago
I've made this type of argument and have used it quite a bit (though I wouldn't say "just"). Autocomplete is a very useful tool. ChatGPT even more so. It is actually fantastic. The point I was making when explaining it this way was that, in its current state, this technology isn't going to do anything amazing on its own, but will help more people do more amazing things. It still needs to be carefully directed. For people outside the technology world, this distinction is important. I don't think it will hold up forever though.
lwhi · 3 years ago
When you break anything down to its constituent parts it can seem pretty mundane; walking is just controlled falling.
janalsncm · 3 years ago
It’s kind of amazing how wrong the author is here. Any comparison to crypto bubbles fails when one digs anywhere below the surface level. Crypto was a solution in search of a problem. Machine learning is a collection of techniques specifically designed to solve problems. That’s why it was used long before your grandma knew what ChatGPT was, and ML will continue to be used even if OpenAI shuts down ChatGPT tomorrow.

I will say that crypto is a big reason why AI is blowing up, though. It primed people into believing in tech-backed get-rich-quick schemes. That’s why I avoid all ex-crypto “entrepreneurs” turned AI aficionados who couldn’t backpropagate their way through a paper bag.

fwlr · 3 years ago
LLMs may be stochastic parrots, but the critics of LLMs are starting to look like deterministic parrots. Do they know any other phrases besides “autocomplete on steroids” and “stochastic parrot”?

(The LLMs do: I asked ChatGPT for some sarcastic and dismissive phrases for language models and it gave me back “mindless mimics”, “algorithmic babblers”, “robotic regurgitators”, “synthetic chatterboxes”, and my personal favorite: “soulless scribble-bots”.)

danaris · 3 years ago
I think this is in large part because they are reacting against the breathless hype that LLMs are either already sentient themselves, and thus we need to start giving them rights and/or being terrified that they're coming for us, or they're the last step before that happens (with basically the same conclusions, just time-shifted slightly).
norwalkbear · 3 years ago
I think they are reacting from a place of fear.

Writers and the liberal arts are under the threat of automation for the first time ever.

AstralStorm · 3 years ago
Imaginationless idea-free implements.

So, when will ChatGPT write a credible self-critical article?

Oh wait, it really cannot. It can only transform text. The difference between writing and producing text does indeed exist.

fwlr · 3 years ago
It used to be someone would confidently say “computers can’t do this” and they could be right for 10 or 20 years before the state of the art caught up and proved them wrong.

Now it’s like, “it can’t write a credible and coherent article“. Well, GPT3 does fine on paragraphs, and whole articles are literally the very next incremental step from that, so probably GPT4 will be able to do that. And when does that come out? Like next week?

maxdoop · 3 years ago
The problem I have with claims like “LLMS aren’t thinking— they are just parrots”, is we don’t even know what human thinking really is! So many people want to assume that human complexity makes us special, yet there is not any proof of that right now.

This article claims “AI is not intelligent”— and I’ll counter with, “what is intelligence?” And further, say we somehow prove LLMs aren’t “thinking” as humans do, but they still give (eventually) a near-perfect illusion that they do — what does it matter that it’s not “really thinking?” I feel like a crazy AI cultist at times when discussing this, but my main (admittedly petty) point is that such strong confidence about the similarity or dissimilarity of human thinking to LLMs is unfounded.

It’s like we are comparing the insides of two black boxes and trying to make absolute claims on them.

zamnos · 3 years ago
Ah I mean if you're hell bent on not being impressed by ChatGPT you don't have to be. Everyone's entitled to their own opinions too. It's already useful for those in certain roles, the only question is what's the medium term situation with ChatGPT gonna be? It's free, with a paid option for now. Is it going to get shut down? is it going to go pay-only? or go away and only be available as BingGPT? If you're already using this for work (eg gptforwork.com) or play (https://gamesplayedbadly.com/2023/02/14/create-your-own-dd-a...), those are your real questions. Pundits and critics have a vested interest in predicting the future the way their readers want, but your time machine is as good as mine - it only goes 1 second per second and we'll get to the future at the same time.

If your role doesn't involve things ChatGPT would be useful for (eg you're a blue collar carpenter), it doesn't seem very useful, but neither do computers or the Internet, really. They still revolutionized the world though, so do you want to be a buggy whip manufacturer, or a computer (the job, mostly employing women, prior to the advent of the digital computer and auto calculating spreadsheets, who performed the math for spreadsheets at accounting firms)? Or do you want to at least be aware of incoming trends.

Crypto and web3 still has yet to have a clearly defined use case by anyone outside that industry. Meanwhile, anybody with a phone number can make an openAI account and try out ChatGPT. Some, like our carpenter, will walk away thinking it's neat but ultimately useless. Others simply won't be impressed, for whatever reason. Some will see immediate uses for it in their life and can't live without it again. Don't expect them to speak up about it either, they're too busy using it to write emails and make plans to be bothered to convince the haters.

ChatGTP · 3 years ago
> If your role doesn't involve things ChatGPT would be useful for (eg you're a blue collar carpenter), it doesn't seem very useful, but neither do computers or the Internet, really.

Are you freaking serious? Carpenters use computers all the time. Ever heard of CAD? Calculators? Talking to customers via email? Ordering materials, researching, I'm shocked actually this is how naive people are ?

> Crypto and web3 still has yet to have a clearly defined use case by anyone outside that industry.

What exactly is the use case for ChatGPT? I mean it can do a bunch of different things, but to what degree really depends on a great deal of factors, so I don't really get your point.

I actually think maybe this will be a problem for ChatGPT as a product in the future. It doesn't really do anything especially well and it's not clear when you should trust it to be correct. Maybe it will get 99.9% accurate soon, until then, will be interesting to see what actually happens when the novelty wears off.

I do remember walking home from my friends house after using a VR headset about 6 years ago and thinking, well that's it, I'm going into the matrix. It's been 6 years and I've never had the need to use one again. Maybe when designing our house I would've liked to have put one on for 10 minutes to walk through the plans.

ChatGPT has had a similar effect for me. I used it, it was fun, but I didn't really have that much daily use for it. Now it's just a tool, like many in my toolbag, that I pull out if I can think of a good use for it; it sometimes yields good results, then I move on.

Edit: Please if you down vote, I'd like to hear why, don't hate on people for having a difference of opinion.

doix · 3 years ago
My girlfriend uses it to write her Instagram captions. My nephews use it to do homework.

Anywhere you need text written, it can generate something. Like you said, it might not be correct, but you can read what it tells you, you don't need to copy paste it verbatim.

Rather than asking it a question, feed it some bullet points and watch it convert it into paragraphs. Read the paragraphs and remove anything extra it added. It probably saved you 10-20 minutes and lots of frustration depending on how much you hate writing.

If you can't see the use for yourself, that's fine. But I really don't believe that you can't see how it would be useful to other people.

I find it hilarious to compare with crypto, where the majority of projects are purely about speculation.

textninja · 3 years ago
> It doesn't really do anything especially well and it's not clear when you should trust it to be correct.

You just described most humans. Imagine someone gave you access to a free digital workforce that you can only interact with through chat; it’s text in, text out, they only do what you say, and although you’re conversing with middle-tier experts they don’t have internet access or even a paper or pen handy so you have to take any facts or hard numbers they reference with a grain of salt. What is the use case for that?

What I’m getting at here is that ChatGPT doesn’t have a single clear use case. OpenAI is positioning themselves to be vendors of foundational AGI and it will be up to the market to create specialized fine-tuned models with higher levels of agency and built-in accountability.

zamnos · 3 years ago
I didn't downvote (because I can't, since I'm the poster you're replying to), but I'm willing to point out what stuck out to me since you asked.

You wrote:

> I'm shocked actually this is how naive people are ?

There was no need for name calling, and this is mentioned in the site guidelines (https://news.ycombinator.com/newsguidelines.html):

> When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

Moreover, let's assume that since I've made it to this site that I'm not a complete idiot (well, I mean, I can be, sometimes), and that it's entirely possible for me to make my claim, while also being well aware of the fact that carpenters use email and the Internet these days. I did some renovations to my home a few months back and, surprise surprise, we used email, along with texting for communicating. (We even used gasp pictures in these emails.) So maybe I'm trying to make a deeper point about the intersection of carpentry and computers that you maybe missed?

zamnos · 3 years ago
> now it's just a tool, like many in my toolbag

Right it's just a tool, one which doesn't get much daily use by you. Which is totally fine. But say your job was to hammer in nails all day long, would you ask about the use case for screwdrivers? At your hammering job there are no screws, so screwdrivers must be totally useless, right?

isaacremuant · 3 years ago
> Ah I mean if you're hell bent on not being impressed by ChatGPT you don't have to be.

This is such a disingenuous strawman. You've just responded to a long argument with a lazy faux dismissal based on personality and animosity.

The criticisms are well laid out and sourced. It's about the hype and lies around it, not the tech itself. I appreciate the tech and dislike the lies and misunderstandings.

If you just dismiss me as "a hater" you're part of the lie.

> Pundits and critics have a vested interest in predicting the future the way their readers want, but your time machine is as good as mine - it only goes 1 second per second and we'll get to the future at the same time.

Except that's a lie. And when the insights of people who stop and think, instead of just buying into claims unquestioningly, are proven right years down the line, we hear the old "oh, hindsight is 20/20". That's not accurate. We can reflect on things when they happen. No need to wait.

> If your role doesn't involve things ChatGPT would be useful for (eg you're a blue collar carpenter), it doesn't seem very useful, but neither do computers or the Internet, really.

False dichotomy. If you think ChatGPT is solving a problem by generating text: are you qualified to judge that text for accuracy? Could you have produced that text yourself? Why is it that ChatGPT beats you: typing speed? Thinking speed? What are the risks of overlooking things by virtue of allegedly being handed an answer?

Wizards can be useful but also dangerous. The point is to reflect about it and not dismiss anyone who doesn't just hype it up.

> Or do you want to at least be aware of incoming trends.

Your whole argument seems aimed at some luddite strawman when in reality it's people who love tech, love new things, and appreciate ChatGPT and other forms of ML, but see the BS flying around and want to stay intentional and have conversations grounded in reality.

> Don't expect them to speak up about it either, they're too busy using it to write emails and make plans to be bothered to convince the haters.

Another empty dismissal. People are using it so any related criticism is "haters". Implicitly your whole comment just wants to, instead of engage in the conversation, derail it and dismiss it.

Reminder: when there's hype, everyone and their mother wants to "cash in on the hype". It doesn't matter if they need to sell you a bridge. Going into specific and reasoned arguments is more productive, especially in a place like HN, which should foster discussion instead of squashing it like the parent poster is doing.

zamnos · 3 years ago
That's a pretty good takedown of my comment, I appreciate the effort that went into that, and, well, you make some fair points, but I went back and reread the post, and stand by my response. To me, Cory Doctorow isn't making any substantive arguments in the linked article that I feel are worth digging into.

He denigrates ChatGPT as glorified auto complete but in the end, auto complete is... actually pretty useful?

I do appreciate him bringing in Ted Chiang, I love that quote - "it’s easier to imagine the end of the world than to imagine the end of capitalism".

I haven't been exposed to Pluralistic before, so maybe that's just his style there, but I think the difference between the hype around crypto and the hype around ChatGPT is that people in the world of finance are able to intuit where crypto fits, while people outside that world can't really appreciate it. With ChatGPT, anyone who uses English can create their own account, play with it, and see what the computer is doing. Sometimes hype is actually justified.