Personally, I'm getting very fatigued by ChatGPT. Now that I basically understand how it works (string words together with some randomness consistent with an existing body of work), everything flowing from that is just tiring. Yes, it's going to get facts wrong; yes, it's going to sound sentient; yes, it's going to be subject to manipulation by users. (And btw, of course it's going to sound sentient, it's trained on a dataset that includes a shit load of sci-fi about robots becoming sentient.) It's very uninteresting to me at this point, and I wonder whether, as people start to understand what ChatGPT does, there's going to be a backlash where it gets entirely written off. In much the same way that by 2020 a lot of people would hear the word "crypto" or "web3" and just shut down, I suspect the same will happen with ChatGPT.
When you really start to prod this and think "Ok, this tool by design is going to produce highly plausible but completely untrue information", do we really think the sensible next step is "Better integrate this very tightly into our primary tool for finding out accurate information"? Because that appears to be the current plan.
I've been feeling exactly like this from the start. It's been very clear, and very public, from the very minute ChatGPT came out that it's not concerned with the truth and simply makes things up as it goes along.
It can be fun to use for creative writing -- why not? Creative writing is about making shit up.
But for gathering information it's the opposite of what we want. Integrating it with a search engine is not just ill-advised, it's incredibly stupid.
I tried to get it to write _1984_ erotic fanfiction, which it did - but everything it spewed out was a cliche. I imagine it had read all of FFN and AO3.
> gathering information
It will probably be like the Google infoboxes and “People Also Search” entries we’ve had for a while, only more superficially coherent.
It seems like the “fake it until you make it” craze has fully taken over the AI/ML scene.
I agree with your sentiment, and for some time now I fear that all this hype will lead to another AI winter.
Large language models are a true breakthrough and they are, in my view, much more useful than ChatGPT. They allow for some fundamental NLP tasks that can then be used as building blocks for truly intelligent systems.
Symbolic AI failed because it could not deal with complexity and fuzziness. Statistical learning can deal with complexity and fuzziness, but it is very limited when it comes to rigorous reasoning.
I believe that true AI will combine both ideas, and I have a strong hunch that neurosymbolic approaches will be crucial, but nobody really knows yet how to create a true neurosymbolic approach that seamlessly combines the strengths of symbolic and statistical approaches. My fear is that we throw the baby out with the bath water when people are finally fed up with ChatGPT hype, and then we will have to wait a couple decades before the next serious attempt at AI is made.
I absolutely agree that true “intelligence” would come from such a synthesis - it also loosely matches what we know from cognitive psychology. We won’t go anywhere without rigorously addressing knowledge representation and reasoning - unlike humans, ChatGPT possesses only the faculty of language (and the power of language has been oversold for a while).
The cynic in me would rather have the backlash (and a rude awakening for the industry), than the looming hyper-normalization of convincing-sounding bullshit/plagiarism/etc (and new, more convoluted forms of “saying the magic words to make the algorithm behave”) in the near future.
(Relatedly: Say what you will about the rationalist community, they appear to have really thought about a lot of this.)
One thing that makes me depressed about ChatGPT is that the software engineering behind it is not some 1993 Doom Carmack-style wizardry. It's attention mechanisms, a transformer architecture, and lots and lots and lots of data. The data does all the programming. I look at a lot of AI projects and there's not a lot of code. Sure, it takes a lot of work to understand it, but in the end it's not a lot of code.
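To make that concrete (a hand-rolled toy, obviously not anything from OpenAI), the core attention operation really does fit in a handful of lines:

// Toy, single-head scaled dot-product attention:
//   attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
function softmax(row) {
  const m = Math.max(...row);                       // subtract max for numerical stability
  const exps = row.map(x => Math.exp(x - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}
function transpose(m) {
  return m[0].map((_, j) => m.map(row => row[j]));
}
function matmul(a, b) {
  return a.map(row => b[0].map((_, j) => row.reduce((s, x, k) => s + x * b[k][j], 0)));
}
function attention(Q, K, V) {
  const dk = K[0].length;
  const scores = matmul(Q, transpose(K)).map(row => row.map(x => x / Math.sqrt(dk)));
  return matmul(scores.map(softmax), V);            // weighted average of the value vectors
}

Everything else is stacking this, plus the data.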
This makes me think that in the future programming will be just about collecting data and then doing things with the data. We will all be data generators and collectors, with a few math PhDs in the middle working on the activation functions and so forth. The rest of us will be stringing models together Stable Diffusion-style or writing the billing code.
ChatGPT is boring, although novel (unexpected) applications can be fun. Not the results (yes, it can write texts), but ideas for how it can be applied, what sorts of texts it can write that I hadn't thought of before. But what is interesting to me now is research into how the heck we, the meat blobs, actually manage to think.
> string words together with some randomness consistent with an existing body of work
To the best of my awareness (and observation of my inner monologue, as well as what I say during a conversation), this is also a very simplified but very true description of my own thought processes. Obviously, my observations are very limited (and surely nothing new under the sun, just me not being aware of prior work), but in retrospect (and sometimes that happens even while my working memory is still pretty much there, as I've barely said another word) I can spot the points at which my thought process was influenced by some unknown (non-observable) factor, and I've picked a different word or switched a line of thought, making it kind of obvious to me that I was stitching words in a line. This is best noticed when thinking in a comfortable but foreign language. And I can also spew bullshit to myself, when I have limited knowledge of some subject or limited time to think something through but am pressed to blurt out something.
Obviously, there's nothing new here. But I'm hoping all this AI hype will also inspire more research into Natural Intelligence, even though those are very different fields (although, who knows, maybe the effort put into analyzing AI outputs and behaviors will reveal something about us, too).
In my view, underestimating the platform's capabilities may lead one to assume its imminent shutdown. For instance, I find the platform particularly useful for generating succinct bullet-point summaries of articles, enabling me to consume content in 1 minute instead of, say, 15 minutes.
> When you really start to prod this and think "Ok, this tool by design is going to produce highly plausible but completely untrue information", do we really think the sensible next step is "Better integrate this very tightly into our primary tool for finding out accurate information"? Because that appears to be the current plan.
It's not completely wrong all the time. Also, the conditioning puts it into a state where it has to respond with answers even if it doesn't know something. So coupling it with tools for finding out accurate information is a logical next step. If it works out and you get highly plausible and true information most of the time, that's magical. Just because it's integrated doesn't mean you have to use it, but many will. It'll keep getting better, because that's how AI models have been doing at every benchmark, so even if you think it's not there yet, the trends imply it might get there soon.
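Roughly the shape people seem to have in mind for that coupling (the helper functions here are made up, not any real API, just stand-ins to show the idea):

// Hypothetical stand-ins: a real integration would call an actual search API
// and an actual language model; these just return canned values.
async function searchWeb(query) {
  return [{ snippet: `(top search results for "${query}" would go here)` }];
}
async function askModel(prompt) {
  return `(model output conditioned on ${prompt.length} characters of prompt)`;
}

// The idea: retrieve documents first, then ask the model to answer *from* them
// (and cite them), instead of answering from its own recall.
async function answerWithSources(question) {
  const hits = await searchWeb(question);
  const context = hits.slice(0, 3).map(h => h.snippet).join("\n");
  return askModel(
    `Answer using only the sources below, and cite them.\n\nSources:\n${context}\n\nQuestion: ${question}`
  );
}

Whether grounding it in retrieved text actually stops the hallucinations is exactly the open question.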
I had an interesting ChatGPT experience yesterday where I asked it
> How can I increase the version of a TextDocument in VSCode from an extension?
and got a perfectly plausible answer, very eloquently explaining the following code
// Get the TextDocument object
const document = vscode.window.activeTextEditor.document;
// Update the TextDocument version
document.update([], {incrementVersion: true});
The problem is, there is no document.update method... But the really amazing thing is that when I told ChatGPT about the nonexistence of the update method, it apologised, and then gave me a more involved, but correct answer.
I'd be interested to know what happens if you ask your original question again. Do you get the original answer referencing the non-existent method? Or did you force it to update its state (knowledge of the world) such that it now always gives the second answer?
ChatGPT has finished training and doesn't update itself in any way.
The current conversation is fed back as additional context each time you ask additional questions.
No, starting a new chat, it again gives me a (different) wrong answer.
I should add that I looked closer now at the transcript from yesterday, and the second answer it gave me after apologising was actually not correct, but correct enough for me to code it with the help of some actual API documentation.
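For reference, the kind of thing that actually works, as far as I can tell from the docs: the version field is read-only and only increments when an edit is applied, so you "increase" it by actually editing the document. A rough sketch:

// (inside an async extension command handler)
const vscode = require('vscode');

const editor = vscode.window.activeTextEditor;
if (editor) {
  const doc = editor.document;
  const before = doc.version;
  // Applying any real edit bumps the version; there is no direct setter.
  const edit = new vscode.WorkspaceEdit();
  edit.insert(doc.uri, new vscode.Position(0, 0), "// touched\n");
  await vscode.workspace.applyEdit(edit);
  console.log(`version went from ${before} to ${doc.version}`);
}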
> Imagine you have two semi-infinite conducting metal plates at right-angles, in an L shape. The plates are along the x and y axes. You place a charge +q at a point (d, d) in the top right-hand corner of the plane. Using the method of images, work out the force on the charge.
It gave me a very detailed, and utterly incorrect answer. I pointed out its errors and it corrected them politely. I eventually asked it to draw me an ascii-art diagram of what it thought was going on, and it drew a coordinate system with mis-labelled axes that looked both utterly plausible and yet was filled with genuinely creative bollocks. The whole thing was a bit like reading a patronising answer in a bad sci-fi film (that needed severe editing and had bits of reality in it).
I'm sure, however, if you took the output and sent it to the International Journal of Please Pay Us Open Access "Publishing" it'd get accepted and given a DOI...
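For comparison, here's my own quick sketch of the standard image construction for that corner (done by hand, so double-check it):

% Grounded 90-degree corner, charge +q at (d, d): three image charges,
%   -q at (-d, d),   -q at (d, -d),   +q at (-d, -d)
\mathbf{F}
  = \frac{q^2}{4\pi\varepsilon_0}\left[
      \frac{1}{(2\sqrt{2}\,d)^2}\,\frac{\hat{\mathbf{x}}+\hat{\mathbf{y}}}{\sqrt{2}}
      - \frac{1}{(2d)^2}\,\hat{\mathbf{x}}
      - \frac{1}{(2d)^2}\,\hat{\mathbf{y}}
    \right]
  = \frac{q^2}{4\pi\varepsilon_0 d^2}
    \left(\frac{1}{8\sqrt{2}} - \frac{1}{4}\right)
    (\hat{\mathbf{x}}+\hat{\mathbf{y}})

i.e. a net attraction toward the corner along the diagonal, with magnitude \frac{q^2}{4\pi\varepsilon_0 d^2}\left(\frac{\sqrt{2}}{4} - \frac{1}{8}\right).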
It's not difficult to hit the limits of ChatGPT's "knowledge". I've had it invent completely fictitious historical pacts and documents, as well as give me utterly incorrect "facts" about programming competitions.
The issue for me isn't that it's wrong. By using ChatGPT, I'm an unpaid beta tester, so I expect inaccuracies. The main problem is that it is cheerfully and confidently wrong. Even if it said "this answer is accurate to within xx%", it would be a start.
I'm not sure how it could be giving this percentage. In its world, the answer just is, accurate to 100% based on the training. You'd need an oracle or the like to check on the output, which we don't have (way less shiny work).
I've only toyed around with training machine learning models, but each time it would give a percentage of probability (78% probability this thing is a handwritten 2 and not just a squiggle, for example). I have no experience training LLMs at all, so maybe it's different.
Well, at least now we have a benchmark against which we can test it. Imagine it being accurate 20% of the time while sounding confident about subjects it knows nothing about, etc. Do we have a reasonable argument why it can't be at 40% in the next six months, 80% a bit later, and keep improving? There are so many experiments going on, including with the tools we use to gather accurate information. I don't see a reason why it can't get better.
ChatGPT is one of the greatest cultural pushes toward originality ever. It is the perfect summary of an era focused on making derivative outputs from historical data, and really shows how ultimately limited and uninteresting those outputs are. Rather than decimating the creative fields I think it is going to make them all explode with vibrance as we finally understand that it is novelty that makes new culture compelling and valuable. For this the bots have no way to truly compete, and we, when unburdened by the need to faithfully reproduce the past, can at last focus on producing the originality that makes culture slappppp
ChatGPT is like a low-level manager who has to create a last-minute presentation for upper management: it does it by copy-pasting Google search results for a list of big tech terms.
It’s clear that the AI snake oil that is being sold here by the AI bros is no better than the hype and mania of the crypto bros of 2021.
If anything, just using it shows that it fails to live up to the hype, and it certainly falls flat competing against search engines, given how it hallucinates its results.
Another orchestrated failed attempt at selling Microsoft-flavoured AI snake-oil with techno-speak to the markets in order to create a worse search engine than Google.
> the software engineering behind it is not some 1993 Doom Carmack-style wizardry
https://dallasinnovates.com/exclusive-qa-john-carmacks-diffe...
For me it is just another tool that I can use to help me in certain areas. Not more. Not less.
If I were Sam Altman, I would sell the thing ASAP, before the hype dies out like it did with crypto.
I think he just did sell it to Microsoft for an additional $10bn.
But the point here is that as the subtlety and abstraction level of the material increases, the value of the statistical brute-forcing tapers off.
But one is less likely to apply such a tool to more challenging texts, in any case.
> If it works out and you get highly plausible and true information most of the time, that's magical.
But it makes up plausible answers and even fake references. Which makes it worse than something which is always dead wrong.