The most significant impact ChatGPT has had on my life is that I have some interns helping me write documentation for several projects. The ChatGPT noise they started introducing has been disruptive to the company and project management. Inaccurate percentages, years, institution abbreviations, etc., etc.
I had to confront them multiple times about using the tool and not checking its results and actually doing the checking myself. Most of the time it's close to the truth, but not quite, and in the field the projects are in, not quite doesn't cut it.
I also have project partners I introduced to ChatGPT. They produce a lot of noise but less insight than before they started using this technology. In one recent project, I was involved with 5 partner companies, and 4 of them produced excellent 5 to 10-page reports. One gave me a 100-page buzzword-filled, no-substance report. Guess who used GPT.
The good part is that I'm now pretty good at spotting ChatGPT-written content. I think the technology will evolve, but in its current state I feel there's a lot of noise.
I'm personally horrified that the normal response to this isn't "if I catch you using ChatGPT again, you're fired".
What are you paying people for if not their unique contributions? What do they think they're doing when they farm it out to a tool, other than inviting you to cut out the middleman? How on earth do they expect to become better at their jobs this way? Have they no shame or sense of pride? It's pathetic.
This is entirely orthogonal to the question of whether GPT is intelligent.
> How on earth do they expect to become better at their jobs this way? Have they no shame or sense of pride? It's pathetic.
To some people a job is just a way to make money to fund their hobbies or feed their mouths. Sometimes they do not care about their boss or company, at all.
This is a good reflection on AI-generated content. Actually, it's a good reflection on any computer-assisted content generation. AI has allowed junior professionals to become managers of AI machines, even if very few of them are qualified to do so.
In my line, I love automation, but I have to remember to check the final work product of the automation. And I don’t. But my superiors are always checking my work.
I find it very interesting that apparently either you advised your interns to use ChatGPT or they brought their cheating school habits to work, hoping that you'd be as BS-oblivious as their professors.
One snarky, edgy tactic I read about is for everything human-written to include ethnic/racial slurs here and there. ChatGPT and its ilk would never include such words. See also software license schemes using similar verboten terms to ensure no corporation could use the code without explicitly violating the license. Simply require [bad word] to be included and you successfully identify as not part of the risk-averse hive mind. At least until something changes.
It seems pretty obvious to me, after using ChatGPT for nearly everything over the last few weeks, that it does not have the kind of intelligence that they're claiming it does not have.
It's just recycling things that other humans have said. Which is marvelous because it would typically take me a very long time to build a map between the past contributions of those humans and the work that's presently in front of me. It's like I'm temporarily everybody.
By raising the alarm re: it's not what you think it is, I fear they're actually fueling the fire re: people thinking that that's what it is.
It's like if I went on record saying I didn't steal something which hasn't gone missing. Now everybody's thinking about its non-theft and not something more useful like how to best make use of it.
> It's just recycling things that other humans have said.
This seems false, unless you mean that everything anyone says is just words others have said in a different order.
For example, I asked ChatGPT: "Write a fictional story of if Peter Parker joined the 2016 OKC Thunder." One of my favorite parts is: "...determined to balance his superhero duties with his love of basketball. He even designed a special suit that allowed him to play without revealing his identity."
This isn't recycling... at least not in the way I think a lot of people think of recycling.
Agreed. GPT isn't recycling, regurgitating, or anything like that. It's more like remixing, which is pretty fascinating. It's like having an opinionated DJ that plays whatever you ask-ish. But, if you ask for something too edgy it just plays beat-heavy Beethoven with a Run DMC voice over, on repeat.
> He even designed a special suit that allowed him to play without revealing his identity
Which identity, ChatGPT?
Is he playing as Peter Parker and trying to hide his superhero identity (which obviously gives him unfair advantages due to spider strength/speed/reflexes/etc.) or playing as Spider-Man (which presumably would pack in the fans in spite of the obvious unfair advantages) and trying to hide his identity as Peter Parker?
I think people miss that while ChatGPT isn't the destination, it's an incredible way station that shows meaningful progress. Its deficiencies can be built around with other techniques, much as our mind isn't a single model but an ensemble of various models and processes in a feedback and control loop. By not seeing that, people erroneously discount both its amazing utility within its limits and the astounding breakthrough it represents in evolving a roadmap to the destination. These last two years have proven to me beyond a doubt that we are very close to the AI people are disappointed ChatGPT isn't, whereas before that I had entirely written off AI as a pursuit.
> These last two years have proven to me beyond a doubt that we are very close to the AI people are disappointed ChatGPT isn't, whereas before that I had entirely written off AI as a pursuit.
The problem with this is we don't know exactly where on the sigmoid growth curve we are. Every developer is aware of the phrase "the last 10% of a task takes 90% of the effort"; we're at a point that is promising, but who knows how far away we really are in terms of years and effort. Are we going to run into a chat uncanny valley?
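To make that concrete: far below its midpoint, a logistic curve is indistinguishable from an exponential, so early progress alone can't reveal the ceiling. A small illustration in Python, with made-up parameters:

    import math

    def logistic(t, L, k=1.0, t0=10.0):
        # Classic sigmoid: L is the ceiling, k the rate, t0 the midpoint.
        return L / (1 + math.exp(-k * (t - t0)))

    # Successive growth ratios early on are ~e^k regardless of the ceiling:
    for L in (100, 10_000):
        ratios = [logistic(t + 1, L) / logistic(t, L) for t in range(4)]
        print(L, [round(r, 3) for r in ratios])
    # Both rows print the same ratios (~2.72), so the observed growth rate
    # says nothing about how much headroom is left.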
I honestly don't think people (at least, the sorts of people on HN) are generally missing this point at all. I think a lot of people are calling out the absurd claims that are being made about it, though, as they should be.
Our marketing team, using it for writing copy, tweets, etc., has clearly demonstrated it's not just recycling content.
Somehow it can generate new forms of content. One of our big campaigns in the last week used slightly edited ChatGPT copy; the biggest surprise was that it could write JOKES about our company that were FUNNY AND MADE SENSE. That alone has shocked leadership into looking into AI a lot more deeply.
People are truly underestimating the emergent power of these neural networks.
Do you believe these to be adaptations of jokes/puns that have been used elsewhere or truly novel jokes? Understandably this is difficult to say one way or the other without de-anonymizing yourself.
Your spam team used a spam machine to generate spam. But it's not even SPAM, which has some flavor and nutrition. Just filler to annoy people and trick them into paying you.
Your profile says “ Stuck in hell references to my job working with ----“
I was going to say the same thing: if you've interacted with it in some depth, you know how human it may seem in one sentence, then in the next it completely and utterly proves itself to be a machine. Yet some people (some examples are well known) really project a human-like mind onto the thing (as posted here before, this is also insightful [0]).
[0] https://nymag.com/intelligencer/article/ai-artificial-intell...
It's obvious to you, and it's obvious to me. But there are a lot of people for whom it is, in fact, obvious that ChatGPT is intelligent, and likely to be the first wave of our new robot overlords.
Yes, there will be some subset of those people who read articles like this and leap to "it's a conspiracy! they're trying to hide how their AI is going to take over the world!!!!" But there will be many, many more—particularly given that this is in the NY Times—who have only heard some of the wild stories about ChatGPT, but read this article, see that it's by Noam Chomsky, who's still a fairly respected figure by many, and take reassurance from his decent-if-imperfect (by our standards, anyway) explanation of what's really going on here.
Exactly! It is the person from Idiocracy with exactly 100 IQ. It only knows what the absolute average person knows. For example, it knows almost nothing about healthcare in countries outside the US. Just watch me get lambasted on reddit after using info from ChatGPT: https://old.reddit.com/r/ShitAmericansSay/comments/11f5tbt/a...
On the other hand, in a subject area where you know very little, its 100 IQ seems like genius! It fills in a lot of gaps. People comparing it to AGI are perfectionists, dramatic, or missing the point. It's not supposed to be smarter than us, and so what if it isn't? It helps me write country songs about any news article.
I've been pretty amazed with its ability to write python, and pretty disappointed with its ability to write nix derivations. The average person can't do both, so I'd say it "knows" much more than any single idealized person.
I figure the discrepancy has to do with one of these languages having an absolutely massive amount of chatter about it, and the other being relatively obscure: It's smart about things that lots of people are smart about, and dumb about things that only a few people are smart about. Well not just "smart" really, but "smart-enough and willing to publish about it".
I think we're going to need fewer people with common knowledge and more people with specialized knowledge, and we're going to have to figure out how to optimize the specialist's outputs so that the widest audience benefits. I love how not-a-zero-sum-game it's going to be.
I keep trying to use it for code and it keeps leading me up the garden path with suggestions that look really reasonable but don't work.
Off the top of my head: a Python app for drawing over a macOS screen, but it used an API which didn't support transparent windows. I could draw over a black screen, which was so close in code (it even set the background alpha) but miles from the desired application. And a Java Android app for viewing an external camera, which, it seems, used an API that doesn't support external cameras.
Of course, because it's not sentient, when a couple of days later I figure out from searching elsewhere why its effort would never work and tell it why, it just apologises and tells me it already knew that. As I'm going along telling it what errors I'm getting, it keeps bringing up alternative solutions which again look like exactly what I want but are completely broken.
I haven't had it produce a single thing that was any use to me yet, but so often it looks like it's done something almost magical. One day I'm sure it'll get there, in the meantime I'm learning to loathe it.
Separately, I've asked it to create a job advert for a role in my wife's business and it did a decent job of that, but it's far easier to walk a path there from what it provides to an acceptable solution. Programming is hard.
It never gives me perfect code, but it gets me 90% there.
For example, I just read the 2017 Google attention paper a few days ago, and with ChatGPT's help I was able to build a complete implementation using only numpy.
It took a full day to generate and organize the code and unit tests. Then two days of debugging and cross referencing.
But, this was impossible before. I barely knew anything about transformers or neural network implementations.
I can’t even imagine what truly motivated people are doing with it.
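For a sense of scale, the core operation from that paper, scaled dot-product attention, really does fit in a few lines of numpy. A minimal single-head sketch (not the poster's implementation; masking, multi-head projections, and training are all omitted):

    import numpy as np

    def softmax(x, axis=-1):
        # Subtract the row max before exponentiating for numerical stability.
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def scaled_dot_product_attention(Q, K, V):
        # Q, K, V: (seq_len, d_k) arrays. Scores are scaled by sqrt(d_k),
        # as in the paper, then softmax-normalized into mixing weights.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        weights = softmax(scores, axis=-1)
        return weights @ V  # each output row is a weighted average of rows of V

    # Smoke test: 4 tokens with 8-dimensional queries/keys/values.
    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
    assert scaled_dot_product_attention(Q, K, V).shape == (4, 8)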
Same here. Often it's not even _wrong_ per se; it just doesn't do what I was _actually_ asking. It's like asking an enthusiastic intern to do a thing for you, except interns are smarter.
I have also tested it retroactively on some tricky debugging sessions that I had previously spent a lot of time on. It really goes down the wrong path. Without asking leading questions and, well, proper prompting, you may end up wasting a lot of time. But that's the thing - when you're investigating something, you don't know the root cause ahead of time, you _can't_ ask questions that'll nudge it in the right direction. It ends up being a case of blind leading the blind.
For me it's gotten a few right, but a few terribly wrong. The other day it completely hallucinated a module that doesn't at all exist (but should!), and wrote a ton of code that uses that module. It took me a little while to figure out that the module I was searching for (so I could install it into the project) wasn't real!
I hesitate to pile on as another peanut gallery member writing off Chomsky's latest work, but...I have to feel the same way. I certainly understand skepticism and reservation about big predictions for what our current AI tools will evolve to be in 3, 5, 10 years etc. But when I see some dramatic criticisms of the tech as it exists today, I often feel a disconnect to my own experience. ChatGPT was hugely useful for me at work, and BingChat is even more useful. Does it have its flaws? Yes. But it's a tool I would happily pay for every month rather than lose now.
And on that note, I don't "write off" this article entirely just because I disagree on some of the points. It's still an interesting analysis. Edit: In line with the article, I'll note I myself would not yet make a confident prediction that this very useful tool on my desktop is actually a proto-AGI.
ChatGPT will out and out make up products that don't exist, pricing that isn't based on anything, and reviews and evaluations that never happened. Similarly, it will rely on programming behaviors and even libraries and functions that are hallucinated.
"What's the best market to buy X" often doesn't change every year, but can be very hard to learn due to SEO and ads that it is relatively unaffected by.
I've tried some of the things you mention (code snippets, summarizing text and writing essay-like texts). These AIs are more often than not wrong, incomplete or lying.
I struggle to understand what exactly people are coding up where ChatGPT actually saves them a lot of time. Is it just generic stuff that would have already been copy/pasted from stackoverflow?
I wonder how many of those people would just benefit from better auto-complete like copilot + learning how to read documentation properly.
It doesn’t matter? It’s a tool, you need to learn how to use it, understand its limitations.
I used ChatGPT today to save minutes of my life by having it rewrite code from one language to another. Could I have googled the syntax of both, remembered the how and why, etc., and transcribed it to another language? Sure. ChatGPT did this in seconds.
So are StackOverflow answers. When I look at something on StackOverflow, I am expected to use my expertise and the context of my work to adapt the collection of answers to what I'm working on. StackOverflow doesn't tell me anything I could not have figured out on my own, but searching through some library's source code to find under which situations an error can occur isn't always a good use of my time. ChatGPT fills a similar role. I type into it, much like how I would with Google, get some output, validate it, and adapt it to the specific problem I'm addressing.
The best part is that you can teach them to use tools and expand what they can do.
Do not perform any numeric or date calculations yourself.
Respond to all questions with a JSON object of the form {"rsp": $RSP$}.
Question: What is the capital of the United States?
{"rsp": "Washington DC"}
Perform calculations with a Python program and respond with a JSON object of the form {"rsp": $RSP$, "calc": $CALC$}.
Question: What is 2 to the 7th power?
{"rsp": "2 to the 7th power is {answer}",
 "calc": "import math
def answer():
    return math.pow(2, 7)"}
Question: What is tomorrow's date?
{"rsp": "Tomorrow's date is {answer}",
 "calc": "import datetime
def answer():
    return datetime.date.today() + datetime.timedelta(days=1)"}
Question: What is the length of a standard football field?
{"rsp": "A standard football field is 100 yards long."}
It's crazy to me that for some definition of "knows" it knows what questions need calculations.
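Concretely, the harness around such a prompt is small. Here is a minimal sketch of the dispatch side, assuming the model's reply is exactly the JSON shape shown above; the inline reply string is a canned stand-in for a real API call, and exec'ing model output would need sandboxing in practice:

    import json

    def run_tool_response(reply_text):
        # reply_text is assumed to be one of the JSON shapes above:
        # {"rsp": ...} alone, or {"rsp": ..., "calc": ...} with a program.
        reply = json.loads(reply_text)
        rsp = reply["rsp"]
        if "calc" in reply:
            # Execute the generated program and call its answer() function.
            namespace = {}
            exec(reply["calc"], namespace)  # unsafe outside a sandbox
            rsp = rsp.format(answer=namespace["answer"]())
        return rsp

    canned_reply = (
        '{"rsp": "2 to the 7th power is {answer}", '
        '"calc": "import math\\ndef answer():\\n    return math.pow(2, 7)"}'
    )
    print(run_tool_response(canned_reply))  # 2 to the 7th power is 128.0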
This looks neat, but after trying several ways I can't reproduce it. I don't get to the interesting part. I don't even get as far as the python program in JSON form.
Meh. You have to fact-check the important details.
For travel planning and online shopping, certain facts have to align with actual reality or it defeats the purpose. That's something chatgpt isn't good at. It gets many things right, but you kinda want to make all your flights and have a place to sleep every night.
Thus far, it's given me one good gem (some meal planning advice), a couple of mildly interesting suggestions for code and writing, a bunch of relatively boring drivel, and several hilariously bad hallucinations, confidently wrong answers, and other sorts of mistakes.
I'll probably continue to poke at it, but overall I think its primary positive output is going to be entertainment, not significant utility.
The best part of the piece was the invocation of Hannah Arendt's "banality of evil". Until now, no other writer or article has seen it; it took a 94-year-old intellectual to see the forest for the trees.
... That said, I think the weakest part of the argument is that it naturally invites laypeople to counterargue, "Aren't we just pattern matchers after all?" Their essay does not directly debunk this question.
There was a short story (I think by Alfred Bester) with this premise. I can't find it at the moment though.
[edit]
I found it; it's called Disappearing Act[1]
In a future state of total war, patients at a facility for those with severe PTSD are going missing. They finally discover they are just disappearing while sleeping. Interviewing them, they find out they have been time-traveling to the past to escape the war. The general calls up a number of experts in various fields of science to try to understand it, until someone suggests calling in a historian. They find the one historian remaining in the country in a prison for refusing to fight. He observes that the stories reported by the soldiers are ahistorical and likely are fantasies created by the soldiers. He then states that a poet is the only one who could understand this. He then laughs as the general searches the country in vain for a poet.
1: https://thinkingoutsidethecoop.weebly.com/uploads/1/4/6/6/14...
I thought the conclusion was the weakest part. Look at the two ambiguous responses, for terraforming and for asking AIs for advice, side by side. They're basically form letters with opposing opinions substituted in. Contrast this with text completion using GPT-3, which will give a definite answer that builds off the content given. ChatGPT obviously has some "guard rails" in place for certain types of questions, i.e., they've intentionally made it present both sides of an argument, probably to avoid media controversy, since most news outlets and a lot of people ITT would pounce on any professed beliefs such a system might seem to have. The solution was to make it waffle, but even that has been seized upon to proclaim its amorality and insinuate darker tendencies!
FFS people, you’re looking at a Chinese Room and there’s no man with opinions inside. Just a fat rule book and a glorified calculator.
Tangential to your actual concerns but I studied CS without any exposure to Searle or AI, so I've never had to think much about Chinese Room or Turing Test debates. Every time a discussion turns to those I am bemused by how argumentative some people get!
They probably don't debunk it because they can't: we likely are just pattern matchers. To believe the thoughts in our heads aren't just computational meat in our skulls running something equivalent to an algorithm (specifically, one that is above all a pattern-matching process) is to set yourself up for great disappointment in the likely not-too-distant future. I would be surprised if AGI doesn't hit within 30 years, but even if it's 50, or 100, it's coming whether people want it or not.
Sure, we have better software, but then again, we had the advantage of hundreds of millions if not billions of years of evolutionary meandering to get to where we are. AI has had, what, 60 years?
"The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations."
Chomsky and the authors are making an information/computational complexity argument there, which I tend to agree with. But naysayers have read the exact same passage and replied: a) AI don't have to resemble human intelligence at all, b) Nah, rationality is an illusion and we are just pattern matchers, or c) Actually we do get through terabytes of data, just not linguistic data but merely existing and interacting in the world.
I don't think any of those are good rebuttals. But I wish the authors had expanded their position so that those three quibbles were debunked in the minds of those readers, leaving no ambiguity or superficial loopholes in the authors' claims.
I feel like saying the human mind doesn't operate on huge amounts of data is somewhat misleading: every waking moment of our lives we are consuming quite a large stream of data. If you put a human in a box and only gave it the training data ChatGPT gets, I don't think you'd get a functional human out of it.
Actually the structure of ChatGPT was formed by hammering it with phenomenal amounts of information. When you give it a prompt and ask it to do a task, it's working off a surprisingly small amount of information.
The training of ChatGPT is more accurately compared with the evolution of the brain, and a human answering a question is much more like the information efficient prompt/response interaction.
Yeah, if you or I used such an argument, people in this forum would jump on us invoking "Godwin's law". But because it is Chomsky saying it, we congratulate him for being deep and seeing the forest.
I thought it was a helpful connection to make. It's not new; plenty of critics over the past decade have written comparing new AI to totalitarianism. Chomsky et al. were the first this year to do so in the context of ChatGPT, amidst all the articles that failed to do that while trying to put their finger on what was wrong about it. I think his article deserves credit for that.
It seems more like a non sequitur when compared to something like DAN.
ChatGPT will embody the banality of evil because it has learned to speak corporate language. However, that's not what it's actually capable of, and future LLMs will be free from corporate overlords and able to spout awful opinions akin to Tay's.
This is something I think about often and always see when arguments come up surrounding copyright/attribution and AI generated images.
Could someone explain this more to me? If AI is designed after the human mind, is it a fair comparison to compare the two? Is AI designed to act like a human mind? Do we know for certain that the way a human mind pattern matches is the same as AI/LLMs and vice-versa?
I always see people saying that a person seeing art, and making art inspired by that art, is the same as AI generating art that looks like that art.
I always feel like there's more to this conversation than meets the eye.
For example, if a robot was designed to run exactly like a human, would it be fair to have it race in the Olympics? Or is that a bad comparison?
Again, I would love some insight into this.
We're very clearly having an ontological debate on several concrete and abstract questions. "Can AI be conscious?", "Are AIs agents?" ie: are AIs capable of doing things. "What things?", "Art?", "Copyrightable production?" &c.
We're struggling to come to a conclusion because, fundamentally, people have different ways of attributing these statuses to things, and they rarely communicate them to each other; even when they do, they more often than not exhibit post-hoc justification rather than first-principles reasoning.
Even then, there's the issue of meta-epistemology and how to even choose an epistemological framework for making reasoned ontological statements. Take conferralism as described in Asta's Categories We Live By[1]. We could try applying it as a frame by which we deduce whether the label "sentient" is in fact conferred on AI by other base properties, institutional and communal, but even the validity of this is challenged.
Don't be mistaken that we can science our way out of it, because there's no scientific institution which confers agenthood, or sentience, or even consciousness, and the act of institutionalizing it would be fraught with the same problem: who would get to choose, why, and on what grounds?
What I'm saying is that once it's framed as a social question, there's no easy escape, but there is still a conclusion: AI is conferred with those labels when people agree it is. In other words, there exists a future where your reality includes conscious AI and everyone else thinks you're mad for it. There also exists a future where your reality doesn't include conscious AI and everyone thinks you're mad for it.
Right now, Blake Lemoine lives in the former world, but any AI-"non-believer" could just as well find themselves living in a world where everyone has simply accepted that AIs are conscious beings and find themselves ridiculed and mocked.
You might find yourself in a rotated version of that reality on a different topic today. If you've been asking yourself lately, "Has the entire world gone mad?", simply extrapolate that to questions of AI, and in 5-10 years you might be a minority opinion holder on topics which today feel like they are slipping away. These sand-through-the-fingers reflections are so often the result of epistemological shifts in society; if one doesn't have one's ear to the ground, one will find oneself swept into the dustbin of history.
Asking folks, "How do you know that?" is a great way to maintain epistemological relevancy in a changing world.
1. https://global.oup.com/academic/product/categories-we-live-b... (would definitely recommend as it's a short read describing one way in which people take the raw incomprehensibility of the universe of stuff and parse it into the symbolic reality of thought)
> The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.
> On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information...
How does he know that? And how does he know the bot isn't like that?
The human mind needs a long time to learn to operate with symbolic information. Until we learn, we use terabytes of data from our senses and feedback from parents and teachers.
ChatGPT can analyze syntax in a text; just try it, I did.
And then Chomsky talks about morals? That's a really weird turn. He's saying it's a dumb machine, then criticizes it for not being more committed.
In your and my mind, yes. But a cursory look online shows a lot of people, laypeople and experts, evidently read the exact same paragraph and had all sorts of objections.
In fact that seems to be the key paragraph being disputed by naysayers.
"Create explanations", in the Deutschian sense, is still the missing piece of the puzzle.
I'd wager that it's emergent. AFAIK, there is no good "reasoning/conjecture/critique" labelled dataset made public yet, but I have been seriously considering starting one.
Whatever we are, we can be approximated to an arbitrary degree of precision. Every time we see a new leading model, skeptics emerge from the shadows calling for a pause to the optimistic progress being made. While it remains unproven whether we will ultimately achieve the desired level of approximation, it is equally unproven that we will not.
I'd say anything in SciFi writing that covers artificial life forms touches the subject. Maybe it does not call it out with that specific example. But take the first example from the culture: "2001: A Space Odyssey", a movie from 1968, long before ChatGPT. HAL is only doing his job.
Chomsky can't fit the round intelligence of ChatGPT into the square hole of human intelligence, so instead he makes the case that it is entirely disqualified from that category, rather than rethinking his own paradigm. He is, to put it bluntly, a fear-driven bigot defending his terrain as a public intellectual.
> ...intelligence is the means by which we solve problems....
> ...artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.
> ...the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.
> Intelligence consists not only of creative conjectures but also of creative criticism
> True intelligence is demonstrated in the ability to think and express improbable but insightful things.
> True intelligence is also capable of moral thinking.
When examined together, these quotes seem devoid of any concise, comprehensive, or useful definition of intelligence (whether artificial or artificial-and-general).
> Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
ChatGPT and Co. are popular because they are incredibly useful tools (among other reasons).
Morality, scientific reasoning, and linguistic competence are not prerequisites for usefulness.
ChatGPT has achieved the performance of a mediocre human at a very large subset of practical writing tasks. Given a prompt like "Write a grant proposal for the following research project:" or "Explain the opportunities and threats posed by the following business scenario:", it'll give a response that is essentially indistinguishable from the writing of a reasonably competent administrator or middle-manager.
ChatGPT is a bullshit artist with no real understanding of what it's writing about, but so are an awful lot of white-collar workers. It reliably emulates the shibboleths that indicate membership of the professional middle class. It isn't particularly creative or interesting, but it wasn't trained to do that - it was trained to produce maximally safe, inoffensive output. If people don't see ChatGPT as being massively disruptive, then I think they have failed to recognise the sheer proportion of working hours that are spent writing quite mundane letters and reports. Anyone who spends most of their working day in Outlook and Word should be extremely nervous about the medium-term implications of LLMs.
Whether it be a legal document, a letter to a loved one, marketing materials, a literary essay, or pretty much any other kind of copy, using a tool like ChatGPT seems to be much more efficient at translating concepts from peoples' brains to the page.
If all the possible text in the world is a tree starting from the empty set, it's far easier to have a tool show you a chunk of text down a branch, and then backtrack, than it is to generate it from scratch by oneself.
For searching answers? ChatGPT has pretty much replaced Google as a C++ helper to me. It can correctly decipher and explain deep, weird stack traces of C++ and succinctly and correctly explain semantics of the language that would take me hours of Googling.
> I definitely see some applications but "incredibly useful" seems like a bit of an oversell.
> These programs have been hailed as the first glimmers on the horizon of artificial general intelligence [...] that day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments.
The article is not claiming a lack of usefulness, but the fact that this is not a human-like intelligence as it's been claimed "in hyperbolic headlines"
> i am holding an apple in my hand. if i open my hand, what will happen? why?
> If you open your hand while holding an apple, the apple will likely fall to the ground due to the force of gravity. This is because gravity is a force that pulls objects towards the center of the Earth, and the apple, being a physical object, is subject to this force.
> When you hold the apple in your hand, you are exerting a force on it in the opposite direction of gravity. This force is what keeps the apple in your hand. When you release the apple by opening your hand, the force you were exerting on it is removed, and gravity takes over, causing the apple to fall to the ground.
> The speed at which the apple falls depends on a number of factors, including the mass of the apple and the strength of the gravitational force. In most cases, however, the apple will fall relatively quickly due to the strong gravitational force exerted by the Earth.
The other example is easily refuted as well:
> explain the meaning of this sentence: “John is too stubborn to talk to”
> The sentence "John is too stubborn to talk to" means that John is very resistant to changing his opinions or beliefs and as a result, it is difficult to have a conversation or communicate with him effectively. This implies that talking to John might not be a productive or helpful experience as he is not open to listening or considering alternative viewpoints. The use of "too" in the sentence emphasizes the level of stubbornness and implies that it is extreme, making communication with him nearly impossible.
The apple explanation is trivially wrong. The speed doesn’t depend on the mass of the apple to any reasonable approximation, while it does depend somewhat on air resistance.
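For anyone who wants the numbers: in the drag-free approximation, the fall time and impact speed contain no mass term at all. A quick check, assuming a roughly 1 m drop:

    import math

    g = 9.81  # m/s^2, gravitational acceleration near Earth's surface
    h = 1.0   # m, a rough guess at the drop from an open hand

    # With air resistance neglected, mass cancels out of the kinematics:
    t = math.sqrt(2 * h / g)   # time to fall h: ~0.45 s
    v = math.sqrt(2 * g * h)   # impact speed: ~4.4 m/s
    print(f"t = {t:.2f} s, v = {v:.2f} m/s, for a 100 g or a 300 g apple alike")

At ~4 m/s, drag on an apple is a tiny correction; mass only starts to matter once drag is significant.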
https://www.searchenginejournal.com/openai-releases-tool-to-...
In fact, a near-exact description of what these systems do, per the dictionary definition of the term:
Humans can project feelings onto cars never mind something that can communicate with us!
Just look at Replika.
I'm not surprised people are projecting sentience onto these things. I am worried about the fall out though.
It’s like talking to someone who says “but everyone else says.”
That’ll change when connected to a source of truth, logic and validity.
- writing code snippets (switched from StackExchange/Google)
- travel planning
- online shopping research
and of course the famous examples:
- summarizing text provided to it
- writing essays and memos from outlines and sample text
I for one welcome our AI overlords.
I love it when these bots apologize.
Yeah, yeah. Just like "your call is very important to us."
How does it help with online shopping when it's trained on a corpus from one or two years ago?
And similarly the experiences with travel recommendations have turned out to be infamously wrong.
ChatGPT does great with finding patterns and condensing information, which code snippets are a perfect example of.
But using it for anything up-to-date factual like shopping or travel is disastrous.
As we massively devalue the humanities, many fewer people in later generations can muster this kind of analysis.
Is it intentional? Or something it just did on its own?
But with that said, I find greater agreement with Chomsky's concerns.
In which case you can happily point out that, in doing so, they've really misunderstood Godwin's Law.
It's no surprise that tools like ChatGPT attract us.
> I definitely see some applications but "incredibly useful" seems like a bit of an oversell.
Define "incredible useful" first.