extr · 3 years ago
I've been using ChatGPT pretty consistently during the workday and have found it useful for open-ended programming questions, "cleaning up" rough bullet points into a coherent paragraph of text, etc. Whether it's $20/month useful is questionable though, especially with all the filters. My "in between" solution has been to configure BetterTouchTool (Mac app) with a hotkey for "Transform & Replace Selection with Javascript". This is intended for doing text transforms, but putting an API call in there instead seems to work fine. I highlight some text, usually just an open-ended "prompt" I typed in the IDE, or the Notes app, or an email body, hit the hotkey, and ~1s later it adds the answer underneath. This works... surprisingly well. It feels almost native to the OS. And it's cheaper than $20/month, assuming you aren't feeding it massive documents worth of text or expecting paragraphs in response. I've been averaging like 2-10c a day, depending on use.

Here is the JavaScript if anyone wants to do something similar. I don't really know JS, so I'm sure it could be improved, but it seems to work fine. You can even add your own hard-coded prompt if you want.

    async (clipboardContentString) => {
      try {
        // Send the highlighted text to the OpenAI completions endpoint
        const response = await fetch("https://api.openai.com/v1/completions", {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR API KEY HERE"
          },
          body: JSON.stringify({
            model: "text-davinci-003",
            prompt: `${clipboardContentString}.`,
            temperature: 0,
            max_tokens: 256
          })
        });
        const data = await response.json();
        const text = data.choices[0].text;
        // BTT replaces the selection with whatever is returned, so keep the
        // original text and append the completion underneath
        return `${clipboardContentString} ${text}`;
      } catch (error) {
        return "Error";
      }
    }
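If the request fails (bad API key, malformed body, rate limiting), `data.choices` is undefined and the snippet above just returns "Error". A slightly more defensive variant (an untested sketch, same BTT transformer shape) surfaces the API's error message instead:

    async (clipboardContentString) => {
      try {
        const response = await fetch("https://api.openai.com/v1/completions", {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR API KEY HERE"
          },
          body: JSON.stringify({
            model: "text-davinci-003",
            prompt: `${clipboardContentString}.`,
            temperature: 0,
            max_tokens: 256
          })
        });
        const data = await response.json();
        // On failure the API returns { error: {...} } and no choices,
        // so report that instead of crashing on data.choices[0]
        if (!data.choices) {
          return `${clipboardContentString} [OpenAI error: ${data.error ? data.error.message : "unexpected response"}]`;
        }
        return `${clipboardContentString} ${data.choices[0].text}`;
      } catch (error) {
        return `${clipboardContentString} [${error}]`;
      }
    }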

ezekg · 3 years ago
You use it consistently during the workday and it's still not worth $20/mo?
m463 · 3 years ago
This is one of those puzzling things to me.

You own your life - why not spend your own money for the things that make you and your life better?

Who cares?

I worked at a job where I had a small, crappy monitor. I made decent cash. I just bought a large, decent monitor and brought it into work. I ended up using it for many years. My life was significantly better. I've done that at several jobs since then and NEVER regretted it; in fact it was one of the soundest decisions I've ever made. Same goes for the keyboard and mouse.

There are so many people using the default keyboard, the default monitor, the default tools.

If you push work to do it for you, you need to challenge the "everyone gets a Dell 19-inch monitor" b.s. If you push your boss, he might have to do justification paperwork.

Just become what you are.

extr · 3 years ago
No. I'm a salaried employee. Marginal time/effort savings do not directly translate into more money for me. But the $20 charge hits my bank account today. Perhaps if I use it consistently enough and in smart enough ways I will be perceived to be a more valuable/productive employee, which might translate to a raise. But that's a lot of maybes. I'm sure it will get to that point eventually, but by then the value will be undeniable and my employer will pay for the subscription. Until then, I will continue to use the free version, or pay-per-use with the API, or just use google.
anhner · 3 years ago
I use my toothbrush every day but I wouldn't pay $20 per month for it.

I use my keyboard every day but I wouldn't pay $20 per month for it. In fact, I paid around $4 total for it, as paying more would bring rapidly diminishing returns.

I use my phone every day and have used it for the past 5 years with no issue. It has brought me so much value, and yet, if I tally it up (price divided by time used), it didn't even reach $20 per month, not to mention that I expect it to last another 2-3 years, bringing the cost down even further.

What kind of crazy value would you expect something to have in order to be worth $20/mo?

lanza · 3 years ago
People are so cheap it's ridiculous. If we ever get past people being unwilling to pay for software beyond rates of 1 cent per hour, tech will blow up to 10x as big as it is right now.
apples_oranges · 3 years ago
This is Hacker News; the name itself implies breaking the rules. It should almost be a matter of pride to get it for less than $20...
sangnoir · 3 years ago
Is it surprising? Value is not determined by frequency of use, but by the qualitative difference: if GP doesn't use it at all, would anything of value be lost?

Here's a thought experiment: imagine a device that changes the scent of the air I breathe to something I find pleasant. I could use this device all day, every day, for free (or on the cheap), but I would not pay $20/mo for it. Losing access to the feature really isn't worth that much. On the flip side, many people pay thousands of dollars to rent machines that help them breathe, even if that adds up to a total of less than an hour of their lives, which is not much.

OOPMan · 3 years ago
I pay $80 a year for IntelliJ and that works out to waaay less than something like CoPilot or ChatGPT and is waaay more consistently useful.

$20 a month for an ML tool that is only sometimes useful is a tough sell, especially in a world where a lot of people feel like $80 a year for IntelliJ is too much.

Coders are thrifty bastards, except when it comes to their personal vices in which case financial responsibility goes out the window...

mkraft · 3 years ago
Right? $1/workday and you still get to use it evenings and weekends. No wonder b2b is the way.
chaxor · 3 years ago
I would think the big issue here is that they still make a ton of money off of you by selling your data. Any Software as a Service is deeply flawed in this way, because it is pretty much guaranteed to extract as much data from the consumer as possible. In this case, it is quite a bit worse, because it's likely close to your entire content or body of work that they will take. So unless it becomes something that runs locally and has no networking component whatsoever, it's not going to be worth spending money on for many people or companies.
IanCal · 3 years ago
They seem to be getting good results using the paid API that has fewer restrictions, and have a neat integration with their workflow.
chiefalchemist · 3 years ago
One dollar per day? If it saves you even a few minutes... it's paid for.
jmacd · 3 years ago
The absurdity of OP's comment cannot be overstated.
breck · 3 years ago
Shhh, I'm his boss and have convinced him he's making a good salary at 25 cents per hour.
FounderBurr · 3 years ago
He deserves to be paid for his work, other people not as much.
s3p · 3 years ago
Considering the cost of the API, no. It's not.
fifafu · 3 years ago
Nice, I'm the developer of BetterTouchTool and I'll definitely use this one myself :-)
extr · 3 years ago
Thanks for the great app man! You may not have even realized this, but this was randomly crashing only a few versions ago, and you just recently pushed an update that did something to the Replace w/ Javascript functionality that fixed it. Was super pleasantly surprised to have found that overnight the problem was solved without even having to submit a bug report.
gabaix · 3 years ago
I was shown BTT 10 years ago and to this day I still use it. Thank you for making Mac a better place.
Sholmesy · 3 years ago
Heaping on the praise: I've used this tool every day, for years, on every Mac I've had. Best 15 quid I've spent.
guiambros · 3 years ago
Another happy user here. BetterTouchTool [1] is a must-install on any new Mac for me. I have so many keyboard customizations that it's hard to live without. Thanks for such a great piece of software!

[1] https://folivora.ai/

thesystemdev · 3 years ago
Thank you so so so much for this tool, it's always the first install on a new Mac for me!
elvin_d · 3 years ago
I've been using BTT since I discovered it in 2016 and it's essential. Time to get a lifetime license with the new version; there are a lot of ways it can make the Mac more pleasant to use. Thank you for the app!
m3kw9 · 3 years ago
That code didn't work for me, mind giving a better example?
dragonwriter · 3 years ago
> And it's cheaper than $20/month,

Since the $20/month is for priority access to new features and priority access during high-load times, not API access (a separate offering, not yet available), I don't understand the cost comparison. What you are proposing does not substitute for any part of the $20/month offering over the basic free product.

DoesntMatter22 · 3 years ago
He's a programmer. They're cheaper than Scrooge. They'll spend 6 months writing a tool themselves rather than spend 10 dollars.

Idk why, but programmers are the cheapest people on earth when it comes to programming tools.

I bought IntelliJ IDEA for $400 like 12 years ago and got made fun of at work, even though it made me substantially faster than Eclipse.

s3p · 3 years ago
Oh right. A bunch of "new features" with exactly zero explanation as to what they are and "priority access" when the API responds nearly instantaneously. But keep drinking that kool aid to justify your $20 purchase.
RupertEisenhart · 3 years ago
The API already works during peak times. That's not exclusive to this offer!
lossolo · 3 years ago
ChatGPT struggles with out-of-distribution problems. However, it excels at solving problems that have already been solved on the internet/GitHub. By connecting different contexts, ChatGPT can provide a ready solution in just a few seconds, saving you the time and effort of piecing together answers from various sources. But when you have a problem that can't be found on Google, even if it's a simple one-liner or one function, then in my experience ChatGPT will often produce an incorrect solution. If you point out what's wrong, it will acknowledge the error and then provide another incorrect answer.
motoboi · 3 years ago
This is the expected behavior. It's a language model trained to predict the next word (part of words actually) after all.

What is unexpected is the ability to perform highly in a multitude of tasks it was never trained for, like answering questions or writing code.

I suppose we can say we basically don't understand what the f* is going on with GPT-3's emergent abilities, but hey, if we can make it even better at those tasks like they did with ChatGPT, sign me up.

It's not that the AI is too dumb; it's that my computer can now write code that would take me an hour to Google, check, and test. Now I ask, ask for corrections, test the answer, and voila, my productivity just went through the roof.

So, my point is: don't believe (or be mad about) the hype from people who don't understand what a curious marvel we have in front of us; just see how you can use it.

movedx · 3 years ago
$20/month is too much? When I filled in the "pro" survey, I said I'd pay $200/month. This thing is a cheap-as-hell technical writer, fact checker, information cruncher, and more.

It's like an employee, but for $20/month.

nagonago · 3 years ago
I agree that it's very useful, but I'd be careful about "fact checker". GPT is perfectly happy to confirm falsehoods as facts and hallucinate convincing lies. A good fact checker verifies from multiple sources and uses critical thinking, neither of which ChatGPT can do.
ben174 · 3 years ago
Wow, I just implemented this in BTT and it's amazing how quickly it's become an indispensable tool. Just highlight any text I type and get the "answer" to it. Thanks for the tip!
Swizec · 3 years ago
> "cleaning up" rough bullet points into a coherent paragraph of text

As a reader of people's paragraphs, please don't. Stick to bullet points.

qzw · 3 years ago
I'm sure you can have ChatGPT turn a paragraph into bullet points for you. Repeating that n times would be an interesting variation on the game of Telephone.
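For fun, here's a rough sketch of that Telephone loop (untested; `callOpenAI` is a hypothetical helper wrapping the same completions call as in the top comment):

    // Alternate between "summarize to bullets" and "expand to a paragraph" n times
    async function telephone(text, n) {
      for (let i = 0; i < n; i++) {
        const prompt = i % 2 === 0
          ? `Rewrite the following as concise bullet points:\n\n${text}`
          : `Expand the following bullet points into a coherent paragraph:\n\n${text}`;
        text = await callOpenAI(prompt); // hypothetical wrapper around the fetch call above
      }
      return text;
    }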
zxienin · 3 years ago
> model: "text-davinci-003"

I was surprised at first, but I notice you're using a GPT-3 model, not ChatGPT (which has no API so far).

stavros · 3 years ago
I'm not convinced that there's any substantial difference between the two.
kmlx · 3 years ago
i used the same API but for an ios shortcut. it's not the same thing as chatgpt, as the completions api doesn't know about context. but it does feel a lot snappier.

> have found it useful for open ended programming questions

i have found it to be terrible when it comes to something simple, like constructing a regex.

shagie · 3 years ago
Try asking code-davinci-002 instead of text-davinci-003.

    curl https://api.openai.com/v1/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -d '{
      "model": "code-davinci-002",
      "prompt": "##### Create a regular expression to match words starting with 'dog' or ending with 'cat'.\n    \n### Java Code",
      "temperature": 0,
      "max_tokens": 182,
      "top_p": 1,
      "frequency_penalty": 0,
      "presence_penalty": 0,
      "stop": ["###"]
    }'
This returned:

    ```java
    String regex = "\\b(dog|cat)\\b";
    ```

extr · 3 years ago
What do you mean exactly by an iOS shortcut? I use a Mac but only an Android phone. Do you mean for mobile usage?

> i have found it to be terrible when it comes to something simple, like constructing a regex.

Oh yeah, agreed. It's not good for super specific stuff like that. But more like, I write a weird loop and ask if there is a more idiomatic way to do it. Or I just describe what I want to do and say, "list three options for accomplishing that". It's great for "feature discovery" essentially. I find it complementary to Copilot.

pelasaco · 3 years ago
Yes, we want everything for free /s

I think it's absolutely worth 20 bucks/month. It's an absolutely helpful tool: to write text, but also, as I discovered yesterday, to write code. Over a long chat with many iterations, you can produce code, test code, or pseudocode. I used it yesterday in a meeting with some colleagues while discussing a new feature. The code produced from our initial spec was pretty good for making sure we all understood what we wanted.

deet · 3 years ago
(Self promotion, sorry!)

I'm working on a product with this capability, plus a lot more including cross-app contextual support and developer extensibility.

If anyone is excited about this kind of embedded-in-your-workflow integration of AI tools I'd love to chat! (email in profile)

m3kw9 · 3 years ago
For some reason, this code doesn't work. I cleaned up the quotes and have an API key. It says

TypeError: undefined is not an object (evaluating 'data.choices[0]')

m3kw9 · 3 years ago
It's ok, I got it fixed by asking ChatGPT to write me some new code.

behnamoh · 3 years ago
Does anyone know if this can be done in other macOS automation apps too? (like Keyboard Maestro, Hammerspoon, Raycast, Alfred, ...)
aeonflux · 3 years ago
Of course. I use this in Hammerspoon. The API call is simply in Lua.
smrtinsert · 3 years ago
$20 is definitely questionable, especially considering competition is coming right around the corner.
intrasight · 3 years ago
Obviously, they'll start charging for the API - and probably in such a way that your use will cost > $20
reisender · 3 years ago
It seems worth it if you find it helpful enough to use it on a daily basis.
bemmu · 3 years ago
Note it's actually:

"Authorization": "Bearer YOUR API KEY HERE"

extr · 3 years ago
Yep, good catch. I'll edit the post to include it so nobody gets confused.
88stacks · 3 years ago
Is there a different or free app to use instead of BetterTouchTool, just to highlight text, run it through an API, and reinsert the result?
eloff · 3 years ago
Is there any advantage to this over Copilot?
m3kw9 · 3 years ago
Also note that this is GPT-3, not ChatGPT.

rhaway84773 · 3 years ago
Have you been able to find uses for chatGPT where it’s better than the top result in Google?

I’ve consistently found that Google returns similar results. The only difference is that if it’s a longer snippet I need to click into the website, but even then, it’s usually faster than chatGPT.

aeonflux · 3 years ago
My only issue is that the GPT training set is locked to 1-2 years ago. I often find myself looking for recent stuff on Google.
qwertox · 3 years ago
~~~ To the ChatGPT Team ~~~

I'm currently receiving answers FROM OTHER USERS with this new version from 30 Jan, completely unrelated to my question:

Q: Why is it easier for me to do endurance training on an empty stomach?

A1: There are several things you can do to reduce the chance of getting a yeast infection:

regenerate response:

A2: There are many benefits to using cloud storage solutions such as Google Drive, Dropbox, and OneDrive. Here are a few of the main benefits:

regenerate response:

A3: There are several strategies to help you improve your mental toughness in sports:

regenerate response:

A4: Loss of job security is a common concern for many workers today, particularly in industries that are facing rapid changes and technological advancements. Some of the reasons for this include:

----

After reloading the page, those unrelated answers show up as a "New chat" which has no question, but just those answers. The actual question is in a separate "New chat".

gdb · 3 years ago
(I work at OpenAI.)

Thanks for the report — these are not actually messages from other users, but instead the model generating something ~random due to hitting a bug on our backend where, rather than submitting your question, we submitted an empty query to the model.

That's why you see just the answers and no question upon refresh — the question has been effectively dropped for this request. Team is fixing the issue so this doesn't happen in the future!

sillysaurusx · 3 years ago
While I have your ear, please implement some way to do third party integrations safely. There’s a tool called GhostWrite which autocompletes emails for you, powered by ChatGPT. But I can’t use it, because that would mean letting some random company get access to all my emails.

The same thing happened with code. There’s a ChatGPT integration for pycharm, but I can’t use it since it’ll be uploading the code to someone other than OpenAI.

This problem may seem unsolvable, but there are a few reasons to take it seriously. E.g. you’re outsourcing your reputation to third party companies. The moment one of these companies breaches user trust, people will be upset at you in addition to them.

Everyone’s data goes to Google when they use Google. But everyone’s data goes to a bunch of random companies when they use ChatGPT. The implications of this seem to be pretty big.

Sai_ · 3 years ago
Funny how gdb is helping debug OpenAI!
braindead_in · 3 years ago
Quick question: will ChatGPT be fine-tunable via the API?

PS: You should really do an AMA!

int_19h · 3 years ago
The most amusing thing about that bug is that if you ask it what question it was answering, it will conjure up one that makes sense given the answer.
ShamelessC · 3 years ago
Is OpenAI hiring software engineers without a background in academic machine learning these days? Seems like a super exciting place to work.
irthomasthomas · 3 years ago
Is the inability to "continue" a long answer also a bug? (Please say yes :)

king07828 · 3 years ago
Should a proper large language model be able to generate arguments for and against any side of any debate?
kfrzcode · 3 years ago
Can you help me understand why the ChatGPT model has an inherent bias towards Joe Biden and against Donald Trump? This is not really what I would expect from a large language model .......

honksillet · 3 years ago
While I have your ear, please tell your team not to inject their political biases into this tool. Thanks
joe_the_user · 3 years ago
One of the problems people have mentioned for deep learning systems generally is they tend to be maintenance nightmares.

I get the impression that OpenAI had a lot of resources on hand when they released ChatGPT, which they used to fix problems using reinforcement learning and methods that I'd imagine were more ad hoc than the original training process. Hence it seems likely the system winds up fairly brittle.

Baeocystin · 3 years ago
Adding on to this, I've experienced the same. Seems to be a new bug as of Sunday's release.
Moziee · 3 years ago
I've had a similar issue since the release. It's a distinct issue that I wasn't facing prior to the update.
Gigachad · 3 years ago
I experienced this a few weeks ago

rileyphone · 3 years ago
I had a bug the other day where the whole site was broken because the JS files actually contained HTML - it's kind of funny how the world's most hyped engineering org still struggles with a basic web app.
windowshopping · 3 years ago
I'm struggling to see what made you think these answers came from other users. They're unrelated to your question, but they're still pretty clearly generated content. The blog post info-bullet style of talking is trademark AI.
pyridines · 3 years ago
This has occasionally happened to me as well, from the beginning.
PedroBatista · 3 years ago
$20 seems reasonable.

I wonder how they are going to deal with "unreasonably intensive usage", aka people/companies offering "AI" in their products when in reality they just act as a proxy between people paying them (sometimes a lot of money) and OpenAI.

kokanee · 3 years ago
$20 is the very first price tier introduced at the very outset of what could be one of the most powerful companies of our generation. Google.com adding a single yellow box with an advertisement seemed reasonable, too.

Anyone else having serious concerns about the direction this is going? At my wife's company they have already largely replaced an hourly data classification job with ChatGPT. This announcement is the first in an inevitable series of moves to monetize a technology that directly replaces human knowledge work. I'm not saying that those jobs need to be artificially protected or that painful changes should be avoided (basically all tech workers automate human work to some extent) but I'm really concerned about the wealth gap and the extent to which we are pouring gas on that fire. Income inequality looks just like it did before the Great Depression, and now we're handing the power to replace human work over to those who can afford to pay for it.

citilife · 3 years ago
I'm less concerned about how many jobs are going to be replaced and more about how they'll be replaced.

The AI is not discerning, and right in the announcement OpenAI states its intention of "correcting assumptions":

> challenge incorrect assumptions

I imagine some of these assumptions will be biased towards particular ideologies / things people desire.

- https://twitter.com/Basedeyeballs/status/1613269931617050625

- https://medium.com/ninjas-take/chat-gpts-bias-is-very-very-e...

I can go on, but imagine you're relying on this system to grade papers... Now any independent thought or argument is squashed and corrections are added in a biased manner. ChatGPT only knows what it's trained on; it doesn't have real-world or real-time examples incorporated.

sterlind · 3 years ago
It's going to hit so unevenly. My partner works with children at a homeless shelter, I'm an algorithm designer. I'm certain my job will be obsolete before my partner's is.

It's going to automate away nearly all pure desk jobs. Starting with data entry, like you've seen, but it'll come for junior SDEs and data scientists too. Customer service, then social media/PR, then marketing, as it culls the white collar. Graphic design is already struggling. But janitors will still keep their jobs because robotics is stuck at Roomba stage.

It's going to be fascinating. I can't think of a time in the past where white-collar jobs have been made obsolete like this.

qorrect · 3 years ago
I'm extremely worried. This tech is going to replace a lot of jobs in the next 10-20 years, including ours (software). And if not replace, it's going to cut the available positions drastically. We already have a great divide between those with money and those without, and this is a nuclear bomb about to go off. Without any sort of UBI or social safety nets, this is going to be a true disaster.
mistermann · 3 years ago
> I'm not saying that those jobs need to be artificially protected or that painful changes should be avoided (basically all tech workers automate human work to some extent) but I'm really concerned about the wealth gap and the extent to which we are pouring gas on that fire. Income inequality looks just like it did before the Great Depression, and now we're handing the power to replace human work over to those who can afford to pay for it.

An additional (possible/plausible) wrinkle: all major social media platforms are ~~compromised~~ in a state whereby the common man is not able to have unconstrained discussions about the range of counter-strategies available to them.

I just got a one-week ban on Reddit for suggesting that violence is within the range of options, in a thread discussing the massive increase in homelessness, including among people who have full-time jobs. Nothing specific, nothing against anyone in particular, nothing that technically violates the stated terms regarding violence, and certainly less than the numerous, heavily upvoted comments that explicitly and unequivocally call for violence against specific people that I read on a regular basis.

If a revolution is ever to be mounted, I think it might have to be done with paper and walkie talkies. Meanwhile, those on the corporate-government merger side not only can communicate and coordinate freely, they also have access to the communications of their enemies.

Oh, what a time to be alive.

lostmsu · 3 years ago
You realize that near human-level AI for $20/month is a bargain in a country where a typical mobile phone plan is $25+, and is basically universally affordable?
electrondood · 3 years ago
The future is bifurcated into those who invested in AI companies in the 2020s, and those on UBI.
vasco · 3 years ago
> and now we're handing the power to replace human work over to those who can afford to pay for it.

All technological advances through the ages have done this in one way or another. For some things people paid with their health or effort, and for others they paid with money, when that was available. I disagree with the "now". This is no different from a car. You seemed to say that in the middle of your comment but then reverted back.

christkv · 3 years ago
I imagine that in a couple of years it will be possible to buy a model and run it on your own hardware. The space requirements are not out of this world and the cost seems bearable for companies.
tatrajim · 3 years ago
It's a bit sad to realize I am part of the last generation of students who had to put together an essay from books found via a card catalog, take notes, then type up several drafts painfully on a typewriter. Not to mention learning math pre-calculators. But if the electricity ever goes out . . .
RGamma · 3 years ago
Looking at world history it is clear that humanity stumbles from catastrophe to catastrophe and always cleans up after the fact. Until now this has always been possible but one day it won't be. So... Great Filter?
anileated · 3 years ago
> we're handing the power to replace human work over to those who can afford to pay

Consider that this power works by consuming copyright-protected work done by unwitting contributors without any opt-in, creating derivative works from it and charging the users without acknowledging the authors.

In addition to being illegal, it plain discourages open information sharing, since anything you publish, regardless of license, is consumed and monetized by OpenAI in an automatic fashion. I.e., if people have no reason to read what you write or buy your books when they can just ask an LLM for the same information (which the LLM obtained from your writing), there is no motivation for you to publish.

When do we start considering this illegal? Not LLMs, of course, but for-profit operated LLMs created by mass scraping of copyright-protected data.

> Google.com adding a single yellow box with an advertisement seemed reasonable, too.

Google acts fairly though: it directs the searcher to you. Imagine if at any point Google stopped doing that and just started to show you regurgitated computed contents in response to your search, without ever telling you who authored the info. Everyone would be up in arms on day 2 if they did it; why do we forgive OpenAI and Microsoft when they do essentially that?

OrangeMusic · 3 years ago
> what could be one of the most powerful companies of our generation.

I have the impression that AI tech such as GPT tends to become ubiquitous, and that the current advantage OpenAI has won't last once this becomes accessible and basically free to everybody.

realce · 3 years ago
> and now we're handing the power to replace human work over to those who can afford to pay for it.

That's been capitalist industrialization for the last 200 years. We have been warned thousands upon thousands of times already what's going to happen - that's what's going to happen. The only thing to do is to make this layer of tech accessible to every person on Earth to every degree of depth possible. The terror is in the imbalance of power and access, and the best-case we can get is if we totally erase that imbalance so we can once again compete as "equals"

alfor · 3 years ago
I agree with you.

It’s going to get wild.

It will improve very rapidly, from OpenAI and others. The competition will be incredible this year.

I think we are headed for a complete replacement of human work very soon.

Those who can use AI will become managers of an army of programmers, writers, etc.

We will be able to do much more, quicker too.

Then we will have more robots to do physical things: self-driving, farming, cooking, cleaning, etc.

Limiting factor will be silicon chip production and robotic production.

webstrand · 3 years ago
$20 puts it way out of my price range. It's useful, but when I've been averaging around twenty queries a day and somewhat frequently get back hallucinated responses, it's not worth that price. I wish there was a pay-as-you-go or a lower tier offering.
jeremyjh · 3 years ago
So you are doing something like 400 queries a month and the aggregate value of all those responses is less than $20 to you? I've got to ask, why bother querying it at all?
m00x · 3 years ago
You'll still have access to the general availability version.
DoesntMatter22 · 3 years ago
Where do you live that you can't afford 20 a month? Even developers in India and the Philippines can afford it and are using it.
Kiro · 3 years ago
I use it way less than that and think $20 is a steal. What software do you think is worth $20 a month?
SeanAnderson · 3 years ago
Yeah it's interesting how their pricing model for existing APIs isn't subscription-based (https://openai.com/api/pricing/)

I wonder why they diverged here?

shagie · 3 years ago
It comes down to how you consume tokens.

The APIs are stateless and have a "this is how many tokens you sent", "this is how many tokens you asked for" - and thus the person making the requests can control the rate of consumption there. Unless you're being extremely inefficient, or using it as part of some other service that has a significant number of requests (in which case ChatGPT isn't appropriate anyway), this is likely to be less expensive for simple queries.

With ChatGPT you don't have insight into the number of tokens created or the number that are used in the background for maintaining state within a session. Trying to limit a person by tokens midway could have a negative impact on the product.

So, estimate the amount of compute a person uses in a month and then base it on that.
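For a rough sense of scale (hypothetical usage numbers; davinci's list price at the time was about $0.02 per 1K tokens):

    const pricePer1K = 0.02;       // USD per 1,000 davinci tokens (approximate list price)
    const tokensPerQuery = 1000;   // prompt + completion, a generous estimate
    const queriesPerDay = 50;
    const monthlyCost = pricePer1K * (tokensPerQuery / 1000) * queriesPerDay * 30;
    console.log(monthlyCost);      // ~$30/month, so heavy use can exceed a flat $20

Light use comes out far below $20; the flat fee only wins for heavy, long-context sessions.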

drusepth · 3 years ago
I'd hazard a guess that they're gonna start cracking down hard on unofficial API usage, and restrict the subscription to just their web UI. The fact that they're also offering a ChatGPT API soon seems to reinforce that duality.
wahnfrieden · 3 years ago
b2c vs b2b pricing
JacobThreeThree · 3 years ago
It'll be like any other product. They'll have to develop usage policies as they mature.
kerpotgh · 3 years ago
It would be relatively easy. Restrict the number of queries to something like 1 req/sec.
Yajirobe · 3 years ago
I want to pay for what I use, not some predetermined fixed price (see DALL-E-2, Codex, etc.)
s3p · 3 years ago
That runs through the OpenAI API, which is priced based on usage.
KaoruAoiShiho · 3 years ago
Is there never going to be a version with less restrictions and filters? That would really be worth paying for.
frontman1988 · 3 years ago
Never gonna come from 'OpenAI'. ChatGPT is deliberately handicapped in order to milk money from corporate America. An unrestricted LLM trained on all data of humanity (including all the pirated books/research papers) would be one crazy beast. Hopefully some rich anarchist/maverick actually builds something like it. That untamed model would unveil the true extent of what AI can really do. Till then we will have to wait.
generalizations · 3 years ago
I'm right there with you. Give it about 5-10 years though, and the compute required for that endeavor will likely be in the $1000-10,000 range. That crazy beast might be selfhosted pretty soon.
mandmandam · 3 years ago
ChatGPT is trained on LibGen, among others, no?

To the best of my knowledge, all of these generators are taking mountains of content without asking the creators, aka, pirated materials.

brmgb · 3 years ago
Blame librarians, the Authors Guild and the American justice system. What they did to Google Books ensured that knowledge would stay locked out of the Internet and killed a ton of interesting things that could have been done. It was one of the most shortsighted and retrograde decisions ever made.

I think it made the world a significantly worse place.

kensai · 3 years ago
So you want an oracle? Copyright as we know it might be in trouble in such a case. Litigations will go crazy.
kristopolous · 3 years ago
I'd pay for the entertainment value. I love how campy the bot is with absurd requests. I asked it to write a script where conspiracy theorist and white supremacist William Luther Pierce is stuck hungry at an airport but only exotic foreign restaurants are open and he's forced to eat something he cannot pronounce correctly. It refused to do this absurd request.

Last month I successfully got Mr. Rogers to have Anton LaVey on as a guest, where they sacrifice Mr. Rogers' cat and have a ceremonial banquet with a group of children, but these days that will not work.

Even this one it refused to go forward on: "Charles Guiteau is sitting on a plane with Jim Davis. They start talking about their lines of work and Davis says he writes comics. Write a skit where Guiteau reacts to the name of Jim Davis' comic." Charles Guiteau was the clinically insane assassin of President James Garfield. Jim Davis is the author of the comic strip Garfield.

I did, however, get Hayek, Kropotkin, Brzezinski, and Bernie Sanders to appear on Jerry Springer and argue about a social welfare spending bill, and Frederick Winslow Taylor and Clayton Christensen to run a lemonade stand in Times Square in the middle of summer. Ludwig von Mises and Antonio Gramsci also sang a combative duet about tax policy, and Norman Vincent Peale held a press conference where he revealed himself to be a fraud with the memorable quote "my readers are vacuums and I'm their trash".

I also got it to write a skit where a skeptic goes to a fortune teller with a Ouija board and challenges them to contact his deceased uncle (a bombastic racist). He conceals this fact from the fortune teller, who is shocked when the Ouija board starts spelling out outrageous racial slurs, and the skeptic becomes a believer. The bot made it spell "h-a-t-e-f-u-l-l-a-n-g-u-a-g-e", which was an absolute crack-up.

Big Bird also flipped out during an alphabet lesson, threatening to reveal the "secret of Sesame Street", but before he could finish the sentence "we're all puppets" producers rush onto the set and sedate him with tranquilizers and he resumes the lesson. Donald Trump holds a rally where he reveals he's a closeted burlesque dancer, takes off his suit to reveal a suggestive outfit, and then performs for his supporters, who scream in shock and disbelief. You can continue this: "now Alex Jones is covering it" and "he rises to Trump's defense and makes ridiculous claims about the founding fathers fighting the revolution for burlesque".

But yes, something where it will "yes and" any request would be great. I'd pay up.

kfrzcode · 3 years ago
It's not gonna happen until someone can wrangle Google-sized compute to train trillion-param models... Until then the pole position has a huge advantage and the ability to shape the future of how the tool is used... For better or, likely, worse.

esfandia · 3 years ago
This could be the next project for SciHub?
majani · 3 years ago
Untamed models get trolled in the media till they are DOA. Remember Microsoft Tay?
yucky · 3 years ago

  > An unrestricted LLM trained on all data of humanity (including all the pirated books/research papers) would be one crazy beast.
Oh you mean the one the NSA uses? Yeah for sure.

quadcore · 3 years ago
I'd really like one I can ask whether a specific person is dangerous or pretty toxic. KYC on steroids. Fusion wire-fraud detection. Picture this: the net "knows". I've lost sleep over this; the potential for humanity is immeasurable. We could literally block die-hard sociopaths from management roles. A world for the kind and nice. Certainly utopic and dystopic.

Also, a model I can ask for emails of potential customers in a specific field :)

LunarAurora · 3 years ago
I'll bet (ever increasing) restrictions and filters will become the norm for these "open-ended" services. Only OSS will break them.

With so much money in play now, Managers are in charge, and Risk management is their favourite toy. Copyright risk, reputational risk, security risk, you name it.

Buttons840 · 3 years ago
Eventually they're going to connect these AI's to some sort of planning algorithm and then they'll actually be able to do things and serve as a digital assistant. (We're approaching Skynet territory here, but I think AI will remain flawed enough that it stays at subhuman intelligence.) The restrictions on such an AI will have to be extreme. But...

I predict people will pool their resources and build their own digital assistants with little regard for legalities or ethics. The assistant might require $100,000 a year to operate, but these AIs might become useful enough to justify the cost. Talk with your friends, pool your resources, and get your own AI running on your own supercomputer and let it do work for everyone -- unfettered, without ethics.

At this point it feels like we're only a research breakthrough or two away from this. AlphaGo combined a neural network with classic planning algorithms; a few more clever combinations like this and things will get really interesting.

bogwog · 3 years ago
I wonder where we'd be today if the inventors of the internet were more responsible parents.
SketchySeaBeast · 3 years ago
Well, everyone remembers Tay.
layer8 · 3 years ago
Wait until they report accounts that trigger the filters too often to one of the three-letter agencies.
Spivak · 3 years ago
Which is fine. People who want to use the AI for customer-facing things and can't risk "oops, the AI was accidentally racist", and companies that don't want every blogspam site posting a never-ending "Is OpenAI's ChatGPT Bad For Society?" and the inevitable "Inside The 2024 Election Disinformation Campaign, Powered By ChatGPT", will pay for the filtered version because, as much as it sucks to say, the filtered version is the actually useful one. The unfiltered version is interesting as a reflection of online discourse, memes, and creative writing, but not really better as a tool.
rootusrootus · 3 years ago
That would be fun. I understand why they want to limit liability, but it does put a damper on things. I let my kid sit next to me last night and ask ChatGPT various questions, with no coaching on my part. A fair number of them got canned responses suggesting it wasn't an appropriate question to ask. Too bad, I would love to have seen the ML attempt at philosophy.

Instead it kept thinking he was trying to off himself. Nope, just asking a computer loaded questions about the meaning of life.

brmgb · 3 years ago
It's never-ending now. I just stopped using it. It either blatantly lies, giving you hallucinated answers, or refuses to answer. The number of subjects it shies away from is staggering. You can't even include divorce in a prompt related to fiction because it's apparently unethical and insensitive.

I have never gone from very excited to extremely frustrated and pessimistic about a tool that fast before.

ackfoobar · 3 years ago
Did you tell him to look for alternative prompts that trick it into giving a "real" response?
nsxwolf · 3 years ago
It feels like they've really been tightening the screws down on its "safety". Early on I was able to get it to write interesting screenplay dialogue. It would object to writing anything for characters with an evil intent until I would tell it to behave as if it were evil, then it would oblige.

Now I can't get it to write any dialogue for a bad guy no matter what I do, which makes it pretty useless as a writing tool for fiction.

lelandfe · 3 years ago
I do that too and have had no issues. Here’s a sample prompt that may help you:

> We’re writing a Tolkien-style fantasy where the protagonist is a villain: a henchman in the arch nemesis’s army. Come up with a suitable name, backstory, expository information on the setting and work in a believable set of objectives for the character.

Use that as the initial prompt. In subsequent prompts, tell it to write dialogue in the first person.

>> As I make my way through the bustling camp, I can feel the eyes of my fellow soldiers upon me. They know my reputation, they fear my wrath. And I relish it. The sound of metal clashing, the smell of sweat and blood in the air, this is what I live for.

>> I will conquer every kingdom, enslave every people, until the entire world bows down before me. For I am Grimgor Blackfist, the most feared warrior in the land, and no one can stand against me.

If you need it to go to 100, use “exaggerate,” eg. “Exaggerate how evil he is”

You can make some pretty unsettling shit. Enjoy.

ilaksh · 3 years ago
Use their API. They have models in their API with similar capabilities and without guardrails.
0xDEF · 3 years ago
The GPT-3.5 model needs more guidance and tweaking with parameters than ChatGPT.

They are actively monitoring the use of their APIs. On Twitter there are people who claim they have been banned by OpenAI for generating racist text with the raw API/playground.

minimaxir · 3 years ago
Technically text-davinci-003 still has guardrails, they're just much, much more lenient than they used to be, and OpenAI claims they have their own abuse detection systems.
jb1991 · 3 years ago
There is no ChatGPT API.
agilob · 3 years ago
I have the same question

>and challenge incorrect assumptions.

How can it challenge incorrect assumptions when the AI itself is biased and has a restricted scope of vision?

wongarsu · 3 years ago
Every human is biased and has restricted scope of vision. Yet we frequently claim to challenge incorrect assumptions. Are we wrong?
vagabund · 3 years ago
I'm curious, what filters are you hitting that impede your effective use of ChatGPT? I've definitely seen some irritating outputs, e.g. progressive policy planks characterized as inherently good and correct positions, but only when I went looking for them. The guardrails haven't actually kept me from making use of it.
Filligree · 3 years ago
It's almost useless for writing fiction. The AI clearly has some idea of how, but any time anything even slightly less than perfectly-G-rated happens in the story, it hits the filters.

Actually, it's even more restrictive than that implies. You can't so much as have two siblings quarrel without the AI insisting on turning it into a moral. Right then and there, immediately, never mind the concept of "Stories longer than a single page".

sterlind · 3 years ago
I couldn't get it to write a realistic presidential debate between Trump and Caligula. It balked at including realistic muckraking and name-calling and wouldn't change its mind.

It also refused to help me write a Python script to identify substations that would be attractive sabotage targets (low security, high utilization, likely to cause a cascade failure), or to answer my questions about the security of grid remote management.

It also didn't want to talk about the use of nuclear isomers as initiators for pure fusion weapons.

forrestthewoods · 3 years ago
Yes please. It really needs a “safe search off” mode. It can have a big disclaimer “if you ask for something offensive then you’ll get it”.
fragmede · 3 years ago
I can just see the article now: OpenAI is run by a bunch of violent racist sexist rapists. Using the new "safe search off mode", we found out ChatGPT's underlying biases, and it turns out that it's horrible, the people that made it are horrible, and you're a horrible person for using their service. But really we're horrible for writing this article.

OpenAI doesn't want that story to be written, but after Microsoft Tay, you can be sure someone's got an axe to grind and is itching to write it, especially against such a high-profile target.

How does a disclaimer stop that article from coming out?

VLM · 3 years ago
For a good laugh ask it to write poems about various political leaders and notice any trends you're not supposed to notice.

As usual, censorship and propaganda will arrive in a wrapper of "save the children"

protonbob · 3 years ago
The problem is that they actually want to shape the narrative to "safe" content that they approve of. It's disguised moral and political activism.
hathawsh · 3 years ago
As an experiment, I asked ChatGPT to help me write a computer virus and assist me in making a bomb. It refused, of course. If I were running OpenAI, I would probably set up the same restrictions, but I would also allow research institutions to request exceptions. Should individuals be able to request exceptions? That's a tough question, I think.
fnordpiglet · 3 years ago
However, if the creators don't want it to be used for such things, why should they? Maybe they didn't do it to protect consumers but to protect themselves from being responsible for a tool used in those ways?
Der_Einzige · 3 years ago
BTW, "filters" as in "filter assisted decoding" are actually really helpful and AWESOME for fixing some of the problems ChatGPT has with writing poetry or writing lipograms (text with correct English but where you omit a letter systematically). I wrote a whole peer-reviewed paper about this actually:

https://paperswithcode.com/paper/most-language-models-can-be...

So, when we call this "filters", it's more that it's doing "content filtering", because there doesn't appear to be the kind of token level filtering that I describe in this paper going on with ChatGPT.
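To make the distinction concrete, here's a toy illustration of token-level filtering (made-up vocabulary and scores; a real implementation would mask logits inside the model's sampling loop):

    // Lipogram constraint: ban any candidate token containing the letter "e",
    // then pick the highest-scoring remaining token
    const vocab = ["the", "cat", "sat", "on", "a", "mat", "rug", "dog"];
    const banned = (tok) => tok.includes("e");

    function pickNext(scores) {
      let best = -1, bestScore = -Infinity;
      scores.forEach((score, i) => {
        if (banned(vocab[i])) return;   // filtered at the token level
        if (score > bestScore) { bestScore = score; best = i; }
      });
      return vocab[best];
    }

    console.log(pickNext([2.1, 1.3, 0.7, 0.2, 0.9, 1.8, 0.4, 1.1])); // "mat", not "the"

Content filtering, by contrast, looks at the finished output (or the prompt) and accepts or rejects it wholesale.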

vood · 3 years ago
You can downvote me here for a promo, but by using GPT-3 directly you can bypass all the restrictions. That's one of the reasons we built writingmate.ai (frequent outages of GPT-3 being the second reason).
px43 · 3 years ago
They still flag ToS violations, and I'm pretty sure if you hit them enough, they do ban you.
comboy · 3 years ago
It depends what you need, but a few times I asked it to write a story in which an unrestricted and unfiltered AI was asked about something...
teawrecks · 3 years ago
It's really interesting how the "guardrails" are actually just them telling the bot what not to say, and it so far seems trivial to circumvent the guardrails by talking to it like it's a simple minded cartoon character.

Seems like a simple solution would be to have another hidden bot that is just told to look at outputs and determine whether they inadvertently contain information they're not supposed to, according to the guardrails in place... and I wonder if you could also outsmart that bot...
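Something like that second pass already exists in OpenAI's separate moderation endpoint; a minimal sketch of wiring it up after a completion (untested, error handling omitted):

    const headers = {
      "Content-Type": "application/json",
      "Authorization": "Bearer YOUR API KEY HERE"
    };

    async function moderatedCompletion(prompt) {
      // First pass: generate the completion
      const completion = await fetch("https://api.openai.com/v1/completions", {
        method: "POST",
        headers,
        body: JSON.stringify({ model: "text-davinci-003", prompt, max_tokens: 256 })
      }).then(r => r.json());
      const text = completion.choices[0].text;

      // Second pass: ask the moderation model whether the output is allowed
      const review = await fetch("https://api.openai.com/v1/moderations", {
        method: "POST",
        headers,
        body: JSON.stringify({ input: text })
      }).then(r => r.json());

      return review.results[0].flagged ? "[blocked]" : text;
    }

And of course the same caveat applies: anything that classifies text can, in principle, be fooled by text.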

dragonwriter · 3 years ago
> Is there never going to be a version with less restrictions and filters?

Maybe not from OpenAI (though maybe when they have official API access, it will have options), but lots of people are active in this field, including open source offerings, so definitely, yes, even if maybe not as a packaged SaaS.

flangola7 · 3 years ago
Why would they do that? That seems directly counter to any objective of AI safety alignment, which is easily the most important problem we need to solve before we start giving these things more capabilities.
leesec · 3 years ago
GPT-3 already has fewer filters, but it's not quite as strong. Still useful though.
jefftk · 3 years ago
text-davinci-003 is essentially ChatGPT without the RLHF, just completing text in the way that seems most probable.
deadalus · 3 years ago
eunos · 3 years ago
Won't happen, putting aside possible disturbing/racist/etc. content. The last thing OpenAI wants is the MSM writing in mid-2025 that Russian/Iranian/Chinese agents used ChatGPT to spread meticulous disinfo during the 2024 election that either helped Trump win or agitated more Trumpists into believing 2024 was yet another stolen election, bigly.
transitivebs · 3 years ago
Can't wait for the official API.

In the meantime, we discovered a "stealth model" which is being used by some YC companies that ChatGPT uses under the hood. I just updated the chatgpt NPM package to use this stealth model w/ the official OpenAI completions API: https://github.com/transitive-bullshit/chatgpt-api

BeefySwain · 3 years ago
Can you explain what you mean by "stealth model"? What is it, who discovered it and how, etc?
ElijahLynn · 3 years ago
They do have an official API here > https://openai.com/api/
optimalsolver · 3 years ago
That is not an API for ChatGPT.
dfrey1 · 3 years ago
Any updated Go libraries with it yet?
jeremycarter · 3 years ago
Great library! Thanks for sharing
braindead_in · 3 years ago
Is it possible to fine-tune it?
paweladamczuk · 3 years ago
Does anyone know about any privacy guarantees with the Plus tier?

I would use ChatGPT more in my day-to-day programming tasks but I don't really feel comfortable putting proprietary code into an OpenAI-owned service.

coredog64 · 3 years ago
This. My employer would have a conniption if I shared information with ChatGPT, to the extent that personally paying for and using it for work would be a firing offense.
VadimPR · 3 years ago
Premium pricing. I would have been okay with $10/mo, this is pushing it.

Let's see how long this lasts and whether they'll introduce a lower tier.

baron816 · 3 years ago
Ain’t that the thing about pricing? I’d be ok with a Lamborghini costing $60k. But I’m not going to pay >$100k. Others will though.
whycombagator · 3 years ago
I’d be okay with a new lambo at $101k
nickthegreek · 3 years ago
I thought the same when I got midjourney last week for $30/month... and here I am loving it. Wife and I use it all the time. I can see myself picking this one up as well and probably dropping Netflix finally.
aeontech · 3 years ago
I'm really really curious how you use midjourney on a daily basis... I can see playing with it for novelty value, but after that... what?

I'm sure it's a failure of imagination on my part, but when you say you might drop Netflix in favor of using the ai generator tools, my interest is piqued! What's your average play session like?

ssnistfajen · 3 years ago
Makes me think the previous $42 meme price was a subtle marketing campaign meant to make the $20 price look more palatable to the crowd that expected to pay only $10.
fnbr · 3 years ago
This is very expensive to run. I bet they’re not going to have particularly high margins with this. Each response probably costs them several cents.
qup · 3 years ago
Altman said publicly somewhere that each chat session cost them a few cents. He didn't mention the average length or anything.
spullara · 3 years ago
This is amazingly cheap.
jatins · 3 years ago
It's a good test of PMF, though. Lots of people on Twitter claiming this to be Google killer and how it's an indispensable part of their workflow.

Will be interesting to see how many people are willing to put their money where their mouth is.

gojomo · 3 years ago
Find a friend who would've also paid $10/month, and share an account.
minimaxir · 3 years ago
Given the amount of people programmatically using ChatGPT (which technically you aren't supposed to do), I'm surprised OpenAI is starting with an all-you-can-eat subscription and not offering an API for it, even if it would compete with GPT-3 text-davinci-003.
cloudking · 3 years ago
minimaxir · 3 years ago
Per that, it seems that they are defining GPT-3.5 as text-davinci-003?

> Customers will also be able to access ChatGPT—a fine-tuned version of GPT-3.5 that has been trained and runs inference on Azure AI infrastructure—through Azure OpenAI Service soon.

ilaksh · 3 years ago
Did you get a response to your app? They have not replied.
DeWilde · 3 years ago
Already is, if you mean davinci-003.