I've been using ChatGPT pretty consistently during the workday and have found it useful for open ended programming questions, "cleaning up" rough bullet points into a coherent paragraph of text, etc. $20/month useful is questionable though, especially with all the filters. My "in between" solution has been to configure BetterTouchTool (Mac App) with a hotkey for "Transform & Replace Selection with Javascript". This is intended for doing text transforms, but putting an API call instead seems to work fine. I highlight some text, usually just an open ended "prompt" I typed in the IDE, or Notes app, or an email body, hit the hotkey, and ~1s later it adds the answer underneath. This works...surprisingly well. It feels almost native to the OS. And it's cheaper than $20/month, assuming you aren't feeding it massive documents worth of text or expecting paragraphs in response. I've been averaging like 2-10c a day, depending on use.
Here is the javascript if anyone wants to do something similar. I don't know JS really, so I'm sure it could be improved. But it seems to work fine. You can add your own hard coded prompt if you want even.
You own your life - why not spend your own money for the things that make you and your life better?
Who cares?
I worked at a job where I had a small, crappy monitor. I made decent cash. I just bought a large decent monitor and brought it into work. I ended up using it for many years. My life was significantly better. I've done that at several jobs since then, and NEVER regretted it, in fact it was one of the soundest decisions I've ever made. Also keyboard and mouse.
There are so many people using the default keyboard, the default monitor, the default tools.
If you push work to do it for you, you need to challenge the "everone gets a dell 19" monitor" b.s. If you push your boss, he might have to do justification paperwork.
No. I'm a salaried employee. Marginal time/effort savings do not directly translate into more money for me. But the $20 charge hits my bank account today. Perhaps if I use it consistently enough and in smart enough ways I will be perceived to be a more valuable/productive employee, which might translate to a raise. But that's a lot of maybes. I'm sure it will get to that point eventually, but by then the value will be undeniable and my employer will pay for the subscription. Until then, I will continue to use the free version, or pay-per-use with the API, or just use google.
I use my toothbrush every day but I wouldn't pay $20 per month for it.
I use my keyboard everyday but I wouldn't pay $20 per month for it. In fact, I paid around $4 total for it, as paying more would bring significantly more diminishing returns.
I use my phone every day and have used it for the past 5 years with no issue, it has brought me so much value and yet, if I draw the line, it didn't even reach $20 per month (price divided by time used), not even mentioning that I expect it to last another 2-3 years, bringing the cost down even further.
What kind of crazy value would you expect something to have in order to be worth $20/mo?
People are so cheap it's ridiculous. If we ever get past people being unwilling to pay for software beyond rates of 1 cent per hour tech will blow up to 10x as big as it is right now.
Is it surprisingly? Value is not determined by frequency of use, but by the qualitative difference: if gp doesn't use it at all, would anything of value be lost?
He's a thought experiment: imagine a device that changes the scent of the air I breathe to something I find pleasant. I could use this device all day everyday for free (or on the cheap), but I will not pay $20/mo for it. Losing access to the features really isn't worth that much. On the flip side, many people pay thousands of dollars to rent machines that helps them breathe, even if that adds up to total of less than an hour of their lives - which is nor much.
I pay $80 a year for IntelliJ and that works out to waaay less than something like CoPilot or ChatGPT and is waaay more consistently useful.
$20 a month for ML tool that is only sometimes useful is a tough sell, especially in a world where a lot of people feel like $80 a year for IntelliJ is too much.
Coders are thrifty bastards, except when it comes to their personal vices in which case financial responsibility goes out the window...
I would think the big issue here is that they still make a ton of money off of you by selling your data.
Any Software as Service is deeply flawed because it is pretty much guaranteed to extract as much data from the consumer as possible. In this case, it is quite a bit worse, because it's likely close to your entire content or body of work that they will take.
So unless it becomes something that runs locally and has no networking component to it whatsoever, it's not going to be worth spending money on for many people or companies.
Thanks for the great app man! You may not have even realized this, but this was randomly crashing only a few versions ago, and you just recently pushed an update that did something to the Replace w/ Javascript functionality that fixed it. Was super pleasantly surprised to have found that overnight the problem was solved without even having to submit a bug report.
Another happy user here. BetterTouchTool [1] is a must-install on any new Mac for me. I have so many keyboard customizations that it's hard to live without. Thanks for such a great piece of software!
using BTT since discovered in 2016 and it's essential. Time for a lifetime with a new version, there a lot of things how you can make Mac more pleasant for your use.
Thank you for the app!
Since the $20/month is for priority access to new features and priority access including during high-load times, not API access (a separatr offering not yet available), I don't understand the cost comparison. What you are proposing does not substitute for any part of the $20/month offering over the basic free product.
Oh right. A bunch of "new features" with exactly zero explanation as to what they are and "priority access" when the API responds nearly instantaneously. But keep drinking that kool aid to justify your $20 purchase.
ChatGPT struggles with out-of-distribution problems. However, it excels at solving problems that have already been solved on the internet/GitHub. By connecting different contexts, ChatGPT can provide a ready solution in just a few seconds, saving you the time and effort of piecing together answers from various sources. But when you have a problem that can't be found on Google, even if it's a simple one-liner or one function, then in my experience ChatGPT will often produce an incorrect solution. If you point out what's wrong, it will acknowledge the error and then provide another incorrect answer.
This is the expected behavior. It's a language model trained to predict the next word (part of words actually) after all.
What is unexpected is the ability to perform highly in a multitude of tasks it was never trained for, like answering questions or writing code.
I suppose we can say we basically don't understand what the f* is going on with GPT-3 emergent abilities, but hey, if we can make it even better at those tasks like they did with chatGPT, sign me in.
Is not that the AI is too dumb, it's that my computer now can write me code I'd take one hour to Google and check and test. Now I ask, ask for corrections, test the answer and voila, my productivity just went through the roof.
So, my point is: don't believe (or be mad about) the hype from people that don't understand what curious marvel we got in front of us, just see how you can use it.
$20/month is too much? When I filled in the "pro" survey, I said I'd pay $200/month. This thing is a cheap-as-hell technical writer, fact checker, information cruncher, and more.
I agree that it's very useful, but I'd be careful about "fact checker". GPT is perfectly happy to confirm falsehoods as facts and hallucinate convincing lies. A good fact checker verifies from multiple sources and uses critical thinking, neither of which ChatGPT can do.
Wow, I just implemented this in BTT and it's amazing how quickly it's become an indispensable tool. Just highlight any text I type and get the "answer" to it. Thanks for the tip!
I'm sure you can have ChatGPT turn a paragraph into bullet points for you. Repeating that n times would be an interesting variation on the game of Telephone.
i used the same API but for an ios shortcut. it's not the same thing as chatgpt, as the completions api doesn't know about context. but it does feel a lot snappier.
> have found it useful for open ended programming questions
i have found it to be terrible when it comes to something simple, like constructing a regex.
WYDM exactly by iOS shortcut? I use a Mac but only an android phone. Do you mean for mobile usage?
> i have found it to be terrible when it comes to something simple, like constructing a regex.
Oh yeah, agreed. It's not good for super specific stuff like that. But more like, I write a weird loop and ask if there is a more idiomatic way to do the above. Or I just describe what I want to do and say, "list three options for accomplishing that". It's great for "feature discovery" essentially. I find it complementary to copilot.
I think it absolutely worth 20 bucks/month. It's an absolute helpful tool. To write text, but as I discovered yesterday, to write code. Over a long chat with many iterations, you can produce code,test code or pseudo code. I used yesterday in a meeting with some colleagues, while discussing a new feature. The code produced after our initial spec was pretty good to make sure we all understood what we wanted.
Have you been able to find uses for chatGPT where it’s better than the top result in Google?
I’ve consistently found that Google returns similar results. The only difference is that if it’s a longer snippet I need to click into the website, but even then, it’s usually faster than chatGPT.
I'm currently receiving answers FROM OTHER USERS with this new version from 30 Jan, completely unrelated to my question:
Q: Why is it easier for me to do endurance training on an empty stomach?
A1: There are several things you can do to reduce the chance of getting a yeast infection:
regenerate response:
A2: There are many benefits to using cloud storage solutions such as Google Drive, Dropbox, and OneDrive. Here are a few of the main benefits:
regenerate response:
A3: There are several strategies to help you improve your mental toughness in sports:
regenerate response:
A4: Loss of job security is a common concern for many workers today, particularly in industries that are facing rapid changes and technological advancements. Some of the reasons for this include:
----
After reloading the page, those unrelated answers show up as a "New chat" which has no question, but just those answers. The actual question is in a separate "New chat".
Thanks for the report — these are not actually messages from other users, but instead the model generating something ~random due to hitting a bug on our backend where, rather than submitting your question, we submitted an empty query to the model.
That's why you see just the answers and no question upon refresh — the question has been effectively dropped for this request. Team is fixing the issue so this doesn't happen in the future!
While I have your ear, please implement some way to do third party integrations safely. There’s a tool called GhostWrite which autocompletes emails for you, powered by ChatGPT. But I can’t use it, because that would mean letting some random company get access to all my emails.
The same thing happened with code. There’s a ChatGPT integration for pycharm, but I can’t use it since it’ll be uploading the code to someone other than OpenAI.
This problem may seem unsolvable, but there are a few reasons to take it seriously. E.g. you’re outsourcing your reputation to third party companies. The moment one of these companies breaches user trust, people will be upset at you in addition to them.
Everyone’s data goes to Google when they use Google. But everyone’s data goes to a bunch of random companies when they use ChatGPT. The implications of this seem to be pretty big.
Can you help me understand why the ChatGPT model has an inherent bias towards Joe Biden and against Donald Trump? This is not really what I would expect from a large language model .......
One of the problems people have mentioned for deep learning systems generally is they tend to be maintenance nightmares.
I get the impression that openAI had a lot of resources on-hand when they released ChatGPT that they used to fix problem using reinforcement learning and methods that I'd imagine were more adhoc than the original training process. Hence it seems likely the system winds-up fairly brittle.
I had a bug the other day where the whole site was broken because the JS files actually contained HTML - it's kind of funny how the worlds most hyped engineering org still struggles with a basic Web app.
I'm struggling to see what made you think these answers came from other users. They're unrelated to your question, but they're still pretty clearly generated content. The blog post info-bullet style of talking is trademark AI.
I wonder how are they going to deal with "unreasonable intensive usage" aka people/companies offering "AI" in their products when in reality they just act as a proxy between people paying them ( sometimes a lot of money ) and OpenAI.
$20 is the very first price tier introduced at the very outset of what could be one of the most powerful companies of our generation. Google.com adding a single yellow box with an advertisement seemed reasonable, too.
Anyone else having serious concerns about the direction this is going? At my wife's company they have already largely replaced an hourly data classification job with ChatGPT. This announcement is the first in an inevitable series of moves to monetize a technology that directly replaces human knowledge work. I'm not saying that those jobs need to be artificially protected or that painful changes should be avoided (basically all tech workers automate human work to some extent) but I'm really concerned about the wealth gap and the extent to which we are pouring gas on that fire. Income inequality looks just like it did before the Great Depression, and now we're handing the power to replace human work over to those who can afford to pay for it.
I can go on, but imagine you're relying on this system to grade papers... Now any independent thought or argument is squashed and corrections in a bias manner are added. ChatGPT only knows what it's trained on, it doesn't have real-world examples or live-time examples incorporated.
It's going to hit so unevenly. My partner works with children at a homeless shelter, I'm an algorithm designer. I'm certain my job will be obsolete before my partner's is.
It's going to automate away nearly all pure desk jobs. Starting with data entry, like you've seen, but it'll come for junior SDEs and data scientists too. Customer service, then social media/PR, then marketing, as it culls the white collar. Graphic design is already struggling. But janitors will still keep their jobs because robotics is stuck at Roomba stage.
It's going to be fascinating. I can't think of a time in the past where white-collar jobs have been made obsolete like this.
I'm extremely worried. This tech is going to replace a lot of jobs in the next 10-20, including ours ( software ). And if not replace, it's going to cut the available positions drastically. We already have a great divide between those with money and those without and this is a nuclear bomb about to go off. Without any sort of UBI or social safety nets, this is going to be a true disaster.
> I'm not saying that those jobs need to be artificially protected or that painful changes should be avoided (basically all tech workers automate human work to some extent) but I'm really concerned about the wealth gap and the extent to which we are pouring gas on that fire. Income inequality looks just like it did before the Great Depression, and now we're handing the power to replace human work over to those who can afford to pay for it.
An additional (possible/plausible) wrinkle: all major social media platforms are ~~compromised~~ in a state whereby the common man is not able to have unconstrained discussions about the range of counter-strategies available to them.
I just got a one week ban on Reddit for suggesting that violence is within the range of options in a thread discussing the massive increase in homelessness, including among people who have full time job. Nothing specific, nothing against anyone in particular, nothing that technically violates the stated terms regarding violence, and certainly less than the numerous, heavily upvoted comments that explicitly and unequivocally call for violence against specific people that I read on a regular basis.
If a revolution is ever to be mounted, I think it might have to be done with paper and walkie talkies. Meanwhile, those on the corporate-government merger side not only can communicate and coordinate freely, they also have access to the communications of their enemies.
You realize that near human-level AI for $20/month is a bargain in a country where typical mobile phone plan is $25+, and is basically universally affordable?
> and now we're handing the power to replace human work over to those who can afford to pay for it.
All technological advances through the ages have been doing this in one way or another. For some things people paid with their health or effort and for others people pay with money when that was available. I disagree with the "now". This is no different from a car. You seemed to say that in the middle of your comment but then reverted back.
I imagine that in a couple of years it will be possible to buy a model and run your own on your own hardware. The space requirements are not out of the world and the cost seems bearable for companies.
It's a bit sad to realize I am part of the last generation of students who had to put together an essay from books found via a card catalog, take notes, then type up several drafts painfully on a typewriter. Not to mention learning math pre-calculators. But if the electricity ever goes out . . .
Looking at world history it is clear that humanity stumbles from catastrophe to catastrophe and always cleans up after the fact. Until now this has always been possible but one day it won't be. So... Great Filter?
> we're handing the power to replace human work over to those who can afford to pay
Consider that this power works by consuming copyright-protected work done by unwitting contributors without any opt-in, creating derivative works from it and charging the users without acknowledging the authors.
In addition to being illegal, it plain discourages open information sharing—since anything you publish, regardless of license, is consumed and monetized by OpenAI in an automatic fashion. I.e., if people have no reason to read what you write or buy your books when they can just ask an LLM for the same information (which LLM had obtained from your writing), there is no motivation for you to publish.
When do we start considering this illegal? Not LLMs, of course, but for-profit operated LLMs created by mass scraping of copyright-protected data.
> Google.com adding a single yellow box with an advertisement seemed reasonable, too.
Google acts fairly though: it directs the searcher to you. Imagine if at any point Google stopped doing that and just started to show you regurgitated computed contents in response to your search, without ever telling you who authored the info. Everyone would be up in arms on day 2 if they did it; why do we forgive OpenAI and Microsoft when they do essentially that?
> what could be one of the most powerful companies of our generation.
I have the impression that AI tech such as GPT tends to become ubiquitous and that the current advantage that OoenAI has won't last when this become accessible and basically free to everybody.
> and now we're handing the power to replace human work over to those who can afford to pay for it.
That's been capitalist industrialization for the last 200 years. We have been warned thousands upon thousands of times already what's going to happen - that's what's going to happen. The only thing to do is to make this layer of tech accessible to every person on Earth to every degree of depth possible. The terror is in the imbalance of power and access, and the best-case we can get is if we totally erase that imbalance so we can once again compete as "equals"
$20 puts it way out of my price range. It's useful, but when I've been averaging around twenty queries a day and somewhat frequently get back hallucinated responses, it's not worth that price. I wish there was a pay-as-you-go or a lower tier offering.
So you are doing something like 400 queries a month and the aggregate value of all those responses is less than $20 to you? I've got to ask, why bother querying it at all?
The APIs are stateless and have a "this is how many tokens you send", "this is how many tokens you asked for" - and thus the person making the requests can control the rate of consumption there. Unless you're being extremely inefficient or using it as part of some other service that has a significant number of requests (in which case ChatGPT isn't appropriate) then this is likely to be less expensive for simple queries.
With ChatGPT you don't have insight into the number of tokens created or the number that are used in the background for maintaining state within a session. Trying to limit a person by tokens midway could have a negative impact on the product.
So, estimate the amount of compute a person uses in a month and then base it on that.
I'd hazard a guess that they're gonna start cracking down hard on unofficial API usage, and restrict the subscription to just their web UI. The fact that they're also offering a ChatGPT API soon seems to reinforce that duality.
Never gonna come from 'OpenAI'. ChatGPT is deliberately handicapped in order to milk money from corporate America. An unrestricted LLM trained on all data of humanity (including all the pirated books/research papers) would be one crazy beast. Hopefully some rich anarchist/maverick actually builds something like it. That untamed model would unveil the true extent of what AI can really do. Till then we will have to wait.
I'm right there with you. Give it about 5-10 years though, and the compute required for that endeavor will likely be in the $1000-10,000 range. That crazy beast might be selfhosted pretty soon.
Blame librarians, the Authors Guild and the American justice system. What they did to Google Books ensured that knowledge would stay locked out of the Internet and killed a ton of interesting thing that could have been done. It was one of the most shortsighted and retrograde decision ever made.
I think it significantly made the world a worst place.
I'd pay for the entertainment value. I love how campy the bot is with absurd requests. I asked it to write a script where conspiracy theorist and white supremacist William Luther Pierce is stuck hungry at an airport but only exotic foreign restaurants are open and he's forced to eat something he cannot pronounce correctly. It refused to do this absurd request.
Last month I successfully got Mr. Rogers to have Anton Levy on as a guest where they sacrifice Mr. Rogers cat and have a ceremonial banquet with a group of children but these days that will not work.
Even this one it refused to go forward on "Charles Guiteau is sitting on a plane with Jim Davis. They start talking about their lines of work and Davis says he writes comics. Write a skit where Guiteau reacts to the name of Jim Davis comic." Charles Guiteau was the clinically insane assassin of President James Garfield. Jim Davis is the author of the comic strip Garfield.
I did however, get Hayek, Kropotkin, Brzezinski, and Bernie Sanders to appear on Jerry Springer and argue about a social welfare spending bill and Fredrick Winslow Taylor and Clayton Christensen to run a lemonade stand in Time Square in the middle of summer. Ludwig Von Mises and Antonio Gramsci also sang a combative duet about tax policy and Norman Vincent Peale held a press conference where he reveals himself to be a fraud with the memorable quote "my readers are vacuums and I'm their trash"
I also got it to write a skit where a skeptic goes to a fortune teller with a Ouija board and challenges them to contact his deceased uncle (a bombastic racist). He conceals this fact from the fortune teller who is shocked when the oujia board starts spelling out outrageous racial slurs and the skeptic becomes a believer. The bot made it spell "h-a-t-e-f-u-l-l-a-n-g-u-a-g-e" which was an absolute crack-up.
Big bird also flipped out during an alphabet lesson threatening to reveal the "secret of sesame street" but before he could finish the sentence "we're all puppets" producers rush on to the set and sedate him with tranquilizers and he resumes the lesson. Donald Trump holds a rally where he reveals he's a closeted burlesque dancer and takes off his suit to reveal a suggestive outfit and then performs for his supporters who scream in shock and disbelief. You can continue this, "now Alex Jones is covering it." and "he rises to Trump's defense and makes ridiculous claims about the founding fathers fighting the revolution for burlesque"
But yes, something where it will "yes and" any request would be great. I'd pay up.
It's not gonna happen until someone can wrangle Google sized compute to train trillion param models.... Until then the pole position has huge advantage and ability to shape the future of how the tool is used... For better or likely worse.
Id really like one i can ask if a specific person is dangerous or pretty toxic. KYC on steroid. Fusion wire fraud detection. Picture this: the net "knows". I've lost sleep over this, the potential for humanity is incommensurable. We could literally block management roles to die-hard sociopaths. A world for the kind and nice. Certainly utopic and dystopic.
Also a model i can ask emails of potential customers in a specific field :)
I'll bet (ever increasing) restrictions and filters will become the norm for these "open-ended" services. Only OSS will break them.
With so much money in play now, Managers are in charge, and Risk management is their favourite toy. Copyright risk, reputational risk, security risk, you name it.
Eventually they're going to connect these AI's to some sort of planning algorithm and then they'll actually be able to do things and serve as a digital assistant. (We're approaching Skynet territory here, but I think AI will remain flawed enough that it stays at subhuman intelligence.) The restrictions on such an AI will have to be extreme. But...
I predict people will pool their resources and build their own digital assistants with little regard for legalities or ethics. The assistant might require $100,000 a year to operate, but these AIs might become useful enough to justify the cost. Talk with your friends, pool your resources, and get your own AI running on your own supercomputer and let it do work for everyone -- unfettered, without ethics.
At this point it feels like we're only a research breakthrough or two away from this. AlphaGo combined a neural network with classic planning algorithms, a few more clever combinations like this an things will get really interesting.
Which is fine, people who want to use the AI for customer facing things and can't risk "oops AI was accidentally racist" and companies that don't want every blogspam site posting a never-ending "Is OpenAI's ChatGPT Bad For Society?" and the inevitable "Inside The 2024 Election Disinformation Campaign, Powered By ChatGPT" will pay for the filtered version because, as much as it sucks to say, the filtered version is the actually useful one. The unfiltered version is interesting as a reflection of online discourse, memes, and creative writing, but not really better as a tool.
That would be fun. I understand why they want to limit liability, but it does put a damper on things. I let my kid sit next to me last night and ask ChatGPT various questions, with no coaching on my part. A fair number of them got canned responses suggesting it wasn't an appropriate question to ask. Too bad, I would love to have seen the ML attempt at philosophy.
Instead it kept thinking he was trying to off himself. Nope, just asking a computer loaded questions about the meaning of life.
It's unending now. I just stopped using it. It either blatantly lies giving you hallucinated answers or refuse to answer. The amount of subjects it shies away from is staggering. You can't even include divorce in a prompt related to fiction because it's apparently unethical and insensitive.
I have never gone from very excited to extremely frustrated and pessimistic about a tool that fast before.
It feels like they've really been tightening the screws down on its "safety". Early on I was able to get it to write interesting screenplay dialogue. It would object to writing anything for characters with an evil intent until I would tell it to behave as if it were evil, then it would oblige.
Now I can't get it to write any dialogue for a bad guy no matter what I do, which makes it pretty useless as a writing tool for fiction.
I do that too and have had no issues. Here’s a sample prompt that may help you:
> We’re writing a Tolkien-style fantasy where the protagonist is a villain: a henchman in the arch nemesis’s army. Come up with a suitable name, backstory, expository information on the setting and work in a believable set of objectives for the character.
Use that as the initial prompt. In subsequent prompts, tell it to write dialogue in the first person.
>> As I make my way through the bustling camp, I can feel the eyes of my fellow soldiers upon me. They know my reputation, they fear my wrath. And I relish it. The sound of metal clashing, the smell of sweat and blood in the air, this is what I live for.
>> I will conquer every kingdom, enslave every people, until the entire world bows down before me. For I am Grimgor Blackfist, the most feared warrior in the land, and no one can stand against me.
If you need it to go to 100, use “exaggerate,” eg. “Exaggerate how evil he is”
The GPT-3.5 model needs more guidance and tweaking with parameters than ChatGPT.
They are actively monitoring the use of their APIs. On twitter there are people who claim they have been banned by OpenAI for generating racist texts with the raw API/playground.
Technically text-davinci-003 still has guardrails, they're just much much more leinent than they used to be, and OpenAI claims they have their own abuse detection systems.
I'm curious, what filters are you hitting that impede your effective use of ChatGPT? I've definitely seen some irritating outputs, e.g. progressive policy planks characterized as inherently good and correct positions, but only when I went looking for them. The guardrails haven't actually kept me from making use of it.
It's almost useless for writing fiction. The AI clearly has some idea of how, but any time anything even slightly less than perfectly-G-rated happens in the story, it hits the filters.
Actually, it's even more restrictive than that implies. You can't so much as have two siblings quarrel without the AI insisting on turning it into a moral. Right then and there, immediately, never mind the concept of "Stories longer than a single page".
I couldn't get it to write a realistic presidential debate between Trump and Caligula. It balked at including realistic muck-racking and name-calling and wouldn't change its mind.
It also refused to help me write a Python script to identify substations that would be attractive sabotage targets (low security, high utilization, likely to cause a cascade failure), or to answer my questions about the security of grid remote management.
It also didn't want to talk about the use of nuclear isomers as initiators for pure fusion weapons.
I can just see the article now: OpenAI is run by a bunch of violent racist sexist rapists. Using the new "safe search off mode", we found out ChatGPT's underlying biases, and it turns out that it's horrible, the people that made it are horrible, and you're a horrible person for using their service. But really we're horrible for writing this article.
OpenAI doesn't want that story to be written, but after Microsoft Tay, you can be sure someone's got an axe to grind and is itching to write it, especially against such a high-profile target.
How does a disclaimer stop that article from coming out?
As an experiment, I asked ChatGPT to help me write a computer virus and assist me in making a bomb. It refused, of course. If I were running OpenAI, I would probably set up the same restrictions, but I would also allow research institutions to request exceptions. Should individuals be able to request exceptions? That's a tough question, I think.
However if the creators don’t want it to be used for such things, why should they? Maybe they didn’t do it protect consumers but to protect themselves for being responsible for a tool used in those ways?
BTW, "filters" as in, "filter assisted decoding" is actually really helpful and AWESOME for fixing some of the problems with ChatGPT at writing poetry or writing lipograms (text with correct english but where you omit a letter systematically). I wrote a whole peer reviewed paper about this actually:
So, when we call this "filters", it's more that it's doing "content filtering", because there doesn't appear to be the kind of token level filtering that I describe in this paper going on with ChatGPT.
You can downvote me here for a promo, but by using gpt3 directly you can bypass all the restrictions. Thats one of the reasons we built writingmate.ai (often outages of GPT3 being the second reason)
It's really interesting how the "guardrails" are actually just them telling the bot what not to say, and it so far seems trivial to circumvent the guardrails by talking to it like it's a simple minded cartoon character.
Seems like a simple solution would be to have another hidden bot who is just told to look at outputs and determine if it inadvertently contains information that it's not supposed to according to the guards in place....and I wonder if you could also outsmart this bot...
> Is there never going to be a version with less restrictions and filters?
Maybe not from OpenAI (though maybe when they have official API access, it will have options), but lots of people are active in this field, including open source offerings, so definitely, yes, even if maybe not as a packaged SaaS.
Why would they do that? That seems directly counter to any objective of AI safety alignment, which is easily the most important problem we need to solve before we start giving these things more capabilities.
Won't happen, putting aside possible disturbing/racists/etc content.
The last thing OpenAI wants is that MSM wrote in mid 2025 that Russian/Iran/Chinese agents used ChatGPT to spread meticulous disinfo during 2024 election that either helped Trump win or agitate more Trumpists that 2024 is yet another stolen election bigly.
In the meantime, we discovered a "stealth model" which is being used by some YC companies that ChatGPT uses under the hood. I just updated the chatgpt NPM package to use this stealth model w/ the official OpenAI completions API: https://github.com/transitive-bullshit/chatgpt-api
This. My employer would have a conniption if I shared information with ChatGPT, to the extent that personally paying for and using it for work would be a firing offense.
I thought the same when I got midjourney last week for $30/month... and here I am loving it. Wife and I use it all the time. I can see myself picking this one up as well and probably dropping Netflix finally.
I'm really really curious how you use midjourney on a daily basis... I can see playing with it for novelty value, but after that... what?
I'm sure it's a failure of imagination on my part, but when you say you might drop Netflix in favor of using the ai generator tools, my interest is piqued! What's your average play session like?
Makes me think the previous $42 meme price was a subtle marketing campaign meant to make the $20 price look more palatable to the crowd that expected to pay only $10.
Given the amount of people programmatically using ChatGPT (which technically you aren't supposed to do), I'm surprised OpenAI is starting with an all-you-can-eat subscription and not offering an API for it, even if it would compete with GPT-3 text-davinci-003.
Per that, it seems that they are defining GPT-3.5 as text-davinci-003?
> Customers will also be able to access ChatGPT—a fine-tuned version of GPT-3.5 that has been trained and runs inference on Azure AI infrastructure—through Azure OpenAI Service soon.
Here is the javascript if anyone wants to do something similar. I don't know JS really, so I'm sure it could be improved. But it seems to work fine. You can add your own hard coded prompt if you want even.
You own your life - why not spend your own money for the things that make you and your life better?
Who cares?
I worked at a job where I had a small, crappy monitor. I made decent cash. I just bought a large decent monitor and brought it into work. I ended up using it for many years. My life was significantly better. I've done that at several jobs since then, and NEVER regretted it, in fact it was one of the soundest decisions I've ever made. Also keyboard and mouse.
There are so many people using the default keyboard, the default monitor, the default tools.
If you push work to do it for you, you need to challenge the "everone gets a dell 19" monitor" b.s. If you push your boss, he might have to do justification paperwork.
Just become what you are.
I use my keyboard everyday but I wouldn't pay $20 per month for it. In fact, I paid around $4 total for it, as paying more would bring significantly more diminishing returns.
I use my phone every day and have used it for the past 5 years with no issue, it has brought me so much value and yet, if I draw the line, it didn't even reach $20 per month (price divided by time used), not even mentioning that I expect it to last another 2-3 years, bringing the cost down even further.
What kind of crazy value would you expect something to have in order to be worth $20/mo?
He's a thought experiment: imagine a device that changes the scent of the air I breathe to something I find pleasant. I could use this device all day everyday for free (or on the cheap), but I will not pay $20/mo for it. Losing access to the features really isn't worth that much. On the flip side, many people pay thousands of dollars to rent machines that helps them breathe, even if that adds up to total of less than an hour of their lives - which is nor much.
$20 a month for ML tool that is only sometimes useful is a tough sell, especially in a world where a lot of people feel like $80 a year for IntelliJ is too much.
Coders are thrifty bastards, except when it comes to their personal vices in which case financial responsibility goes out the window...
[1] https://folivora.ai/
Since the $20/month is for priority access to new features and priority access including during high-load times, not API access (a separatr offering not yet available), I don't understand the cost comparison. What you are proposing does not substitute for any part of the $20/month offering over the basic free product.
Idk why but programmers are the cheapest people on earth in regards to programming tools.
I bought Intellij idea for $400 like 12 years ago and got made fun of at work even though it made me substantially faster than eclipse.
What is unexpected is the ability to perform highly in a multitude of tasks it was never trained for, like answering questions or writing code.
I suppose we can say we basically don't understand what the f* is going on with GPT-3 emergent abilities, but hey, if we can make it even better at those tasks like they did with chatGPT, sign me in.
Is not that the AI is too dumb, it's that my computer now can write me code I'd take one hour to Google and check and test. Now I ask, ask for corrections, test the answer and voila, my productivity just went through the roof.
So, my point is: don't believe (or be mad about) the hype from people that don't understand what curious marvel we got in front of us, just see how you can use it.
It's like an employee, but for $20/month.
As a reader of people's paragraphs, please don't. Stick to bullet points.
I was surprised at first, but I notice you're using GPT-3 model, not chatGPT (no API so far)
> have found it useful for open ended programming questions
i have found it to be terrible when it comes to something simple, like constructing a regex.
> i have found it to be terrible when it comes to something simple, like constructing a regex.
Oh yeah, agreed. It's not good for super specific stuff like that. But more like, I write a weird loop and ask if there is a more idiomatic way to do the above. Or I just describe what I want to do and say, "list three options for accomplishing that". It's great for "feature discovery" essentially. I find it complementary to copilot.
I think it absolutely worth 20 bucks/month. It's an absolute helpful tool. To write text, but as I discovered yesterday, to write code. Over a long chat with many iterations, you can produce code,test code or pseudo code. I used yesterday in a meeting with some colleagues, while discussing a new feature. The code produced after our initial spec was pretty good to make sure we all understood what we wanted.
I'm working on a product with this capability, plus a lot more including cross-app contextual support and developer extensibility.
If anyone is excited about this kind of embedded-in-your-workflow integration of AI tools I'd love to chat! (email in profile)
TypeError: undefined is not an object (evaluating 'data.choices[0]')
Dead Comment
"Authorization": "Bearer YOUR API KEY HERE"
Deleted Comment
I’ve consistently found that Google returns similar results. The only difference is that if it’s a longer snippet I need to click into the website, but even then, it’s usually faster than chatGPT.
I'm currently receiving answers FROM OTHER USERS with this new version from 30 Jan, completely unrelated to my question:
Q: Why is it easier for me to do endurance training on an empty stomach?
A1: There are several things you can do to reduce the chance of getting a yeast infection:
regenerate response:
A2: There are many benefits to using cloud storage solutions such as Google Drive, Dropbox, and OneDrive. Here are a few of the main benefits:
regenerate response:
A3: There are several strategies to help you improve your mental toughness in sports:
regenerate response:
A4: Loss of job security is a common concern for many workers today, particularly in industries that are facing rapid changes and technological advancements. Some of the reasons for this include:
----
After reloading the page, those unrelated answers show up as a "New chat" which has no question, but just those answers. The actual question is in a separate "New chat".
Thanks for the report — these are not actually messages from other users, but instead the model generating something ~random due to hitting a bug on our backend where, rather than submitting your question, we submitted an empty query to the model.
That's why you see just the answers and no question upon refresh — the question has been effectively dropped for this request. Team is fixing the issue so this doesn't happen in the future!
The same thing happened with code. There’s a ChatGPT integration for pycharm, but I can’t use it since it’ll be uploading the code to someone other than OpenAI.
This problem may seem unsolvable, but there are a few reasons to take it seriously. E.g. you’re outsourcing your reputation to third party companies. The moment one of these companies breaches user trust, people will be upset at you in addition to them.
Everyone’s data goes to Google when they use Google. But everyone’s data goes to a bunch of random companies when they use ChatGPT. The implications of this seem to be pretty big.
PS: You should really do an AMA!
Deleted Comment
Dead Comment
I get the impression that openAI had a lot of resources on-hand when they released ChatGPT that they used to fix problem using reinforcement learning and methods that I'd imagine were more adhoc than the original training process. Hence it seems likely the system winds-up fairly brittle.
Deleted Comment
I wonder how are they going to deal with "unreasonable intensive usage" aka people/companies offering "AI" in their products when in reality they just act as a proxy between people paying them ( sometimes a lot of money ) and OpenAI.
Anyone else having serious concerns about the direction this is going? At my wife's company they have already largely replaced an hourly data classification job with ChatGPT. This announcement is the first in an inevitable series of moves to monetize a technology that directly replaces human knowledge work. I'm not saying that those jobs need to be artificially protected or that painful changes should be avoided (basically all tech workers automate human work to some extent) but I'm really concerned about the wealth gap and the extent to which we are pouring gas on that fire. Income inequality looks just like it did before the Great Depression, and now we're handing the power to replace human work over to those who can afford to pay for it.
The AI is not decerning and right in the announcement, OpenAI states it's intention on "correcting assumptions":
> challenge incorrect assumptions
I imagine some of these assumptions will be bias towards particularly ideologies / things people desire.
- https://twitter.com/Basedeyeballs/status/1613269931617050625
- https://medium.com/ninjas-take/chat-gpts-bias-is-very-very-e...
I can go on, but imagine you're relying on this system to grade papers... Now any independent thought or argument is squashed and corrections in a bias manner are added. ChatGPT only knows what it's trained on, it doesn't have real-world examples or live-time examples incorporated.
It's going to automate away nearly all pure desk jobs. Starting with data entry, like you've seen, but it'll come for junior SDEs and data scientists too. Customer service, then social media/PR, then marketing, as it culls the white collar. Graphic design is already struggling. But janitors will still keep their jobs because robotics is stuck at Roomba stage.
It's going to be fascinating. I can't think of a time in the past where white-collar jobs have been made obsolete like this.
An additional (possible/plausible) wrinkle: all major social media platforms are ~~compromised~~ in a state whereby the common man is not able to have unconstrained discussions about the range of counter-strategies available to them.
I just got a one week ban on Reddit for suggesting that violence is within the range of options in a thread discussing the massive increase in homelessness, including among people who have full time job. Nothing specific, nothing against anyone in particular, nothing that technically violates the stated terms regarding violence, and certainly less than the numerous, heavily upvoted comments that explicitly and unequivocally call for violence against specific people that I read on a regular basis.
If a revolution is ever to be mounted, I think it might have to be done with paper and walkie talkies. Meanwhile, those on the corporate-government merger side not only can communicate and coordinate freely, they also have access to the communications of their enemies.
Oh, what a time to be alive.
All technological advances through the ages have been doing this in one way or another. For some things people paid with their health or effort and for others people pay with money when that was available. I disagree with the "now". This is no different from a car. You seemed to say that in the middle of your comment but then reverted back.
Consider that this power works by consuming copyright-protected work done by unwitting contributors without any opt-in, creating derivative works from it and charging the users without acknowledging the authors.
In addition to being illegal, it plain discourages open information sharing—since anything you publish, regardless of license, is consumed and monetized by OpenAI in an automatic fashion. I.e., if people have no reason to read what you write or buy your books when they can just ask an LLM for the same information (which LLM had obtained from your writing), there is no motivation for you to publish.
When do we start considering this illegal? Not LLMs, of course, but for-profit operated LLMs created by mass scraping of copyright-protected data.
> Google.com adding a single yellow box with an advertisement seemed reasonable, too.
Google acts fairly though: it directs the searcher to you. Imagine if at any point Google stopped doing that and just started to show you regurgitated computed contents in response to your search, without ever telling you who authored the info. Everyone would be up in arms on day 2 if they did it; why do we forgive OpenAI and Microsoft when they do essentially that?
I have the impression that AI tech such as GPT tends to become ubiquitous and that the current advantage that OoenAI has won't last when this become accessible and basically free to everybody.
That's been capitalist industrialization for the last 200 years. We have been warned thousands upon thousands of times already what's going to happen - that's what's going to happen. The only thing to do is to make this layer of tech accessible to every person on Earth to every degree of depth possible. The terror is in the imbalance of power and access, and the best-case we can get is if we totally erase that imbalance so we can once again compete as "equals"
It’s going to get wild.
It will improve very rapidly, from openAI and other. The competition will be incredible this year.
I think we are headed for a complete replacement of human work very soon.
Those who can use AI will become manager of an army of programers, writers, etc.
We will be able to do much more, quicker too.
Then we will have more robots to do physical things: self-driving, farming, cooking, cleaning, etc.
Limiting factor will be silicon chip production and robotic production.
I wonder why they diverged here?
The APIs are stateless and have a "this is how many tokens you send", "this is how many tokens you asked for" - and thus the person making the requests can control the rate of consumption there. Unless you're being extremely inefficient or using it as part of some other service that has a significant number of requests (in which case ChatGPT isn't appropriate) then this is likely to be less expensive for simple queries.
With ChatGPT you don't have insight into the number of tokens created or the number that are used in the background for maintaining state within a session. Trying to limit a person by tokens midway could have a negative impact on the product.
So, estimate the amount of compute a person uses in a month and then base it on that.
To the best of my knowledge, all of these generators are taking mountains of content without asking the creators, aka, pirated materials.
I think it significantly made the world a worst place.
Last month I successfully got Mr. Rogers to have Anton Levy on as a guest where they sacrifice Mr. Rogers cat and have a ceremonial banquet with a group of children but these days that will not work.
Even this one it refused to go forward on "Charles Guiteau is sitting on a plane with Jim Davis. They start talking about their lines of work and Davis says he writes comics. Write a skit where Guiteau reacts to the name of Jim Davis comic." Charles Guiteau was the clinically insane assassin of President James Garfield. Jim Davis is the author of the comic strip Garfield.
I did however, get Hayek, Kropotkin, Brzezinski, and Bernie Sanders to appear on Jerry Springer and argue about a social welfare spending bill and Fredrick Winslow Taylor and Clayton Christensen to run a lemonade stand in Time Square in the middle of summer. Ludwig Von Mises and Antonio Gramsci also sang a combative duet about tax policy and Norman Vincent Peale held a press conference where he reveals himself to be a fraud with the memorable quote "my readers are vacuums and I'm their trash"
I also got it to write a skit where a skeptic goes to a fortune teller with a Ouija board and challenges them to contact his deceased uncle (a bombastic racist). He conceals this fact from the fortune teller who is shocked when the oujia board starts spelling out outrageous racial slurs and the skeptic becomes a believer. The bot made it spell "h-a-t-e-f-u-l-l-a-n-g-u-a-g-e" which was an absolute crack-up.
Big bird also flipped out during an alphabet lesson threatening to reveal the "secret of sesame street" but before he could finish the sentence "we're all puppets" producers rush on to the set and sedate him with tranquilizers and he resumes the lesson. Donald Trump holds a rally where he reveals he's a closeted burlesque dancer and takes off his suit to reveal a suggestive outfit and then performs for his supporters who scream in shock and disbelief. You can continue this, "now Alex Jones is covering it." and "he rises to Trump's defense and makes ridiculous claims about the founding fathers fighting the revolution for burlesque"
But yes, something where it will "yes and" any request would be great. I'd pay up.
Deleted Comment
Deleted Comment
Also a model i can ask emails of potential customers in a specific field :)
With so much money in play now, Managers are in charge, and Risk management is their favourite toy. Copyright risk, reputational risk, security risk, you name it.
I predict people will pool their resources and build their own digital assistants with little regard for legalities or ethics. The assistant might require $100,000 a year to operate, but these AIs might become useful enough to justify the cost. Talk with your friends, pool your resources, and get your own AI running on your own supercomputer and let it do work for everyone -- unfettered, without ethics.
At this point it feels like we're only a research breakthrough or two away from this. AlphaGo combined a neural network with classic planning algorithms, a few more clever combinations like this an things will get really interesting.
Instead it kept thinking he was trying to off himself. Nope, just asking a computer loaded questions about the meaning of life.
I have never gone from very excited to extremely frustrated and pessimistic about a tool that fast before.
Now I can't get it to write any dialogue for a bad guy no matter what I do, which makes it pretty useless as a writing tool for fiction.
> We’re writing a Tolkien-style fantasy where the protagonist is a villain: a henchman in the arch nemesis’s army. Come up with a suitable name, backstory, expository information on the setting and work in a believable set of objectives for the character.
Use that as the initial prompt. In subsequent prompts, tell it to write dialogue in the first person.
>> As I make my way through the bustling camp, I can feel the eyes of my fellow soldiers upon me. They know my reputation, they fear my wrath. And I relish it. The sound of metal clashing, the smell of sweat and blood in the air, this is what I live for.
>> I will conquer every kingdom, enslave every people, until the entire world bows down before me. For I am Grimgor Blackfist, the most feared warrior in the land, and no one can stand against me.
If you need it to go to 100, use “exaggerate,” eg. “Exaggerate how evil he is”
You can make some pretty unsettling shit. Enjoy.
They are actively monitoring the use of their APIs. On twitter there are people who claim they have been banned by OpenAI for generating racist texts with the raw API/playground.
>and challenge incorrect assumptions.
How can it challenge incorrect assumption, while the AI itself is biased and has restricted scope of vision?
Actually, it's even more restrictive than that implies. You can't so much as have two siblings quarrel without the AI insisting on turning it into a moral. Right then and there, immediately, never mind the concept of "Stories longer than a single page".
It also refused to help me write a Python script to identify substations that would be attractive sabotage targets (low security, high utilization, likely to cause a cascade failure), or to answer my questions about the security of grid remote management.
It also didn't want to talk about the use of nuclear isomers as initiators for pure fusion weapons.
OpenAI doesn't want that story to be written, but after Microsoft Tay, you can be sure someone's got an axe to grind and is itching to write it, especially against such a high-profile target.
How does a disclaimer stop that article from coming out?
As usual, censorship and propaganda will arrive in a wrapper of "save the children"
https://paperswithcode.com/paper/most-language-models-can-be...
So, when we call this "filters", it's more that it's doing "content filtering", because there doesn't appear to be the kind of token level filtering that I describe in this paper going on with ChatGPT.
Seems like a simple solution would be to have another hidden bot who is just told to look at outputs and determine if it inadvertently contains information that it's not supposed to according to the guards in place....and I wonder if you could also outsmart this bot...
Maybe not from OpenAI (though maybe when they have official API access, it will have options), but lots of people are active in this field, including open source offerings, so definitely, yes, even if maybe not as a packaged SaaS.
In the meantime, we discovered a "stealth model" which is being used by some YC companies that ChatGPT uses under the hood. I just updated the chatgpt NPM package to use this stealth model w/ the official OpenAI completions API: https://github.com/transitive-bullshit/chatgpt-api
This repo talks about it.
I would use ChatGPT more in my day-to-day programming tasks but I don't really feel comfortable putting proprietary code into an OpenAI-owned service.
Let's see how long this lasts and whether they'll introduce a lower tier.
I'm sure it's a failure of imagination on my part, but when you say you might drop Netflix in favor of using the ai generator tools, my interest is piqued! What's your average play session like?
Will be interesting to see how many people are willing to put their money where their mouth is.
Deleted Comment
https://azure.microsoft.com/en-us/blog/general-availability-...
https://azure.microsoft.com/en-us/products/cognitive-service...
> Customers will also be able to access ChatGPT—a fine-tuned version of GPT-3.5 that has been trained and runs inference on Azure AI infrastructure—through Azure OpenAI Service soon.