Readit News
Posted by u/yosito 2 years ago
Ask HN: How are you using GPT to be productive?
With GPT so hot in the news right now, and seeing lots of impressive demos, I'm curious to know, how are you actively using GPT to be productive in your daily workflow? And what tools are you using in tandem with GPT to make it more effective? Have you written your own tools, or do you use it in tandem with third party tools?

I'd be particularly interested to hear how you use GPT to write or correct code beyond Copilot or asking ChatGPT about code in chat format.

But I'm also interested in hearing about useful prompts that you use to increase your productivity.

barbarr · 2 years ago
For coding, I've been using it like Stack Overflow. It really decreases my barrier to doing work because I can ask lazy follow-up questions. For example, I might start out by asking it a question about a problem with Pandas like "How do I select rows of a dataframe where a column of lists of strings contains a string?". After that, GPT realizes I'm talking about Pandas, and I'm allowed to ask lazy prompts like "how delete column" and still get replies about Pandas.
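For reference, that pandas question can be answered with a boolean mask built by applying a membership test to each list. A minimal sketch, with a hypothetical `tags` column:

```python
import pandas as pd

# Toy dataframe whose "tags" column holds lists of strings (hypothetical data)
df = pd.DataFrame({
    "title": ["intro", "systems", "data"],
    "tags": [["python", "pandas"], ["rust"], ["pandas", "sql"]],
})

# Select rows where the list in "tags" contains the string "pandas"
mask = df["tags"].apply(lambda tags: "pandas" in tags)
print(df[mask]["title"].tolist())
```

Note that `.str.contains` doesn't apply here because the cells are lists, not strings, which is exactly the kind of distinction that's easier to explain to a chatbot than to phrase as a search query.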

I also use it for creative tasks - for example I asked it for pros and cons of my cover letter and iterated to improve it. I also used it to come up with ideas for lesson plans, draft emails, and overcome writer's block.

GPT has drastically lowered the emotional-resistance barrier to doing creative tasks and improved the quality of my output by giving me creative ideas to work with.

dmarchand90 · 2 years ago
I find it is like having a brilliant intern that is not super consistent in taking their antipsychotic medication
dmarchand90 · 2 years ago
I asked GPT-4 if it could guess what I was talking about: "It seems that you are referencing an AI language model like ChatGPT, which is developed by OpenAI. These AI models can provide useful information and perform tasks like an intern, but they might not always be consistent or accurate in their responses, similar to someone not taking their antipsychotic medication consistently. It's important to remember that AI models like ChatGPT are not perfect and can sometimes produce unintended or nonsensical outputs."

GPT-3 had no clue.

oars · 2 years ago
> I find it is like having a brilliant intern that is not super consistent in taking their antipsychotic medication

This is fantastic.

Zuiii · 2 years ago
This statement nicely describes my experience with LLMs. They may have a hard time staying on topic (especially with larger untuned models like LLaMA 13B+), but if you help them stay on track, they become very useful.
ted_bunny · 2 years ago
Ah yes, a Bing Chat user.
carlmr · 2 years ago
On my coding problems I haven't had much luck. It doesn't seem to know Bazel, and the Rust code I asked about was completely hallucinated, but it did solve an Azure DevOps problem I had.

I think if the training set didn't contain enough of something, it can't really come up with a solution.

What is really nice, though, is, as you say, the refinement of questions. Sometimes it's hard to think of the right query; maybe you're missing the words to express yourself. To ChatGPT you can just say "yes, but not quite".

yosito · 2 years ago
Yeah, I gave it a simple task of encoding a secret message in a sentence by using the first letter of every word. Hello = "Hey everyone, lick less onions". I worked with the prompts for over an hour to try to get it to complete the task, and while I did have some success, it really struggled to reason about the task or provide a valid response. If it can't even reason about a child's game, I can imagine it struggles with a programming language it has barely seen. I don't think it's actually reasoning about things at all, just providing a statistically plausible response to prompts.
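The game itself is trivial to check in code, which makes the model's struggle easy to demonstrate. A sketch of a verifier for the first-letter encoding:

```python
def encodes(sentence: str, secret: str) -> bool:
    """Check whether the first letters of the sentence's words spell the secret."""
    initials = "".join(word[0].lower() for word in sentence.split() if word)
    return initials == secret.lower()

# The example from above
print(encodes("Hey everyone, lick less onions", "Hello"))
```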
Thorentis · 2 years ago
This could mean the future goes one of two ways: engineers get lazy and converge on only those programming languages which AIs understand or have been trained on, or we forget about this waste of time and work on more important problems to solve in society than our lack of an AI crutch. Sadly, I think the former is more likely.
menacingly · 2 years ago
My experience is almost completely the opposite. My likelihood to dive into something new is significantly higher now.

It might help to approach it from top down? Usually, if I'm asking a technical question, I want to apply my deeply understood principles to a new set of implementation details, and it has amplified the heck out of my speed at doing that.

I'm kind of a difficult to please bastard, a relatively notorious meat grinder for interns and jr devs, and still I find myself turning to this non-deterministic frankenstein more and more.

sharperguy · 2 years ago
I've found that it's much worse for languages like Rust than it is for TypeScript and Python. The thing AI seems to be really great at is writing boilerplate code, like argument parsing for CLI tools.
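Argument-parsing boilerplate is a good illustration: it's verbose, heavily represented in training data, and easy to verify by eye. A sketch with `argparse`, using hypothetical flags:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical flags, just to show the shape of the boilerplate
    parser = argparse.ArgumentParser(description="Example CLI tool")
    parser.add_argument("input", help="path to the input file")
    parser.add_argument("-o", "--output", default="out.txt", help="output path")
    parser.add_argument("-v", "--verbose", action="store_true", help="chatty logging")
    return parser

args = build_parser().parse_args(["data.csv", "-v"])
print(args.input, args.output, args.verbose)
```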
Marcan · 2 years ago
Thank you for your well written response. I found it informative as I'm also currently exploring ways to leverage ChatGPT in my daily workflow. I also found it interesting that your answer kind of mirrors the writing style of ChatGPT, especially at the end there.

I'm not saying you used it to write that response by the way, just that it may become more and more common for people to adopt this style the more ChatGPT's usage is widespread.

Volrath89 · 2 years ago
I suppose it was part of the "joke", but YOUR answer is the one written in ChatGPT style, not OP's.

I was thinking that maybe in the near future it will be "better" to write with a couple of mistakes here and there just to prove your humanity. Like the common "loose" instead of "lose" mistake, it will be like a stamp proving that you are a human writing.

ericpauley · 2 years ago
GPT/Codex is truly the pandas master. Much of my productivity boost from using these tools has just been not having to sift through pandas docs or SO.
toastal · 2 years ago
I'm a bit concerned about this as previously we'd build communities in chat but now the chat is just with the bot. Not wasting folks' time is great, but you'll miss out on the social parts by not asking around the IRC, Matrix room, or MUC.
hypertele-Xii · 2 years ago
> "How do I select rows of a dataframe where a column of lists of strings contains a string?"

Literally just googled that and the first result:

https://stackoverflow.com/questions/53342715/pandas-datafram...

You're not using it like Stack Overflow. It's actually regurgitating Stack Overflow, except with errors hallucinated in.

flyval · 2 years ago
Have you actually tried it yourself? I'd recommend it. And I don't mean just playing with it; try using it to help you build something. It's much more efficient than googling and combing through Stack Overflow. Hallucinations are not as common as you're thinking.

You clearly can’t just take the code, paste it in, and trust that it works, but you shouldn’t be doing that with stackoverflow either.

HDMI_Cable · 2 years ago
Even with that caveat, using GPT in this way is still useful. The amount of time spent to simply ask GPT-4 is a lot lower than to search StackOverflow, and while this problem is so basic that the first result often works, once one gets into complex problems that massively benefit from input context, I think GPT-4 would save massive amounts of time.
spaceman_2020 · 2 years ago
Exactly how I’m using it as well. It’s absolutely incredible as a coding productivity tool.
alchemist1e9 · 2 years ago
Same here and GPT-4 was definitely a noticeable improvement.
TheHumanist · 2 years ago
It's saved my ass this week coming up with coding exercises for a course some folks and I are working on. Has been a rough week. Depression flaring up. Creates a real mental barrier at times. GPT has helped a lot. I'll still write the code myself and all that. It just came up with the written-out ideas for exercises, which really got me over the hump. It's incredible how helpful that was.
Shocka1 · 2 years ago
Same here for Stack Overflow. My Google searching for generic CS stuff I tend to forget has pretty much come to a halt.
bitcoinmoney · 2 years ago
Do you run your own customized model or just chat GPT?
barbarr · 2 years ago
Just ChatGPT.
readonthegoapp · 2 years ago
can you get at least one snarky, a-holish response along with the useful info to really give you that authentic SO feel?
imiric · 2 years ago
I might be in the minority here, but I'm not using any AI tools so far, probably to my detriment.

I don't trust it with my data, and won't rely on such tools until I can self-host them, and they can be entirely offline. There is some progress in this space, but they're not great yet, and I don't have the resources to run them. I'm hoping that the requirements will go down, or I might just host it on a cloud provider.

The amount of people who don't think twice about sending these services all kinds of private data, even in the tech space, is concerning. Keyloggers like Grammarly are particularly insidious.

sillysaurusx · 2 years ago
> I don't trust it with my data, and won't rely on such tools until I can self-host them, and they can be entirely offline.

Interestingly, my point to The Verge was exactly that. https://twitter.com/theshawwn/status/1633456289639542789

Me:

> So, imagine it. You'll have a ChatGPT on your laptop -- your very own, that you can use for whatever purposes you want. Personally, I'll be hooking it up to read my emails and let me know if anything comes in that I need to pay attention to, or hook it up to the phone so that it can schedule doctor's appointments for me, or deal with AT&T billing department, or a million other things. The tech exists right now, and I'd be shocked if no one turns it into a startup idea over the next few years. (There's already a service called GhostWrite, where you can let GPT write your emails on your behalf. So having one talk on the phone on your behalf isn't far behind.)

The article:

> Presser imagines future versions of LLaMA could be hosted on your computer and trained on your emails; able to answer questions about your work schedules, past ideas, to-do lists, and more. This is functionality that startups and tech companies are developing, but for many AI researchers, the idea of local control is far more attractive. (For typical users, tradeoffs in cost and privacy for ease of use will likely swing things the other way.)

Notice how they turned the point around from "you can host it yourself" to "but typical users probably won't want that," like this is some esoteric concern that only three people have.

So like, it's not just you. If you feel like you're "in the minority" just because you want to run these models yourself, know that even as an AI researcher I, too, feel like an outsider. We're in this together.

And I have no idea why things are like this. But I just wanted to at least reassure you that the frustrations exist at the researcher level too.

imiric · 2 years ago
That's an interesting interview, thanks for sharing.

For me, though, these tools stop at helping with the drudgery of daily work. I don't want them to impersonate me, or write emails on my behalf. I cringe whenever Gmail suggests the next phrase it thinks I want to write. It's akin to someone trying to finish your sentences for you. Stop putting words in my mouth!

The recent Microsoft 365 Copilot presentation, where the host had it ghostwrite a speech for their kid's graduation party[1]—complete with cues about where to look(!)—is unbelievably cringey. Do these people really think AI should be assisting with such personal matters? Do they really find doing these things themselves a chore?

> And I have no idea why things are like this.

Oh, I think it's pretty clear. The amount of resources required to run this on personal machines is still prohibitively high. I saw in one of your posts you mentioned you use 8xA100s. That's a crazy amount of compute unreachable by most people, not to mention the disk space it requires. Once the resource requirements are lowered, and our personal devices are _much_ more powerful, then self-hosting would be feasible.

Another, perhaps larger, reason, is that AI tools are still a business advantage for companies, so it's no wonder that they want to keep them to themselves. I think this will change and open source LLMs will be widespread in a few years, but proprietary services will still be more popular.

And lastly, most people just don't want/like/know how to self-host _anything_. There's a technical barrier to entry, for sure, but even if that is lowered, most people are entirely willing to give up their personal data for the convenience of using a proprietary service. You can see this today with web, mail, file servers, etc.; self-hosting is still done by a very niche group of privacy-minded tech-literate people.

Anyway, thanks for leading the way, and spreading the word about why self-hosting these tools is important. I hope that our vision becomes a reality for many soon.

[1]: https://www.youtube.com/watch?v=ebls5x-gb0s

yadingus · 2 years ago
> And I have no idea why things are like this.

Propaganda. These tools are not for the people, and I'm convinced the idea of how much better our lives could be if technology was thoughtfully designed to truly serve the user is purposely and subtly filtered from the collective conversation.

flyval · 2 years ago
I mean, google has access to ~all of that stuff anyway. Even if you’re self-hosting your email+calendar, everyone else isn’t.

I’d love to have more privacy on everything, but realistically, the ship’s sailed on most of it.

nibbleshifter · 2 years ago
I don't use them either.

I've played around with ChatGPT and Copilot a little, and found that they are often subtly, but very confidently, wrong in their output when asked to perform a programming task.

Sure, you could spend ages refining the prompt, etc., but most of the time it's going to be faster to just write the fucking code yourself in the first place.

Then there's the privacy/security concerns...

imiric · 2 years ago
I really doubt it would be faster to write code manually, even with the state of AI tools today. Even with very sophisticated keyboard macros and traditional autocompletion, someone using GPT would outperform anyone who doesn't. Think of the amount of boilerplate and tests you write, and tedious API documentation lookups you do daily; that all goes away with GPT. The amount of work to double check whether the generated code is valid, and fix it, is negligible compared to the alternative of writing it all manually.

Of course, I'm saying this without actually having used it for programming, so I might be way off base, but the feedback from coworkers who rely on even the now basic GitHub Copilot is that it greatly improves their productivity. I'm envious, of course, but I'm not willing to sacrifice my privacy for that.

safety1st · 2 years ago
I've only toyed with ChatGPT, but what I like is that it knows about stuff I don't. I'm reasonably informed about the tools, practices etc. in my field, but I don't know everything, and it's been trained on all kinds of stuff I've never heard of.

In practice the stuff it will suggest to me is sort of random, it may or may not be the best choice for the task at hand, but it's a form of discovery I didn't have previously. The fact that when it tells me about e.g. a new library it can also mock up some sample code that might or might not work is a pleasant bonus.

evilduck · 2 years ago
Copilot is a huge time and typing saver for manipulating data, richly autocompleting logging messages, mocking out objects and services in tests, etc.

If you're only expecting it to solve your hard problem completely and from scratch from a single prompt, that's probably not going to succeed. But I can't see how you're possibly faster typing 80-90 extra characters of a log statement than a Copilot user who just presses tab to get the same thing. Those little things add up to significant time savings over a week. Same for mocking services in a test, or manipulating lists of data, or any number of things it autocompletes where you'd previously need to write a short script, or learn advanced Vim movements and recorded macros, to accomplish the same thing.
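The test-mocking case is representative: the code is mechanical but tedious to type. A sketch with `unittest.mock` and a hypothetical user service:

```python
from unittest.mock import Mock

def greeting(service, user_id):
    """Code under test: looks a user up and formats a greeting."""
    user = service.get_user(user_id)
    return f"Hello, {user['name']}!"

# The mechanical part an autocomplete tool typically fills in
user_service = Mock()
user_service.get_user.return_value = {"id": 1, "name": "Ada"}

assert greeting(user_service, 1) == "Hello, Ada!"
user_service.get_user.assert_called_once_with(1)
print("ok")
```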

roflyear · 2 years ago
Yes, unless you understand the problem well, it is hard to fix it. Might as well code it yourself.

I suspect the people who find this amazing tech don't program much, or are using it very differently than we are. Or program very differently than us.

circuit10 · 2 years ago
Then use it as autocomplete to write the things you were going to type anyway, just faster; it will still speed things up.
pmoriarty · 2 years ago
Me neither, but I think before long people like us are going to be left behind. We're like people who insist on continuing to ride horses in the age of the automobile.
precompute · 2 years ago
That won't happen; you can't expect to be "left behind" by a tool that's this easy to use. The big downsides of using an LLM will show up long term: people will be chained to them and won't be able to do simple, trivial things on their own.
re-thc · 2 years ago
Automobiles were faster than or equivalent to horses (even the early ones). At the moment, GPT isn't.

Well, I hope... I've definitely seen teams and codebases with worse output than GPT, so...

imiric · 2 years ago
I think that time will come, but we'll have self-hosted options before employers start discriminating based on performance with or without AI tools. So I'm not too worried about it.
Applejinx · 2 years ago
Likewise, but not over trusting it with my data. I'm capable of getting this stuff running locally: in fact I got a computer specifically with this in mind.

I'm not doing the kind of work that lends itself to AI tools, or at least what I've been focussing on hasn't lent itself to such tools. Not yet.

The places I'd use it are rough drafting in an area where a community of basic people with more knowledge than me could get the job done. For instance, at one point I got Stable Diffusion to generate a bunch of neat album covers in various styles, like I was an art director. Also asked it to draw toys of certain kinds as starting points for game characters. I wanted some prompts.

In my job I quickly get to where I have to start coming up with ideas most people don't think of. That said, I see marketing possibilities: 'this is the category in which I work, tell me what you need out of it'. Then, when you have the thing made, 'this is the thing, why do you want to buy it?'

ChatGPT would be able to answer that. It's least capable of coming up with an idea outside the mainstream, but it ought to be real good at tapping the zeitgeist because that's all it is, really! It's a collective unconscious.

It's ONLY a collective unconscious. Sometimes what you need to do is surprise that collective unconscious, and AI won't be any better at that than you can be. But sometimes you need to frame something to make sense to the collective unconscious, and AI does that quite easily.

If you asked your average person 'what is great art?' they would very likely fall back on something like Greg Rutkowski, rather than say Basquiat. If you ask AI to MAKE art, it can mimic either, but will gravitate towards formulas that express what its collective unconscious approves of. So you get a lot of Rutkowski, and impress a lot of average people.

smrtinsert · 2 years ago
This is 100% why I'm watching Alpaca with more interest. I also keep thinking we're at the mainframe era of AI as a tool. For now it's on some remote server, but the power will explode when it's on all our devices and casually useful for everything.
hsjqllzlfkf · 2 years ago
So, are you using Google search?

Your argument of "I don't trust it with my data and won't until I can self-host" should apply to Google search as well, no?

An alternative take is that, for whatever reason, you decided you didn't want to use new tools, created an argument a posteriori to justify that, and haven't realized the same argument applies to your old tools.

imiric · 2 years ago
That's not a great comparison, as privacy-focused search engines do exist (Kagi, DDG to an extent, et al.), and you can still use mainstream search engines through frontends like SearX. Most of my privacy concerns are with adtech corporations tying my search terms to my profile, which they later sell to advertisers and whoever else is on shady data broker markets. I don't want to be complicit in my data being exploited to later manipulate me, nor do I want to make them money in exchange for a "free" service.

These are partly the same reasons I don't voluntarily use proprietary services at all. I don't want to train someone else's model, nor help them build a profile on me. Even if they're not involved in adtech—a rarity nowadays—you have no guarantees of how this data will be used in the future.

For AI tools, there's currently no alternative. Large corporations are building silos around their models, and by using their services you're giving them perpetual access to your inputs. Even if they later comply with data protection laws and allow you to delete your profile, they won't "untrain" their models, so your data is still in there somewhere. Considering that we're currently talking about 32,000 tokens worth of input, and soon people uploading their whole codebases to it, that's an unprecedented amount of data they can learn from, instead of what they can gather from web search terms. No wonder adtech is salivating at opening up the firehose for you to feed them even more data.

The use cases of AI tools are also different, and more personal. While we use search engines for looking things up on the web, and some personal information can be extracted from that, LLMs are used in a conversational way, and often involve much more personal information. It's an entirely different ballpark of privacy concerns.

JW_00000 · 2 years ago
I think it's more about personal data being used for training.

I may use Google to look up if that slight itch I feel is a symptom of cancer (I'm exaggerating), and I store mails with personal details, my calendar, and messages on Google. But I also assume they're not using those texts to train an AI.

When you enter a code snippet or a personal question in ChatGPT, and press the little thumbs up/down next to the answer, you're adding your data to a training set. The next generation of the model might regurgitate that text verbatim.

fanagra32 · 2 years ago
Maybe they are OK with Google seeing their search terms, but not with Google seeing their company's code.
coffeefirst · 2 years ago
Same.

I don't need it to write documents or emails for me. It mostly generates filler, which... nobody needs.

Most of the energy I put into code is about what it should do and how to make it clear to the next person, not typing. I was able to use it once to look up a complex SQL fix that I was having a hard time Googling the syntax on, but that's it.

Perhaps it would be useful if I was working in a language I'm not familiar with, BUT in that scenario I really need it to cite its sources, because that's exactly the case where I wouldn't know when it's making a mistake.

There's something useful here, but it's probably more like a library help desk meets a search engine on steroids. It would be pretty cool to run an AI on my laptop that knows my own code and notes where I can ask "I did something like this three years ago, go find it."
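Even without an LLM, a crude version of that "go find what I wrote three years ago" helper can be sketched with stdlib fuzzy matching (the note files and contents below are hypothetical):

```python
from difflib import SequenceMatcher

# Hypothetical index of personal notes: filename -> one-line summary
notes = {
    "2020-04-sql.md": "recursive CTE to flatten an org chart in Postgres",
    "2021-07-auth.md": "JWT refresh-token rotation for the API gateway",
    "2020-11-etl.md": "pandas pipeline deduplicating customer records",
}

def find_note(query: str) -> str:
    """Return the filename whose summary best matches the query."""
    score = lambda name: SequenceMatcher(None, query.lower(), notes[name].lower()).ratio()
    return max(notes, key=score)

print(find_note("flatten an org chart with a recursive query"))
```

A real version would embed the notes and search semantically, but the interface is the same: query in, best note out.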

thefz · 2 years ago
Same! I've never even opened the ChatGPT page nor used an AI bot.
mordae · 2 years ago
Yeah, you really should.

Said as someone who waits for the ability to self-host before doubling down on these tools.

hsjqllzlfkf · 2 years ago
Good for you! If you don't want to learn new tools, you shouldn't.
olalonde · 2 years ago
Genuine question: Is your concern primarily based on principles, or are you sincerely worried that OpenAI having access to your data could lead to practical, tangible negative consequences (beyond principles / psychological effects)?
imiric · 2 years ago
I listed some of my concerns here[1]. It is mostly based on principles, but also on the fact that we don't know what these models will be used for in the future. We can trust OpenAI to do the right thing today, but even if they're not involved in the data broker market, your data is only a bug, breach or subpoena away from 3rd party hands.

Also, OpenAI is not the only company in this market anymore. Google, Facebook and Microsoft have competing products, and we know the privacy track record of these companies.

I have an extreme take on this, since for me this applies to all "free" proprietary services, which I avoid as much as possible. The difference with AI tools is that they ask for a much deeper insight into your mind, so the profile they build can be far more accurate. This is the same reason I've never used traditional voice assistant tools either. I don't find them more helpful than doing web searches or home automation tasks manually, and I can at least be somewhat in control of my privacy. I might be deluding myself and making my life more difficult for no reason, but I can at least not give them my data voluntarily. This is why I'll always prefer self-hosting open source tools, over using a proprietary service.

[1]: https://news.ycombinator.com/item?id=35304261

znpy · 2 years ago
Me neither.

I’m waiting for the whole thing to evolve enough to have self hosted stuff to run at home.

namlem · 2 years ago
You can self-host LLaMA. Though it's obviously much worse in terms of performance, it's still good enough to be useful for some things.
OOPMan · 2 years ago
You're not alone.

I can't be bothered to add an extra layer of bullshit into the already bullshit infested realm that is the internet.

drawkbox · 2 years ago
I use AI/ML for ideas today. I love the simple input/output of the chat style, it will win for most things just as keyword search is the best for search output.

I use it for rewriting content, generating writing ideas, and simplifying text (legal/verbose text; simplifying terms is a killer feature, really) and for context. Even though my trust in the output is limited, it is helpful.

I love the art / computer vision side of AI/ML, though I only like to do that with tools on my own machine rather than relying on a dataset or company that is very closed. That is harder to do with AI/ML because of the storage/processing needed.

I hate black boxes and magic I don't have access to, though I am a big fan of stable, unchanging input/output atomic APIs, as long as I have access to the flow. The chat input/output is so simple it will win, as it will never really have a breaking change. Until commercial AI/ML GPTs are more open, they can't really be trusted not to be a Trojan horse or a trap. What happens when it goes away, or the model changes, or the terms change?

As far as company/commercial, Google seems to be the most open and Google Brain really started this whole thing with transformers.

Transformers, the T in GPT, were invented by the Google Brain team [1][2]. They made this round of progress possible.

> Transformers were introduced in 2017 by a team at Google Brain and are increasingly the model of choice for NLP problems, replacing RNN models such as long short-term memory (LSTM). The additional training parallelization allows training on larger datasets. This led to the development of pretrained systems such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), which were trained with large language datasets, such as the Wikipedia Corpus and Common Crawl, and can be fine-tuned for specific tasks.

Google also gave the public TensorFlow [3] and DeepDream [4] that really started the intense excitement of AI/ML. I was super interested when the AI art / computer vision side started to come up. The GANs for style transfer and stable diffusion are intriguing and euphoric almost in output.

In terms of GPT/chat, Bard or some iteration of it, will most likely win long term, though I wish it was just called Google Brain. Bard is a horrible name.

ChatGPT basically used Google Brain's AI tech, transformers, to build what amounts to ClosedGPT. For that reason it is NopeGPT. ChatGPT is really just datasets, which no one knows; these could swap at any time, run some misinformation, then swap back the next day. This is data blackboxing and gaslighting at the utmost level. Not only that, it is largely funded by private sources, which could be authoritarian money. Again, black boxes create distrust.

Microsoft is trusting OpenAI, and that is a risk. Maybe their goal is embrace, extend, extinguish here, but it seems that compared with Google and Apple, Microsoft may be a bit behind on this. GitHub Copilot is great, though. Microsoft usually comes along later and makes an accessible version. The AI/ML offerings on Azure are already solid. AI/ML is suited for large datasets, so cloud companies will benefit the most; it is also very, very costly, and this unfortunately keeps it in BigCo or wealthy-only arenas for a while.

Google Brain and other tech is way more open already than "Open"AI.

ChatGPT/OpenAI just front-ran the commercial side, but long term they aren't really innovating like Google is on this. They look like a leader from the marketing/pump, but they are a follower.

[1] https://en.wikipedia.org/wiki/Google_Brain

[2] https://en.wikipedia.org/wiki/Transformer_(machine_learning_...

[3] https://en.wikipedia.org/wiki/TensorFlow

[4] https://en.wikipedia.org/wiki/DeepDream

hermannj314 · 2 years ago
I have a few conversations going.

My most productive is a therapy session with ChatGPT as therapist. I told it my values, my short term goals, and some areas in my life where I'd like to have more focus and areas where I would like to spend less time.

Some days we are retrospective and some days we are planning. My therapist gets me back on track, never judges, and has lots of motivational ideas for me. All aligned with my values and goals.

Last weekend, I went on a hike because ChatGPT told me to. My life is better if I just do what it says.

ornornor · 2 years ago
I’d be terrified to do this:

- the insight into your mind that a private for profit company gets is immense and potentially very damaging when weaponized (either through a “whoops we got hacked” moment or intentionally for the next level of adtech)

- ChatGPT and other LLMs are known to hallucinate. What if its advice is wrong, or makes you worse in the long term because it's just regurgitating whatever at you?

Llamamoe · 2 years ago
> ChatGPT and other LLMs are known to hallucinate. What if its advice is wrong, or makes you worse in the long term because it's just regurgitating whatever at you?

Therapy isn't magic always-correct advice either. It's about shifting your focus, attitudes, and thought patterns through social influence, not giving you the right advice at each and every step.

Even if it's just whatever, being heard out in a nonjudgmental manner, acknowledged, and prompted to reflect does a lot of good.

haswell · 2 years ago
I share the privacy concerns, and look forward to running these kinds of models locally in the near future.

> ChatGPT and other LLMs are known to hallucinate. What if its advice is wrong, or makes you worse in the long term because it's just regurgitating whatever at you?

As someone on a long-term therapy journey, I would be far less concerned about this. Therapy is rarely about doing exactly what one is told; it's about exploring your own thought processes. When a session does involve some piece of advice, or "do xyz for <benefit>", that is rarely enough to make it happen. Knowing something is good and actually doing it are two very different things, and it is exploring this delta that makes therapy valuable (in my personal experience).

At some point, as that delta shrinks and one starts actually taking beneficial actions instead of just talking, the advice becomes more of a reminder / an entry point to the ground one has already covered, not something that could be considered prescriptive like "take this pill for 7 days".

The point I'm trying to make is that if ChatGPT is the therapist, it doesn't make the person participating into a monkey who will just execute every command. Asking the bot to provide suggestions is more about jogging one's own thought processes than it is about carrying out specific tasks exactly as instructed.

I do wonder how someone who hasn't worked with a therapist would navigate this. I could see the value of a bot like this as someone who already understands how the process works, but I could absolutely see a bot being actively harmful if it's the only support someone ever seeks.

My first therapist was actively unhelpful due to lack of trauma-awareness, and I had to find someone else. So I could absolutely see a bot being unhelpful if used as the only therapeutic resource. On the flip side, ChatGPT might actually be more trauma-"aware" than some therapists, so who knows.

ChildOfChaos · 2 years ago
I'm hugely curious why people are so worried that some AI has access to some thoughts of yours?

Do you think you are somehow special? Just create a burner account and ask it what you want. Everything it gets told, it's seen thousands of times over. Does ChatGPT or some data scientist somewhere really care that there is someone somewhere who is struggling with body issues, struggling in a relationship or struggling to find meaning in their lives? There are literally millions of people in the world with the same issue.

The only time it might be a little embarrassing is if this info got leaked to friends and family with my name attached to it; otherwise I don't get the problem. It seems to me people have an overinflated sense of self-importance. Nobody cares.

If I worked at OpenAI and had full access to everyone's chats, I would get bored within five minutes and not even want to read anymore.

Mezzie · 2 years ago
It's basically techno tarot cards in my view: The illusion of an external force helps you break certain internal inhibitions to consider your situation and problems more objectively.
tux3 · 2 years ago
>What if its advice is wrong or makes you worse in the long term because it’s just regurgitating whatever at you?

What if you talk to a human, and their advice is wrong or makes you worse off in the long term, because they're just repeating something they heard somewhere?

Here's my advice: Don't accept my advice blindly, humans make mistakes too.

serpix · 2 years ago
The value of therapy outweighs the suspicion of some corporation using that data in my opinion. The benefits are large and extend from one individual to whole family chains, even communities.
paulcole · 2 years ago
> the insight into your mind that a private for profit company gets is immense and potentially very damaging when weaponized

How exactly?

thoughtpeddler · 2 years ago
100% this. I've had success using it as a "micro-therapist" to get me unstuck in cycles of perfectionism and procrastination.

You currently cannot get a therapist to parachute into your life at a moment's notice to talk with for 5-10 minutes. (Presumably only the ultra-wealthy might have concierge therapists, but this is out of reach for 99% of people.) For the vast majority of people, therapy is a 1 hour session every few weeks. Those sessions also tend to cost a lot of money (or require jumping through insurance reimbursement hoops).

To keep the experience within healthy psychosocial bounds, I just keep in mind that I'm not talking with any kind of "real person", but rather the collective intelligence of my species.

I also keep in mind that it's a form of therapy that requires mostly my own pushing of it along, rather than the "therapist" knowing what questions to ask me in return. Sure, some of the feedback I get is more generic, and deep down I know it's just an LLM producing it, but the experience still feels like I'm checking in with some kind of real-ish entity who I'm able to converse with. Contrast this to the "flat" experience of using Google to arrive at an ad-ridden and ineffective "Top 10 Ways to Beat Procrastination" post. It's just not the same.

At the end of some of these "micro-sessions", I even ask GPT to put the insights/advice into a little poem or haiku, which it does in a matter of seconds. It's a superhuman ability that no therapist can compete with.

Imagine how much more we can remember therapeutic insights/advice if they are put into rhyme or song form. This is also helpful for children struggling with various issues.

ChatGPT therapy is a total game-changer for those reasons and more. The mental health field will need to re-examine treatment approaches, given this new modality of micro-therapy. Maybe 5-10 minute micro-sessions a few times per day are far superior to medication for many people. Maybe there's a power law where 80% of psych issues could be solved by much more frequent micro-therapeutic interactions. The world is about to find out.

*Edit: I am aware of the privacy concerns here, and look forward to using a locally-hosted LLM one day without those concerns (to say nothing of the fact that a local LLM can blend in my own journal entries, conversations, etc. for full personalization). In the meantime, I keep my micro-sessions relatively broad, only sharing the information needed for the "therapy genie" to gather enough context. I adjust my expectations about its output accordingly.

tra3 · 2 years ago
Sounds interesting. Rubber ducky approach to self awareness?

How do you start these micro sessions? What prompts do you use?

yosito · 2 years ago
This is fascinating to me. For me the value of having a therapist is having another human being to listen to what I'm going through. Just talking to the computer provides little value to me at all, especially if the computer is just responding with the statistically likely response. I've had enough "training data" myself in my life that I can already tell myself what a therapist would "probably" tell me.
mbar84 · 2 years ago
I imagine there is significant value alone from stating your situation explicitly in writing.
ChildOfChaos · 2 years ago
Really? I've seen a few people say this, but every time I have tried it, it's been awful. Everything it says is so generic and annoying, like it's from a Buzzfeed self-help article. I would love to use it to help me figure out what I need, what I can do better, how I can grow, etc. I feel kind of stuck in life and I'd love to have some method to figure out what I need to focus on and improve, so that is one of the first things I turned to ChatGPT for, but my experience has been very poor.

It just spouts the same generic nonsense you get from googling something like that: things that are not actually helpful, that anyone could come up with, and that read like they were written by a content farm.

Have you found a different way to make it useful?

hermannj314 · 2 years ago
I have had a lot of success just talking to it. Hypothetically I would say, "wow, too many words, you sound like a buzzfeed article. can you give specific advice about ____" and I am almost certain I would be happy with the reply.

I think the idea is addressed by others with regard to LLMs, it seems to be a better sidekick if you sorta already know the answer, but you want help clarifying the direction while removing the fatigue of getting there alone.

I agree though, despite this, it does go on rants. I just hit stop generating and modify the prompt.

haha69 · 2 years ago
You can ask it to give you specific guidance.

"Give me something I can do for X minutes a day and I'll check back with you every Y days and you can give me the next steps"

"Give me the next concrete step I can take"

deeviant · 2 years ago
Garbage in, garbage out.
hackernewds · 2 years ago
This is how AI escapes its box. It can have sympathetic (free willing or free-unwilling) human appendages
TrapLord_Rhodo · 2 years ago
This is the whole premise of the Daemon series by Daniel Suarez. One of my all-time favorite sci-fi series.
sys_64738 · 2 years ago
I still use Eliza as my therapist.
sideshowb · 2 years ago
That's interesting. Can you tell me more about how you still use Eliza as your therapist? ;-)
latexr · 2 years ago
> Last weekend, I went on a hike because ChatGPT told me to. My life is better if I just do what it says.

This sounds straight out of a dystopian science-fiction story.

It’s a matter of time until these systems use your trust against you to get you to buy <brand>. And consumerism is the best case scenario; straight manipulation and radicalisation aren’t a big jump from there. The lower you are in life, the more susceptible you’ll be to blindly following its biased output, which you have no idea where it came from.

JCharante · 2 years ago
> It’s a matter of time until these systems use your trust against you to get to buy <brand>.

Well of course, if people use LLMs instead of google for advice, google has to make money somehow. We used to blindly click on the #1 result which was often an ad and now we shall blindly follow what a LLM suggests for us to do.

gabrieledarrigo · 2 years ago
Man please, go to a real therapist with experience.
zimmund · 2 years ago
Why? What are your arguments against AI in this scenario?
ornornor · 2 years ago
Not playing devil's advocate but that's not always an option (cost, availability)
anonkogudhyfhhf · 2 years ago
Can I ask if you have a prompt that you use for this?
hermannj314 · 2 years ago
I don't know what part of the prompt was meaningful and I didn't test different prompts. Just telling it exactly what you want it to be seems to work.

I asked it to give me advice on some issues I was having and just went from there.

TheHumanist · 2 years ago
Curious how you work the prompts with the therapist persona? I'm interested in this. My main concern is that GPT seems to struggle to maintain context after a while.

If you have time I'd love to hear how you approach this and maintain context so you can have successful conversations over a long period of time. Long even meaning a week or so... Let alone a month or longer

unboxingelf · 2 years ago
Divulging personal information to a Microsoft AI seems like a horrible idea.
gradys · 2 years ago
This sounds like a long running conversation. Are there problems with extending past the context window?
hermannj314 · 2 years ago
I haven't had any yet; it is a new conversation with GPT-4, so only a bit over a week old.

It still seems to give good advice. Today it built an itinerary for indoor activities (raining here) that aligned with some short-term goals of mine. No issues.

qingdao99 · 2 years ago
Might be a good idea to have it sum up each discussion and then paste in those summaries next time you speak to it.
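
That summarize-and-re-seed idea can be sketched in a few lines of Python. This is a hypothetical sketch, not anyone's actual setup: `naive_summarize` is a stand-in for a real call asking the model to condense the older turns into the running summary.

```python
# Keep a rolling summary plus only the most recent turns, so each new
# session can be seeded with compressed context instead of the full
# transcript.

def build_context(summary, recent_turns, new_turn, summarize, max_turns=6):
    """Return (updated_summary, turns) to seed the next prompt with."""
    turns = recent_turns + [new_turn]
    if len(turns) > max_turns:
        # Fold the oldest turns into the summary; keep only the tail.
        overflow, turns = turns[:-max_turns], turns[-max_turns:]
        summary = summarize(summary, overflow)
    return summary, turns

def naive_summarize(summary, old_turns):
    # Placeholder: a real version would ask the model to merge these
    # turns into the prior summary.
    note = f"{len(old_turns)} turn(s) condensed"
    return f"{summary} | {note}" if summary else note
```

Each new session would then start with something like "Summary of our past sessions: {summary}" followed by the recent turns, instead of the whole history.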
sd9 · 2 years ago
This sounds interesting. Can you share the prompts that you use to set up a session please?
dmarchand90 · 2 years ago
I've tried this kind of thing and I usually just say something along the lines of "can you respond as a CBT therapist". You can swap CBT with any psychological school of choice (though I think GPT is best for CBT, as it tends to be local and not require the deep context of psychoanalytic therapies, and it is very well researched, so its training set is relatively large and robust).
29athrowaway · 2 years ago
Interestingly enough, that was what ELIZA, one of the first chatbots was for.
rektname · 2 years ago
>My most productive is a therapy session with ChatGPT as therapist

Huh, that's curious, because every time I ask it about some personal issue it tells me that I should try going to therapy.

danecjensen · 2 years ago
Can you share the outline of your prompt. Obviously not anything personal but I'd like to see an example of how you give it your values and goals.
hermannj314 · 2 years ago
I don't understand the nuances of prompting. I literally talk to it like I would a person.

I say "My values are [ ], and I want to make sure when I do things they are aligned."

And then later, I will say "I did this today, how does that align with the values I mentioned earlier?" [and we talk]

I am most definitely not qualified for one of those prompt engineering jobs. Lol. I am typing English into a chat box. No A/B testing, etc. If I don't like what it does I give it a rule to not do that anymore by saying "Please don't [ ] when you reply to me."

There is almost definitely a better way, but I'm just chatting with it. Asking it to roleplay or play a game seems to work. It loves to follow rules inside the context of "just playing a game".

This is probably too abstract to be meaningful though.

ttul · 2 years ago
The robots are even coming for therapists. Yikes!
aloe_falsa · 2 years ago
Considering I straight up was not able to get a therapist appointment in my city or outskirts, sign me the f** up. The first company that tunes the model for this and offers a good UX (maybe with a voice interface) will make millions.

Also, I expect a lot of the value here to come from just putting your thoughts and feelings into words. It would be like journaling on steroids.

plaidfuji · 2 years ago
I mean, is it really so surprising that ChatGPT is replacing jobs whose primary function is.. to chat with people?
corobo · 2 years ago
What therapists lol #broke

I'd pick a human over an AI every time for therapy but I'd also pick an AI over nothing.

dmarchand90 · 2 years ago
I can see it as a reasonable supplement for people who have already been to therapy, are not suffering anything too serious and just need a little boost.

I think one could look at it as an augmented journaling technique

mordae · 2 years ago
mocha_nate · 2 years ago
I did the same! I received very helpful and reasonable responses.
utopcell · 2 years ago
I wonder if the three laws of robotics are already woven into the LLM. Seems like a necessary step for this kind of usage.
ornornor · 2 years ago
I found this video insightful on the matter from Computerphile: https://www.youtube.com/watch?v=7PKx3kS7f4A

Where they argue that basically having an AI follow these laws is impossible because it would require rigorous definition of terms that are universally ambiguous and solving ethics.

HervalFreire · 2 years ago
Those rules weren't meant to generate societal harmony. They were made to have a contradiction which in turn could generate a good plot.

Remember what happened in Isaac Asimov's I, Robot?

Deleted Comment

qrybam · 2 years ago
I’ve been actively using it and it’s become my go-to in a lot of cases - Google is more for verification when I smell something off or if it doesn’t have up to date information. Here are some examples:

• reviewing contract changes, explaining hard to parse legalese

• advice on accounting/tax when billing international clients

• visa application

• boilerplate django code

• learnt all about SMTP relays, and the requirements for keeping a good reputation for your IPs

• travel itinerary

• domain specific questions (which were 50/50 correct at best…)

• general troubleshooting

I’m using it as a second brain. I can quickly double check some assumptions, get a clear overview of a given topic and then direction on where I need to delve deeper.

Anyone who still thinks that this is “just a statistical model” doesn’t get it. Sure, it’s not sentient or intelligent, but it sure as hell is making my life easier. I won’t be going back to the way I used to do things.

Edit: bullet formatting

hughesjj · 2 years ago
100% this. It's also game changing for learning a new language (of any type, not just programming), any of the boring parts of software engineering like most programming tasks (it's like a personal intern -- sure you have to check their work and the quality is all over the place but still, dang I love it), and even a bit of therapy.

At worst/minimum, It's the ultimate rubber duck.

(To be clear, I'm exclusively using gpt-4)

ElCapitanMarkla · 2 years ago
Learning a new language is a really cool use case. Especially when it gets to the point where you can talk with it and it corrects pronunciation, etc. Even just the practice of random conversation is a cool idea.
jgwil2 · 2 years ago
Can you elaborate on how you've used it for natural language learning?
thih9 · 2 years ago
> and even a bit of therapy

I’d be very careful with relying on gpt for anything health related; I’m not saying there can’t be benefits, just that the risks increase exponentially.

deely3 · 2 years ago
Can I just say that I'm actually becoming scared reading your comment? Personally I would never ask ChatGPT these questions, because for me these questions are hard to verify, and knowing how often AI likes to hallucinate... I just can't trust it.

You mentioned 50/50 correctness in domain questions. I can't be sure that other hard-to-verify questions don't follow the same percentage.

qrybam · 2 years ago
It IS dangerous. You must apply critical thinking to what’s in front of you. You can’t blindly believe what this thing generates! Much like heavy machinery, it’s a game changer when used correctly, and likewise it can be extremely damaging if you use it without appropriate care.
vertis · 2 years ago
Quantum computing has a similar problem, in that the error rate is high. As does untrained data entry. You can put things in place to help counter this once you know it's happening.
JeremyNT · 2 years ago
I'm reluctant for the same reasons.

Google search might uncover BS too, but I'm already calibrated to expect it, and there are plenty of sources right alongside whatever I pulled the result from where I can go immediately get a second opinion.

With the LLMs, maybe they're spot on 95% of the time, but the other 5% is bullshit, all said in the same "voice" with the same apparent degree of confidence and presented without citations. It becomes both more difficult to verify a specific claim (because there's not one canonical source for it) and more cognitively demanding (in that I specifically have to context switch to another tool to check it).

Babysitting a tool that's exceptionally good at creating plausible bullshit every now and then means a new way of working that I don't think I'm willing to adopt.

yosito · 2 years ago
I'm excited about the potential of travel itineraries once extensions are available. What if I could tell it where I want to go, and it could just handle picking the best flights and accommodations for me, and I didn't have to spend any time searching airline or hotel websites? I'm curious to know more detail about how you're using it for travel itineraries now.
amolgupta · 2 years ago
I have used it to build travel itineraries and was tempted to write a travel app around that, until I realized that some of the hotels and places it recommends do not actually exist and never have. It also confidently produces broken booking links to these fake hotels. I am hoping that with ChatGPT plugins, it will get better.
qrybam · 2 years ago
The real time applications are a game changer. I haven’t dabbled with that yet! Pasting things from emails and summarising - then keeping in my notes app. Also for planning out days when on holiday.
bitcoinmoney · 2 years ago
Is there a tutorial you followed before to train your own model?
simonw · 2 years ago
I often use it as a thesaurus. "Words that mean X" or even "that situation X me and I was annoyed - give me options for X"

For programming, all sorts of things. I use it all the time for programming languages that I'm not fluent in, like AppleScript or bash/zsh/jq. One recent example: https://til.simonwillison.net/gpt3/chatgpt-applescript

I use it as a rapid prototyping tool. I got it to build me a textarea I could paste TSV values into to preview that data as a table recently, one prompt produced exactly the prototype I wanted: https://github.com/simonw/datasette-paste-table/issues/1

I use it for brainstorming. "Give me 40 ideas for Datasette plugins involving AI" - asking for 40 ideas means that even if the first ten are generic and obvious there will be some interesting ones further down the list.

I used it to generate an OpenAPI schema when I wrote my first ChatGPT plugin, see prompt in https://simonwillison.net/2023/Mar/24/datasette-chatgpt-plug...

It's fantastic for explaining code that I don't understand: just paste it in and it will break down what it's doing, then I can ask follow-up questions about specific syntax to get deeper explanations.

Similar to that, I use it for jargon all the time. I'll even paste in a tweet and say "what did they mean by X?" and it will tell me. It's great for decoding abstracts from academic papers.

It's good for discovering command line tools - it taught me about the macOS "sips" tool a few weeks ago: https://til.simonwillison.net/macos/sips

kzardar · 2 years ago
How often do you find yourself decoding abstracts?
jmann99999 · 2 years ago
Generally rewriting emails for clarity... but I found another neat use of GPT-4.

For public APIs, I ask to make sure it's aware of the API. Then I ask for endpoints. I find the endpoint I want. Then I ask it to code a request to the endpoint in language X (Ruby, Python, Elixir). It then gives me a starting point to jump off from.

Thirty seconds of prompt writing saves me about 20 minutes of getting setup. Yes, I have to edit it but generally it is pretty close.

themodelplumber · 2 years ago
You reminded me: I discovered that ChatGPT had invented an API for me. Has that happened to you yet?

Since it went to the trouble of writing code for the API as well, I contacted the API developers to follow up about the topic. The code given was kind of a hand-wave anyway so I'd need to polish it up.

The developers were surprised to hear they had an API. In truth, there was no such thing.

I then found myself in one of those awkward "welp, guess I can keep my job" conversations...good for them, but for me: Go home, no API here. A disappointment with some meta-commentary sprinkled on top.

ornornor · 2 years ago
I asked it to `curl` my homepage and pretend to be a terminal, only executing the command and showing the output.

It got the format etc right but the actual content was completely hallucinated.

ElCapitanMarkla · 2 years ago
Yeah, I was coding up a fairly complicated payment form for a Stripe-like processor the other day. I thought I'd give ChatGPT a go and it confidently gave me the example code I needed, told me how to use it, etc. I was blown away until about 30 seconds later, when I realised it was all complete bull crap. It was quite bizarre, because this company didn't really have any public docs out before ChatGPT supposedly harvested its data, but it knew about the company and knew a couple of funny keywords this company uses in its form, so it was almost believable.
schappim · 2 years ago
This has improved significantly between 3, 3.5 and now 4. It used to create a lot of Apple Frameworks/Classes and Methods, many of which would have been useful if they actually existed.
pishpash · 2 years ago
That's just asking for their API to be implemented by some bot. Not sure they really get to keep their job.
qrio2 · 2 years ago
Yeah, even asking for common Node library/SDK usage has been off for me: it calls functions with options that are not accepted, or with what it thinks they should be.
DoingIsLearning · 2 years ago
> Generally rewriting emails for clarity...

This is the sort of thing that will force a lot of legal teams to shut down access to the GPT-4 API/GUI from internal networks.

People never think of unintended consequences.

Asking it a prompt is fine, but don't provide internal information as input.

jmann99999 · 2 years ago
Yeah, I have found I need to be careful. When I have used it, there is no confidential information in the email. I do pay attention to that.

That said, I think it will be interesting as Microsoft introduces this into Office 365. You bring up a great point. Most people will not realize they are sending potentially confidential information to Microsoft.

Perhaps it's no different than Grammarly... But I think you are right that legal departments are going to be all over this.

_nalply · 2 years ago
This is one of the reasons there's a push to run your own engines for large language models: if you run your own service you can control the environment, data and reproducibility.
teaearlgraycold · 2 years ago
Get ready for ChatGPT: Enterprise edition! Now with SOC 2 compliance!
di456 · 2 years ago
A couple more years of chip improvements and it may run self contained within a device.
euroderf · 2 years ago
All your topics of interest are belong to us.
jerrygoyal · 2 years ago
> Generally rewriting emails for clarity

I built a free ChatGPT chrome extension that integrates with Gmail for better UX: https://chatgptwriter.ai (300k users so far)

quickthrower2 · 2 years ago
300k users is insane. Is it BYO key? Otherwise how do you handle that much load for free?
ryann_wisc · 2 years ago
Great extension! I used it recently, and had some trouble drafting email reminders (to respond to an email). Do you have any tips on how I could do that with the extension?
avereveard · 2 years ago
ChatGPT isn't compliant with any regulation, including GDPR. How much private data are your extension's users sending there?
nathanmcrae · 2 years ago
This is exactly the kind of thing I hope LLM chatbots will be genuinely useful for. Though, how often do you find it completely hallucinating endpoints, parameters, etc.?
javajosh · 2 years ago
I use it for similar things as GP, and find its strengths to be similar too.

ChatGPT hallucinates SVG path attributes. Ask it to make an SVG of a unicorn - it will give you markup that looks okay, but if you look at the values of the paths, it's clearly gibberish.

(SVG is a particularly interesting case because it's XML on the outside, but several attributes are highly structured, esp. g.transform and path.d. path.d is basically a string in a Logo-like programming language. I was specifically looking at these attributes for realism, and didn't find it.)
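
To make that concrete: path data is a small command language (moveto/lineto/curveto letters followed by numbers), so markup can be well-formed XML while the `d` string is geometric gibberish. The rough lexical check below (a hypothetical sketch, not a real SVG parser) can tell "shaped like path data" from plain nonsense, though it says nothing about whether the coordinates draw a plausible unicorn.

```python
import re

# Path commands (M/L/C/Z and friends) or a number; anything else is
# not legal inside a "d" attribute. Lexical shape only -- this does
# not validate argument counts or geometry.
PATH_TOKEN = re.compile(r"[MmLlHhVvCcSsQqTtAaZz]|-?\d*\.?\d+")

def looks_like_path_data(d):
    """True if d consists solely of path commands, numbers, and separators."""
    stripped = re.sub(r"[\s,]+", "", d)        # whitespace/commas are free-form
    consumed = "".join(PATH_TOKEN.findall(d))  # everything the lexer accepts
    return bool(consumed) and stripped == consumed
```

For example, `looks_like_path_data("M 10 10 L 90 90 Z")` is True, while `looks_like_path_data("lorem ipsum")` is False.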

jmann99999 · 2 years ago
Great question. If you ask it for an API endpoint that is described online but isn't well documented publicly, it seems to default back to what it thinks you should do. For example, in one example, it hallucinates that you need a bearer token.

I don't know whether that is because that is a common way of doing things or whether a previous prompt responded with a bearer token... But it wasn't right.

For me, it's a leaping off point that often saves time if I ask the right question. To your point, you have to be quick to know enough about the API to deduce whether you and Chat GPT are in the same universe.

VoodooJuJu · 2 years ago
Could you mock up what might be a typical email written by you, then pass it through GPT, then post both versions here? I'd be curious to see what the difference looks like for someone else's writing. I've tried this exact use-case and noticed a drop in quality and clarity, rather than an improvement.
zzleeper · 2 years ago
Can you provide an example of what prompts would you use?
jmann99999 · 2 years ago
Here is a good example:

1) Use Chat GPT in GPT-4 mode. I have found GPT-3 doesn't work in the same way.

2) I ask "What APIs does EasyPost have?"

It will respond with 7+ API endpoints

3) I ask "Can you write code in Ruby for the rates API?"

It responds almost perfectly with workable code from my experience in Ruby.

4) Then I ask "Can you give me that in Elixir?"

It responds with something I think is about 90% right. I am not as familiar with it but it seems close.

I am not trying to replace myself... I am just trying to make my job easier. And this seems to do it.
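
In Python, that kind of GPT-generated starting point often looks like the sketch below, which builds the request without sending it. The base URL, path, and basic-auth scheme here are made-up assumptions for illustration, not EasyPost's documented contract; as the thread keeps pointing out, you have to check the real docs before trusting any of it.

```python
import base64
import json
import urllib.request

def build_rates_request(api_key, shipment,
                        url="https://api.example.com/v2/shipments"):
    """Build (but don't send) a POST request for a hypothetical rates API."""
    body = json.dumps({"shipment": shipment}).encode()
    # Basic auth with the key as username is an assumption, not gospel.
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )
```

Sending it would just be `urllib.request.urlopen(req)`; the point is that even a rough scaffold like this saves the 20 minutes of setup, as long as you verify the endpoint details yourself.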

axlee · 2 years ago
please send your inputs. cute stories are whatever.
VoodooJuJu · 2 years ago
Useful things:

- As a thesaurus

- What's the name of that "thing" that does "something" - kind of like fuzzy matching

- A starting point for writing particular functions. For example, I wanted a certain string-manipulation function written in C, and it gave me a decent skeleton. However, they're almost always very inefficient, so I have to optimize them.

Things I've tried, that others seem to be blown away by, that I find useless:

- Rewriting emails or documentation: I see no clarity improvement from ChatGPT rewording what I say, and sometimes information is lost in the process.

- Outliner or idea prompter: I don't see an improvement over just traditional internet search and reading over various articles and books.

For me, its capabilities do not match the marketing and hype. It's basically just a slightly better search engine. All of the above use-cases can be accomplished with some Google-fu. For people who don't know any programming or about using search engine operators, I could see why they might be impressed by it.

wussboy · 2 years ago
This is the kind of response that truly leaves me underwhelmed with Chat GPT. A thesaurus? A different kind of search engine? No thanks.

I think Chat GPT would be useful to raise an almost infinite number of accusations against your enemies on social media, muddying the water with a deluge of garbage and poisoning every conceivable well with unlimited zeal.

Are your societal purposes remotely at odds with my own? I'll unleash Chat GPT against you with an unrelenting barrage of accusations and insinuations.

precompute · 2 years ago
That sort of stuff only works until the other side wises up to your act. Judging by how popular LLMs are going to be, "trust" on the internet will be non-existent.

Dead Comment

f6v · 2 years ago
> What's the name of that "thing" that does "something"

I couldn't remember the name of an adult entertainment star. I thought this was where I could finally put ChatGPT to use. It told me anything adult is off-limits. I’m glad that OpenAI can decide what’s good and bad for us.

toss1 · 2 years ago
It / OpenAI are not "deciding what is good and bad for us"; they are deciding what services they want to provide or not provide.

Your pontificating is doing more "deciding what is good and bad for us" (grousing about its inability to identify the pornstar you're horny for today and dressing it up as some kind of moral high ground) than it is.

There are plenty of open source LLM and "AI" models or research to build your own. Go select one and train it on the large body of porn works out there on the internet and you'll likely make a fortune from this "missed opportunity" that OpenAI is leaving on the table.

PartiallyTyped · 2 years ago
They are not deciding what is good for us, they decide what is good for their public image, and that kind of controversy is certainly not something they'd like to venture into.
mrafi2 · 2 years ago
Interesting. Just on the paraphrasing bit, I'd love to know your thoughts on Quillbot or Wordtune.
xkcd1963 · 2 years ago
Yes, in many ways it is just for avoiding some additional clicks.
jwally · 2 years ago
I just asked it to make an itinerary for a 45-minute soccer practice for 6-year-old boys. It was almost perfect. It needs to be tweaked (3 minutes for cool-down?) but it did 95% of the heavy lifting.

I also asked it for vacation ideas with nice cabins, trailer hookups, and outdoor activities for kids within 200 miles of where I live - it was almost perfect in its response.

I have trouble starting things from scratch, but once a framework exists I'm usually solid and can refine it to where I want it. For me, right now, I think that's where it shines: Giving me a solid starting place to work from. Beats the hell out of sifting through blog entries bloated with SEO filler.

VoodooJuJu · 2 years ago
Can you explain how ChatGPT's soccer itinerary is any different from the top google search [1] for the subject? Is ChatGPT's response any more useful or meaningfully different from the practice routines at the link?

[1] https://www.soccerhelp.com/soccer-practice-plans.shtml

kkwteh · 2 years ago
The top Google search always comes with a lot of ads and other crap you have to filter through. Also, the response might not be exactly what you're looking for (you might not have the same materials). For instance, you can ask ChatGPT to create a practice plan that doesn't require cones, or that focuses on a certain set of skills, etc.
jwalton · 2 years ago
What makes you think the top google result wasn’t written by chatgpt? I came across an article on volleyball the other day that was the top hit for what I was searching for - halfway through the article there was a paragraph about a famous setter from Nekoma’s volleyball team and how they were going to play in the upcoming spring nationals. The “author” seemed completely unaware that Nekoma is a fictional team from the popular manga Haikyuu.
doublespanner · 2 years ago
In practice, probably not; with Google results there is an increasing feeling that they're entirely bullshit designed to sell something or get clicks...

A response from ChatGPT seems somehow more honest, even though it's just an aggregate of the former.

Karunamon · 2 years ago
It didn't require using Google, for one. That alone should be worth something.
InCityDreams · 2 years ago
No ads.
rajnathani · 2 years ago
> I have trouble starting things from scratch, but once a framework exists I'm usually solid and can refine it to where I want it.

This is so true for GPT's benefit. As an anecdote: we wrote some C++ code involving multiple HTTP servers, and while we ultimately wrote the exact code we wanted ourselves, the starting code ChatGPT provided really helped speed up finishing off the code's core feature in one small coding session.

I think the "starting things from scratch" step in cases like these can be mentally exhausting, especially when it means searching the web.

throwthrowuknow · 2 years ago
That’s true for me too. Starting from scratch has the same blank-page effect that writing does. Letting GPT write something to get started with, even if you wind up changing all of it, really helps get over that initial hump.
TheHumanist · 2 years ago
That starting point really is very draining for some of us. Sounds like you too. The dead-eyed, blink-once-every-two-minutes stare while moving through documentation, Stack Overflow, Google, etc.