With GPT so hot in the news right now, and with so many impressive demos around, I'm curious: how are you actively using GPT to be productive in your daily workflow? What tools are you using in tandem with GPT to make it more effective? Have you written your own tools, or do you rely on third-party ones?
I'd be particularly interested to hear how you use GPT to write or correct code beyond Copilot or asking ChatGPT about code in chat format.
But I'm also interested in hearing about useful prompts that you use to increase your productivity.
I also use it for creative tasks - for example I asked it for pros and cons of my cover letter and iterated to improve it. I also used it to come up with ideas for lesson plans, draft emails, and overcome writer's block.
GPT has drastically lowered the emotional-resistance barrier to doing creative tasks and improved the quality of my output by giving me creative ideas to work with.
GPT-3 had no clue.
This is fantastic.
I think if the training set didn't contain enough of something, it can't really come up with a solution.
What is really nice, though, is, as you say, the refinement of questions. Sometimes it's hard to think of the right query; maybe you're missing the words to express yourself, and to ChatGPT you can say "yes, but not quite."
It might help to approach it from top down? Usually, if I'm asking a technical question, I want to apply my deeply understood principles to a new set of implementation details, and it has amplified the heck out of my speed at doing that.
I'm kind of a difficult-to-please bastard, a relatively notorious meat grinder for interns and junior devs, and still I find myself turning to this non-deterministic Frankenstein more and more.
I'm not saying you used it to write that response by the way, just that it may become more and more common for people to adopt this style the more ChatGPT's usage is widespread.
I was thinking that maybe in the near future it will be "better" to write with a couple of mistakes here and there just to prove your humanity. Like the common "loose" instead of "lose" mistake, it will be like a stamp proving that you are a human writing.
Literally just googled that and the first result:
https://stackoverflow.com/questions/53342715/pandas-datafram...
You're not using it like Stack Overflow. It's actually regurgitating Stack Overflow, except with errors hallucinated in.
You clearly can't just take the code, paste it in, and trust that it works, but you shouldn't be doing that with Stack Overflow either.
I don't trust it with my data, and won't rely on such tools until I can self-host them, and they can be entirely offline. There is some progress in this space, but they're not great yet, and I don't have the resources to run them. I'm hoping that the requirements will go down, or I might just host it on a cloud provider.
The number of people who don't think twice about sending these services all kinds of private data, even in the tech space, is concerning. Keyloggers like Grammarly are particularly insidious.
Interestingly, my point to The Verge was exactly that. https://twitter.com/theshawwn/status/1633456289639542789
Me:
> So, imagine it. You'll have a ChatGPT on your laptop -- your very own, that you can use for whatever purposes you want. Personally, I'll be hooking it up to read my emails and let me know if anything comes in that I need to pay attention to, or hook it up to the phone so that it can schedule doctor's appointments for me, or deal with AT&T billing department, or a million other things. The tech exists right now, and I'd be shocked if no one turns it into a startup idea over the next few years. (There's already a service called GhostWrite, where you can let GPT write your emails on your behalf. So having one talk on the phone on your behalf isn't far behind.)
The article:
> Presser imagines future versions of LLaMA could be hosted on your computer and trained on your emails; able to answer questions about your work schedules, past ideas, to-do lists, and more. This is functionality that startups and tech companies are developing, but for many AI researchers, the idea of local control is far more attractive. (For typical users, tradeoffs in cost and privacy for ease of use will likely swing things the other way.)
Notice how they turned the point around from "you can host it yourself" to "but typical users probably won't want that," like this is some esoteric concern that only three people have.
So like, it's not just you. If you feel like you're "in the minority" just because you want to run these models yourself, know that even as an AI researcher I, too, feel like an outsider. We're in this together.
And I have no idea why things are like this. But I just wanted to at least reassure you that the frustrations exist at the researcher level too.
I draw the line, though, at using these tools for anything beyond the drudgery of daily work. I don't want them to impersonate me or write emails on my behalf. I cringe whenever Gmail suggests the next phrase it thinks I want to write. It's akin to someone trying to finish your sentences for you. Stop putting words in my mouth!
The recent Microsoft 365 Copilot presentation, where the host had it ghostwrite a speech for their kid's graduation party[1], complete with cues about where to look(!), is unbelievably cringey. Do these people really think AI should be assisting with such personal matters? Do they really find doing these things themselves a chore?
> And I have no idea why things are like this.
Oh, I think it's pretty clear. The amount of resources required to run this on personal machines is still prohibitively high. I saw in one of your posts that you use 8xA100s. That's a crazy amount of compute, unreachable by most people, not to mention the disk space required. Only once the resource requirements come down and our personal devices are _much_ more powerful will self-hosting be feasible.
Another, perhaps larger, reason is that AI tools are still a business advantage for companies, so it's no wonder they want to keep them to themselves. I think this will change and open source LLMs will be widespread in a few years, but proprietary services will still be more popular.
And lastly, most people just don't want/like/know how to self-host _anything_. There's a technical barrier to entry, for sure, but even if that is lowered, most people are entirely willing to give up their personal data for the convenience of using a proprietary service. You can see this today with web, mail, file servers, etc.; self-hosting is still done by a very niche group of privacy-minded tech-literate people.
Anyway, thanks for leading the way, and spreading the word about why self-hosting these tools is important. I hope that our vision becomes a reality for many soon.
[1]: https://www.youtube.com/watch?v=ebls5x-gb0s
Propaganda. These tools are not for the people, and I'm convinced that the idea of how much better our lives could be, if technology were thoughtfully designed to truly serve the user, is being purposely and subtly filtered out of the collective conversation.
I’d love to have more privacy on everything, but realistically, the ship’s sailed on most of it.
I've played around with ChatGPT and Copilot a little, and found that they are often subtly, but very confidently, wrong in their output when asked to perform a programming task.
Sure, you could spend ages refining the prompt, etc., but most of the time it's going to be faster to just write the fucking code yourself in the first place.
Then there's the privacy/security concerns...
Of course, I'm saying this without actually having used it for programming, so I might be way off base, but the feedback from coworkers who rely on even the now-basic GitHub Copilot is that it greatly improves their productivity. I'm envious, of course, but I'm not willing to sacrifice my privacy for that.
In practice the stuff it will suggest to me is sort of random, it may or may not be the best choice for the task at hand, but it's a form of discovery I didn't have previously. The fact that when it tells me about e.g. a new library it can also mock up some sample code that might or might not work is a pleasant bonus.
If you're only expecting it to solve your hard problem completely and from scratch, entirely from a prompt, that's probably not going to succeed. But I can't see how you're possibly faster typing 80-90 extra characters of a log statement than a Copilot user who just presses tab to get the same thing. Those little things add up to significant time savings over a week. Same for mocking services in a test, manipulating lists of data, or any number of things it autocompletes where you'd previously need to write a short script or learn advanced vim movements and macro recording.
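To make the log-statement point concrete, here's a minimal Python sketch of the kind of line I mean; the function and variable names are made up for illustration. Copilot will typically offer the whole statement after the first few characters:

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    def handle_order(order_id: str, user_id: str) -> None:
        # Type "logger.in", press tab, and Copilot fills in the rest,
        # including the interpolated variables it sees in scope:
        logger.info("processing order %s for user %s", order_id, user_id)

    handle_order("ord_123", "user_456")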
I suspect the people who find this tech amazing don't program much, or are using it very differently than we are. Or program very differently than us.
Well I hope... I've definitely seen teams and codebases with worse output than GPT so...
I'm not doing the kind of work that lends itself to AI tools, or at least what I've been focussing on hasn't lent itself to such tools. Not yet.
The places I'd use it are rough drafting in an area where a community of basic people with more knowledge than me could get the job done. For instance, at one point I got Stable Diffusion to generate a bunch of neat album covers in various styles, like I was an art director. Also asked it to draw toys of certain kinds as starting points for game characters. I wanted some prompts.
In my job I quickly get to where I have to start coming up with ideas most people don't think of. That said, I see marketing possibilities: 'this is the category in which I work, tell me what you need out of it'. Then, when you have the thing made, 'this is the thing, why do you want to buy it?'
ChatGPT would be able to answer that. It's least capable of coming up with an idea outside the mainstream, but it ought to be real good at tapping the zeitgeist because that's all it is, really! It's a collective unconscious.
It's ONLY a collective unconscious. Sometimes what you need to do is surprise that collective unconscious, and AI won't be any better at that than you can be. But sometimes you need to frame something to make sense to the collective unconscious, and AI does that quite easily.
If you asked your average person 'what is great art?' they would very likely fall back on something like Greg Rutkowski, rather than say Basquiat. If you ask AI to MAKE art, it can mimic either, but will gravitate towards formulas that express what its collective unconscious approves of. So you get a lot of Rutkowski, and impress a lot of average people.
Your argument of "I don't trust it with my data and won't until I can self-host" should apply to Google search as well, no?
An alternative take is that, for whatever reason, you decided you didn't want to use new tools, constructed an argument a posteriori to justify that, and haven't realized the same argument applies to your old tools.
These are partly the same reasons I don't voluntarily use proprietary services at all. I don't want to train someone else's model, nor help them build a profile on me. Even if they're not involved in adtech—a rarity nowadays—you have no guarantees of how this data will be used in the future.
For AI tools, there's currently no alternative. Large corporations are building silos around their models, and by using their services you're giving them perpetual access to your inputs. Even if they later comply with data protection laws and allow you to delete your profile, they won't "untrain" their models, so your data is still in there somewhere. Considering that we're currently talking about 32,000 tokens' worth of input, and soon people uploading their whole codebases, that's an unprecedented amount of data they can learn from, compared to what they can gather from web search terms. No wonder adtech is salivating at the prospect of opening up the firehose for you to feed them even more data.
The use cases of AI tools are also different, and more personal. While we use search engines for looking things up on the web, and some personal information can be extracted from that, LLMs are used in a conversational way, and often involve much more personal information. It's an entirely different ballpark of privacy concerns.
I may use Google to look up if that slight itch I feel is a symptom of cancer (I'm exaggerating), and I store mails with personal details, my calendar, and messages on Google. But I also assume they're not using those texts to train an AI.
When you enter a code snippet or a personal question in ChatGPT, and press the little thumbs up/down next to the answer, you're adding your data to a training set. The next generation of the model might regurgitate that text verbatim.
I don't need it to write documents or emails for me. It mostly generates filler, which... nobody needs.
Most of the energy I put into code is about what it should do and how to make it clear to the next person, not typing. I was able to use it once to look up a complex SQL fix that I was having a hard time Googling the syntax on, but that's it.
Perhaps it would be useful if I was working in a language I'm not familiar with, BUT in that scenario I really need it to cite its sources, because that's exactly the case where I wouldn't know when it's making a mistake.
There's something useful here, but it's probably more like a library help desk meets a search engine on steroids. It would be pretty cool to run an AI on my laptop that knows my own code and notes where I can ask "I did something like this three years ago, go find it."
Said as someone who is waiting for the ability to self-host before doubling down on these tools.
Also, OpenAI is not the only company in this market anymore. Google, Facebook and Microsoft have competing products, and we know the privacy track record of these companies.
I have an extreme take on this, since for me this applies to all "free" proprietary services, which I avoid as much as possible. The difference with AI tools is that they ask for a much deeper insight into your mind, so the profile they build can be far more accurate. This is the same reason I've never used traditional voice assistant tools either. I don't find them more helpful than doing web searches or home automation tasks manually, and I can at least be somewhat in control of my privacy. I might be deluding myself and making my life more difficult for no reason, but I can at least not give them my data voluntarily. This is why I'll always prefer self-hosting open source tools, over using a proprietary service.
I'm waiting for the whole thing to evolve enough to have self-hosted stuff to run at home.
I can't be bothered to add an extra layer of bullshit into the already bullshit infested realm that is the internet.
I use it for rewriting content, generating ideas, and simplifying text (legal/verbose text; simplifying terms is a killer feature, really) and context. Even though my trust in the output is limited, it is helpful.
I love the art / computer vision side of AI/ML, though I only like to do that with tools on my own machine rather than relying on a very closed dataset or company. That is harder with AI/ML because of the storage/processing needed.
I hate black boxes and magic I don't have access to, though I am a big fan of stable, unchanging, atomic input/output APIs, as long as I have access to the flow. The chat input/output is so simple it will win, as it will never really have a breaking change. Until commercial AI/ML GPTs are more open, they can't be trusted not to be a Trojan horse or a trap. What happens when it goes away, or the model changes, or the terms change?
As far as companies go, Google seems to be the most open, and Google Brain really started this whole thing with transformers.
Transformers, the T in GPT, were invented at Google Brain [1][2]. They made this round of progress possible.
> Transformers were introduced in 2017 by a team at Google Brain and are increasingly the model of choice for NLP problems, replacing RNN models such as long short-term memory (LSTM). The additional training parallelization allows training on larger datasets. This led to the development of pretrained systems such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), which were trained with large language datasets, such as the Wikipedia Corpus and Common Crawl, and can be fine-tuned for specific tasks.
Google also gave the public TensorFlow [3] and DeepDream [4], which really started the intense excitement around AI/ML. I was super interested when the AI art / computer vision side started to come up. The GANs for style transfer and Stable Diffusion are intriguing, almost euphoric, in their output.
In terms of GPT/chat, Bard, or some iteration of it, will most likely win long term, though I wish it was just called Google Brain. Bard is a horrible name.
ChatGPT is basically built on AI tech created at Google Brain: transformers. These were used to build ClosedGPT; for that reason it is NopeGPT. ChatGPT is really just datasets, which no one knows anything about; they could be swapped at any time to run some misinformation, then swapped back the next day. This is data black-boxing and gaslighting at the utmost level. Not only that, it is largely funded by private sources, which could include authoritarian money. Again, black boxes create distrust.
Microsoft is trusting OpenAI, and that is a risk. Maybe their goal is embrace, extend, extinguish here, but compared with Google and Apple, Microsoft may be a bit behind on this. GitHub Copilot is great, though. Microsoft usually comes along later and makes an accessible version, and the AI/ML offerings on Azure are already solid. AI/ML is suited to large datasets, so cloud companies will benefit the most; it is also very, very costly, which unfortunately keeps it in BigCo or wealthy-only arenas for a while.
Google Brain and other tech is way more open already than "Open"AI.
ChatGPT/OpenAI just front-ran the commercial side; long term, they aren't really innovating the way Google is on this. They look like a leader from the marketing/pump, but they are a follower.
[1] https://en.wikipedia.org/wiki/Google_Brain
[2] https://en.wikipedia.org/wiki/Transformer_(machine_learning_...
[3] https://en.wikipedia.org/wiki/TensorFlow
[4] https://en.wikipedia.org/wiki/DeepDream
My most productive use is a therapy session with ChatGPT as the therapist. I told it my values, my short-term goals, some areas of my life where I'd like to have more focus, and areas where I would like to spend less time.
Some days we are retrospective and some days we are planning. My therapist gets me back on track, never judges, and has lots of motivational ideas for me. All aligned with my values and goals.
Last weekend, I went on a hike because ChatGPT told me to. My life is better if I just do what it says.
- The insight into your mind that a private, for-profit company gets is immense, and potentially very damaging when weaponized (either through a "whoops, we got hacked" moment or intentionally for the next level of adtech)
- ChatGPT and other LLMs are known to hallucinate. What if its advice is wrong, or makes you worse off in the long term, because it's just regurgitating whatever at you?
Therapy isn't magical, always-correct advice either. It's about shifting your focus, attitudes, and thought patterns through social influence, not giving you the right advice at each and every step.
Even if it's just whatever, being heard out in a nonjudgmental manner, acknowledged, prompted to reflect, does a lot of good.
> ChatGPT and other LLMs are known to hallucinate. What if its advice is wrong, or makes you worse off in the long term, because it's just regurgitating whatever at you?
As someone on a long-term therapy journey, I would be far less concerned about this. Therapy is rarely about doing exactly what one is told; it's about exploring your own thought processes. When a session does involve some piece of advice, or "do xyz for <benefit>", that is rarely enough to make it happen. Knowing something is good and actually doing it are two very different things, and it is exploring this delta that makes therapy valuable (in my personal experience).
At some point, as that delta shrinks and one starts actually taking beneficial actions instead of just talking, the advice becomes more of a reminder / an entry point to the ground one has already covered, not something that could be considered prescriptive like "take this pill for 7 days".
The point I'm trying to make is that if ChatGPT is the therapist, it doesn't make the person participating into a monkey who will just execute every command. Asking the bot to provide suggestions is more about jogging one's own thought processes than it is about carrying out specific tasks exactly as instructed.
I do wonder how someone who hasn't worked with a therapist would navigate this. I could see the value of a bot like this as someone who already understands how the process works, but I could absolutely see a bot being actively harmful if it's the only support someone ever seeks.
My first therapist was actively unhelpful due to lack of trauma-awareness, and I had to find someone else. So I could absolutely see a bot being unhelpful if used as the only therapeutic resource. On the flip side, ChatGPT might actually be more trauma-"aware" than some therapists, so who knows.
Do you think you are somehow special? Just create a burner account and ask it what you want. Everything it gets told, it's seen thousands of times over. Does ChatGPT, or some data scientist somewhere, really care that someone somewhere is struggling with body issues, struggling in a relationship, or struggling to find meaning in their lives? There are literally millions of people in the world with the same issues.
The only time it might be a little embarrassing is if this info got leaked to friends and family with my name attached to it; otherwise I don't get the problem. It seems to me people have an over-inflated sense of self-importance. Nobody cares.
If I worked at OpenAI and had full access to everyone's chats, I would get bored within five minutes and not even want to read any more.
What if you talk to a human, and their advice is wrong or makes you worse off in the long term, because they're just repeating something they heard somewhere?
Here's my advice: Don't accept my advice blindly, humans make mistakes too.
How exactly?
You currently cannot get a therapist to parachute into your life at a moment's notice to talk with for 5-10 minutes. (Presumably only the ultra-wealthy might have concierge therapists, but this is out of reach for 99% of people.) For the vast majority of people, therapy is a 1 hour session every few weeks. Those sessions also tend to cost a lot of money (or require jumping through insurance reimbursement hoops).
To keep the experience within healthy psychosocial bounds, I just keep in mind that I'm not talking with any kind of "real person", but rather the collective intelligence of my species.
I also keep in mind that it's a form of therapy that requires mostly my own pushing of it along, rather than the "therapist" knowing what questions to ask me in return. Sure, some of the feedback I get is more generic, and deep down I know it's just an LLM producing it, but the experience still feels like I'm checking in with some kind of real-ish entity who I'm able to converse with. Contrast this to the "flat" experience of using Google to arrive at an ad-ridden and ineffective "Top 10 Ways to Beat Procrastination" post. It's just not the same.
At the end of some of these "micro-sessions", I even ask GPT to put the insights/advice into a little poem or haiku, which it does in a matter of seconds. It's a superhuman ability that no therapist can compete with.
Imagine how much more we can remember therapeutic insights/advice if they are put into rhyme or song form. This is also helpful for children struggling with various issues.
ChatGPT therapy is a total game-changer for those reasons and more. The mental health field will need to re-examine treatment approaches, given this new modality of micro-therapy. Maybe 5-10 minute micro-sessions a few times per day are far superior to medication for many people. Maybe there's a power law where 80% of psych issues could be solved by much more frequent micro-therapeutic interactions. The world is about to find out.
*Edit: I am aware of the privacy concerns here, and look forward to using a locally-hosted LLM one day without those concerns (to say nothing of the fact that a local LLM can blend in my own journal entries, conversations, etc. for full personalization). In the meantime, I keep my micro-sessions relatively broad, only sharing the information needed for the "therapy genie" to gather enough context, and I adjust my expectations about its output accordingly.
How do you start these micro sessions? What prompts do you use?
It just spouts the same generic nonsense you get from googling something like that: things that are not actually helpful, that anyone could come up with, and that read like they were written by a content farm.
Have you found a different way to make it useful?
I think the idea, as addressed by others with regard to LLMs, is that it's a better sidekick if you sorta already know the answer but want help clarifying the direction, while removing the fatigue of getting there alone.
I agree, though: despite this, it does go on rants. I just hit "stop generating" and modify the prompt.
"Give me something I can do for X minutes a day and I'll check back with you every Y days and you can give me the next steps"
"Give me the next concrete step I can take"
This sounds straight out of a dystopian science-fiction story.
It's a matter of time until these systems use your trust against you to get you to buy <brand>. And consumerism is the best-case scenario; straight manipulation and radicalisation aren't a big jump from there. The lower you are in life, the more susceptible you'll be to blindly following its biased output, which you have no idea where it came from.
Well, of course: if people use LLMs instead of Google for advice, Google has to make money somehow. We used to blindly click on the #1 result, which was often an ad, and now we shall blindly follow whatever an LLM suggests we do.
I asked it to give me advice on some issues I was having and just went from there.
If you have time, I'd love to hear how you approach this and maintain context so you can have successful conversations over a long period of time. "Long" even meaning a week or so, let alone a month or longer.
It still seems to give good advice. Today it built an itinerary for indoor activities (raining here) that aligned with some short-term goals of mine. No issues.
Huh, that's curious, because every time I ask it about some personal issue it tells me that I should try going to therapy.
I say "My values are [ ], and I want to make sure when I do things they are aligned."
And then later, I will say "I did this today, how does that align with the values I mentioned earlier?" [and we talk]
I am most definitely not qualified for one of those prompt engineering jobs. Lol. I am typing English into a chat box. No A/B testing, etc. If I don't like what it does I give it a rule to not do that anymore by saying "Please don't [ ] when you reply to me."
There is almost definitely a better way, but I'm just chatting with it. Asking it to roleplay or play a game seems to work. It loves to follow rules inside the context of "just playing a game".
This is probably too abstract to be meaningful though.
Also, I expect a lot of the value here to come from just putting your thoughts and feelings into words. It would be like journaling on steroids.
I'd pick a human over an AI every time for therapy but I'd also pick an AI over nothing.
I think one could look at it as an augmented journaling technique.
Where they argue that having an AI follow these laws is basically impossible, because it would require rigorously defining terms that are universally ambiguous, and solving ethics.
Remember what happened in Isaac Asimov's I, Robot?
• reviewing contract changes, explaining hard to parse legalese
• advice on accounting/tax when billing international clients
• visa application
• boilerplate django code
• learnt all about SMTP relays, and the requirements for keeping a good reputation for your IPs
• travel itinerary
• domain specific questions (which were 50/50 correct at best…)
• general troubleshooting
I’m using it as a second brain. I can quickly double check some assumptions, get a clear overview of a given topic and then direction on where I need to delve deeper.
Anyone who still thinks that this is "just a statistical model" doesn't get it. Sure, it's not sentient or intelligent, but it sure as hell is making my life easier. I won't be going back to the way I used to do things.
Edit: bullet formatting
At worst/minimum, it's the ultimate rubber duck.
(To be clear, I'm exclusively using gpt-4)
I'd be very careful with relying on GPT for anything health-related; I'm not saying there can't be benefits, just that the risks increase exponentially.
You mentioned 50/50 correctness on domain questions. I can't be sure that other, hard-to-verify answers don't follow the same percentage...
Google search might uncover BS too, but I'm already calibrated to expect it, and there are plenty of sources right alongside whatever I pulled the result from where I can go immediately get a second opinion.
With the LLMs, maybe they're spot on 95% of the time, but the other 5% or whatever is bullshit, and it's all said in the same "voice", with the same apparent degree of confidence, and presented without citations. It becomes both more difficult to verify a specific claim (because there's not one canonical source for it) and more cognitive load (in that I specifically have to context switch to another tool to check it).
Babysitting a tool that's exceptionally good at creating plausible bullshit every now and then means a new way of working that I don't think I'm willing to adopt.
For programming, all sorts of things. I use it all the time for programming languages that I'm not fluent in, like AppleScript or bash/zsh/jq. One recent example: https://til.simonwillison.net/gpt3/chatgpt-applescript
I use it as a rapid prototyping tool. I got it to build me a textarea I could paste TSV values into to preview that data as a table recently, one prompt produced exactly the prototype I wanted: https://github.com/simonw/datasette-paste-table/issues/1
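The actual prototype was an in-browser textarea (see the issue link), but a rough Python sketch of the same transformation, TSV in and HTML table out, gives the flavor of what one prompt produced:

    import html

    def tsv_to_html_table(tsv_text: str) -> str:
        # Split pasted TSV into rows and cells, escaping each cell.
        rows = [line.split("\t") for line in tsv_text.strip().splitlines()]
        body = "\n".join(
            "<tr>" + "".join(f"<td>{html.escape(c)}</td>" for c in row) + "</tr>"
            for row in rows
        )
        return f"<table>\n{body}\n</table>"

    print(tsv_to_html_table("name\tscore\nada\t10\ngrace\t9"))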
I use it for brainstorming. "Give me 40 ideas for Datasette plugins involving AI" - asking for 40 ideas means that even if the first ten are generic and obvious there will be some interesting ones further down the list.
I used it to generate an OpenAPI schema when I wrote my first ChatGPT plugin, see prompt in https://simonwillison.net/2023/Mar/24/datasette-chatgpt-plug...
It's fantastic for explaining code that I don't understand: just paste it in and it will break down what it's doing, then I can ask follow up questions about specific syntax to get further deeper explanations.
Similar to that, I use it for jargon all the time. I'll even paste in a tweet and say "what did they mean by X?" and it will tell me. It's great for decoding abstracts from academic papers.
It's good for discovering command line tools - it taught me about the macOS "sips" tool a few weeks ago: https://til.simonwillison.net/macos/sips
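If you haven't met sips, it's an image tool that ships with macOS. A minimal example (macOS only; filenames are placeholders), driven from Python here to keep these snippets in one language:

    import subprocess

    # Resize so the longest edge is at most 800px, writing a new file.
    # sips ships with macOS; -Z caps height/width, --out sets the output.
    subprocess.run(
        ["sips", "-Z", "800", "photo.png", "--out", "photo-small.png"],
        check=True,
    )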
For public APIs, I ask to make sure it's aware of the API. Then I ask for endpoints. I find the endpoint I want. Then I ask it to code a request to the endpoint in language X (Ruby, Python, Elixir). It then gives me a starting point to jump off from.
Thirty seconds of prompt writing saves me about 20 minutes of getting set up. Yes, I have to edit it, but generally it is pretty close.
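For illustration, the kind of starting point I mean, in Python; the endpoint, key, and field names here are hypothetical, not a real API:

    import requests

    API_KEY = "your-api-key"  # placeholder

    # Hypothetical endpoint and response shape, purely for illustration.
    resp = requests.get(
        "https://api.example.com/v1/widgets",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    for widget in resp.json().get("widgets", []):
        print(widget["id"], widget["name"])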
Since it went to the trouble of writing code for the API as well, I contacted the API developers to follow up about the topic. The code given was kind of a hand-wave anyway so I'd need to polish it up.
The developers were surprised to hear they had an API. In truth, there was no such thing.
I then found myself in one of those awkward "welp, guess I can keep my job" conversations...good for them, but for me: Go home, no API here. A disappointment with some meta-commentary sprinkled on top.
It got the format etc right but the actual content was completely hallucinated.
This is the sort of thing that will force a lot of legal teams to shut down access to the GPT-4 API/GUI from internal networks.
People never think of unintended consequences.
Asking it a prompt is fine, but don't provide internal information as input.
That said, I think it will be interesting as Microsoft introduces this into Office 365. You bring up a great point: most people will not realize they are sending potentially confidential information to Microsoft.
Perhaps it's no different than Grammarly... But I think you are right that legal departments are going to be all over this.
I built a free ChatGPT chrome extension that integrates with Gmail for better UX: https://chatgptwriter.ai (300k users so far)
ChatGPT hallucinates SVG path attributes. Ask it to make an SVG of a unicorn: it will give you markup that looks okay, but if you look at the values of the paths, it's clearly gibberish.
(SVG is a particularly interesting case because it's XML on the outside, but several attributes are highly structured, esp g.transform and path.d. Path.d is basically the string of a Logo-like programming language. I was specifically looking at these attributes for realism, and didn't find it.)
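For contrast, here's a hand-written example (not GPT output) of well-formed path data. path.d really is a tiny command language, which is why gibberish coordinates stand out once you know how to read them:

    # Writes a minimal SVG whose path.d draws a triangle:
    # M 10 80 (move to), L 50 10 and L 90 80 (lines), Z (close the path).
    svg = """<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
      <path d="M 10 80 L 50 10 L 90 80 Z" fill="none" stroke="black"/>
    </svg>"""

    with open("triangle.svg", "w") as f:
        f.write(svg)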
I don't know whether that is because that is a common way of doing things or whether a previous prompt responded with a bearer token... But it wasn't right.
For me, it's a leaping off point that often saves time if I ask the right question. To your point, you have to be quick to know enough about the API to deduce whether you and Chat GPT are in the same universe.
1) Use ChatGPT in GPT-4 mode. I have found GPT-3 doesn't work the same way.
2) I ask "What APIs does EasyPost have?"
It will respond with 7+ API endpoints
3) I ask "Can you write code in Ruby for the rates API?"
It responds almost perfectly with workable code from my experience in Ruby.
4) Then I ask "Can you give me that in Elixir?"
It responds with something I think is about 90% right. I am not as familiar with it but it seems close.
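For reference, here's roughly what that rates request looks like translated to Python with plain requests. This is a sketch from memory, so treat the payload fields and the auth scheme (EasyPost has historically used HTTP basic auth with the API key as the username) as assumptions to verify against the current EasyPost docs:

    import requests

    API_KEY = "EZTK..."  # your EasyPost test key (placeholder)

    # Creating a shipment returns candidate rates in the response.
    # Field names below are from memory; verify against the docs.
    shipment = {
        "shipment": {
            "from_address": {"zip": "94105", "country": "US"},
            "to_address": {"zip": "10001", "country": "US"},
            "parcel": {"length": 10, "width": 8, "height": 4, "weight": 16},
        }
    }

    resp = requests.post(
        "https://api.easypost.com/v2/shipments",
        json=shipment,
        auth=(API_KEY, ""),  # API key as the basic-auth username
    )
    resp.raise_for_status()
    for rate in resp.json().get("rates", []):
        print(rate["carrier"], rate["service"], rate["rate"])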
I am not trying to replace myself... I am just trying to make my job easier. And this seems to do it.
- As a thesaurus
- What's the name of that "thing" that does "something" - kind of like fuzzy matching
- A starting point for writing particular functions. For example, I wanted a certain string-manipulation function written in C, and it gave me a decent skeleton. However, the results are almost always very inefficient, so I have to optimize them (a sketch of what I mean follows below).
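A toy stand-in for the kind of inefficiency I mean (my case was C; this is Python, and the function is made up for illustration): the skeletons tend to do quadratic work where a linear approach exists.

    def dedupe_chars_naive(s: str) -> str:
        # GPT-style skeleton: the "in" scan plus repeated string
        # concatenation makes this O(n^2).
        out = ""
        for ch in s:
            if ch not in out:
                out += ch
        return out

    def dedupe_chars(s: str) -> str:
        # Linear version: dict preserves insertion order (Python 3.7+).
        return "".join(dict.fromkeys(s))

    assert dedupe_chars_naive("mississippi") == dedupe_chars("mississippi") == "misp"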
Things I've tried, that others seem to be blown away by, that I find useless:
- Rewriting emails or documentation: I see no clarity improvement from ChatGPT rewording what I say, and sometimes information is lost in the process.
- Outliner or idea prompter: I don't see an improvement over just traditional internet search and reading over various articles and books.
For me, its capabilities do not match the marketing and hype. It's basically just a slightly better search engine. All of the above use-cases can be accomplished with some Google-fu. For people who don't know any programming or about using search engine operators, I could see why they might be impressed by it.
I think ChatGPT would be useful for raising an almost infinite number of accusations against your enemies on social media, muddying the water with a deluge of garbage and poisoning every conceivable well with unlimited zeal.
Are your societal purposes remotely at odds with my own? I'll unleash ChatGPT against you with an unrelenting barrage of accusations and insinuations.
I couldn't remember the name of one adult entertainment star. I thought this was where I could finally put ChatGPT to use. It told me anything adult is off-limits. I'm glad that OpenAI can decide what's good and bad for us.
Your pontificating is doing more "deciding what is good and bad for us" (grousing about its inability to identify the porn star you're horny for today and dressing it up as some kind of moral high ground) than it is.
There are plenty of open source LLM and "AI" models, plus research, to build your own. Go select one, train it on the large body of porn out there on the internet, and you'll likely make a fortune from this "missed opportunity" that OpenAI is leaving on the table.
I also asked it for vacation ideas within 200 miles of where I live, with nice cabins, trailer hookups, and outdoor activities for kids. Its response was almost perfect.
I have trouble starting things from scratch, but once a framework exists I'm usually solid and can refine it to where I want it. For me, right now, I think that's where it shines: Giving me a solid starting place to work from. Beats the hell out of sifting through blog entries bloated with SEO filler.
A response from ChatGPT seems somehow more honest, even though it's just an aggregate of the former.
This is so true of GPT's benefit. As an anecdote: we wrote some C++ code involving multiple HTTP servers. While we ultimately wrote the exact code we wanted ourselves, the starting code ChatGPT gave us really helped speed up finishing the core feature in one small coding session.
I think "starting things from scratch" in cases like these can be mentally exhausting when you have to search the web.