But what's the overhead price with Embassy?
Yes, projects have their uses. But as an example - I do python across many projects and non-projects alike. I don't want to need to tell ChatGPT exactly how I like my python each and every time, or with each project. If it were just one or two items like that, fine, I could update its custom instruction personalization. But there are tons of nuances.
The system knowing who I am, what I do for work, what I like, what I don't like, what I'm working on, what I'm interested in... makes it vastly more useful. When I randomly ask ChatGPT "Hey, could I automate this sprinkler" it knows I use Home Assistant, I've done XYZ projects, I prefer python, I like DIY projects to a certain extent but am willing to buy, in which case recommend prosumer gear. Etc. Etc. It's more like a real human assistant than a dumb-bot.
I've found a good balance with the global system prompt (with info about me and general preferences) and project level system prompts. In your example, I would have a "Python" project with the appropriate context. I have others for "health", "home automation", etc.
I had it stop right there, and asked it to tell me exactly where it got this information; the date, the title of the chat, the exact moment it took this data on as an attribute of mine. It was unable to specify any of it, aside from nine months previous. It continued to insist I had ADHD, and that I told it I did, but was unable to reference exactly when/where.
I asked “do you think it’s dangerous that you have assumed I have a medical / neurological condition for this long? What if you gave me incorrect advice based on this assumption?” to which it answered a paraphrased mea culpa, offered to forget the attribute, and moved the conversation on.
This is a class action waiting to happen.
If you want chats to share info, then use a project.
If you don't mind sharing, what kind of useful information is ChatGPT giving you based off of a photo that your doctor didn't give you? Could you have asked the doctor about the data on the instrument and gotten the same info?
I'm mildly interested in this kind of thing, but I have a severe health anxiety and do not need a walking hypochondria-sycophant in my pocket. My system prompts tell the LLMs not to give me medical advice or indulge in diagnosis roulette.
In another case I uploaded a CSV of CGM data, and it analyzed it and identified trends (e.g. Saturday morning blood sugar spikes). All in five minutes on my phone.
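For what it's worth, the kind of analysis it likely ran is a few lines of pandas: group readings by day of week and time window, then compare means. This is just a sketch; the column names ("timestamp", "glucose_mg_dl") and the inline sample data are made up, not from any real CGM export format.

```python
import io
import pandas as pd

# Stand-in for an exported CGM file; real exports will have different columns.
csv = io.StringIO(
    "timestamp,glucose_mg_dl\n"
    "2024-06-01 08:00,165\n"   # Saturday morning
    "2024-06-01 20:00,110\n"   # Saturday evening (excluded from morning stats)
    "2024-06-03 08:00,105\n"   # Monday morning
    "2024-06-08 08:15,170\n"   # next Saturday morning
    "2024-06-10 08:00,100\n"   # next Monday morning
)
df = pd.read_csv(csv, parse_dates=["timestamp"])
df["day"] = df["timestamp"].dt.day_name()
# Flag readings taken between 6:00 and 11:59 as "morning"
df["morning"] = df["timestamp"].dt.hour.between(6, 11)

# Mean morning glucose per day of week, highest first
morning_means = df[df["morning"]].groupby("day")["glucose_mg_dl"].mean()
print(morning_means.sort_values(ascending=False))
```

With this toy data, Saturday mornings average well above Mondays, which is exactly the kind of pattern the model surfaced.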
That's the majority of people, though. If you really think that, I assume you wouldn't have a problem with needing to be licensed to have this kind of access, right?
I think they can design it to minimize misinformation or at least blind trust.
I’ve also lived in places where I don’t have a choice in doctor.
But I don’t know if I should be denied access because of those people.