roger_ commented on LM Studio 0.4   lmstudio.ai/blog/0.4.0... · Posted by u/jiqiren
tarruda · 11 days ago
These days I don't feel the need to use anything other than llama.cpp server as it has a pretty good web UI and router mode for switching models.
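For anyone who hasn't tried it: llama-server speaks an OpenAI-compatible HTTP API, so any client that can POST JSON works against it. A minimal sketch in Python, assuming a server already running on the default port (the model name is a placeholder; under router mode, the "model" field picks which model serves the request):

```python
# Minimal sketch: query a locally running llama-server
# (e.g. started with `llama-server -m model.gguf`).
import json
import urllib.request

payload = {
    # Placeholder name; with router mode the server routes the
    # request to whichever loaded model matches this field.
    "model": "qwen2.5-7b-instruct",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```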
roger_ · 11 days ago
MLX support on Macs was the main reason for me.
roger_ commented on I'm 34. Here's 34 things I wish I knew at 21   elliot.my/im-34-heres-34-... · Posted by u/clowes
ap99 · 18 days ago
> Eating meat is quite clearly immoral. Unless it will be detrimental to your health, eat as little as possible.

Carnivorous animals, are they immoral?

roger_ · 18 days ago
Appeal to nature.
roger_ commented on Embassy: Modern embedded framework, using Rust and async   github.com/embassy-rs/emb... · Posted by u/birdculture
roger_ · a month ago
Async embedded is something that's always made sense to me, and I've been waiting a long time for it to happen.

But what's the overhead cost with Embassy?

roger_ commented on ChatGPT Health   openai.com/index/introduc... · Posted by u/saikatsg
ShakataGaNai · a month ago
Unfortunately I don't think that's a good solution. Memories are an excellent feature, and you see them on most similar services now.

Yes, projects have their uses. But as an example: I do Python across many projects and non-projects alike. I don't want to have to tell ChatGPT exactly how I like my Python every time, or with each project. If it were just one or two items like that, fine, I could update its custom instructions. But there are tons of nuances.

The system knowing who I am, what I do for work, what I like, what I don't like, what I'm working on, what I'm interested in... makes it vastly more useful. When I randomly ask ChatGPT "Hey, could I automate this sprinkler?", it knows I use Home Assistant, that I've done XYZ projects, that I prefer Python, and that I like DIY projects to a certain extent but am willing to buy, in which case the gear should be prosumer. Etc., etc. It's more like a real human assistant than a dumb bot.

roger_ · a month ago
I know what you mean, but the issue the parent comment brought up is real and "bad" chats can contaminate future ones. Before switching off memories, I found I had to censor myself in case I messed up the system memory.

I've found a good balance with the global system prompt (with info about me and general preferences) and project level system prompts. In your example, I would have a "Python" project with the appropriate context. I have others for "health", "home automation", etc.
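For illustration, here is roughly what that setup looks like if you drive the API yourself instead of using the ChatGPT UI: a global prompt plus a per-project prompt composed into one system message. A sketch using the OpenAI Python SDK; the prompt texts, model name, and project names are hypothetical, and this is not how ChatGPT projects work internally:

```python
# Sketch of the "global + per-project system prompt" pattern.
from openai import OpenAI

GLOBAL_PROMPT = "You are assisting Roger. He prefers concise, direct answers."
PROJECT_PROMPTS = {
    "python": "Code answers use Python 3, type hints, and pytest.",
    "health": "Cite sources and flag anything that needs a doctor's input.",
}

def ask(project: str, question: str) -> str:
    # Each chat starts from the same curated context instead of
    # whatever the memory feature happened to accumulate.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"{GLOBAL_PROMPT}\n{PROJECT_PROMPTS[project]}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask("python", "How should I structure a small CLI tool?"))
```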

roger_ commented on ChatGPT Health   openai.com/index/introduc... · Posted by u/saikatsg
paulgrimes1 · a month ago
Here’s something: my ChatGPT quietly assumed I had ADHD for around nine months, up until October 2025. I don’t suffer from ADHD. I only found out through an answer that began “As you have ADHD...”

I had it stop right there and asked it to tell me exactly where it got this information: the date, the title of the chat, the exact moment it took this data on as an attribute of mine. It was unable to specify any of it, aside from “nine months previous.” It continued to insist I had ADHD and that I told it I did, but was unable to reference exactly when or where.

I asked “do you think it’s dangerous that you have assumed I have a medical / neurological condition for this long? What if you gave me incorrect advice based on this assumption?” to which it answered a paraphrased mea culpa, offered to forget the attribute, and moved the conversation on.

This is a class action waiting to happen.

roger_ · a month ago
Disable memories so each chat is independent.

If you want chats to share info, then use a project.

roger_ commented on ChatGPT Health   openai.com/index/introduc... · Posted by u/saikatsg
nozzlegear · a month ago
> Recently I snapped a photo of an obscure instrument screen after taking a test and was able to get more useful information than what my doctor eventually provided (“nothing to worry about”, etc.) ChatGPT was able to reference papers and do data analysis which was pretty amazing, right from my phone (e.g fitting my data to a model from a paper and spitting out a plot).

If you don't mind sharing, what kind of useful information is ChatGPT giving you based off of a photo that your doctor didn't give you? Could you have asked the doctor about the data on the instrument and gotten the same info?

I'm mildly interested in this kind of thing, but I have severe health anxiety and do not need a walking hypochondria-sycophant in my pocket. My system prompts tell the LLMs not to give me medical advice or indulge in diagnosis roulette.

roger_ · a month ago
In one case it was a urinary flow test (uroflowmetry). The results go to a lab and then the doctor gets the summary. ChatGPT was able to identify the likely issue, its prevalence, etc., and I could educate myself about treatment and risks before seeing a doctor. Papers gave me distributions of flow by age, sex, etc., so I knew my result was out of range.

In another case I uploaded a CSV of CGM data, and it analyzed the data and identified trends (e.g. Saturday-morning blood sugar spikes). All in five minutes on my phone.
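That CGM analysis is the sort of thing a few lines of pandas can reproduce locally. A rough sketch, assuming a CSV with hypothetical "timestamp" and "glucose_mg_dl" columns:

```python
# Rough sketch: find day-of-week/time-of-day glucose patterns in a
# CGM export (file name and column names are assumptions).
import pandas as pd

df = pd.read_csv("cgm_export.csv", parse_dates=["timestamp"])
df["weekday"] = df["timestamp"].dt.day_name()
df["hour"] = df["timestamp"].dt.hour

# Mean glucose by hour and weekday exposes recurring patterns,
# e.g. a Saturday-morning spike.
pivot = df.pivot_table(index="hour", columns="weekday",
                       values="glucose_mg_dl", aggfunc="mean")
print(pivot.round(1))

saturday_morning = pivot.loc[6:11, "Saturday"]
print(f"Saturday 06:00-11:00 mean: {saturday_morning.mean():.0f} mg/dL")
```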

roger_ commented on ChatGPT Health   openai.com/index/introduc... · Posted by u/saikatsg
piva00 · a month ago
> But I don’t know if I should be denied access because of those people.

That's the majority of people, though. If you really think that, I assume you wouldn't have a problem with needing to be licensed to have this kind of access, right?

roger_ · a month ago
If they pepper it with warnings and add safeguards, then I'm fine.

I think they can design it to minimize misinformation or at least blind trust.

roger_ commented on ChatGPT Health   openai.com/index/introduc... · Posted by u/saikatsg
SiempreViernes · a month ago
You're supposed to share it with a doctor you trust; if nobody qualified asked for it, it's probably because it's no longer relevant.
roger_ · a month ago
I’ve had mixed experiences with doctors. Often they glance at my chart for two minutes before an appointment, and that’s the extent of their concern for me.

I’ve also lived in places where I don’t have a choice in doctor.

roger_ commented on ChatGPT Health   openai.com/index/introduc... · Posted by u/saikatsg
websiteapi · a month ago
If an insight led you or a family member to being misdiagnosed and crippled, would you just say it's their fault or your own? If it were a doctor, would you have the same opinion?
roger_ · a month ago
I understand enough about these systems to know they’re not perfect, but I agree some people might be misled.

But I don’t know if I should be denied access because of those people.
