As many have said in this thread, most doctors will tell you to go away or give you Wellbutrin (which works poorly, if at all). I feel for your struggle.
Exactly.
I got gatekept with a massive ten-plus-page questionnaire to fill out. Got halfway through the laborious free-form text responses. Came back the next day and none of my work was saved.
Gave up. Haven't ever gone back. Because...
Serious question: who is producing reliable numbers now? The Trump administration is actively suppressing federal reporting and openly threatening to stop collecting and reporting data, which absolutely signals to sycophants and supporters that they should falsify or withhold unflattering data.
This is a truly terrible timeline.
What an LLM cannot do today is almost irrelevant given the tide of change sweeping the industry. Just because an LLM can't do something now doesn't mean it won't be able to tomorrow, once the implementations improve.
"Every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon.
Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.
Lemma: contemporary implementations have almost always already been improved upon, but are unevenly distributed."
but I believe that every such critique points the way to improved AI.
It's pretty easy to imagine any number of ways of incorporating this concern directly, especially in any reasoning chain approach.
Personally, I'd be fond of an eventual Society of Minds in which text produced for non-chatty purposes represents the collaborative, adversarial relationship between various roles, each itself reflexive, including an "editor" and a "product manager" who force intent and clarity... maybe through iteration...
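For concreteness, here's roughly what I mean, as a minimal sketch. It assumes a hypothetical complete(system, prompt) helper wrapping whatever LLM API you use; the role prompts and loop structure are illustrative, not anyone's actual implementation:

```python
# Sketch of the "society of minds" idea: a writer drafts, an "editor" and a
# "product manager" critique adversarially, and the draft is revised until
# every critic signs off or we hit a round limit.
# `complete(system, prompt)` is a hypothetical helper for your LLM API.

ROLES = {
    "editor": "You are a ruthless editor. Critique the draft for clarity and "
              "intent. Reply APPROVED if it needs no changes.",
    "product manager": "You are a product manager. Critique the draft for "
                       "missing intent, scope creep, and unclear goals. "
                       "Reply APPROVED if it needs no changes.",
}


def refine(task: str, complete, max_rounds: int = 3) -> str:
    draft = complete("You are a writer.", task)
    for _ in range(max_rounds):
        # Each role critiques the current draft independently.
        critiques = {
            role: complete(system, f"Task:\n{task}\n\nDraft:\n{draft}")
            for role, system in ROLES.items()
        }
        if all("APPROVED" in c for c in critiques.values()):
            break  # every role signed off
        feedback = "\n\n".join(f"[{role}] {c}" for role, c in critiques.items())
        # The writer revises in light of the adversarial feedback.
        draft = complete(
            "You are a writer revising your own work.",
            f"Task:\n{task}\n\nPrevious draft:\n{draft}\n\n"
            f"Critiques:\n{feedback}\n\nRevise.",
        )
    return draft
```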
I just tested this with my internet connection disabled and it still worked. Since it's doing local processing, I suspect it uses traditional OCR algorithms rather than LLMs.
As the article concludes, LLMs aren't magic, they're just one useful tool to include in your toolbox.
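For anyone curious what "traditional, fully local OCR" looks like in practice, here's a minimal sketch using Tesseract via pytesseract. It assumes the tesseract binary plus the pytesseract and Pillow packages are installed, and it is not the app's actual code, which we can't see:

```python
# Minimal local OCR sketch: no network, no LLM, just classical OCR.
from PIL import Image
import pytesseract


def extract_text(path: str) -> str:
    """Run Tesseract on an image file and return the recognized text."""
    return pytesseract.image_to_string(Image.open(path))


if __name__ == "__main__":
    print(extract_text("screenshot.png"))
```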
Security concerns aside (...) that sounds pretty useful.
> If you inspect the devtools network tab of your browser, you see that everything happens over a single WebSocket to wss://ws.r-universe.dev. The browser is not making the HTTP requests, in fact this would not even be possible because we download the files from a host that does not enable CORS.
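The relay pattern being described, where the browser only ever talks to one socket and the server performs the actual HTTP fetches so CORS on the file host never comes into play, looks roughly like this minimal Python sketch using the websockets package. It's illustrative only, not r-universe's actual implementation (which is presumably JavaScript) or its message format:

```python
# Toy WebSocket relay: the client sends {"id": ..., "url": ...}, the server
# fetches the URL itself and returns the body base64-encoded over the socket.
import asyncio
import base64
import json
import urllib.request

import websockets


async def handle(ws):
    async for message in ws:
        req = json.loads(message)  # e.g. {"id": 1, "url": "https://..."}
        # Do the blocking HTTP fetch off the event loop.
        body = await asyncio.to_thread(
            lambda: urllib.request.urlopen(req["url"]).read()
        )
        await ws.send(json.dumps({
            "id": req["id"],
            "body": base64.b64encode(body).decode(),  # binary-safe payload
        }))


async def main():
    async with websockets.serve(handle, "localhost", 8080):
        await asyncio.Future()  # run until cancelled


if __name__ == "__main__":
    asyncio.run(main())
```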
Is this attack really just "inject obfuscated text into the image... and hope some system interprets this as a prompt"...?