Readit News
tartrate commented on iPhone Air   apple.com/newsroom/2025/0... · Posted by u/excerionsforte
sgustard · 4 months ago
Good to know! The fine print:

As of September 9, 2025, hypertension notifications are currently under FDA review and expected to be cleared this month, with availability on Apple Watch Series 9 and later and Apple Watch Ultra 2 and later. The feature is not intended for use by people under 22 years old, those who have been previously diagnosed with hypertension, or pregnant persons.

tartrate · 4 months ago
> [hypertension notifications] is not intended for use by people [...] who have been previously diagnosed with hypertension

Sounds a bit ironic but I guess it's for legal reasons.

tartrate commented on Apple Hearing Study shares preliminary insights on tinnitus   apple.com/newsroom/2024/0... · Posted by u/mgh2
lalalandland · a year ago
I think I suffer from this. My theory: there are muscles in the ear canal that try to modulate the sound, and those muscles tense up and cause issues. I also have sore muscles that get a lot better from magnesium supplements, and the tinnitus also gets slightly better from taking them. (It gets a lot worse if I stop.)
tartrate · a year ago
How quickly does magnesium work, and how quickly does it get worse if you stop taking it?
tartrate commented on Perplexica: Open-source Perplexity alternative   github.com/ItzCrazyKns/Pe... · Posted by u/sean_pedersen
michelsedgh · 2 years ago
Actually I loved it. I don't think they have any grounds to sue. It's different and close enough. Also, they wouldn't sue a project on GitHub; if they do, they show their faces and it's worse for them. Also, many forks will happen and they'd have to sue them all. Worst case, you change the name of the repo. That's the power of open source ;)
tartrate · 2 years ago
Isn't Yuzu a good counter example?
tartrate commented on GPT-4o's Memory Breakthrough – Needle in a Needlestack   nian.llmonpy.ai/... · Posted by u/parrt
tartrate · 2 years ago
Are there any prompts/tests about recalling multiple needles (spread out) at once?

For example, each needle could be a piece to a logic puzzle.
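A minimal sketch of what such a multi-needle test could look like (hypothetical, not taken from the linked benchmark): scatter several related facts through filler text so the model must recall all of them at once to answer.

```python
import random

FILLER = "The quick brown fox jumps over the lazy dog. "

# Each needle is one piece of a simple logic puzzle; no single needle
# suffices to answer the final question.
needles = [
    "Fact: Alice is taller than Bob.",
    "Fact: Bob is taller than Carol.",
    "Fact: Carol is taller than Dave.",
]

def build_haystack(needles, filler_repeats=200, seed=0):
    """Insert each needle at a random position among filler sentences."""
    rng = random.Random(seed)
    chunks = [FILLER] * filler_repeats
    for needle in needles:
        chunks.insert(rng.randrange(len(chunks)), needle + " ")
    return "".join(chunks)

prompt = build_haystack(needles) + "\nQuestion: who is the tallest person?"
# Answering "Alice" requires combining all three spread-out needles.
```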

tartrate commented on DuckDuckGo AI Chat   duckduckgo.com/?q=DuckDuc... · Posted by u/maltalex
ziddoap · 2 years ago
>is this substantively different from how search queries are handled when DuckDuckGo forwards them to Bing?

No, it is the exact same business model but applied to AI chat instead of search.

tartrate · 2 years ago
In other words, a lot of fuss about nothing.
tartrate commented on Mixtral 8x22B   mistral.ai/news/mixtral-8... · Posted by u/meetpateltech
brokensegue · 2 years ago
Isn't equating active parameters with cost a little unfair since you still need full memory for all the inactive parameters?
tartrate · 2 years ago
Well, since active parameters drive inference speed, you can serve more requests in less time, so you need less concurrent capacity.
tartrate commented on There’s a 30-year old dead Rabbit in Seven Sisters tube station   ianvisits.co.uk/articles/... · Posted by u/edward
tartrate · 2 years ago
Sigh, clickbait. No, not a real rabbit.
tartrate commented on DBRX: A new open LLM   databricks.com/blog/intro... · Posted by u/jasondavies
pandastronaut · 2 years ago
Even accounting for the axis starting at 30%, the MMLU graph is false: all four bars are at the wrong heights. Even their own 73.7% bar is not at the right height, and the Mixtral 71.4% bar sits below the 70% mark on the axis. This is exactly the kind of marketing trick that makes me avoid a provider / publisher. I can't build trust this way.
tartrate · 2 years ago
Seems fixed now
tartrate commented on DBRX: A new open LLM   databricks.com/blog/intro... · Posted by u/jasondavies
jjgo · 2 years ago
> "On HumanEval, DBRX Instruct even surpasses CodeLLaMA-70B Instruct, a model built explicitly for programming, despite the fact that DBRX Instruct is designed for general-purpose use (70.1% vs. 67.8% on HumanEval as reported by Meta in the CodeLLaMA blog)."

To be fair, they do compare to it in the main body of the blog. It's just probably misleading to compare to CodeLLaMA on non coding benchmarks.

tartrate · 2 years ago
Which non-coding benchmark?
tartrate commented on DBRX: A new open LLM   databricks.com/blog/intro... · Posted by u/jasondavies
hintymad · 2 years ago
Thanks! Why do they not focus on hosting other open models then? I suspect other models will soon catch up with their advantages in faster inference and better benchmark results. That said, maybe the advantage is aligned interests: they want customers to use their platforms, so they can keep their models open. In contrast, Mistral removed their commitment to open source as they found a potential path to profitability.
tartrate · 2 years ago
> Why do they not focus on hosting other open models then?

They do host other open models as well (pay-per-token).

u/tartrate
Karma: 307 · Joined: January 2, 2012