Readit News
34679 commented on Deploying DeepSeek on 96 H100 GPUs   lmsys.org/blog/2025-05-05... · Posted by u/GabrielBianconi
dragonslayer56 · 9 hours ago
” Our implementation, shown in the figure above, runs on 12 nodes in the Atlas Cloud, each equipped with 8 H100 GPUs.”

Maybe the cost of renting?

34679 · 9 hours ago
I'm confused because I wouldn't consider a cloud implementation to be local.
34679 commented on Updates to Consumer Terms and Privacy Policy   anthropic.com/news/update... · Posted by u/porridgeraisin
34679 · 9 hours ago
It has a toggle for opting out, but the buttons available are "Accept" and "Not Now". So, if I toggle off but click "Accept", does that accept what I toggled or accept data sharing? If I click "Not Now", do they leave it set at the default Opt-in?

I hate this shit and I'm cancelling now.

https://imgur.com/a/oCw5eEp

34679 commented on Grok Code Fast 1   x.ai/news/grok-code-fast-... · Posted by u/Terretta
eterm · 9 hours ago
It depends how fast.

If an LLM is often going to be wrong anyway, then being able to try prompts quickly and iterate on them could be more valuable than a slow, higher-quality output.

Ad absurdum: if it could ingest and work on an entire project in milliseconds, then it has much greater value to me than a process that might take a day to do the same, even if the likelihood of success is also strongly affected.

It simply enables a different method of interactive working.

Or it could supply 3 different suggestions in-line while working on something, rather than a process which needs to be explicitly prompted and waited on.

Latency can have critical impact on not just user experience but the very way tools are used.

Now, will I try Grok? Absolutely not, but that's a personal decision due to not wanting anything to do with X, rather than a purely rational decision.

34679 · 9 hours ago
>If an LLM is often going to be wrong anyway, then being able to try prompts quickly and iterate on them could be more valuable than a slow, higher-quality output.

Before MoE was a thing, I built what I called the Dictator, which was one strong model working with many weaker ones to achieve a similar result as MoE, but all the Dictator ever got was Garbage In, so guess what came out?
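The comment doesn't describe the Dictator's internals, but the general pattern it names (one strong model arbitrating drafts from several weak ones) can be sketched roughly as below. All function names and the selection heuristic are purely hypothetical stand-ins, not the commenter's actual design:

```python
# Hypothetical sketch of a "dictator" ensemble: several weak models each
# produce a draft answer, and one strong model picks (or synthesizes) a winner.

def weak_model(prompt, seed):
    # Stand-in for a small/cheap model; in practice this would be an API or
    # local-inference call. Garbage drafts in -> garbage choices out.
    return f"draft-{seed}: answer to {prompt!r}"

def dictator(drafts):
    # Stand-in for the strong model. Here it just picks the longest draft;
    # a real system would have the strong model score or rewrite the drafts.
    return max(drafts, key=len)

def answer(prompt, n_weak=5):
    drafts = [weak_model(prompt, s) for s in range(n_weak)]
    return dictator(drafts)

print(answer("What causes tides?"))
```

Note the failure mode the comment describes: the dictator can only be as good as the best draft it is handed, so weak drafts cap the whole ensemble.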

34679 commented on Deploying DeepSeek on 96 H100 GPUs   lmsys.org/blog/2025-05-05... · Posted by u/GabrielBianconi
34679 · 9 hours ago
"By deploying this implementation locally, it translates to a cost of $0.20/1M output tokens"

Is that just the cost of electricity, or does it include the cost of the GPUs spread out over their predicted lifetime?
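That distinction matters a lot: a back-of-envelope sketch shows the $/1M-token figure can swing by an order of magnitude depending on whether the hourly GPU cost counts electricity alone or also hardware depreciation. Every number below is an illustrative assumption, not a figure from the article:

```python
# Back-of-envelope: amortized $/1M output tokens for a self-hosted cluster.
# All inputs are assumptions for illustration only.

def cost_per_million_tokens(gpu_count, hourly_cost_per_gpu, cluster_tokens_per_sec):
    """Dollar cost per 1M output tokens.

    hourly_cost_per_gpu should bundle whatever you count as "cost":
    electricity alone, or electricity plus hardware depreciation.
    """
    cluster_cost_per_hour = gpu_count * hourly_cost_per_gpu
    tokens_per_hour = cluster_tokens_per_sec * 3600
    return cluster_cost_per_hour / tokens_per_hour * 1_000_000

# Assumed: ~0.7 kW per H100 at $0.10/kWh ~= $0.07/GPU-hour, 20k tok/s cluster-wide.
elec_only = cost_per_million_tokens(96, 0.07, 20_000)

# Assumed: a ~$25k GPU depreciated over 3 years adds ~$0.95/GPU-hour.
with_depreciation = cost_per_million_tokens(96, 0.07 + 0.95, 20_000)

print(f"electricity only:  ${elec_only:.2f}/1M tokens")
print(f"with depreciation: ${with_depreciation:.2f}/1M tokens")
```

Under these made-up inputs, electricity alone lands well under the quoted $0.20 while depreciation pushes it well over, which is exactly why the question is worth asking.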

34679 commented on Updates to Consumer Terms and Privacy Policy   anthropic.com/news/update... · Posted by u/porridgeraisin
cactca · 10 hours ago
This! Any LLM provider that monitors chat/api history for ‘abuse’ towards the model is considering using user data for training.

An Effective Altruism ethos provides moral/ethical cover for trampling individual privacy and property rights. Consider their recent decision to provide services for military projects.

As others have pointed out, Claude was trained using data expressly forbidden for commercial reuse.

The only feedback Anthropic will heed is financial, and the impact must be large enough to destroy their investors' willingness to cover the losses. This type of financial feedback can come from three places: termination of a large fraction of their B2B contracts; software devs organizing a persistent mass migration to an open-source model for software development; or a mass filing of data deletion requests from California and EU residents and corporations, repeated every week. The first two are unlikely to happen in the next three months.

34679 · 9 hours ago
Maybe I'll use the remainder of my subscription time to help improve Void. It's already pretty good.

https://voideditor.com/

https://github.com/voideditor/void

34679 commented on Updates to Consumer Terms and Privacy Policy   anthropic.com/news/update... · Posted by u/porridgeraisin
c080 · 11 hours ago
Honest question: what is the value of training on user chats? The answers are already provided by your LLM!
34679 · 11 hours ago
To start with, I'm sure there's something to be learned from all the times I've responded to an LLM with "Bad bot".
34679 commented on Updates to Consumer Terms and Privacy Policy   anthropic.com/news/update... · Posted by u/porridgeraisin
34679 · 11 hours ago
I'd bet this is related to their recent decision to boot people for being "abusive" to Claude. It now seems that was an attempt to keep their training data friendly.
34679 commented on The “Wow!” signal was likely from extraterrestrial source, and more powerful   iflscience.com/the-wow-si... · Posted by u/toss1
venusenvy47 · 2 days ago
I'd like to watch this documentary. Do you know the year and/or channel where you saw it? The antenna is focused with a tremendous amount of gain towards a spot in the sky, and provides a very significant amount of rejection to signals in all other directions. I can't see how you would get a signal at 1.42 GHz from a watch or flashlight. Harmonics from something like a walkie talkie only occur when the radio is transmitting, and they would spread in bandwidth at each successive harmonic. It would have to be an extremely narrow fundamental frequency, with no audio signal on it, to get a signal with less than 10 kHz at 1.42 GHz.
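The harmonic-spreading point can be illustrated with simple arithmetic: multiplying a carrier frequency by n also multiplies its modulation deviation by n, so the n-th harmonic is roughly n times wider than the fundamental. The band and bandwidth figures below are assumed examples, not measurements from the documentary:

```python
# Illustrating why harmonics make a poor candidate for the narrow Wow! signal:
# the n-th harmonic of a carrier f0 lands at n*f0, and any modulation
# bandwidth B on the fundamental widens to roughly n*B up there.
# Example numbers are assumptions, not measurements.

HYDROGEN_LINE_MHZ = 1420.4058  # the 1.42 GHz line the receiver listened on

def nth_harmonic(f0_mhz, bw_khz, n):
    """Frequency (MHz) and approximate bandwidth (kHz) of the n-th harmonic."""
    return n * f0_mhz, n * bw_khz

# A 2 m band handheld at 146 MHz with a ~10 kHz FM voice signal:
f_mhz, bw_khz = nth_harmonic(146.0, 10.0, 10)
print(f_mhz, bw_khz)  # 1460.0 100.0 -- near, but not on, 1420.4 MHz, and 10x wider
```

This is the commenter's point in numbers: to land a sub-10 kHz signature at 1.42 GHz, the fundamental would need to be extremely narrow and unmodulated.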
34679 · a day ago
https://m.imdb.com/title/tt7928816/

I don't remember where I watched it, but the 2nd and 3rd links from a kagi search were for Prime and Apple TV.

34679 commented on The “Wow!” signal was likely from extraterrestrial source, and more powerful   iflscience.com/the-wow-si... · Posted by u/toss1
lelanthran · 2 days ago
> The signal itself looks like a parabola when graphed, gaining in intensity and then falling off at the same rate. Exactly what you'd expect from someone walking across the field in front of it.

Also exactly what you'd expect if aliens were beaming a search signal into their sky, no?

34679 · 2 days ago
Yes, but one is far more likely.
34679 commented on The “Wow!” signal was likely from extraterrestrial source, and more powerful   iflscience.com/the-wow-si... · Posted by u/toss1
andrecarini · 2 days ago
Would you mind expanding on your theory more?
34679 · 2 days ago
It was after watching a documentary called "WOW Signal". The receiver they built was along the edge of a field, and it's designed to pick up extremely weak variations in electrical signals/radio waves. They go into great detail about how sensitive it is. The signal itself looks like a parabola when graphed, gaining in intensity and then falling off at the same rate. Exactly what you'd expect from someone walking across the field in front of it. And if I remember correctly, the signal was more about how much it differed from what was expected, not necessarily how intense it was. My thinking is that if it can pick up on the variations in signal from a star system light-years away, it would also indicate on a Timex watch (or flashlight) a dozen meters away.

u/34679

Karma: 3229 · Cake day: March 20, 2019