Readit News
calaphos commented on We may not like what we become if A.I. solves loneliness   newyorker.com/magazine/20... · Posted by u/defo10
stdvit · a month ago
> Even in a world brimming with easy distractions—TikTok, Pornhub, Candy Crush, Sudoku—people still manage to meet for drinks, work out at the gym, go on dates, muddle through real life.

They actually don't. Everything from dating and fitness to manufacturing and politics is in decline in terms of activity, and even more so in effect and understanding. You can't convince (enough) people anymore that it even matters, as many no longer have the capacity for it. And this isn't even new at this point.

calaphos · a month ago
But these social third places have also shifted. Younger generations aren't going out as much, but playing video games with close friends, for example, is very popular.
calaphos commented on Batch Mode in the Gemini API: Process More for Less   developers.googleblog.com... · Posted by u/xnx
fantispug · 2 months ago
Yes, this seems to be a common capability - Anthropic and Mistral have something very similar as do resellers like AWS Bedrock.

I guess it lets them better utilise their hardware in quiet times throughout the day. It's interesting they all picked 50% discount.

calaphos · 2 months ago
Inference throughput scales really well with larger batch sizes (at the cost of latency) due to rising arithmetic intensity and the fact that it's almost always memory-BW limited.
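A rough roofline-style sketch of that effect: in decode, the model weights are streamed from memory once per step regardless of batch size, while compute scales with the batch, so throughput rises until the workload becomes compute-bound. All hardware numbers below are illustrative assumptions, not real specs.

```python
# Toy roofline model of LLM decode; all constants are assumptions.
WEIGHT_BYTES = 14e9        # assumed: ~7B-param model in fp16
FLOPS_PER_TOKEN = 14e9     # assumed: ~2 FLOPs per parameter per token
MEM_BW = 2e12              # assumed: 2 TB/s HBM bandwidth
PEAK_FLOPS = 300e12        # assumed: 300 TFLOP/s compute

def step_time(batch):
    # Weights cross the memory bus once per decode step, regardless of
    # batch size; compute cost scales linearly with the batch.
    t_mem = WEIGHT_BYTES / MEM_BW
    t_compute = batch * FLOPS_PER_TOKEN / PEAK_FLOPS
    return max(t_mem, t_compute)

for b in (1, 8, 64):
    print(f"batch={b:3d}  tokens/s ~ {b / step_time(b):,.0f}")
```

With these numbers the step time is identical for batch 1 and batch 8 (both memory-bound), so throughput grows almost linearly with the batch until the compute roof is hit.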
calaphos commented on I extracted the safety filters from Apple Intelligence models   github.com/BlueFalconHD/a... · Posted by u/BlueFalconHD
ghxst · 2 months ago
If the training data was "censored" by leaving out certain information, is there any practical way to inject that missing data after the model has already been trained?
calaphos · 2 months ago
If it's just filtered out of the training sets, adding the information as context should work fine; after all, this is exactly how o3, Gemini 2.5, and co. deal with information that is newer than their training data cutoff.
calaphos commented on Lieferando.de has captured 5.7% of restaurant related domain names   mondaybits.com/lieferando... · Posted by u/__natty__
mkmk · 3 months ago
This is exactly the stuff that prevents innovation and drives entrepreneurs out of those economies. Physical address?!
calaphos · 3 months ago
Somewhere you're reachable for any legal purposes; in Germany this sadly remains a physical address.

There are various services that offer a 'virtual' address with digital forwarding of letters for less than 10 EUR/month, so it's not an insurmountable obstacle.

calaphos commented on 'I paid for the whole GPU, I am going to use the whole GPU'   modal.com/blog/gpu-utiliz... · Posted by u/mooreds
twoodfin · 4 months ago
You’d worry about 100% CPU because even if the OS was successfully optimizing for throughput (as Linux is very good at), latency/p99 is certain to suffer as spare cycles disappear.

That’s not a concern with typical GPU workloads, which are batch/throughput-oriented.

calaphos · 4 months ago
There's still a throughput/latency tradeoff curve, at least for any sort of interactive model.

That's one of the reasons inference providers sell batch discounts.

calaphos commented on Nvidia on NixOS WSL – Ollama up 24/7 on your gaming PC   yomaq.github.io/posts/nvi... · Posted by u/fangpenlin
emsign · 5 months ago
Running models at home seems like a waste of money, especially while cloud inference is heavily subsidized by dumb money.
calaphos · 5 months ago
There are also big efficiency gains from batching multiple requests, making clouds inherently more cost-effective for normal use cases.

Way better utilization of expensive hardware as well, of course.

calaphos commented on The Llama 4 herd   ai.meta.com/blog/llama-4-... · Posted by u/georgehill
vintermann · 5 months ago
I understand they're only optimizing for load distribution, but have people been trying to disentangle what the various experts learn?
calaphos · 5 months ago
Mixture of experts involves a trained router component that routes tokens to specific experts depending on the input, but without any loss terms enforcing load distribution, this tends to collapse during training, with most tokens getting routed to just one or two experts.
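A minimal sketch of such a load-balancing term, in the style of the Switch Transformer auxiliary loss: penalize the product of the fraction of tokens dispatched to each expert and the mean router probability for that expert. All shapes and constants here are illustrative assumptions.

```python
import numpy as np

# Toy top-1 softmax router over a few experts; dimensions are assumptions.
rng = np.random.default_rng(0)
num_experts, num_tokens, d = 4, 256, 16

tokens = rng.normal(size=(num_tokens, d))
router_w = rng.normal(size=(d, num_experts))

logits = tokens @ router_w
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
assignment = probs.argmax(axis=1)          # top-1 routing decision

# f[i]: fraction of tokens dispatched to expert i
# p[i]: mean router probability assigned to expert i
f = np.bincount(assignment, minlength=num_experts) / num_tokens
p = probs.mean(axis=0)

# Uniform routing gives a value of ~1; collapse onto one expert drives
# it up, so adding this to the training loss discourages collapse.
aux_loss = num_experts * np.sum(f * p)
print(f"load fractions: {f}, aux loss: {aux_loss:.3f}")
```

Without a term like this in the objective, nothing stops the router from sending nearly everything to its initially-favored expert.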
calaphos commented on Steam Networks   worksinprogress.co/issue/... · Posted by u/herbertl
nine_k · 6 months ago
In most places with district heating, the heat that goes to the heating system is waste heat which turbines of a nearby power plant cannot use. This applies to fuel-fired and nuclear power plants alike. (Keywords: Carnot cycle efficiency.)

So using electricity for heating would be just throwing free heat away. Heat distribution networks are of course not free though.

Electric heating would be an economical win if the electricity could be harvested by solar and wind generators, and stored. Storing solar heat directly is much trickier.

calaphos · 6 months ago
It's not quite waste heat, because the cold side of a thermal power plant needs to be colder than district-heating temperatures for best efficiency. There is some loss in electrical efficiency compared to non-cogeneration plants, but the combined efficiency is a lot higher.
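The Carnot limit makes the tradeoff concrete: rejecting heat at district-heating temperature instead of ambient costs a few points of electrical efficiency, while most of the rejected heat becomes saleable. The temperatures below are assumed round numbers, not data for any particular plant.

```python
# Illustrative Carnot-limit comparison; all temperatures are assumptions.
def carnot(t_hot_c, t_cold_c):
    # Carnot efficiency: 1 - T_cold / T_hot, in kelvin.
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return 1 - t_cold / t_hot

T_STEAM = 550.0                           # assumed turbine inlet temp, degC
eta_condensing = carnot(T_STEAM, 30.0)    # cold side: river / cooling tower
eta_chp = carnot(T_STEAM, 90.0)           # cold side: district heating loop

print(f"condensing plant Carnot limit: {eta_condensing:.1%}")
print(f"CHP plant Carnot limit:        {eta_chp:.1%}")
```

The electrical gap is modest (roughly 63% vs. 56% at these assumed temperatures), while the CHP plant can deliver most of its rejected heat to the network, so the combined electric-plus-heat efficiency comes out far ahead.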
calaphos commented on Harder Drive: hard drives we didn't want, or need (2022) [video]   tom7.org/harder/... · Posted by u/pabs3
Waterluvian · a year ago
The idea of buffering data by transmitting it somewhere far, bouncing it off a moon or whatnot, and using that distance of radio waves as your memory is my favourite thing ever.
calaphos · a year ago
Used to be a common thing for storing analog signals in the past :)

https://en.m.wikipedia.org/wiki/Delay-line_memory
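The scheme can be modeled as a fixed-length circular buffer: bits only exist "in flight" and must be re-injected each time they emerge from the far end. A toy sketch (the class and sizes are hypothetical, just to show the recirculation idea):

```python
from collections import deque

class DelayLine:
    """Toy delay-line memory: data lives only while in transit."""

    def __init__(self, length_bits):
        # The line holds exactly `length_bits` bits in flight at once.
        self.line = deque([0] * length_bits, maxlen=length_bits)

    def tick(self, write_bit=None):
        """Advance one bit time; recirculate unless a new bit is written."""
        out = self.line.popleft()
        self.line.append(out if write_bit is None else write_bit)
        return out

dl = DelayLine(8)
for bit in (1, 0, 1, 1, 0, 0, 1, 0):      # serially write one byte
    dl.tick(write_bit=bit)
readback = [dl.tick() for _ in range(8)]   # byte re-emerges one circulation later
print(readback)
```

The constant refresh on every circulation is exactly why these memories were serial-access only: you had to wait for your bit to come around.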

calaphos commented on Commercial perovskite solar modules at SNEC 2024 trade show   pv-magazine.com/2024/06/1... · Posted by u/doener
tromp · a year ago
> Yu said the perovskite modules cost about CNY 1.4 ($0.19)/W. He said perovskites could be manufactured more affordably than this, and claimed that once the company's 1 GW commercial line is fully operational, costs could fall to $0.11/W.

How does that compare to traditional solar cells?

calaphos · a year ago
Apparently mainstream Chinese silicon PV modules are already a bit cheaper, at ~0.14 USD/W right now.

The article doesn't discuss efficiencies, but production perovskite modules seem to be slightly less efficient than their silicon counterparts, which will affect downstream costs like land and mounting hardware a bit.
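A back-of-envelope way to see the efficiency effect: area-dependent balance-of-system costs (land, racking, wiring) scale inversely with module efficiency. The module prices come from the thread; the BOS figure and efficiencies are assumptions for illustration only.

```python
# Back-of-envelope $/W comparison; only the module prices are from the
# article/thread, everything else is an assumption.
perovskite_module = 0.19   # $/W (article figure)
silicon_module = 0.14      # $/W (mainstream Chinese modules)

bos_area_cost = 0.10       # assumed area-dependent BOS $/W at silicon efficiency
eff_silicon = 0.22         # assumed module efficiency
eff_perovskite = 0.19      # assumed (slightly lower) module efficiency

# Lower efficiency -> more area per watt -> proportionally higher BOS cost.
silicon_system = silicon_module + bos_area_cost
perovskite_system = perovskite_module + bos_area_cost * eff_silicon / eff_perovskite

print(f"silicon system:    ${silicon_system:.3f}/W")
print(f"perovskite system: ${perovskite_system:.3f}/W")
```

So even at the projected $0.11/W module cost, the efficiency gap would claw back part of the advantage at the system level.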

u/calaphos

Karma: 284 · Cake day: November 23, 2018