Readit News
berndi commented on Ukraine destroys more than 40 military aircraft in drone attack deep in Russia   npr.org/2025/06/01/nx-s1-... · Posted by u/consumer451
jqpabc123 · 9 months ago
I'm curious how they managed to control the drones from such a distance.

I'll bet Russia is curious too.

berndi · 9 months ago
Someone claimed they used fiber optic drones [1]. So perhaps the drones were connected to the trucks via optical fibers and the trucks carried the modems. That way, jamming over the airbases would have had no effect.

[1] https://nitter.net/bayraktar_1love/status/192915556386414634...

berndi commented on Gene Hackman died of heart disease, his wife died of hantavirus 1 week earlier   wfsb.com/2025/03/07/inves... · Posted by u/howard941
duncancarroll · a year ago
This feels very hand wavey to me.

So she felt progressively ill and then died, all without ever seeking medical attention? I suppose they assume it was acute / that she collapsed, but it still seems unusual. She was only 65, remember.

But then Gene's just walking around the house for a week? With his dead wife in the bathroom and a barking dog in a crate??

I haven't dealt with Alzheimer's but you're telling me it's possible to be functional enough to eat and sleep for a week, but not know that your wife is dead and the barking dog needs food and water? And you don't ever seek help or outside assistance?

And then, coincidentally, you also die of a heart attack.

Just seems far fetched to me.

Edit: I guess at 95 he might have mobility issues, so maybe he couldn't call for help? Idk still seems odd.

berndi · a year ago
> you're telling me it's possible to be functional enough to eat and sleep for a week, but not know that your wife is dead and the barking dog needs food and water

Absolutely. In the late stages of Alzheimer's you're essentially a vegetable, but basic bodily functions still work to some degree.

berndi commented on US airlines transported passengers over two light-years since the last crash   ourworldindata.org/us-air... · Posted by u/sohkamyung
_heimdall · a year ago
> taking the number of crashes per kilometer, and multiplying by the number of kilometers you'll drive for. A trip of 5000km is less safe than a 5km one.

When comparing a difference of 1000x, sure, it's probably safer.

Your premise is wrong though, it isn't common sense to take a generalized statistic like deaths per km and extrapolate that to be the correct estimate for any particular trip.

The statistic loses all context: differences in vehicles, road conditions, time of day, specific roads, etc. It's a great example of a statistic that seems meaningful as a big picture but is useless for decisions, because it is so generalized that all context and meaning are lost.

berndi · a year ago
Common sense means taking the less risky option.

Assuming the risk per kilometer is constant regardless of trip length, the empirical risk estimate is fully determined by the number of deaths per distance traveled, and it scales roughly linearly with trip length.
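That scaling can be sketched in a few lines; the per-km rate below is a made-up illustrative number, not a real accident statistic:

```python
def trip_risk(deaths_per_km: float, distance_km: float) -> float:
    """Probability of dying on a trip under a constant per-km hazard.

    Exact survival model: P(death) = 1 - (1 - p)^d, which for small p
    is approximately p * d, i.e. risk grows linearly with distance.
    """
    return 1.0 - (1.0 - deaths_per_km) ** distance_km

p = 1e-8  # hypothetical deaths per km
# A 5000 km trip is ~1000x riskier than a 5 km trip under this model.
ratio = trip_risk(p, 5000) / trip_risk(p, 5)
```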

berndi commented on The scary sound of Aztec skull whistles   caneuro.github.io/blog/20... · Posted by u/Borrible
dyauspitr · a year ago
It’s always insane to notice that anything can be normalized. I bet those people weren’t strongly affected by death, murder, rape, gore etc.
berndi · a year ago
I’m not a vegetarian or animal rights advocate, but an argument could be made that cruelty to non-human animals is similarly normalized in today’s world, perhaps seen as a necessary evil to satisfy our appetite for food or fashion items.
berndi commented on US Olympic and other teams will bring their own AC units to Paris   apnews.com/article/olympi... · Posted by u/impish9208
lxgr · 2 years ago
> This is outside the optimal sleeping temperature FYI as a normal human being going to work. This really surprised me actually, but some say it should be as low as 18C.

I'm definitely with you on that these athletes should do whatever they think is best for them for the few days that they're in the olympic village, but generally cooling down the entire world's bedrooms to 18C would be pretty catastrophic, at least using today's technology. One day, hopefully!

berndi · 2 years ago
Assuming a bedroom with a surface area of 50 m^2, an insulation R-value of 2 m^2K/W, and an outside temperature of 30C, about 300W would be required to keep the room at 18C. Cooling down 8 billion such bedrooms would require 2.4TW, roughly a 13% increase over global average power consumption of about 18TW.

Certainly a lot, but it doesn't seem "catastrophic" and is realistic with today's technology.
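The back-of-envelope numbers check out; here they are restated in Python (all inputs are the comment's assumptions, not measured values):

```python
# Steady-state conduction through the bedroom envelope: Q = A * dT / R
area_m2 = 50.0          # assumed envelope surface area
r_value = 2.0           # assumed insulation R-value, m^2*K/W
delta_t = 30.0 - 18.0   # outside 30 C, target 18 C

heat_load_w = area_m2 * delta_t / r_value   # -> 300 W per bedroom

bedrooms = 8e9          # one bedroom per person, a rough upper bound
total_tw = bedrooms * heat_load_w / 1e12    # -> 2.4 TW of heat to remove
```

Note that 300 W is the heat to be removed; a real air conditioner with a COP of around 3 would draw roughly a third of that in electricity, so if anything the estimate is conservative.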

berndi commented on H5N1 prevalence in milk suggest US bird flu outbreak in cows is widespread   statnews.com/2024/04/25/h... · Posted by u/divbzero
triceratops · 2 years ago
Chickens eat worms. Feeding them crab or fish meal is fine.

Cows are herbivores. Feeding them meat is abhorrent.

berndi · 2 years ago
Do the cows know they are herbivores? Why would the fact that they don't prey on other animals in the wild make it "abhorrent" to feed them animal products in captivity?
berndi commented on Automated Terminal Attack Capability Making Its Way into Ukraine's FPV Drones   twz.com/news-features/aut... · Posted by u/nradov
berndi · 2 years ago
TL;DR: no evidence is presented for an automated terminal attack capability. Even a heat-seeking missile is more automated than these drones, which simply maintain course.
berndi commented on Quantum computers move closer to the assembly line   axios.com/2024/02/18/quan... · Posted by u/rbanffy
smurda · 2 years ago
5 years ago I was at MIT’s QC lab and 4 qubits was the max number of entangled qubits their machine could reach, sustainably. All the marketing fluff from IBM and Google about hundreds, or thousands, of entangled qubits is misleading - they don’t maintain those entangled states for long durations. Only when we can get hundreds of qubits to maintain entangled states for sustainable periods of time can we then attempt the theoretical use cases of the technology.
berndi · 2 years ago
You’re right! The trapped ion approach (IonQ) is the most promising direction toward scalable quantum computing. Superconducting qubits — such as those used by IBM and Google — require extreme cooling while ions can be trapped at room temperature. Superconducting qubits are also plagued by substrate imperfections, while trapped ions — being “nature’s qubits” — are absolutely identical in their quantum mechanical properties. This allows trapped ion quantum computers to realize the best demonstrated gate fidelities.
berndi commented on The race is on to stop Ozempic muscle loss   seattletimes.com/nation-w... · Posted by u/lxm
berndi · 2 years ago
Is there any evidence that Ozempic causes more muscle loss than what is to be expected from caloric restriction? I haven’t seen any.
berndi commented on Telling GPT-4 you're scared or under pressure improves performance   aimodels.substack.com/p/t... · Posted by u/Terretta
mjburgess · 2 years ago
> The “statistical parrot” assertion is pretty thoroughly disproven by this point

errr... all NNs are just optimisations of an associative probability objective: P(Y|X), they are by definition "statistical parrots". There isn't anything to prove or disprove.

People offering prompts as evidence are people who fundamentally do not understand the basics. NNs aren't strange empirical objects, they're specified by mathematical rules whose properties are known ahead of time.

Any property of a trained NN is derivative of a property of a formula P(Answer|Prompt, TrainingData)

This is an associative statistical relation, which by definition, selects elements of TrainingData by-association with the Prompt.

If the basis by which you understand LLMs is putting prompts into ChatGPT you're severely underqualified for drawing any conclusions about LLMs, and radically subject to confirmation bias.

berndi · 2 years ago
You’re confused about what “statistical parrot” means and you don’t seem to understand the difference between an optimization objective and the resulting model.

The term "parrot" is used to imply inference by something akin to a look-up table; specifically, it indicates poor out-of-sample performance and the lack of a proper world model. The optimization objective is irrelevant when determining the generalization performance of a model and when judging whether it can reason beyond looking up answers in a table.

As the user above noted, it is now quite well established that GPT-4 has impressive out-of-sample performance which can be explained by it possessing an actual model of the world and not being a “parrot”.

u/berndi

Karma: 361 · Cake day: August 10, 2019