Readit News
Eisenstein commented on Burner Phone 101   rebeccawilliams.info/burn... · Posted by u/CharlesW
mikeytown2 · a day ago
If you need to communicate with people in your area and not be tracked, MeshCore software with LoRa hardware like this https://lilygo.cc/en-ca/products/t-lora-pager is something to consider. Text-only, completely offline.
Eisenstein · a day ago
Except that your texts go out to everyone on the mesh network.
Eisenstein commented on Launch HN: Reality Defender (YC W22) – API for Deepfake and GenAI Detection   realitydefender.com/platf... · Posted by u/bpcrd
coeneedell · 6 days ago
There are three observations worth knowing here: (A) high-quality, battle-tested architectures are sold via an API, so samples are easy to retrieve at scale; (B) lower-quality, novel architectures are often published on GitHub and can be scaled on budget compute resources; (C) these models often perform well at classifying content generated by architectures similar to those they were trained on, even if the architecture is not identical.

As for actual lead time associated with our actual strategy, that’s probably not something I can talk about publicly. I can say I’m working on making it happen faster.

Eisenstein · 6 days ago
I don't want to be rude, but is this not a question you get asked by potential customers? Is that your answer for them? It sounds a lot like 'I guess we will find out.'

Deleted Comment

Eisenstein commented on How much do electric car batteries degrade?   sustainabilitybynumbers.c... · Posted by u/xnx
cuttothechase · 7 days ago
Is this true!?

So does it mean that if I use an 80%-capacity battery, the actual functional value I get out of it would be considerably less than what the 80% figure would imply?

Eisenstein · 7 days ago
"The degradation rate of lithium-ion battery is not a linear process with respect to number of cycles, battery aging tests (Fig. 1) have shown that in cycling tests the degradation rate is significantly higher during the early cycles than during the later cycles, and then increases rapidly when reaching the end of life (EoL)."

Actually, 80% is considered effectively 'end of life':

"Battery end of life is typically defined as the point at which the battery only provides 80% of its rated maximum capacity"

* https://www.researchgate.net/publication/303890624_Modeling_...
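As a toy illustration of that nonlinear shape (the curve and all numbers here are invented for illustration, not taken from the cited paper):

```python
# Toy model of nonlinear capacity fade: a fast "break-in" loss early,
# a slow, roughly linear middle, and an accelerating knee near end of
# life (EoL), defined here as the 80%-of-rated-capacity point.

def capacity_fraction(cycle: int, total_cycles: int = 1500) -> float:
    """Remaining capacity as a fraction of rated capacity (hypothetical curve)."""
    x = cycle / total_cycles
    break_in = 0.05 * (1 - (1 - x) ** 4)   # rapid early loss that saturates
    linear = 0.10 * x                      # steady mid-life fade
    knee = 0.10 * x ** 6                   # accelerating loss near EoL
    return 1.0 - break_in - linear - knee

# First cycle at which this toy cell crosses the 80% EoL threshold
eol_cycle = next(c for c in range(1501) if capacity_fraction(c) <= 0.80)
```

With these made-up coefficients the first 150 cycles lose more capacity than any 150-cycle stretch in mid-life, matching the quoted fast-then-slow-then-fast pattern.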

Eisenstein commented on GenAI FOMO has spurred businesses to light nearly $40B on fire   theregister.com/2025/08/1... · Posted by u/rntn
kgwgk · 7 days ago
> I also think things like "is this chest Xray cancer?" are going to be hugely impactful.

Yes, but https://radiologybusiness.com/topics/artificial-intelligence...

Nine years ago, scientist Geoffrey Hinton famously said, “People should stop training radiologists now,” believing it was “completely obvious” AI would outperform human rads within five years.

Eisenstein · 7 days ago
If you go back in history, you will find people confidently claiming things in either direction of what eventually happened.
Eisenstein commented on Launch HN: Reality Defender (YC W22) – API for Deepfake and GenAI Detection   realitydefender.com/platf... · Posted by u/bpcrd
bpcrd · 7 days ago
We've actually deployed to several Tier 1 banks and large enterprises already for various use-cases (verification, fraud detection, threat intelligence, etc.). The feedback that we've gotten so far is that our technology is high accuracy and a useful signal.

In terms of how our technology works, our research team has trained multiple detection models to look for specific visual and audio artifacts that the major generative models leave behind. These artifacts aren't perceptible to the human eye / ear, but they are actually very detectable to computer vision and audio models.

Each of these expert models gets combined into an ensemble system that weighs all the individual model outputs to reach a final conclusion.
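A weighted ensemble of expert detectors like the one described can be sketched in a few lines. This is a hypothetical illustration, not Reality Defender's actual system; the model names, scores, and weights are invented:

```python
# Each "expert" model returns a score in [0, 1] indicating how likely the
# input is synthetic; a weighted average combines them into one verdict.

def ensemble_verdict(scores: dict[str, float], weights: dict[str, float],
                     threshold: float = 0.5) -> tuple[float, bool]:
    """Combine per-expert scores into one confidence and a fake/real call."""
    total_weight = sum(weights[name] for name in scores)
    combined = sum(scores[name] * weights[name] for name in scores) / total_weight
    return combined, combined >= threshold

# Invented expert outputs for a single input
scores = {"visual_artifacts": 0.9, "audio_artifacts": 0.7, "frequency_domain": 0.4}
weights = {"visual_artifacts": 0.5, "audio_artifacts": 0.3, "frequency_domain": 0.2}
confidence, is_fake = ensemble_verdict(scores, weights)  # 0.74, True
```

A learned gating network or a meta-classifier over the expert outputs would be a common alternative to fixed weights.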

We've got a rigorous process of collecting data from new generators, benchmarking them, and retraining our models when necessary. Often retrains aren't needed though, since our accuracy seems to transfer well across a given deepfake technique. So even if new diffusion or autoregressive models come out, for example, the artifacts tend to be similar and are still caught by our models.

I will say that our models are most heavily benchmarked on convincing audio/video/image impersonations of humans. While we can return results for items outside that scope, we've tended to focus training and benchmarking on human impersonations since that's typically the most dangerous risk for businesses.

So that's a caveat to keep in mind if you decide to try out our Developer Free Plan.

Eisenstein · 7 days ago
What's the lead time between new generators and a new detection model? What about novel generators that are never made public?

I think the most likely outcome of a criminal organization doing this is that they train a model with a public architecture from scratch on the material they want to reproduce, and then use it without telling anyone. Would your detector prevent this attack?

Eisenstein commented on The lottery ticket hypothesis: why neural networks work   nearlyright.com/how-ai-re... · Posted by u/076ae80a-3c97-4
deepfriedchokes · 7 days ago
Rather than reframing intelligence itself, wouldn’t Occam’s Razor suggest instead that this isn’t intelligence at all?
Eisenstein · 7 days ago
Unless you can provide a definition of intelligence which is internally consistent and does not exclude things which are obviously intelligent or include things which are obviously not intelligent, the only thing Occam's razor suggests is that the basis for solving novel problems is the ability to pattern match combined with a lot of background knowledge.
Eisenstein commented on The lottery ticket hypothesis: why neural networks work   nearlyright.com/how-ai-re... · Posted by u/076ae80a-3c97-4
highfrequency · 7 days ago
Enjoyed the article. To play devil’s advocate, an entirely different explanation for why huge models work: the primary insight was framing the problem as next-word prediction. This immediately creates an internet-scale dataset with trillions of labeled examples, which also has rich enough structure to make huge expressiveness useful. LLMs don’t disprove bias-variance tradeoff; we just found a lot more data and the GPUs to learn from it.

It’s not like people didn’t try bigger models in the past, but either the data was too small or the structure too simple to show improvements with more model complexity. (Or they simply trained the biggest model they could fit on the GPUs of the time.)

Eisenstein · 7 days ago
Why does 'next-word prediction' explain why huge models work? You're saying we needed scale, and that we use next-word prediction, but how does one relate to the other? Diffusion models also exist and work well for images, and they seem to work for LLMs too.
Eisenstein commented on How much do electric car batteries degrade?   sustainabilitybynumbers.c... · Posted by u/xnx
cuttothechase · 7 days ago
>> That’s not bad, given that most cars are scrapped somewhere in the 150,000 to 200,000 miles range. At that point, a Tesla will have more than 80% of its initial capacity, and in some cases, even more. So people will probably give up their car, well, well before the battery gets close to becoming a burden.

Can they not see that this is correlation, not causation? Why would an EV be given up at 150-200K miles when it has far fewer moving parts and stressors than traditional ICE-based vehicles?

Eisenstein · 7 days ago
Batteries are not like gas tanks. 80% of original capacity doesn't just mean you have 20% less; the underlying chemical changes cause a lot of other effects too, like higher internal resistance.
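A rough sketch with invented pack-level numbers shows why the resistance increase matters on its own:

```python
# Hypothetical numbers: an aged pack that has lost capacity has typically
# also gained internal resistance, so the voltage sags more under load,
# wasting more energy as heat and cutting peak power.

def terminal_voltage(ocv_v: float, current_a: float, resistance_ohm: float) -> float:
    """Terminal voltage under load: open-circuit voltage minus the IR drop."""
    return ocv_v - current_a * resistance_ohm

OCV, DRAW = 360.0, 200.0                     # volts, amps (illustrative)
v_new = terminal_voltage(OCV, DRAW, 0.10)    # 340.0 V at 100 mOhm
v_aged = terminal_voltage(OCV, DRAW, 0.20)   # 320.0 V at 200 mOhm
heat_new = DRAW ** 2 * 0.10                  # 4000 W lost as heat
heat_aged = DRAW ** 2 * 0.20                 # 8000 W lost as heat
```

At the same current draw, the aged pack in this sketch delivers less power at the terminals while dissipating twice the heat internally.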

Deleted Comment

u/Eisenstein

Karma: 4284 · Joined: February 28, 2022