Readit News
petulla commented on Toothpaste made with keratin may protect and repair damaged teeth: study   kcl.ac.uk/news/toothpaste... · Posted by u/sohkamyung
buybackoff · 7 months ago
The picture says "enamel-mimicking" and the text says "protective coating that mimics the structure and function of natural enamel", so it looks like a protective layer, not true repair. I've been using a paste with NovaMin lately; it also creates a protective layer and is also marketed as "repair". I like it, and I feel some heat when it touches my teeth, so the chemical reaction must be working. But the marketing leaves a bad taste in the mouth.
petulla · 7 months ago
Try BioMin F, a newer take on NovaMin


petulla commented on Ilya Sutskever's SSI Inc raises $1B   reuters.com/technology/ar... · Posted by u/colesantiago
hn_throwaway_99 · 2 years ago
Lots of comments either defending this ("it's taking a chance on being the first to build AGI with a proven team") or saying "it's a crazy valuation for a 3 month old startup". But both of these "sides" feel like they miss the mark to me.

On one hand, I think it's great that investors are willing to throw big chunks of money at hard (or at least expensive) problems. I'm pretty sure all the investors putting money in will do just fine even if their investment goes to zero, so this feels exactly what VC funding should be doing, rather than some other common "how can we get people more digitally addicted to sell ads?" play.

On the other hand, I'm kind of baffled that we're still talking about "AGI" in the context of LLMs. While I find LLMs to be amazing, and an incredibly useful tool (if used with a good understanding of their flaws), the more I use them, the clearer it becomes to me that they're not going to get us anywhere close to "general intelligence". That is, the more I have to work around hallucinations, the clearer it becomes that LLMs really are just "fancy autocomplete", even if it's really, really fancy autocomplete. I see lots of errors that make sense if you understand an LLM is just a statistical model of word/token frequency, but that you would never expect to see in a system with a true understanding of the underlying concepts. And while I'm not in the field, so I may have no right to comment, there are leaders in the field, like LeCun, who have expressed basically the same idea.

So my question is, have Sutskever et al. given any account of how they intend to "cross the chasm" from where we are now with LLMs to a model of understanding, or has it been mainly "look what we did before, you should take a chance on us to make discontinuous breakthroughs in the future"?

petulla · 2 years ago
Ilya has discussed this question: https://www.youtube.com/watch?v=YEUclZdj_Sc


petulla commented on California Grid Breezes Through Heatwave with Batteries   thinc.blog/2024/07/14/cal... · Posted by u/ChuckMcM
petulla · 2 years ago
Hope other states follow. The fact that Arizona is still 80%+ non-renewable is just such a missed opportunity.


u/petulla

Karma: 581 · Cake day: March 2, 2013