dharma1 commented on Purple Earth hypothesis   en.wikipedia.org/wiki/Pur... · Posted by u/colinprince
svdr · a month ago
The removal is only temporary.
dharma1 · a month ago
Can be a couple of hundred years with trees and wood used for housing. Long enough to figure things out
dharma1 commented on Ghostwriter – use the reMarkable2 as an interface to vision-LLMs   github.com/awwaiid/ghostw... · Posted by u/wonger_
memorydial · 7 months ago
That would be next-level immersion! You could probably achieve this by rendering the LLM’s response using a handwritten font—maybe even train a model on your own handwriting to make it feel truly personal.
dharma1 · 7 months ago
Script fonts don’t really look like handwriting - too regular.

But one of the early deep learning papers from Alex Graves does this really well with LSTMs - https://arxiv.org/abs/1308.0850

Implementation - https://www.calligrapher.ai/
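The core idea of the Graves paper - an RNN that autoregressively samples pen offsets (dx, dy, pen-lift) from a probabilistic output head - can be sketched roughly as below. This is an untrained, pure-stdlib toy: the tiny hidden size, the single-Gaussian head, and all weight names are simplifications of the paper's stacked LSTMs and Gaussian-mixture outputs.

```python
import math
import random

random.seed(0)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

HIDDEN = 8  # tiny hidden state; the paper uses stacked LSTMs with hundreds of units

# Random (untrained) weights -- a trained model learns these from pen-stroke data.
Wxh = [[random.gauss(0, 0.1) for _ in range(3)] for _ in range(4 * HIDDEN)]
Whh = [[random.gauss(0, 0.1) for _ in range(HIDDEN)] for _ in range(4 * HIDDEN)]
Wy = [[random.gauss(0, 0.1) for _ in range(HIDDEN)] for _ in range(5)]

def lstm_step(x, h, c):
    """One LSTM step over input x = (dx, dy, pen), returning the new (h, c)."""
    z = [sum(Wxh[r][j] * x[j] for j in range(3)) +
         sum(Whh[r][j] * h[j] for j in range(HIDDEN)) for r in range(4 * HIDDEN)]
    i, f, o, g = z[:HIDDEN], z[HIDDEN:2*HIDDEN], z[2*HIDDEN:3*HIDDEN], z[3*HIDDEN:]
    c = [sigmoid(f[k]) * c[k] + sigmoid(i[k]) * math.tanh(g[k]) for k in range(HIDDEN)]
    h = [sigmoid(o[k]) * math.tanh(c[k]) for k in range(HIDDEN)]
    return h, c

def sample_strokes(steps=50):
    """Autoregressively sample (dx, dy, pen_lift) offsets, Graves-2013 style."""
    h, c = [0.0] * HIDDEN, [0.0] * HIDDEN
    x, strokes = (0.0, 0.0, 0.0), []
    for _ in range(steps):
        h, c = lstm_step(x, h, c)
        y = [sum(Wy[r][j] * h[j] for j in range(HIDDEN)) for r in range(5)]
        # Output head: Gaussian mean/log-std for the pen offset, plus a lift logit.
        dx = random.gauss(y[0], math.exp(y[2]))
        dy = random.gauss(y[1], math.exp(y[3]))
        pen = 1.0 if random.random() < sigmoid(y[4]) else 0.0
        x = (dx, dy, pen)
        strokes.append(x)
    return strokes

strokes = sample_strokes()
```

Untrained, this just produces random scribbles; the paper trains on real pen traces (IAM-OnDB) and conditions the sampler on text, which is what calligrapher.ai wraps in a web UI.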

dharma1 commented on The Ribbon Microphone   khz.ac/sound/ribbon-mic/... · Posted by u/glittershark
Aachen · 7 months ago
Is there a recording somewhere to hear what this sounds like?
dharma1 · 7 months ago
They have somewhat different characteristics from dynamic/condenser mics - usually less high-frequency content and a more pronounced proximity effect.

Often used for horns, violins, and guitar cabs - sources where you want to tame "shrillness" - but you can use them anywhere.

dharma1 commented on How to turn off Apple Intelligence on your iPhone   theverge.com/24340563/app... · Posted by u/laktak
Over2Chars · 7 months ago
I suspect disabling this is absolutely the right thing to do.

But I'm curious, has anyone really made any effort to test this so-called AI first to see if it's at all useful, or lives up to any level of expectation?

Or is there some a priori evil element to justify this (Tim Cook slurping up all your data and using it for training and advertising without consent or opt out) that I don't know about?

dharma1 · 7 months ago
It’s mostly useful as a hands-free shortcut to get replies from ChatGPT. Beyond that it’s useless.

All the promises of integrating deeply into the OS and exposing APIs directly into apps, so you can use natural language to get any app to do things for you, are still vaporware.

So much potential, but none of it delivered yet. I hope that changes soon.

dharma1 commented on Make It Yourself   makeityourself.org/... · Posted by u/deivid
aydgn · 10 months ago
what are the other options?
dharma1 · 10 months ago
looking at them again, they're definitely all 3d renders with a consistent toon/outline material

beautiful.

dharma1 commented on Make It Yourself   makeityourself.org/... · Posted by u/deivid
dharma1 · 10 months ago
love the illustrations! all custom made?
dharma1 commented on Saturated fat: the making and unmaking of a scientific consensus (2022)   journals.lww.com/co-endoc... · Posted by u/mgh2
KempyKolibri · 10 months ago
Why would we believe otherwise? The evidence suggests that replacing butter with margarine reduces LDL-c (see https://pubmed.ncbi.nlm.nih.gov/9771853/), and we have an enormous body of evidence showing that LDL-c is a causal agent in atherosclerosis (https://academic.oup.com/eurheartj/article/38/32/2459/374510...).

So why wouldn’t replacing butter with margarine be a positive step for one’s cardiovascular risk profile?

dharma1 · 10 months ago
Not a big proponent of saturated fats, but dietary saturated fat has only a modest impact on LDL-c - around 5-10%. Other things with a similar or larger impact are exercise, reducing sugar intake, not being overweight, and consuming soluble fibre. Plant sterols/stanols also help.
dharma1 commented on Math is still catching up to the genius of Ramanujan   quantamagazine.org/sriniv... · Posted by u/philiplu
chucknthem · 10 months ago
I wonder if you can train a neuronetwork to have the kind of intuition Ramanujan had. How incredible would it be for math discoveries. Then separate AIs to try and prove or disprove the insights.
dharma1 · 10 months ago
Probably, but we don't know how to build that type of intuition - in humans or in machines.

AlphaProof does do some kind of neural network guided search and automated theorem proving to validate it https://deepmind.google/discover/blog/ai-solves-imo-problems...

But it's still fairly brute force and inefficient
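The "neural network guided search" idea can be caricatured with a toy best-first search over rewrite steps, where a hand-written scoring function stands in for the learned policy/value network. Everything here (the states, the moves, the scoring) is illustrative, not AlphaProof's actual algorithm:

```python
import heapq

def guided_search(start, goal, max_nodes=10_000):
    """Best-first search: `score` stands in for a trained network that
    ranks which candidate step looks most promising to expand next."""
    def score(state):
        return abs(state - goal)  # toy heuristic; the real system learns this

    # "Inference rules": allowed rewrites of the current state.
    moves = [("+1", lambda n: n + 1), ("*2", lambda n: n * 2), ("-3", lambda n: n - 3)]
    frontier = [(score(start), start, [])]
    seen = {start}
    while frontier and len(seen) < max_nodes:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path  # the sequence of steps reaching the goal
        for name, fn in moves:
            nxt = fn(state)
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(nxt), nxt, path + [name]))
    return None

path = guided_search(1, 22)
```

The brute-force flavour is visible even in the toy: without a good scoring function the frontier explodes, which is why so much of the engineering goes into the learned guidance (plus, in AlphaProof's case, a formal checker like Lean validating every step).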

dharma1 commented on Math is still catching up to the genius of Ramanujan   quantamagazine.org/sriniv... · Posted by u/philiplu
bombastry · 10 months ago
The recent book Mathematica by David Bessis attempts to describe this exact topic. One core theme of the book is that math is done by building mental models and using the intuitions that come from them. Formalism is then used to shape and refine these models. By spending time developing these models, insights eventually become obvious. These mental models are the essence of mathematics, not theorems. The author explicitly uses neural networks as an example to describe how this feedback could work in our brains to reshape our mental models.

The core premise of the book is to describe how mathematicians work and think and to show that this is a process that everyone can do (although some will be better at it than others). It includes interesting accounts of Grothendieck, Bill Thurston, and Descartes as well as from the author's own research career at Yale and École normale supérieure. The book is targeted at the general reader and at times reads a little like a self-help book, especially in the first third or so. However, I found it to be an enjoyable and fascinating read. It provoked a lot of interesting questions about the nature of learning and provided a framework to begin to answer them (e.g. "How can I have proved something and yet feel no understanding of it?", "How can some people solve problems orders of magnitude faster than other smart people, as if they don't even have to think about it?", "Why do I sometimes watch a presentation on a new topic, follow every step, and come away feeling like I've learned nothing?" (* see excerpt below)). I don't think I'm doing it justice here, so I'll stop and simply say that, based on your comment, I highly recommend it.

_______

* I'll use this as an excuse to provide a related excerpt featuring Fields Medalist and Abel Prize winner Jean-Pierre Serre:

One day, I had to give a lecture at the Chevalley Seminar, a group theory seminar in Paris. I didn't have substantial new results to announce, but it was an opportunity to make a presentation even simpler than usual. [...] A couple of minutes before the talk was to start, Serre came in and sat in the second row. I was honored to have him in the audience, but I let him know right off that the presentation might not be very interesting to him. It was intended for a general audience and I was going to be explaining very basic things.

What I didn't tell him, of course, was that his presence was intimidating. Still, I didn't want to raise the level of my talk only to keep him interested. I just kept an eye out to see if he'd taken off his glasses, which would mean he was getting bored and had stopped listening. No worries there—he kept his glasses on till the end.

I gave my presentation as I would have without him there, speaking to the entire audience, especially the students seated in the back, whom I was pleased to see listening and looking like they understood. It was a normal presentation, fairly successful, not very deep but well prepared, clear, and intelligible. At the end of the seminar, Serre came up to me and said—and here I quote verbatim: "You'll have to explain that to me again, because I didn't understand anything."

That's a true story, and it plunged me into a state of profound perplexity.

Apparently, Serre wasn't using the verb to understand the way most people use it. The concepts and reasonings of my talk couldn't really have caused him any difficulty. I'm sure he wanted to say that he understood what I had explained, but he hadn't understood why what I had explained was true.

There are two levels of understanding. The first level consists of following the reasoning step by step and accepting that it's correct. Accepting is not the same as understanding. The second level is real understanding. It requires seeing where the reasoning comes from and why it's natural.

In thinking again about Serre's comment, I realized that my presentation had too many “miracles,” too many arbitrary choices, too many things that worked without my really knowing why. Serre was right; it was incomprehensible. His feedback helped me become aware of a number of very big holes in my understanding of the objects and situations I was working on at the time. In the years that followed, research into explanations for these various miracles allowed me to fill in some of the holes and achieve some of the most important results of my career. (However, some of the miracles remain unexplained to this day.)

But the most troubling aspect was the abruptness, the frankness with which Serre had overplayed his own incomprehension.

dharma1 · 10 months ago
Thanks, will check out the book!

My intuition is that some people are able to develop abstractions on top of abstractions (compression is a key part of intelligence) that allow them to traverse the search space much, much faster than without these, and with enough repetition this becomes routine, even faster in the brain.

I don't think we have a good theory of how this works, or at least I haven't come across it.

Neural-network-guided search is somewhat similar, but I think we are missing several key pieces.
