empiricus commented on Analysis finds anytime electricity from solar available as battery costs plummet   pv-magazine-usa.com/2025/... · Posted by u/Matrixik
empiricus · 4 days ago
All nice and beautiful, but I don't understand how this will work in the winter in temperate areas. Do you maintain parallel natural gas installations and ramp them up in the winter? Does this double the cost?
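
A rough way to frame that question, with made-up round numbers (every $/kW figure below is an assumption for illustration, not a sourced cost): if the backup gas fleet is kept mostly idle, you pay its capital cost but little fuel, so under these assumptions the system cost rises by a fraction rather than doubling.

    # Back-of-envelope sketch of "does parallel gas backup double the cost?"
    # All figures are assumed round numbers for illustration only.
    solar_plus_storage_capex = 1500   # $/kW, assumed
    gas_backup_capex = 800            # $/kW, assumed turbine kept mostly idle

    total = solar_plus_storage_capex + gas_backup_capex
    print(f"cost multiplier: {total / solar_plus_storage_capex:.2f}x")  # ~1.53x, not 2x
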
empiricus commented on Horses: AI progress is steady. Human equivalence is sudden   andyljones.com/posts/hors... · Posted by u/pbui
jakewins · 9 days ago
I mean, I'm just some guy, but in my mind:

- They are not making progress, currently. The elephant-in-the-room problem of hallucinations is exactly the same as, or, as I said above, worse than it was 3 years ago

- It's clearly possible to solve this, since we humans exist and our brains don't have this problem

There are then two possible paths: Either the hallucinations are fundamental to the current architecture of LLMs, and there's some other aspect of the human brain's configuration that they've yet to replicate. Or the hallucinations will go away with better and more training.

The latter seems to be the bet everyone is making; that's why all these data centers are being built, right? So the bet is that larger training will solve the problem, and that there's enough training data, silicon, and electricity on earth to perform that "scale" of training.

There are 86B neurons in the human brain. Each one is a stand-alone living organism, like a biological microcontroller. It has constantly mutating state and memory: short-term through RNA and the presence or absence of proteins, long-term through chromatin formation enabling and disabling its own DNA over time, and in theory even permanent memory through DNA rewriting via TEs. Each one has a vast array of input modes: direct electrical stimulation, chemical signaling through a wide array of signaling molecules, and electrical field effects from adjacent cells.

Meanwhile, GPT-4 has 1.1T floats. No billions of interacting microcontrollers, just static floating-point values describing a network topology.

The complexity of the neural networks that run our minds is spectacularly higher than the simulated neural networks we're training on silicon.

That's my personal bet. I think 86B interconnected stateful microcontrollers are so much more capable than 1T static floats, and the 1T static floats are already nearly impossibly expensive to run. So I'm bearish, but of course I don't actually know. We will see. For now all I can conclude is that the frontier model developers lie incessantly in every press release, just like their LLMs.
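
To make the scale comparison above concrete, a back-of-envelope sketch (the parameter count is the figure claimed above; bytes-per-parameter is an assumption):

    # Rough arithmetic for the neurons-vs-parameters comparison above.
    neurons = 86e9           # neurons in the human brain, roughly
    params = 1.1e12          # GPT-4 floats, as claimed above
    bytes_per_param = 2      # fp16, assumed

    print(f"static floats per neuron: {params / neurons:.1f}")             # ~12.8
    print(f"weights footprint: {params * bytes_per_param / 1e12:.1f} TB")  # 2.2 TB

So each stateful "biological microcontroller" corresponds to only about a dozen static floats in this comparison.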

empiricus · 9 days ago
Thanks, that's a reasonable argument. Some critique: based on this argument it is very surprising that LLMs work so well, or at all. The fact that even small LLMs do something suggests that the human substrate is quite inefficient for thinking. Compared to LLMs, it seems to me that 1. some humans are more aware of what they know; 2. humans have very tight feedback loops to regulate and correct themselves. So I imagine we do not need much more scaling, just slightly better AI architectures. I guess we will see how it goes.
empiricus commented on Horses: AI progress is steady. Human equivalence is sudden   andyljones.com/posts/hors... · Posted by u/pbui
Mawr · 9 days ago
What is this horseshit.

What exactly does engine efficiency, specifically, have to do with horse usage? Cars like the Ford Model T entered mass production around 1908. Oh, and would you look at the horse usage graph around that date! sigh

The chess ranking graph seems to be just a linear relationship?

> This pink line, back in 2024, was a large part of my job. Answer technical questions for new hires.

>

> Claude, meanwhile, was now answering 30,000 questions a month; eight times as many questions as me & mine ever did.

So more == better. sigh. Did you run any, you know, studies to see the quality of those answers? I too can consult /dev/random for answers at a rate of gigabytes per second!

> I was one of the first researchers hired at Anthropic.

Yeah. I can tell. Somebody's high on their own supply here.

empiricus · 9 days ago
Well, for some reason horse numbers and horse usage dropped sharply at a moment in time. Probably there was some horse pandemic I forgot about.
empiricus commented on Horses: AI progress is steady. Human equivalence is sudden   andyljones.com/posts/hors... · Posted by u/pbui
jakewins · 9 days ago
I’m deeply sceptical. Every time a major announcement comes out saying so-and-so model is now a triple-Ph.D programming-triathlon winner, I try using it. Every time it’s the same: super fast code generation, until suddenly staggering hallucinations.

If anything the quality has gotten worse, because the models are now so good at lying when they don’t know that it’s really hard to review. Is this a safe way to make that syscall? Is the lock structuring here really deadlock-safe? The model will tell you with complete confidence that its code is perfect, and it’ll either be right or lying; it never says “I don’t know”.

Every time OpenAI or Anthropic or Google announce a “stratospheric leap forward” and I go back and try it and find it’s the same, I become more convinced that the lying is structural somehow: that the architecture they have is fundamentally unable to capture “I need to solve the problem I’m being asked to solve” instead of “I need to produce tokens that are likely to come after these other tokens”.

The tool is incredible, I use it constantly, but only for things where truth is irrelevant, or where I can easily verify the answer. So far I have found programming, other than trivial tasks and greenfield “write some code that does x”, much faster without LLMs.

empiricus · 9 days ago
I agree that the current models are far from perfect. But I am curious how you see the future. Do you really think/feel they will stop here?
empiricus commented on The Absent Silence (2010)   ursulakleguin.com/blog/3-... · Posted by u/dcminter
wavemode · 11 days ago
> it’s as if a great library, say the Library of Congress, refused to tell where they got their books and how they got their books and who chose the books and whether all the books they had were in the catalogue and available or some were held back, kept secret.

I think "proprietary" is a better descriptor than "secret" for Google Search's inner machinations. The general concept of engineering a search crawler is well-trodden: many companies have done it, there are open-source examples, and Google themselves have written blog posts about their own (see the toy sketch below).

It would probably be more apt to say, we know where the books came from and how they were acquired, we just don't necessarily know how the archive shelves in the basement are arranged and we don't know which employee is responsible for organizing them and we don't have the source code to the library's LMS. (All of which is true, by the way, for the LOC.) Proprietary, not secret.
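
As a toy illustration of how well-trodden the crawler concept is, a minimal sketch (example.com is a placeholder seed; a real crawler would also need robots.txt handling, politeness delays, and proper HTML parsing):

    # Minimal breadth-first crawler sketch: fetch, extract links, repeat.
    from urllib.parse import urljoin
    from urllib.request import urlopen
    import re

    def crawl(seed, limit=10):
        seen, frontier = set(), [seed]
        while frontier and len(seen) < limit:
            url = frontier.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
            except OSError:
                continue
            # naive link extraction; real crawlers parse HTML properly
            for href in re.findall(r'href="([^"]+)"', html):
                link = urljoin(url, href)
                if link.startswith("http"):
                    frontier.append(link)
        return seen

    print(crawl("https://example.com"))
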

empiricus · 11 days ago
Well, the secret is not how you crawl the web, but how you decide what to show to the users.
empiricus commented on A cell so minimal that it challenges definitions of life   quantamagazine.org/a-cell... · Posted by u/ibobev
empiricus · 22 days ago
I think the genome might be mostly just the "config file": the cell already contains most of the information and mechanisms needed for the organism, and the genome is config flags plus some more detailed settings that turn things on and off in the cell at specific times in the life of the organism. From this point of view, the discussion about how many base pairs/bytes of information are in the genome is misleading. A similar analogy: I can write a hello-world program that displays "hello world" on the screen. But the screen is 4K and the desktop background is also visible, so the hardware and OS are 6-8 orders of magnitude more complex than the puny program, and the output is then much more complex than the program itself.
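
A toy version of that analogy (the names and flags are invented for illustration): a few bits of "config" select behavior in a vastly larger pre-existing machine.

    # "Genome as config file": the flags below are tiny compared to the
    # machinery that interprets them (interpreter, OS, display pipeline).
    genome = {"message": "hello world", "loud": False}

    def run_cell(genome):
        # the "cell": everything beneath this print call dwarfs the config
        text = genome["message"]
        print(text.upper() if genome["loud"] else text)

    run_cell(genome)
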
empiricus commented on China reaches energy milestone by "breeding" uranium from thorium   scmp.com/news/china/scien... · Posted by u/surprisetalk
BoredPositron · 25 days ago
For all the doomerism about Germany and nuclear, there is at least Wendelstein 7-X doing frontier work. It's fine to get rid of legacy nuclear if there is a feasible bridge ahead.
empiricus · 25 days ago
By the time stellarator designs become economical (decades away, in the most optimistic case), you could cover all of Germany in PV panels, or even grow an entire new generation of forest. So far stellarators look like interesting vaporware; I mean, they are irrelevant to any current energy discussion.
empiricus commented on Being poor vs. being broke   blog.ctms.me/posts/2025-1... · Posted by u/speckx
empiricus · a month ago
I "like" it when people talk about UBI and say "but people on UBI are not happy and lack purpose". Compare that with being poor.
empiricus commented on Austria: Pylons as sculpture for public acceptance of expanding electrification   goodgoodgood.co/articles/... · Posted by u/Geekette
empiricus · 2 months ago
This looks nice, but somebody has to pay the difference, and maybe it should be those who oppose the normal-looking supports.
empiricus commented on How the cochlea computes (2024)   dissonances.blog/p/the-ea... · Posted by u/izhak
adornKey · 2 months ago
This subject has bothered me for a long time. My question to guys into acoustics was always: if the cochlea performs some kind of Fourier transform, what are the chances that it uses sine waves as the basis for the vector space? If it did anything like that, it could just as well use any slightly different waveforms as the basis for the transformation. Stiffness and non-linearity will surely ensure that any ideal rubber model from physics will in reality differ from a perfect sine.
empiricus · 2 months ago
Well, the cochlea is working within the realm of biological and physical possibilities. Basically it is a triangle through which waves propagate, with sensors along the edge. Roughly speaking, this is similar to a filter bank of Gabor filters that respond to rising frequencies along the triangle's edge. Ergo you can say "Fourier", but it only means sensors responding to different frequencies because of their location.
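
A minimal numeric sketch of that filter-bank picture (sample rate, center frequencies, and bandwidths are illustrative assumptions, not physiological values):

    # A bank of Gabor (Gaussian-windowed sinusoid) filters as a toy tonotopic
    # map: each filter stands for a sensor at one position along the membrane.
    import numpy as np

    fs = 16_000                                  # sample rate (Hz), assumed
    t = np.arange(0, 0.05, 1 / fs)               # 50 ms filter support
    center_freqs = np.geomspace(100, 8000, 32)   # "positions" along the edge

    def gabor(f0, cycles=8):
        # Gaussian window spanning ~`cycles` periods of the center frequency
        sigma = cycles / (2 * np.pi * f0)
        tc = t - t.mean()
        return np.exp(-0.5 * (tc / sigma) ** 2) * np.cos(2 * np.pi * f0 * tc)

    bank = [gabor(f) for f in center_freqs]

    # A pure tone mostly excites the filters whose center frequency matches
    # it, i.e. one "place" on the membrane: Fourier-like, without sine bases.
    tone = np.cos(2 * np.pi * 1000 * np.arange(0, 0.2, 1 / fs))
    responses = [np.abs(np.convolve(tone, k, mode="valid")).max() for k in bank]
    print(center_freqs[int(np.argmax(responses))])  # close to 1000 Hz

Run it and the strongest response lands on the filter nearest 1000 Hz: frequency analysis by place, not by sine-wave bases.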
