Readit News
andrewgleave commented on AR Fluid Simulation Demo   danybittel.ch/fluid... · Posted by u/danybittel
andrewgleave · 3 days ago
Reminds me of Bret Victor's demo of projected AR turbulence around a toy car at Dynamicland. Only a short clip, but you get the idea: https://youtu.be/5Q9r-AEzRMA?t=47
andrewgleave commented on Anthropic raises $13B Series F   anthropic.com/news/anthro... · Posted by u/meetpateltech
llamasushi · 5 days ago
The compute moat is getting absolutely insane. We're basically at the point where you need a small country's GDP just to stay in the game for one more generation of models.

What gets me is that this isn't even a software moat anymore - it's literally just whoever can get their hands on enough GPUs and power infrastructure. TSMC and the power companies are the real kingmakers here. You can have all the talent in the world but if you can't get 100k H100s and a dedicated power plant, you're out.

Wonder how much of this $13B is just prepaying for compute vs actual opex. If it's mostly compute, we're watching something weird happen - like the privatization of Manhattan Project-scale infrastructure. Except instead of enriching uranium we're computing gradient descents lol

The wildest part is we might look back at this as cheap. GPT-4 training was what, $100M? GPT-5/Opus-4 class probably $1B+? At this rate GPT-7 will need its own sovereign wealth fund

andrewgleave · 4 days ago
> “There's kind of like two different ways you could describe what's happening in the model business right now. So, let's say in 2023, you train a model that costs 100 million dollars. And then you deploy it in 2024, and it makes $200 million of revenue. Meanwhile, because of the scaling laws, in 2024, you also train a model that costs a billion dollars. And then in 2025, you get $2 billion of revenue from that $1 billion, and you spend $10 billion to train the model.
>
> So, if you look in a conventional way at the profit and loss of the company, you've lost $100 million the first year, you've lost $800 million the second year, and you've lost $8 billion in the third year. So, it looks like it's getting worse and worse. If you consider each model to be a company, the model that was trained in 2023 was profitable.”
>
> ...
>
> “So, if every model was a company, the model in this example is actually profitable. What's going on is that at the same time as you're reaping the benefits from one company, you're founding another company that's much more expensive and requires much more upfront R&D investment. And so, the way that it's going to shake out is this will keep going up until the numbers go very large, the models can't get larger, and then it will be a large, very profitable business, or at some point, the models will stop getting better.
>
> The march to AGI will be halted for some reason, and then perhaps it will be some overhang, so there will be a one-time, oh man, we spent a lot of money and we didn't get anything for it, and then the business returns to whatever scale it was at.”
>
> ...
>
> “The only relevant questions are, at how large a scale do we reach equilibrium, and is there ever an overshoot?”

From Dario’s interview on Cheeky Pint: https://podcasts.apple.com/gb/podcast/cheeky-pint/id18210553...
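The arithmetic in the quote can be sketched directly. This is a minimal model of the per-cohort vs. company-level P&L, assuming (as in the quote) each model costs 10x the previous one to train and earns 2x its cost in revenue the following year; all names and numbers here are illustrative, not from Anthropic's actual financials.

```python
# Per-model vs. company-level P&L, figures in $M, per the quoted example.
train_cost = {2023: 100, 2024: 1_000, 2025: 10_000}

# Each model earns 2x its training cost in revenue the year after training.
revenue = {year + 1: 2 * cost for year, cost in train_cost.items()}

# Company view: revenue booked minus training spend in each calendar year.
company_pnl = {
    y: revenue.get(y, 0) - train_cost.get(y, 0) for y in range(2023, 2026)
}

# Model view: each model's lifetime revenue minus its own training cost.
model_pnl = {y: revenue[y + 1] - cost for y, cost in train_cost.items()}

print(company_pnl)  # losses widen every year
print(model_pnl)    # yet every individual model is profitable
```

The company looks increasingly unprofitable (-100, -800, -8,000) even though each model cohort, viewed in isolation, doubles its money.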

andrewgleave commented on Getting good results from Claude Code   dzombak.com/blog/2025/08/... · Posted by u/ingve
aosaigh · a month ago
Just today I had my first real success with Claude (and with coding agents generally). I’ve played with Cursor in the past but am now trying Claude and others.

As mentioned in the article, the big trick is having clear specs. In my case I sat down for two hours and wrote a 12-step document on how I would implement this (along with background information). Claude went through it step by step and wrote the code. I imagine this saved me 6-10 hours. I’m now reviewing the code, will test it, and will start adjusting and adding future functionality.

Its success was rooted in the fact I knew exactly how to do what it needed to do. I wrote out all the steps and it just followed my lead.

It makes it clear to me that mid and senior developers aren’t going anywhere.

That said, it was amazing to just see it go through the requirements and implement modules full of organised documented code that I didn’t have to write.

andrewgleave · a month ago
Yeah. Read “Programming as Theory Building” by Naur [1] to understand why you still need to develop a theory of the problem and how to model it yourself, lest the LLM concoct an incorrect one for you.

[1] https://gwern.net/doc/cs/algorithm/1985-naur.pdf

andrewgleave commented on Ask HN: What are you working on? (March 2025)    · Posted by u/david927
andrewgleave · 5 months ago
I recently built a quick SwiftUI app to pin quotes and posts from X to my home screen.

I found I was liking/bookmarking insightful content on X I rarely saw again and wanted a way to resurface them somewhere I would see multiple times per day.

You can import from X via the share sheet or enter them manually. It's minimal, but I've found having:

"i hate how well asking myself "if i had 10x the agency i have what would i do" works"

there every time I unlock my phone is worth the development effort.

https://apps.apple.com/gb/app/lumatta/id6740705796

andrewgleave commented on Ask HN: What's the most creative 'useless' program you've ever written?    · Posted by u/reverseCh
jinay · 10 months ago
When I was first learning computer vision, I wrote a program that could tell the time from an image of a clock [1]. I had no purpose for it besides the fact that it seemed like a cool problem to try and solve.

Years later, I get an email from a stranger in Korea, asking me how to run my program. Why would he want to use my silly program? Turns out you can adapt the code to read analog pressure gauges which is really useful for chemical plants. Goes to show that there's often a use for most things.

[1] https://github.com/jinayjain/timekeeper
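The core of the idea is simple once the hand angles are extracted from the image. A minimal sketch of that last step, assuming the vision stage (in the linked project, or any other detector) has already produced angles measured clockwise from 12 o'clock; `hands_to_time` is a hypothetical helper, not code from the repo:

```python
def hands_to_time(hour_angle: float, minute_angle: float) -> tuple[int, int]:
    """Convert hand angles (degrees, clockwise from 12 o'clock) to (hour, minute).

    The minute hand sweeps 360 degrees per 60 minutes (6 deg/min);
    the hour hand sweeps 360 degrees per 12 hours (30 deg/hour).
    """
    minute = round(minute_angle / 6) % 60
    hour = int(hour_angle // 30) % 12
    return hour, minute

print(hands_to_time(90, 180))  # quarter-past-ish: (3, 30)
```

A real reader also has to disambiguate which hand is which (usually by length) and use the hour hand's fractional position to sanity-check the minute reading.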

andrewgleave · 10 months ago
This is very similar to when I used OpenCV to read the angle of a cardboard knob I pinned on the wall in my office to change the volume on Spotify!
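The knob trick reduces to the same geometry: find two reference points on the pointer, take an angle, map it to a range. A minimal sketch assuming the two points have already been located (with OpenCV contours, a colored marker, or anything else); the function names and the 270-degree sweep are my own assumptions, not the original setup:

```python
import math

def knob_angle(center: tuple[float, float], tip: tuple[float, float]) -> float:
    """Pointer angle in degrees, 0 = pointing right, counter-clockwise positive.

    Note the y-axis flip: image coordinates grow downward.
    """
    dx = tip[0] - center[0]
    dy = center[1] - tip[1]
    return math.degrees(math.atan2(dy, dx)) % 360

def angle_to_volume(angle_deg: float, sweep: float = 270.0) -> int:
    """Linearly map an angle within the knob's sweep onto a 0-100 volume."""
    return max(0, min(100, round(100 * angle_deg / sweep)))

print(angle_to_volume(knob_angle((0, 0), (0, -1))))  # pointer straight up
```

From there it's one API call per frame to set the Spotify volume whenever the angle changes by more than some threshold.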
andrewgleave commented on Richard Feynman and the Connection Machine (1989)   longnow.org/essays/richar... · Posted by u/jmstfv
andrewgleave · a year ago
> “But what Richard hated, or at least pretended to hate, was being asked to give advice. So why were people always asking him for it? Because even when Richard didn't understand, he always seemed to understand better than the rest of us. And whatever he understood, he could make others understand as well. Richard made people feel like a child does, when a grown-up first treats him as an adult. He was never afraid of telling the truth, and however foolish your question was, he never made you feel like a fool.”

This is why he is spoken of with such reverence and why his insights have profoundly impacted both scientists and non-scientists alike. Few Nobel laureates have achieved such popular influence.

andrewgleave commented on ChatGPT consumes 25 times more energy than Google   brusselstimes.com/1042696... · Posted by u/cdme
andrewgleave · a year ago
Unsurprisingly, this pessimistic and shortsighted take emanates from a newspaper in the heart of the EU...
andrewgleave commented on On the Double-Slit Experiment and Quantum Interference in the Wolfram Model (2020)   wolframphysics.org/bullet... · Posted by u/floobertoober
danbruc · 2 years ago
I just skimmed the article for sanity checking and it looks more like crackpottery than science to me.

Looking at the numbers on the graphs for single-slit diffraction, they are just binomial coefficients, at least mostly; not sure why there are pieces missing in the last rows. That is also what you would expect when you repeatedly make binary decisions to go left or right. The article does not mention the binomial distribution once; it only appears in a comment.

And then they claim that it converges to the actual single-slit diffraction distribution, something with a Chebyshev polynomial and the sinc function, according to the article. Seemingly without justification besides looking at the graphs and noting that they are both bell shaped. As said, not sure what is going on in the last rows of the graphs, but I would almost bet that the two functions are not the same, even in the limit, as it becomes a Poisson distribution plus whatever the last rows do.

Why do they not just prove that the two are the same? The entire article seems to be about getting numbers out of their multiway system and then concluding that, if you squint hard enough, they look somewhat like diffraction patterns.
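The "repeated left/right decisions give binomial coefficients" point is easy to check empirically. A small Galton-board simulation, assuming fair independent binary choices at each row (a sketch of the commenter's argument, not of the article's multiway system):

```python
import math
import random

def galton_counts(rows: int, balls: int, seed: int = 0) -> list[int]:
    """Drop `balls` through `rows` of fair left/right decisions; count bins.

    A ball's final bin is simply the number of 'right' choices it made,
    so the bin counts approximate a Binomial(rows, 1/2) distribution.
    """
    rng = random.Random(seed)
    bins = [0] * (rows + 1)
    for _ in range(balls):
        bins[sum(rng.randint(0, 1) for _ in range(rows))] += 1
    return bins

rows, balls = 8, 20_000
observed = [c / balls for c in galton_counts(rows, balls)]
expected = [math.comb(rows, k) / 2**rows for k in range(rows + 1)]
```

The simulated proportions track C(n, k) / 2^n closely, i.e. a bell-shaped binomial, which is exactly why "it looks bell shaped" is no evidence of a sinc-squared diffraction pattern.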

andrewgleave · 2 years ago
And by the third slit simulation they seem to have refuted their own hypothesis.
andrewgleave commented on How Gödel's proof works (2020)   quantamagazine.org/how-go... · Posted by u/ljosifov
krukah · 2 years ago
I'm always fascinated by how Gödel's incompleteness theorems, Cantor's diagonalization proof, Turing's halting problem, and Russell's paradox all seem to graze the boundaries of logic. There's something almost terrifying about how everything we know seems to "bottom out", and what we're left with is an embarrassingly small infinite set of truths to grapple with.

It really feels to me as if the distinctions between countable vs uncountable; rational vs irrational; discrete vs continuous; all represent the boundary between physics and mathematics – an idea I wish I could elaborate more precisely, but for me stands only on a shred of intuition.

I've been interested lately in Stephen Wolfram's and Scott Aaronson's writings on related ideas.

Aaronson on Gödel, Turing, and Friends: https://www.scottaaronson.com/democritus/lec3.html

Wolfram on computational irreducibility and equivalence: https://www.wolframscience.com/nks/chap-12--the-principle-of...

andrewgleave · 2 years ago
You may find Constructor Theory interesting: an attempt to express physical laws solely in terms of possible and impossible transformations.

“These include providing a theory of information underlying classical and quantum information; generalising the theory of computation to include all physical transformations; unifying formal statements of conservation laws with the stronger operational ones (such as the ruling-out of perpetual motion machines); expressing the principles of testability and of the computability of nature (currently deemed methodological and metaphysical respectively) as laws of physics; allowing exact statements of emergent laws (such as the second law of thermodynamics); and expressing certain apparently anthropocentric attributes such as knowledge in physical terms.”

https://arxiv.org/abs/1210.7439

u/andrewgleave

Karma: 259 · Cake day: November 18, 2009
About
Red Robot Studios