Readit News
po commented on I ported JustHTML from Python to JavaScript with Codex CLI and GPT-5.2 in hours   simonwillison.net/2025/De... · Posted by u/pbowyer
simonw · 2 months ago
I just blogged about this https://simonwillison.net/2025/Dec/17/firefox-parser/

... and then when I checked the henri-sivonen tag https://simonwillison.net/tags/henri-sivonen/ I found out I'd previously written about the exact same thing 16 years earlier!

po · 2 months ago
It's very nice to have written for so long... I often think I should write more for myself than for others.
po commented on OpenAI declares 'code red' as Google catches up in AI race   theverge.com/news/836212/... · Posted by u/goplayoutside
littlestymaar · 2 months ago
I'd be a little bit more nuanced:

I think there's something off with their plans right now: it's pretty clear at this point that they can't own the technological frontier. Google is already too close, and from a purely technological PoV Google is much better positioned to have the best tech in the medium term. (There's no moat, and Google has way more data and compute available, plus tons of cash to burn without depending on external funding.)

But ChatGPT is an insane brand, and for most (free) customers I don't think model capabilities (aka “intelligence”) are that important. So if they stopped training frontier models right now and focused on driving their costs down by optimizing their inference compute budget while serving ads, they could make a lot of money from their user base.

But that would probably mean losing most of their paying customers over the long run (companies won't be buying mediocre tokens at a premium for long), and more importantly it would require abandoning the AGI bullshit narrative, which I'm not sure Altman is willing to do. (And even if he were, how to do that without collapsing from lack of liquidity due to investors feeling betrayed is an open question.)

po · 2 months ago
As long as the business model is:

- users want the best/smartest LLM

- the best performance for inference is found by spending more and more tokens (deep thinking)

- pricing is based on cost per token

Then the inference providers/hyperscalers will take all of the margin available to app makers (and then give it to Nvidia apparently). It is a bad business to be in, and not viable for OpenAI at their valuation.

po commented on PSF has withdrawn $1.5M proposal to US Government grant program   pyfound.blogspot.com/2025... · Posted by u/lumpa
po · 4 months ago
Did you read the comment you're replying to? It's talking about the DEI selection process being blind and instead focusing on outreach to get a more diverse input. You wouldn't be denied anything due to your sex under a system like that. It has nothing to do with what you're talking about.


po commented on AI helps unravel a cause of Alzheimer’s and identify a therapeutic candidate   today.ucsd.edu/story/ai-h... · Posted by u/pedalpete
dsign · 10 months ago
This piece of the puzzle, and its finding, if confirmed, is very neat. But I think we are barking up the wrong tree, because senescence is inherently chaotic. Sometimes we identify a disease with a set of common symptoms because there are many alternative causes that lead to those very symptoms. It's like "convergent symptoms", so to speak.

If I had any funding to work freely in these subjects, I would instead focus on the more fundamental questions of computationally mapping and reversing cellular senescence, starting with something tiny and trivial (but perhaps not tiny nor trivial enough) like a rotifer. My focus wouldn't be the biologists' "we want to understand this rotifer" or "we want to understand senescence", but more "can we create an exact computational framework to map senescence, a framework which can be extended and applied to other organisms?"

Sadly, funding for science is a lost cause, because even where/when it is available, it comes with all sorts of political and ideological chains.

po · 10 months ago
Have you ever lived with or helped a person with AD? It's not cellular senescence. What you're talking about is all well and good, but AD is a devastating disease with very particular symptoms. We may not know all of the causes, but reversing cellular senescence isn't going to solve this.

Researching and curing AD is not barking up the wrong tree. There is a horrible deadly monster in that tree that needs defeating. I hope people also get scientific funding for other age-related issues.

po commented on Steam Networks   worksinprogress.co/issue/... · Posted by u/herbertl
blackeyeblitzar · a year ago
> The last total system failure was in 2007, when an 82-year-old pipe at 41st and Lexington exploded, showering Midtown in debris. Heavy rainfall had cooled the pipes, producing large amounts of condensate quickly, and a clogged steam trap meant that the system was unable to expel the water. When this build-up hit a critical level, the internal pressure shot up, causing the explosion. Almost fifty people were injured, and one woman died of a heart attack while fleeing

The photo of this explosion makes it look huge. I wonder how much the pent up infrastructure work to avoid these issues would cost. Are water or gas pipes also vulnerable to this sort of thing?

po · a year ago
I happened to be walking a few blocks north and one east of this when it happened. I saw dozens of people running up the street in a panic. Office ladies running barefoot with their heels in hand. I asked what had happened and they said there was a giant explosion that could only have been terrorism.

The wikipedia article has a few more photos: https://en.wikipedia.org/wiki/2007_New_York_City_steam_explo...

The aftermath photo gives you a good sense of it: https://www.nbcnews.com/id/wbna20184563

po commented on The legacy of lies in Alzheimer's science   nytimes.com/2025/01/24/op... · Posted by u/apsec112
DrScientist · a year ago
The current system relies on the marketplace of ideas: if you publish rubbish, a competitor lab will call you out. So it's not the same as the two people in an aircraft cabin; in the research world, that plane crashing is all part of the market adjustment, weeding out bad pilots/academics.

However, it doesn't work all the time, for the same reasons that markets don't always work: the tendency for people to create cosy cartels to avoid that harsh competition.

In academia this forms around grants, either directly (are you inside the circle?) or indirectly ("the idea obviously won't work, as the 'true' cause is X").

Not sure you can fully avoid this, but I'm sure there might be ways to improve it around the edges.

po · a year ago
Yeah I don't think CRM is the correct thing in this case... I just think that there needs to be some new set of incentives put in place such that the culture reinforces the outcomes you want.
po commented on The legacy of lies in Alzheimer's science   nytimes.com/2025/01/24/op... · Posted by u/apsec112
po · a year ago
Science needs an intervention similar to what the CRM process (https://en.wikipedia.org/wiki/Crew_resource_management) did to tamp down cowboy pilots flying their planes into the sides of mountains because they wouldn't listen to their copilots who were too timid to speak up.

> ...on the evening of Dec 28, 1978, they experienced a landing gear abnormality. The captain decided to enter a holding pattern so they could troubleshoot the problem. The captain focused on the landing gear problem for an hour, ignoring repeated hints from the first officer and the flight engineer about their dwindling fuel supply, and only realized the situation when the engines began flaming out. The aircraft crash-landed in a suburb of Portland, Oregon, over six miles (10 km) short of the runway

It has been applied to other fields:

> Elements of CRM have been applied in US healthcare since the late 1990s, specifically in infection prevention. For example, the "central line bundle" of best practices recommends using a checklist when inserting a central venous catheter. The observer checking off the checklist is usually lower-ranking than the person inserting the catheter. The observer is encouraged to communicate when elements of the bundle are not executed; for example if a breach in sterility has occurred

Maybe not this system exactly, but a new way of doing science needs to be found.

Journals, scientists, funding sources, universities, and research institutions are locked in a game that encourages data hiding, publish-or-perish incentives, and non-reproducible results.

u/po

Karma: 7626 · Cake day: June 22, 2009
About
Technical co-founder, currently living in Japan, previously living in NYC.

https://mastodon.social/@poswald

https://www.makeleaps.com/

I run Hacker News reader meetups in Tokyo.

Always looking for awesome people to meet up with. Get in touch!

pauloswald@gmail.com

[ my public key: https://keybase.io/poswald; my proof: https://keybase.io/poswald/sigs/HVlTjzK0n89ogL4IrN4QAX1JSJJUZRzfp_9MrlPIJ0A ]
