Readit News
samsartor commented on Pushing and Pulling: Three reactivity algorithms   jonathan-frere.com/posts/... · Posted by u/frogulis
samsartor · 4 days ago
I've been working on a reactivity system for rust over the past couple of years, which uses a lot of these ideas! It also tries to make random concurrent modification less of a pain, with transactional memory and CRDT stuff. And gives you free undo/redo.

Still kind of WIP, but it isn't secret. People are welcome to check it out at https://gitlab.com/samsartor/hornpipe
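To make the "pull" side of these reactivity algorithms concrete, here is a minimal lazy-recomputation sketch in Python. This is an illustration of the general idea only, not hornpipe's actual API (the `Signal`/`Computed` names are made up):

```python
# Minimal pull-based ("lazy") reactivity: computed values track which
# signals they read, get marked dirty on writes, and only recompute
# when someone actually asks for their value.

class Signal:
    def __init__(self, value):
        self._value = value
        self.subscribers = set()  # computed values that have read us

    def get(self, reader=None):
        if reader is not None:
            self.subscribers.add(reader)  # record the dependency
        return self._value

    def set(self, value):
        self._value = value
        for sub in self.subscribers:
            sub.invalidate()  # push only a dirty flag, not the value

class Computed:
    def __init__(self, fn):
        self.fn = fn
        self.dirty = True
        self.cached = None

    def invalidate(self):
        self.dirty = True

    def get(self):
        if self.dirty:  # recompute lazily, on demand
            self.cached = self.fn(self)
            self.dirty = False
        return self.cached

# Usage: `doubled` recomputes only when `count` has changed.
count = Signal(1)
doubled = Computed(lambda me: count.get(me) * 2)
print(doubled.get())  # 2
count.set(5)
print(doubled.get())  # 10
```

The key trade-off this shows: writes are cheap (they only flip a dirty bit), and no work happens for values nobody reads.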

samsartor commented on Building a new Flash   bill.newgrounds.com/news/... · Posted by u/TechPlasma
adrian17 · 9 days ago
AFAIK the .fla format was never fully documented or reverse engineered by anyone (FFDEC has an exporter, but not importer), so this alone would be a bold claim.
samsartor · 8 days ago
https://ruffle.rs/ is pretty solid
samsartor commented on Elsevier shuts down its finance journal citation cartel   chrisbrunet.com/p/elsevie... · Posted by u/qsi
throwpoaster · 18 days ago
> On Christmas Eve, 9 “peer-reviewed” economics papers were quietly retracted by Elsevier, the world’s largest academic publisher.

It is becoming clearer and clearer that peer review is a systematized bandwagon fallacy.

It relies on the belief that one’s peers in a competitive field, presented with new ideas and evidence, will simply accept it.

And yet, “science progresses one funeral at a time” is an old joke.

“Peer review” is an indication an idea is safe for granting agency bureaucrats to fund, not an indication of its truth, validity, or utility.

samsartor · 18 days ago
I feel like my papers are better for having gone through peer review, and I'm a better researcher for having had a few rejections. Of course the reviewers can't hover around in your lab watching everything you do. But even if reviewers can't check the validity of the evidence in your paper, they do a pretty good job ensuring that the claims you make are supported by the evidence you present. That's a valuable if imperfect guardrail! What would be the alternative?
samsartor commented on Vitamin D and Omega-3 have a larger effect on depression than antidepressants   blog.ncase.me/on-depressi... · Posted by u/mijailt
samsartor · a month ago
Several people in my family have a MTHFR gene mutation that screws stuff up, including causing problems with anxiety+depression. But a simple B12 shot every couple of weeks does wonders.
samsartor commented on Backpropagation is a leaky abstraction (2016)   karpathy.medium.com/yes-y... · Posted by u/swatson741
brcmthrowaway · 4 months ago
Do LLMs still use backprop?
samsartor · 4 months ago
Yes. Pretraining and fine-tuning use standard Adam optimizers (usually with weight-decay). Reinforcement learning has been the odd-man out historically, but these days almost all RL algorithms also use backprop and gradient descent.
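To show what "Adam with weight decay" means at the level of a single parameter, here is a sketch of one AdamW-style update. The hyperparameter names follow the usual convention (`lr`, `beta1`, `beta2`, `eps`, `weight_decay`); the numbers are illustrative, not any particular model's settings:

```python
import math

def adamw_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.01):
    # Update biased first- and second-moment estimates of the gradient.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    # Bias correction (the moments start at zero).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Decoupled weight decay, as in AdamW: the decay acts on the parameter
    # directly rather than being folded into the gradient.
    param = param - lr * (m_hat / (math.sqrt(v_hat) + eps)
                          + weight_decay * param)
    return param, m, v

# Drive the parameter of f(x) = x^2 toward its minimum at 0.
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 201):
    grad = 2 * x  # d/dx of x^2, i.e. what backprop would produce
    x, m, v = adamw_step(x, grad, m, v, t)
print(x)  # closer to 0 than the starting value of 1.0
```

In real training the `grad` here is exactly what backpropagation computes for each of the billions of parameters.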
samsartor commented on 'Attention is all you need' coauthor says he's 'sick' of transformers   venturebeat.com/ai/sakana... · Posted by u/achow
dekhn · 5 months ago
The way I look at transformers is: they have been one of the most fertile inventions in recent history. Originally released in 2017, in the subsequent 8 years they completely transformed (heh) multiple fields, and at least partially led to one Nobel prize.

realistically, I think the valuable idea is probabilistic graphical models- of which transformers is an example- combining probability with sequences, or with trees and graphs- is likely to continue to be a valuable area for research exploration for the foreseeable future.

samsartor · 5 months ago
I'm skeptical that we'll see a big breakthrough in the architecture itself. As sick as we all are of transformers, they are really good universal approximators. You can get some marginal gains, but how much more _universal_ are you realistically going to get? I could be wrong, and I'm glad there are researchers out there looking at alternatives like graphical models, but for my money we need to look further afield. Reconsider the auto-regressive task, cross entropy loss, even gradient descent optimization itself.
samsartor commented on Show HN: Every single torrent is on this website   infohash.lol/... · Posted by u/tdjsnelling
Llamamoe · 5 months ago
I wonder if there is some way to create a latent-space Library of Babel in which you only find incoherent gibberish with extremely long keys, with the shortest ones pointing specifically to the most common/likely strings of text, in manageable computational complexity.
samsartor · 5 months ago
In a library of all possible strings, this is just text compression (as the other comment observes). But in a finite library it gets even simpler, in a cool way! We can treat each text as a unique symbol and use an entropy encoding (eg Huffman) to assign a length-optimized key to each based on likelihood (eg from an LLM). Building the library is something like O(n log n), which isn't terrible. But adding new texts would change the IDs for existing texts (which is annoying). There might be a good way to reserve space for future entries probabilistically? Out of my depth at this point!
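Sketching the Huffman idea above in Python: build a code over a finite "library" where the probabilities stand in for likelihoods an LLM might assign (the texts and numbers here are made up for illustration):

```python
import heapq

def huffman_codes(probs):
    """Assign a binary key to each symbol; likelier symbols get shorter keys.
    `probs` maps symbol -> probability."""
    # Heap entries are (probability, tiebreaker, tree); ties never compare
    # the tree itself because the integer tiebreaker is unique.
    heap = [(p, i, sym) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)   # two least likely nodes...
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, counter, (left, right)))  # ...merge
        counter += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"  # single-symbol edge case
    walk(heap[0][2], "")
    return codes

codes = huffman_codes({"common text": 0.7,
                       "rarer text": 0.2,
                       "rare text": 0.1})
# The most likely text gets the shortest key.
assert len(codes["common text"]) < len(codes["rare text"])
```

The "adding new texts changes existing IDs" problem is visible here too: merging in a new symbol rebuilds the tree, so every key can shift.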
samsartor commented on WASM 3.0 Completed   webassembly.org/news/2025... · Posted by u/todsacerdoti
hinkley · 6 months ago
I was even trying to be charitable and read the feature list for elements that would thin down a third party DOM access layer, but other than the string changes I’m just not seeing it. That’s not enough forward progress.

WASM is just an extremely expensive toy for browsers until it supports DOM access.

samsartor · 6 months ago
My old team shipped a web port of our 3D modeling software back in 2017. The entire engine is the same as the desktop app, written in C++, and compiled to wasm.

Wasm is not now and will never be a magic "press here to replace JS with a new language" button. But it works really well for bringing systems software into a web environment.

samsartor commented on Starship's Tenth Flight Test   spacex.com/launches/stars... · Posted by u/d_silin
ivape · 7 months ago
But physics is physics. We’re not learning new physics are we? To reiterate, why wouldn’t these launches be perfect (seriously)?
samsartor · 7 months ago
The simulatable stuff is almost perfect. It's the stuff that can't be simulated that fails.

Take the last flight as an example. The booster experienced what was (probably) a structural failure in the propellant fuel lines. Simulating stress in the structure under static conditions is quite straightforward. Simulating the stress as the rocket ascends vertically and the tanks empty is hard, but doable.

Simulating the dynamic loading as the rocket flips? The fuel sloshes around, the sloshing fuel changes the kinematics of the rocket, and the kinematics of the rocket change how the fuel sloshes. The engines try to correct, adding a new force; the thrust from the engines increases the force on the fuel, raising the pressure at the pumps; the performance of the engines changes because of the new fuel flow, which alters the acceleration further, causing more sloshing. Gas bubbles get entrained in the fuel by all the sloshing, altering its flow and slosh behavior. Valves open and close, creating pressure waves in the fuel that travel up and down the fuel lines (the water-hammer effect alone is enough to burst the pipes if valve closing is not well-timed). And the rocket itself flexes as all this happens, stressing every exact detail of the manufacturing, which you have to go out to the factory and physically measure. No simulation software ever imagined can handle all that coupling of systems.

The usual solution is to make some conservative estimates (the center-of-mass of the fuel will move by at most some amount, bubbles will last at most some time, the engines will have so much control authority, etc). But that requires experience. And this is aerospace, so safety margins are tiny.
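One effect mentioned above, water hammer, is easy to estimate in isolation: the Joukowsky equation gives the pressure spike from an abrupt velocity change in a pipe as dP = rho * c * dv. The propellant numbers below are rough, illustrative guesses, not SpaceX data:

```python
def joukowsky_surge(rho, c, dv):
    """Pressure surge in pascals for fluid density rho (kg/m^3),
    pressure-wave speed c (m/s), and velocity change dv (m/s)."""
    return rho * c * dv

rho_lox = 1140.0  # liquid oxygen density, kg/m^3 (approximate)
c_wave = 900.0    # assumed pressure-wave speed in the line, m/s
dv = 10.0         # flow velocity killed by a fast-closing valve, m/s

surge_pa = joukowsky_surge(rho_lox, c_wave, dv)
print(f"{surge_pa / 1e5:.0f} bar")  # on the order of 100 bar
```

Even with made-up numbers, a single fast valve event produces a surge on the order of 100 bar, which is why valve timing matters. The hard part, as the comment says, is not this formula; it is all the coupled effects that change `c` and `dv` mid-flight.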

samsartor commented on Derivatives, Gradients, Jacobians and Hessians   blog.demofox.org/2025/08/... · Posted by u/ibobev
ks2048 · 7 months ago
When you look at a 2D surface, you directly observe all the values on that surface.

For a loss-function, the value at each point must be computed.

You can compute them all and "look at" the surface and just directly choose the lowest - that is called a grid search.

For high dimensions, there's just way too many "points" to compute.

samsartor · 7 months ago
And remember, optimization problems can be _incredibly_ high-dimensional. A 7B parameter LLM is a 7-billion-dimensional optimization landscape. A grid-search with a resolution of 10 (ie 10 samples for each dimension) would require evaluating the loss function 10^(7*10^9) times. That is, the number of evaluations is a number with 7B digits.
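The arithmetic above can be checked without ever materializing the number, since the digit count of resolution^dims is just dims * log10(resolution), rounded up:

```python
import math

def grid_search_evals_digits(resolution, dims):
    # Number of decimal digits in resolution**dims, computed without
    # ever building that (astronomically large) integer.
    return math.floor(dims * math.log10(resolution)) + 1

# Tractable in 3 dimensions: 10**3 = 1000 evaluations (a 4-digit number).
print(grid_search_evals_digits(10, 3))  # 4

# For 7 billion dimensions, the evaluation COUNT itself has ~7e9 digits.
print(grid_search_evals_digits(10, 7 * 10**9))
```

Writing out the number of required evaluations, one digit per byte, would itself take about 7 GB.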

u/samsartor

Karma: 420 · Cake day: July 4, 2023