Readit News
adroniser commented on Starting from scratch: Training a 30M Topological Transformer   tuned.org.uk/posts/013_th... · Posted by u/tuned
kannanvijayan · a month ago
I think this is an attempt to try to enrich the locality model in transformers.

One of the weird things you do in transformers is add a position vector, which captures the distance between the token being attended to and some other token.

This is obviously not powerful enough to express non-linear relationships - like graph relationships.

This person seems to be experimenting with pre-processing the input token set, linearly reordering it by some other heuristic that might map more closely to the actual underlying relationships between tokens.
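The additive scheme the comment describes can be sketched with the standard sinusoidal encoding from the original transformer paper (a minimal NumPy illustration, not the topological variant the post experiments with):

```python
import numpy as np

def sinusoidal_positions(seq_len: int, d_model: int) -> np.ndarray:
    """Standard sinusoidal positional encodings (Vaswani et al., 2017)."""
    pos = np.arange(seq_len)[:, None]             # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]          # (1, d_model/2)
    angles = pos / (10000 ** (2 * i / d_model))   # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                  # even dims: sine
    pe[:, 1::2] = np.cos(angles)                  # odd dims: cosine
    return pe

# Toy example: 4 tokens with random 8-dim content embeddings.
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(4, 8))

# Position information is simply *added* to the content embeddings;
# attention then sees position only through this linear offset, which
# is what makes it hard to express graph-like (non-linear) relations.
x = token_embeddings + sinusoidal_positions(4, 8)
```

A reordering heuristic of the kind described would permute the rows of `token_embeddings` before this addition, so that positional distance better tracks the true relationship between tokens.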

adroniser · a month ago
Adding the position vector is basic, sure, but it's naive to think the model doesn't develop its own positional system, bootstrapped on top of the barebones one.


adroniser commented on AI model trapped in a Raspberry Pi   blog.adafruit.com/2025/09... · Posted by u/harel
ineedasername · 5 months ago
Is this your sense of what is happening, or is this what model introspection tools have shown by observing areas of activity in the same place as when stories are explicitly requested?
adroniser · 5 months ago
fMRIs are correlational nonsense (see Brainwashed, for example), and so are any "model introspection" tools.
adroniser commented on Language models pack billions of concepts into 12k dimensions   nickyoder.com/johnson-lin... · Posted by u/lawrenceyan
sdenton4 · 5 months ago
IME: most of the reviewers in the big ML conferences are second-year phd students sent into the breach against the overwhelming tide of 10k submissions... Their review comments are often somewhere between useless and actively promoting scientific dishonesty.

Sometimes we get good reviewers, who ask questions and make comments which improve the quality of a paper, but I don't really expect it in the conference track. It's much more common to get good reviewers in smaller journals, in domains where the reviewers are experts and care about the subject matter. OTOH, the turnaround for publication in these journals can take a long time.

Meanwhile, some of the best and most important observations in machine learning never went through the conference circuit, simply because the scientific paper often isn't the best venue for broad observation... The OG paper on linear probes comes to mind. https://arxiv.org/pdf/1610.01644

adroniser · 5 months ago
Of the papers submitted to a conference, it might be that reviewers don't offer suggestions that would significantly improve the quality of the work. Indeed the quality of reviews has gone down significantly in recent years. But if Anthropic were going to submit this work to peer review, they would be forced to tighten it up significantly.

The linear probe paper is still written in a format where it could reasonably be submitted, and indeed it was submitted to an ICLR workshop.

adroniser commented on Language models pack billions of concepts into 12k dimensions   nickyoder.com/johnson-lin... · Posted by u/lawrenceyan
PeterStuer · 5 months ago
"peer review would encourage less hand wavy language and more precise claims"

In theory, yes. Let's not pretend actual peer review would do this.

adroniser · 5 months ago
So you think that this blog post would make it into any of the mainstream conferences? I doubt it.
adroniser commented on Language models pack billions of concepts into 12k dimensions   nickyoder.com/johnson-lin... · Posted by u/lawrenceyan
yorwba · 5 months ago
That's not how pre-publication peer review works. There, the problem is that many papers aren't worth reading, but to determine whether a paper is worth reading, someone has to read it and find out. So the work of reading papers of unknown quality is farmed out over a large number of people each reading a small number of randomly-assigned papers.

If somebody's paper does not get assigned as mandatory reading for random reviewers, but people read it anyway and cite it in their own work, they're doing a form of post-publication peer review. What additional information do you think pre-publication peer review would give you?

adroniser · 5 months ago
Peer review would encourage less hand-wavy language and more precise claims. Reviewers would penalize the authors for bringing up bizarre analogies to physics concepts for seemingly no reason. They would criticize the fact that the authors spend the whole post talking about features without a concrete definition of a feature.

The sloppiness of the circuits thread blog posts has been very damaging to the health of the field, in my opinion. People first learn about mech interp from these blog posts, and then they adopt a similarly sloppy style in discussion.

Frankly, the whole field currently is just a big circle jerk, and it's hard not to think these blog posts are responsible for that.

I mean, do you actually think this kind of slop would be publishable at NeurIPS if they submitted the blog post as is?


adroniser commented on Canaries in the Coal Mine? Recent Employment Effects of AI [pdf]   digitaleconomy.stanford.e... · Posted by u/p1esk
bgwalter · 6 months ago
I would say it's time to fight back. Politicians need to be educated that "AI" is plagiarism and is used as an excuse to scale back the workforce. It also ruins the thriving intellectual landscape.

Since you are a professor, they might listen to you.

adroniser · 6 months ago
Didn't hackers use to be for piracy?
adroniser commented on Canaries in the Coal Mine? Recent Employment Effects of AI [pdf]   digitaleconomy.stanford.e... · Posted by u/p1esk
majormajor · 6 months ago
LLMs are very useful tools for software development, but focusing on employment doesn't really dig into whether it will automate or augment labor (to use their words). Behaviors are changing not just because of outcomes but because of hype and expectations and b2b sales. You'd expect the initial corporate behaviors to look much the same whether or not LLMs turn into fully fire-and-forget employee-replacement tools.

Some nits I'd pick along those lines:

>For instance, according to the most recent AI Index Report, AI systems could solve just 4.4% of coding problems on SWE-Bench, a widely used benchmark for software engineering, in 2023, but performance increased to 71.7% in 2024 (Maslej et al., 2025).

Something like this should come with the context that SWE-Bench didn't exist before November 2023.

Pre-2023 systems were flying blind with regard to what they were going to be tested with. Post-2023 systems have been created in a world where this test exists. Hard to generalize from before/after performance.

> The patterns we observe in the data appear most acutely starting in late 2022, around the time of rapid proliferation of generative AI tools.

This is quite early for "replacement" of software development jobs as by their own prior statement/citation the tools even a year later, when SWE-Bench was introduced, were only hitting that 4.4% task success rate.

Its timing lines up more neatly with the post-COVID-bubble tech industry slowdown, or with the start of hype about AI productivity versus actual replaced-employee productivity.

adroniser · 6 months ago
This suggests people should pre-register benchmarks, because currently it feels like there is little incentive to publish benchmarks that models saturate.
adroniser commented on Learning to Reason with LLMs   openai.com/index/learning... · Posted by u/fofoz
jsheard · a year ago
See also: them still sitting on Sora seven months after announcing it. They've never given any indication whatsoever of how much compute it uses, so it may be impossible to release in its current state without charging an exorbitant amount of money per generation. We do know from people who have used it that it takes between 10 and 20 minutes to render a shot, but how much hardware is being tied up during that time is a mystery.
adroniser · a year ago
But there are lots of models available now that render much faster and are better quality than Sora.

u/adroniser
Karma: 76 · Cake day: March 21, 2023