ntonozzi commented on Design Patterns for Securing LLM Agents Against Prompt Injections   simonwillison.net/2025/Ju... · Posted by u/simonw
ntonozzi · 3 months ago
This approach is so limiting that it seems better to change the constraints. For example, in the case of a software agent you could run everything in a container, only allow calls you trust not to exfiltrate private data, and make the end result a PR you can review.
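A minimal sketch of the allow-list part of that idea (all names here are hypothetical, not from any real agent framework): every tool call goes through a gate that only executes pre-registered, trusted operations, so an injected prompt can't reach an exfiltration channel.

```python
# Hypothetical sketch: gate agent tool calls through an explicit allow-list,
# so only pre-trusted operations can run inside the sandbox.

ALLOWED_TOOLS = {
    "read_file": lambda path: open(path).read(),
    "run_tests": lambda: "tests passed",  # stand-in for a real test runner
}

def call_tool(name, *args):
    """Execute a tool only if it is on the allow-list; refuse anything else."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not trusted")
    return ALLOWED_TOOLS[name](*args)
```

Anything not on the list — say, an HTTP POST to an attacker's server — fails closed rather than open.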
ntonozzi commented on Reinforcement Pre-Training   arxiv.org/abs/2506.08007... · Posted by u/frozenseven
ntonozzi · 3 months ago
Is there any work on using some kind of soft tokens for reasoning? It seems so inefficient to encode so much information down into a single token for the next pass of the model, when you could output a large vector for each forward pass, gain a drastically larger working memory/scratchpad, and give the model much higher bandwidth to pass information forward to the next token call. If a single token has 17 bits of information, a vector of 1024 floats could have 32,768 bits of information.
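The arithmetic behind those two numbers, assuming a ~131k-token (2^17) vocabulary and 32-bit floats — both assumptions on my part, chosen to make the quoted figures line up:

```python
import math

# One sampled token from a ~2^17-entry vocabulary carries at most log2(V) bits.
vocab_size = 131_072
bits_per_token = math.log2(vocab_size)          # 17.0 bits

# A raw hidden vector passed forward instead of a token:
vector_dim = 1024
bits_per_float = 32                             # fp32
bits_per_vector = vector_dim * bits_per_float   # 32,768 bits
```

The fp32 count is an upper bound, of course — the usable information in a hidden state is far below its raw bit width — but the bandwidth gap is the point.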
ntonozzi · 3 months ago
I just found a recent paper about this: https://arxiv.org/abs/2505.15778. It's really thoughtful and well written. They mix the different token outputs together.
ntonozzi commented on Human coders are still better than LLMs   antirez.com/news/153... · Posted by u/longwave
ntonozzi · 3 months ago
If you care that much about having correct data you could just take a SHA-256 of the whole thing, or an HMAC. It would probably be really fast. If you don't care as much, you can use a murmur hash of the serialized data. You don't really need to verify data-structure properties if you know the serialized data is correct.
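Both options are one-liners with the Python standard library (the payload and key below are placeholders): SHA-256 for plain integrity, HMAC when you also need to know the digest wasn't forged.

```python
import hashlib
import hmac

data = b"serialized payload"   # stand-in for the real serialized data

# Integrity only: a plain SHA-256 digest of the serialized bytes.
digest = hashlib.sha256(data).hexdigest()

# Integrity + authenticity: an HMAC keyed with a shared secret.
key = b"shared secret key"     # placeholder key
tag = hmac.new(key, data, hashlib.sha256).hexdigest()

# Verification recomputes the tag and compares in constant time.
ok = hmac.compare_digest(tag, hmac.new(key, data, hashlib.sha256).hexdigest())
```

`hmac.compare_digest` avoids timing side channels that a plain `==` comparison would leak.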
ntonozzi commented on A Brief History of Cursor's Tab-Completion   coplay.dev/blog/a-brief-h... · Posted by u/josvdwest
ntonozzi · 4 months ago
I've been so impressed with Cursor's tab completion. I feel like I just update my TypeScript types, point my cursor at the right spot in the file, and it reads my mind and writes the correct code. It's so good that I feel like I lose time if I write instructions to an AI in the chat window instead of just letting Cursor guess what I want. None of the other models feel anything like this.
ntonozzi commented on Zed: High-performance AI Code Editor   zed.dev/blog/fastest-ai-c... · Posted by u/vquemener
freehorse · 4 months ago
Interesting. I actually like the editable format of the chat interface because it allows fixing small stuff on the fly (instead of having to talk about it with the model) and de-cluttering the chat after a few back-and-forths make it a mess (instead of having to start anew), which keeps the context window smaller and less confusing to the model, especially for local ones. The editable form also makes more sense to me; it feels more natural and simpler to interact with an LLM assistant that way.
ntonozzi · 4 months ago
The old panel still exists, they call it a text thread.
ntonozzi commented on Internet usage pattern during power outage in Spain and Portugal   blog.akamai-mpulse.com/bl... · Posted by u/ghoshbinayak
bryanlarsen · 4 months ago
Solar noon in Madrid is at 2:13PM due to the absurd time zone that Spain's in. Having lunch at 1PM is early, not late.
ntonozzi · 4 months ago
Well this explains a lot of things that I always attributed to Spanish culture.
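bryanlarsen's 2:13 PM figure checks out with a back-of-the-envelope calculation (ignoring the equation of time, which shifts true solar noon by a few minutes either way): mean solar noon falls at 12:00 UTC minus longitude/15 hours, plus the local UTC offset.

```python
# Back-of-the-envelope mean solar noon for Madrid (ignores the equation of time).
longitude_deg = -3.70   # Madrid, degrees east (west is negative)
utc_offset_h = 2        # CEST, Spain's summer offset

solar_noon_utc_h = 12 - longitude_deg / 15        # sun crosses the local meridian
solar_noon_local_h = solar_noon_utc_h + utc_offset_h

hours = int(solar_noon_local_h)
minutes = round((solar_noon_local_h - hours) * 60)
# about 14:15 local — within a couple of minutes of the 2:13 PM quoted above
```

The gap exists because Spain uses Central European Time even though geographically it sits on the Greenwich meridian.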
ntonozzi commented on Everything we announced at our first LlamaCon   ai.meta.com/blog/llamacon... · Posted by u/meetpateltech
wewewedxfgdf · 4 months ago
Can someone explain to me please why Meta doesn't create subject-specific versions of their LLMs, such as one that knows only about computer programming, computers, hardware, and software.

I would have imagined such a thing would be smaller and thus run on smaller configurations.

But since I am only a layman maybe someone can tell me why this isn't the case?

ntonozzi · 4 months ago
One of the weirdest and most interesting properties of LLMs is that they grow more capable the more languages and disciplines they are trained on. It turns out that training LLMs on code, not just prose, boosted their intelligence and reasoning capabilities by huge amounts.
ntonozzi commented on Heart disease deaths worldwide linked to chemical widely used in plastics   medicalxpress.com/news/20... · Posted by u/amichail
horsawlarway · 4 months ago
There's been some interesting pushes to use glass linings here.

E.g., Chico in the baby-bottle space (glass-lined plastic bottles) and Purist in the adult-bottle space (glass-lined stainless steel).

You can also get plenty of unlined aluminum/stainless cups and bottles (Amazon is full of them).

No idea how that idea is going to play out long term.

ntonozzi · 4 months ago
Purist has some cool plastic bottles with a glass liner: https://www.specialized.com/us/en/purist-watergate-water-bot...

They are lightweight and flexible and supposedly have minimal plastic contact with water.
