Readit News
ttul commented on Analysis finds anytime electricity from solar available as battery costs plummet   pv-magazine-usa.com/2025/... · Posted by u/Matrixik
aswegs8 · 3 days ago
Am I dumb or does that sentence "Analysis finds anytime electricity from solar available as battery costs plummet" make no sense grammatically?
ttul · 3 days ago
If they were going for maximum confusion, why not write, “Solar battery costs plummet analysis findings back anytime electricity availability”?

Subject (((((Solar battery) costs) plummet) analysis) findings)

Verb [back]

Object (anytime (electricity availability))

Garden path sentence structure trap creation relies on initial word parse error encouragement. Brain pattern recognition system default subject-verb-object order preference exploitation causes early stop interpretation failure.

Solar battery costs plummet phrase acting as complex noun modifier group creates false sentence finish illusion. Real subject findings arrival delay forces mental backtrack restart necessity.

Noun adjunct modifier stack length excess impacts processing speed negatively. Back word function switch from direction noun to support verb finalizes reader confusion state.

We write to be understood. Short sentences and simple words make the truth easy to see.

ttul commented on The highest quality codebase   gricha.dev/blog/the-highe... · Posted by u/Gricha
ttul · 5 days ago
Have you tried writing into the AGENTS.md something like, "Always be on the lookout for dead code, copy-pasta, and other opportunities to optimize and trim the codebase in a sensible way."

In my experience, adding this kind of instruction to the context window causes SOTA coding models to actually undertake that kind of optimization while development carries on. You can also periodically chuck your entire codebase into Gemini-3 (with its massive context window) and ask it to write a refactoring plan; then, pass that refactoring plan back into your day-to-day coding environment such as Cursor or Codex and get it to take a few turns working away at the plan.

As with human coders, if you let them run wild "improving" things without specifically instructing them to also pay attention to bloat, bloat is precisely what you will get.

ttul commented on Show HN: Local Privacy Firewall-blocks PII and secrets before ChatGPT sees them   github.com/privacyshield-... · Posted by u/arnabkarsarkar
ttul · 5 days ago
This should be a native feature of the chat apps from all major LLM providers. There’s no reason why PII can’t be masked before it reaches the API endpoint and then restored when the LLM responds. “Mary Smith” becomes “Samantha Robertson” on the way out, and back to “Mary Smith” in responses from the LLM. A small local model (such as the BERT model in this project) detects the PII.

Something like this would greatly increase end user confidence. PII in the input could be highlighted so the user knows what is being hidden from the LLM.
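The round trip described above can be sketched in a few lines. This is a hypothetical illustration, not code from the linked project: a simple regex stands in for the local BERT-style detector, and the pseudonym pool and helper names are invented.

```python
import re

# Hypothetical sketch of reversible PII masking. A regex stands in
# for the local BERT-style PII detector; PSEUDONYMS, mask(), and
# unmask() are invented names for illustration.

PSEUDONYMS = ["Samantha Robertson", "Alex Chen", "Priya Patel"]
NAME_PATTERN = re.compile(r"\b([A-Z][a-z]+ [A-Z][a-z]+)\b")

def mask(text):
    """Replace detected names with consistent pseudonyms."""
    mapping = {}
    def substitute(match):
        name = match.group(1)
        if name not in mapping:
            mapping[name] = PSEUDONYMS[len(mapping) % len(PSEUDONYMS)]
        return mapping[name]
    return NAME_PATTERN.sub(substitute, text), mapping

def unmask(text, mapping):
    """Restore original names in the LLM's response."""
    for original, fake in mapping.items():
        text = text.replace(fake, original)
    return text

masked, mapping = mask("Mary Smith asked about her invoice.")
# masked == "Samantha Robertson asked about her invoice."
reply = f"Tell {mapping['Mary Smith']} the invoice is paid."
print(unmask(reply, mapping))  # Tell Mary Smith the invoice is paid.
```

Keeping the mapping per-conversation means the same person gets the same pseudonym across turns, which preserves coreference for the LLM while hiding the real name.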

ttul commented on Mistral releases Devstral2 and Mistral Vibe CLI   mistral.ai/news/devstral-... · Posted by u/pember
kevin061 · 7 days ago
Lol, someone vibecoded an entire website for OpenAI's model, that's some dedication.
ttul · 6 days ago
"GPT, please make me a website about OpenAI's 'Garlic' model."
ttul commented on Flow: Actor-based language for C++, used by FoundationDB   github.com/apple/foundati... · Posted by u/SchwKatze
ttul · 8 days ago
Type-safe message-passing is such a wonderful programming paradigm - and not just for distributed applications. I remember using QNX back in the 1990s. One of its fabulous features was a C message-passing library that let you send arbitrary binary structs from one process to another. In realtime software development, you often have one process that watches for events from a certain device, modifies the information somehow, and then passes it on to another process that does something else with it. The message-passing idiom was far superior to what was available in Linux at the time (pipes and whatnot) because you could work with C structs directly. It was not strictly type safe (unlike FoundationDB’s library), but for the 1990s it was pretty great.
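The idiom being described - a fixed-layout binary message that both sides interpret as the same struct - can be shown in miniature with Python's `struct` module over an ordinary pipe. This is not the QNX API (QNX used Send/Receive/Reply kernel primitives); the `DeviceEvent` layout and helper names here are invented for the sketch.

```python
import os
import struct

# Illustrative only: a fixed-layout binary message, equivalent to
#   struct DeviceEvent { uint32_t device_id; uint32_t event_type; double value; };
# sent through a pipe. Field names and layout are invented.
DEVICE_EVENT = struct.Struct("<IId")  # little-endian, 16 bytes, no padding

def send_event(fd, device_id, event_type, value):
    os.write(fd, DEVICE_EVENT.pack(device_id, event_type, value))

def receive_event(fd):
    return DEVICE_EVENT.unpack(os.read(fd, DEVICE_EVENT.size))

read_fd, write_fd = os.pipe()
send_event(write_fd, 7, 2, 98.6)                       # device-watcher side
device_id, event_type, value = receive_event(read_fd)  # handler side
print(device_id, event_type, value)  # 7 2 98.6
```

The appeal is that the wire format is just the struct's memory layout: no serialization layer, at the cost of both sides having to agree on that layout (which is exactly the gap a type-safe library closes).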
ttul commented on UniFi 5G   blog.ui.com/article/intro... · Posted by u/janandonly
matthewfcarlson · 11 days ago
I keep an old Starlink in a closet for this exact reason
ttul · 11 days ago
So I'm not the only guy doing this...
ttul commented on Image Diffusion Models Exhibit Emergent Temporal Propagation in Videos   arxiv.org/abs/2511.19936... · Posted by u/50kIters
cheald · 19 days ago
I do (same username), but I haven't published any of this (and in fact my Github has sadly languished lately); I keep working on it with the intent to publish eventually. The big problem with models like this is that the training dynamics have so many degrees of freedom that every time I get close to something I want to publish I end up chasing down another set of rabbit holes.

https://gist.github.com/cheald/7d9a436b3f23f27b8d543d805b77f... - here's a quick dump of my SVDLora module though. I wrote it for use in OneTrainer though it should be adaptable to other frameworks easily enough. If you want to try it out, I'd love to hear what you find.

ttul · 19 days ago
This is super cool work. I’ve built some new sampling techniques for flow matching models that encourage the model to take a “second look” by rewinding sampling to a midpoint and then running the clock forward again. This worked really well with diffusion models (pre-DiT models like SDXL) and I was curious whether it would work with flow matching models like Qwen Image. Yes, it does, but the design is different because flow matching models aren’t de-noising pixels so much as they are simply following a vector field at each step like a ship being pushed by the wind.
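A toy version of the rewind idea, under stated assumptions: a closed-form 1-D vector field stands in for a trained flow-matching model, and the midpoint re-noising rule is one plausible reading of "rewinding sampling to a midpoint," not the commenter's actual technique.

```python
import numpy as np

# Toy "rewind" sketch for a flow-matching sampler. The exact
# conditional velocity field of a 1-D noise-to-point flow stands in
# for a trained model; step counts and the midpoint are illustrative.

rng = np.random.default_rng(0)
TARGET = 2.0  # the "data" the flow transports noise toward

def velocity(x, t):
    # For the interpolation x_t = (1 - t) * x0 + t * x1, the
    # conditional velocity is v = (x1 - x_t) / (1 - t).
    return (TARGET - x) / max(1.0 - t, 1e-3)

def euler_sample(x, t_start, t_end, steps):
    ts = np.linspace(t_start, t_end, steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        x = x + (t1 - t0) * velocity(x, t0)
    return x

x = rng.standard_normal()          # start from noise at t = 0
x = euler_sample(x, 0.0, 1.0, 20)  # first full pass

# Rewind: blend fresh noise back in at the midpoint, then run the
# clock forward again for a "second look".
t_mid = 0.5
x = (1.0 - t_mid) * rng.standard_normal() + t_mid * x
x = euler_sample(x, t_mid, 1.0, 10)
print(round(x, 3))  # 2.0
```

The "ship pushed by the wind" framing shows up in `euler_sample`: each step just follows the local vector field, so a rewind is simply re-entering the field at an earlier time with a partially re-noised state.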
ttul commented on Image Diffusion Models Exhibit Emergent Temporal Propagation in Videos   arxiv.org/abs/2511.19936... · Posted by u/50kIters
ethmarks · 19 days ago
Gemini 3 is a 10 trillion parameter model?
ttul · 19 days ago
I read that the pre-training model behind Gemini 3 has 10T parameters. That does not mean that the model they’re serving each day has 10T parameters. The online model is likely distilled from 10T down to something smaller, but I have not had either fact confirmed by Google. These are anecdotes.
ttul commented on Image Diffusion Models Exhibit Emergent Temporal Propagation in Videos   arxiv.org/abs/2511.19936... · Posted by u/50kIters
smerrill25 · 20 days ago
Hey, do you know how you figured out about this information? I would be super curious to keep track of current ad-hoc ways of pushing older models to do cooler things. LMK
ttul · 20 days ago
1) Reading papers. 2) Reading "Deep Learning: Foundations and Concepts". 3) Taking Jeremy Howard's Fast.ai course
ttul commented on I don't care how well your "AI" works   fokus.cool/2025/11/25/i-d... · Posted by u/todsacerdoti
miningape · 20 days ago
Claude, summarise this for me
ttul · 20 days ago
I wrote this, and honestly when I read it, I also want to reach for the LLM.

u/ttul

Karma: 9509 · Cake day: March 19, 2009
About
Founder of MailChannels. Defender of open communications on the internet.