Readit News
gnat commented on Addicted to Claude Code – Help · Posted by u/aziz_sunderji
lkm0 · 7 days ago
I'm seeing the limits when Claude makes statements that are extremely wrong but incredibly hard to spot unless you're in the field. It recently told me that "some people say" Rydberg atoms and neutral atoms are different enough to be in different quantum computing categories (they're the same). The stakes keep getting lower, because I know I can't trust it for anything but fun side projects. For serious research it's still me and reading papers.
gnat · 7 days ago
I'm not trying to convert you, just want to share process tips that I see working for me and others. We're using agents, not a chat, because they can do complex work in pursuit of a goal.

1. Make artifacts. If you're doing research into a tech or a hypothesis, fire off subagents to explore different parts of the problem space, each reporting back into a doc. Then another agent synthesizes the docs into a conclusion/report. (A sketch of this whole loop follows the list.)

2. Require citations. "Use these trusted sources. Cite trusted sources for each claim. Cite with enough context that it's clear your citation supports the claim, and refuse to cite if the citation doesn't support the claim."

3. Review. This lets you then fire off a subagent to review the synthesis. It can have its own prompt: look for confirming and disconfirming evidence, don't trust uncited claims. If you find it making conflation mistakes, figure out at what stage and why, and adjust your process to get in front of them.

4. Manage your context. An LLM has a fixed context size ("chat length"), and facts and instructions at the front of it tend to be hewed to more faithfully than things at the end. Subagents are a way of managing that context to get more from a single run. Artifacts like notebooks or records of subagent output move content outside the context, so you can pick up in a new session ("chat") and continue the work.
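To make the shape of this concrete, here's a minimal sketch of the artifact / synthesis / review loop in plain Python. It's illustrative only: `run_agent` is a stand-in for whatever model call or agent CLI you actually use, and the prompts, file names, and `TRUSTED_SOURCES` list are my assumptions, not a real API.

```python
from pathlib import Path

TRUSTED_SOURCES = ["arxiv.org", "nature.com"]  # illustrative placeholder list

CITE_RULE = (
    "Use only these trusted sources: " + ", ".join(TRUSTED_SOURCES) + ". "
    "Cite a trusted source for each claim, with enough context that it's clear "
    "the citation supports the claim; refuse to cite if it doesn't."
)

def run_agent(prompt: str) -> str:
    """Stand-in for your actual model call (API client, CLI, etc.)."""
    return f"[agent output for: {prompt[:60]}...]"  # placeholder

def research(question: str, subtopics: list[str]) -> str:
    notes_dir = Path("notes")
    notes_dir.mkdir(exist_ok=True)

    # 1. Artifacts: each subagent explores one slice and reports into a doc.
    for i, topic in enumerate(subtopics):
        note = run_agent(f"Research this aspect of '{question}': {topic}. {CITE_RULE}")
        (notes_dir / f"note_{i}.md").write_text(note)

    # A synthesis agent reads the docs, not the other agents' contexts.
    notes = "\n\n".join(p.read_text() for p in sorted(notes_dir.glob("note_*.md")))
    draft = run_agent(
        f"Synthesize these notes into a report on '{question}'. Keep citations.\n\n{notes}"
    )

    # 3. Review: a fresh agent with its own prompt and no shared context.
    review = run_agent(
        "Review this report. Look for confirming and disconfirming evidence; "
        "do not trust uncited claims.\n\n" + draft
    )

    # 4. Context management: everything lives on disk, so a new session can resume.
    Path("report.md").write_text(draft + "\n\n## Review\n\n" + review)
    return review
```

The point of the artifacts is that each `run_agent` call gets a fresh context; the docs on disk, not the chat history, carry state between stages, which is what tip 4 is about.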

It's less fun than just having a chat with ChatGPT. I find that I get much better quality results using these techniques. Hope this helps! If you're not interested in doing this (too much like work, and you already have something that works), it's no skin off my nose. All the best!

gnat commented on ai;dr 0xsid.com/blog/aidr... · Posted by u/ssiddharth
ssiddharth · a month ago
My biggest sorrow right now is that my beloved em dash is a major signal for AI-generated content. I've been using it for decades, but these days I almost always pause for a second.
gnat · a month ago
We're in the brief window of time when AI's writing style reads as weird. It's an artifact of the production process, like JPG blur, MP3 distortion, or Autotune's rigidity. And it didn't take long for those things to become normalized, in fact to become artifacts that people proudly adopted and embraced. DJs release tracks built from MP3 samples instead of WAVs. Autotune is famously a "sound" that was once something to be subtly added and never confessed to, but which genres and artists now lean into rather than away from.

Long story short: I think emoji in headings and lists, em dashes, and the vile TED Talk paragraph structure of "long sentence with lots of words asking a question or introducing a possibility. followed by. short sentences. rebutting. or affirming." are here to stay. My money is on it all getting normalized and embraced as "well, of course that's how you best communicate, because I see it everywhere."

gnat commented on I know we're in an AI bubble because nobody wants me petewarden.com/2025/11/29... · Posted by u/iparaskev
boutell · 3 months ago
He lost me a bit at the end talking about running chatbots on CPUs. I know it's possible, but it's inherently parallel computing, isn't it? Would that ever really make sense? I expected to hear something more like low-end consumer GPUs.

Recent-generation LLMs do seem to have some significant efficiency gains. And routers to decide if you really need all of their power on a given question. And Google is building its custom TPUs. So I'm not sure I buy the idea that everyone ignores efficiency.

gnat · 3 months ago
(Hi, Tom!) Reread the article and look for "CPU". The whole article is about doing deep learning on CPUs, not GPUs. Moonshine, the open-source project and startup he talks about, shows speech recognition and real-time translation on the device rather than on a server. My understanding is that doing The Math in parallel is itself a performance hack, but Doing Less Math is also a performance hack.
gnat commented on Agent design is still hard lucumr.pocoo.org/2025/11/... · Posted by u/the_mitsuhiko
wild_egg · 4 months ago
Claude is amazing at brownfield if you take the time to experiment with your approach.

Codex is stronger out of the box, but properly customized Claude can't be matched at the moment.

gnat · 4 months ago
What have you done to make Claude stronger on brownfield work? This is very interesting to me.
gnat commented on GPT-5.1: A smarter, more conversational ChatGPT openai.com/index/gpt-5-1/... · Posted by u/tedsanders
pants2 · 4 months ago
FWIW I didn't like the Robot / Efficient mode because it would give very short answers without much explanation or background. "Nerdy" seems to be the best, except with GPT-5 Instant it's extremely cringey, like "I'm putting my nerd hat on - since you're a software engineer I'll make sure to give you the geeky details about making rice."

"Low" thinking is typically the sweet spot for me: way smarter than Instant with barely a delay.

gnat · 4 months ago
I hate its acknowledgement of its personality prompt. Try having a series of back-and-forths: each response is like "got it, keeping it short and professional. Yes, there are only seven deadly sins." You get more prompt performance than answer.
gnat commented on A Global Web of Chinese Propaganda Leads to a U.S. Tech Mogul nytimes.com/2023/08/05/wo... · Posted by u/thisislife2
gnat · 4 months ago
TL;DR: The ThoughtWorks founder is spending his millions portraying Chinese government policies, including Xinjiang/Uighurs, in a positive light. His spending is heavily laundered, but he's now based in China and working in the same offices as a propaganda company.
gnat commented on Melvyn Bragg steps down from presenting In Our Time bbc.co.uk/mediacentre/202... · Posted by u/aways
benrutter · 6 months ago
I looked, and there are more than 1,000 episodes of IOT available on the BBC; they're all (at least every one I've heard) brilliant.

I'm curious if anyone here has any particular favourites?

I remember really enjoying the Plankton episode because it took me down the classic IOT route from "That doesn't sound interesting, but I'll give it a listen" to looking up everything on the reading list.

https://www.bbc.co.uk/programmes/m001r1t5

gnat · 6 months ago
Calendar was brilliant. I think it was the first time I fully appreciated the misery of the human mind in the face of various orbital periods that aren't simple integer ratios of one another. https://www.bbc.co.uk/programmes/p00548m9

Great Fire of London too. Pepys burying his cheese! https://www.bbc.co.uk/programmes/b00ft63q

Politeness. Social barriers were coming down and you were interacting with people of different rank; how do you not get into a swordfight? Also, the letter from the wife complaining about her husband! https://www.bbc.co.uk/programmes/p004y29m

I think they did all the big interesting things in history and then struggled with a lot of minor events that were hard to find interesting angles on.

gnat commented on Testosterone Didn't Rapidly Decline twitter.com/cremieuxrecue... · Posted by u/MrBuddyCasino
neaden · 7 months ago
Here is someone actually writing about it, whom the Twitter thread cites, if you want the details. https://eryney.substack.com/p/maybe-its-just-your-testostero...
gnat · 7 months ago
Thank you! Worth reading, if only for the phrase “global taint ruler”.

u/gnat

Karma: 1186 · Cake day: March 4, 2007
About
Nat Torkington. Coauthor of Perl Cookbook, author of Four Short Links at oreilly.com, and host of Kiwi Foo Camp. Kiwi, banjo player, Dad.