Readit News
lkm0 commented on Why I may ‘hire’ AI instead of a graduate student   science.org/content/artic... · Posted by u/doener
lkm0 · 9 hours ago
This sounds misguided. In my limited experience, models get basic knowledge so absolutely wrong that giving them any sort of independence will not result in publications that positively impact a professor's reputation or contribute to science. At least the reviews and papers I've read that had AI content did not give me the impression we should have more of this. They also require much more supervision, with the added issue that they cannot learn long term from your interactions, and without the enjoyment of teaching something to someone. They're really good at finding papers, though, perhaps because navigating search engines has become a pain. Maybe this will change in the future, but saying you're tempted right now is like saying you're tempted to replace your HPC cluster with a quantum computer. It's a bit early.
lkm0 commented on Library of Short Stories   libraryofshortstories.com... · Posted by u/debo_
lkm0 · a day ago
I was surprised not to see Forster's stories, as they are marvelous and public domain. People often cite "The Machine Stops", but I recommend reading them all. "The Point of It" and "Co-ordination" are two favorites.
lkm0 commented on Willingness to look stupid   sharif.io/looking-stupid... · Posted by u/Samin100
lkm0 · 3 days ago
It is sort of funny to think of Nobel-prize-level work in relation to blogposting. A couple of examples that don't conform: Marie Curie won a second prize. Josephson in general (check him out). Feynman contributed greatly to other fields after his prize. You can find as many counterexamples as examples if you dig a bit. I've witnessed a few times that looking like an idiot is the least of their concerns.
lkm0 commented on Billion-Parameter Theories   worldgov.org/complexity.h... · Posted by u/seanlinehan
lkm0 · 6 days ago
It's an optimistic point of view. Still, when people use large neural nets to model physics, they also have a lot of parameters, yet they replicate very simple laws. So there's something deeper about this. Something like a simulation of a theory.
lkm0 commented on Addicted to Claude Code–Help    · Posted by u/aziz_sunderji
gnat · 9 days ago
I'm not trying to convert you, just want to share process tips that I see working for me and others. We're using agents, not a chat, because they can do complex work in pursuit of a goal.

1. Make artifacts. If you're doing research into a tech, or a hypothesis, then fire off subagents to explore different parts of the problem space, each reporting back into a doc. Then another agent synthesizes the docs into a conclusion/report.

2. Require citations. "Use these trusted sources. Cite trusted sources for each claim. Cite with enough context that it's clear your citation supports the claim, and refuse to cite if the citation doesn't support the claim."

3. Review. This lets you then fire off a subagent to review the synthesis. It can have its own prompt: look for confirming and disconfirming evidence, don't trust uncited claims. If you find it making conflation mistakes, figure out at what stage and why, and adjust your process to get in front of them.

4. Manage your context. An LLM only has a fixed context size ("chat length"), and facts & instructions at the front of it tend to be adhered to better than things at the end. Subagents are a way of managing that context to get more from a single run. Artifacts like notebooks or records of subagent output move content outside the context so you can pick up in a new session ("chat") and continue the work.

It's less fun than just having a chat with ChatGPT, but I find that I get much better quality results using these techniques. Hope this helps! If you're not interested in doing this (too much like work, and you already have something that works), it's no skin off my nose. All the best!
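For the curious, the fan-out/synthesize pattern in steps 1 and 4 can be sketched in a few lines. This is a hypothetical illustration, not any particular agent API: `run_subagent` and `synthesize` are stubs standing in for real agent calls (Claude Code tasks, an SDK, whatever you use), so only the artifact flow is real here.

```python
# Sketch of the artifact + synthesis pattern: fan out subagents, each
# writing findings to a doc, then have one agent merge the docs.
# run_subagent / synthesize are stubs; swap in real agent calls.

from pathlib import Path

def run_subagent(topic: str, outdir: Path) -> Path:
    """Explore one part of the problem space; write findings to an artifact."""
    doc = outdir / f"{topic}.md"
    # A real subagent would research `topic` and cite trusted sources here.
    doc.write_text(f"# Findings on {topic}\n- claim [source: placeholder]\n")
    return doc

def synthesize(docs: list[Path], outdir: Path) -> Path:
    """A second agent reads every artifact and merges them into one report."""
    report = outdir / "report.md"
    body = "\n".join(d.read_text() for d in docs)
    report.write_text("# Synthesis\n" + body)
    return report

outdir = Path("artifacts")
outdir.mkdir(exist_ok=True)
topics = ["performance", "security", "ecosystem"]
docs = [run_subagent(t, outdir) for t in topics]  # step 1: parallel artifacts
report = synthesize(docs, outdir)                 # one agent synthesizes
```

Because each subagent's output lives on disk rather than in the main context, a review agent (step 3) can be pointed at `report.md` in a fresh session.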

lkm0 · 7 days ago
Thanks for the thoughtful reply! I definitely want to try a more complex setup when I have more time on my hands
lkm0 commented on Addicted to Claude Code–Help    · Posted by u/aziz_sunderji
simonw · 9 days ago
I know dozens of people who are in a similar state right now, following the November 2025 moment when Claude Code (and Codex) got really good.

I wouldn't worry about it just yet - this is all very novel, and there's a lot of excitement involved in figuring out what it can do and trying different things.

If you're still addicted to it in three months time I'd start to be concerned.

For the moment though you're building a valuable mental model of how to use it and what it can do. That's not wasted time.

lkm0 · 9 days ago
I'm seeing the limits when Claude makes statements that are extremely wrong but incredibly hard to spot unless you're in the field, like recently telling me that "some people say" Rydberg atoms and neutral atoms are different enough to be in different quantum computing categories (they're the same). The stakes are lowering somehow, because I know I can't trust it for anything but fun side projects. For serious research it's still me and reading papers.
lkm0 commented on Triplet Superconductor   sciencedaily.com/releases... · Posted by u/jonbaer
lkm0 · 10 days ago
The manuscript has been out since October 2025, and back then it didn't make so much noise. Looks like solid work, but more muted in tone than this press release. https://arxiv.org/abs/2510.08110
lkm0 commented on The Brand Age   paulgraham.com/brandage.h... · Posted by u/bigwheels
lkm0 · 11 days ago
Somebody's getting into watch collecting! Quartz watches were also developed in Switzerland: check out the story of the Beta 21: https://goldammer.me/blogs/articles/beta-21-history-guide
lkm0 commented on Physicists developing a quantum computer that’s entirely open source   physics.aps.org/articles/... · Posted by u/tzury
lkm0 · 13 days ago
Incredibly cool initiative! Looks like they're going for a trapped-ion device, which is the best you can get for now. It's not clear what kind of geometry the ions will be in, but I assume linear traps? If so, it can't be scaled beyond 10-ish qubits, so it's definitely more of an educational project. That makes sense, though, since the other options like racetrack traps are still active research.

I wonder if there's ever been any cross-pollinating between SC and trapped-ion labs when it comes to control electronics and such. This could be a good way to find out.

lkm0 commented on AI uBlock Blacklist   github.com/alvi-se/ai-ubl... · Posted by u/rdmuser
lkm0 · 23 days ago
Why is apnews.com on the list?
