Readit News
dinfinity commented on Being “Confidently Wrong” is holding AI back   promptql.io/blog/being-co... · Posted by u/tango12
gavinray · a day ago

  > Yeah I think our jobs are safe.
I give myself 6-18 months before I think top-performing LLMs can do 80% of the day-to-day issues I'm assigned.

  > Why doesn’t anyone acknowledge loops like this?
This is something you run into early on when using LLMs and learn to sidestep. This looping is a sort of "context rot": the agent has the problem statement as part of its input, followed by a series of incorrect solutions.

Now what you've got is a junk-soup where the original problem is buried somewhere in the pile.

Best approach I've found is to start a fresh conversation with the original problem statement and any improvements/negative reinforcements you've gotten out of the LLM tacked on.
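
That reset can be sketched as a small helper (a sketch, not any particular tool's API; the problem statement and the learned constraints are whatever you distilled from the failed attempts):

```python
def fresh_prompt(problem: str, learned: list[str]) -> str:
    """Rebuild a clean prompt for a brand-new conversation: the original
    problem statement plus only the distilled lessons from failed
    attempts -- never the failed answers themselves, which would just
    recreate the junk-soup."""
    parts = [problem]
    if learned:
        parts.append("Notes from earlier attempts:")
        parts.extend(f"- {note}" for note in learned)
    return "\n".join(parts)

prompt = fresh_prompt(
    "Fix the intermittent failure in the checkout integration test.",
    ["The failure is timing-related, not data-related.",
     "Retrying the whole suite masks the bug; don't suggest that."],
)
```

The point is that each retry starts a genuinely new conversation: only the distilled notes carry over, never the transcript of wrong answers.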

I typically have ChatGPT 5 Thinking, Claude 4.1 Opus, Grok 4, and Gemini 2.5 Pro all churning on the same question at once, then copy-paste relevant improvements between them.

dinfinity · a day ago
I concur. Something to keep in mind is that it is often more robust to pull an LLM towards the right place than to push it away from the wrong place (or more specifically, the active parts of its latent space). Sidenote: also kind of true for humans.

That means that positively worded instructions ("do x") work better than negative ones ("don't do y"). The more that concepts you don't want it to use or consider show up in the context, the more they still tend to pull the response towards them, even with explicit negation or 'avoid' instructions.

I think this is why clearing all the crap from the context save for perhaps a summarizing negative instruction does help a lot.

dinfinity commented on Vibe coding creates a bus factor of zero   mindflash.org/coding/ai/a... · Posted by u/AntwaneB
AntwaneB · 3 days ago
Hello, author here. Thanks for your comment.

I agree with your first point, maybe AI will close some of those gaps with future advances, but I think a large part of the damage will have been done by then.

Regarding the memory of reasoning from LLMs, I think the issue is that even if you can solve it in the future, you already have code for which you've lost the artifacts associated with the original generation. Overall I find there's a lot of talk (especially in the mainstream media) about AI "always learning" when models don't actually learn anything new until a new model is released.

> Why does it require 100% accuracy 100% of the time? Humans are not 100% accurate 100% of the time and we seem to trust them with our code.

Correct, but humans writing code don't lead to a Bus Factor of 0, so it's easier to go back, understand what is wrong and address it.

If the other gaps mentioned above are addressed, then I agree that this also partially goes away.

dinfinity · 2 days ago
The solution is almost trivial and well known:

Document. Whenever a member of the team leaves, you should always make them document what is in their head and not already documented (or even better, make documenting a required part of normal development).

The thing to understand is that every (new) AI chat session is effectively a member entering and leaving your team. So to make that work well you need great onboarding (provide documentation) and great offboarding (require documentation).
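
The onboarding/offboarding analogy can be sketched as a session wrapper (the `llm` callable and the docs layout here are illustrative assumptions, not any particular tool's API):

```python
from pathlib import Path

def run_session(task: str, docs_dir: Path, llm) -> str:
    """Treat one AI session like a team member joining and leaving:
    onboard it with the existing docs, and offboard it by requiring
    it to document its work before the context is thrown away."""
    # Onboarding: give the session everything the team has written down.
    docs = "\n\n".join(p.read_text() for p in sorted(docs_dir.glob("*.md")))
    answer = llm(f"Project documentation:\n{docs}\n\nTask: {task}")

    # Offboarding: require documentation before the 'member' leaves.
    notes = llm("Summarize what you changed and why, for the next session.")
    (docs_dir / "session-notes.md").write_text(notes)
    return answer
```

The offboarding step is the crucial one: without it, every session ends with a bus factor of zero again.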

dinfinity commented on Review of Anti-Aging Drugs   scienceblog.com/joshmitte... · Posted by u/XzetaU8
sigmoid10 · 6 days ago
While autophagy does correlate with fasting and some studies link it to health markers, it should be noted that it usually takes at least 18 hours of continued fasting to even start and only goes into full swing after 48 to 72 hours. It is also an extreme cell response that is associated with high levels of cellular stress, which might have understudied long-term detrimental effects.

A simple calorie reduction, either by eating fewer highly processed meals or through regular intense exercise, is much more universally accepted as longevity-boosting, because it combats overweight, which is by far the most common disease that shortens general lifespan in the Western world. There's really no good reason to force your body through these extreme diets.

Don't be overweight, don't smoke, don't drink alcohol, maybe go easy on junk food and maybe do some exercise. And get your regular medical check-ups. Then you're already at the pinnacle of clinical longevity science. There is no actual anti-aging drug yet that has a proven effect on humans. The best we have are some moderately promising monkey and small-mammal studies, but they generally don't translate well.
dinfinity · 4 days ago
> autophagy [...] usually takes at least 18 hours of continued fasting to even start

False. That would mean cellular debris would pile up unconstrained for pretty much everybody, which is clearly absurd.

> It is also an extreme cell response that is associated with high levels of cellular stress

Also false. It is a very essential cellular process. Read up on it, please.

dinfinity commented on Meta Leaks Part 2: How to Kill a Social Movement   archive.org/details/meta_... · Posted by u/icw_nru
tojumpship · 9 days ago
I wholly agree with your point on X. The comeback of racism is one of the most dangerous social phenomena in today's world. Besides sowing insurmountable amounts of hatred, it also brings along xenophobia, misogyny/misandry and the like, as the forerunning discriminatory practice in our world.
dinfinity · 6 days ago
The 'comeback' of racism? Really?

It never left. Anywhere in the world.

dinfinity commented on "None of These Books Are Obscene": Judge Strikes Down Much of FL's Book Ban Bill   bookriot.com/penguin-rand... · Posted by u/healsdata
dinfinity · 9 days ago
Ignore the insanity of banning books for a second. What about the Victorian-era insanity of preventing children from being exposed to the mere concept of sex and labeling it 'obscenity'?

On one hand we want kids to learn about consent, what 'normal sex' is like and all that, but simultaneously there is this idiotic push to prevent them from encountering any of it until they are 18. If we don't want kids to see bad porn, we need to ensure that there is lots of good porn available, and not just some boring sex ed bullshit. I mean actual benign everyday sex that kids can safely watch and learn from because otherwise they will never see it anywhere else (it's not like they regularly watch their parents or other people do it).

You have to be incredibly regressive to think 18 is somehow a good cutoff for this.

dinfinity commented on AI Efficiency? Give Me a Break   luolink.substack.com/p/ai... · Posted by u/luolink
lentil_soup · 9 days ago
I don't understand the rush to adopt things if they're not really giving you a boost right now; some people still use vi and hammers, both invented by early humans. Adopting a new tool while a new one comes out every week is fishing in a troubled river. It's perfectly fine to wait until things settle and then learn whatever is actually left. I highly doubt you're going to lose your job because you didn't learn something 2 days ago.
dinfinity · 9 days ago
I would say:

1. Spend time on the fundamentally new things through whatever tool provides it.

2. Always try to prevent 'vendor' lock-in. Think about portability and reusability.

A lot of the stuff that now "works well" when working with AI assistants and tools is stuff that was always a good idea and always worked well. Write Once, Read Many; Provide good specification; Communicate clearly; Automate repeated tasks; etc.

If you update your workflow in a fundamental manner, rather than jumping from investing 100% in tool X to 100% in tool Y and redoing a lot of shit, you improve it efficiently.

dinfinity commented on Study: Social media probably can't be fixed   arstechnica.com/science/2... · Posted by u/todsacerdoti
lemming · 10 days ago
> LLMs aren’t people, and the authors have not convinced me that they will behave like people in this context.

This was my initial reaction as well, before reading the interview in full. They admit that there are problems with the approach, but they seem to have designed the simulation in a very thoughtful way. There really doesn't seem to be a better approach, apart from enlisting vast numbers of people instead of using LLMs/agent systems. That has its own problems as well of course, even leaving cost and difficulty aside.

> There’s no option to create original content...

While this is true, I'd say the vast majority of users don't create original content either, but still end up shaping the social media environment through the actions that they did model. Again, it's not perfect but I'm more convinced that it might be useful after reading the interview.

dinfinity · 9 days ago
LLMs don't learn, though.

The fundamental problem with social media (and many other things) is humans, specifically our biological makeup and (lack of) overriding mechanisms. One could argue that pretty much everything we call 'civilised behavior' is an instance of applying a cultural override for a biological drive. Without it, we are very close to shit-flinging murderous apes.

For so many of our problems what goes wrong is that we fail to stop our biological drive from taking the wheel to the point where we consciously observe ourselves doing things we rationally / culturally know we should not be doing.

Now the production side of media/content/goods evolves very fast and does not have a similarly strong legacy biological drive holding it back, so it is very, very good (and ever improving) at exploiting the sitting duck that is our biological makeup (food engineering, game engineering etc. are very similar to social media engineering in this regard).

The only reliable defense against that is training ourselves to not give in to our biological drives when they are counterproductive. For some that might be 'disconnect completely' (i.e. take away the temptations altogether), but having a healthy approach to encountering the temptations is far more robust. I am of the opinion that labeling the social media purveyors and producers in general as evil abusers is not necessarily inaccurate, but counterproductive in that it tends to absolve individuals of their responsibility in the matter. Imagine telling a heroin addict: "you can't help it, it's those evil dealers that are keeping you hooked to the heroin".

dinfinity commented on His psychosis was a mystery–until doctors learned about ChatGPT's health advice   psypost.org/his-psychosis... · Posted by u/01-_-
incomingpain · 10 days ago
>for the past three months, he had been replacing regular table salt with sodium bromide. His motivation was nutritional—he wanted to eliminate chloride from his diet, based on what he believed were harmful effects of sodium chloride.

OK, so the article is blaming ChatGPT, but this is ridiculous.

Where do you buy this bromide? It's not like it's in the spices aisle. The dude had to go buy a hot tub cleaner like Spa Choice Bromine Booster Sodium Bromide and then sprinkle that on his food. I don't care what ChatGPT said... that dude is the problem.

dinfinity · 9 days ago
We could also point the finger towards the popular consensus that "salt is bad for you". This guy just took it to the next level.
dinfinity commented on AI must RTFM: Why tech writers are becoming context curators   passo.uno/from-tech-write... · Posted by u/theletterf
stillsut · 15 days ago
I've been finding that adding context for your external packages is really important when the package is relatively new and/or has breaking changes since the model's training cut-off date.

Two that stick out this week are Google's genai (client for Vertex/Gemini endpoints), which keeps updating methods, and moviepy with its 2.x breaking changes (most of the corpus for this library was written against v1.x).

I wrote about some findings here, and that there's still not a great way to have the models examine only the pieces of documentation that they need for their work: https://github.com/sutt/agro/blob/master/docs/case-studies/a...

dinfinity · 15 days ago
Instruct it to look at the actual code of your dependencies directly. It should be able to (quickly) figure out how the exact version of the dependency you have should be used.
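
For Python dependencies, this can be as simple as resolving the installed package's on-disk location and including that path in the instruction. A sketch using only the standard library (the instruction wording is illustrative):

```python
import importlib.util

def dependency_source_path(package: str) -> str:
    """Resolve where the *installed* version of a dependency lives,
    so the agent can be told to read that code instead of relying on
    whatever (possibly outdated) version it saw during training."""
    spec = importlib.util.find_spec(package)
    if spec is None or spec.origin is None:
        raise ModuleNotFoundError(f"cannot locate source for {package!r}")
    return spec.origin

# e.g. prepend to the task:
# f"Before writing code, read the library source at "
# f"{dependency_source_path('moviepy')}; the API changed in 2.x."
```
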
dinfinity commented on Ultra-processed foods make up more than 60% of us kids' diets   bloomberg.com/news/articl... · Posted by u/JumpCrisscross
sokoloff · 16 days ago
> highly processed foods like burgers, pastries, snacks and pizza

Is a burger highly processed? Home-made burgers seem not to fall into that bucket. (Hot dogs, I can buy as being it, but burgers not.)

dinfinity · 16 days ago
Just like the term "ultra-processed foods", the term "sandwich" is very badly defined.

I can't determine whether the study differentiates between healthy home-made sandwiches and junk food that contains some kind of "bread".
