Readit News
nowittyusername commented on DeepSeek uses banned Nvidia chips for AI model, report says   finance.yahoo.com/news/ch... · Posted by u/goodway
nowittyusername · 4 days ago
let me just dig out a surprise Pikachu face from my pocket somewhere here ....
nowittyusername commented on Pebble Index 01 – External memory for your brain   repebble.com/blog/meet-pe... · Posted by u/freshrap6
nowittyusername · 5 days ago
This seems like one of those devices that looks "meh" at a glance but grows on you once you've used it. In fact, the Bluetooth button feature alone warrants a second take, let alone a mic embedded in the ring with crazy battery life. If there's a way to hack the device and pipe the mic features to other apps, I think I might get this thing. edit: never mind, I just noticed 15 hours of recording time with no recharging. Yeah bud, that's a no-go.
nowittyusername commented on AI should only run as fast as we can catch up   higashi.blog/2025/12/07/a... · Posted by u/yuedongze
pegasus · 5 days ago
Correct me if I'm wrong, but even with batch processing turned off, aren't they still only deterministic if you set the temperature to zero? Which also has the side effect of decreasing creativity. But maybe there's a way to pass in a seed for the pseudo-random generator and restore determinism in that case as well. Determinism, in the sense of reproducibility. But even so, "determinism" means more than just mechanical reproducibility to most people - including the parent, if you read their comment carefully. What they mean is: predictable for us humans in some important way. I.e. no completely WTF surprises, which LLMs are prone to produce once in a while, regardless of batch processing and temperature settings.
nowittyusername · 5 days ago
You can change ANY sampling parameter once batch processing is off and you will keep the deterministic behavior: temperature, repetition penalty, etc. I've got to say I'm a bit disappointed to see this on Hacker News; I expect it from Reddit. The whole matter is handed to you on a silver platter: the video describes in detail how any sampling parameter can be used, and I provide the code open source so anyone can try it themselves without taking my claims as hearsay. Well, you can lead a horse to water, as they say...
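A minimal NumPy sketch of the point being argued (illustrative only, not the code from the linked video): sampling at a nonzero temperature is fully reproducible as long as the RNG is seeded, so determinism does not require temperature zero.

```python
import numpy as np

def sample_tokens(logits, temperature, seed, n=5):
    """Sample n token ids from a fixed logit vector with a seeded RNG."""
    rng = np.random.default_rng(seed)      # seeded generator -> reproducible draws
    scaled = logits / temperature          # temperature > 0, i.e. not greedy decoding
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return [int(rng.choice(len(probs), p=probs)) for _ in range(n)]

logits = np.array([2.0, 1.0, 0.5, -1.0])
run_a = sample_tokens(logits, temperature=0.8, seed=42)
run_b = sample_tokens(logits, temperature=0.8, seed=42)
assert run_a == run_b  # same seed -> identical sequence, despite temperature > 0
```

The same holds for any other sampling knob (top-k, repetition penalty, etc.): given identical logits and a fixed seed, the draw is a pure function of its inputs.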
nowittyusername commented on The universal weight subspace hypothesis   arxiv.org/abs/2512.05117... · Posted by u/lukeplato
unionjack22 · 5 days ago
I hope someone much smarter than I am answers this. I've been noticing an uptick in Platonic and neo-Platonic discourse in the zeitgeist and am wondering if we're converging on something profound.
nowittyusername · 5 days ago
I've been noticing that as well....
nowittyusername commented on AI should only run as fast as we can catch up   higashi.blog/2025/12/07/a... · Posted by u/yuedongze
nazgul17 · 5 days ago
That's not an interesting difference, from my point of view. The black box we all use is non-deterministic, period. It doesn't matter where on the inside the system stops being deterministic: if I hit the black box twice, I get two different replies. And that doesn't even matter, which you also said.

The more important property is that, unlike compilers, type checkers, linters, verifiers and tests, the output is unreliable. It comes with no guarantees.

One could be pedantic and argue that bugs affect all of the above. Or that cosmic rays make everything unreliable. Or that people are non deterministic. All true, but the rate of failure, measured in orders of magnitude, is vastly different.

nowittyusername · 5 days ago
My man, did you even watch my video? Did you even try the app? This is not bug-related; nowhere did I say it was a bug. Batch processing is a FEATURE that is intentionally turned on in the inference engine by large-scale providers. That does not mean it has to be on. If they turn off batch processing, all LLM API calls will be 100% deterministic, but it will cost them more to provide the service, because now you are stuck with one API call per GPU. "if I hit the black box twice, I get two different replies" - what you are saying here is verifiably wrong. Just because someone chose to turn on a feature in the inference engine to save money does not mean LLMs are non-deterministic. LLMs are stateless: their weights are frozen, you never "run" an LLM, you can only sample it, just like a hologram. The inference sampling settings you use are what determine the outcome.
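The mechanism behind the batching argument is worth spelling out: with batching on, the reduction order inside the GPU kernels can change depending on which other requests share the batch, and floating-point addition is not associative, so the same logits come out slightly different. A tiny self-contained illustration of that root cause:

```python
# Floating-point addition is not associative, so the order in which a kernel
# reduces the very same numbers changes the result.
terms = [1e16, 1.0, -1e16]

left_to_right = (terms[0] + terms[1]) + terms[2]  # the 1.0 is absorbed into 1e16
reordered     = (terms[0] + terms[2]) + terms[1]  # cancel the big terms first

print(left_to_right)  # 0.0
print(reordered)      # 1.0
```

Change the batch, change the reduction order, change the logits by an ULP or two - and a single flipped argmax early in generation cascades into a completely different completion.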
nowittyusername commented on AI should only run as fast as we can catch up   higashi.blog/2025/12/07/a... · Posted by u/yuedongze
mort96 · 6 days ago
Code written by humans has always been nondeterministic, but generated code has always been deterministic before now. Dealing with nondeterministically generated code is new.
nowittyusername · 6 days ago
Determinism vs. non-determinism is not, and never has been, the issue. Also, all LLMs are 100% deterministic; what can be non-deterministic is the inference engine serving them, which by the way can easily be made 100% deterministic by simply turning off things like batching. This matters for cloud-based API providers, since you as the end user don't have access to the inference engine; if you run your models locally in llama.cpp, turning off some server startup flags will get you deterministic results. Cloud-based API providers have no choice but to keep batching on: they are serving millions of users, and wasting precious VRAM slots on a single user is wasteful and stupid. See my code and video as evidence if you want to run any local LLM 100% deterministically: https://youtu.be/EyE5BrUut2o?t=1
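For the local llama.cpp case, the relevant server flags are roughly the following (a sketch; flag names vary across llama.cpp versions, so check `llama-server --help` on your build):

```shell
# --parallel 1 gives a single slot, so no other request is batched alongside
# yours; --no-cont-batching disables continuous batching if your build has it;
# --seed fixes the sampler RNG. model.gguf is a placeholder path.
llama-server -m model.gguf --parallel 1 --no-cont-batching --seed 42
```

With one slot, a fixed seed, and identical inputs, repeated requests reduce the logits in the same order every time and the outputs repeat exactly.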
nowittyusername commented on The "confident idiot" problem: Why AI needs hard rules, not vibe checks   steerlabs.substack.com/p/... · Posted by u/steerlabs
keiferski · 6 days ago
The thing that bothers me the most about LLMs is how they never seem to understand "the flow" of an actual conversation between humans. When I ask a person something, I expect them to give me a short reply that includes another question, or asks for details or clarification. A conversation is thus an ongoing "dance" where the questioner and answerer gradually arrive at the same shared meaning.

LLMs don't do this. Instead, every question is immediately responded to with extreme confidence with a paragraph or more of text. I know you can minimize this by configuring the settings on your account, but to me it just highlights how it's not operating in a way remotely similar to the human-human one I mentioned above. I constantly find myself saying, "No, I meant [concept] in this way, not that way," and then getting annoyed at the robot because it's masquerading as a human.

nowittyusername · 6 days ago
It's not a magic technology; they can only represent data they were trained on, and naturally, most of that training data is NOT conversational. Consider that such data is very limited, and who knows how it was labeled, if at all, during pretraining. But with that in mind, LLMs definitely can do all the things you describe - a very robust and well-tested system prompt has to be used to coax this behavior out. A proper model also has to be used, as some models are simply not trained for this type of interaction.
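As an illustration (a hypothetical prompt, not a tested one), a system prompt along these lines tends to push a chat model toward the back-and-forth the parent describes:

```text
You are a conversational partner, not an answer engine.
- Keep replies to a few sentences.
- If the question is ambiguous, ask one clarifying question before answering.
- State your confidence; say "I'm not sure" when you aren't.
- Build on what the user just said instead of restarting from scratch.
```

How well a model follows this varies a lot by model and by how heavily its instruction tuning rewards long, confident answers.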
nowittyusername commented on The past was not that cute   juliawise.net/the-past-wa... · Posted by u/mhb
jstummbillig · 7 days ago
That is a beautiful anecdote, but I don't see what we could reasonably generalize from that. It's fairly well established that access to good medical care and a certain degree of wealth make us happier.

Could a life radically and willfully different in many ways turn out to be better for most of us (which is critically what you claimed before)? It's certainly possible, given how few people take this route, but an appeal to nature is just not super convincing, unless you can back it up with data.

I can't help but notice you did not engage with how 40% of kids dying, and another 20% of us getting killed by some member of the cherished tribe, could possibly lead to high levels of life satisfaction. As far as I can tell, on the whole, the good old days were cruel, and rosy retrospection is just that.

nowittyusername · 7 days ago
The "miserable" existence ascribed to past human life by modern-day humans is colored by their modern-day psychological profile. If they had been born and raised in that environment, their psychological profile would be very different. A modern-day human can easily be traumatized by something past humans would consider trivial. Sure, death was more common, possibly violence too, but that does not mean people were less happy. Satisfaction in human psychology has a certain profile, and that profile mainly tracks the things I talked about: close human relationships in small cohort groups, a perception of agency, and a few other important factors - things that are missing from the lives of many citizens of modern-day societies worldwide. My point is that if you performed a statistical analysis of how happy people were on average, the claim is that they were happier back then than now.
nowittyusername commented on The past was not that cute   juliawise.net/the-past-wa... · Posted by u/mhb
jstummbillig · 7 days ago
It's kind of an interesting question. What makes us inherently unhappy?

I think if the theory goes that, from an evolutionary standpoint, we are still psychologically better equipped to be hunter-gatherers, we should assume that our feelings toward homicide and child mortality are comparable. So how happy can a people be when 40% of their children die and another 20% die by homicide?

If we follow that thread I would argue that it's very unlikely that people were happier back when or would be happier today, unless some other component of being hunter gatherers makes us fantastically ecstatic.

nowittyusername · 7 days ago
What makes us unhappy are the things that the modern world takes away from us: a sense of agency, community, belonging, autonomy, recognition, and many other factors. The modern human brain and mind still lag far behind our current predicament. We evolved to thrive in small village cohorts built around small social interactions that have real impact on our lives. Here's a striking example I remember: https://www.youtube.com/watch?v=KFOhAd3THW4 There are better, longer videos of this story told from the mother's side, where she talks about how alien and cold modern society is compared to her humble village life. No amount of medicine, material possessions, or modern creature comforts could keep her in New York. She chose to leave and come back home because that's what made her happy.
nowittyusername commented on Sort the Court – A Free King Simulator Where You Rule with Yes or No   sort-the-court.com... · Posted by u/causalzap
nowittyusername · 7 days ago
Cute little game. I started it hoping to just try it out, and an hour-something later I'd finished the whole game.

u/nowittyusername · Karma: 145 · Cake day: January 9, 2025