Readit News
jadbox commented on Pebble Time 2 Design Reveal [video]   youtube.com/watch?v=pcPzm... · Posted by u/net01
hyperbolablabla · 14 days ago
Lack of GPS is the dealbreaker for me. Otherwise this would be an insta purchase -- I bought the Time in 2014, and was hoping for a "smartstrap" with GPS back then, but when it never came I slightly lost interest.
jadbox · 13 days ago
Does it at least have heart rate?
jadbox commented on PHP 8.5 adds pipe operator   thephp.foundation/blog/20... · Posted by u/lemper
jadbox · 22 days ago
Now if only JS or Typescript can jump on this ship!
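Until JS/TS gets a native pipeline operator (the TC39 `|>` proposal is still in flight), the closest you can get is a userland helper. A minimal sketch, where `pipe` is a hypothetical helper, not any standard API:

```typescript
// Userland approximation of PHP 8.5's pipe operator in TypeScript.
// `pipe` threads a value through a list of unary functions, left to right.
function pipe<T>(value: T, ...fns: Array<(x: any) => any>): any {
  return fns.reduce((acc, fn) => fn(acc), value);
}

// Roughly what PHP 8.5 writes as:
//   $slug = "Hello World" |> strtolower(...) |> str_replace(' ', '-', ...);
const slug = pipe(
  "Hello World",
  (s: string) => s.toLowerCase(),
  (s: string) => s.replace(/ /g, "-"),
);
console.log(slug); // "hello-world"
```

The native operator would make this a syntax feature instead of a function call, but the data flow is the same.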
jadbox commented on Facts will not save you – AI, history and Soviet sci-fi   hegemon.substack.com/p/fa... · Posted by u/veqq
delusional · 22 days ago
> "[...] dynamically changing the zone is something they simply cannot do."

But... why TF not?

Because the computer is fundamentally knowable. Somebody defined what a "close game" is ahead of time. Somebody defined what a "reasonable stretch" is ahead of time.

The minute it's solidified in an algorithm, the second there's an objective rule for it, it's no longer dynamic.

The beauty of the "human element" is that the person has to make that decision in a stressful situation. They will not have to contextualize it within all of their other decisions, they don't have to formulate an algebra. They just have to make a decision they believe people can live with. And then they will have to live with the consequences.

It creates conflict. You can't have a conflict with the machine. It's just there, following rules. It would be like having a conflict with the bureaucrats at the DMV: there's no point. They didn't make a decision; they just execute the rules as written.

jadbox · 22 days ago
Could we maybe say that an LLM which can update its own model weights using its own internal self-narrating log is 'closer' to being dynamic? Wolfram's principle of computational irreducibility says that even simple rule-based functions often produce patterns of return values that can't be predicted without running them. You could object that computer randomness is deterministic, but could a quantum-NN LLM perhaps be classified, ontologically, as dynamic (unless you believe quantum computing is deterministic)?
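Wolfram's standard illustration of computational irreducibility is Rule 30, a one-dimensional cellular automaton. A minimal sketch (fixed zero boundaries assumed for simplicity):

```typescript
// Rule 30: each new cell is left XOR (center OR right). Despite the
// trivial rule, the pattern it produces is effectively unpredictable
// without simulating every step -- computational irreducibility.
function rule30Step(row: number[]): number[] {
  return row.map((_, i) => {
    const left = row[i - 1] ?? 0; // cells beyond the edge count as 0
    const center = row[i];
    const right = row[i + 1] ?? 0;
    return left ^ (center | right);
  });
}

let row = [0, 0, 0, 0, 1, 0, 0, 0, 0]; // single live cell in the middle
for (let t = 0; t < 4; t++) {
  console.log(row.join(""));
  row = rule30Step(row);
}
```

There is no known shortcut formula for the center column's long-run behavior; you just have to run the rule.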
jadbox commented on 6 weeks of Claude Code   blog.puzzmo.com/posts/202... · Posted by u/mike1o1
jadbox · 24 days ago
Good article, but FWIW I think GraphQL is a bane of web dev for 90% of projects. It overcomplicates and bloats, and adds nothing over a regular OpenAPI spec for what is usually just CRUD resource operations.
jadbox commented on Problem solving using Markov chains (2007) [pdf]   math.uchicago.edu/~shmuel... · Posted by u/Alifatisk
AnotherGoodName · a month ago
I feel like we need a video on Dynamic Markov Chains. It's a method for building a Markov chain from data, and it's used in all the top-compressing winners of the Hutter Prize (a competition to compress data the most).
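The core idea of learning a Markov chain from data is just transition counting. A minimal first-order sketch (real DMC implementations, as in the Hutter Prize compressors, work on individual bits and clone states adaptively, but the counting idea is the same):

```typescript
// Build a first-order Markov chain from a token sequence: count
// symbol-to-symbol transitions, then normalize counts into probabilities.
function buildChain(tokens: string[]): Map<string, Map<string, number>> {
  const counts = new Map<string, Map<string, number>>();
  for (let i = 0; i < tokens.length - 1; i++) {
    const from = tokens[i], to = tokens[i + 1];
    const row = counts.get(from) ?? new Map<string, number>();
    row.set(to, (row.get(to) ?? 0) + 1);
    counts.set(from, row);
  }
  // Normalize each row into a probability distribution.
  for (const row of counts.values()) {
    const total = [...row.values()].reduce((a, b) => a + b, 0);
    for (const [k, v] of row) row.set(k, v / total);
  }
  return counts;
}

const chain = buildChain("a b a b a c".split(" "));
console.log(chain.get("a")); // P(b|a) = 2/3, P(c|a) = 1/3
```

A compressor then codes each symbol with a cost proportional to -log P(symbol | state), so better-learned transitions mean fewer bits.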
jadbox · a month ago
Make your own video then :)
jadbox commented on GLM-4.5: Reasoning, Coding, and Agentic Abilities   z.ai/blog/glm-4.5... · Posted by u/GaggiX
TheAceOfHearts · a month ago
Tried it with a few prompts. The vibes feel super weird to me, in a way that I don't know how to describe. I'm not sure if it's just that I'm so used to Gemini 2.5 Pro and the other US models. Subjectively, it doesn't feel very smart.

I asked it to analyze a recent painting I made and found the response uninspired. Although at least the feedback it provided was notably distinct from what I could get from the US models, which tend to be pretty same-y when I ask them to analyze and critique my paintings.

Another subjective test, I asked it to generate the lyrics for a song based on a specific topic, and none of the options that it gave me were any good.

Finally, I tried describing some design ideas for a website and it gave me something that looks basically the same as what other models have given me. If you get into a sufficiently niche design space all the models seem to output things that pretty much all look the same.

jadbox · a month ago
From my tests, GLM tends to be best at server code or frontend logic; it's not very good at design tasks. It did make a killer chess app with a solid UX, but I think it was just trained heavily on that.
jadbox commented on Kimi-K2 Tech Report [pdf]   github.com/MoonshotAI/Kim... · Posted by u/swyx
chisleu · a month ago
It looks like qwen3-coder is going to steal K2's thunder in terms of agentic coding use.
jadbox · a month ago
Maybe so, but so far in my testing I like the sound of K2's writing more than qwen3's.
jadbox commented on AccountingBench: Evaluating LLMs on real long-horizon business tasks   accounting.penrose.com/... · Posted by u/rickcarlino
jermaustin1 · a month ago
I find the same issues (though with much lower stakes) when using an LLM to determine the outcome of a turn in a game. I'm working on something called "A Trolly (problem) Through Time" where each turn is a decade starting with the 1850s. You are presented with historic figures on a train track, and you have to choose whether to actively spare the person on your track for a potentially unknown figure on the other side, or let the train run them over.

It works well as a narrative, but the second I started adding things like tracking high level macro effects of the decisions, within a couple of turns the world's "Turmoil" goes from 4/10 to a 10/10... even when the person that was killed would have been killed IRL.

Sonnet 4, o4-mini, and GPT-4o mini all had the same world-ending outcomes no matter who you kill. Killing Hitler in the 1930s: 10/10 turmoil. Killing Lincoln in the 1850s: 10/10 turmoil in the first turn.

I've come to the realization, the LLM shouldn't be used for the logic, and instead needs to be used to just narrate the choices you make.

jadbox · a month ago
"I've come to the realization, the LLM shouldn't be used for the logic, and instead needs to be used to just narrate the choices you make."

This is exactly right. LLMs are awesome for user<>machine communication, but they're still painful to use as a replacement for the machine itself.
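The split described above can be sketched in a few lines. Everything here (the impact table, the state shape, the prompt helper) is a hypothetical illustration, not code from the game: the rules stay in deterministic code, and the LLM is only handed already-computed numbers to narrate, never asked to decide them.

```typescript
// Deterministic game logic: plain code owns the state transition.
interface WorldState { decade: number; turmoil: number } // turmoil on a 0-10 scale

// Hypothetical per-figure impact table -- the "logic" lives here, not in a prompt.
const TURMOIL_IMPACT: Record<string, number> = { lincoln: 2, unknown: 1 };

function applyChoice(state: WorldState, victim: string): WorldState {
  const delta = TURMOIL_IMPACT[victim] ?? 1;
  return {
    decade: state.decade + 10,
    turmoil: Math.min(10, state.turmoil + delta),
  };
}

// The LLM's only job: turn pre-computed outcomes into prose.
function narrationPrompt(before: WorldState, after: WorldState, victim: string): string {
  return `Narrate the ${before.decade}s: ${victim} died; turmoil went ${before.turmoil} -> ${after.turmoil}.`;
}

const start: WorldState = { decade: 1850, turmoil: 4 };
const next = applyChoice(start, "lincoln");
console.log(narrationPrompt(start, next, "lincoln"));
```

Because `applyChoice` never touches the model, turmoil can't spiral to 10/10 in two turns unless the impact table says it should.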

jadbox commented on Why you should choose HTMX for your next web-based side project (2024)   hamy.xyz/blog/2024-02_htm... · Posted by u/kugurerdem
rapnie · a month ago
I recently found Datastar [0], another hypermedia project. It was originally inspired by htmx, but they are now fully on their own (hypermedia) course. According to the devs, who had a bunch of discussions with the htmx maintainers, the htmx project considers itself finished, with no new features forthcoming. It is laudable for a project to consider itself complete.

Datastar considers its library v1.0 release [1] to be complete, offering a core hypermedia API, while all else consists of optional plugins. The devs have a hot take wrt htmx in their release announcement:

> While it’s going to be a very hot take, I think there is zero reason to use htmx going forward. We are smaller, faster and more future proof. In my opinion htmx is now a deprecated approach but Datastar would not exist but for the work of Carson and the surrounding team.

If you're thinking of adopting htmx, it may be worth comparing it to Datastar as well.

[0] https://data-star.dev/

[1] https://data-star.dev/essays/v1_and_beyond

jadbox · a month ago
I like using both for different things. The only real complaint I have with Datastar is that its progressive-enhancement capabilities are not as nice/simple/well-defined as htmx.
jadbox commented on Running TypeScript Natively in Node.js   nodejs.org/en/learn/types... · Posted by u/jauco
jadbox · a month ago
It's great to finally see this added. I wonder why they decided to build their own type stripper instead of just bundling tsc or swc. It feels like Node.js is going to be plagued with bugs whenever TypeScript adds new type constructs, which may take months to get patched.
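For context, Node's approach only erases type annotations; it doesn't transform syntax that has runtime semantics. A sketch of what survives stripping cleanly (file and flag names are illustrative; the exact flags have shifted across Node releases):

```typescript
// Everything TypeScript-specific below is erasable: the interface and
// the annotations vanish at strip time, leaving plain JavaScript behind.
interface User { name: string; admin?: boolean }

function greet(user: User): string {
  return `hello ${user.name}`;
}

console.log(greet({ name: "ada" })); // run via e.g.: node --experimental-strip-types app.ts

// By contrast, something like `enum Role { Admin }` compiles to a runtime
// object, so plain stripping can't handle it -- it needs an actual
// transform step (or a rewrite to a plain const object).
```

That distinction is also why the "new type constructs" worry is mostly about annotation syntax the stripper's parser doesn't know yet, rather than about emitted code changing.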

u/jadbox · Karma: 1854 · Cake day: June 3, 2015
About: Live with courage and kindness