Readit News
dpweb commented on Ex-GitHub CEO launches a new developer platform for AI agents   entire.io/blog/hello-enti... · Posted by u/meetpateltech
straydusk · a day ago
> Checkpoints are a new primitive that automatically captures agent context as first-class, versioned data in Git. When you commit code generated by an agent, Checkpoints capture the full session alongside the commit: the transcript, prompts, files touched, token usage, tool calls and more.

This thread is extremely negative - if you can't see the value in this, I don't know what to tell you.

dpweb · a day ago
I know about "the entire developer world has been refactored" and all, but what exactly does this thing do?

Runs git checkpoint every time an agent makes changes?

dpweb commented on Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory   github.com/localgpt-app/l... · Posted by u/yi_wang
dpweb · 4 days ago
Made a quick bot app (OC clone). I just want to iMessage it, but I don't want to give Full Disk Access rights to Terminal (to read the iMessage db).

Uses MLX for the local LLM on Apple silicon. Performance has been pretty good on a base-spec M4 mini.

Nor do I want to install little apps when I don't know what they're doing, reading my chat history and Mac system folders.

What I did was create a Shortcut on my iPhone that writes iMessages to an iCloud file, which syncs to my Mac mini quickly, and a script loop on the mini processes the messages. It works.
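The loop on the mini could be as simple as polling the synced file for new lines. A minimal sketch, assuming the Shortcut appends one message per line; the iCloud path and the one-message-per-line format are assumptions, not from the comment:

```python
# Minimal sketch of the polling loop described above. The inbox path and
# the "one message per line" format are assumptions about the Shortcut.
import time
from pathlib import Path

# Assumed location of the file the iPhone Shortcut appends to.
INBOX = Path.home() / "Library/Mobile Documents/com~apple~CloudDocs/bot-inbox.txt"

def read_new_messages(text: str, offset: int) -> tuple[list[str], int]:
    """Return messages appended since `offset`, plus the new offset."""
    new = text[offset:]
    lines = [ln.strip() for ln in new.splitlines() if ln.strip()]
    return lines, len(text)

def poll(handle, interval: float = 2.0) -> None:
    """Watch the inbox file and pass each new message to `handle`."""
    offset = 0
    while True:
        if INBOX.exists():
            lines, offset = read_new_messages(INBOX.read_text(), offset)
            for msg in lines:
                handle(msg)  # e.g. send to the local MLX model, write reply back
        time.sleep(interval)
```

Tracking a byte offset instead of truncating the file avoids fighting with iCloud sync over writes to the same file.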

Wondering if others have ideas so I can iMessage the bot; I live in iMessage and don't really want to use another app.

dpweb commented on LLMs could be, but shouldn't be compilers   alperenkeles.com/posts/ll... · Posted by u/alpaylan
dpweb · 5 days ago
Compilation is transforming one computing model into another. LLMs aren't great at everything, but they seem particularly well suited to this purpose.

One of the first things I tried to have an LLM do is transpile. These days that works really well. You find an interesting project in Python (I'm a JS guy), and boom, a JS version. Very helpful.

dpweb commented on Peerweb: Decentralized website hosting via WebTorrent   peerweb.lol/... · Posted by u/dtj1123
dpweb · 12 days ago
Useless if it takes more than 5 seconds to load a page.
dpweb commented on Nanolang: A tiny experimental language designed to be targeted by coding LLMs   github.com/jordanhubbard/... · Posted by u/Scramblejams
deepsquirrelnet · 23 days ago
At this point, I am starting to feel like we don’t need new languages, but new ways to create specifications.

I have a hypothesis that an LLM can act as a pseudocode-to-code translator, where the pseudocode can tolerate a mixture of code-like and natural-language specification. The benefit is that it formalizes the human as the specifier (which must be done anyway) and the LLM as the code writer. This might also enable lower-resource "non-frontier" models to be more useful. Additionally, it tolerates syntax mistakes or, in the worst case, plain natural language.
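A toy illustration of the kind of translation being proposed: the mixed spec (kept here as a docstring) and this particular translation are both hypothetical examples, not from the comment.

```python
# Hypothetical example: a human writes the mixed code-like/natural-language
# spec (the docstring), and the LLM's job is to emit the body below it.

def total_per_order(orders):
    """Spec, as a human might write it:
         for each order: total = sum of its item prices
         skip cancelled orders
    """
    return {
        o["id"]: sum(o["items"])
        for o in orders
        if not o.get("cancelled", False)
    }
```

The human stays responsible for the spec being right; the model only has to get the mechanical translation right, which is a much narrower task.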

In other words, I think LLMs don't need new languages; we do.

dpweb · 22 days ago
I disagree; I think we always need new languages. Every language becomes more and more unnecessarily complex over time.

It's just part of the software lifecycle. People think their job is to "write code," and that means everything accretes more features, more abstractions, more complexity, more "five different ways to do one thing."

There are many, many examples: C++, Java (especially circa 2000-2010), and on and on. There's no hope for older languages. We need simpler languages.

dpweb commented on On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs   arxiv.org/abs/2512.01797... · Posted by u/bilsbie
airhangerf15 · 2 months ago
LLMs don't "hallucinate" or "lie." They have no intent. They're weighted random word generator machines, trained mathematically to produce series of tokens. Whenever they get something "right," it's literally by accident. Get that rate of accidental rightness up to 80%, and people suddenly think the random word generator is some kind of oracle. It's not. It's a large model with an embedding space, tokens, and a whole series of computationally expensive perceptron and attention blocks that generate output.

The title/introduction is bait, because it implies some "physical" connection to hallucinations in biological organisms, when it's really focused on trying to single out certain parts of the model. LLMs are absolutely nothing like a biological system; our brains are orders of magnitude more complex than the machines we've built, machines we no longer fully understand. Believing these LLMs are some next stage in understanding intelligence is hubris.

dpweb · 2 months ago
We don't understand the brain. We fully understand what LLMs are doing; humans built them. The idea that we don't understand what LLMs are doing is magical thinking. Magic is good for clicks and fundraising.
dpweb commented on JPMorgan CEO: "I don't care how many people sign that f—ing [WFH] Petition"   fortune.com/2025/02/13/ja... · Posted by u/Zaheer
dpweb · a year ago
Been WFH for many years, but I can understand why some companies prefer (demand) in-office. Sorry, but unless it's written into employment law, no one has a "right" to it.

I would think this could be a perk companies use to gain an advantage in hiring. Although maybe those it appeals to are, on the whole, lower performers?

Would love to see some real data on this.

u/dpweb
Karma: 2110 · Cake day: January 18, 2013