chenmx · 3 hours ago
Every 'platform for AI agents' announcement makes me wonder: are we building tools for a workflow that actually exists, or are we building tools and hoping the workflow materializes? The GitHub parallel is instructive because GitHub succeeded by meeting developers where they already were (git). The question for Entire is whether agents have a natural coordination layer yet or if this is premature infrastructure.
loveparade · an hour ago
I don't think your GitHub example is accurate. The vast majority of developers started using git after GitHub became a thing. They may have used svn or another collaboration system before, but not git. And the main reason they started using git is that GitHub added such massive value on top of git, not that git itself was so amazing.
bayindirh · 34 minutes ago
Coming from Subversion, git was already so amazing without GitHub, so I'll kindly disagree with you on that front.
shakna · 31 minutes ago
Git had already replaced Perforce and SVN most everywhere I'd seen, before GitHub came along. CVS was still horrible and still in use in a lot of places, though.

I mean, git was '05 and GitHub was '08, so it's not like the stats will say much one way or another. Stack Overflow only added it to their survey in 2015. No source of truth, only anecdotes.

asfafwewfad · 18 minutes ago
I have to hard disagree on that. I know many developers personally who were on SourceForge and Google Code before and migrated to GitHub specifically because it offered git.
cess11 · 14 minutes ago
I don't think SVN and Mercurial were more widely used than git before Github became popular, but Github definitely killed off most of the use of those.
hansmayer · 27 minutes ago
It seems at this point that everyone and their mother, i.e. "we", are building "tools" for which "we" mostly hope the VC money will materialise. Use cases are not important - if OpenAI can essentially work with Monopoly money, why can't "we" do it too?
TeMPOraL · 16 minutes ago
> if OpenAI can essentially work with Monopoly money, why can't "we" do it too?

The answer, in case anyone wonders: because OpenAI is providing a general-purpose tool that has the potential to subsume most of the software industry; "we" are merely setting up toll gates around what will ultimately become a bunch of tools for LLMs, and trying to pass it off as a "product".

necovek · 2 hours ago
I do not think that's how it worked out for GitHub: I'd rather say that Git (as complex as it was to use) succeeded due to becoming the basis of GitHub (with its simple, clean interface).

At the time, there were multiple code hosting platforms like SourceForge, FSF Savannah, and Canonical's Launchpad.net, and most development was still done in SVN, with Git, Bazaar, and Mercurial the upstart "distributed" VCSes with similar penetration.

prerok · an hour ago
Yes, development was being done in SVN but it was a huge pain. Continuous communication was required with the server (history lookups took ages, changing a file required a checkout, etc.) and that was just horribly inefficient for distributed teams. Even within Europe, much more so when cross-continent.

A DVCS was definitely required. And I would say git won out due to Linus inventing and then backing it, not because of a platform that would serve it.

ashtom · 2 hours ago
Yes to all that. And GitLab the company was only founded in 2014 (OSS project started in 2011) and ran through YC in 2015, seven years after GitHub launched.
fnord77 · an hour ago
and most of those, except maybe gitlab, were clunky AF to use
BatteryMountain · 2 hours ago
Of the thousands, a handful will prevail. Most of it is vaporware, just like in any boom. Every single industry has this problem; copy-cats, fakes & frauds.

"Buy my fancy oil for your coal shovel and the coal will turn into gold. If you pay for premium, you don't have to shovel yourself."

If everything goes right, there won't be a coal mine needed.

ashtom · 2 hours ago
I'd bet that fewer people had their source code in git in 2008 than the number of developers using the various coding agents today. And the open-source project we published today hooks into the existing workflow for those developers, in Claude Code and in Gemini CLI. Time will tell the rest. We will publish regular updates and you can judge us on those results.
jameslk · an hour ago
At least for me, I have felt like the chat history in an agent is often just as important as, and potentially even more important than, the source code it generates. The code is merely the compiled result of my explanations of intent and goals. That is, the business logic and domain expertise is trapped in my brain, which isn't very scalable.

Versioning and tracking the true source code, my thoughts, or even the thoughts of other agents and their findings, seems like a logical next step. A hosted central place for it and the infrastructure required to store the immense data created by constantly churning agents that arrive at a certain result seems like the challenge many seem to be missing here.

I wish you the best of luck with your startup.

gherkinnn · 3 hours ago
The hype is the product
surrTurr · 23 minutes ago
the workflow exists

my code is 90% ai generated at this point

nikanj · an hour ago
We are building tools and hoping an exit materializes. There’s so much funny money in AI right now, getting life-altering money seems easily attainable
N_Lens · 2 hours ago
HN is full of AI agent hype posts. I have yet to see legitimate and functional agent orchestration solving real problems, whether for scale or velocity.
crossroadsguy · 2 hours ago
> Entire, backed by a $60 million

That's the point of the post, and helpfully it was added at the top as half of a one-sentence TL;DR. Will it succeed or not? Well, that's a coin toss; it always has been.

marcosqanil · 2 hours ago
I mean, pretty much all big startups begin as "niche" things that people might care about later. Tesla, Airbnb, Twitch... and countless failures too. It's just how the game is.
straydusk · 9 hours ago
> Checkpoints are a new primitive that automatically captures agent context as first-class, versioned data in Git. When you commit code generated by an agent, Checkpoints capture the full session alongside the commit: the transcript, prompts, files touched, token usage, tool calls and more.

This thread is extremely negative - if you can't see the value in this, I don't know what to tell you.
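For concreteness, a checkpoint object as described in the quote might look roughly like the following. This is a hypothetical sketch only; the field names and shape are assumptions, not Entire's actual schema:

```python
import json

# Hypothetical shape of a "checkpoint": agent session context captured
# alongside a commit. Field names here are illustrative assumptions.
def make_checkpoint(commit_sha, transcript, prompts, files_touched,
                    token_usage, tool_calls):
    return {
        "commit": commit_sha,
        "transcript": transcript,        # full agent session text
        "prompts": prompts,              # user prompts that drove the change
        "files_touched": files_touched,
        "token_usage": token_usage,
        "tool_calls": tool_calls,
    }

checkpoint = make_checkpoint(
    commit_sha="abc123",
    transcript=["user: add retry logic", "agent: added exponential backoff"],
    prompts=["add retry logic"],
    files_touched=["client.py"],
    token_usage={"input": 1200, "output": 450},
    tool_calls=[{"tool": "Edit", "file": "client.py"}],
)
print(json.dumps(checkpoint, indent=2))
```

The point of keying everything by commit SHA is that the reasoning trail stays attached to the exact diff it produced.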

toraway · 4 hours ago
What kind of barrier/moat/network effects/etc would prevent someone with a Claude Code subscription from replicating whatever "innovation" is so uniquely valuable here?

It's somewhat strange to regularly read HN threads confidently asserting that the cost of software is trending toward zero and software engineering as a profession is dead, but also that an AI dev tool that basically hooks onto Git/Claude Code/terminal session history is worth multiples of $60 million.

jwbron · 2 hours ago
I built a basic copy in about an hour with my own "platform for ai agents" that I built out over the last week: https://github.com/jwbron/egg/pull/504, and refined it here: https://github.com/jwbron/egg/pull/517 (though right after I merged this I blew through my weekly token quota for my second Claude Max 20x account, so I haven't been able to test it out yet).

I think your point is valid and I've been having the same thoughts. My tooling is still in the experimental phase, but I can move so quickly that I'm having trouble grasping how products like this will survive. If I can build this out in a week and copy an idea like this one (which is a great one, mind you) in an hour, what's the value of paying someone for a product like this vs just building it myself?

jameslk · 3 hours ago
> What kind of barrier/moat/network effects/etc would prevent someone with a Claude Code subscription from replicating whatever "innovation" is so uniquely valuable here?

You are correct, that isn't the moat. Writing the software is the easy part

cush · an hour ago
There's no way this company is just a few git and Claude hooks with a CLI. They're definitely working on a SaaS - something else that isn't open source that this primitive is the basis of. Like a GitHub for agent code.
ryanjshaw · an hour ago
This comment feels word-for-word like the legendary Dropbox critique on HN.
psandor · 2 hours ago
If they had wanted a moat for this part of their offering, they wouldn’t have open-sourced it.

This is not their offering, this is a tool to raise interest.

elif · 3 hours ago
The same moat git had over svn: a better mental paradigm on top of the same fundamental system, better suited to how SWE changed over a decade.
YetAnotherNick · 3 hours ago
> HN threads confidently asserting

I have never seen any thread that unanimously asserts this. Even if one did, treating HN/Reddit assertions as evidence is the wrong way to look at things.

bambax · an hour ago
I currently develop small utilities with the help of AI, but am far from vibe coding or using agents. I review every single suggestion and do some refactoring at each step, before any commit (sometimes heavy refactoring; sometimes reorganizing everything).

In my experience LLMs tend to touch everything all of the time and don't naturally think about simplification, centralization, and separation of concerns. They don't care about structure; they're all over the place. One needs to breathe down their necks to get anything organized.

Maybe there's a way to give them more autonomy by writing the whole program in pseudo-code with just function signatures and let them flesh it out. I haven't tried that yet but it may be interesting.

nialv7 · 8 hours ago
Sure... you `git add` the context text generated by AI and `git commit` it; could be useful. Is that worth $60 million?
Klonoar · 8 hours ago
It’s good to know that a few decades later the same generic Dropbox-weekend take can be made.
sellmesoap · 15 minutes ago
Well, a famous name is attached; this could be the start of the product that replaces GitHub. Building a GitHub 2 would give the opportunity to fix mistakes that are too entrenched to change at GitHub, and who better to try? I'm uncharacteristically optimistic on this one; I'd give it a try!
guiambros · 3 hours ago
It's funny how HN'ers frequently judge ideas based on the complexity of implementation, not value.

I still remember the reaction when Dropbox was created: "It's just file sharing; I can build my own with FTP. What value could it possibly create?"

Aperocky · 5 hours ago
Discord is not prized because you can send a message to a chatroom, or any of the hooks and functions.

It's because of everybody there.

Currently no one is on Entire - the investors are betting they will be.

androiddrew · 8 hours ago
They raised 60 million. The investors think it’s worth 600M+
anonzzzies · 7 hours ago
We have had this for ages now... I just don't have access to the sort of people willing to pass me $60M for it. I never thought it was worth anything, really; it was a trivial-to-implement afterthought.
UqWBcuFx6NV4r · 8 hours ago
I love this one so much! The arbitrary decision to cherry-pick a particular product to critique to this degree, when it's something that could be said about 99% of the stuff SV churns out, including in all likelihood anything you've ever worked on.
surfinganalyst · 4 hours ago
Couldn't we capture this value with a git hook?
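A minimal version of that idea, as a sketch rather than what Entire actually ships, could be a post-commit hook that snapshots an agent session log next to the commit SHA. The session-log path and output directory here are assumptions:

```python
#!/usr/bin/env python3
"""Sketch of a git post-commit hook: copy the current agent session log
into .agent-context/<sha>.json so context is versioned with the code.
The .agent/session.json path is hypothetical; adjust for your agent."""
import pathlib
import shutil
import subprocess

SESSION_LOG = pathlib.Path(".agent/session.json")  # assumed location

def snapshot_context():
    """Return the HEAD SHA after snapshotting, or None outside a repo."""
    try:
        proc = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True,
        )
    except FileNotFoundError:  # git not installed
        return None
    if proc.returncode != 0:   # not a git repo, or no commits yet
        return None
    sha = proc.stdout.strip()
    out_dir = pathlib.Path(".agent-context")
    out_dir.mkdir(exist_ok=True)
    if SESSION_LOG.exists():
        shutil.copy(SESSION_LOG, out_dir / f"{sha}.json")
    return sha

if __name__ == "__main__":
    snapshot_context()
```

Dropped into `.git/hooks/post-commit`, this gives the "context rides along with the commit" behavior, though without the separate metadata branch or any of the tooling around it.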

buildbuildbuild · 7 hours ago
The unannounced web collaboration platform in-progress might be.
sailfast · 7 hours ago
300 million, apparently.
paulddraper · 8 hours ago
That is their first feature.

If it were also their last, I would be inclined to agree.

vardalab · an hour ago
For the last three or four months, what I've been doing is anytime I have Claude write a comment on an issue, it just adds a session ID, file path and the VM it is on. That way, whenever we have some stuff that comes up, we just search through issues and then we can also retrace the session that produced the work and it's all traceable. In general, I just work through gitea issues and sometimes beads. I couldn't stand having all these MD files in my repo because I was just drowning in documentation, so having it in issues has been working really nicely and agents know how to work with issues. I did have it write a gitea utility and they are pretty happy using/abusing it. Anytime I see that they call it in some way that generates errors, I just have them improve the utility. And by this point, it pretty much always works. It's been really nice.
bmurphy1976 · 6 hours ago
I haven't read the article yet but this conversation reminds me of Docker. Lots of people "didn't get it." I told them at the time: if you don't get it you aren't ready for it yet so don't worry about it. When you do need it, you'll get it and then you'll use it and never look back. Look at where we are with containers now.
darkwater · an hour ago
And look where Docker Inc is now (which is one of the points some critics are making)
lubujackson · 3 hours ago
Wow, read through the comments and you weren't joking. I attribute this to the crossroads of "this release is v0.1 of what we are building" and the HN crowd, who have been scrolling past 120 AI frameworks and hot takes daily and have no patience for anything that isn't immediately 100% useful to them in the moment.

I find the framing of the problem to be very accurate, which is very encouraging. People saying "I can roll my own in a weekend" might be right, but they don't have $60M in the bank, which makes all the difference.

My take is this product is getting released right now because they need the data to build on. The raw data is the thing, then they can crunch numbers and build some analysis to produce dynamic context, possibly using shared patterns across repos.

Despite what HN thinks, $60M doesn't just fall in your lap without a clear plan. The moat is the trust people will have to upload their data, not the code that runs it. I expect to see some interesting things from this in the coming months.

vasachi · 32 minutes ago
Didn’t Juicero get more than $100M? Do you think they had a clear plan? How much did Rome get? Did they have a clear plan?
dpweb · 8 hours ago
I know about "the entire developer world has been refactored" and all, but what exactly does this thing do?

Runs git checkpoint every time an agent makes changes?

konaraddi · 9 hours ago
100% agree because there’s a lot of value in understanding how and why past code was written. It can be used to make better decisions faster around code to write in the future.

E.g., if you’ve ever wondered why code was written in a particular way X instead of Y then you’ll have the context to understand whether X is still relevant or if Y can be adopted.

E.g., easier to prompt AI to write the next commit when it knows all the context behind the current/previous commit’s development process.

buster · 2 hours ago
But that's not what's in the whole context. The whole context contains a lot of noise and false "thoughts". What the AI needs to do is document the software project in an efficient manner, without duplication. That's not what this tool is doing. I question the value of storing all that crap in git.
majormajor · 3 hours ago
I wonder how often that context will actually be that valuable, vs. just more bloat filling up future API calls and burning tokens.
bergheim · 8 hours ago
A year ago I added memory to my Emacs helper [0]. It was just lines in org-mode. I thought it was so stupid. It worked though. Sort of.

That's how a trillion dollar company also does it, turns out.

0: https://github.com/karthink/gptel

vrosas · 6 hours ago
ehhhh is it really that useful though? Sounds way more noisy than anything, and a great way to burn through tokens. It's like founding a startup to solve the problem of people squashing their commits. Also, it sounds like something Claude Code/Codex/etc could quickly add an extension for.
weird-eye-issue · 4 hours ago
How would this use any extra tokens? Just seems like it's serializing the existing context
throw10920 · 4 hours ago
Maybe use critical thinking instead of a mindless dismissal?

The fact that you haven't offered a single counterargument to any other posters' points and have to resort to pearl-clutching is pretty good proof that you can't actually respond to any points and are just emotionally lashing out.

Aeolun · 7 hours ago
This is literally what claude code already does minus the commit attachment. It’s just very fancy marketing speak for the exact same thing.

I’m happy to believe maybe they’ll make something useful with $60M (quite a lot for a seed round, though), but maybe don’t get all lyrical about what they have now.

sothatsit · 7 hours ago
Claude Code captures this locally, not in version control alongside commits.
hoten · 9 hours ago
I see the utility in this as an extension to git / source control. But how do VCs make money off it?

soulofmischief · 8 hours ago
I built out the same thing in my own custom software forge. Every single part of the collaborative development process is recorded.
stitched2gethr · 5 hours ago
And how are you using it now? Have you seen real value weeks or months on?
tbrownaw · 5 hours ago
[flagged]
dang · 5 hours ago
Please don't use quotation marks to make it look like you're quoting someone when you aren't. That's an internet snark trope and we're trying to avoid those on HN.

https://news.ycombinator.com/newsguidelines.html

MrDarcy · 5 hours ago
Look it’s obvious at this point to anyone who is actually using the tools.

We can articulate it but why should we bother when it’s so obvious.

We are at an inflection point where discussion about this, even on HN, is useless until the people in the conversation are on a similar level again. Until then we have a very large gap in a bimodal distribution, and it’s fruitless to talk to the other population.

sillyconwalle · 4 hours ago
Some Tom Dick and Harry to VCs: I have a proposal for you.

VCs: what is it

Tom Dick & Harry: AI

VCs: get the ** out of here, we already burnt enough money and will never see it back

Tom Dick & Harry: hear me out this is different

VCs: ok, you have 5 minutes to explain your product to me

Tom Dick & Harry: I dont have one

VCs: get the ** out of here

Tom Dick & Harry: hear me out

VCs: ok, you have 30 seconds to impress us.

Tom Dick & Harry: I just quit Microslop and still have high level contacts there

VCs: Hot damn!!! you are our lottery ticket to recoup all the money we have lost in other ventures. This is going to be a race against time, before your contacts go stale. Here's 60M for you, wine and dine your friends with it. On your way out you will find some AI generated product names and some vague product descriptions. Pick one and slap it on some website and announce our deal. Now get the ** out of here.

woah · 8 hours ago
I have an agent write a file with this template each run:

```markdown
# Run NNNN

## First Impressions
[What state is the project in? What did the last agent leave?]

## Plan
[What will you work on this iteration? Why?]

## Work Log
[Fill this in as you work]

## Discoveries
[What did you learn? What surprised you? What should the next agent know?]

## Summary
[Fill this in before committing]
```

This is surprisingly effective and lets agents easily continue in progress work and understand past decisions.

Aeolun · 7 hours ago
I have CURRENT_TASK.md that does more or less the same thing. It also gets committed to git. So I guess that’s entire? Wish I’d realized I was sitting on a 60M idea…
chapz · 39 minutes ago
It's sad to see that the ex-GitHub CEO didn't make enough money to just kick-start his company himself, but needs external money, which will later dictate how the company works or sell out the users and the product for the next exit..

So.. yeah. Ignore and move on.

grey-area · a minute ago
Maybe he prefers to set someone else’s money on fire.
giancarlostoro · 16 hours ago
> Spec-driven development is becoming the primary driver of code generation.

This sounds like my current "phase" of AI coding. I have had so many project ideas for years that I can just spec out, everything I've thought about, all the little ideas and details, things I only had time to think about, never implement. I then feed it to Claude, and watch it meet my every specification, I can then test it, note any bugs, recompile and re-test. I can review the code, as you would a Junior you're mentoring, and have it rewrite it in a specific pattern.

Funnily enough, I love Beads, but did not like that it uses git hooks for the DB, and I can't tie tickets back to ticketing systems, so I've been building my own alternative; mine just syncs to and from GitHub issues. I think this is probably overkill for what's been a solved thing: ticketing systems.

visarga · 16 hours ago
I am going lower level - every individual work item is a "task.md" file. It starts as a user ask, then I add planning, and then the agent checks gates ("[ ]") on each subtask as it works through it. In the end the task files remain part of the project, documenting the work done. I also keep an up-to-date mind map for the whole project to speed up start time.

And I use git hooks on the tool event to print the current open gate (subtask) from task.md so the agent never deviates from the plan; this is important if you use yolo mode. It might be an original technique; I've never heard of anyone else using it. A stickie note in the tool response, printed by a hook, that highlights the current task and where the current task.md is located. I have seen stretches of 10 or 15 minutes of good work done this way with no user intervention. Like a "Markdown Turing Machine".
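The "stickie note" trick could be sketched like this: a small script, run from a hook, that finds the first unchecked gate in task.md and prints it into the tool response. This is a hypothetical reconstruction; the checkbox list format is assumed from the comment:

```python
import pathlib
import re

def current_gate(task_file="task.md"):
    """Return the first unchecked subtask ("- [ ] ..." gate), or None."""
    path = pathlib.Path(task_file)
    if not path.exists():
        return None
    for line in path.read_text().splitlines():
        # Match Markdown checkboxes like "- [ ] implement parser"
        m = re.match(r"\s*[-*]\s*\[ \]\s*(.+)", line)
        if m:
            return m.group(1).strip()
    return None  # all gates checked

gate = current_gate()
if gate:
    # Printed into the tool response so the agent sees its current subtask.
    print(f"CURRENT GATE: {gate} (see task.md)")
```

Because the reminder rides along on every tool call, the agent is re-anchored to the plan even deep into a long yolo-mode run.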

RAMJAC · 2 hours ago
If you are using claude, in your project's `.claude/settings.json` you can add something like:

```json
"hooks": {
  "PostToolUse": [
    {
      "matcher": "Edit|Write",
      "hooks": [
        {
          "type": "command",
          "command": "mix format ${file} 2>/dev/null || true"
        }
      ]
    }
  ],
  "TaskCompleted": [
    {
      "matcher": "",
      "hooks": [
        {
          "type": "prompt",
          "prompt": "reminder: run mix test if implementation is complete"
        }
      ]
    }
  ],
  "Stop": [
    {
      "hooks": [
        {
          "type": "prompt",
          "prompt": "Check if all tasks are complete. If not, respond with {\"ok\": false, \"reason\": \"what remains to be done\"}."
        }
      ]
    }
  ]
},
```

Just update it to iterate over your file. It should be a little easier to manage than git hooks and can hammer in testing.

giancarlostoro · 16 hours ago
That's hilarious, I called it gates too for my reimplementation of Beads. Still working on it a bit, but this is the one I built out a month back, got it into git a week ago.

For me a gate is: a dependency that must pass before a task is closed. It could be human verification, unit testing, or even "can I curl this?" "can I build this?" and gates can be re-used, but every task MUST have one gate.

My issue with git hooks integration at that level is, and I know this sounds crazy, that not everyone is using git. I run into legacy projects, or maybe it's still greenfield as heck and all you have is a POC zip file your manager emailed you for whatever awful reason. I like my tooling to be agnostic to models and external tooling so it can easily integrate everywhere.

Yours sounds pretty awesome for what it's worth, just not for me; wish you the best of luck.

https://github.com/Giancarlos/GuardRails
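The gate concept described above ("a dependency that must pass before a task is closed") could be sketched as a predicate attached to a task. This is a hypothetical illustration, not GuardRails' actual implementation:

```python
import subprocess
from dataclasses import dataclass

@dataclass
class Gate:
    """A check that must pass before a task may be closed, e.g.
    "can I build this?" or "can I curl this?" as a shell command."""
    name: str
    command: str

    def passes(self):
        return subprocess.run(self.command, shell=True).returncode == 0

@dataclass
class Task:
    title: str
    gates: list  # every task MUST have at least one gate

    def can_close(self):
        # A task with no gates can never be closed, by design.
        return bool(self.gates) and all(g.passes() for g in self.gates)

build_gate = Gate(name="builds", command="true")  # stand-in for a real build
task = Task(title="add retry logic", gates=[build_gate])
print(task.can_close())
```

Gates being plain (name, command) pairs is what makes them reusable across tasks: "can I build this?" is defined once and attached wherever it applies.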

dworks · 10 hours ago
that's similar to the workflow i built, inspired by Recursive Language Models: https://github.com/doubleuuser/rlm-workflow
mattmanser · 12 hours ago
This is built into Claude Code: when you're in plan mode it makes a task MD file, even giving it a random name and storing it in your .claude folder.

I'm confused how this is any different to the pretty standard agentic coding workflow?

samename · 12 hours ago
Me too. I've been using spec-kitty [0], a fork of Spec Kit. Quite amazing how a short interview on an idea can produce full documents of requirements, specs, tasks, etc. After a few AI projects, this is my first time using spec driven development, and it is definitely an improvement.

[0]: https://github.com/Priivacy-ai/spec-kitty

giancarlostoro · 11 hours ago
Nice, I'll check yours out after work, looks pretty polished.
wild_egg · 10 hours ago
Task management is fundamentally straightforward and yet workflow specific enough that I recommend everyone just spend a few hours building their own tools at this point.

Beads is a nightmare.

dgunay · 8 hours ago
I started off with the original beads and it was definitely a nightmare. However I would recommend using https://github.com/Dicklesworthstone/beads_rust - it's a much simpler implementation of the same concept, without all the random extra stuff thrown on to support Gas Town.
williamstein · 8 hours ago
> Checkpoints run as a Git-aware CLI. On every commit generated by an agent, it writes a structured checkpoint object and associates it with the commit SHA. The code stays exactly the same, we just add context as first-class metadata. When you push your commit, Checkpoints also pushes this metadata to a separate branch (entire/checkpoints/v1), giving you a complete, append-only audit log inside your repository. As a result, every change can now be traced back not only to a diff, but to the reasoning that produced it.

The context for every single turn could in theory be nearly 1MB. Since this context is being stored in the repo and constantly changing, after a thousand turns, won't it make just doing a "git checkout" start to be really heavy?

For example, codex-cli stores every single context for a given session in a jsonl file (in .codex). I've easily got that file to hit 4 GB in size, just working for a few days; amusingly, codex-cli would then take many GB of RAM at startup. I ended up writing a script that trims the jsonl history automatically periodically. The latest codex-cli has an optional sqlite store for context state.

My guess is that by "context", Checkpoints doesn't actually mean the contents of the context window, but just distilled reasoning traces, which are more manageable... but still can be pretty large.
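The jsonl-trimming workaround mentioned above is easy to reproduce. Here is a sketch that keeps only the last N lines of a session log; the path and threshold are assumptions, and since each line of a .jsonl file is one complete JSON record, truncating by line is safe:

```python
import pathlib

def trim_jsonl(path, keep_last=500):
    """Keep only the last `keep_last` lines of a .jsonl session log.
    Returns the number of lines remaining after trimming."""
    p = pathlib.Path(path)
    lines = p.read_text().splitlines(keepends=True)
    if len(lines) > keep_last:
        p.write_text("".join(lines[-keep_last:]))
    return min(len(lines), keep_last)
```

Run periodically (cron, or a shell alias around the agent launcher), this keeps a multi-GB session file from ballooning startup memory, at the cost of discarding the oldest turns.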

otterley · 5 hours ago
To add to your comment, I think the next logical question is, “then what?” Surely one can’t build a sustainable business storing these records alone.
snemvalts · an hour ago
MCP server with RAG to feed it back to agents when they are working on a piece of code, and bob's your uncle
jiveturkey · an hour ago
> won't it make just doing a "git checkout" start to be really heavy?

not really? doesn't git checkout only retrieve the current branch? the checkpoint data is in another branch.

we can presume that the tooling for this doesn't expect you to manage the checkpoint branch directly. each checkpoint object is associated with a commit sha (in your working branch, master or whatever). the tooling presumably would just make sure you have the checkpoints for the nearby (in history) commit sha's, and system prompt for the agent will help it do its thing.

i mean all that is trivial. not worth a $60MM investment.

i suspect what is really going on is that the context makes it back to the origin server. this allows _cloud_ agents, independent of your local claude session, to pick up the context. or for developer-to-developer handoff with full context. or to pick up context from a feature branch (as you switch across branches rapidly) later, easily. yes? you'll have to excuse me, i'm not well informed on how LLM coding agents actually work in that way (where the context is kept, how easy it is to pick it back up again). this is just a bit of opining based on why this is worth 20% of $300MM.

if i look at https://chunkhound.github.io it makes me think entire is a version of that. they'll add an MCP server and you won't have to think about it.

finally, because there is a commit sha association for each checkpoint, i would be worried that history rewrites or force pushes MUST use the tooling otherwise you'd end up screwing up the historical context badly.