The context for every single turn could in theory be nearly 1 MB. Since this context is being stored in the repo and constantly changing, won't just doing a "git checkout" start to get really heavy after a thousand turns?
For example, codex-cli stores every single context for a given session in a jsonl file (in .codex). I've easily had that file hit 4 GB in size just working for a few days; amusingly, codex-cli would then take many GB of RAM at startup. I ended up writing a script that periodically trims the jsonl history. The latest codex-cli has an optional sqlite store for context state.
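Roughly the shape of such a trimmer, for the curious. The ~/.codex/sessions path, the *.jsonl glob, and the "first record is session metadata" rule are my assumptions about the layout, not documented codex-cli behaviour; check what your version actually writes before running anything like this.

```python
#!/usr/bin/env python3
"""Sketch of a jsonl history trimmer. Paths and the "keep the first record,
it's probably session metadata" rule are assumptions, not documented behaviour."""
from pathlib import Path

KEEP_LAST = 2000  # trailing records to keep per session file

def trim(path: Path, keep: int = KEEP_LAST) -> None:
    # fine for a sketch; for multi-GB files you'd want to stream instead
    lines = path.read_text(encoding="utf-8").splitlines(keepends=True)
    if len(lines) <= keep + 1:
        return
    trimmed = lines[:1] + lines[-keep:]  # first record + recent tail
    tmp = path.with_name(path.name + ".tmp")
    tmp.write_text("".join(trimmed), encoding="utf-8")
    tmp.replace(path)  # swap in place so a crash doesn't lose the file

if __name__ == "__main__":
    for f in Path.home().joinpath(".codex", "sessions").glob("*.jsonl"):
        trim(f)
```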
My guess is that by "context", Checkpoints doesn't actually mean the contents of the context window, but just distilled reasoning traces, which are more manageable... but still can be pretty large.
not really? doesn't git checkout only materialize the branch you check out? the checkpoint data lives in another branch.
we can presume that the tooling for this doesn't expect you to manage the checkpoint branch directly. each checkpoint object is associated with a commit sha (in your working branch, master or whatever). the tooling presumably just makes sure you have the checkpoints for the nearby (in history) commit shas, and the system prompt for the agent helps it do its thing.
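to make that concrete, purely as a guess at the mechanics: the refs/checkpoints name and the one-file-per-sha layout below are hypothetical, not anything the product documents, but the lookup could be as dumb as

```python
"""Speculative sketch: look up checkpoint data for nearby commits, assuming a
hypothetical side ref (refs/checkpoints) that stores one file per commit SHA.
None of these ref/path names come from the actual tooling."""
import subprocess

def git(*args: str) -> str:
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

def checkpoint_for(commit: str) -> str | None:
    """Return the checkpoint stored under refs/checkpoints for `commit`, if any."""
    try:
        # hypothetical layout: one file named after the commit SHA
        return git("show", f"refs/checkpoints:{commit}")
    except subprocess.CalledProcessError:
        return None

def nearby_checkpoints(n: int = 10) -> dict[str, str]:
    """Walk the last n commits on the working branch and collect their checkpoints."""
    shas = git("rev-list", f"--max-count={n}", "HEAD").splitlines()
    return {sha: cp for sha in shas if (cp := checkpoint_for(sha)) is not None}
```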
i mean all that is trivial. not worth a $60MM investment.
i suspect what is really going on is that the context makes it back to the origin server. this allows _cloud_ agents, independent of your local claude session, to pick up the context. or developer-to-developer handoff with full context. or easily picking context back up later from a feature branch (as you switch across branches rapidly). yes? you'll have to excuse me, i'm not well informed on how LLM coding agents actually work in that regard (where the context is kept, how easy it is to pick it back up again). this is just a bit of opining about why this is worth 20% of $300MM.
if i look at https://chunkhound.github.io it makes me think Entire is a version of that. they'll add an MCP server and you won't have to think about it.
finally, because each checkpoint is associated with a commit sha, i would be worried that history rewrites or force pushes MUST go through the tooling, otherwise you'd end up screwing up the historical context badly.
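the kind of post-rewrite sanity check i'd want (again hypothetical, same made-up checkpoint layout as above) is just "which checkpoints now point at commits that are no longer reachable":

```python
"""Hypothetical sanity check after a rebase/force push: flag checkpoints whose
commit SHA is no longer reachable from the working branch."""
import subprocess

def is_reachable(sha: str) -> bool:
    # exit code 0 means `sha` is an ancestor of HEAD, i.e. still in history
    result = subprocess.run(["git", "merge-base", "--is-ancestor", sha, "HEAD"],
                            capture_output=True)
    return result.returncode == 0

def orphaned_checkpoints(checkpoint_shas: list[str]) -> list[str]:
    """Checkpoints whose commits were rewritten away and now dangle."""
    return [sha for sha in checkpoint_shas if not is_reachable(sha)]
```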
There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.
That's not an upside in the sense of being unique to LLM- versus human-written code. When writing it yourself, you also need to make it crystal clear; you just do that in the language of implementation.
Interestingly, from 10 Gbit/s we now also have binary divisions, so 5 Gbit/s and 2.5 Gbit/s.
Even at slower speeds, these were traditionally always decimal-based: we call it 50 bps, 100 bps, 150 bps, 300 bps, 1200 bps, 2400 bps, 9600 bps, 19200 bps, and then we had the odd one out, 56k (actually 57600 bps), where the k means 1024 (approximately), making it the first and last common speed to use a base-2 kilo. Once you get into Mbps it's back to decimal.
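To make the arithmetic explicit (nothing here beyond what's stated above):

```python
# binary divisions of 10 Gbit/s
print(10 / 2, 10 / 4)    # 5.0 2.5 -> 5 Gbit/s and 2.5 Gbit/s

# "56k" read with a base-2 (1024) kilo vs a decimal (1000) kilo
print(57600 / 1024)      # 56.25 -> roughly "56k"
print(57600 / 1000)      # 57.6  -> what a decimal kilo would call it
```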
I mean, that's not quite it. By that logic, had memory been defined in decimal from the start (happenstance), we'd have 4000-byte pages.
Now Ethernet is interesting... the data rates are defined in decimal, but almost everything else about it is octets, starting with the preamble! But the payload is up to an annoying 1500 (decimal) octets. The _minimum_ frame length is defined so CSMA/CD can work, but the max could have been anything.
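For the minimum-frame point, the classic 10 Mbit/s numbers work out like this (64-octet minimum frame, i.e. 512 bit times of slot time):

```python
# classic 10 Mbit/s CSMA/CD arithmetic: the 64-octet minimum frame is what
# guarantees the sender is still transmitting when a collision echo returns
rate_bps  = 10_000_000        # data rate, defined in decimal
min_frame = 64 * 8            # 64 octets = 512 bits (the slot time)
slot_time_us = min_frame / rate_bps * 1e6
print(slot_time_us, "microseconds")   # 51.2 -- the round-trip budget across
                                      # the maximum allowed network diameter
```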
A similar signal will be there for any email provider or server-side filter that downloads the content for malware inspection.
Pixel trackers are almost never implemented in-house, because it's basically impossible to run your own email sending these days. So the tracker is a function of the batteries-included email sending provider. Those guys do this for a living, so they are sophisticated, and they filter out the image downloads that come from the receiving provider itself.
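As an illustration of the kind of filtering meant here (not any provider's actual implementation; the user-agent hints are just examples), the open counter simply discounts pixel fetches that look like they came from a mail provider's image proxy or a server-side scanner:

```python
# illustrative only -- not any provider's real logic; the user-agent hints
# below are examples of proxy/scanner fetches you would not count as opens
PROXY_UA_HINTS = ("GoogleImageProxy",)   # e.g. Gmail's image proxy

def count_as_open(user_agent: str) -> bool:
    """Count only pixel fetches that look like a human's mail client."""
    return not any(hint in user_agent for hint in PROXY_UA_HINTS)

print(count_as_open("Mozilla/5.0 (via ggpht.com GoogleImageProxy)"))          # False
print(count_as_open("Mozilla/5.0 (Windows NT 10.0; rv:131.0) Firefox/131.0")) # True
```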
umm, anti-glare/matte used to be the norm for LCDs. Around 2005-2006 that changed: as laptops became more of a consumer product and DVD watching became an important use case, glossy screens became the norm.
https://forum.thinkpads.com/viewtopic.php?t=26396
So, I would call it a massive step backwards! The 2006 MBP had an optional glossy screen, and the 2008 model was the first with glossy by default. Around 2012 Apple dropped the matte option altogether.
"For infrequent cleaning of hard-to-remove smudges, you can moisten the cloth with a 70-percent isopropyl alcohol (IPA) solution."

source: https://support.apple.com/en-us/104948

But never apply it directly to the screen. I think it's important to mention that you don't just use "some alcohol"; it should be a 70% isopropyl alcohol solution.
Btw, alcohol is a very good way to destroy the old glossy screens (non-nano-texture).