jiveturkey commented on Europe's $24T Breakup with Visa and Mastercard Has Begun   europeanbusinessmagazine.... · Posted by u/NewCzech
krunck · 16 hours ago
This. And don't forget that most businesses have terms in their payment processing contracts (set by Visa/MC) that prevent the business from directly charging card users the card processing fees. Which means that everybody - even cash users - pays those fees. What a racket.
jiveturkey · 12 minutes ago
There are lots of restaurants in the US these days that charge 3% for use of any credit card. One I've been to even has a sign posted at the entrance about it, saying that it's legal to do so. They must have gotten a lot of complaints that it was somehow illegal, or perhaps against card processing rules - because it's one thing to post a sign saying you charge the fee; it's another for that sign to mention the legality of it.
jiveturkey commented on Ex-GitHub CEO launches a new developer platform for AI agents   entire.io/blog/hello-enti... · Posted by u/meetpateltech
williamstein · 8 hours ago
> Checkpoints run as a Git-aware CLI. On every commit generated by an agent, it writes a structured checkpoint object and associates it with the commit SHA. The code stays exactly the same, we just add context as first-class metadata. When you push your commit, Checkpoints also pushes this metadata to a separate branch (entire/checkpoints/v1), giving you a complete, append-only audit log inside your repository. As a result, every change can now be traced back not only to a diff, but to the reasoning that produced it.

The context for every single turn could in theory be nearly 1MB. Since this context is being stored in the repo and constantly changing, after a thousand turns, won't it make just doing a "git checkout" start to be really heavy?

For example, codex-cli stores every turn's context for a given session in a jsonl file (in .codex). I've easily gotten that file to hit 4 GB in size just working for a few days; amusingly, codex-cli would then take many GB of RAM at startup. I ended up writing a script that periodically trims the jsonl history. The latest codex-cli has an optional sqlite store for context state.
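A trimming script like the one described might look like this (a sketch only; the session path and the keep-count are assumptions, not codex-cli's actual layout):

```python
from pathlib import Path

def trim_jsonl(path: Path, keep: int = 1000) -> None:
    """Drop all but the most recent `keep` records from a JSONL session file."""
    lines = path.read_text().splitlines(keepends=True)
    if len(lines) > keep:
        path.write_text("".join(lines[-keep:]))

# e.g. run periodically from cron against ~/.codex/sessions/*.jsonl (path assumed)
```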

My guess is that by "context", Checkpoints doesn't actually mean the contents of the context window, but just distilled reasoning traces, which are more manageable... but still can be pretty large.

jiveturkey · an hour ago
> won't it make just doing a "git checkout" start to be really heavy?

not really? doesn't git checkout only retrieve the current branch? the checkpoint data is in another branch.

we can presume that the tooling for this doesn't expect you to manage the checkpoint branch directly. each checkpoint object is associated with a commit sha (in your working branch, master or whatever). the tooling presumably would just make sure you have the checkpoints for the nearby (in history) commit shas, and the system prompt for the agent will help it do its thing.
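as a toy illustration (the branch name comes from the post; the repo here is throwaway), a metadata branch sits alongside the working branch and checkouts of the working branch never touch it:

```python
# toy illustration: a separate audit-log ref coexists with day-to-day work;
# nothing about checking out the working branch reads the metadata branch
import subprocess, tempfile

def git(*args, cwd):
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
git("-c", "user.email=x@example.com", "-c", "user.name=x",
    "commit", "-q", "--allow-empty", "-m", "work: initial", cwd=repo)
git("branch", "entire/checkpoints/v1", cwd=repo)   # the audit-log ref
git("checkout", "-q", "-b", "feature", cwd=repo)   # normal work ignores it
log = git("log", "--oneline", "entire/checkpoints/v1", cwd=repo)
print(log.strip())
```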

i mean all that is trivial. not worth a $60MM investment.

i suspect what is really going on is that the context makes it back to the origin server. this allows _cloud_ agents, independent of your local claude session, to pick up the context. or for developer-to-developer handoff with full context. or to pick up context from a feature branch (as you switch across branches rapidly) later, easily. yes? you'll have to excuse me, i'm not well informed on how LLM coding agents actually work in that way (where the context is kept, how easy it is to pick it back up again). this is just a bit of opining based on why this is worth 20% of $300MM.

if i look at https://chunkhound.github.io it makes me think entire is a version of that. they'll add an MCP server and you won't have to think about it.

finally, because there is a commit sha association for each checkpoint, i would be worried that history rewrites or force pushes MUST go through the tooling, otherwise you'd end up badly screwing up the historical context.

jiveturkey commented on Ex-GitHub CEO launches a new developer platform for AI agents   entire.io/blog/hello-enti... · Posted by u/meetpateltech
shimman · 6 hours ago
These aren't new economics; it's just VC funds trying to boost their holdings by saying it's worth X because they said so. Frankly, the FTC should make it illegal.
jiveturkey · 2 hours ago
That's not how it works at all. Why stop at $300M - why didn't they just say $1BN out of the gate?
jiveturkey commented on I miss thinking hard   jernesto.com/articles/thi... · Posted by u/jernestomg
helloplanets · 7 days ago
And when programming with agentic tools, you need to actively push for the idea to not regress to the most obvious/average version. The amount of effort you need to expend on pushing the idea that deviates from the 'norm' (because it's novel), is actually comparable to the effort it takes to type something out by hand. Just two completely different types of effort.

There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.

jiveturkey · 7 days ago
> need to make it crystal clear

That's not an upside in the sense of being unique to LLM- vs human-written code. When writing it yourself, you also need to make the idea crystal clear - you just do it in the language of implementation.

jiveturkey commented on 1 kilobyte is precisely 1000 bytes?   waspdev.com/articles/2026... · Posted by u/surprisetalk
ralferoo · 7 days ago
There's a good reason that gigabit ethernet is 1000MBit/s and that's because it was defined in decimal from the start. We had 1MBit/s, then 10MBit/s, then 100MBit/s then 1000MBit/s and now 10Gbit/s.

Interestingly, from 10GBit/s, we now also have binary divisions, so 5GBit/s and 2.5GBit/s.

Even at slower speeds, these were traditionally always decimal-based - we say 50bps, 100bps, 150bps, 300bps, 1200bps, 2400bps, 9600bps, 19200bps - and then we had the odd one out, 56k (actually 57600bps), where the k means 1024 (approximately), making it the first and last common speed to use a base-2 kilo. Once you get into Mbit/s it's back to decimal.

jiveturkey · 7 days ago
> that's because it was defined in decimal from the start

I mean, that's not quite it. By that logic, had memory been defined in decimal from the start (happenstance), we'd have 4000-byte pages.

Now Ethernet is interesting ... the data rates are defined in decimal, but almost everything else about it is in octets! Starting with the preamble. But the payload is capped at an annoying 1500 (decimal) octets. The _minimum_ frame length is defined so that CSMA/CD works, but the max could have been anything.
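For context, that CSMA/CD minimum falls straight out of the slot time; a quick sanity check with the classic 10 Mbit/s numbers:

```python
# classic 10 Mbit/s Ethernet: a sender must still be transmitting when a
# collision from the far end of the segment can get back to it, so the
# minimum frame occupies the full 512-bit slot time
slot_time_bits = 512
min_frame_octets = slot_time_bits // 8
print(min_frame_octets)  # 64 octets - the familiar minimum frame size
```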

jiveturkey commented on 1 kilobyte is precisely 1000 bytes?   waspdev.com/articles/2026... · Posted by u/surprisetalk
jiveturkey · 7 days ago
Looking around their website, they appear to be an enthusiastic novice. I looked around because a hardware architecture course is part of any first-year syllabus, isn't it? The author clearly hasn't a clue about hardware, or how memory is implemented.
jiveturkey commented on County pays $600k to pentesters it arrested for assessing courthouse security   arstechnica.com/security/... · Posted by u/MBCook
adrr · 12 days ago
How much did they spend on lawyers?
jiveturkey · 12 days ago
I would guess this would be a contingency case, which would typically be 40%.
jiveturkey commented on That's not how email works   danq.me/2026/01/28/hsbc-d... · Posted by u/HotGarbage
Dwedit · 14 days ago
Gmail automatically downloads images ahead of time, so the tracking pixels will have been fetched by Gmail themselves regardless of when the user opens the email.
jiveturkey · 13 days ago
When Gmail downloads the image it identifies itself as GoogleImageProxy, and will be coming from a GCP/Google ASN.

A similar signal will be there for any email provider or server-side filter that downloads the content for malware inspection.

Pixel trackers are nearly never implemented in-house, because it's basically impossible to run your own email these days. So the tracker is a function of the batteries-included sending email provider. Those guys do this for a living, so they are sophisticated, and they filter out the provider's download of images.
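The filtering itself can be a simple User-Agent check; a sketch (the "GoogleImageProxy" marker is Gmail's, the function name is made up):

```python
# sketch: classify a pixel fetch as a provider-proxy download rather than a
# real open, assuming the proxy identifies itself in the User-Agent header
def is_proxy_fetch(user_agent: str) -> bool:
    proxy_markers = ("GoogleImageProxy",)  # extend per provider
    return any(marker in user_agent for marker in proxy_markers)

print(is_proxy_fetch("Mozilla/5.0 (via ggpht.com GoogleImageProxy)"))  # True
```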

jiveturkey commented on Notes on Apple's Nano Texture (2025)   jon.bo/posts/nano-texture... · Posted by u/dsr12
jiveturkey · 22 days ago
> massive step forward

umm, anti-glare/matte used to be the norm for LCDs. Around 2005-2006 that changed: as laptops became more of a consumer product and DVD watching became an important use case, glossy screens became the norm.

https://forum.thinkpads.com/viewtopic.php?t=26396

So I would call it a massive step backwards! The 2006 MBP had an optional glossy screen, and the 2008 model was the first with glossy by default. Around 2012 Apple dropped the matte option altogether.

jiveturkey commented on Notes on Apple's Nano Texture (2025)   jon.bo/posts/nano-texture... · Posted by u/dsr12
therealmarv · 22 days ago
Alcohol? After researching, Apple does allow it:

    For infrequent cleaning of hard-to-remove smudges, you can moisten the cloth with a 70-percent isopropyl alcohol (IPA) solution.
source: https://support.apple.com/en-us/104948

But never apply it directly to the screen. I think it's important to mention that you don't use just "some alcohol" - it should be a 70% isopropyl alcohol solution.

Btw, alcohol is a very good way to destroy the old glossy screens (non nano-texture).

jiveturkey · 22 days ago
The screen has an oleophobic coating; that is the danger of alcohol - it strips the coating. For your phone, absolutely don't do this. For your laptop it should be fine.
