Readit News
maximilianroos commented on Lutra: General-Purpose Query Language   lutra-lang.org/... · Posted by u/aerzen
maximilianroos · 9 hours ago
Very excited to see this!
maximilianroos commented on Worktrunk: Git worktree manager, designed for parallel agents   worktrunk.dev/... · Posted by u/maximilianroos
maximilianroos · 13 days ago
I've been working on a worktree manager for a couple months, excited to share it.

Since agents have become good enough to run in parallel, I've found git worktrees to be, in the words of Juliet "my only love sprung from my only hate" — an awesome productivity multiplier, but with a terrible UX...

Worktrunk is designed to fix that: 1) it's a wonderful layer on top of git worktrees and 2) it adds a lot of optional QoL improvements focused on parallel agents.

Those QoL improvements include a command to show the status of all worktrees/branches (including CI status & links to PRs), a great Claude Code statusline, a command to have an LLM write a commit message, etc.

Like my other projects (PRQL, xarray, insta, numbagg), it's Open Source, with no commercial intent. It's written in Rust, extensively tested, and crafted with love (no slop!)

Check it out, and please let me know any feedback, either here or on GH. Thanks in advance, Max

- https://github.com/max-sixty/worktrunk

- https://worktrunk.dev/


maximilianroos commented on Job-seekers are dodging AI interviewers   fortune.com/2025/08/03/ai... · Posted by u/robtherobber
maximilianroos · 5 months ago
Have your AI talk to their AI

Then, if the AIs are positive, the human principals can talk

Seems quite reasonable!

maximilianroos commented on Swift at Apple: Migrating the Password Monitoring Service from Java   swift.org/blog/swift-at-a... · Posted by u/fidotron
potatolicious · 7 months ago
> "why not just run the checks at the backend's discretion?"

Because the other side may not be listening when the compute is done, and you don't want to cache the result of the computation because of privacy.

The sequence of events is:

1. Phone fires off a request to the backend.

2. Phone waits for a response from the backend.

The gap between 1 and 2 cannot be long because the phone is burning battery the entire time it's waiting, so there are limits to how long you can reasonably expect the device to wait before it hangs up.

In a less privacy-sensitive architecture you could:

1. Phone fires off a request to the backend. Gets a token for response lookup later.

2. Phone checks for a response later with the token.

But that requires the backend to hold onto the response, which for privacy-sensitive applications you don't want!
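The trade-off between the two patterns can be sketched in a few lines. This is a minimal illustrative mock, not Apple's actual service; all names (`Backend`, `check_sync`, `submit`, `poll`) are invented for illustration. The key point it shows: the synchronous path leaves no server-side state behind, while the token path forces the backend to hold the result until the phone comes back for it.

```python
import secrets

class Backend:
    """Toy backend contrasting sync request/response with token polling."""

    def __init__(self):
        self._pending = {}  # token -> result; only needed for pattern 2

    # Pattern 1: synchronous. The phone must stay connected (burning
    # battery) until the computation finishes, but nothing is retained.
    def check_sync(self, request):
        return self._compute(request)

    # Pattern 2: asynchronous. The phone gets a token back immediately
    # and polls later, but the backend must store the result in the
    # meantime, which is the privacy cost described above.
    def submit(self, request):
        token = secrets.token_hex(8)
        self._pending[token] = self._compute(request)
        return token

    def poll(self, token):
        # Hand the result over and delete it; returns None if unknown.
        return self._pending.pop(token, None)

    def _compute(self, request):
        # Stand-in for the real (expensive) server-side computation.
        return f"result-for-{request}"

backend = Backend()

# Synchronous: one round trip, no server-side state afterwards.
sync_result = backend.check_sync("pw-check")

# Asynchronous: two round trips; the result sits on the server between them.
token = backend.submit("pw-check")
async_result = backend.poll(token)
```

Even in this sketch, `_pending` is exactly the cache the parent comment says a privacy-sensitive service doesn't want to keep.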

maximilianroos · 7 months ago
thanks!

u/maximilianroos

Karma: 3027 · Cake day: April 19, 2016
About
GitHub: https://github.com/max-sixty

Twitter: https://twitter.com/max_sixty