We also have a tag feature, allowing you to pinpoint versions and go back instantly, and we have plans to make that feature even lighter on disk space.
I don't think so. The only ones I know of are Pijul and Jujutsu, which you mentioned. They're both quite new.
> you're already doing centralized version control; you're just doing it with a confusing distributed version control UX
Sort of... But actually, as soon as you go offline it's distributed.
Anyway, I think more alternatives are always better, and lots of the issues you listed in the README definitely need solving, so good luck!
There are other Git frontends like Jujutsu: Gitless, StackedGit, GitButler, Sapling…
Even the idea of "SVN is the next Git" (which is the thing here) isn't quite new; PlasticSCM did it already.
Nothing like Pijul though, defining the actual problem carefully and rigorously, then actually solving it.
> Sort of... But actually, as soon as you go offline it's distributed.
Even online it's distributed: Google Docs needs tools from distributed computing, such as OTs and CRDTs.
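To make that concrete, here is a minimal Rust sketch (all names made up) of a grow-only counter, one of the simplest CRDTs: each replica increments only its own slot, and merging takes a pointwise max, so replicas converge no matter the order in which updates arrive.

```rust
use std::collections::HashMap;

/// A grow-only counter CRDT (toy illustration, not from any real library).
#[derive(Clone, Default)]
struct GCounter {
    counts: HashMap<String, u64>, // replica id -> that replica's count
}

impl GCounter {
    /// Each replica only ever increments its own entry.
    fn incr(&mut self, replica: &str) {
        *self.counts.entry(replica.to_string()).or_insert(0) += 1;
    }
    /// Merging is a pointwise max: commutative, associative, idempotent.
    fn merge(&mut self, other: &GCounter) {
        for (id, &n) in &other.counts {
            let e = self.counts.entry(id.clone()).or_insert(0);
            *e = (*e).max(n);
        }
    }
    fn value(&self) -> u64 {
        self.counts.values().sum()
    }
}

fn main() {
    let (mut a, mut b) = (GCounter::default(), GCounter::default());
    a.incr("alice");
    b.incr("bob");
    b.incr("bob");
    a.merge(&b); // merge in either order...
    b.merge(&a);
    assert_eq!(a.value(), 3); // ...and both replicas agree
    assert_eq!(b.value(), 3);
}
```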
1. A branch pointing to the latest nixpkgs head.
2. A branch with commit A (let's say commit A introduces a new package to nixpkgs).
3. A branch with commit B (changing some config file).
4. A branch currently in use on your own machines, with the commits from branches 2 and 3 rebased on top of branch 1.
Every time you do anything, you'll have to remember the flow for getting the commits fetched/rebased. Which is fine if you have a DevOps team doing exactly that, but isn't too cool if you are anything other than a large company.
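Concretely, the upkeep looks something like this every time upstream moves (branch and remote names are made up for the example):

```sh
git fetch upstream                    # advance the nixpkgs head (branch 1)
git rebase upstream/master feature-a  # replay commit A (branch 2)
git rebase upstream/master feature-b  # replay commit B (branch 3)
git checkout deploy                   # branch 4, the one your machines use
git reset --hard upstream/master      # start over from the new head...
git cherry-pick feature-a feature-b   # ...and re-stack A and B on top
```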
In Pijul, you would instead have a single channel (the sort-of equivalent of a branch) and two patches (A and B), which you can push independently of each other at any time if you want to contribute them back.
Darcs does the same but wouldn't scale to Nixpkgs-sized repos.
As long as the patches don't conflict, it's fine and dandy; if there's a collision, you record a resolution that fixes the conflict.
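For comparison, the Pijul version of the workflow above collapses to something like this (a sketch from memory; treat the exact flags as assumptions):

```sh
pijul record -m "add new package"  # patch A
pijul record -m "tweak config"     # patch B
pijul pull upstream                # upstream patches commute with A and B,
                                   # so there's no rebase ceremony
pijul push upstream                # contribute back; A and B can be
                                   # pushed independently of each other
```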
I have another blog idea for you: what are these "right things"? I used Mercurial a bit when I was new to programming, and back then Git and Mercurial seemed more or less the same with different command names. Today, I almost exclusively use (and hate) Git, but I find it hard to see what alternatives there would be in the DVCS space, other than something like Darcs/Pijul (perhaps my imagination has been stunted by too much exposure to Git). It would be great to have someone with the knowledge lay it out comprehensively and explicitly, so that the next generation of VCS developers will be able to build upon it.
1. The SCCS/Git way, aka the hacker way: look at what we can do with existing stuff, and use that to build something that can do the job. For example, if you're one of the world's experts on filesystem implementations, you can actually produce a fantastic tool like Git.
2. The mathematician's way: start from a model of what collaboration is, and expand from there. If your model is simple enough, you may have to use complex algorithms, but there is hope that the UI will match the initial intuition even for non-technical users. Darcs did this, using the model that work produces diffs, and that conflicts are diffs that can't be reordered (see the sketch below). Unfortunately this is slow and doesn't scale well. Pijul does almost the same, but doesn't restrict itself to just functions, also using points in the computation, which makes it much faster (but way harder to implement and a bit less flexible; no free lunch).
3. The Hooli way: take an arbitrary existing VCS, say Git. Get one of your company's user interviewers, and try to please interviewees by tweaking the command names and arguments.
The tradeoff between 1 and 2 is that 1 is much more likely to produce a new usable and scalable system fast, but may result in leaky abstractions, bad merges and hacks everywhere, while 2 may have robust abstractions if the project goes to completion, but that may take years. OTOH, method 3 is the fastest and safest method, but may not produce anything new.
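To make option 2's model concrete, here is a toy Rust sketch (nothing like Darcs's or Pijul's actual data structures) of the "conflicts are diffs that can't be reordered" idea: two single-line rewrites commute when they touch different lines, and a conflict is precisely a failure to commute.

```rust
/// A toy patch: rewrite a single line of a file (hypothetical).
#[derive(Clone, Copy)]
struct Patch {
    line: usize,
}

/// Two patches commute iff applying them in either order yields the
/// same file; for single-line rewrites, that means disjoint lines.
fn commutes(a: Patch, b: Patch) -> bool {
    a.line != b.line
}

fn main() {
    let a = Patch { line: 3 };
    let b = Patch { line: 7 };
    let c = Patch { line: 3 };
    assert!(commutes(a, b));  // independent edits: reorder freely
    assert!(!commutes(a, c)); // same line: a conflict, by definition
}
```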
So, I am the main author of Pijul, and I also don't quite see how to do much better (I'm definitely working on improvements, but nothing technically radical). But the causal relationship isn't the one you may be thinking of: it is because I thought this was the ultimate thing we could have that I started the project, not the other way around.
It took a little bit of work to get that down to actually fast code; for example, I had to write my own key-value store, which wasn't a particularly pleasant experience, and I don't think any existing programming language could have helped: it would have required a full linear logic type system. But at least that thing (Sanakirja) now exists, is more generic and modular than any storage library I know (I've used it to implement ropes, R-trees, radix trees…), and its key-value store is faster than the fastest C equivalent (LMDB).
Could we do the same in Haskell or OCaml? As much as I like those two languages, I don't think I could have written Sanakirja in a garbage-collected language, mostly because Sanakirja is generic in its underlying storage layer: it could be mmap, a compressed file, an entire block device on Unix, an io_uring buffer ring, or something else. And the notion of ownership of the objects in the dictionary is absolutely crucial: Sanakirja allows you to fork a key-value store efficiently, so one question is, what should happen when your code drops a reference to an object from the KV store? What if you're deleting the last fork of a table containing that object? Are these two the same thing? Having to explain all this to a GC would have been hard, I think.
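As a toy illustration of what "generic in its underlying storage layer" means (this is emphatically not Sanakirja's real API), the tree code only ever touches pages through a trait like the one below, so the same logic can run over mmap, a plain buffer, or a block device; the awkward ownership questions above all live behind alloc and free.

```rust
const PAGE: usize = 4096;

/// Hypothetical page-level storage abstraction: the B-tree code talks
/// only to this trait, never to a file or an mmap directly.
trait Storage {
    fn alloc(&mut self) -> u64;                    // get a fresh page, return its offset
    fn free(&mut self, off: u64);                  // who calls this, and when, is exactly
                                                   // the ownership question discussed above
    fn load(&self, off: u64) -> &[u8];             // borrow a page for reading
    fn load_mut(&mut self, off: u64) -> &mut [u8]; // borrow a page for writing
}

/// The simplest backend: pages in a Vec. An mmapped file or an
/// io_uring buffer ring would implement the same trait.
struct MemStorage {
    pages: Vec<[u8; PAGE]>,
    free_list: Vec<u64>,
}

impl Storage for MemStorage {
    fn alloc(&mut self) -> u64 {
        if let Some(off) = self.free_list.pop() {
            off
        } else {
            self.pages.push([0; PAGE]);
            (self.pages.len() - 1) as u64
        }
    }
    fn free(&mut self, off: u64) {
        self.free_list.push(off);
    }
    fn load(&self, off: u64) -> &[u8] {
        &self.pages[off as usize]
    }
    fn load_mut(&mut self, off: u64) -> &mut [u8] {
        &mut self.pages[off as usize]
    }
}
```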
I wouldn't have done it in C/C++ either, because it would have taken forever to debug (it already did take a long time), and even C++-style polymorphism (templates) isn't enough for the use of Sanakirja we have in Pijul.
Remember the "poop" paper about mmap for databases? Well, guess what: having a generic key-value store implementation allowed me to benchmark their claims, and actually compare congestion, throughput, and speed between mmap and io_uring. Conclusion: mmap rocks, actually.

There are other things related to the community/zealots/Mozilla/the Rust foundation, but I'm not sure this is the proper place.
Edit: Git zealots are worse than Rust zealots; I attribute this to Git being "harder to learn" (i.e. it never really does what people think it does) than Rust.
I'm also the author of Pijul, a much simpler and more scalable (yes, both! why choose?) version control system, and of Sanakirja, an on-disk transactional allocator used to write persistent data structures (like B trees, ropes, radix trees, HNSW…).