Bitkeeper was neat, and my overall take on it mirrors Larry McVoy's: I wish he had open sourced it, made his nut running something just like github but for Bitkeeper, and that it had survived.
I only had one interaction with him. In the early '00s, I had contributed a minor amount of code to TortoiseCVS. (Stuff like improving the installer and adding a way to call a tool that could provide a reasonable display for diffs of `.doc` and `.rtf` files.) I had a new, very niche, piece of hardware that I was excited about and wanted to add support for in the Linux kernel. Having read the terms of his license agreement for Bitkeeper, and intending to maintain my patches for TortoiseCVS, I sent him an email asking if it was OK for me to use Bitkeeper anyway. He told me that it did not look like I was in the business of version control software (I wasn't!) and said to go ahead, but let him know if that changed.
I use git all the time now, because thankfully, it's good enough that I shouldn't spend any of my "innovation tokens" in this domain. But I'd still rather have bitkeeper or mercurial or fossil. I just can't justify the hit that being different would impose on collaboration.
Like I tell lots of people, check out Jujutsu. It's a very Mercurial-inspired-but-better-than-it UI (the lead dev and I worked on Mercurial together for many years) with Git as one of the main supported backends. I've been using it full time for almost a year now.
I would love to use jujutsu, and it seems like a great model. I think it'd be a bad outcome if the world starts building on top of a piece of software with a single company owner and a CLA, though.
I think my memory is probably colored by BitKeeper being my first DVCS. I was never a heavy user of it.
I was exposed to BitKeeper when I was managing my team's CVS server. On my next team, we moved to svn, which always felt like cvs with better porcelain from a developer perspective, but when administering that server fell onto my plate, I liked it a lot better than CVS. And I thought BitKeeper would be nicer from a developer perspective.
Then on my next team, we used mercurial. I really, really, really liked mercurial, both as a developer and as a dev infrastructure administrator. It also sucked a lot less on Windows than git or BitKeeper.
The last time I had to decide for a new team, mercurial and git were the obvious options. I went with git because that was clearly what the world liked best, and because bringing new team members up to speed would require less from me that way.
All that goes to say... my direct comparison of git and bitkeeper came from when bitkeeper was mature and git decidedly was not. Then I lumped it in with mercurial (which I really would still prefer, right now) and fossil (ditto). You're probably exactly right about BK.
I wouldn't put fossil on that list for collaboration, since it's not really a collaborative tool. Or rather, there are barriers to collaboration, like creating a username for each fossil repository. That's a huge barrier in my view. It would be nice if there were something like a general auth identity that could be used everywhere, but that's still not implemented.
FWIW, mercurial seems to have an advantage over git, namely support for BIG repositories, which seems to be driven by facebook of all people. So until facebook moves to git, mercurial lives on.
There's a screenshot purporting to be of GitHub from May 2008. There are tell-tale signs, though, that some or all of the CSS has failed to load, and that that's not really what the site would have looked like if you visited it at the time. Indeed, if you check github.com in the Wayback Machine, you can see that its earliest crawl was May 2008, and it failed to capture the external style sheet, which results in a 404 when you try to load that copy today. Probably best to just not include a screenshot when that happens.
(Although it's especially silly in this case, since accessing that copy[1] in the Wayback Machine reveals that the GitHub website included screenshots of itself that look nothing like the screenshot in this article.)
Larry wants to call you and discuss two corrections to this piece ("one minor, one major"). I've already passed on your email address for good measure, but you should reach out to him.
Thanks to Andrew Tridgell for not letting the kernel get stuck with proprietary source control. An example of how sticking to your principles can make the world better in the long run, even if it annoys people at first.
The kernel was not "stuck"; Linus is ultimately a practical man and was fine using it for integration work. The question of whether to switch to an open source solution would eventually have been raised again, but at the time BitKeeper did what it was supposed to do.
> My biggest regret is not money, it is that Git is such an awful excuse for an SCM. It drives me nuts that the model is a tarball server. Even Linus has admitted to me that it’s a crappy design. It does what he wants, but what he wants is not what the world should want.
Why is this crappy? What would be better?
Edit: @luckydude Thank you for generously responding to the nudge, especially nearly instantly, wow :)
- No rename support, it guesses.
- No weave. Without going into a lot of detail, suppose someone adds N bytes on a branch and then that branch is merged. The N bytes are copied into the merge node (yeah, I know, git looks for that and dedups it, but that is a slow bandaid on the problem).
- annotations are wrong, if I added the N bytes on the branch and you merged it, it will (unless this is somehow fixed now) show you as the author of the N bytes in the merge node.
- only one graph for the whole repository. This causes multiple problems:
A) the GCA is the repository GCA, it can be miles away from the file GCA if there was a graph per file like BitKeeper has.
B) Debugging is upside down, you start at the changeset and drill down. In BitKeeper, because there is a graph per file, let's say I had an assert() pop. You run bk revtool on that file, find the assert and look around to see what has changed before that assert. Hover over a line, it will show you the commit comments to the file and then the changeset. You find the likely line, double click on it, now you are looking at the changeset. We were a tiny company, we never hit the claimed 25 people, and we supported tons of users. This form of debugging was a huge, HUGE, part of why we could support so many people.
C) commit comments are per changeset, not per file. We had a graphical check-in tool that walked you through the list of files, showed you the diffs for each file, and asked you to comment. When you got to the ChangeSet file, it asked you for what Git asks for, but the diffs were all the file names followed by the comments you had just written. It made people sort of uplevel their commit comments. We had big customers that insisted their engineers use that tool rather than a command line that checked in everything with the same comment.
- submodules turned Git into CVS. Maybe that's been redone but the last time I looked at it, you couldn't do sideways pulls if you had submodules. BK got this MUCH closer to correct, the repository produced identical results to a mono repository if all the modules were present (and identical less whatever isn't populated in the sparse case). All with exactly the same semantics, same functionality mono or many repos.
- Performance. Git gets really slow in large repositories, we put a ton of work into that in BitKeeper and we were orders of magnitude faster for things like annotate.
In summary, Git isn't really a version control system, and Linus admitted as much to me years ago. A version control system needs to faithfully record everything that happened, no more, no less. Git doesn't record renames, and it passes content across branches by value, not by reference. To me, it feels like a giant step backwards.
Here's another thing. We made a bk fast-export and a bk fast-import that are compatible with Git. You can have a tree in BK, have it updated constantly, and no matter where in the history you run bk fast-export, you will get the same repository. Our fast-export is idempotent. Git can't do that, it doesn't send the rename info because it doesn't record that. That means we have to make it up when doing a bk fast-import which means Git -> BK is not idempotent.
I don't expect to convince anyone of anything at this point, someone nudged, I tried. I don't read hackernews any more so don't expect me to defend what I said, I really don't care at this point. I'm happier away from tech, I just go fish on the ocean and don't think about this stuff.
Git doesn't track changes yes, it tracks states. It has tools to compare those states but doesn't mean that it needs to track additional data to help those tools.
I'm unconvinced that tracking renames is really helpful, as that is only the simplest case of many possible state modifications. What if you split file A into files B and C? You'd need to be able to track that too. Same for merging one file into another. And many, many more possible modifications. It makes sense to instead focus on the states and then improve the tools to compare them.
Tracking all kinds of changes also requires all development tools to be aware of your version control. You can no longer use standard tools to do mass renames and instead somehow build them on top of your vcs so it can track the operations. That's a huge tradeoff that tracking repository states doesn't have.
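As an aside, the difference both sides are describing is easy to see from a shell: Git stores only snapshots, and rename "detection" is a similarity heuristic applied when comparing two states. A minimal sketch (the temp-dir repo and file names below are invented for the demo):

```shell
# Minimal sketch: Git stores snapshots; the rename is inferred at diff
# time by content similarity, not recorded at commit time.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
seq 1 20 > a.txt
git add a.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add a.txt"
git mv a.txt b.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "rename to b.txt"

# -M asks diff to (re-)detect renames by similarity between two states:
git diff -M --name-status HEAD~1 HEAD
# --follow re-runs the same heuristic while walking history:
git log --follow --oneline -- b.txt
```

Both `-M` and `--follow` redo the similarity computation at read time; nothing about the rename is stored in the commits themselves, which is exactly the by-value-not-by-reference behavior being debated above.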
> submodules
I agree, neither submodules nor subtrees are ideal solutions.
> You run bk revtool on that file, find the assert and look around to see what has changed before that assert. Hover over a line, it will show you the commit comments to the file and then the changeset. You find the likely line, double click on it, now you are looking at the changeset.
I still have fond memories of bk revtool. I haven't found anything since that's been as intuitive and useful.
I hadn't heard of the per-file graph concept, and I can see how that would be really useful. But I have to agree that going for a fish sounds marvellous.
As someone who has lived in Git for the past decade, I also fail to see why Git is a crappy design. It's easy to distribute, works well, and there's nothing wrong with a tarball server.
Exactly. While the article is good on the history of events, it doesn't go deep enough into the feature evolution (which is tightly connected to, and reflects, the evolution of software development). Which is:
TeamWare - somewhat easy branching (by copying whole workspace from the parent and the bringover/putback of the changes, good merge tool), the history is local, partial commits.
BitKeeper added distributed mode, changesets.
Git added very easy branching, stash, etc.
Any other currently available source control is usually missing at least one of those features. Very illustrative is the case of Mercurial, which emerged at about the same time, responding to the same need for a modern source control, yet was missing partial commits, for example, and had much more cumbersome branching (like no local history or something like that; I last looked at it more than a decade ago). That really allowed it to be used only in very strict/stuffy settings; for everybody else it was a non-starter.
> “Here’s a BitKeeper address, bk://thunk.org:5000. Let’s try connecting with telnet.”
Famously, Tridge gave a talk about this, and got the audience of the talk to recreate the "reverse engineering". See https://lwn.net/Articles/133016/ for a source.
> I attended Tridge's talk today. The best part of the demonstration was that he asked the audience for each command he should type in. And the audience instantly called out each command in unison, ("telnet", "help", "echo clone | nc").
This is completely untrue. There is no way that you could make a BK clone by telneting to a BK and running commands. Those commands don't tell you the network protocol, they show you the results of that protocol but show zero insight into the protocol.
Tridge neglected to tell people that he was snooping the network while Linus was running BK commands when Linus was visiting in his house. THAT is how he did the clone.
The fact that you all believe Tridge is disappointing, you should be better than that.
The fact that Tridge lied is disappointing but I've learned that open source people are willing to ignore morals if it gets them what they want. I love open source, don't love the ethics. It's not just Tridge.
> There is no way that you could make a BK clone by telneting to a BK and running commands. Those commands don't tell you the network protocol
The network protocol, according to multiple sources and the presented talk at LCA, was "send text to the port that's visible in the URL, get text back". The data received was SCCS, which was an understood format with existing tools. And the tool Tridge wrote, sourcepuller, didn't clone all of BitKeeper, it cloned enough to fetch sources, which meant "connect, send command, get back SCCS".
Anything more than that is hearsay that's entirely inconsistent with the demonstrated evidence. Do you have any references supporting either that the protocol was more complicated than he demonstrated on stage at LCA, or that Tridge committed the network surveillance you're claiming?
And to be clear, beyond that, there's absolutely nothing immoral with more extensively reverse-engineering a proprietary tool to write a compatible Open Source equivalent. (If, as you claim, he also logged a friend's network traffic without their express knowledge and consent, that is problematic, but again, the necessity of doing that seems completely inconsistent with the evidence from many sources. If that did happen, I would be mildly disappointed in that alone, but would still appreciate the net resulting contribution to the world.)
I appreciate that you were incensed by Tridge's work at the time, and may well still be now, but that doesn't make it wrong. Those of us who don't use proprietary software appreciate the net increase in available capabilities, just like we appreciate the ability to interoperate with SMB using Samba no matter how inconvenient that was for Microsoft.
Come on, man, you should be better than this. With so many years of hindsight surely you realize by now that reverse engineering is not some moral failing? How much intellectual and cultural wealth is attributable to it? And with Google v. Oracle we've finally settled even in the eyes of the law that the externally visible APIs and behavior of an implementation are not considered intellectual property.
Tridge reverse engineering bk and kicking off a series of events that led to git is probably one of the most positively impactful things anyone has done for the software industry, ever. He does not deserve the flack he got for it, either then or today. I'm grateful to him, as we all should be. I know that it stings for you, but I hope that with all of this hindsight you're someday able to integrate the experience and move on with a positive view of this history -- because even though it didn't play out the way you would have liked, your own impact on this story is ultimately very positive and meaningful and you should take pride in it without demeaning others.
> In a 2022 survey by Stack Overflow, Git had a market share of 94%, ...
> Never in history has a version control system dominated the market like Git. What will be the next to replace Git? Many say it might be related to AI, but no one can say for sure.
I doubt it's getting replaced. It's not just that it's got so much of the market, but also that the market is so much larger than back in the days of CVS.
It's hard to imagine everyone switching from Git. Switching from GitHub, feasible. From Git? That's much harder.
Git shortcomings are well known by this point, so "all" a successor project has to do is solve those problems. Git scales to Linux kernel sized projects, but it turns out there are bigger, even more complex projects out there, so it doesn't scale to Google-sized organizations. You would want to support centralized and decentralized operation, but be aware of both, so it would support multiple remotes, while making it easier to keep them straight. Is the copy on Github up to date with gitlab, the CI system, and my laptop and my desktop? It would have to handle binaries well, and natively, so I can check-in my 100 MiB jpeg and not stuff things up. You'd want to use it both as a monorepo and as multirepos, by allowing you to checkout just a subtree of the monorepo. Locally, the workflow would need to both support git's complexity, while also being easier to use than git.
Anyway, those are the four things you'd have to hit in order to replace git, as I see them.
If you had such a system, getting people off git wouldn't be the issue: offer git compatibility, and if they don't want to use the advanced features, they can just keep using their existing workflow with git. The problem with that, though, is: why would anyone use your new system at all?
Which gets to the point of: how do you make this exist as a global worldwide product? FAANG-sized companies have their own internal tools teams to manage source code. Anywhere smaller doesn't have the budget to create such a thing from scratch.
You can't go off and make this product and then sell it to someone because how many companies are gonna go with an unproven new workflow tool that their engineers want? What's the TAM of companies for whom "git's not good enough", and have large enough pocketbooks?
You are right. Git is not a DVFS, it's a DVCS. It was made to track source code, not binary data. If you are putting binaries into a DVCS, you are doing something wrong.
But there are industries that need it, like the game industry. So they should use a tool that allows that. I heard that Plastic SCM is pretty decent at it. Never used it, so I can't tell personally.
Replacing Git is such a stupid idea. There is no ONE tool to handle all cases. Just use the right one for your workflows. I, for example, have a need to version binary files. I know Git handles them badly, but I really like the tool. Solution? I wrote my own simple DVFS tool for that use case: dot.exe (138 KB)
It's a very simple DVFS for personal use, with peer-to-peer syncing (local, TCP, SSH). Data and metadata are SHA-1 checksummed. It's pretty speedy for my needs :)
After weeks of use I liked it so much that I added pack storage to handle text files and moved all my notes from SVN to DOT :)
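The SHA-1-checksummed storage idea described here generalizes nicely to content addressing. A minimal sketch in shell (the `put`/`get` helpers and file names are invented for illustration; this is not dot.exe's actual design):

```shell
# Sketch of a SHA-1 content-addressed store: a blob lives under the hash
# of its bytes, so identical content is stored once and corruption is
# detectable on read.
set -e
store=$(mktemp -d)

put() {  # put <file>: store the file under its SHA-1, print the hash
  h=$(sha1sum "$1" | cut -d' ' -f1)
  cp -n "$1" "$store/$h" 2>/dev/null || true  # no-op if already stored
  echo "$h"
}

get() {  # get <hash> <dest>: retrieve and verify the checksum
  cp "$store/$1" "$2"
  echo "$1  $2" | sha1sum -c --quiet  # exits nonzero if bits rotted
}

printf 'hello' > note.txt
h=$(put note.txt)
echo "$h"   # aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
get "$h" restored.txt
```

Storing identical content once for free and getting end-to-end integrity checks is the same design choice Git's object store makes.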
You say this, but Git has made great strides in scaling to huge repositories in recent years. You can currently do the "checkout just a subtree of the monorepo" just fine, and you can use shallow clones to approximate a centralized system (and most importantly to use less local storage).
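For the record, the subtree checkout mentioned here looks something like this with `git sparse-checkout` (the monorepo layout and directory names below are made up for the demo):

```shell
# Sketch: materialize only one subtree of a repo via sparse-checkout.
set -e
src=$(mktemp -d)
git -C "$src" init -q .
mkdir -p "$src/frontend" "$src/backend"
echo ui  > "$src/frontend/ui.txt"
echo api > "$src/backend/api.txt"
git -C "$src" add .
git -C "$src" -c user.name=demo -c user.email=demo@example.com \
  commit -q -m "monorepo layout"

work=$(mktemp -d)
git clone -q --no-checkout "$src" "$work/partial"
cd "$work/partial"
git sparse-checkout set --cone backend   # restrict worktree to backend/
git checkout -q                          # populate only that subtree
ls                                       # only backend/ is materialized
```

Combined with `--filter=blob:none` on the clone, this also avoids downloading the blobs outside the chosen subtree, which is the scaling story the comment above refers to.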
> If you had such a system, getting people off git wouldn't be the issue - offer git compatibility and [...]
Author here. I don’t think ASCII is the right comparison. True, it would be really hard for anything to compete with Git because a lot of infrastructures we have are already deeply integrated with Git. But think about x86 vs. ARM and how AI might change our ways of producing code.
I hope that the CLA goes away one day.
https://github.com/martinvonz/jj
Seems an interesting take indeed :)
To me, Git is almost exactly like a ground-up cleaner rewrite of BitKeeper. Gitk and git-gui are essentially clones of the BitKeeper GUI.
I don't understand why you'd want to keep using BitKeeper.
https://www.fossil-scm.org/home/doc/trunk/www/caps/login-gro...
It's the most complete history of git that I know of. Exceptional!
I'd love to read more historical articles like this one, of pieces of software that have helped shape our world.
I wasn't going to read the story until I read your comment. I knew the summary of BitKeeper and the fallout, but wow this was so detailed. Thanks!
[1] https://www.abortretry.fail/
1. <https://web.archive.org/web/20080514210148/http://github.com...>
https://news.ycombinator.com/item?id=11671777
Edit:
There was another thread from him linked as well https://news.ycombinator.com/item?id=26205688
> If you had such a system, getting people off git wouldn't be the issue - offer git compatibility and [...]
Git is already doing exactly that.
A replacement would be niche, only for the huge orgs, which is usually made by them anyway. For everyone else, git is good enough.