A large number of Fossil positives are related to not having rebase. It feels like this is a huge concern over functionality that many people do not use that often. The last time I used rebase at a job was maybe 5 years ago?
Other than that my bigger gripe is when I read something like this:
> Git strives to record what the development of a project should have looked like had there been no mistakes
Git does not strive to do this. It allows it, to some degree. That is not the same thing at all and is basically FUD. I would say the debate over the value of history rewriting is ongoing. It's probably a tradeoff that some orgs are willing to make, and Fossil is dressing up its lack of workflow flexibility as an obvious advantage, which feels slimy.
Git gets its bias from the Linux kernel development.
When you're sharing your source changes with external people, who need to review your code, it just makes sense to present it in a clean, logical progression of changes. To wit, remove unnecessary noise like your development missteps.
And it's only in that context that the emphatic call for history rewriting is born. Meaning, you can use all the power of Git to record all your changes as you proceed through development, and then rebase them into something presentable for the greater world.
It's also useful in code reviews in general - I don't care about your development noise, and every single person in the future does not need to read it to understand the final result either. Rebases solve that: present a coherent story for easy understanding, rather than the messy reality.
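That cleanup step can be sketched with plain git in a throwaway repo. This is a minimal sketch, not the only way to do it: `git reset --soft` stands in for an interactive-rebase squash, and the repo, file names, and messages are all invented:

```shell
#!/bin/sh
# Demo: collapse three messy WIP commits into one presentable commit.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main demo
cd demo
git config user.email dev@example.com
git config user.name Dev

echo base > README.md
git add README.md
git commit -qm "initial commit"

for i in 1 2 3; do
  echo "step $i" >> feature.txt
  git add feature.txt
  git commit -qm "wip $i"          # the development noise
done

# Fold the three WIP commits into a single commit before sharing.
git reset -q --soft HEAD~3
git commit -qm "Add feature X as one coherent change"
```

Interactively, `git rebase -i HEAD~3` with `squash`/`fixup` lines achieves the same end state.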
When you're purely local, sure - do whatever the heck you want. Nobody cares. But messy merges are rough for collaboration, both present and future.
(rough, not fundamentally wrong, to be clear. It's just a friction tradeoff, dealing with human behavior has next to no absolutes)
It feels like overindexing on git as a source of truth for the iterative development process itself is just bikeshedding. Do whatever you want to do locally, then squash your commits into a single unit change. Document that comprehensively in your commit message for that squashed change. If there was some profound learning that feels like it needs rebase history for, just explain it narratively.
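A minimal sketch of that squash-into-one-unit flow, using `git merge --squash` in a throwaway repo (branch and file names invented):

```shell
#!/bin/sh
# Demo: branch work lands on main as a single squashed commit.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main demo
cd demo
git config user.email dev@example.com
git config user.name Dev

echo base > README.md
git add README.md
git commit -qm "initial commit"

git checkout -qb feature
for i in 1 2; do
  echo "change $i" >> feature.txt
  git add feature.txt
  git commit -qm "wip $i"
done

git checkout -q main
git merge --squash -q feature   # stages the combined diff; no commit yet
git commit -qm "Add feature: one unit change, documented comprehensively here"
```

The squash merge stages the branch's combined diff without committing, so the single commit message is where the comprehensive documentation goes.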
Perhaps a more contentious take: rebasing doesn't bring any real value. To the original comment above, I would say a significant percentage of teams never use rebase and drive business value just fine. I do not think there exists any evidence to suggest teams that use rebase over squash merging are in some way higher performing. Rebase is something that some people's obsessiveness compels them to care about because they can, and then they retroactively justify their decisions by suggesting value that isn't there.
Regarding rebase, it's been my experience that among many developers rebase has a mythical status. You're "supposed to" rebase, but no one knows the benefit of doing so.
It's a big downside of git being treated like some magical difficult spell. Same with exiting Vim, people treat it as way harder than it really is.
I tend to agree. I haven't used Git in a large project, but...why would I want to rewrite history? The project is what it is. What happened, happened. If there are a couple of weird commits, who cares? At most, maybe edit the commit messages to explain.
If you're doing trunk based development, with continuous integration, then you're approximately always on a public branch, and rebasing is not very useful.
Generally you merge main into your branch to resolve the conflicts there, then push to make the PR. Sometimes it's easier to rebase, sometimes easier to merge main in. Whichever is more often the easier path tends to shape the accepted workflow.
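The merge-from-main flow described above, sketched in a throwaway repo (all names invented):

```shell
#!/bin/sh
# Demo: main moves ahead while a feature branch is in flight;
# merging main into the branch brings it up to date before the PR.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main demo
cd demo
git config user.email dev@example.com
git config user.name Dev

echo base > README.md
git add README.md
git commit -qm "initial commit"

git checkout -qb feature
echo feature > feature.txt
git add feature.txt
git commit -qm "feature work"

git checkout -q main
echo update >> README.md
git add README.md
git commit -qm "main moves on"

git checkout -q feature
git merge -q --no-edit main   # resolve any conflicts here, on the branch
# The rebase-flavoured alternative would be: git rebase main
```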
I keep coming back to fossil again and again, despite git having a huge pull because of the easy publishing and collab on github/gitlab.
Just the other day I was starting an exploratory project, and thought: I'll just use git so I can throw this on github later. Well, silly me, it happened to contain some large binary files, and github rejected it, wanting me to use git-lfs for the big files. After half an hour of not getting it to work, I just thought screw it, I'll drop everything into fossil, and that was it. I have my issue tracker and wiki and everything, though admittedly I'll have some friction later on if I want to share this project. Not having to deal with random git-lfs errors later on when trying to merge commits with these large files is a plus, and if I ever want to, I can fast-export the repo and ingest it into git.
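For reference, the fast-export path mentioned at the end looks roughly like this. It's a sketch, not a recipe: it assumes `fossil` is on the PATH (and skips cleanly if not), and it creates a fresh throwaway repository just so the commands are runnable end to end:

```shell
#!/bin/sh
# Demo: mirror a Fossil repository into git via fast-export/fast-import.
command -v fossil >/dev/null 2>&1 || { echo "fossil not installed; skipping"; exit 0; }
set -e
dir=$(mktemp -d)
cd "$dir"
fossil init project.fossil >/dev/null   # fresh repo with an initial empty check-in

mkdir git-mirror
cd git-mirror
git init -q
fossil export --git ../project.fossil | git fast-import --quiet
git rev-parse --verify trunk >/dev/null   # Fossil's default branch, "trunk", now exists in git
```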
It is extremely rare that I have a file over 100MB.
I also think it’s one of those situations where if I have a giant binary file in source control “I’m doing it wrong” so git helps me design better.
It’s like in the olden days when you couldn’t put blobs directly in a row so databases made you do your file management yourself instead of just plopping in files.
I like git. I don’t like giant binary files in my commit history. It’s cool that you like fossil, but I don’t see this as a reason for me to use it.
You didn't put blobs directly in the database because of annoying database limitations, not because there's a fundamental reason not to.
It's the same with Git. Don't put large files directly in Git because Git doesn't support that very well, not because it's fundamentally the wrong thing to do.
There should be a name for this common type of confusion: Don't mistake universal workarounds for desirable behaviour.
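(The workaround in question on the git side is Git LFS, which is wired up per path in `.gitattributes`. The patterns below are just examples; these are the same attribute lines `git lfs track` writes, and they assume a one-time `git lfs install` per machine:)

```
# .gitattributes — route large binary types through Git LFS
*.psd   filter=lfs diff=lfs merge=lfs -text
*.bin   filter=lfs diff=lfs merge=lfs -text
```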
In the age of Large Language Models, large blobs will become the rule, not the exception. You’re not going to retrain models costing $100M to build from scratch because of the limitations of your SCM.
I fail to understand people that can't be bothered to empathize with other use cases than their own. Game development usually has a large number of binary assets that need to be in source control, does that sound like a reasonable use, or are they also doing it wrong?
Not in gamedev where you can have hundreds of gigs of art assets (models, textures, audio...), but you still want to version them or even have people working on them at the same time (maps...). But that is a different can of worms entirely.
That's a ridiculous claim. Can you really not think of a single situation in which it makes sense to keep track of big pieces of data alongside (or even instead of) source code? The fact that many VCS don't handle large binary data nicely doesn't mean there's never a good reason to do so.
My problem with Fossil is that it is a "one solution for all problems". Fossil packs all solutions together while the Git ecosystem provides several different solutions for each problem.
When you want to do things that Fossil is not meant to do, then you're in trouble. I have no idea on how to do CI/CD and DevOps with Fossil and how to integrate it with AWS/Azure/GCP.
I find the whole ecosystem of Gitlab/Github, Notion, Jira and stand-alone alternatives like Gitea [1], Gogs [2], Gitprep[3] and others to be more flexible and versatile.
Unfortunately for git alternatives, the momentum behind git is in large part pushed by the "social network" aspect of GitHub.
In the past I used Mercurial, among other things, for my open source work. And various issue trackers of my own choosing. I am not particularly wedded to Git. But I keep getting sucked into GitHub these days.
To get publicity or outside contributions it's hard to avoid the GitHub trap. It's become a discovery service for open source (like Freshmeat.net back in the day), a common reference point for how to do code reviews ("merge requests") and issue tracking (even though it doesn't really do either all that awesomely), but most importantly it's become a place where people network generally.
I don't love that this is the case but it's hard to avoid.
> Unfortunately for git alternatives, the momentum behind git is in large part pushed by the "social network" aspect of GitHub
And there was a time everyone thought facebook wouldn't dethrone myspace, [something.js] wouldn't replace [somethingelse.js], and so on.
First mover doesn't mean a lot in software. The network effect you brought up does, but there'll be plenty of people who don't want to get caught up in that "trap" and in git/MS-land, enough to seed a decent alternative. (Why should your code discovery networking site be prescribing your choice of VCS, anyway?)
I agree with all of this, for sure, and I look forward to the situation changing. And I hope when it does, it does so in a way where the system has more than just Git as an SCM option.
I had hopes for bitbucket for a while, but it stagnated, and then Atlassian got their mitts on it.
Git is an absolutely abysmal industry standard and as far as I'm concerned is further proof of my theory that tech is lacking (and actively discourages) much-needed creatives from the field.
With them having more representation we would have replaced it years ago.
If Fossil is so against deleting commits, what do you do if you've accidentally committed sensitive information that cannot live in any form in the repo?
Fossil provides a mechanism called "shunning" for removing content from a repository.
Every Fossil repository maintains a list of the hash names of "shunned" artifacts. Fossil will refuse to push or pull any shunned artifact. Furthermore, all shunned artifacts (but not the shunning list itself) are removed from the repository whenever the repository is reconstructed using the "rebuild" command.
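In command form that flow looks something like the following sketch; `ARTIFACT_HASH` and `project.fossil` are placeholders, and it assumes `fossil shun` is run from within an open checkout:

```shell
# Sketch: remove a leaked artifact from the repository.
# ARTIFACT_HASH is a placeholder for the hash of the sensitive artifact.
fossil shun "$ARTIFACT_HASH"    # adds the artifact to the shun list
fossil rebuild project.fossil   # reconstructs the repo, physically dropping shunned content
```

Peers that pull from the rebuilt repository will refuse to push the shunned artifact back, per the description above.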
It is a problem in all decentralized systems. Once you publish something, there is no going back. Any one of your peers can decide to leave with your sensitive data. That's also what makes them so resistant to data loss.
Now if you know everyone who has a copy of your repository, you can have them run a bunch of sqlite commands / low level git commands to make sure that the commit is gone.
If you didn't publish anything, as someone else said, your best bet is to make an entirely new clone, transfer the work you did on the original, omitting the sensitive data, then nuke the original.
The difference seems to be that commits are serious business on fossil, and they encourage you to test before you commit. While on git, commits are more trivial, pushing is where things become serious.
Or you can just rebase to edit the commits and remove the secret file. If you're really paranoid you can expire the reflog and run `git gc --prune=now` to ensure the object file is cleaned up also. If you're super paranoid, then you can do:
git hash-object secretpassword.txt
And check that the hash isn't an object in the repository. (Just listing `.git/objects` isn't quite enough, since objects can also live in packfiles; `git cat-file -e <hash>` checks both.)
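Putting the whole clean-up together as a runnable sketch (all names invented; the key detail is that the reflog has to be expired before `git gc` will actually drop the object):

```shell
#!/bin/sh
# Demo: commit a secret, drop the commit, prune, and verify the blob is gone.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main demo
cd demo
git config user.email dev@example.com
git config user.name Dev

echo base > README.md
git add README.md
git commit -qm "initial commit"

echo "hunter2" > secretpassword.txt
git add secretpassword.txt
git commit -qm "oops"

blob=$(git hash-object secretpassword.txt)
git cat-file -e "$blob"                 # exits 0: the blob is in the object store

git reset -q --hard HEAD~1              # drop the bad commit
git reflog expire --expire=now --all    # forget the reflog references to it
git gc --quiet --prune=now              # prune the now-unreachable objects

! git cat-file -e "$blob" 2>/dev/null   # exits non-zero: the blob is gone
```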
That's a good point. Delete the repo and start over, I suppose? With git, wouldn't it be possible to find and restore that info anyway? I guess it becomes a question of what you care about most at that point.
> The golden rule of git rebase is to never use it on public branches.
https://www.atlassian.com/git/tutorials/merging-vs-rebasing#...
Your VCS should not be opinionated, that is not its job
I store binary files outside of git but keep build logs containing binary file CRCs on git
Your workflow and use cases aren't everyone's.
[1] https://about.gitea.com/
[2] https://gogs.io/
[3] https://github.com/yuki-kimoto/gitprep
Previous discussions:
https://news.ycombinator.com/item?id=2524422 (86 comments)
https://news.ycombinator.com/item?id=19006036 (247 comments)
https://news.ycombinator.com/item?id=27736980 (127 comments)
https://news.ycombinator.com/item?id=31696940 (73 comments)
https://fossil-scm.org/home/doc/trunk/www/shunning.wiki
> FURTHER WARNING: This command is a work-in-progress and may yet contain bugs.