Sure: you do have to include an identity document, but that didn't give me much confidence that I couldn't accidentally expose all my private repos to the P2P network.
IMHO, this shows how heavily all of the world's open source software depends on a single company (Microsoft, which owns GitHub).
In this context, Radicle - as an alternative way of hosting the world’s OSS - makes a lot of sense to me.
But there's of course the chance that the team developing it will be sold, even if the actual app is open source.
Also, being old gives it a huge advantage: you can bet you won't need a major rewrite of your pipelines in a couple of years when e.g. GitLab sells, or Microsoft decides GH actions aren't so free (or fast) any more, etc.
Ok, they didn't kill the open source community which is what we feared at the time (because they found a way to make more money from it), but I'm still more skeptical of Microsoft essentially controlling the world's open source software than Datadog buying an open core company.
But with Microsoft now painting everything with its "AI" brush, aren't you, as open source maintainers, concerned about keeping the world's FOSS on a proprietary platform?
That's what most open models rely on.
1) Hosted services are cheaper than running them myself, once I include staff time, so by default I will go to the source vendor.
2) Even if there's a cheaper alternative, I'll gladly pay e.g. 50% more to go to the organization which wrote / maintains the product. Many companies just aren't that price-sensitive. If you pay $200k for a SWE, is saving $100/year worth it to go for gitlabknockoff.com instead of gitlab.com? Most big organizations wouldn't. The risk only comes in if gitlabknockoff was AWS, Azure, or GCP, which we're still learning what to do about.
3) There is a shallow moat in the forms of things like brand recognition, canonical URLs, etc.
If you're comfortable with e.g. a <100% profit margin, open models do just fine. Open models just mean you can't have a 10,000% markup or do an Oracle-style milking of customers. As a customer, that's why I pick open models.
Open models also mean I'm not SOL if you go out-of-business.
The friction comes in with a lot of hybrid models. Most things between open and proprietary don't work well. Datadog and GitLab are on opposite sides of this divide, and I don't see that working well.
Exactly! It really makes me wonder how GitLab thinks Datadog can help them defend against this risk. Then again, they're not open source - just open core.
Not looking to convince you of that or anything though... :)
And presumably the person hosting it will make sure the host computer is usually on. ISP routers and TV boxes would be a good way to popularize it, since they often come with NAS capabilities:
https://en.wikipedia.org/wiki/Freebox
(Notably, it also supports torrents and creating timed links to share files via FTP.)
- finding what the domain name is?
- resolving the DNS name to an IP address?
Radicle solves both problems in theory, but more the latter than the former right now:
- there is some basic functionality to search for projects hosted on Radicle, to find the right repo id (I expect this area will see a lot more activity and improvements in the near future),
- given a repo id, actually getting the code onto your laptop. This is where the p2p network comes in, so that the person hosting it doesn't always need to keep their computer/router/TV box on, etc.
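For the second part, fetching by repo id is already a one-liner with the Radicle CLI. A minimal sketch (the repo id below is a made-up placeholder, not a real project):

```shell
# Clone a project from the Radicle p2p network by its repo id (RID).
# The RID here is hypothetical; a real one comes from search or a link
# someone shares with you.
rad clone rad:z2u2CV0lRz9ijpR3JNYnzBrfmFC3n

# Afterwards it's a normal git working copy; regular git commands work.
```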
git pull github master
git push rad master

Migrating GitHub Pull Requests (PRs) to Radicle Patches is somewhat more involved, but that should still be possible (even if it involves some loss of information along the way, due to potential schema mismatches) [1] ...
[1] - https://github.com/cytechmobile/radicle-github-migrate
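For context, the two git commands above fit into a mirroring workflow roughly like this. This is a sketch: the repo URL and remote names are assumptions, and `rad init` may prompt interactively (check `rad init --help` for non-interactive flags):

```shell
# Sketch: mirror an existing GitHub repo onto Radicle.
# "example/project" is a hypothetical placeholder.
git clone https://github.com/example/project.git project
cd project
git remote rename origin github   # keep GitHub as an explicit remote

# Initialize the project on Radicle; this sets up a "rad" remote.
rad init

# Then mirroring a branch is just pull-from-one, push-to-the-other:
git pull github master
git push rad master
```

Code and history transfer cleanly this way because both sides are plain git; it's only the PR/Patch metadata that needs the migration tool in [1].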
locally running CI should be more common
And yet, that's technically not CI.
The whole reason we started using automation servers as an integration point was to avoid the "it works on my machine" drama. (I've watched at least 5 seasons of that show; they were all painful!)
+1 on running the test harness locally though (where feasible) before triggering the CI server.
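One cheap way to make that habit stick is a git pre-push hook. A sketch, where `make test` stands in for whatever your project's actual test harness is:

```shell
#!/bin/sh
# .git/hooks/pre-push -- run the test harness locally before every push.
# This doesn't replace the CI server (your laptop isn't the integration
# point); it just catches obvious breakage before it reaches CI.
set -e
make test
```

Git aborts the push if the hook exits non-zero, and the server-side CI still runs on whatever does get pushed.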