Hi HN. I am the co-founder of the project. If you are interested in how the protocol works under the hood, start here: https://docs.radicle.xyz/ Docs are still WIP though.
I read the documentation and this stands out to me:
> Radicle repositories, which can be either public or private, can accommodate diverse content including source code, documentation, and arbitrary data sets.
If this is, basically, a peer-to-peer file sharing application, what part of the protocol handles dealing with abuse?
Otherwise, how is this different from the previous generation of file sharing applications (BitTorrent, winny, etc) where people just share arbitrary copyrighted content like movies, songs, software, etc?
I feel like a few bad actors will ruin this?
Can you partition your “personal” network somehow, so you can use it with absolute confidence that you're not participating in anything illegal?
One of the key ideas is that each user chooses what repositories they host via pretty fine-grained policies. This means you can easily block content you're not interested in seeding, or simply configure your node to only host content you explicitly allow.
You can also choose which public nodes to connect to if you'd rather not connect to random nodes on the network; though I don't expect most users to go this route, as you are more likely to miss content you're interested in.
Though Git (and thus Radicle) can replicate arbitrary content, it's not particularly good with large binary files (movies, albums etc.), so I expect that type of content to still be shared on BitTorrent, even if Radicle were to be popular.
Fascinating project!
I'm curious: what's the business model? Crunchbase lists that you raised $12M, so I'm assuming you do have plans to make money?
Curious as well. Searching around I found this documentation on their ecosystem [0], which may shed some light on the organization structure. It may be they are organized as a DAO? From the intro:
> Radworks is a community dedicated to cultivating internet freedom.
They do not shy away from cryptocurrency technology, though AFAICS that is not directly applied to the Radicle project. Another project of Radworks is Drips [1], to help fund open source.
Hi. While not actively looking for a replacement for proprietary services such as GitHub or GitLab, from time to time I'm asked about an alternative.
I'm all for a distributed self-hosting solution, so Radicle is definitely hitting the mark here, however:
> Linux or Unix based operating system.
For the kind of project I have to assist with, this would be a deal-breaker. Since the code seems to be in Rust: do you intend to make it available to MS Windows? (I took it for granted that Mac OS is included in the Unix family, right?)
If not straight-up support for MS Windows, then maybe an MSYS2 port?
----
To give some background: I'm not in charge of decisions like service vendor selection, and we are talking about a quasi-government organization with a substantial open-source code base that is currently hosted on GitHub. I.e., sometimes I might have a chance to pitch a particular idea, but it's not up to me whether the idea is accepted. They are quite motivated to make their work as resilient to private vendor policies as possible, as well as to try to "do good" in other ways (e.g. sustainability, breadth of outreach, etc. -- a typical European gov. org :) So, GitHub is... obviously in conflict with such policies.
While there are other gov. agencies tasked with archiving or networking support, they seem to be woefully incompetent and/or outdated, as well as often falling for vendor-laid traps (e.g. the archiving service went all-in on Databricks without even realizing it's a commercial closed-source product). So, I wouldn't have high hopes for the org. being able to leverage a self-hosted solution. That's why a distributed solution looks great.
However, they wouldn't be able to use a tool that doesn't work on major popular PC systems.
Hey there. Yes, Windows support is something we'd like to have, but focusing on less OSes is helping us ship faster. In principle, there shouldn't be any issue in porting to Windows, but since no one on the team runs Windows it would have been hard to ensure things are working smoothly. If there is demand though, we will certainly start allocating time towards it.
I won't reveal anything about our finances, but the current code base is a little under 2 years old. We've worked on the general problem for over 4 years in total though. The team is around 12 people, split between protocol, cli, tui, web and content.
The product is set to launch this month, so we're just starting to onboard users, but many people in the community are already using it, and we've been using it internally for about a year.
i really like the website and application design, bc so many oss projects often completely falter w/ visual design, and while this is a superficial thing, beautiful design makes me want to interact w/ a project more :)
also, i'm curious, what kind of adoption were you anticipating (some time ago and now) and did the result align with it?
I do remember Mango! I didn't actually try it out, but we had experimented with Ethereum and IPFS in the past, and it wasn't a great fit for a code collab platform due to performance and cost.
I'm interested in this, but I noticed a base58 hash on the page. I'm not really interested in crypto. How much could I use this product without adopting crypto? Is this attached to some digital currency like ipfs or is it independent?
There are lots of potential intellectually stimulating research projects. Why code repositories instead of like, a video game? Why not harness the same manic energy into something that already existed? Like the kind of person who can be sincerely passionate about source code repositories, why can't that kind of person then be passionate about literally anything?
It's been fascinating watching Radicle evolve over what seems to be the last 5 years.
I attended the workshop at Protocol Berg 2023 and think they built something really powerful and novel.
Perhaps the most exciting aspect is that even the collaborative aspect of the protocol is local-first which means you can submit patches and issues without internet and that your team isn't on HN every time GitHub is having problems.
This looks like a fine project for its purpose, but I think git is already open-source and p2p. You don't need to `sh <(curl ...)` a bunch of binaries; instead, simply connect to another git server and use git commands to directly pull or merge code.
What's missing in git is code issues, wikis, discussions, github pages and most importantly, a developer profile network.
We need a way to embed project metadata into .git itself, so source code commits don't mess up with wikis and issues. Perhaps some independent refs like git notes?
While Git is designed in some way for peer-to-peer interactions, there is no deployment of it that works that way. All deployments use the client-server model because Git lacks functionality to be deployed as-is in a peer-to-peer network.
For one, it has no way of verifying that the repository you downloaded after a `git clone` is the one you asked for, which means you need to clone from a trusted source (i.e. a known server). This isn't compatible with p2p in any useful way.
Radicle solves this by assigning stable identities[0] to repositories that can be verified locally, allowing repositories to be served by untrusted parties.
> it has no way of verifying that the repository you downloaded after a `git clone` is the one you asked for
Respectfully disagree here. A repository is a chain (or multiple chains) of commits; if each commit is signed, you know exactly that the clone you got is the one you asked for. You're right that nobody exposes a UI around this feature, but the capability is there if anyone has a workflow that requires pulling from random repositories instead of well-established/known ones.
The problem I'd like to see solved is source of truth. It'd be nice if there were a way to sign a repo with an ENS name or domain without knowing the hash.
Another thing is knowing if the commit history has been tampered with without knowing the hash.
The reason for needing not to know the hash is for cases like Tornado Cash. The site and repo were taken down. There's a bunch of people sharing a codebase with differing hashes, and you have no idea which is real or altered.
This is also important for cases where the domain is hacked.
> What's missing in git is code issues, wikis, discussions, github pages and most importantly, a developer profile network.
Radicle adds issue tracking and pull requests. Probably some of those other features as well.
On mobile there are buttons on the bottom of the screen in the op link, click those and you get to the issue tracking tab and the pull request tabs etc
But that’s not what parent meant. Those things should be embedded in the git repository itself, in some kind of structure below the .git/ directory. That would indeed make the entire OSS ecosystem more resilient. We don’t need a myriad of incompatible git web GUIs, but a standard way of storing project management metadata alongside version control data. GitHub, Gitea, Gitlab, and this project could all store their data in there instead of proprietary databases, making it easy to migrate projects.
Repositories and code-sharing are inherently about trust. Even if you personally audit every line of code, you still need to trust that the owner isn't trying to slip one past you. Identity is a key component of trust.
Classic git does not evade censorship, such as the extremely recent news concerning Nintendo. An idea like this has been rolling around in my head, and I'm overjoyed that someone has done the hard work.
Git evades censorship just fine, since it is properly decentralized and doesn't care about where you got the repository from. Plain HTTP transport however does not and most Git repositories are referred to by HTTP URL.
If you simply host Git on IPFS you have it properly decentralized without the limits of HTTP. IPNS (DNS of IPFS), which you need to point people to the latest version of your repository, however wasn't working all that reliably last time I tried.
You're missing the discovery part. You want to get the repository X from user Y cloned - how do you find it? Especially if you don't know Y and their computer is off?
Also radicle does want to tackle the issues / prs and other elements you mentioned as well.
And presumably the person hosting it will make sure that the computer hosting it is often on, for instance ISP routers and TV boxes are a good way to popularize it, since they often come with NAS capabilities :
I think this already exists for issues. git-bug [1] uses git internal files to store the issues. It is distributed and it even comes with a web ui in addition to the usual cli.
do you know of any projects using [anything like] git-bug?
i know i've encountered something like this once in a notable repo. thought it was graphics related, like mesa or something, but looks like they're using GitLab.
> It’s important to only publish repositories you own or are a maintainer of, and to communicate with the other maintainers so that they don’t initialize redundant repository identities.
Based on my experience with people taking my code and shoving it onto GitHub--as well as separately in my demoralizing general experience of putting random little "please for the love of all that is holy don't do X as it will cause problems for other users" notices in the documentation or even as interstitial UI (!!) of my products and watching everyone immediately do exactly that thing as no one reads or thinks (or even cares)--a large number of people aren't going to honor this request in the documentation... and, frankly a large number of people aren't even going to see this in the first place as the home page tells you how to push code but you only find this "important" request in the "user guide" that people definitely do not bother to read.
It thereby seems quite concerning that, apparently?!, this system is designed in a way where doing what feels like a very reasonable thing to do--just pushing whatever open source code you are working on, based on the instructions on the home page--is going to interact with something about this protocol and how things are stored that something important enough to have this separated boxed "important" statement in the documentation is going to get cluttered and maybe even confusing over time :(.
I don't think there's anything "special" here. You have the same problem currently where finding the canonical location of a repository is done via some out-of-band social network or website.
On GitHub, you also can look at the stars to give you extra confidence, and on Radicle the equivalent is the seed count for a given repository.
Then why does the documentation say this is "important"? GitHub certainly does not have a notice anywhere saying "it's important to only publish repositories you own or are a maintainer of" (...well, I guess it could be buried deep in some user guide I never read, lol).
> putting random little "please for the love of all that is holy don't do X as it will cause problems for other users" notices in the documentation or even as interstitial UI (!!) of my products and watching everyone immediately do exactly that thing as no one reads or thinks (or even cares)--a large number of people aren't going to honor this request in the documentation
Kind of off topic, but you shouldn't get annoyed at people for ignoring your notices and not reading the docs. It's an extremely logical thing to do. Think about it - how many notices do you see in a typical day of computing? Probably dozens. How many tools do you use? Also dozens. Now imagine how long it would take if you read all of those notices, and exhaustively read the documentation for every tool. Too fucking long!
It's much better to use heuristics and not read. For example if you close a document and you've made unsaved changes to it, you know the dialog is going to be "Do you want to discard it?". There's no point reading it.
This is a good thing!!
So the conclusion is that you should design your software with the knowledge that people behave this way. It is usually possible to do so. If you give a concrete example I can probably suggest a better solution than "asking and hoping they read it".
I spoke in the past tense, and already learned this lesson back 20 years ago; you can tell that I believe software can and should be coded to avoid such issues from the position I took with my comment: that it was concerning that the software would stop working not if but when people do not read this "important" notice. Although, maybe you didn't actually bother to read the rest of my comment, and so failed to appreciate my actual point, given how you just quoted something near the beginning which was mere evidence and focused on it with what feels a bit like an axe to grind ;P.
Which, though, leads me to something I will say in response to your reframing: while I do believe that one must build systems with the understanding that people will not read any of the documentation, we should still judge people for the behavior. I am pro commingled recycling, and yet I also believe that people who don't even try to read the signs on top of a series of trash bins are shirking a duty they have to not be a jerk, the same way we should be angry at people for not knowing local laws even if we put them on giant signs on the street as they'd rather just be lazy.
Isn't the GitHub way of doing things that you add a copyright notice to your code, identifying your repository as the source, and changing the copyright is illegal? That would be applicable to this as well.
Congrats on the launch! I’ve been following this project and I’m really excited to see how much it has matured. For projects currently on GitHub, what’s the best way to migrate? Is there a mirror mode as we test it out?
Thanks! There is no mirroring built-in yet, though this is something we're looking into. It should theoretically be as simple as setting up a `cron` job that pulls from github and pushes to radicle every hour, eg.
In addition, in order to migrate your GitHub issues to Radicle (which the above doesn't cover), there's this command-line tool [1] that should get you most - if not all - of the way there.
Migrating GitHub Pull Requests (PRs) to Radicle Patches is somewhat more involved, but that should still be possible (even if it involves some loss of information along the way, due to potential schema mismatches) ...
The main value capture at Github is issue tracking, PR reviews and discussion. Maybe not today, but is there an automated way to migrate these over in the future?
I wonder how discoverable (for normal people) these repositories are. It looks like https://app.radicle.xyz/robots.txt doesn't exist, so it seems like fair game for search engines, and indeed a search on Google and DDG for
site:app.radicle.xyz
does give some results. Maybe not that high up yet if not using that site filter, perhaps the ranking will improve?
Tools for integrating CI support with this would also be nice to see. Ultimately a loop with
while true; do wait_repo_update; git pull && ./run_ci.sh; done
but something nicer that you could only limit to pushes by trusted identities.
And then finally artifact storage. But maybe Radicle doesn't need to solve everything, in particular as a distributed network for sharing large binaries is going to get some undesirable uses pretty fast..
I realize I'm just some rando on the Internet, but I'm begging you please don't introduce Yet Another CI Job Specification ™
I'm sure you have your favorites, or maybe you hate them all equally and can just have a dartboard but (leaving aside the obvious xkcd joke) unless you're going to then publish a JSON Schema and/or VSCode and/or IJ plugin to edit whatever mysterious new thing you devise, it's going to be yet another thing where learning it only helps the learner with the Radicle ecosystem, and cannot leverage the existing knowledge
It doesn't even have to be yaml or json; there are quite a few projects which take the old(?) Jenkinsfile approach of having an actual programming language, some of them are even statically typed
I also do recognize the risk to your project of trying to fold in "someone else's" specification, but surely your innovation tokens are better spent on marketing and scm innovations, and not "how hard can it be" to cook a fresh CI job spec
I likely would have already written a much shorter comment to this effect, but having spent the past day slamming my face against the tire fire of AWS CodeBuild, the pain is very fresh from having to endure them thinking they're some awesome jokers who are going to revolutionize the CI space
I wish people would define precisely what they mean by "peer to peer" (or more commonly, "distributed"). Its such an ambigious term now it can mean anything when used as a buzzword.
I haven't seen the term misused very often - the way it is defined in Radicle and most other peer-to-peer systems is how Wikipedia defines it[0]; specifically this part: "Peers are equally privileged, equipotent participants in the network".
So a peer to peer system is one where all participants are "equally privileged in the network". This usually means they all run the same software as well.
I mean, that definition doesn't fit with supernodes ("seed" nodes in your design), but that is a nitpick.
I guess I'm mostly just wondering what properties you are trying to accomplish. Like, there is talk of publicly seeding repositories that are self-certifying, but also using the Noise protocol for encryption, so what is the security model? Who are you trying to keep stuff secret from? It is all very confusing what the project actually aims to do.
Mostly all I'm saying is the project could use a paragraph that explains what the concrete goals of the project are. Without buzzwords.
Ah.. my high hopes were immediately dashed by the trash that is curl-bash. What a great signal for thoughtless development, if this project catches on I can't wait to watch the security train wreck unfold. Maybe someday we'll get an "Open-Source, Peer-to-Peer, GitHub Alternative" that doesn't start with the worst possible way to install something.
This is an overreaction, almost to the point of absurdity.
Risks inherent to pipe installers are well understood by many. Using your logic, we should abandon Homebrew [1] (>38k stars on GitHub), PiHole [2] (>46k stars on GitHub), Chef [3], RVM [4], and countless other open source projects that use one-step automated installers (by piping to bash).
A more reasonable response would be to coordinate with the developers to update the docs to provide alternative installation methods (or better detail risks), rather than throwing the baby out with the bathwater.
FWIW, Homebrew no longer deserves quite such ire: you will note that it explicitly does NOT pipe the result to a copy of bash. By downloading the script first and invoking it via a subshell, it prevents the web server from being able to get interactive access.
The script is safe regarding interrupted transfer, unless you happen to have a dangerous commands in your system matching ^(t(e(m(p(d(ir?)?)?)?)?|a(r(g(et?)?)?)?)?|i(n(_(p(a(th?)?)?)?|fo?)?)?|s(u(c(c(e(s?s)?)?)?)?)?|f(a(t(al?)?)?)?|m(a(in?)?)?|w(a(rn?)?)?|u(rl?)?).
And after that's been handled, well, what's the difference to just providing the script but not the command to invoke it? Surely if one wants to review it, downloading the script to run separately is quite straightforward. (Though I recall there was a method for detecting piped scripts versus downloaded ones, I don't think it works for such small scripts.)
Here you go [0] - the project hasn't launched yet and there are bits and pieces to be dealt with, the current focus is a bit somewhere else. You can also build from source [1] with Rust's cargo.
Thanks but... no thanks, you've missed my point entirely. Why would I want to run peer to peer software built by developers whose security stance starts with curl-bash? Would you curl-bash a webserver? an email server? No? Probably even worse for your source code repository then right?
Download it or don't. Trust the maintainer or don't. Whether you trust the maintainer or not shouldn't be a matter of the installation method, not even with curl-bash.
(P.S. I am working at Radicle)
[0] https://docs.radworks.org/community/ecosystem
[1] https://www.drips.network/
Radicle does work on macOS as well.
[0]: https://app.radicle.xyz/nodes/seed.radicle.xyz/rad:z3gqcJUoA...
No need for crypto/digital currency whatsoever.
[1] https://www.youtube.com/watch?v=PWFF7ecArBk
https://git-scm.com/docs/git-notes
[0]: https://docs.radicle.xyz/guides/protocol#trust-through-self-...
Fossil (https://fossil-scm.org) embeds issues, wiki etc. into project repository.
What has the world come to where that is the most important part?
--
I think gerrit used to store code reviews in git.
https://en.wikipedia.org/wiki/Freebox
(Notably, it also supports torrents and creating timed links to share files via FTP.)
[1]: https://github.com/MichaelMure/git-bug
[1] - https://github.com/cytechmobile/radicle-github-migrate
The main value capture at Github is issue tracking, PR reviews and discussion. Maybe not today, but is there an automated way to migrate these over in the future?
Tools for integrating CI support with this would also be nice to see. Ultimately a loop with
but something nicer that you could limit to pushes by trusted identities. And then finally artifact storage. But maybe Radicle doesn't need to solve everything, in particular as a distributed network for sharing large binaries is going to get some undesirable uses pretty fast.
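The "loop" idea above can be sketched in shell. This is a hypothetical illustration only: the `poll_once` helper, the repo path, and the build step are placeholders of mine, not a real Radicle CI integration, and a real setup would also verify who signed the push.

```shell
#!/bin/sh
# Hypothetical sketch: poll a clone for new commits and rebuild when the
# head changes. `git pull` and the build command are placeholders.

poll_once() {
  repo_dir=$1
  (
    cd "$repo_dir" || exit 1
    before=$(git rev-parse HEAD 2>/dev/null)
    git pull --ff-only --quiet 2>/dev/null || true
    after=$(git rev-parse HEAD 2>/dev/null)
    if [ "$before" != "$after" ]; then
      echo "changed"    # a real loop would run e.g. `make test` here
    else
      echo "unchanged"
    fi
  )
}

# A real runner would wrap this as:
#   while true; do poll_once /path/to/repo; sleep 60; done
```

The point of the "trusted identities" caveat is that the rebuild step should fire only after checking the fetched head against an allowlist of signing keys, not on any push.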
I realize I'm just some rando on the Internet, but I'm begging you please don't introduce Yet Another CI Job Specification ™
I'm sure you have your favorites, or maybe you hate them all equally and can just throw darts, but (leaving aside the obvious xkcd joke) unless you're going to then publish a JSON Schema and/or VSCode and/or IJ plugin to edit whatever mysterious new thing you devise, it's going to be yet another thing where learning it only helps the learner within the Radicle ecosystem and cannot leverage their existing knowledge.
It doesn't even have to be YAML or JSON; there are quite a few projects that take the old(?) Jenkinsfile approach of using an actual programming language, and some of them are even statically typed.
I also do recognize the risk to your project of trying to fold in "someone else's" specification, but surely your innovation tokens are better spent on marketing and scm innovations, and not "how hard can it be" to cook a fresh CI job spec
I likely would have already written a much shorter comment to this effect, but having spent the past day slamming my face against the tire fire of AWS CodeBuild, the pain is very fresh from having to endure them thinking they're some awesome jokers who are going to revolutionize the CI space
So a peer to peer system is one where all participants are "equally privileged in the network". This usually means they all run the same software as well.
[0]: https://en.wikipedia.org/wiki/Peer-to-peer
I guess I'm mostly just wondering what properties you are trying to accomplish. There is talk of publicly seeding repositories that are self-certifying, but also of using the Noise protocol for encryption, so what is the security model? Who are you trying to keep things secret from? It is all very confusing what the project actually aims to do.
All I'm really saying is the project could use a paragraph that explains what the concrete goals of the project are. Without buzzwords.
> The easiest way to install Radicle is by firing up your terminal and running the following command:
>
> $ curl -sSf https://radicle.xyz/install | sh
Ah.. my high hopes were immediately dashed by the trash that is curl-bash. What a great signal for thoughtless development, if this project catches on I can't wait to watch the security train wreck unfold. Maybe someday we'll get an "Open-Source, Peer-to-Peer, GitHub Alternative" that doesn't start with the worst possible way to install something.
Risks inherent to pipe installers are well understood by many. Using your logic, we should abandon Homebrew [1] (>38k stars on GitHub), PiHole [2] (>46k stars on GitHub), Chef [3], RVM [4], and countless other open source projects that use one-step automated installers (by piping to bash).
A more reasonable response would be to coordinate with the developers to update the docs to provide alternative installation methods (or better detail risks), rather than throwing the baby out with the bathwater.
[1] https://brew.sh/
[2] https://github.com/pi-hole/pi-hole
[3] https://docs.chef.io/chef_install_script/#run-the-install-sc...
[4] https://rvm.io/rvm/install
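One concrete shape such an alternative could take is a download-inspect-verify flow instead of a straight pipe. This is a rough sketch of mine: the `verify_then_run` helper and the pinned-hash workflow are illustrative, not anything Radicle actually ships or documents.

```shell
#!/bin/sh
# Hypothetical alternative to `curl ... | sh`: download the installer,
# verify it against a hash published in the docs, and only then run it.

verify_then_run() {
  file=$1
  expected=$2
  actual=$(sha256sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    sh "$file"
  else
    echo "checksum mismatch for $file" >&2
    return 1
  fi
}

# Usage (the hash value would come from the project's release page):
#   curl -sSf -o install.sh https://radicle.xyz/install
#   verify_then_run install.sh <published-sha256>
```

This still trusts the docs page that publishes the hash, but it turns a silent pipe into two reviewable steps and catches truncated or tampered downloads.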
The script is safe regarding interrupted transfer, unless you happen to have a dangerous command in your system matching ^(t(e(m(p(d(ir?)?)?)?)?|a(r(g(et?)?)?)?)?|i(n(_(p(a(th?)?)?)?|fo?)?)?|s(u(c(c(e(s?s)?)?)?)?)?|f(a(t(al?)?)?)?|m(a(in?)?)?|w(a(rn?)?)?|u(rl?)?).
And after that's been handled, well, what's the difference to just providing the script but not the command to invoke it? Surely if one wants to review it, downloading the script to be run separately is quite straightforward. (Though I thought there was a method for detecting piped scripts versus downloaded ones, I don't think it works for such small scripts.)
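The interrupted-transfer protection being discussed is the common pattern of wrapping all the work in a function and invoking it only on the script's last line: a truncated download then produces a syntax error (or an undefined function) instead of half-executed steps. A minimal illustration, with placeholder steps:

```shell
#!/bin/sh
# If the transfer is cut off anywhere before the final line, `main` is
# either never defined completely (shell syntax error) or never invoked,
# so nothing runs. The echo steps stand in for real install work.

main() {
  echo "step 1: download release"
  echo "step 2: install binary"
}

main "$@"
```

The regex in the parent comment is the flip side of this pattern: a partially downloaded script could still end exactly at a function-name prefix, which is only dangerous if that prefix happens to name a real command on your system.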