Having built CI/CD pipelines in CircleCI, Jenkins, GitLab CI, Concourse, Semaphore, and GitHub Actions, I'm not entirely sure what this offers, other than being FOSS. That's not to say that it being FOSS is not valuable – for some users it most certainly is – but that I can't see another way that this would be valuable.
The config language is nice, but honestly I've never been bothered by YAML for pipelines. I want my pipelines to be declarative anyway. I'm a little disappointed that Cicada didn't go with one of the existing options such as Jsonnet, Cue, Dhall, etc.
Lastly, there's a feature I've always found useful and in many cases necessary, that many CI systems don't have – enforced serial execution. When you're doing continuous delivery it's often critical to make sure that only one release is going out at a time and that the version being released strictly increases. I've seen outages because of release jobs racing and an unintended downgrade happening. This requires CI system support to achieve. Last time I checked, Jenkins, Concourse, and GitHub Actions all provided mechanisms for this, Semaphore may have as well. Circle and GitLab did not (the former after endless discussion with our account manager over it!) and I found it hard to trust the platforms as a result. Cicada does not appear to have this, which is a shame. It suggests to me a lack of hands-on production experience with continuous delivery. Arguably this is not a CD system, but there's no reason why a CI system shouldn't be CD as well, it's not until a much larger team/product size that a dedicated CD system becomes truly necessary and they take a lot of work to set up well.
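For reference, the GitHub Actions mechanism I mean is a concurrency group on the deploy job. A minimal sketch, with a made-up workflow name and a hypothetical deploy script:

```yaml
# .github/workflows/deploy.yml (sketch)
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    # Only one run in this group executes at a time; with
    # cancel-in-progress false, newer runs queue behind the
    # in-progress one instead of racing it.
    concurrency:
      group: production-deploy
      cancel-in-progress: false
    steps:
      - uses: actions/checkout@v4
      - run: ./ci/deploy.sh   # hypothetical deploy script
```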
Creator of Cicada here. Thank you for the feedback! I've mentioned this in a few threads already, but the reason for making a new DSL for writing the workflows is that YAML makes it hard/cumbersome to express more complex workflows. Using a programming language though gives you more control over how your workflows execute. While there are already plenty of tools that use existing programming languages (ie, Python/TypeScript) to configure workflows, having a custom DSL allows you to make some of the more abstract terms like caching, conditional execution, permissions, etc. more explicit.
To your last point, I have experience with using CD in production, but not to the scale where I have builds stepping over each other and causing issues. I agree that serial builds are important in this case, and it's something that I will need to look into (conceptually it sounds pretty simple).
You might want to check out SparrowCI - it combines YAML AND standard programming languages (Python/Ruby/Golang/PowerShell/Bash/Raku), giving you the best of both worlds - declarative style, and extremely flexible flows when required.
Awesome! I think serial builds should be fairly simple to implement if you can tie it back to a transaction in a data store.
I get the DSL desire, and I feel I've already lost the "YAML is fine" battle elsewhere so that's not a problem. I think a language like jsonnet or cue would have been a better choice simply because they don't require users to learn a new language, or you to implement one. Both would have allowed plugging in your own standard library of functions and abstractions.
Maybe I’m misunderstanding? For GitLab this should be doable by two different ways.
The first is merge trains, which merges requests one by one to prevent this exact outcome. You just have to force all deployments to be done via a merge request. That’s the downside.
The second being forcing a GitLab Runner to run one job at a time. Tag it as “deployer” then ensure all deployment jobs are marked as “deployer”. That runner will pick up deployment jobs one by one in order of first creation.
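Roughly, assuming the runner's config.toml also sets `concurrent = 1` so it only takes one job at a time, the job side looks like this (job name and script are made up):

```yaml
# .gitlab-ci.yml (sketch)
deploy-prod:
  stage: deploy
  tags:
    - deployer        # only the runner registered with this tag picks it up
  script:
    - ./ci/deploy.sh  # hypothetical deploy script
```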
There's also a third way called resource groups [1]. We use that to ensure that we only run the newest job if we have multiple deployment jobs waiting for execution. This way even if we have multiple pipelines racing each other, only the last deployment job wins.
[1] https://docs.gitlab.com/ee/ci/resource_groups/index.html
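The YAML side of that is just one line on the job - a minimal sketch with a hypothetical job and script:

```yaml
deploy-prod:
  stage: deploy
  resource_group: production   # GitLab runs jobs sharing this resource group one at a time
  script:
    - ./ci/deploy.sh           # hypothetical deploy script
```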
I think merge trains weren't a feature when I worked with GitLab, it was a while ago. This does certainly solve it, but at the cost of quite a different process. If what you're optimising for is time to release, merge trains add an overhead. At some point that overhead is worth it, but it depends on the team/product/etc.
Having just one runner be the deployer is an option too. I think we used hosted runners so not sure if this is possible in that setup? This would also make pipelines harder to optimise. Often there are many parts in pipelines that are safe to do in parallel, and only a few "critical sections" around which you want locking. This would solve simultaneous releases, but not the general case of the problem (which at least Jenkins and GitHub Actions manage ok).
GitLab always felt to me like Travis++, whereas systems developed later felt like they were built on fundamentally better primitives. Jenkins is a weird one because it has all of the features, can do all of these things, really quite well in many cases, but has a pretty bad developer experience and required a lot of maintenance to run a performant and secure install.
Is there any CI system that allows one to write declarative pipelines? What would that even mean? GitHub Actions, GitLab Pipelines, etc. are all effectively just shell scripts disguised as more or less verbose yaml, with some trigger conditions added.
I guess nix's hydra is the closest to a declarative CI system that exists, because nix does the hard work of abstracting the imperative build steps into a declarative interface. Even then, if you want to do anything outside of nix derivations you would be writing something imperative.
I'm not looking for literally everything to be declarative, but ideally each snippet of shell script would be entirely independent and run effectively reproducibly. GitHub/GitLab do this well enough, although not perfectly. Concourse does this really well. Bazel is also great at this.
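Concourse manages it by making every job declare its inputs up front; a minimal pipeline sketch (repo URL and task file are made up):

```yaml
resources:
  - name: repo
    type: git
    source:
      uri: https://github.com/example/app.git   # hypothetical repo
      branch: main

jobs:
  - name: unit-tests
    plan:
      - get: repo            # the job only sees inputs it explicitly gets
        trigger: true
      - task: run-tests
        file: repo/ci/unit-tests.yml   # hypothetical task definition
```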
This project looks interesting, I very much like how secrets are handled, but here’s 3 reasons why, given what I can gather from the docs, I can’t use it right now.
1. It doesn't seem it's possible to include other .ci files? I have multiple projects that use the same CI config with their own augmentations, and Cicada seemingly won't work with that flow?
2. Self-hosted non-Docker runners require Python 3.11. Some of us (albeit few of us) don't have the luxury of being able to abandon ancient OS targets.
3. It doesn't seem git.push allows the branch to be specified as a "$DEFAULT_BRANCH" macro (or similar). Some projects use master, some use main, some use gold, whatever - it would be nice to not have to know.
The example CI repo is no more than a "hello world". I don't think people with simple CI requirements are interested in switching from what they already have. Your target audience is likely someone like me who maintains 10k+ lines (merged) of GitLab YAML and wants to get out. I would be more encouraged to look deeper into this project if it could show me the value it adds, because right now it just seems like a different YAML that I'll eventually loathe too.
Very neat project, I hope to see it mature.
Creator of Cicada here. Thank you for these questions! I'll try and answer them all:
0: Secrets are stored using Vault. Read this commit message [1] for a full breakdown.
1: Currently you cannot include other .ci files. I have been in feature creep mode for months now, and I've been forcing myself to stop adding features and start talking to users. The goal is to make Cicada more or less a general purpose programming language, but the first step is making it work well for defining CI/CD workflows.
2: If it is necessary, I could back-port the self-hosted runners to Python 3.10/3.9 or earlier. And, since the runner interface just uses websockets, I (or someone else) could make a runner in a different language, ie Rust or Go.
3: There is an "event" global variable that includes info like "event.branch", which is the branch that was pushed, but it does not include things like the default branch. Currently you could do `on git.push where event.branch is "main"`, but something like "event.branch is event.default_branch" would be even better. I'll work on adding that.
The value add currently is that Cicada is FOSS, platform agnostic (works with GitHub/Gitlab), and uses a language that consolidates the workflows and scripts into one manageable file format. Existing CI systems are already packed with features that people expect, so the current struggle is catching up to this and then adding more on top of that. I'm trying to focus on what sets Cicada apart: That it gives you more control over your workflows, while being expressive and easier to manage than YAML and shell scripts.
[1]: https://github.com/Cicada-Software/cicada/commit/2659f79b500...
I definitely understand the reluctance towards feature creep, but I can imagine most bigger customers mightn't want to rewrite their include-heavy CI definitions in this, knowing it will come later and they'll have to rewrite again.
It also seems to require GitHub credentials instead of just supporting SAML/LDAP/local accounts, and I don't see a way to make it work with other git providers (e.g. Gitea), so I'm not sure where the home gamer market would use this either? If I'm ponying up for private GitHub repositories already, why wouldn't I just pony up slightly more to use their system? Maybe I'm just missing it (I am, after all, a noob just scraping by on beginning to learn git and using it to manage a few things).
Creator of Cicada here. Currently I am using GitHub SSO to reduce spam for the logins. When installing Cicada locally you need to create an admin account, though there is no way (in the UI) to create new local users. Adding more sign-in methods (ie Gitlab, Bitbucket) is definitely on my radar.
This looks really interesting, but I wish people would instead contribute to WoodpeckerCI. It's a FOSS fork of Drone, and Drone really is the pinnacle of CI for me. Only what you need, no fat, simple, functional, modern, easy. IMHO the only thing Woodpecker really needs right now is help finishing their core implementations of K8s support and support for the big 3 DVCS vendors.
Something I have noticed lately: the first startup idea that every DevOps person starts with is to create a CI/CD project, because they think the world is inefficient and workflows can be optimized, and then they pivot to other ideas.
It's definitely a much bigger challenge than it appears on the surface. But I think the space is due for a shake-up - the current crop of tools has been around for years now, and they all suffer from a broadly similar set of scalability and usability problems.
I think the market for CI/CD is very mature, and it often boils down to a price war. The economics of operating your own CI/CD in an org like this one are fairly bad compared to GitHub Actions. Especially given the way GitHub bundles Actions with its other services, as a buyer it definitely feels like a better deal than buying best-in-class CI/CD. GitHub Actions gets the job done.
The slow-changing nature of pipelines makes them candidates for not being touched once the configuration is set.
So, to answer your question, in my opinion - is there room for improvement? Yes, but the value is too small for a customer to switch to a different provider when their existing CI/CD is working fairly well.
I have logic that needs to be put into the build. I can put it into a DSL or I can put it into a bash script. A bash script can be run on a developer's machine. Thus a bash script allows the developer to test the build on their own machine without the need for running an entire build through the CI system first.
I therefore don't see the need for all that DSL stuff that's designed around not needing a bash script. I still need the bash script for testability purposes.
I'm fine with the DSL otherwise; switching from Jenkins to GitHub actions isn't a big deal especially when all you're going to do is run a bash script.
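The CI config then stays a thin wrapper around that script - something like this sketch (script path is made up):

```yaml
# .github/workflows/build.yml (sketch)
name: build
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./ci/build.sh   # the same script a developer can run locally
```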
What I really want is a 'Grand Translator' tool that allows any CI pipeline definition to be translated into any other CI pipeline definition.
I want to take any .gitlab-ci.yml and magically translate it to a GitHub workflow, and vice-versa. I know it isn't impossible, but it's a heck of a lot of work to get it right with all the hidden features behind declarative pipelines.
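Even a trivial job shows how much has to be mapped; a rough sketch of "the same" job in both dialects (image and commands are made up):

```yaml
# .gitlab-ci.yml (sketch)
test:
  image: node:20
  script:
    - npm ci
    - npm test
```

```yaml
# .github/workflows/test.yml (sketch) -- "the same" job, but the trigger,
# container, and checkout steps all have to be mapped explicitly
name: test
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    container: node:20
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
```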
If doing this sort of a migration is a real possibility, it's an argument for having the bare minimum logic in the CI tool and as much as you can in your own scripts...
That's actually what we did for our monorepo. We had a huge Gitlab pipeline of multiple steps and jobs, now it's a single job pipeline that builds, tests and deploys 15 different projects, all powered by Nix and some Python scripts.
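A minimal sketch of that shape (job name, image, and script are hypothetical; the real pipeline obviously does far more, and assumes a shell.nix at the repo root):

```yaml
# .gitlab-ci.yml (sketch)
build-test-deploy:
  image: nixos/nix:latest
  script:
    # nix-shell pins the toolchain; the Python script fans out over the projects
    - nix-shell --run "python ci/run_all.py"
```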
Ugh, yes, but then I fall into the "too complex for bash, too simple for Python" trap. I feel like everything goes to hell the moment someone writes a Python class to deal with /something/ in a build pipeline. For bash, well, we all know where that goes... the other options are simply too obscure for me to invest in.
But I agree with your assessment, as much as the classic `script:` makes me want to die.
Creator of Cicada here. I mentioned this in another thread, but I have an experimental "GitHub Actions to Cicada" converter tool I'm working on that makes it easier to import GHA workflows to Cicada. GitHub already has an importer tool to import other git providers to GHA, but of course they don't have any export features.
Like you said, there are lots of intricate details you need to get right, and each CI/CD provider has a different ethos about how CI should be done. What I'm trying to do with Cicada is create one workflow format that gives you the power of GitHub Actions with their numerous event triggers, but make it work for other providers like Gitlab. Having one format that works with many providers is better than converting multiple formats back and forth, IMO.
If anyone from the project is reading this, the multiple misspellings or typos featured prominently on your website make it look very amateurish (imo). Between "You deserve better then YAML", "autotomatically escaped", and other weird phrasing, this doesn't really inspire confidence.
It's really, really limiting, and very hard to do things like "run this pipeline if on this branch"
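The usual GitLab answer for that particular case is a rules: block on the job - a sketch with hypothetical names - though it gets unwieldy fast once the conditions multiply:

```yaml
deploy:
  script:
    - ./ci/deploy.sh                        # hypothetical deploy script
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'     # run this job only on main
```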
I mean, go for it if you want, but I'm not sure why you'd need to maintain so many heterogeneous pipelines that would warrant a tool like this.