Disclaimer: I'm pretty biased towards Gitlab -- I write about the things you can do with it from time to time, and they gave me some free swag once.
Best CI I've ever used is Gitlab CI[0]. The runner is completely open source[1] and you can use your own runner with your gitlab.com (or local instance) projects -- set it up in an autoscaling group[2] for savings.
I run https://runnerrental.club but Gitlab also recently released the ability to pay for minutes in 11.8 [3], so my product is more-or-less dead in the water. I don't mind, though -- Gitlab is such an excellent tool that I'm glad to see them fill the need.
But back to Gitlab CI -- the YAML configuration documentation[4] is pretty fantastic -- most easy things are easy and hard things are possible. I suspect that one could run an entire startup like circleci/travis/drone based on just the software that Gitlab has open sourced and made available already.
If you know a bit about GitLab and Docker, GitLab CI is pretty easy to grok. I really enjoy that you can run your CI jobs inside any old Docker container (with a shell). GitLab CI is built up from very simple concepts and functionalities, but still enables some powerful use-cases.
The artifacts feature is great and some artifacts, like unit test report files, can be interpreted by GitLab and used in various parts of the GitLab web UI. A lot of this just works and most of the CI features are available in the GitLab community edition which is open source.
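A minimal `.gitlab-ci.yml` sketch of those pieces (the image, test command, and report path here are assumptions, not anything specific from this thread):

```yaml
# Runs in any Docker image with a shell; GitLab picks up the JUnit
# report and renders it in the merge request UI.
image: python:3.7

test:
  stage: test
  script:
    - pytest --junitxml=report.xml
  artifacts:
    reports:
      junit: report.xml   # interpreted by GitLab, not just stored
    paths:
      - report.xml        # also downloadable as a plain artifact
```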
I have not used Jenkins actively since before the Jenkins pipeline file format was common. So for me Jenkins always appeared to be this game of checking the right checkboxes and clicking the right buttons in the Jenkins UI. The new pipeline feature is probably much nicer. However, now that I use GitLab I don't really see any reason to switch back to Jenkins.
You mention docker and it is super great for CI: it's probably one of the most widely used features of Jenkinsfiles (you can specify what image or dockerfile you want a stage to run in - simple but powerful, as you probably know).
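A sketch of what that looks like in a declarative Jenkinsfile (the image name and commands are assumptions for illustration):

```groovy
pipeline {
    agent none
    stages {
        stage('Build') {
            // Run this stage inside a Docker image;
            // "dockerfile true" would build from the repo's Dockerfile instead.
            agent { docker { image 'node:10' } }
            steps {
                sh 'npm ci && npm test'
            }
        }
    }
}
```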
Last I checked, there were a lot of little things that made it not possible to move to GitLab CI. E.g.:
- Can't customize your git checkout process (e.g. shallow clone with depth, or merging source branch with target branch with certain strategy)
- Can't make job run/not run based on filter on source branch/target branch/etc. of a merge request
- Can't dynamically make certain jobs only run on certain agent machines
So I'm still stuck with Jenkins for now. I know we love bashing Jenkins but I have yet to come across anything that offers the same amount of flexibility.
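For comparison, these are the closest `.gitlab-ci.yml` knobs I'm aware of - a sketch only; whether they go far enough for these cases is exactly the question, and availability depends on your GitLab version:

```yaml
variables:
  GIT_DEPTH: "10"        # shallow clone depth for the checkout

build:
  tags: [linux-heavy]    # route the job to runners registered with this tag
  only:
    - merge_requests     # run only for merge request pipelines
  script:
    - make build
```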
I will second this, Gitlab has the best CI I've ever used, and I don't know what it is. The UI is just so clean, it does what I need to and is easy to configure. I put all my projects on Gitlab mostly because of the CI, but also because of the other great features.
GitLab product director for CI/CD here - thanks so much for the feedback, everyone. It's really great to read how much you're getting value out of what we built.
We have an overall CI/CD direction page up at https://about.gitlab.com/direction/cicd which you can drill down into the individual stages plans from. Feedback is always welcome, we love building things in partnership with real users. You can reach me at jason@gitlab.com any time.
I haven't checked out Gitlab in a long time. My, they've come a long way! I love a lot of what I'm seeing including the Web IDE and their bias towards making CI/CD a priority.
Inclusion of the free docker registry was also pretty visionary, and it's been a while since they added that -- it's crucial for just about all my new projects.
This allows you to connect to a terminal in the running job on GitLab CI for debugging. Would love to understand how this does or does not meet your use-case here.
If you prefer to host your code on GitHub, that's fine! You can keep hosting your code on GitHub but build, test and deploy from GitLab CI/CD. Take a look at https://about.gitlab.com/solutions/github/!
This is not a full standalone mode, because GitLab CI is a built-in solution that cannot be easily separated, but it might work for you.
TBH, it's easily done in shell via the REST API. We use a custom pipeline runner built from simple PowerShell scripts that works even better for us than the default style.
We set all jobs to manual and then our script triggers them depending on commit message, person, moon phase etc.
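The same idea in plain shell, using the jobs API's "play" endpoint for manual jobs (the variables here are assumptions you'd set for your own instance):

```shell
# Trigger a manual job via the GitLab REST API.
# $GITLAB_URL, $PROJECT_ID, $JOB_ID and $GITLAB_TOKEN are placeholders.
curl --request POST \
     --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
     "$GITLAB_URL/api/v4/projects/$PROJECT_ID/jobs/$JOB_ID/play"
```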
But really, this should be in core. It's very hard to do multi-repository stuff. It's not that easy to do monorepo stuff either - I really need a pipeline within a sub-project, a mother pipeline, the option to run whichever one I want, etc... GitLab pipelines could be a lot better.
Hello, I see a lot of great feedback in this post. I am a product manager working at CloudBees, the primary corporate sponsor of Jenkins. Jenkins is now in the Continuous Delivery Foundation as well.
While it is easy to bash on an inanimate object, there are some very dedicated and empathetic people who care deeply about the project. Some of those people do this work in their off-hours and some do this work as part of their daily work activities AND also in their off hours.
In that spirit, we want to make Jenkins better and created a separate group at CloudBees in the Product and Engineering teams late last year. They focus on open source work for Jenkins and on some proprietary things for CloudBees Core (built on Jenkins). We also have a dedicated user experience/product designer who started working on the project a few months ago. One of the first things he and I worked on was creating a curated, tailored version of Jenkins via the CloudBees Jenkins Distribution. This distribution will focus more and more over time on a guided workflow for continuous integration and delivery with Jenkins. These patterns will also be shared with open source Jenkins - some through direct contributions and others through suggestions (better plugin categorization, documentation, etc.).
Please use this comment thread to share your constructive, honest feedback about how we can improve Jenkins.
I think it's already been mentioned about Jenkins Configuration as Code. But in general, stick to one configuration method - I have had to migrate from JJB > groovy scripts > init.d groovy > Jenkinsfile.
And can we PLEASE open up the issues tab on the GitHub repos, especially for plugins. There is currently no way to provide any feedback/report issues on these plugins because the "issues" tab is disabled. Our current options are:
- Put a comment on the plugin wiki page
- Hunt down the relevant support forum for the plugin
- Find the original source repo and post an issue
All of which are not ideal, and none of which directly helps the development of the plugins.
Jenkins has used Jira for bug tracking since before GitHub existed, so although the project's code and all plugins are now managed on GH, bug tracking continues to live in Jira. Anyone can create an account and add issues here: https://issues.jenkins-ci.org/secure/Dashboard.jspa All plugins track issues there, so you just put the appropriate plugin, or "core" for Jenkins itself, in the 'component' field and the issue should be assigned to the right person.
> While it is easy to bash on an inanimate object, there are some very dedicated and empathetic people who care deeply about the project. Some of those people do this work in their off-hours and some do this work as part of their daily work activities AND also in their off hours.
I've been using Jenkins heavily over the past year since a client of ours purchased the enterprise version. It was billed to me as a mature open source product that has been refined over the years by people trying to optimize their dev operations. While I'm sure some people really care about it, the user experience is so utterly disappointing it's almost impossible to imagine how it got to this state without neglect.
Even if you ignore all the complicated stuff, the web UI is embarrassing. While I don't suggest a "pretty" UI is necessary for devops, I would think you'd have a quick win just by having some people re-style the existing UI to make it look and feel like something built this century and not require a dozen clicks to get to important information. There are also bugs that are so painfully obvious it makes me wonder how they still exist. An example: if your GitHub branch has a slash in it (e.g. feature/something), you get a 404 error if you try to navigate to that build's results.
There are also features that appear to have almost no value yet are in the core UI and clearly took some time to build. The weather icons representing various permutations of previous build states is one ridiculous example that comes to mind.
I would respectfully suggest you run through some real world Jenkins experiences like the ones mentioned in the article. Also setting up a new server, configuring non-trivial SCM settings, debugging Jenkinsfiles, etc. To echo the article's sentiment - it feels like I'm constantly fighting with Jenkins to do what I need instead of being guided into a mature set of features.
Conversely - Octopus Deploy is a related product I have been using alongside Jenkins which has been an absolute joy to work with. Everything from initial setup to configuring its agent software on host servers has been straightforward. It has a simple, elegant UI that provides access to important information and actions where you would hope to see them. And most importantly - everything works. I have yet to encounter a bug or experience any broken UI states.
I'm glad to hear CloudBees is making some effort to improve things and I hope PMs like you continue to be involved in the community and solicit feedback, even if it's hard to hear sometimes.
Jenkins configuration as code. It's hard to configure without going to all the files and options. You have several things to take care of: server config files, plugins, credentials, projects, etc. Going through xml files is not fun.
Also, having predetermined pipeline plugin configurations based on runner types would be nice. For example, for k8s I would expect the k8s plugin + credentials + organization plugin. In many cases, the only thing I want is a pipeline with which I can deliver code.
Do something with Jenkins declarative and get rid of Groovy scripting. The latter makes a mess in your files, the former is very limited. In the end you have to write Groovy if you want things more bespoke. It would be nice to just use any language in a container, as Drone does for its plugins.
I've been involved in pioneering Jenkins, maintaining Jenkins, and using Jenkins for the past 7 or so years off and on in different roles, often that of the Jenkins admin. The original article has a lot of good feedback that I'd suggest addressing; it hits a lot of the pain points I've had in the deploys I've had my hands in.
For myself, I'd prioritize -
An in-depth improvement of the Config as Code system as applied to Kubernetes, both for management of the config and of relevant plugins.
Plugin compatibility and management of plugins. It's not... smooth.
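On the Config as Code point, for anyone unfamiliar: the JCasC plugin reads a YAML file along these lines (a sketch only; the values are assumptions):

```yaml
jenkins:
  systemMessage: "Configured by JCasC - do not edit through the UI"
  numExecutors: 0
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: admin
          password: "${ADMIN_PASSWORD}"  # injected from the environment
```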
cloudbees shop here - i can say that because of how fragile the plugin environment is, we never, ever update our jenkins - we can't, we would spill money out the instant it went offline. so every few years we just roll out a new jenkins and force the devs to migrate to it, slowly, also over the course of years. every half a decade the cycle restarts. ok, we have been through exactly one of these, but it's a full cycle and starting up again. this has always been the biggest pain point: developers begging for a new plugin they can't have because updating a dependency is forbidden. so they either don't, or they do, by making their own island ci/cd, or doing it so poorly that just as much risk comes into jenkins as it would rolling the dice on an untested upgrade.
CloudBees Support Engineer here! We offer a free Assisted Update service to customers for exactly this reason. We will examine your existing Jenkins installation, compare it with the target version you would be updating to, and outline any possible snags that you would need to address during the process. We also help ensure that you have a good backup so that you can roll back if need be, and if you are a Platinum customer a Support Engineer will hang out on a conference call while you perform the update. We've done loads of successful updates with customers this way, I think it's one of the most useful services we offer. Updates don't have to be as painful as you've described!
You can use the YAML-like declarative syntax [1] instead to configure the pipelines for 90% of what you do, and just use Apache Groovy for the more complex logic, or interfacing with plugins.
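e.g. a sketch of that split (the stage name and commands are assumptions), with declarative syntax for the structure and a `script` block as the escape hatch:

```groovy
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh './gradlew test'
                // Drop into Groovy only where declarative runs out:
                script {
                    def tag = sh(script: 'git describe --tags',
                                 returnStdout: true).trim()
                    echo "Building ${tag}"
                }
            }
        }
    }
}
```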
I sincerely wish I could move away from Jenkins for the reasons stated in TFA (GUI-oriented, slow, hard to backup/config, test-in-production mentality and boundless plugins) but I've never found something that fits the bill.
The much-touted repo integrations (travis, circle...) all have an exclusive focus on build-test-deploy CI of single repos.
But when you have many similar repos (modules) with similar build steps you want to manage, and want to have a couple of pipelines around those, and manage the odd Windows build target, these just give up (it's docker or bust).
Sadly, only Jenkins is generic enough, much as it pains me to admit.
Anyone got a sane alternative to jenkins for us poor souls?
TeamCity from JetBrains is the same thing as jenkins, except the core features are working core features instead of broken plugins. It's paid software though, you get what you pay for. https://www.jetbrains.com/teamcity/
Of the CI tools I've used (most of them) TeamCity was my personal favorite--but the advantage of Jenkins is that it's very widely used, has a greater breadth of capabilities due to the huge plethora of plugins, and a huge amount of support info readily available online. Some plugins are even maintained by an external vendor that produces the tool you're trying to integrate with and are either better supported or the first to get timely updates.
Bamboo on the other hand is IMO the worst of the commercial CI tools by far, and where I work it has gone down for us the most. Atlassian itself doesn't appear to be investing in it much anymore, judging by the slow pace of development in recent years; at their most recent conference you could hardly find it mentioned or see much presence for it anywhere.
In all the CI systems I've used though, there has not been one that I haven't encountered some major difficulties with.
Beyond that, anything to do with build automation for a large number of users always quickly becomes a support & maintenance quagmire. Users frequently want to install a new (barely maintained) plugin to solve a problem they have, complex interactions lead to difficult to understand failure modes that require time consuming investigations ("your CI tool is the problem and broke my build" ... "No, your build is broken" ...).
Fine, when you are a single-vendor shop, like a JetBrains or Atlassian stack, and you have plenty of financial power, there are always cool features that can bring benefit. But in the end, CI and CD systems are glorified semi-smart cron runners. Are these tools 10x better than Jenkins? Not so much. CI/CD is, from one standpoint, the most important tool and at the same time the least important one: delivery has to suck very much before you migrate to a new platform just because. Jenkins shines here - it's not perfect, but it works.
More or less it's free from a licensing standpoint - you don't have to go through corporate procurement hell. It's not free from a workforce perspective, but none of these tools come with zero configuration; whichever you pick, some YAML or some other crazy configuration still needs to be done (like the Bamboo DSL).
And actually, you can get quite far with the free TeamCity license of three build agents and 100 build configs. I’m also fairly sure that Jetbrains would take kindly to license requests from open-source projects and academia.
TeamCity doesn't handle downstream builds properly. Bamboo has severe stability problems. I've worked at places that evaluated them and always found Jenkins was still the least bad option.
TeamCity has an extremely generous 100 build configuration limit; if you're exceeding that, then in all likelihood you're getting far better value from it than the additional licensing cost.
* Broken base Ubuntu images being recommended by Atlassian as the default for agent Image configuration, only to be fixed to a usable state months later;
* Being generally years behind other CI tools, even the traditional ones;
* Data exports corrupting themselves despite apparently succeeding, blocking migrations after upgrades or server changes;
* The official documentation somewhere recommending copying across password hashes directly when setting up a new admin for a migration, but I can't find this anymore, so they've hopefully improved the process and documentation for this;
* A bug in an earlier version in which a strange combination of error cases in an Elastic Bamboo image configuration could spam EC2 instances in a constant loop, which we thankfully spotted before it ate through our AWS bill;
* No clear messaging from Atlassian about how the future of Bamboo compares to Pipelines. The official line seems to be that Bamboo is for highly custom setups that want to host their CI themselves, but you get the impression from the slow pace of development that they're slowly dropping it in favour of Pipelines. I'd rather they be honest about it if that is the case.
Those are just the problems I can think of off the top of my head, anyway.
I agree. I used TeamCity and liked it. It was like Jenkins, but easier to setup, less messy and just worked for what we needed it. It was worth paying every penny for it.
We use TeamCity even though we have Gitlab for source control. TeamCity has worked for us for years, which is what we needed. Don't know if we'll ever switch to Gitlab for CI.
Not with pipeline files. I am a total Jenkins noob, but I was able to (relatively) quickly setup a minimal job that automatically pulls config from the relevant GH repo.
Ah yes, pipelines do make a difference in configuring jobs. However, how are you managing your plugins? Your Jenkins configs? Most likely those are manual (however if you've found a way that works well, please share). I've also found that for some functionality, I've had to add Groovy into my pipelines.
That said, pipelines has made a HUGE difference. I still want to migrate but this fixes a large pain point.
Have you observed any limitation or problem with it? I've been very interested in transforming our internal Jenkins CI into something lighter and modular with less maintenance which still allows multi-platform slaves, and BuildKite seems like a very interesting new player.
Gitlab and Concourse both support windows runners as far as I can see. They also don't require docker, but you might actually want that for most of your jobs.
My biggest gripe about Gitlab is you can't schedule a job in code, and I suppose it's less than ideal to support 3rd party repos in hosted Gitlab, but I don't know why you'd not use it as an SCM.
The bigger problem would be using a group job that triggers a bunch of other jobs to do the many-modules type of development you spoke about, but I'd just develop my modules separately, and build them in parallel steps if need be.
Or are you looking more for putting the values in the .gitlab-ci.yml itself? This is something we have thought a bit about, but it gets strange with branches and merges where it's not always clear you're doing what the user wants as the different merges happen.
Same boat as you. I'm very happy with Gitlab CI. Do look into it, it's extremely flexible. Not quite as flexible as Jenkins, but far more than Travis/Circle CI, without it becoming an issue.
They now have configuration includes and cross project pipeline triggers, which is part of what GP seems to be looking for.
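Roughly like this (the project paths are assumptions, and availability depends on your GitLab version/tier):

```yaml
include:
  - project: my-group/ci-templates   # shared config pulled from another repo
    file: /templates/build.yml

deploy:
  stage: deploy
  trigger:
    project: my-group/downstream-app  # cross-project pipeline trigger
    branch: master
```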
Personally I’ve found that for my past and present use cases generating any needed step (e.g test matrix) e.g with a script is much more flexible, predictable, and reproducible since the generated result can be versioned.
I also successfully used various custom runners including baremetal Windows ones and virtualised macOS ones inside VirtualBox.
I don't think Jenkins is GUI oriented, slow or hard to backup/config, but I did enjoy using TeamCity a few years back. Sure it costs you an arm and a leg, but it worked well without any plugins.
Happy Buildkite user here across two companies. We've built some custom tooling around the GraphQL API here but have since found it solid for both periodic jobs and CI needs.
I’m experimenting right now with how far I can simplify the abstractions, and writing my own thing in rust.
Since my use case is integration with Gerrit, I poll the updated changes over ssh, and have regex-based triggers which cause a “job” launch. A job consists of making a database entry and calling a shell script, then updating the entry upon completion. Since a job is just a shell script, it can kick off other jobs either serially or in parallel, simply using GNU parallel :-)
And voting/review is again just a command so of course is also flexible and can be made much saner than what I had seen done with Jenkins.
So the “job manager” is really the OS - thus killing the “daemon” doesn’t affect the already running jobs - they will update the database as they finish.
The database is SQLite with a foreseen option for Postgres. (I have made diesel optionally work with both in another, two-year-old project which successfully provisioned and managed the event network of about 500 switches.)
Since I also didn’t want the HTTP daemon, the entire web interface is just monitoring, and is purely static files, regenerated upon changes.
Templating for HTML done via mustache (again also use it in the other project, very happy).
For fun I made (if enabled in config) the daemon reexec itself if mtime of config or the executable changes.
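The core of the job runner is small enough to sketch generically (in Python here for brevity, not the actual Rust code; the table and function names are illustrative):

```python
import sqlite3
import subprocess

def run_job(db_path, name, command):
    """Record a job in SQLite, run it as a shell command, store the result.

    Because the job is an OS process, killing the daemon that launched it
    does not kill the job; it updates the database row when it finishes.
    """
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS jobs (name TEXT, status TEXT)")
    db.execute("INSERT INTO jobs VALUES (?, 'running')", (name,))
    db.commit()
    result = subprocess.run(command, shell=True)
    status = "ok" if result.returncode == 0 else "failed"
    db.execute("UPDATE jobs SET status = ? WHERE name = ?", (status, name))
    db.commit()
    db.close()
    return status
```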
I think these kind of home-grown systems are pretty hard to "sell" to others. I know that I've written a couple, my general approach was to :
* Get triggered by a github (enterprise) webhook.
* Work out the project, and clone it into a temporary directory.
* Launch a named docker container, bind-mounting the temporary directory to "/project" inside the image.
* Once the container exits copy everything from "/output" to the host - those are the generated artifacts.
There's a bit of glue to tie commit-hashes to the appropriate output, and a bit of magic to use `rsync` to allow moving output artifacts to the next container in the pipeline, if multiple steps are run.
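In shell, the whole flow is roughly this (the repo URL, image name, and paths are assumptions):

```shell
#!/bin/sh
# Clone into a temp dir, run the build container with the project
# bind-mounted at /project, then collect whatever it wrote to /output.
set -e
workdir=$(mktemp -d)
git clone --depth 1 "$REPO_URL" "$workdir/project"
mkdir -p "$workdir/output"
docker run --rm \
    -v "$workdir/project:/project" \
    -v "$workdir/output:/output" \
    "$BUILD_IMAGE"
cp -r "$workdir/output/." ./artifacts/
```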
But in short, I'd probably spend more time explaining the system than an experienced devops person would spend creating their own version.
Zuul-ci.org has recently caught my eye, particularly because it fully supports heavy integration testing of multi-repo apps. It doesn't yet have support for Bitbucket Server though, which is sort of a deal breaker for me.
> But when you have many similar repos (modules) with similar build steps you want to manage
How many teams do you have? In all seriousness, if you aren't talking at least one team per repo, have you considered a monorepo setup? Aren't you burning time managing those many similar repos with many similar build steps?
That said, even in a monorepo, I still prefer Jenkins compared to cleaner seeming cloud offerings due to its limitless flexibility.
Internal libraries and similar fun stuff. Common build step ~~ same packager commands run on them.
Management is fairly simple with a template + seed jobs. It's just ... everything else is annoying.
I don't understand what you mean by one team per repo?
I agree, as I keep saying at $WORK: Jenkins is the least-worst system out there.
side note: I am confused by your usage of "TFA". I looked it up and it stands for what I thought it does, which has a pejorative connotation. That doesn't seem to be what you meant?
Heyo, sorry about that, I was playing on the fact that common parlance has tamed the usage to have "TFA = The FINE Article" in civil discourse =)
My bad, will check my assumptions some more!
TFA is in reference to actually Reading TFA or RTFA. Historically, it has very strong roots in Slashdot culture, which was sort of the Hacker News of the late 1990s and all of the 2000s. Using TFA somewhat indicates you RTFA, as opposed to everyone else who is just speculating on the content of the linked article (didn't RTFA).
Some of us here have been using terms like RTFA and TFA for twenty years, maybe longer.
Actually, historically its use doesn't necessarily have a pejorative connotation. You can take it to mean "The Fine Article" just the same. It's more of a joke reference with roots to 'RTFA' used frequently in discussion forums like this.
I think it was here on HN that someone introduced me to reading it as The Fine Article.
While I am a conservative christian myself (hah, most of you didn't guess that) I try to make a point out of not getting offended for such things, and if I can do it so can most people :-)
I'm helping clients move from Jenkins to Azure Pipelines which is part of Azure DevOps (formerly VSTS, TFS). If that doesn't make you dizzy then it's a pretty good product. It has a free tier. Windows build targets shouldn't be a problem since it's from Microsoft. Obviously it's not OSS.
We run our infrastructure off of CloudFormation, so we can easily spin up a staging environment that's an exact replica of production (the only difference is the number and size of instances). We also run a staging jenkins server that's defined in the CloudFormation config.
We keep our jenkins jobs version controlled by checking in each job's config.xml into a git repo. In the past I've seen the config.xml files managed by puppet or other config management tools.
This helps us get around the "hard to backup" and "test in production" issues. We can test out jenkins changes in staging, commit those changes to our jenkins repo, and then push up the config.xml file to the production jenkins server when we're ready to deploy.
>Anyone got a sane alternative to jenkins for us poor souls?
I haven't tried this yet myself but AWS CodePipeline lets you have Jenkins as a stage in the pipeline. You use Jenkins only for the bits you need without the extra plugins. The resulting Jenkins box is supposed to be very lean and avoid the problems you describe.
Performance isn't great. We're using codepipeline/codebuild (triggered via jenkins), and it's common to wait 30 seconds while the step is being created
Cloudbuild on the gcp side has had much better performance
I'm still on buildbot, but it's definitely showing its age and I'm hoping to move off of it within a year. I've been keeping an eye on Chromium's buildbot replacement, LUCI (https://ci.chromium.org/). It's still light on documentation and the source is very internal google-y (they seem to have written their own version of virtualenv in go). However, based on the design docs it does look like they ran into a lot of the same problems I have with buildbot, specifically the lack of support for dynamic workers, and how underpowered the buildbot build steps can be.
OP really needs to try Concourse. Same container-based workflow as Drone that is touted as a solution, but more mature, and much more testable than Drone.
Concourse really hits his requirements for ops-friendliness and testability. It's easy to upgrade because the web host and CI workers are completely stateless, and the work you do with Concourse is easy to test because the jobs themselves are all completely self-contained. Because Concourse forces you to factor your builds into inputs, tasks, and outputs, it becomes somewhat straightforward to test your outputs by replacing them with mock outputs.
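That factoring looks like this in a pipeline file (a sketch only; the repo URL, image, and script name are assumptions):

```yaml
resources:
  - name: repo
    type: git
    source:
      uri: https://example.com/app.git

jobs:
  - name: unit-tests
    plan:
      - get: repo
        trigger: true          # input: new commits trigger the job
      - task: run-tests        # task: fully self-contained container
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: {repository: alpine}
          inputs:
            - name: repo
          run:
            path: sh
            args: ["-c", "cd repo && ./run-tests.sh"]
```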
The main issue with Concourse is that it has a huge learning curve to use effectively. When you have many pipelines, you end up learning the undocumented meta pipeline pattern to help manage them. You end up implementing stuff like mock outputs by yourself, since it's not really a first-class concept. Concepts like per-branch versioning that have been in products like Jenkins for years are only now entering development as "spaces". All of the configuration can be version controlled, but it's all YAML, so to wrangle everything together, you end up using something which will compile down to YAML instead. RBAC is much improved in v5 but still leaves much to be desired. There are no manual steps, only manual jobs. Job triggering is relatively slow due to a fundamental design tradeoff where webhooks trigger resource checks which trigger jobs instead of triggering jobs directly, to make resources easier to maintain. Nobody really tells you this on the front page.
It has its flaws. But if you delve into it you see very quickly that it's an incredibly solid product. The ability to run one-off jobs from developer workstations on CI workers, and to easily SSH into those CI worker containers from developer workstations, is incredibly helpful for debugging issues. Because everything is stateless, everything tends to "just work". If you go in with your eyes open about its limitations, you'll have a good time.
Tekton [1] works in a similar manner where the pipeline stages define inputs, outputs and tasks. The great part about Tekton is it provides a set of building blocks that can be integrated into a larger system.
I hope to integrate Tekton into Drone [2] and allow individual projects to choose their pipeline execution engine. Projects can choose to start with a more basic engine, knowing they have the option to grow into something more powerful (and complex) when they need to.
The thing that turned me off concourse last time I checked it out is that their documentation assumes (assumed?) you're going to use BOSH. I don't want to have to learn and maintain yet another infrastructure as code tool, just for my build server. I know you can run concourse without it, but all their examples seemed to use it and I didn't want to hit edge cases that they didn't account for. So I gave up before too long.
certainly I love the idea of concourse as a release engineer but the lack of a nice UI for dev feedback/monitoring makes it a hard sell as a drop-in jenkins replacement
So, there is a resource that will fetch your pull requests so that they can be built. It's not quite as good as per-branch builds, but with GitHub's new draft pull request feature (if you use GitHub), it does the trick for us, but we're also a relatively small dev team.
Either way, it's not a drop-in Jenkins replacement. It really does have a high learning curve because it forces you to wrap your mind and your code to Concourse's model. Probably, a lot of your build and deployment scripts would need to be rewritten. The reason why you would do so is to get the benefits described above - everything (including environments) is version controlled, ops are stateless, running code in the CI environment from a developer machine, etc.
Our setup runs Jenkins master and slaves as Kubernetes pods, with plugins limited to only the very few required to get GitHub integration and slaves working.
Jobs are configured by adding an entire GitHub organization. All repositories with corresponding branches, pull requests and tags are automatically discovered and built based on the existence of a Jenkinsfile.
Everything is built by slaves using Docker, either with Dockerfile or using builder images.
Job history and artifacts are purged after a few weeks, since everything of importance is deployed to Bintray or Docker repositories.
By keeping Jenkins constrained in this fashion, we have no performance issues.
That is exactly how we're doing it as well, though I am interested in checking out CloudBees' Jenkins. We've recently incorporated Zalenium (a Selenium grid which autoscales nicely natively in Kubernetes) - just had to work a little magic with automatic service creation during builds.
I'm just waiting for Apache to adopt it, and then it'll sit and fester like everything else in the Apache graveyard, full of vulnerabilities and slowly decaying.
Jenkins is now part of the CD Foundation (https://cd.foundation/), which is one of the Linux Foundation sub-foundations. Don't expect it to show up in the Apache foundation.
Were they using an older version of Jenkins on the public internet? There's been a randomized GUID applied to the initial Jenkins admin password, which you can only access if you have direct access to the Jenkins install. I think this was added in 2016.
If you're stuck on an older version of Jenkins, you'd better not click the "refresh" button in the plugin management page, because otherwise the page just fills with red warnings saying that the latest version of each plugin is incompatible, or has dependencies that are incompatible, with your current Jenkins version.
There is afaik no way to install the last plugin that was compatible with your version of Jenkins.
Check this out, too. Free forever: https://www.cloudbees.com/products/cloudbees-jenkins-distrib.... We have included Beekeeper, which is an implementation of the plugin manager that provides a list of known-compatible, recommended plugins that CloudBees verifies and tests with each long-term support release of Jenkins.
> There is afaik no way to install the last plugin that was compatible with your version of Jenkins.
Probably not directly, but if you know the version, you can download the HPI and install it manually. Jenkin's Docker build also contains install-plugins.sh (https://github.com/jenkinsci/docker/blob/7b4153f20a61d9c579b...) that you can use to install specific version of plugin via command line.
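One way to make those pins reproducible is to bake them into a custom image with that script (the plugin names and versions below are purely illustrative, not recommendations):

```dockerfile
FROM jenkins/jenkins:lts
# Pin exact, known-compatible plugin versions instead of "latest".
# Names/versions here are placeholders for illustration only.
RUN /usr/local/bin/install-plugins.sh \
      git:3.9.3 \
      workflow-aggregator:2.6
```

Rebuilding the image then gives you the same plugin set every time, instead of whatever the update center happens to serve.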
We use Jenkins at work and have found a pretty damned sweet spot.
Let me start by saying that we used to use GitLab, largely because of the CI, but I didn't have a great experience trying to manage it on top of Kubernetes. They've since introduced a Kubernetes-native package and I've been told it's much easier, but with the deployed omnibus we ran into a lot of issues with runners randomly disconnecting. It became frustrating to the point where I had developers not wanting to use GitLab and finding interesting ways to work around it.
So I set up Jenkins on a dedicated EC2 instance with a large EBS volume for workspace storage and installed the Kubernetes plugin. Then I wrote a Jenkins library package that exposes a function to read a GitLab-style YAML file and generate the appropriate stages, with parallel steps that execute as pods in Kubernetes. It took about a week to get the things we actually used from GitLab CI and their YAML DSL working correctly.
Now we very happily use Jenkins, mostly through YAML, but on the occasions where things are much easier going directly to Groovy to interface with plugins, developers can.
I want to specify this is my own experience and I think a lot of our own issues may have been from mismanaging our self-deployed setup. I’ve had a lot more experience managing Jenkins.
GitLab and their approach to CI (easy to use yaml) really facilitated developers writing CI, which increased our software quality overall.
I'm just getting started using it, but it seems like the solution to scaling up to a lot of Jenkins jobs. There's a good talk about it, and since you're one of only two people in the thread who used the word DSL and you are having a good experience with Jenkins, I thought I'd ask.
My config is similar except my single EC2 node is actually running Kubernetes via kubeadm, it's a single node Kubernetes cluster and has enough room for about 3 of my worker pods to execute concurrently before the rest have to wait.
(But that's just my setup and has nothing to do with Job DSL.)
For me, managing Jenkins via the helm chart has been the best part of the deal, but I'm a pretty big fan of Helm already...
[0]: https://docs.gitlab.com/ee/ci/
[1]: https://gitlab.com/gitlab-org/gitlab-runner
[2]: https://docs.gitlab.com/runner/configuration/runner_autoscal...
[3]: https://about.gitlab.com/2019/04/22/gitlab-11-10-released/#p...
[4]: https://docs.gitlab.com/ee/ci/yaml/
Our hope to simplify away the checkboxing and plugins is a ready-to-go distro (free of course): https://www.cloudbees.com/products/cloudbees-jenkins-distrib...
- Can't customize your git checkout process (e.g. shallow clone with depth, or merging source branch with target branch with certain strategy)
- Can't make job run/not run based on filter on source branch/target branch/etc. of a merge request
- Can't dynamically make certain jobs only run on certain agent machines
So I'm still stuck with Jenkins for now. I know we love bashing Jenkins but I have yet to come across anything that offers the same amount of flexibility.
1) you can customise the git checkout depth and style: https://docs.gitlab.com/ee/ci/yaml/#shallow-cloning https://docs.gitlab.com/ee/ci/yaml/#git-strategy
2) https://docs.gitlab.com/ee/ci/yaml/#onlyexcept-basic
3) https://docs.gitlab.com/ee/ci/yaml/#tags
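Pulling those three pointers together, a hedged .gitlab-ci.yml sketch (the script name and runner tag are placeholders, not anything GitLab prescribes):

```yaml
# Shallow clone, MR-only execution, and runner routing in one job.
variables:
  GIT_DEPTH: "10"          # shallow clone (1)
  GIT_STRATEGY: fetch

test:
  script: ./run-tests.sh   # hypothetical test entrypoint
  only:
    - merge_requests       # run only for merge request pipelines (2)
  tags:
    - linux-docker         # route to runners registered with this tag (3)
```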
I'm not associated with gitlab at all but happy to give pointers if anyone wants to contact me directly
We have an overall CI/CD direction page up at https://about.gitlab.com/direction/cicd which you can drill down into the individual stages plans from. Feedback is always welcome, we love building things in partnership with real users. You can reach me at jason@gitlab.com any time.
I'm not super familiar with Circle CI's SSH debugging, but we do have a feature called "Interactive Web Terminals" https://docs.gitlab.com/ee/ci/interactive_web_terminal/.
This allows you to connect to a terminal in the running job on GitLab CI for debugging. Would love to understand how this does or does not meet your use-case here.
This is not a full standalone mode, because GitLab CI is a built-in solution that cannot be easily separated, but it might work for you.
We set all jobs to manual and then our script triggers them depending on commit message, person, moon phase etc.
But really, this should be in core. It's very hard to do multi-repository stuff, and it's not that easy to do monorepo stuff either. I really need a pipeline within each sub-project, a mother pipeline, and the option to run whichever one I want. GitLab pipelines could be a lot better.
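A minimal sketch of that "everything manual, trigger from a script" pattern (the job name and script are hypothetical):

```yaml
deploy:
  script: ./deploy.sh   # hypothetical
  when: manual          # never auto-runs; an external script plays it
```

The external script can then start selected jobs through the jobs API (`POST /projects/:id/jobs/:job_id/play`), keying off commit message, author, moon phase, and so on.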
Am I misunderstanding what you meant?
While it is easy to bash on an inanimate object, there are some very dedicated and empathetic people who care deeply about the project. Some of those people do this work in their off-hours, and some do this work as part of their daily work activities AND also in their off hours.
In that spirit, we want to make Jenkins better and created a separate group at CloudBees in the Product and Engineering teams late last year. They focus on open source work for Jenkins and on some proprietary things for CloudBees Core (built on Jenkins). We also have a dedicated user experience/product designer who started working on the project a few months ago. One of the first things he and I worked on was creating a curated, tailored version of Jenkins via the CloudBees Jenkins Distribution. This distribution will focus more and more over time on a guided workflow for continuous integration and delivery with Jenkins. These patterns will also be shared with open source Jenkins - some through direct contributions and others through suggestions (better plugin categorization, documentation, etc.).
Please use this comment thread to share your constructive, honest feedback about how we can improve Jenkins.
And can we PLEASE open up the issues tab on the GitHub repos, especially for plugins. There is currently no way to provide feedback or report issues on these plugins because the "issues" tab is disabled. Our current options are:
- Put a comment on the plugin wiki page
- Hunt down the relevant support forum for the plugin
- Find the original source repo and post an issue
None of which is ideal, and none of which directly helps the development of the plugins.
I've been using Jenkins heavily over the past year since a client of ours purchased the enterprise version. It was billed to me as a mature open source product that has been refined over the years by people trying to optimize their dev operations. While I'm sure some people really care about it, the user experience is so utterly disappointing it's almost impossible to imagine how it got to this state without neglect.
Even if you ignore all the complicated stuff, the web UI is embarrassing. While I don't suggest a "pretty" UI is necessary for devops, I would think you'd have a quick win just by having some people re-style the existing UI to make it look and feel like something built this century and not require a dozen clicks to get to important information. There are also bugs that are so painfully obvious it makes me wonder how they still exist. An example: if your GitHub branch has a slash in it (e.g. feature/something), you get a 404 error if you try to navigate to that build's results.
There are also features that appear to have almost no value yet are in the core UI and clearly took some time to build. The weather icons representing various permutations of previous build states is one ridiculous example that comes to mind.
I would respectfully suggest you run through some real world Jenkins experiences like the ones mentioned in the article. Also setting up a new server, configuring non-trivial SCM settings, debugging Jenkinsfiles, etc. To echo the article's sentiment - it feels like I'm constantly fighting with Jenkins to do what I need instead of being guided into a mature set of features.
Conversely - Octopus Deploy is a related product I have been using alongside Jenkins which has been an absolute joy to work with. Everything from initial setup to configuring its agent software on host servers has been straightforward. It has a simple, elegant UI that provides access to important information and actions where you would hope to see them. And most importantly - everything works. I have yet to encounter a bug or experience any broken UI states.
I'm glad to hear CloudBees is making some effort to improve things and I hope PMs like you continue to be involved in the community and solicit feedback, even if it's hard to hear sometimes.
For myself, I'd prioritize -
An in-depth improvement of the Config as Code system as applied to Kubernetes, both for management of the config and of relevant plugins.
Plugin compatibility and management of plugins. It's not... smooth.
Also, the documentation badly needs updates and examples.
(Also, amusingly, we chatted a bit by mail on April 25th 2018, but there was no follow up on your side, I guess priorities changed...)
[1] https://jenkins.io/doc/book/pipeline/syntax/
The much-touted repo integrations (travis, circle...) all have an exclusive focus on build-test-deploy CI of single repos.
But when you have many similar repos (modules) with similar build steps you want to manage, and want to have a couple of pipelines around those, and manage the odd Windows build target, these just give up (it's docker or bust). Sadly, only Jenkins is generic enough, much as it pains me to admit.
Anyone got a sane alternative to jenkins for us poor souls?
On the other hand there is Bamboo from Atlassian. https://www.atlassian.com/software/bamboo
I really don't understand this mentality that there are no better tools, when there are better tools than Jenkins and they've been around for a while.
Bamboo on the other hand is IMO the worst of the commercial CI tools by far and where I work has gone down for us the most. Atlassian itself doesn't appear to be investing in it much anymore judging by the slow pace of development in recent years and at their most recent conference, you can hardly find it mentioned or see much presence for it anywhere.
In all the CI systems I've used though, there has not been one that I haven't encountered some major difficulties with.
Beyond that, anything to do with build automation for a large number of users always quickly becomes a support & maintenance quagmire. Users frequently want to install a new (barely maintained) plugin to solve a problem they have, complex interactions lead to difficult to understand failure modes that require time consuming investigations ("your CI tool is the problem and broke my build" ... "No, your build is broken" ...).
And actually, you can get quite far with the free TeamCity license of three build agents and 100 build configs. I’m also fairly sure that Jetbrains would take kindly to license requests from open-source projects and academia.
* Broken base Ubuntu images being recommended by Atlassian as the default for agent Image configuration, only to be fixed to a usable state months later;
* Being generally years behind other CI tools, even the traditional ones;
* Data exports corrupting themselves despite apparently succeeding, blocking migrations after upgrades or server changes;
* The official documentation somewhere recommending copying password hashes across directly to set up a new admin for a migration, but I can't find this anymore, so they've hopefully improved the process and documentation for this;
* A bug in an earlier version in which a strange combination of error cases in an Elastic Bamboo image configuration could spam EC2 instances in a constant loop, which we thankfully spotted before it ate through our AWS bill;
* No clear messaging from Atlassian about how the future of Bamboo compares to Pipelines. The official line seems to be that Bamboo is for highly custom setups that want to host their CI themselves, but you get the impression from the slow pace of development that they're slowly dropping it in favour of Pipelines. I'd rather they be honest about it if that is the case.
Those are just the problems I can think of from the top of my head, anyway.
Not with pipeline files. I am a total Jenkins noob, but I was able to (relatively) quickly set up a minimal job that automatically pulls config from the relevant GH repo.
That said, pipelines has made a HUGE difference. I still want to migrate but this fixes a large pain point.
(My company switched from JJB to pipelines in the last year and has found it pretty decent.)
I use it for build and test automation and it's been pretty solid.
My biggest gripe about gitlab is that you can't schedule a job in code, and I suppose it's less than ideal to support 3rd party repos in hosted gitlab, but I don't know why you'd not use it as an SCM.
The bigger problem would be using a group job that triggers a bunch of other jobs to do the many-modules type of development you spoke about, but I'd just develop my modules separately and build them in parallel steps if need be.
Or are you looking more for putting the values in the .gitlab-ci.yml itself? This is something we have thought a bit about, but it gets strange with branches and merges where it's not always clear you're doing what the user wants as the different merges happen.
To your second point, you might be interested in some of the primitives we're looking at building next here: https://about.gitlab.com/direction/cicd/#powerful-integrated.... These, in concert, will help with a lot of more complex workflows.
They have an integrated Docker registry as well!
Personally I’ve found that for my past and present use cases, generating any needed steps (e.g. a test matrix) with a script is much more flexible, predictable, and reproducible, since the generated result can be versioned.
I also successfully used various custom runners including baremetal Windows ones and virtualised macOS ones inside VirtualBox.
Since my use case is integration with Gerrit, I poll the updated changes over ssh, and have regex-based triggers which cause “job” launches. A job consists of making a database entry and calling a shell script, then updating the entry upon completion. Since a job is just a shell script, it can kick off other jobs either serially or in parallel, simply using GNU parallel :-)
And voting/review is again just a command so of course is also flexible and can be made much saner than what I had seen done with Jenkins.
So the “job manager” is really the OS - thus killing the “daemon” doesn’t affect the already running jobs - they will update the database as they finish.
The database is SQLite with a foreseen option for Postgres. (I have made diesel optionally work with both in another, two-year-old project which successfully provisioned and managed the event network of about 500 switches.)
Since I also didn’t want the HTTP daemon, the entire web interface is just monitoring, and is purely static files, regenerated upon changes.
Templating for HTML is done via mustache (I also use it in the other project; very happy).
For fun I made (if enabled in config) the daemon reexec itself if mtime of config or the executable changes.
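The "job manager is really the OS" idea above can be sketched in plain shell; the job bodies here are placeholders (a real job would clone, build, and update its database row), but the point is that the launched processes outlive the daemon that started them:

```shell
#!/bin/sh
# Each "job" is just a process; its body is a placeholder here.
job() {
  echo "$1: started"
  # ... real build steps would go here ...
  echo "$1: done"
}

# Launch two jobs as independent OS processes, each with its own log.
job build-a > a.log 2>&1 &
job build-b > b.log 2>&1 &
wait    # the daemon could die here; the jobs would still finish
cat a.log b.log
```

Killing the parent after launch leaves the children running; they simply finish and record their results on their own.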
You can look at the current state of this thing at http://s5ci-dev.myvpp.net and the associated toy gerrit instance at http://testgerrit.myvpp.net
I am doing the first demo of this thing internally this week, and hopefully should be able to open source it.
It’s about 2000 LOC of Rust and compiles using stable.
Is this something that might be of use ?
* Get triggered by a github (enterprise) webhook.
* Work out the project, and clone it into a temporary directory.
* Launch a named docker container, bind-mounting the temporary directory to "/project" inside the image.
* Once the container exits copy everything from "/output" to the host - those are the generated artifacts.
There's a bit of glue to tie commit-hashes to the appropriate output, and a bit of magic to use `rsync` to allow moving output artifacts to the next container in the pipeline, if multiple steps are run.
But in short I'd probably spend more time explaining the system than an experienced devops person would be creating their own version.
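A sketch of that flow in shell, under stated assumptions: the repo URL, image name, and build entrypoint are all placeholders, and DRY_RUN defaults to on so the commands are printed rather than executed:

```shell
#!/bin/sh
# Hypothetical webhook handler body: clone, build in a container,
# copy artifacts out. DRY_RUN=1 (the default here) prints commands
# instead of running them, so no Docker daemon is needed to inspect it.
DRY_RUN=${DRY_RUN:-1}
REPO_URL="https://example.com/org/project.git"   # placeholder
IMAGE="builder-image"                            # placeholder
WORKDIR=$(mktemp -d)

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run git clone --depth 1 "$REPO_URL" "$WORKDIR/project"
run docker run --rm \
  -v "$WORKDIR/project:/project" \
  -v "$WORKDIR/output:/output" \
  "$IMAGE" /project/ci/build.sh
run cp -r "$WORKDIR/output" ./artifacts   # keyed by commit hash in practice
```

Setting DRY_RUN=0 would actually run the clone and container; the per-step rsync hand-off mentioned above would slot in between container runs.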
How many teams do you have? In all seriousness, if you aren't talking at least one team per repo, have you considered a monorepo setup? Aren't you burning time managing those many similar repos with many similar build steps?
That said, even in a monorepo, I still prefer Jenkins compared to cleaner seeming cloud offerings due to its limitless flexibility.
I don't understand what you mean by one team per repo?
I agree, as I keep saying at $WORK: Jenkins is the least-worst system out there.
https://www.urbandictionary.com/define.php?term=TFA
Some of us here have been using terms like RTFA and TFA for twenty years, maybe longer.
HTH.
While I am a conservative christian myself (hah, most of you didn't guess that) I try to make a point out of not getting offended for such things, and if I can do it so can most people :-)
We keep our jenkins jobs version controlled by checking in each job's config.xml into a git repo. In the past I've seen the config.xml files managed by puppet or other config management tools.
This helps us get around the "hard to backup" and "test in production" issues. We can test out jenkins changes in staging, commit those changes to our jenkins repo, and then push up the config.xml file to the production jenkins server when we're ready to deploy.
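For reference, that round trip can be done with the Jenkins remote API, which serves each job's config.xml over HTTP (the host, job name, and credentials below are placeholders; these commands are a sketch, not run here):

```shell
# Pull the live config into the repo:
curl -u user:apitoken -o config.xml \
  "https://jenkins.example.com/job/my-job/config.xml"

# Push the reviewed config back to the production server:
curl -u user:apitoken -X POST -H "Content-Type: application/xml" \
  --data-binary @config.xml \
  "https://jenkins.example.com/job/my-job/config.xml"
```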
https://codefresh.io/continuous-deployment/codefresh-versus-...
https://codefresh.io/continuous-integration/using-codefresh-...
I haven't tried this yet myself but AWS CodePipeline lets you have Jenkins as a stage in the pipeline. You use Jenkins only for the bits you need without the extra plugins. The resulting Jenkins box is supposed to be very lean and avoid the problems you describe.
Cloudbuild on the gcp side has had much better performance
https://github.com/luci/recipes-py/blob/master/doc/user_guid...
Concourse really hits his requirements for ops-friendliness and testability. It's easy to upgrade because the web host and CI workers are completely stateless, and the work you do with Concourse is easy to test because the jobs themselves are all completely self-contained. Because Concourse forces you to factor your builds into inputs, tasks, and outputs, it becomes somewhat straightforward to test your outputs by replacing them with mock outputs.
The main issue with Concourse is that it has a huge learning curve to use effectively. When you have many pipelines, you end up learning the undocumented meta-pipeline pattern to help manage them. You end up implementing stuff like mock outputs yourself, since they're not really a first-class concept. Concepts like per-branch versioning that have been in products like Jenkins for years are only now entering development as "spaces". All of the configuration can be version controlled, but it's all YAML, so to wrangle everything together you end up using something which compiles down to YAML instead. RBAC is much improved in v5 but still leaves much to be desired. There are no manual steps, only manual jobs. Job triggering is relatively slow due to a fundamental design tradeoff: webhooks trigger resource checks, which in turn trigger jobs, rather than triggering jobs directly, to make resources easier to maintain. Nobody really tells you this on the front page.
It has its flaws. But if you delve into it you see very quickly that it's an incredibly solid product. The ability to run one-off jobs from developer workstations on CI workers, and to easily SSH into those CI worker containers from developer workstations, is incredibly helpful for debugging issues. Because everything is stateless, everything tends to "just work". If you go in with your eyes open about its limitations, you'll have a good time.
I hope to integrate Tekton into Drone [2] and allow individual projects to choose their pipeline execution engine. Projects can choose to start with a more basic engine, knowing they have the option to grow into something more powerful (and complex) when they need to.
[1] https://tekton.dev/ [2] https://github.com/drone/drone/issues/2680
https://github.com/helm/charts/tree/master/stable/concourse
https://www.cvedetails.com/vulnerability-list/vendor_id-1586...
Those are just Jenkins core exploits too... there are so many many more for Jenkins plugins.... https://www.cvedetails.com/vulnerability-list/vendor_id-1586...