Brent Laster, author of Jenkins 2: Up and Running, talks about build pipelines on Jenkins 2, a build server that can be used to implement continuous integration and deployment and is more DevOps-friendly than Jenkins 1. Host Robert Blumen talks to Brent about continuous integration and continuous delivery (CI/CD); the role of the build server in CI/CD; build pipelines; DevOps and the “pipeline as code” model; differences between Jenkins and Jenkins 2; the Jenkinsfile; scripted pipelines in a Groovy DSL versus the declarative model; shared libraries of code; Jenkins 2 as a workflow management system; scaling out Jenkins with a pool of compute resources; management of on-demand resources; integration of pipelines with Docker; the Jenkins plugin model; how Jenkins jobs are initiated (scheduled, event-driven, or from the UI); failure modes for pipelines; alerting people or other systems from Jenkins; management of credentials; users, roles, and role-based authorization; and key drivers for adoption of Jenkins within an organization.
Show Notes
Related links
- Jenkins 2: Up and Running by Brent Laster
- Jenkins project web site
- SE-Radio 231: Joshua Suereth and Matthew Farwell on the Scala Build Tool (SBT)
- SE-Radio 221: Jez Humble on continuous delivery
- SE-Radio 211: Rachel Laycock on continuous delivery on Windows
- SE-Radio 198: Wil van der Aalst on workflow management
- SE-Radio 240: Cedric Champeau on the Groovy language
- SE-Radio: multiple shows on DSLs
- SE-Radio 289: James Turnbull on declarative programming
- SE-Radio 268: Kief Morris on infrastructure as code
- SE-Radio 311: Armon Dadgar on Secrets Management
- Guest twitter: https://twitter.com/brentclaster
- Guest email: [email protected]
Transcript
Transcript brought to you by IEEE Software
Robert Blumen 00:00:22 For Software Engineering Radio, this is Robert Blumen. I have with me today Brent Laster. His day job is as a senior manager in R&D at SAS. Brent is also the author of two books, Professional Git and Jenkins 2: Up and Running. I’m going to read the subtitle of that book: Evolve Your Deployment Pipeline for Next Generation Automation. He frequently presents trainings for O’Reilly’s Safari on Git, Jenkins, deployment pipelines, and continuous delivery. He has also been a regular presenter at OSCON, DevOps World | Jenkins World, Rich Web Experience, and other conferences. Brent, welcome to Software Engineering Radio.
Brent Laster 00:01:05 Thanks Robert. I appreciate the invitation to appear on the show.
Robert Blumen 00:01:10 Would you like to add anything else about your bio that I didn’t cover?
Brent Laster 00:01:15 I think that covers it pretty well.
Robert Blumen 00:01:22 Okay. Today we’re going to be talking about Jenkins 2, the subject of your book, and we will link to that book in the show notes. Jenkins is a build server. We’re going to work up to a discussion about Jenkins, but I want to get some background concepts in first. Could you give us a brief review of CI/CD? We’ve covered that on a couple of shows, 221 and 211, so listeners can go back and listen to those, but let’s get a quick summary.
Brent Laster 00:01:50 Sure. I tend to think of CI and CD, these continuous ideas of continuous integration and continuous delivery, as being kind of like an assembly line. Continuous integration is the process that kicks off the assembly line: updating source code in a source management system and having some process like Jenkins that monitors those repositories and then kicks off the rest of the pipeline from there. That feeds into what’s generally called continuous delivery, a larger process of getting your code from source management through testing, building, and all the pieces of the pipeline, and putting it through more and more of what we call continuous testing, broader and broader sets of tests to prove that it is of high quality and that it works well with other pieces of code that have been built through the pipeline, until you get down to continuous deployment.
Robert Blumen 00:02:50 The idea is that you have produced something that is of a high enough quality and works well enough that it could be deployed. Does that mean you deploy it?
Brent Laster 00:03:00 Not every time, but you have proven that it can be deployed. So it goes through the assembly line and you get something out the other side from your source code change that you would feel comfortable deploying to customers if you chose to do so.
Robert Blumen 00:03:13 Could you clarify what is continuous about it as compared to plain old integration and deployment?
Brent Laster 00:03:20 Well, continuous is perhaps an overused term in a lot of cases, but it typically implies a couple of things. It implies that it’s a repeatable process; it’s more of a discipline or a science than an art. It’s not somebody handcrafting processes together. It’s something that can be reproduced: given the same inputs, you get the same outputs. Also, it is expected to be fairly fast. There’s an idea of fail fast in the process. If something is not going to work, you want to find that out quickly, but you also want to be able to produce something that is deployable, or eligible to be deployed, in a fairly quick manner. Some companies will do it as often as multiple times a day, some companies will choose to do it once a day, depending on their needs, but it is a repeatable, fast process to get the source code changes through.
Robert Blumen 00:04:20 In your description, you used a couple of terms, pipeline and assembly line. Drill down into what you mean by pipeline.
Brent Laster 00:04:29 So pipeline can have a lot of meanings, but typically the way I think of it is the set of processes that are chained together, the set of processes for what it would take to get your change from source code to a deliverable product. Those can include processes for compiling and building the code, doing unit tests, functional tests, integration tests, gathering source code metrics, putting the results into a repository with other artifacts, combining them, getting them ready for deployment, putting them into containers, all those kinds of things. So a pipeline, in simplest terms, is just a chain of processes linked together.
Robert Blumen 00:05:13 And those are all things you need to do, every one of them, for a change to be production ready?
Brent Laster 00:05:21 In most cases, yes, although pipelines actually may have some points of human interaction in them. For example, a user acceptance test might be part of the pipeline, where a user wants to go in, try some things out, and make sure things look right in the user interface. So the pieces of the pipeline typically all run automated, but there may be points along the way where it’s incumbent upon a human, or where it’s desirable to have a human, say yes, I want to go ahead and proceed to the next stage.
Robert Blumen 00:05:51 We’re going to now move on more specifically to the Jenkins piece. Jenkins is a build server. What is a build server?
Brent Laster 00:06:01 Well, it’s primarily been used for builds, but it’s really an automation tool. It’s a workflow orchestrator, and that’s perhaps a fancy way of saying that it can automate processes, connect them together, and monitor them. That’s where its real power is: being able to model processes, define them, and link jobs together, and now to write out your pipeline as code, and to be able to monitor it, report problems, and execute it. That can be a lot more than builds. It can be all the pieces of the pipeline: the testing, running other applications, gathering metrics, all the different pieces.
Robert Blumen 00:06:49 I think of build as compiling, or maybe merging a large number of files into some kind of an archive. You’re saying that’s really what it started out as, but the term is much more generic now and can apply to almost any step that you’d need to do to get your code ready for delivery.
Brent Laster 00:07:14 Right. The gentleman who created Jenkins, which was originally called Hudson, was Kohsuke Kawaguchi, and I’m probably saying his name wrong and I apologize for that. He said he got tired of being the guy who broke the build; that was why he originally created it. The idea originally was more of a CI tool, continuous integration: monitoring for changes coming into the source management system and then kicking off builds. But as this idea of continuous delivery has evolved and grown in popularity, Jenkins was a logical tool to take that and implement the other pieces of the pipeline downstream. So today we can really use Jenkins and the functionality it provides to drive any number of other applications and pieces or processes to make up an entire pipeline.
Robert Blumen 00:08:03 You may have answered this already, but I’m going to ask and see if I get something else back from you. What is the role of the build server within the CI/CD process?
Brent Laster 00:08:17 So it depends on what you mean by the build server. Today we would typically talk more about a build application that is run by something like Jenkins. It might be something like Gradle, or in the past Maven, or any number of compilers or applications that actually build the code. The build server, or the build system, is typically where the process takes off and starts after you have code pushed into source control and something like Jenkins has detected it and kicked off the build. The build is the first point where a problem or failure can show up, to make sure things look good. These days it will typically also encompass unit testing, individual tests to make sure that code changes work in isolation: given some inputs to a function, you get the expected outputs. So the build is the first place where you want to validate that your code looks good, typically in isolation, and then you move it on down the pipeline to test it out with the other pieces.
Robert Blumen 00:09:28 Let’s move more specifically into Jenkins. We’ve been talking about general concepts that could apply regardless of the tool. Situate Jenkins in the landscape of comparable tools. What does it do? What is it good at?
Brent Laster 00:09:45 So that’s a good question. Jenkins, as I’ve said, is kind of an orchestration tool; a workflow tool is probably the best way to think about it. It allows you to tie other tools together into the sort of pipelines that you need, or to do monitoring on other things out there. With the Jenkins 2 release, which is what we use to refer to Jenkins 2.0 and beyond, the interface has changed. The interface used to be more of what was called freestyle: you would interact with web forms, you would select things from drop-down lists, you would type things into fields. With Jenkins 2, you can essentially write these pipelines, this orchestration of processes, just as you would a program. You can write it in code, you can even store it with your source code in the repository, and Jenkins can monitor that code as it changes. So this really ties into the whole DevOps idea here, that your infrastructure and your processes can be expressed as code as well, be monitored, and cause a new run of things when something changes out there.
Robert Blumen 00:11:02 That’s a really important point; I want to drill down more into that. I’ve been doing some work within the last couple of weeks with Jenkins 1. For every other server we manage in my DevOps group, the configuration management tool will generate a conf file. If you have an ABC server, it would generate abc.conf using templating. It can function as a programming language: do anything you want to get the right conf, drop it on the server, and then tell the server, hey, reload your conf. With Jenkins 1, the server can modify the configuration file and it’s not under source control, so it’s not DevOps friendly. So it sounds like with Jenkins 2 the goal was to make it more compatible with how DevOps works in modern environments.
Brent Laster 00:11:55 Absolutely. As you said, in the earlier versions of Jenkins, your configuration was typically stored in XML files in the Jenkins home directory, and you had to back that up or use a plugin or something to keep track of changes to it. With Jenkins 2, they have really made writing your pipeline as code, as we call it, being able to express your infrastructure by coding it up in the Jenkins application and storing it in source control, a first-class construct. It’s one of the primary focuses of Jenkins 2, and it really does tie in nicely to the whole DevOps infrastructure-as-code idea, because not only can I code up my pipeline in the Jenkins application itself and test it that way and evaluate it and make sure it all works, I can also store it as what’s called a Jenkinsfile, an external file that has that code in it, put it in source control, point Jenkins at that, and say, watch this, just as you would watch source code for continuous integration. If something changes in that file, go out and automatically run my pipeline based on what’s in that file. So it really does carry Jenkins to where it needs to be in terms of integrating well with the DevOps ideals.
Robert Blumen 00:13:12 You mentioned the Jenkinsfile. Talk about what goes into the Jenkinsfile.
Brent Laster 00:13:18 So a Jenkinsfile is basically a way of storing the code that describes your pipeline, the code you would otherwise enter directly into Jenkins, in an external file. Think of a build script with a certain expected name, like the Gradle build file; the Jenkinsfile is the Jenkins version of that. It’s named Jenkinsfile, and it’s just a text file with the program code to run your pipeline. There are a couple of different formats that you can express your pipeline in: one called scripted syntax and one called declarative syntax. Scripted syntax basically gives you Jenkins DSL steps, DSL being domain-specific language, that are supplied by plugins. Any plugin you put in Jenkins can supply steps that you can use in your script.
Brent Laster 00:14:19 So a scripted pipeline is those steps plus Groovy code; it’s the Groovy programming language, and you can use that in there. The declarative syntax is an alternative way to write your pipeline. It again makes the steps available, but it’s more about declaring what you need instead of doing imperative coding: just saying, I need these tools, I need these applications, those kinds of things, and letting Jenkins figure out how to do it. They’re both pipeline as code, just alternate syntaxes, and some are easier for some people to deal with than others.
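To make the difference concrete, here is a minimal sketch of a Jenkinsfile in each syntax; the repository URL, node label, and build command are placeholder assumptions, not examples taken from the book.

```groovy
// Scripted syntax: ordinary Groovy plus steps supplied by plugins
node('linux') {                       // run on any agent carrying the 'linux' label
    stage('Checkout') {
        git url: 'https://example.com/org/repo.git', branch: 'main'
    }
    stage('Build') {
        sh './gradlew build'          // any shell command; Gradle is just an example
    }
}
```

```groovy
// Declarative syntax: a fixed structure of sections and directives
pipeline {
    agent { label 'linux' }
    stages {
        stage('Build') {
            steps {
                sh './gradlew build'
            }
        }
    }
}
```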
Robert Blumen 00:14:54 We did a show a while back on declarative programming; in DevOps, many DevOps tools are declarative. What is your view on the pros and cons of the scripted Jenkinsfile versus the declarative Jenkinsfile? And which one would you adopt if you were starting a new project?
Brent Laster 00:15:13 So that’s a good question, and I did a lot of consideration and looking at this when I was writing the book and talking with people, and everybody of course has different opinions about it. Jenkins originally started with the scripted pipeline, and in scripted syntax you have more flexibility and more freedom to do the kinds of things you would normally do if you’re a programmer: adding loops into your code, using any valid Groovy syntax, defining variables, those kinds of things. So if you’re a programmer by nature, or you want the flexibility to use programming constructs like loops and variables, then the scripted syntax is the way to go. The declarative syntax came about because CloudBees, who are big contributors in the Jenkins community, wanted to make it easier for people who were coming from the original Jenkins. In the original Jenkins, prior to the 2.0 versions, as I said, you normally filled in web forms; you had fields that you would fill in, or drop-down lists and so on.
Brent Laster 00:16:23 So it was very much a structured process for defining the jobs, and declarative feels more like that. It’s very well structured; you have a well-defined structure to fill in information about what you need. So if you are not a programmer, for example, and you’re coming from using Jenkins before, declarative may feel more comfortable to you. You have to realize, though, that declarative is, in my opinion, a bit more limited in terms of flexibility, because not everything can necessarily be done with all plugins without things like defining variables. The other thing about declarative, though, is that because it is structured, it is much better about checking for syntax errors and identifying where problems occur in your code, because it can check against that structure. It is also a better fit for the new GUI, Blue Ocean, because declarative is very structured, and structured objects map better into a visual interface.
Robert Blumen 00:17:29 You mentioned the ability of plugins to supply different keywords or tasks that are used in the Jenkinsfile. Give examples of some of the more useful or popular plugins.
Brent Laster 00:17:42 Well, certainly you have source management plugins, like the Git plugin, which supplies a git step that you can put into your pipeline; you say git and you can supply a branch or a location for the repository. There are some steps that come with the core, like the shell step, the sh step, to run code in a Unix shell, or the bat step to run things on a Windows machine. But any plugin that you install that is compatible with Jenkins 2 is expected to supply a step that can be used in the pipeline code. So any of the ones I use, like SonarQube for code analysis or Artifactory, will all supply steps that you can use and hook into. There’s a wealth of plugins, and in fact that’s probably why Jenkins is as popular as it is, just because of the huge number of plugins available for it, and now they all should supply steps that you can use in your code.
Robert Blumen 00:18:45 Does Jenkins provide a plugin provider interface so that anyone in the community can create a plugin and it should work?
Brent Laster 00:18:56 I’m sure there are examples out there of how to create plugins and how to work with them. These days, the plugins, again, are expected to provide those kinds of steps. They’re also expected to be reentrant, meaning that if the master server, the master node, goes down, they can survive a restart. So you have to make sure that plugins can serialize their data, that they can write their data to disk and pick up again if the master stops. But within those kinds of boundaries, certainly you can write plugins. Jenkins also supports a feature called global variables, which is kind of like defining an object with methods associated with it that you can add in as well, and you can use shared pipeline libraries to bring things in. So certainly there are lots of opportunities for people to develop new code and new functionality and contribute back.
Robert Blumen 00:19:52 You mentioned this idea of plugins being reentrant. We did an earlier show on workflow management systems, and as you pointed out a few minutes ago, Jenkins is a workflow engine. One of the characteristics is that these jobs, which you call pipelines, have multiple steps, and you could have different failure modes that occur between steps. Is Jenkins able to reliably complete a pipeline in spite of different kinds of failures that might occur, or system restarts, things like that?
Brent Laster 00:20:28 The idea with Jenkins 2, again, is that the system is supposed to be reentrant, or jobs should be reentrant, and that’s dependent upon the plugin’s ability to serialize its data out to disk and be able to pick up again if something happens on the master side. So to that degree, yes, it is reentrant. Now, when you move from configuring jobs in web forms to actually writing your pipeline as code, especially in the scripted syntax, there’s more responsibility on you to handle exception cases. So just like in any programming environment, such as Java or Groovy, if you’re using scripted pipeline syntax you would typically use try-catch functionality to catch exceptions and handle them. You kind of have to do that in the scripted pipeline to get the post-build functionality that you used to get for free in traditional Jenkins, the thing that says, whether my pipeline succeeds or fails, I’ll still send email to notify people.
Brent Laster 00:21:36 So you have to work around that a little bit with those kinds of programming constructs in scripted syntax. In declarative syntax, they’ve done more for this; there actually is a post section directive that you can include in your code, and that will always get executed at the end. So it’s a combination of what the plugins provide in terms of being reentrant and being able to pick up again, but there’s also some responsibility on you as the coder of the pipeline to use the appropriate methods to make sure that you can survive those kinds of cases as well.
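A rough sketch of the two approaches described here, with a placeholder notification address and build command: in scripted syntax you wrap the work in try/catch/finally yourself, while declarative gives you a post section.

```groovy
// Scripted: recreate "always notify" behavior with try/catch/finally
node {
    try {
        stage('Build') {
            sh './gradlew build'
        }
    } catch (err) {
        currentBuild.result = 'FAILURE'   // record the failure
        throw err                         // rethrow so the run is marked failed
    } finally {
        // runs whether the build succeeded or failed, like the old post-build actions
        mail to: '[email protected]',
             subject: "${env.JOB_NAME} finished: ${currentBuild.result ?: 'SUCCESS'}",
             body: "Details: ${env.BUILD_URL}"
    }
}
```

```groovy
// Declarative: the post section covers the same cases directly
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './gradlew build' }
        }
    }
    post {
        always  { echo 'Runs on success or failure' }
        failure { mail to: '[email protected]', subject: "Failed: ${env.JOB_NAME}", body: env.BUILD_URL }
    }
}
```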
Robert Blumen 00:22:13 What’s an example of an exception in a build step that you would want to catch rather than allow the job to fail?
Brent Laster 00:22:23 Well, there are various constructs in Jenkins for flow control, such as retry, which will retry an operation a certain number of times. If it goes past that count and still can’t complete, it throws an exception; that’s how it indicates that it couldn’t complete. So you would need to catch that exception to keep your program from aborting, if you’re using scripted syntax, and handle it gracefully and continue on.
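A small sketch of that retry case in scripted syntax; the test script name is hypothetical.

```groovy
node {
    try {
        // retry throws an exception if every attempt fails
        retry(3) {
            sh './run-flaky-integration-tests.sh'   // hypothetical script
        }
    } catch (err) {
        echo "Still failing after 3 attempts: ${err}"
        currentBuild.result = 'UNSTABLE'            // handle it instead of letting the run abort
    }
}
```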
Robert Blumen 00:22:57 What about if we have a step that is compiling a program and the compilation fails due to a syntax error in the program? That seems to me a pretty normal thing, and you’d want to handle that rather than have the entire pipeline go into a failed state.
Brent Laster 00:23:16 Well, if you have a syntax error, chances are you want to detect that and alert the person who did it, or somehow alert people to it, because the pipeline can’t proceed down the further steps of testing the code and doing all the other steps if the code doesn’t compile in the first place. Certainly you could catch that and work around it there too, but more normally you would want to at least flag it and notify people about it so they could deal with it quickly.
Robert Blumen 00:23:54 In that case, this would be a common thing: not only compilation failures, but other types of conditions where you want to notify someone. Are there plugins that Jenkins uses to communicate with the development team through different channels, like Slack or email?
Brent Laster 00:24:12 Yeah, absolutely. In fact, there’s a chapter in the book that I wrote just about notifications and reporting, and we have examples in there of using things like Slack and other messaging tools to communicate status back. For pretty much any kind of application used in software development or producing software these days, there’s usually a plugin for Jenkins, and the messaging and social media tools are no exception. So certainly you can have notifications back to Slack channels, you can have colors, you can define all kinds of criteria. Of course, the most common one, or arguably the most common one, is still email notifications. You certainly can send email notifications, and there are all kinds of advanced setups for those. So again, you just have to find the plugin that you need and you can pretty much tie into any of those applications.
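For example, assuming the Slack Notification plugin is installed and configured, a failure notification could slot into the post section of a declarative pipeline like the one sketched earlier; the channel and address here are placeholders.

```groovy
post {
    failure {
        // mail is a built-in step; slackSend comes from the Slack Notification plugin
        mail to: '[email protected]',
             subject: "FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
             body: "See ${env.BUILD_URL}"
        slackSend channel: '#builds',
                  color: 'danger',
                  message: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER} (${env.BUILD_URL})"
    }
}
```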
Robert Blumen 00:25:09 I want to move on now and talk more about scaling and distributing Jenkins. I can download it, install it, and run it on a single server. How many Jenkins jobs can I support on one server?
Brent Laster 00:25:25 You know, I don’t actually know what the limits are; I would say it’s probably dependent on your infrastructure. Certainly I’ve seen a ton of jobs, and that’s not an exact number of course, but I think it just depends on the infrastructure. You can set up different nodes, and one of the things to think about with Jenkins is the master. The master is not really intended to do a lot of heavy lifting. It’s more about managing the processes, managing the jobs, and having access to things; it really has what they call a lightweight executor. So typically you’re trying to farm work out to nodes, and you can define nodes where, for example, you might have a group of nodes that has a label, as they call it, an identifier that says these are our Windows nodes, or these are our Unix nodes over here, or maybe these are on the East coast or the West coast.
Brent Laster 00:26:27 And then in your pipeline you can select a particular node you want to send work to, and that’s a big advantage as well. The other thing I’ll say about Jenkins 2 is that they’ve done a lot of integration for containers in particular. For example, you can have a Dockerfile that describes how to build a container, give that to Jenkins, and it can spin up an agent, a virtual node, based off of that. So there’s a lot of capability within your pipeline code to fire up containers and run processes on them, which gives you a lot of flexibility in distributing workloads that way.
Robert Blumen 00:27:07 I think you have been talking about Jenkins as a distributed system. Could you give an overview? What is the architecture of the distributed system? We’re talking about nodes, so we have the server, which is the master, and then a fleet of nodes?
Brent Laster 00:27:26 Right. So there are a couple of terms that are used in Jenkins. There’s the master system, and then if you’re working with declarative pipelines they talk about agents, and if you’re working with scripted pipelines they talk about nodes. For all intents and purposes, you can think of nodes and agents as the same thing. The idea is that I have a master Jenkins system, but I farm the work out, the various jobs, whether it’s a traditional Jenkins job or a pipeline job, to these nodes or agents. An agent can be anything from a Docker container to an actual bare-metal system, whatever makes sense. It could be something across the country, it could be whatever.
Brent Laster 00:28:21 So the real distributed power of Jenkins comes in being able to farm these jobs out and send them to the most appropriate system to run them on. And it’s very easy within Jenkins because of this idea of a label. A label is just an identifier that you can attach to any of these nodes, and you can have the same label on multiple nodes. So, like I said, if you had machines that run Windows, you might have a farm of systems all labeled as windows, and you could just say, send this thing to a windows node, and it would send it automatically. Then the job runs and the results come back to the master, so you can see the output there. That sort of structure makes it actually very easy to send work out and distribute the load.
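A quick sketch of targeting labeled nodes in each syntax; the label and commands are placeholders.

```groovy
// Scripted: run this block on any agent carrying the 'windows' label
node('windows') {
    bat 'gradlew.bat test'
}

// Declarative: the agent directive expresses the same thing
pipeline {
    agent { label 'windows' }
    stages {
        stage('Test') {
            steps { bat 'gradlew.bat test' }
        }
    }
}
```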
Robert Blumen 00:29:11 If we have a number of Windows nodes and Jenkins needs to send a job, is it going to pick one that presently is not too heavily loaded and send the new work there?
Brent Laster 00:29:23 Yeah, it’ll pick one that’s available. I honestly don’t know exactly what the algorithm is in terms of whether it’s looking at the loads; I’m sure there are probably ways you can tune that, but I’m not sure how it’s implemented in the underlying setup.
Robert Blumen 00:29:39 Are you aware if it has integration with cloud provider services, like autoscaling groups, where it can spin up new capacity in line with a workload and tear it down as the workload declines?
Brent Laster 00:29:56 I’m not that familiar with that. I don’t know exactly what it can do there, but it might have a way to do that.
Robert Blumen 00:30:07 What is a typical number of nodes or agents that a master would have? Or not a typical number, but what’s the range of sizes you’ve seen in deployments?
Brent Laster 00:30:21 Well, certainly I know where I work we have quite a few systems, probably on the order of a hundred or so masters, and then nodes off of those. It really varies, and I don’t know the best way to answer that; I think it varies from situation to situation, anywhere from a single node to setups that probably have a hundred or so nodes sitting out there running lots of stuff. The other factor that plays into this, and one that is more common these days, is this sort of ephemeral node. There’s a cloud plugin for Jenkins where you can spin up a Docker container to run things on demand, and then when the job finishes, that goes away. Or you can spin something up in AWS to run a Jenkins job, and when you’re done it can go away. So it’s not even necessarily about having a constant number of nodes always available anymore; it can be more on demand. I spin something up with Docker or Kubernetes or one of those things to run the jobs when I need it, and then when I’m done it spins back down and goes away.
Robert Blumen 00:31:43 You just answered my earlier question about autoscaling groups. What I was really getting at was the ability to have job execution resources on demand, rather than a fixed number of nodes that may be idle half the time.
Brent Laster 00:32:00 Sure, sure. And that’s one of the nice things. I’ve got a section in the book about working with containers, and that’s one of the things we talk about. It’s very simple in Jenkins: you can have things like the cloud provider plugin that goes out and spins these up, or you can even, in your pipeline code, just say, hey, I’m going to spin up a Docker container for this, run my code in it, and then bring it back down. In fact, there is what they call a global variable, which is kind of like a step in the code: there’s a docker global variable with an inside method you can call in your pipeline code that will get an image down for you, automatically start it up, and map your workspace in Jenkins as a volume in the Docker container.
Brent Laster 00:32:50 The inside method has a beginning and an end to it, a kind of block with a scope, and within that scope you can call the shell steps, the sh steps, and any steps that you run in that block will automatically be executed in the Docker container instead of on the Jenkins system itself. Then, when you’re done, the step will spin down the container and just make a little record that says it was used. So just by calling one method on this docker global variable, you get all of that functionality for free.
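A minimal sketch of the docker global variable described here, assuming the Docker Pipeline plugin is installed and Docker is available on the agent; the image and command are placeholders.

```groovy
node {
    checkout scm                       // get the source onto the agent
    // inside() pulls and starts the image, mounts the workspace as a volume,
    // runs the enclosed steps in the container, then tears it down
    docker.image('maven:3-jdk-8').inside {
        sh 'mvn -B clean verify'       // executed inside the container, not on the agent itself
    }
}
```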
Robert Blumen 00:33:27 The cloud providers mostly have different ways of offering Docker container execution as a service. If you’re able to integrate with that, then you don’t need to host your own execution infrastructure at all. Right. I want to move on now and talk about how jobs are initiated. What are the different ways that a job or a pipeline could get started?
Brent Laster 00:33:53 Well, typically, if you’re starting with the continuous integration sort of functionality at the front of the pipeline, you’re doing it in a couple of ways. For example, you might be doing what we call polling: polling the source management system, which says every minute, every five minutes, or however often it is, go out and ask the source management system, do you have anything new for me? Another one could be that periodically, regardless of whether or not there’s something new out there, you just go out and say, I’m going to build every hour, or every 15 minutes, however often that is. There’s a syntax for that, which harkens back to the old cron syntax if you know it, to specify the day, the hour, the week, that sort of thing, for either polling or for just periodically building. There are a couple of other ways you can do it.
Brent Laster 00:34:53 There is a tool called Gerrit, which is designed to do code reviews, but you can also have it kick off builds. It basically works with Git, catches your code before it gets all the way into Git, and says, let me do things like code reviews and verify changes; it can communicate with Jenkins and kick off test builds from that. And finally, there is an approach using webhooks. If you have code on GitHub, for example, you can define a GitHub webhook, which will send a notification, they call it a payload, over to your Jenkins system when something happens on GitHub. If somebody makes a change or updates a file, whatever the case may be, it will do a kind of push notification and send it across to your Jenkins system. Of course, that one implies your Jenkins system has to be accessible, has to have a port open that GitHub can communicate with. So there are a couple of different ways to kick off the builds.
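In a declarative Jenkinsfile, the polling and scheduled cases look roughly like the sketch below; the Gerrit and GitHub webhook paths are configured on those systems and in the job itself rather than in the triggers block. The schedules and build command are placeholders.

```groovy
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')   // ask the SCM for changes roughly every five minutes
        // cron('H 2 * * *')     // or build on a schedule regardless of changes
    }
    stages {
        stage('Build') {
            steps { sh './gradlew build' }
        }
    }
}
```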
Robert Blumen 00:36:00 You said something I want to drill down into a bit more. Typically developers changing systems will create a branch in Git. Did I understand that we could have Jenkins run, let’s say, a number of tests and manual steps like a code review on the branch before it gets merged? So I don’t want to deploy it, but I do want to have different kinds of tests passing before I consider it ready to merge.
Brent Laster 00:36:30 So that’s more a function of the pipeline. I was referencing an application called Gerrit, which is a separate application that you can install and have in your pipeline. It allows you to do code reviews and also integrates with Jenkins to do pre-builds, and you can use that as one of the applications with the Jenkins pipeline. There are also other applications; GitHub has code review functionality, and GitLab does as well. A lot of tools have code review and those sorts of things built in. Those kinds of applications let you look at the code before it gets all the way into the source management system. Once it gets into the source management system and the pipeline pulls the code out, then part of the typical continuous delivery pipeline is running the integration testing, the functional testing, and all those successive levels of testing to prove the quality of the code and prove that it works.
Brent Laster 00:37:36 So you’ve got that covered both before the pipeline and while it’s in the pipeline.
Robert Blumen 00:37:42 So, as I understand it, I might define a Jenkins pipeline, let’s say I call it developer pre-commit, and that pipeline consists of compile the code, run some integration tests, maybe run some unit tests, produce a report, but that pipeline would not include a deployment step. And it’s up to me to use Gerrit or some other tool to say when I want to run that Jenkins pipeline, and then I would wire that pipeline up to run on branches when the developer is ready to merge. Is that more or less it?
Brent Laster 00:38:20 Essentially, yes. If you had Gerrit out there, Gerrit has a plugin that integrates with Jenkins. So you would probably define a Jenkins job that says, when I put code into Gerrit, before it gets all the way into the Git remote repository, run what we might call a review job or a small review task in Jenkins. It may just say, when something gets into Gerrit, let me try to build it and report back on it. And then once that’s successful, you could go ahead and say, merge this in, the code has passed the basic sanity check, the review build, go ahead and merge it in. And then you could have your other pipeline, as you’re talking about, kicked off based on that code being merged, and that will kick off the rest of the downstream processes.
Robert Blumen 00:39:14 I want to talk about some security aspects. Many of these things that Jenkins needs to do, like pulling Docker images from a registry or interacting with cloud providers, are going to require credentials. How does Jenkins manage the credentials it needs?
Brent Laster 00:39:36 So Jenkins has a whole, for lack of a better word, ecosystem around credentials built in. You can define credential providers, you can define credential stores, and you can define all kinds of credentials: SSH keys, certificates, usernames and passwords, those kinds of things. So there is a whole credentials system built into Jenkins; in fact, I devoted a whole chapter in the book to the security aspects and credentials. The basic idea is that you define the credentials that you want to use in Jenkins, and then within your pipeline you have steps available that can use those credentials. And it’s kind of cool what they typically do: there’s a step, withCredentials, where you can say, take the credentials I’ve identified in Jenkins, take the values, and put them into environment variables.
Brent Laster 00:40:37 So you give the environment variables names, and then you can use those environment variables in the rest of your code without having to expose the values. You’re telling Jenkins, go out and grab those values out of the credentials, put them into these variables, and I’ll use the variable names in my code. So you can reference them without ever having them exposed. Or in the case of something like SSH keys, you can just say, with this SSH key, and you have a block of code, and as long as you’re doing something within that block, it has access to those credentials.
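A sketch of that environment-variable pattern, assuming a username/password credential with the ID 'registry-creds' has already been defined in Jenkins; the registry host is a placeholder.

```groovy
node {
    withCredentials([usernamePassword(credentialsId: 'registry-creds',
                                      usernameVariable: 'REG_USER',
                                      passwordVariable: 'REG_PASS')]) {
        // the secret values exist only as environment variables inside this block,
        // and Jenkins masks them if they appear in the console log
        sh 'docker login -u "$REG_USER" -p "$REG_PASS" registry.example.com'
    }
}
```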
Robert Blumen 00:41:08 We did a show on HashiCorp Vault, and I’m aware that cloud providers have some similar services. Are you saying then that Jenkins has integration with services like this? It can communicate and say, I need the certificate to get this resource from this other place, give it to me now.
Brent Laster 00:41:31 Yeah. In Jenkins, again, you define the credentials up front that you want, and then it’s very easy to access them. And I actually did a little section in the book, in the credentials chapter, where I talked about some very simple integration with Vault; it has probably gotten more sophisticated now, but I did talk about driving some of the Vault functionality from there. So certainly, as these new products and new features come out, there are more and more plugins to integrate with them. But the credentials system works pretty well in Jenkins.
Robert Blumen 00:42:06 If you are running your own nodes as servers, the master has to get access to them, maybe over SSH. How does the master authenticate itself to do work on one of the nodes or agents?
Brent Laster 00:42:23 So there are a couple of different ways. Most commonly it’s probably done via SSH, having SSH keys between the master and the node. There are some other ways; with some of the more traditional ones, like JNLP, you can start up nodes that way, and you can authenticate in various ways. But the most common way that I’ve seen it done is through SSH keys.
Robert Blumen 00:42:49 Does Jenkins implement users, and is there a permissioning system for users?
Brent Laster 00:42:56 Absolutely. By default, Jenkins 2.0 is now secured from the start. You go through the installation process and it generates a random password, and you have to go and find that password and put it in to finish installing it and set up the initial user. But there is a whole system around users and user permissions; again, it’s covered in that section on security. You can define various security models. There’s a role-based access plugin you can get to define it based on roles. There’s also the typical Jenkins matrix setup, where you can define users and use the matrix of rows and columns of permissions to check off what permissions people have. And it’s actually kind of cool, since we’re on the topic of users and security here: one of the things they’ve done in Jenkins 2 is, if you think about it, when you are writing a pipeline script, you can call any kind of methods you want.
Brent Laster 00:44:04 You can code any kind of thing you want in your script, but that doesn’t necessarily mean you should be allowed to do it. So one of the features they have is called in-process script approval. Basically, if you’re an administrator, you can run any kind of script you want; they assume you know what you’re doing. If you’re not an administrator, then what Jenkins will do, if you’re using a thing called the sandbox, is look at what methods you are calling in your pipeline code and whether those methods are allowed. It keeps a whitelist that says whether a method is okay to use. If it’s not one of the allowed methods, then it will actually create a queue and send something to an administrator; it says you can’t run your script until an administrator looks at this and approves it. There’s a little in-process script approval page where an administrator can log in and say, okay, this script is okay, or this method call is okay, or it’s not okay, or it’s okay if the user has permission to run it. So they thought ahead to that, at least. It’s not just that you can code up anything and run it; it’s actually tied into the permission system and into the allowed list to make sure that it’s okay to do.
Robert Blumen 00:45:25 We were talking a little while back about how you could have GitHub, when a new commit comes in, launch a job through a webhook to test the code in that commit. Does the GitHub webhook authenticate as a specific user that has permission to execute that job?
Brent Laster 00:45:45 I’d have to go back and look at exactly what it does there, but I certainly think that you have to have authentication set up for the webhook. I don’t remember exactly how that works through the process.
Robert Blumen 00:45:58 Let’s move on now and talk about deployment. I’m roughly going to define that as moving your code to run on a staging or production network, and that might involve restarting parts of the system. Is that something I would use Jenkins for?
Brent Laster 00:46:18 Well, deployment can mean so many things these days. As you said, it’s roughly putting it out there so it’s available to use, but it could mean deploying it out to a cloud system, it could mean making it available on a website, or it could just be putting the executables out somewhere. So depending on the application, and if you have a plugin for it, certainly you could drive that through Jenkins. If you have a process that can be automated to deploy your stuff, there is no reason why you couldn’t use Jenkins to do it.
Robert Blumen 00:46:55 The deployment, then, might be a shell script, or there are tools like Capistrano that give you the ability to define scripts that will move your code. With Jenkins, Jenkins doesn’t necessarily know it’s doing deployment; it’s doing a job, and you define that job to do deployment steps.
Brent Laster 00:47:13 Exactly right. I mean, there are plugins, I think, for Capistrano and different things. And even if it’s just deploying into a Tomcat system or something like that, it’s fairly easy to have Jenkins do that; any kind of command that you can execute should be able to be done by Jenkins. A lot of times these days, too, it might even be deploying into a container, creating a container and putting it out there. I’ve had jobs that I’ve created that spin up Docker containers, or create a war file, take the war file, put it into Tomcat, build a Docker image and a Docker container off of that, and link containers together. So really, anything that you can drive, particularly through some kind of a command-line interface or a plugin interface, can be used.
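As a rough sketch of that kind of job, assuming the Docker Pipeline plugin and a registry credential named 'registry-creds'; the image name, registry, and build command are placeholders.

```groovy
node {
    checkout scm
    stage('Build war') {
        sh './gradlew war'
    }
    stage('Build and push image') {
        // docker.build uses the Dockerfile in the workspace to package the war
        def image = docker.build("example/myapp:${env.BUILD_NUMBER}")
        docker.withRegistry('https://registry.example.com', 'registry-creds') {
            image.push()
        }
    }
}
```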
Robert Blumen 00:48:03 I have a show coming up more specifically about deployment. It’s an interesting process in that you can have a lot of failure modes, and you have to decide, or maybe set a policy, whether you roll back or roll forward. Is that something you’d want to delegate to Jenkins?
Brent Laster 00:48:25 I guess it kind of depends on what your criteria are. One of my favorite quotes about deployment, and I’m drawing a blank on exactly who said it, is basically that continuous deployment doesn’t mean that you always deploy every release; it means that you have proven that it can be deployed. Certainly there are things like Kubernetes now that do the clustering technology and rolling out versions and make it easier to roll back, so if you have a plugin or something for that, you could probably make it work. But I think one of the ideas is, if you’re using deployment schemes like blue-green deployment, those kinds of things, it’s probably going to be more of a manual flip-the-switch decision: is it really good enough, is it ready or not? If you’re just rolling it out to make it available without putting it into production, then certainly you can do that; there’s no reason you couldn’t drive it with Jenkins to automatically deploy it. And certainly some companies do that, they always just take the latest and deploy it, but you might also want to have a human gating that decision as well.
Robert Blumen 00:49:37 This isn’t really a Jenkins point, but I’m aware there are organizations that are doing hundreds or thousands of deploys per day. At that scale, you’d probably want a computer making a lot of the decisions and not have a person have to do it all.
Brent Laster 00:49:54 Absolutely. If you’re one of those organizations where you are just always putting out the latest, certainly you would want to automate that, and that would certainly be something you could drive through Jenkins. Why not?
Robert Blumen 00:50:05 Let’s spend a little time on the implementation of Jenkins itself. What language is it implemented in?
Brent Laster 00:50:12 I don’t honestly know. I think it is probably Java, but I’m not entirely sure.
Robert Blumen Is it open source?
Brent Laster There is an open source version of it, certainly. There’s also an enterprise version of it, but there is open source as well.
Robert Blumen 00:50:28 Does it have a corporate sponsor that contributes most of the package?
Brent Laster 00:50:31 Yeah, there’s a company called CloudBees. The gentleman who originally created Jenkins is one of the heads of CloudBees, and they do enterprise support for Jenkins. But the really nice thing about the relationship with CloudBees is that they are part of the Jenkins community; they are a big part of it, and the vast majority of what they work on gets contributed back to the open source version. So it’s not just a company doing its own thing; they are part of the open source community, engaged with other people, answering questions, those kinds of things. So it’s a cool sort of relationship in that they’re giving a lot back while they’re also supporting the enterprise aspects of it.
Robert Blumen 00:51:23 Let’s move on now, and this will be our last main topic: adoption. Suppose we’re in an environment which does not have a build server, meaning that all these tasks would be done manually, or maybe with ad hoc or custom tools. Where does the advocacy for adopting Jenkins usually come from?
Brent Laster 00:51:46 You know, honestly, I think it probably comes organically through a lot of organizations. I typically have seen these sorts of things come from a group who says, we have to do something, we have to find a way to do something better, we have to be more efficient, we’re spending too much time on this. And they start looking for an answer, and they start looking at what is considered, quote unquote, industry standard, or what a lot of people use. So they’ll get something and try it, and if they try it and it works, then that message gets communicated via word of mouth, or by experience, to other groups. Developers are very smart; they’re going to find the best way so they can spend most of their time coding and not have to worry as much about these pieces of it.
Brent Laster 00:52:36 And that’s part of what the DevOps movement is, too. I think when you get operations people working with development, they’re both looking for the best solution, and something like Jenkins provides this kind of workflow automation. Once people start to see what it can do, they start to think about other ideas and other opportunities and things they can use it for. The other way I think it comes in is, as I said, there’s a huge ecosystem of plugins. Pretty much any of the major applications these days, even open source ones, that do tooling kinds of things will have a plugin for Jenkins. So a lot of the time you start looking at a tool like Git or something and you see, oh, there’s a plugin for Jenkins, and wow, it would be cool if I could have my builds automatically kicked off, or have Jenkins detect changes in my source control and kick off the builds. So I think all those factors play into it. It’s more of a bottom-up kind of advocacy that happens, maybe, than a top-down one.
Robert Blumen 00:53:42 Some DevOps tools, and I’m not sure whether it’s fair to group Jenkins as a DevOps tool, are specialized enough that you mostly find skills for writing, for example, Puppet or Chef scripts in the DevOps team, versus other skills like shell scripting that you’d expect everyone could do as needed. For the Jenkinsfiles, do you typically have a DevOps or a build-and-release engineer who’s mastered the Jenkinsfile language, or is it something where everyone is expected to write their own Jenkinsfiles for their modules?
Brent Laster 00:54:20 You know, I think it’s probably going to vary by organization. But the first thing I would say is that Jenkins is very much a DevOps tool. Their conferences and such now are geared towards that whole DevOps model, and I think it does provide the sort of DevOps functionality that’s been missing for a while. But I have seen, even where I work, that developers have been willing to jump in and write Jenkins jobs and work with that. The nice thing is, as I said, if you’re a programmer, writing your pipeline as code, programming your pipeline, probably feels pretty natural to you: being able to use constructs and steps, write code, test it, and run it probably feels more natural to you now than filling in the web forms did before.
Brent Laster 00:55:10 So I think pipelines as code has opened up more doors to using Jenkins. And what I’ve seen is that it’s really not been the case that people are hesitant to use it or push it off to other people. Now you may have a group, of course, that wants to have standard ways of doing things, creating what we call shared pipeline libraries and standard approaches, but I think it’s a very low cost of entry for anybody who wants to work with it.
Robert Blumen 00:55:45 I could see this spanning across the concerns of many different groups, because the developer generally knows how to compile, build, and package their application, but the DevOps team would have a better idea of how to get this pushed out onto the production network and which machines this piece of code needs to run on. So it’s kind of a cross-cutting thing.
Brent Laster 00:56:11 Yeah, absolutely. And there are certainly other tools out there, like Gradle, the Gradle build system, which can run test cases and those sorts of things; you can define targets, or tasks as they call them in Gradle, and invoke those from Jenkins. So you can actually have your testers writing tests, and people writing deployment scripts and such, and then Jenkins just becomes the orchestrator at the overall, higher level. So as long as you can say, this is how you invoke my part of the process, then you can translate that into a Jenkins pipeline easily.
Robert Blumen 00:56:47 We’ve covered pretty much everything I wanted to about Jenkins. Is there anything you wanted to mention or want us to cover that we haven’t hit on?
Brent Laster 00:56:56 I guess I would just really quickly mention that there is a new graphical interface they’re working on called Blue Ocean. It’s still a little bit rough in some spots, but I think in the future it’s going to provide a nice sort of graphical interface for people who aren’t as comfortable diving into writing pipelines as code. There’s also the concept of shared pipeline libraries, which I’ve touched on just briefly: the idea of being able to take your code and put things in there that you want to share across groups. It’s basically a source management repository with a certain structure; you put your code in there, and then there are ways to bring it into the Jenkins pipeline and use it. So that’s a very nice thing to have. And I would just say, I think it’s really worthwhile for anybody who’s looking to do continuous delivery. I don’t work for CloudBees, and I’m not getting anything from this, but I’d say from my experience Jenkins is a very easy tool to use. It has a lot of flexibility now, and I really feel like, with the whole DevOps movement, it provides a lot of opportunities for groups to migrate towards that.
Robert Blumen 00:58:14 I’m glad you brought up the point about shared libraries. That was something I had wanted to ask you about. One thing I’ve seen in Jenkins 1 is absolutely no modularity, so tons of copy-and-pasted jobs, and you’ll see a job XYZ, job XYZ-1, job XYZ-2, because somebody wanted to run it slightly differently. The idea that the Jenkins jobs are now going to be code is great, because now we think more like programmers. So of course the first thing I want to do as a programmer is ask, can I create a module or a function or something I can call from different places and have it run the same way, so I’m not copying and pasting? It sounds like they’ve solved that problem, or are going in that direction.
Brent Laster 00:59:02 Yeah, I think the shared pipeline libraries are a nice way to do that. Of course, sometimes you can parameterize things and that sort of thing, but if you want to have common code, there is a shared pipeline library structure. There are essentially three parts to it, very quickly. There is a resources area; basically you’re just defining a structure, and resources is for data files, JSON files, any kind of data files you want. Then there is a source area, src, which is more like the Java source path; that gets added to your classpath automatically when you’re using the library. And then there’s one called vars, for the global variables, and that can be pretty much any code, defined in a Groovy file with an object and methods and such on it.
Brent Laster 00:59:55 The vars area is probably the easiest to use. You can take that, create that structure, put it up in a source management repository, and configure it in Jenkins: say, Jenkins, this is where my shared pipeline library is, I’m going to call it foo, I want you to get it out of this branch, use this version. And then once you define it globally in Jenkins, your script can bring it in. There are a couple of different ways: there’s an @Library annotation you can bring it in with, and in declarative there’s another way to bring it in as well. And the cool thing about that, as well: I mentioned the declarative and the scripted syntax, and I said there were some things you could do in the scripted syntax, like loops and variables, that you couldn’t do in declarative. You can always create your pipeline library in scripted syntax and still load it into declarative and get that functionality. So you can get kind of a hybrid approach, the best of both worlds.
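A minimal sketch of the vars approach, with hypothetical names: a file vars/buildAndTest.groovy in the library repository defines a step, and a Jenkinsfile loads the library (configured globally in Jenkins under the assumed name 'mylib') and calls it.

```groovy
// vars/buildAndTest.groovy in the shared library repository
def call(String buildCmd = './gradlew build') {
    node {
        checkout scm
        stage('Build') {
            sh buildCmd
        }
    }
}
```

```groovy
// Jenkinsfile in an application repository
@Library('mylib') _
buildAndTest('./gradlew clean build')
```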
Robert Blumen 01:00:52 Often the case with declarative programming: 5% of the time, you just have to write a little bit of imperative code.
Brent Laster 01:01:00 Yes, absolutely. And there are a couple of ways to work around that. There actually is also a script block that you can put in your declarative pipeline to use some localized scripted syntax, but the pipeline libraries are probably your best bet in that case.
Robert Blumen 01:01:16 Let’s move on to wrapping up the show. For our listeners who would like to find your book, where can they obtain it?
Brent Laster 01:01:22 Probably the easiest way is good old Amazon. If you go out there and search for Jenkins 2: Up and Running, you should find it. I understand they’ve been running kind of low on copies lately, but they’re getting some more in. So just check it out; you can search by my name or by Jenkins 2 on Amazon.
Robert Blumen 01:01:40 If any listeners would like to contact you or follow you, where can they find you?
Brent Laster 01:01:45 I’m at @BrentCLaster on Twitter, that’s B-R-E-N-T-C-L-A-S-T-E-R. I’m also on LinkedIn under Brent Laster. So feel free to reach out to me; I’m always happy to talk to people about topics like Jenkins or Git or continuous delivery.
Robert Blumen 01:02:05 Brent, thank you very much for speaking to Software Engineering Radio.
Brent Laster 01:02:08 Thanks, Robert. I’ve enjoyed it. I appreciate you having me on the show.
Robert Blumen 01:02:12 For Software Engineering Radio, this has been Robert Blumen. Thanks for listening to the SE Radio educational program.
[End of Audio]
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected].