Adam Frank

SE Radio 585: Adam Frank on Continuous Delivery vs Continuous Deployment

Adam Frank, SVP of Product and Marketing at Armory.io, speaks with SE Radio’s Kanchan Shringi about continuous integration, continuous delivery, and continuous deployment – and how they differ. Frank suggests that organizations begin by identifying how the CI/CD process aligns best with their unique goals, noting that such goals might be different for B2C versus B2B SaaS (software as a service). They also discuss how the process can differ for monoliths compared to microservices-based products. Finally, they talk about continuous deployment as a service and some unique aspects of Armory’s approach.



Show Notes

Related Episodes

Other References

Transcript

Transcript brought to you by IEEE Software magazine and IEEE Computer Society.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.

Kanchan Shringi 00:00:18 Hi all. Welcome to this episode of Software Engineering Radio. Our guest today is Adam Frank. Adam is SVP Product and Marketing at Armory.io. Armory is focused on providing solutions for continuous deployment at any scale for all developers. Adam has over two decades of software development experience. He’s focused on delivering products and strategies that help companies attain optimal business agility. Welcome to this show, Adam. It’s great to have you here. Is there anything you’d like to add to your bio before we get started?

Adam Frank 00:00:51 That was a fantastic introduction. Thank you for having me. I’m very excited about being here. Optimal business agility. Well that does sound nice there at the end, huh?

Kanchan Shringi 00:00:59 Before we start, I’d like to point our listeners to a few related episodes that we have done in the past. These are Episode 498, James Socol on Continuous Integration and Delivery; Episode 567, Dave Cross on GitHub Actions; and lastly, Episode 338, Brent Laster on Jenkins 2 Build Server. So Adam, can you help us build on Episode 498 by recapping for our listeners what exactly continuous integration, continuous delivery, and continuous deployment are, and what the CD in CI/CD stands for? Is it continuous delivery or continuous deployment?

Adam Frank 00:01:42 Well, that’s an ongoing debate these days, I would say, between continuous delivery and continuous deployment. But let’s start with continuous integration. First of all, that’s been around for quite some time and is absolutely a practice that everybody needs in their software development. Continuous integration is really about producing that artifact: going through automated tests and ending up with an artifact that is up to the standard you believe it should be, in a really automated fashion. Continuous delivery, in a nutshell, is focused on automating the process to have a release-ready artifact; with a manual approval in place, you can then deploy that artifact out to your runtime environments. Continuous deployment, the other CD in CI/CD, is really about automating that last step. So when developers are committing their code, there’s an artifact being created out of their continuous integration process, and that artifact is being deployed out to their runtime environments without any manual approval. Now of course with continuous deployment, there are a whole bunch of safeguards and practices that I’m sure we’ll get into, but at the end of the day, it’s really about automating that deployment of the code out to the runtime environments.

Kanchan Shringi 00:03:08 So the big difference between continuous delivery and continuous deployment is the approval needed?

Adam Frank 00:03:15 Yeah, the difference between continuous delivery and continuous deployment is as simple as that. Like I said, continuous delivery is focused on having a release-ready artifact, and continuous deployment automates that release, so there’s no manual intervention. It’s a fantastic practice. We certainly encourage it for everybody, and it’s very attainable these days, especially with safeguards in place like canary deployments and blue-green deployments and automated checks and things like that. It’s really about orchestrating that deployment across all of your different runtime environments, so you can have the confidence that the change you are making will remain safe and improve your customer’s experience.
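
To make the distinction concrete, here is a minimal, runnable Python sketch of the same pipeline run both ways. Every helper function and stage name is a hypothetical placeholder, not the API of any particular CI/CD tool; the only difference between the two functions is the manual gate before production.

```python
# Minimal sketch: continuous delivery (human gate before production)
# vs. continuous deployment (no human gate). All helpers are stubs.

def build_artifact(commit: str) -> str:
    print(f"CI: building artifact for {commit}")
    return f"app:{commit[:7]}"

def run_automated_tests(artifact: str) -> None:
    print(f"CI: running automated tests against {artifact}")

def deploy(artifact: str, env: str) -> None:
    print(f"CD: deploying {artifact} to {env}")

def manual_approval(prompt: str) -> bool:
    return input(f"{prompt} [y/N] ").strip().lower() == "y"

def continuous_delivery(commit: str) -> None:
    artifact = build_artifact(commit)
    run_automated_tests(artifact)
    deploy(artifact, "staging")
    if manual_approval("Release to production?"):  # delivery: a person decides
        deploy(artifact, "production")

def continuous_deployment(commit: str) -> None:
    artifact = build_artifact(commit)
    run_automated_tests(artifact)
    deploy(artifact, "staging")
    deploy(artifact, "production")  # deployment: fully automated, guarded by checks

if __name__ == "__main__":
    continuous_deployment("3f2a9c1d2b7e")
```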

Kanchan Shringi 00:03:58 I think we’ll get to that a little bit in the context of some later questions and investigations I’d like to do around differences between B2C and B2B. However…

Adam Frank 00:04:09 Oh, most definitely.

Kanchan Shringi 00:04:10 At this time, just focusing on the big picture, do you think there’s a big difference between implementing CI/CD for a monolith versus a microservices based product?

Adam Frank 00:04:21 Oh, great question. So the practices there are really quite similar. The difference between microservices and a monolith is that microservices each have their own individual responsibilities, so you can make smaller changes to smaller areas of the overall experience you are delivering. With a monolith, if you’re making a change to one area, you’re typically updating kind of that entire area. So a lot of people with monoliths really get into their practice of continuous delivery and have that manual check in place at the end to release that release-ready artifact, with a checklist, just to ensure an extra layer of safety. And when it comes to microservices, that’s really where people are starting to employ a lot more continuous deployment and automation. I mean, the services have their own responsibilities and, you know, some teams are responsible for different microservices, so they want to continue to move fast and iterate, and that all allows them to do that.

Kanchan Shringi 00:05:25 So are there any prerequisites before somebody starts implementing CI/CD?

Adam Frank 00:05:31 You know, that’s a fantastic question that we get from a lot of our customers all the time. When we’re talking to prospects, and when I talk to a lot of developers, they say, yeah, you know, we really want to do continuous deployment, but we just don’t have this integration test automated, or, you know, we’ve got a security scanning process that we need to run, so it’s just something that we’re not quite ready for. When in fact, I believe quite the opposite. You don’t have to have everything automated to then go and employ continuous deployment. If you don’t have an integration test automated, that’s okay. If you need to do some security scanning, that’s okay. That’s actually great; we certainly encourage that. As part of orchestrating your deployment, you can have some checks in place: you can have the artifact deployed out to the environment, you can have the integration test run, and then you can click the approve button and have that continue on to the next environment or the next stage within your overall deployment. That’s all great. You know, we encourage people, we help you get there; you can get to the fully automated process. I don’t think you’re blocked just because you don’t have one piece automated.

Kanchan Shringi 00:06:43 So that manual approval is contingent on some manual test passing, is that what you mean?

Adam Frank 00:06:48 Exactly, yes. Yeah, if there was an integration test there, or there was some security scanning that needed to pass successfully before it continued on to production, anything of that nature. It’s really about orchestrating the checks and balances that you have in place, and when that passes, then the deployment carries on. I would say the major prerequisite to doing any type of continuous delivery or continuous deployment is really having a continuous integration process in place. And that’s where a lot of developers start. Obviously getting that build automated, having that artifact ready, that’s absolutely a prerequisite. We don’t recommend anybody jumping straight into continuous delivery or continuous deployment if they’re not actually practicing continuous integration and having that build artifact automated.
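
As a sketch of that point, the snippet below embeds a single manual approval (standing in for a not-yet-automated security review) inside an otherwise automated promotion from dev to production. The environment names, checks, and helpers are illustrative assumptions, not a specific product’s pipeline format.

```python
# Hypothetical sketch: one manual gate inside an otherwise automated pipeline.

ENVIRONMENTS = ["dev", "staging", "production"]

def deploy(artifact: str, env: str) -> None:
    print(f"deploying {artifact} to {env}")

def automated_checks(artifact: str, env: str) -> bool:
    print(f"running automated checks for {artifact} in {env}")
    return True  # stub result

def manual_gate(reason: str) -> bool:
    # Stand-in for "click the approve button" once the manual step is done.
    return input(f"Approve ({reason})? [y/N] ").strip().lower() == "y"

def promote(artifact: str) -> None:
    for env in ENVIRONMENTS:
        deploy(artifact, env)
        if not automated_checks(artifact, env):
            print(f"automated check failed in {env}; stopping promotion")
            return
        # The one step that is not automated yet sits here as an approval.
        if env == "staging" and not manual_gate("manual security scan reviewed"):
            print("not approved; stopping before production")
            return
    print("promotion complete")

if __name__ == "__main__":
    promote("app:1.4.2")
```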

Kanchan Shringi 00:07:35 What about the actual update? Shouldn’t there be a prereq of making sure that it’s a zero downtime update?

Adam Frank 00:07:42 Oh, so back to the microservices and the monolith, that’s a great point. Zero-downtime updates are absolutely a goal for a lot of organizations. For a lot of organizations, you can mitigate some of that using traffic management, service meshes, and things like that as part of your deployment process. Certainly with load balancers and blue-green deployments, kind of shifting that traffic for users, there are a lot of things you can do to mitigate downtime and get as close to zero downtime as possible. There are certainly some application and development practices that you would need to employ as part of your application that live outside of some of that process, which we certainly encourage. And you know, I think there are a lot of people practicing that today; we certainly are with our SaaS offerings, having that zero downtime. But there’s a way that you can do it, using a number of techniques throughout your continuous deployment process, so that it appears to be zero downtime, I would say.
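
One common way to approach this is a blue-green cutover with gradual traffic shifting. The sketch below models the load balancer as a simple in-memory weight table; in practice that role is played by a real load balancer, service mesh, or ingress controller, so treat the details as assumptions.

```python
# Hypothetical sketch of a blue-green cutover with stepwise traffic shifting.
import time

weights = {"blue": 100, "green": 0}  # blue = current version, green = new version

def set_traffic_split(green_percent: int) -> None:
    weights["green"] = green_percent
    weights["blue"] = 100 - green_percent
    print(f"traffic split: blue={weights['blue']}% green={weights['green']}%")

def healthy(color: str) -> bool:
    print(f"health-checking the {color} environment")
    return True  # stub: a real check would hit readiness and health endpoints

def blue_green_cutover() -> None:
    print("deploying the new version to the idle green environment")
    if not healthy("green"):
        print("green unhealthy; aborting, blue keeps 100% of traffic")
        return
    for pct in (10, 50, 100):          # shift users over in steps
        set_traffic_split(pct)
        time.sleep(1)                  # stand-in for a bake/observation period
        if not healthy("green"):
            set_traffic_split(0)       # instant rollback: point traffic back at blue
            return
    print("cutover complete; blue can be retired or kept around for rollback")

if __name__ == "__main__":
    blue_green_cutover()
```

Because users are always routed to a fully healthy environment, the switch looks close to zero downtime from the outside, which is the effect described above.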

Kanchan Shringi 00:08:46 So back to the question I had earlier and if you had any examples that would be great. But is the approval process or the practice of adopting or choosing between continuous deployment and continuous delivery any different for a B2C SaaS and a B2B SaaS?

Adam Frank 00:09:06 We’ve got a number of customers that are B2B and a number of customers that are B2C. I would say some of the largest complexities that we see with B2C are different regulations and geographical constraints. People are creating apps that are both on the web and on mobile phones, and there are a lot of different processes and regulations that come into play there. So while they are practicing continuous delivery and pushing code out to something like the App Store, as an example, they would absolutely have a manual process in place to make sure that a lot of the regulations, the checks, the balances are in place before the latest version of that particular app is available on the App Store. I mean, I’ll use Snapchat as an example. Snapchat is a global company. They’re in over a hundred countries, absolutely massive in India, and they have so many different regulations that they need to deal with, and they’re deploying across thousands and thousands of Kubernetes clusters as backends.

Adam Frank 00:10:24 So they are practicing continuous deployment in some areas, in different areas of the planet, around the globe, and certainly in different areas of the application, but then continuous delivery in other areas. Using their main app as an example, and going through the App Store process and making sure that all the checks and balances are in place there, both from Apple and from Snapchat, they do have manual approvals in place to make sure that the changes are not breaking changes and stay within the regulations and customer experience that they’re expecting to deliver.

Kanchan Shringi 00:11:01 So how does one start the journey beginning with identifying what is the best process based on my unique goal or my organization’s unique goals?

Adam Frank 00:11:13 Yeah, a lot of people start with that one team in the organization that everybody looks up to. You know, they’re a little bit more mature, a little bit more advanced in some of their processes. They’ve taken the time to really hone their integration process. They’ve really taken the time to start adopting and practicing new methodologies and adopting new technologies. So what we’ve seen is a lot of people will take that poster-child team within the organization, and they’ll work through the process they need to be able to automate their deployment. And some nuances in there can be that they require different security scanning, or they require different constraints put in. So they’ve got multiple environments: they deploy to the west of North America and then the east of North America before they go out to Europe, or something of that nature.

Adam Frank 00:12:10 They set forth that process with that team, and then it’s all automated, and of course people within the organization are looking up to that team, and that team has now increased their velocity even more, increased their reliability even more, by really honing in and employing continuous deployment. And then they start to roll on the next team and the next team; the more teams they roll on, the better and better they get at drawing out the process and adopting very similar processes. We have a very expansive customer base right now that is really in a lot of the elite category of development, doing thousands and thousands of deployments per day. And something that we noticed when we looked at all of the different pipelines that all of these different customers were setting up was that, when it comes down to it, people are really doing almost the same four or five things. So when you start to realize that, you can start to roll teams on much faster and you can have process ready to go for them. And there are a lot of companies out there that have been able to come in and use a single pipeline, a single process, for a lot of this despite different areas of their application or what it may be. And that of course has certainly accelerated things and made the platform engineer’s life a little bit easier as they draw out their process.

Kanchan Shringi 00:13:30 You mentioned four or five things. Can you elaborate?

Adam Frank 00:13:33 Yeah, I mean, integration tests are one; everybody’s doing a level of integration testing. Security scanning: the vast majority of people are doing some type of security scanning, or signing for that matter. And everybody’s got at least three environments, from seed-stage companies that we’ve talked to all the way up to the largest Fortune 100, Fortune 50 companies. Everybody has at least three stages: dev, staging, and production. So they need to be able to deploy to dev, do something, deploy to staging, do something, and then deploy to production, and in that order; those are the constraints that they would want put in. And that’s one of the most common scenarios that we see out there: everybody’s got three, and then it grows from there, three to five to 10 to hundreds, you know, around the globe, different environments, different regions and things of that nature.

Adam Frank 00:14:27 And then you start getting into a little bit more of the advanced use cases: you start getting into the blue-greens or the canaries and then automating canary analysis. So when it comes down to it, everybody has that goal of increasing their reliability and moving quickly without breaking things. So they want to start by deploying to those three environments, running tests like integration, making sure that the code is secure and there are no vulnerabilities and things of that nature. And then start getting into 5% of the traffic, and 10%, and 25%, up to 50, and analyzing that in an automated fashion, so they know that the change being rolled out is safe and they can have the confidence to continue pushing multiple changes and multiple updates per day.
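
A progressive canary like that can be sketched as a loop that widens the traffic split only while the canary’s metrics stay within a budget relative to the baseline. The metric values below are simulated; in a real rollout they would come from your observability stack, and the thresholds are illustrative.

```python
# Hypothetical sketch of automated canary analysis driving a traffic ramp.
import random

TRAFFIC_STEPS = [5, 10, 25, 50, 100]   # percent of traffic sent to the canary
LATENCY_BUDGET = 1.2                   # canary may be at most 20% slower than baseline

def latency_p95_ms(deployment: str) -> float:
    # Stub: pretend to query p95 latency for the baseline or the canary.
    return random.uniform(90, 110)

def canary_healthy() -> bool:
    baseline = latency_p95_ms("baseline")
    canary = latency_p95_ms("canary")
    print(f"baseline p95={baseline:.0f}ms, canary p95={canary:.0f}ms")
    return canary <= baseline * LATENCY_BUDGET

def rollout() -> None:
    for pct in TRAFFIC_STEPS:
        print(f"routing {pct}% of traffic to the canary")
        if not canary_healthy():
            print("canary regressed; shifting traffic back and rolling back")
            return
    print("canary promoted to 100% of traffic")

if __name__ == "__main__":
    rollout()
```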

Kanchan Shringi 00:15:14 Thanks Adam. So I wanted to explore a little bit the manual approval before delivering to production and the potential cascading effect of that. If you have multiple services that have a dependency, meaning my service A has a dependency on B and B has a dependency on C, and I have a manual approval, at what level should I do that? At the service level? And if I do it at more of a product level, which includes all the services, what are the challenges I would have? I’m thinking just of constructing environments to do full end-to-end testing with all the latest code in place. Does my question make sense?

Adam Frank 00:15:59 It does, absolutely. We have a good number of customers that do have tightly coupled services and need to make sure that those are all updated in some type of sequential fashion or updated simultaneously. And that includes rollbacks; if there is a rollback needed, those tightly coupled services also need to roll back. Some of the difficulties that we’ve seen there, with some of the manual approvals that people had in place beforehand, were really making sure that the sequence and the dependencies were known and things were done in the correct order. I mean, I can take you back 15 years, before there was even proper continuous delivery in place. I used to have to update this application that we had when I was on call, and there were nine different services, and you had to update service one, service two, service three, service four, all the way through service nine.

Adam Frank 00:16:53 And each one had its own manual approval. Each one had its own manual testing; each one had its own manual everything. And when we were confident that one was good, we went on and updated the next one. And then as soon as we found out, say on service four or service five, that the update didn’t work, then we had to go back and roll everything back, because there was no fixing it and rolling it forward at that point in time. So there were a lot of difficulties having that manual approval in place like that. And that’s again where continuous deployment can actually help, or solutions out there, ours included, that enable you to automatically update tightly coupled services and roll back tightly coupled services as needed, understanding those dependencies.

Kanchan Shringi 00:17:37 So if somebody did have such tight coupling, then continuous deployment is certainly a challenge. However, let’s say people, you know, especially in a B2C SaaS environment, have a much better structure where they are able to do continuous deployment. What does this mean for the customer experience? The consuming company will now be exposed to changes almost continuously. What does it mean for them to be able to test, as well as look at any new features that are coming, with the changes being made available?

Adam Frank 00:18:16 So I want to step back on that just for a second, because I want to be very clear that the best way to change a user experience is by leveraging feature flags. You don’t want to have a feature a quarter completed or half completed or anything like that out there for users to stumble upon and think that it doesn’t work or it doesn’t work as it’s supposed to, and, you know, perhaps have support tickets and bugs and things like that raised when it’s not a complete, full feature. So continuous deployment is still deploying that code out there, having it ready. But when it comes to actually changing the user experience significantly like that, we certainly recommend using feature flags. Now, smaller changes that may not change the customer experience to that dramatic an effect, those, again, you can control using traffic shaping. So you can do it to 5% of the traffic, 10% of the traffic, 25% of the traffic and make sure that everything is working as it is intended to.

Adam Frank 00:19:19 And the users aren’t going to necessarily notice that level of change. But like I said, when it’s a bigger feature, or let’s say a UI refresh where you’re completely changing where buttons used to be, you’re going from a top-level nav to a left-hand nav, you would certainly want to change that type of experience with something like feature flags. So the code is continuously deployed out there, but the actual change to the user is done through something like feature flagging. If not, you’re going to have a left-hand nav and a top-level nav at the same time. And you know, we work in a very agile fashion and we’re going to iterate on things. So having a top-level nav and a left-hand nav, with, you know, one module in the top and one module in the left, that’s going to be a very clunky user experience. They’re absolutely going to think that your application’s broken in that case. So again, that’s where something like feature flagging comes into the SDLC and works really, really well.
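
The pattern described here reduces to a flag check at render time: both code paths are deployed, and the flag decides which one a given user sees. This is a toy in-memory flag store used only for illustration, not a particular feature-flag service.

```python
# Hypothetical sketch of a percentage rollout behind a feature flag.
import hashlib

FLAGS = {"left_hand_nav": {"enabled": True, "rollout_percent": 10}}

def is_enabled(flag: str, user_id: str) -> bool:
    cfg = FLAGS.get(flag, {})
    if not cfg.get("enabled"):
        return False
    # Hash the user id so each user gets a stable yes/no answer as the percentage grows.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < cfg["rollout_percent"]

def render_nav(user_id: str) -> str:
    # Old and new navigation code are both deployed; the flag controls exposure.
    return "left-hand nav" if is_enabled("left_hand_nav", user_id) else "top-level nav"

if __name__ == "__main__":
    print(render_nav("user-42"))
```

Turning the flag up from 10 to 100 percent, or off entirely, changes the user experience without another deployment, which is the separation of deploy from release being described.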

Kanchan Shringi 00:20:14 So Episode 498 did go into some detail on feature flags and the complexity, maybe, in deciding when you start on something whether it is going to need a feature flag. What’s your experience on that?

Adam Frank 00:20:31 I think a lot of the times when we’re designing a new feature, a new experience, from the vision of that and the North Star goal that we have for it, we can understand whether feature flags are going to be needed, or at what level feature flags will be needed. There are a lot of underlying changes that you can make to a particular feature, or when implementing a new feature, that may not interrupt, may not be there for the user to experience. But a lot of the time, I’ve seen that the developers and the product development team that is creating this have a really good sense of when the user is going to be able to start viewing this, start using this, so they know when something should be feature flagged. I mean, have it as part of your planning, make it part of the discussion, make it part of the design phases: at what point are we going to feature flag this? Are we going to feature flag it right from the beginning? Are there changes that we can make and then feature flag it? It’s a practice that really should happen at the onset.

Kanchan Shringi 00:21:31 So maybe let’s end this section by drilling into testing. On the developer side and on the integration side, what are the different kinds of tests you would expect to have in the pipeline, and in which environments? Do we talk in terms of a unit test setup, which is automated in a dev environment, and then a pre-production environment? What different kinds of environments do you recommend, and what different levels of testing automation do you recommend?

Adam Frank 00:22:04 Yeah, definitely unit tests, absolutely unit tests, in the development environment. Integration tests typically happen in, you know, a full-fledged staging environment. Security scanning and security tests oftentimes happen in another isolated environment, one that sits before your production environment, alongside the staging environment; we see that quite often as well. And then of course there are certain smoke tests and things like that that run when the deployment happens out to production. And again, with something like canary analysis, even some of those end-to-end tests and testing some of the user experience: baselining that metric, or, you know, those metrics that are communicating back to you what the customer is experiencing, what the latency looks like, what the saturation of your particular application looks like, and understanding that baseline to move forward with your deployment and to increase that traffic, to move it from customers in the west to customers in the east, things of that nature.

Kanchan Shringi 00:23:12 And then what about testing on the consumer side of the B2B application? Do I also need to have automated tests every time the SaaS provider is updating and deploying to production? Is that recommended?

Adam Frank 00:23:26 On the consumer side of things? No, I mean, I can’t think of a single service or application that I use as a consumer for which I have any type of tests running. I think we really have to trust; part of that reliability and stability and customer experience being delivered to us is what allows us to have that level of trust. And it’s an incredibly competitive market and competitive day that we live in. I can move to another service, I can move to another application very, very quickly and still achieve what I need to achieve, whatever that may be. Think of streaming services as an example; you know how many streaming services are out there now? If you start to have a bad experience with your streaming provider, you can move to another streaming service very, very quickly. So we have to trust, and if that trust is broken, that company is going to pay, not us. We’re going to move; we’re going to go continue to live our life and experience it.

Kanchan Shringi 00:24:27 On the B2B side, though, it is more sticky. So does your answer change for the B2B app?

Adam Frank 00:24:34 Depending on what the level is, absolutely. A lot of B2B apps there will be some level of integration with other B2B apps and those two companies may or may not have a relationship with one another. This may be something that was created by these companies together jointly. There’s an integration there and you don’t necessarily need to test that if they’re working together and ensuring that the quality is there. If it’s something that you created yourself to stitch two applications together to stitch two processes together, I would absolutely recommend testing it. If, you know, if there’s no relationship between the two companies and it was a third party that created some level of integration or something like that, you know, I’d absolutely want to test it. And the larger the enterprise, the larger that application is and the more business critical it is, you know, the more we would absolutely recommend, you know, testing there as well.

Kanchan Shringi 00:25:30 And hence creating an automated test suite.

Adam Frank 00:25:33 Yeah, yeah, most definitely.

Kanchan Shringi 00:25:35 Okay. Let’s switch now and spend some time on the developer experience and the tooling. Maybe just starting with the evolution of this tooling for CI/CD.

Adam Frank 00:25:46 Yeah, I think developer experience is something that has certainly come to the forefront. If we look back at kind of the inception of DevOps, bringing development and operations together back in 2007, 2008, there was the whole mantra about creating a culture of you build it and you own it, at any cost. There are a lot of complexities with that, and a lot of things that have to be known in order to fully own it and operate it. And I think that’s really where we’ve seen the evolution and the new role and responsibility of site reliability engineers and, as of late, platform engineers. Site reliability engineers and platform engineers should still very much work in a DevOps and agile manner; don’t get me wrong, DevOps is still very much alive. But now platform engineering has focused a lot more on the developer experience, to really empower the developer, to enable that developer to own it and build it by providing a platform that increases and improves that developer experience.

Adam Frank 00:26:54 And if you look at all the different nuances of the software development lifecycle, creating that platform that enables the developer to deploy their code fast, at a high quality, increasing reliability and assuring their customer experience: if you increase your developer experience, you are definitely going to increase your customer experience. So I think it’s something that’s really come to light over the last couple of years as businesses try to stay more and more competitive and move faster. So there’s a whole suite of processes and tools that kind of underlie that platform to increase and improve the developer experience. And continuous deployment is certainly at the center of that.

Kanchan Shringi 00:27:40 Can you cover the tooling and the evolution maybe at a high level?

Adam Frank 00:27:44 Yeah, yeah, for sure. I mean, we just talked about the DevOps mentality that was there of build it, own it, and, you know, first continuous integration and continuous integration tools and automating a lot of that build. There’s a huge market for test tools and quality assurance. Security, more and more as of late, has started to shift left. And then observability came around in about 2015; it was no longer just the ability to monitor from the outside things that are happening, but really being able to observe the internals of an application from telemetry. There’s a massive market for observability now. Hot on the heels of continuous integration was continuous delivery, and then even more automation came in place with continuous deployment, and DevOps has really enabled people to continue to move quickly. But I think, more so now, that developer experience is at the forefront and platform engineering is starting to have a focused responsibility for enabling and empowering those developers. You see them with responsibilities of providing testing frameworks, providing continuous integration tools, providing continuous delivery and deployment tools, providing, you know, the observability suite and framework. And whether that sits between the developer and the site reliability engineer, they are certainly looking to improve both experiences, and, you know, first and foremost the developer’s, so that code is of high quality and being deployed quickly.

Kanchan Shringi 00:29:20 So earlier in the podcast you mentioned a customer of yours that has hundreds or maybe thousands of deployments a day. What kind of challenges does that create?

Adam Frank 00:29:31 As soon as you said that, another customer came to mind that has over 125,000 deployments per month. They are a technology company that’s been around for many, many years, and I think about some of the challenges that they have really seen. We sat down and chatted with them and a seed-stage company at the exact same table, and it was so interesting to hear the complexities that they talked about and the complexities that the seed-stage company talked about, and just, you know, them kind of smiling back and forth at each other from time to time, like the other is crazy, living in a world that the other can’t even remember or fathom. I think some of the challenges that this very large company has experienced is they’ve got eight different languages that they need to support across hundreds of applications.

Adam Frank 00:30:22 They’ve got underlying infrastructure that is using multiple cloud services, from serverless to container services to container services that they’ve built within their own data centers. They are a company that’s around the globe, so they’ve got multiple regions and, again, different regulations in different countries. So they have got an immense amount of complexity depending on what application updates they’re deploying and where they’re deploying them to. So I think part of it is really establishing that culture, and you’re going to really be able to establish and help that culture by empowering your developers. The more you empower those developers, the more they’re going to get on board with the overall culture.

Kanchan Shringi 00:31:08 So for these deployments, since this is such a huge volume, how do the developers track whether a deployment has to be rolled back? What are the metrics they would typically use to determine that? And I assume they do that automatically?

Adam Frank 00:31:22 Yeah, the most basic ones are really looking at some of the underlying infrastructure metrics. Those ones are easy, but every application is going to have its own set of metrics, and it’s really up to the application development team that understands what those metrics are and the experience that they’re delivering. I mean, you can look at things like golden signals: looking at saturation, looking at latency, you know, looking at response times and things of that nature that are certainly going to help you establish, does this need to be rolled back? The latency was one second before this change and now it’s three seconds; that could be detrimental to the application, or it could still be okay. You know, that’s really up to the development team and understanding the experience that they are delivering. So being able to look at those metrics and understand when things need to roll back or can continue forward, and then going even further and doing that in an automated fashion and having statistical analysis and machine learning do that for you, I mean, that’s the ultimate prize, and that’s very achievable in today’s world. That is very achievable with our solutions and, you know, practicing continuous deployment out there.
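
The latency example above maps directly to a small decision function: compare golden-signal style metrics after the deployment against a pre-deployment baseline and roll back when a chosen budget is exceeded. The metrics, numbers, and thresholds below are illustrative assumptions, the kind an application team would choose for itself.

```python
# Hypothetical sketch of an automated rollback decision from baseline metrics.

BASELINE = {"latency_s": 1.0, "error_rate": 0.01, "saturation": 0.60}
MAX_RATIO = {"latency_s": 2.0, "error_rate": 3.0, "saturation": 1.5}  # allowed vs. baseline

def should_roll_back(current: dict[str, float]) -> bool:
    for metric, baseline_value in BASELINE.items():
        ratio = current[metric] / baseline_value
        if ratio > MAX_RATIO[metric]:
            print(f"{metric} is {ratio:.1f}x the baseline; outside the budget")
            return True
    return False

if __name__ == "__main__":
    # Example from the conversation: latency went from 1 second to 3 seconds.
    after_deploy = {"latency_s": 3.0, "error_rate": 0.012, "saturation": 0.65}
    print("roll back" if should_roll_back(after_deploy) else "continue the rollout")
```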

Kanchan Shringi 00:32:28 So how has generative AI, if at all, changed this? You mentioned machine learning.

Adam Frank 00:32:33 Yeah, I think for generative AI in terms of the software development lifecycle, the area that we’re seeing the most focus in right now is code generation. Part of that code generation can be configuration generation as well, so producing different sets of YAML that may be needed. In the world that we live in, Kubernetes really introduced us to declarative configuration, and being able to produce some declarative configuration has been a pretty easy process for generative AI. When it comes to doing machine learning on metric data and things like that, that’s been around for a number of years now, you know, before this big generative AI bang; that’s been a great process that has worked for a number of customers, a number of people, a number of developers that, you know, we have great relationships with and help every single day.

Adam Frank 00:33:25 But I think with generative AI, the big boom in it right now is really the code generation. Armory is a global company, and I think one of the biggest uses that we have for it right now, as a global company where English is not the first language for a lot of the company, is being able to have a common language that everybody speaks within the code, using comments and things like that to help other developers understand, you know, what the thought process was and what this code is supposed to do. That has been really helpful. It’s been great using generative AI to help write a lot of our comments and things like that. Fast prototyping: using generative AI to produce some levels of code for fast prototyping has also been really, really great for us. But in terms of deployment, at this stage I think we’re still very early on, although I did write an article about some prototyping and practicing that we did a couple of months ago. So I think there is most definitely a future beyond declarative that is generative when it comes to continuous deployment.
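
As a hedged illustration of the configuration-generation use case mentioned above, the sketch below asks a text-generation model for a declarative Kubernetes manifest and then validates the result before anything is applied. The `generate` function is a placeholder for whatever model client you use; here it returns a canned manifest so the example runs end to end.

```python
# Hypothetical sketch: drafting declarative configuration with a generative model.

REQUEST = ("Write a Kubernetes Deployment manifest for image example/app:1.2.3 "
           "with 3 replicas exposing container port 8080.")

def generate(prompt: str) -> str:
    # Placeholder for a real model call; returns a canned answer for the sketch.
    return """apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels: {app: example-app}
  template:
    metadata:
      labels: {app: example-app}
    spec:
      containers:
      - name: example-app
        image: example/app:1.2.3
        ports:
        - containerPort: 8080
"""

def draft_manifest() -> str:
    text = generate(REQUEST)
    # Never apply model output blindly: lint it, schema-validate it, and send it
    # through the same pipeline checks as any hand-written change.
    assert "kind: Deployment" in text
    return text

if __name__ == "__main__":
    print(draft_manifest())
```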

Kanchan Shringi 00:34:27 So maybe let’s switch to Armory and unique aspects of Armory solution then. I read about continuous deployment as a service. What is that? Can you elaborate?

Adam Frank 00:34:38 Sure. Armory started out in 2015, 2016, and at that time Netflix was one of the leading development organizations in the world, and they had developed internally a project called Spinnaker. Spinnaker quickly became the de facto continuous delivery, continuous deployment project out there on the market. So Armory adopted and commercialized Spinnaker, and one of the biggest advantages that Armory has had over other companies is being able to work with elite development teams that are at the forefront of adopting early-stage technologies like Kubernetes and early-stage practices like continuous deployment, and at very large scales. I mean, like we talked about, you know, hundreds of thousands of deployments per month across several different environments around the globe; we’ve been able to learn an immense amount from our customer base. And continuous deployment as a service is really the evolution of that. So what we knew we needed to do was create a simpler offering that was full-fledged in power.

Adam Frank 00:35:43 So declarative came along with Kubernetes, and CD as a service is really a declarative continuous deployment process that orchestrates the deployment of that artifact from your artifact repo across all of your environments. It allows you to hook in security scanning, integration testing, any different process that you have, so it fits directly into your SDLC. The traffic shaping that we’ve been talking about, the blue-green, the canary, all that great stuff, it’s all in there. It’s very, very powerful. It is, as you can guess by the name, a service, so it also has, you know, auth and RBAC and things like that all built into it. So it’s a full package, ready to go: any development team or platform engineering team that is looking to increase and improve their developer experience and spend more of their time focused on their competitive advantage can leverage CD as a service as their CD control plane.

Kanchan Shringi 00:36:42 So then you take over the challenge of scaling. If somebody is doing hundreds or thousands of deployments, I assume you have to make sure that your platform or your service scales up to that.

Adam Frank 00:36:54 Oh, most definitely, most definitely. And as part of that evolution, we looked at other open-source projects, and we saw, you know, a lot of difficulties with those in terms of scale when it comes to some of these larger organizations, and that’s why we developed it from the ground up. It’s been so much fun developing it from the ground up and working with some of the best technology companies in the world, and working with, you know, a lot of seed companies at the same time that don’t have the funding or time to employ people to build out a CD process, a CD solution, based on open-source projects and other things like that. And don’t get me wrong, we are huge advocates of open source. We’re still very much contributing back to several open-source projects that are out there; big, big fans of the CNCF and everything that’s happened there as well. So I think there’s a big, bright future for CD as a service, the community in general, and people who are getting started with continuous deployment.

Kanchan Shringi 00:37:56 So is there a difference in recommendation for teams that are getting started, depending on how many different products they want to integrate and build CI/CD pipelines for, versus a team that’s just getting started and has maybe a couple of services that they’ll be testing?

Adam Frank 00:38:16 The more tools you have, the more it’s going to take to manage them, the more people you’ll need, and the more they’re going to cost, whether it’s an open-source project and you need to hire people (there are some costs there: ramp-up time, knowledge, all that kind of good stuff) or it’s some type of subscription service. So the more you have, the more it is going to cost you. So we absolutely see tool rationalization and consolidation, ourselves included, and all of that good stuff. So I think whether you have a couple of services or whether you have a hundred services, you should always try to keep your platform as tight as possible so you can manage it easier and you can help keep your costs down. And there are some fantastic tools out there: you know, OpenCost is a great open-source project, and CloudZero is an example that can help you really manage your costs at the same time, not just your infrastructure costs but translating those to your business costs, and everything that you have in and around your platform and what you are serving out.

Kanchan Shringi 00:39:13 So you did talk a little bit, in the context of generative AI, about what’s in the future, but do you want to elaborate and talk about some other stuff that you’re looking at, at Armory?

Adam Frank 00:39:24 One of the big things that we are working on right now, that we’re incredibly excited about, comes from the learnings that we’ve had with a lot of these customers. Kubernetes has taken the world by storm, but every single customer that we have and every single prospect that we’ve talked to has Kubernetes and something else. And Kubernetes is also complex, so a lot of people don’t have the time and energy to invest in it. So they move to simpler container services, like Amazon’s ECS as an example. So there’s still very much a large world out there that requires deploying to more than just Kubernetes. So we are actually working right now, releasing it very soon, on our next target, which is Lambda, and building that out so people can build additional targets that they need, as Armory continues to build out more and more targets, so people can deploy from a single platform to more than just Kubernetes.

Adam Frank 00:40:17 They can deploy to Kubernetes, they can deploy to Lambda, they can deploy to ECS, and so on and so forth. That’s something that we’re working on in the near term right now that we are very, very excited about. And then of course supply chain security is a big one right now, and that’s a story that we haven’t really told yet. Full capability is there, but it’s a story we haven’t really told yet: making sure that the code that you are deploying to development, the code that you have written, is signed, and that it is indeed the code that is making it out to production. That’s another one of those shift-left security things that is very, very important, and it’s just a story that we haven’t quite told yet. Further down the road, yeah, absolutely, the generative AI stuff is very, very exciting, and we have absolutely started to play with it. To the point where, you’ve got brownfield first: everybody’s got some level of infrastructure in place, so simply providing the account information, the credentials to that infrastructure, to any tooling and letting the generative AI connect them, and then simply using text to say this is going to deploy, or deploy this, deploy this using blue-green, or even letting the generative AI figure out whether blue-green or canary is the proper strategy there.

Adam Frank 00:41:34 It’s really exciting stuff to see what we’re doing. Brownfield’s different; there are a lot more complexities there when there is no infrastructure or nothing in place. I think that’ll take a little bit longer to figure out, but needless to say, it’s still very much an area that is being explored. Greenfield, sorry; I think I said brownfield there twice, but I meant greenfield on that second one.

Kanchan Shringi 00:41:55 Got it. Adam, do you think there’s any key topic that we missed talking about today that you’d like to cover?

Adam Frank 00:42:02 No, I don’t think so. I’ve had a ton of fun today. We covered a lot of ground when it comes to continuous deployment. I think this is going to be a fantastic episode for all the listeners out there, and, you know, I’m certainly excited to come back and do it again and continue talking about developer experience perhaps, or platform engineering, two incredibly exciting topics.

Kanchan Shringi 00:42:22 Adam, how can people contact you?

Adam Frank 00:42:25 People can reach me anytime at Adam.Frank. That’s A-D-A-M dot F-R-A-N-K @armory.io. A-R-M-O-R-Y.I-O.

[End of Audio]
