Nigel Poulton, author of The Kubernetes Book and Docker Deep Dive, discusses Kubernetes fundamentals, why Kubernetes is gaining so much momentum, and why it’s considered “the” Cloud OS. Host Gavin Henry spoke with Poulton about when to use Kubernetes, the different ways to deploy it, the protocols involved, security considerations, pods, sidecars, the RAFT protocol, Docker Desktop, Open Container Initiative, Docker Swarm, alternatives to Docker for containers, handling secrets, Docker container runtime deprecation, the correct number of master and worker nodes, using different availability zones, “declarative models,” “desired state,” the beauty of YAML, “deployments,” testing, APIs, Helm (kubernetes package manager), persistent state, DNS, mTLS, service meshes, deploying an example app that’s a JSON API written in golang with PostgreSQL backend, and when to think about starting with Docker Swarm instead.
- Episode 409: Joe Kutner on the Twelve-Factor App
- Episode 334: David Calavera on Zero-downtime Migrations and Rollbacks with Kubernetes
- Episode 319: Nicole Hubbard on Migrating from VMs to Kubernetes
- Episode 246: John Wilkes on Borg and Kubernetes
- Episode 429: Rob Skillington on High Cardinality Alerting and Monitoring
- A Performance Evaluation of Containers Running on Managed Kubernetes Services
- Hierarchical Scaling of Microservices in Kubernetes
- Open Container Initiative
- Docker Desktop
- Docker Swarm
- RAFT protocol
- Docker Captain
Transcript brought to you by IEEE Software
Gavin Henry 00:00:17 Welcome to Software Engineering Radio. I’m your host, Gavin Henry, and today my guest is Nigel Poulton. Nigel is a Docker Captain and trainer of many businesses all around the world. He is the author of two fantastic books titled The Kubernetes Book and Docker Deep Dive. He’s a regular speaker and workshop leader at leading tech conferences and events such as DockerCon and QCon and many others. Nigel, welcome to Software Engineering Radio. Is there anything I missed in your bio that you’d like to add?
Nigel Poulton 00:00:47 Well, thanks for having me, and no, I think that sounded all right. I’m easy.
Gavin Henry 00:00:51 Okay, excellent. So I’ve broken the show up into three parts: quite a meaty introduction to lay the foundation for the next two parts, then we’ll move on to Kubernetes technology, and then we’ll discuss deploying an application on Kubernetes. So I’d like to start with an overview of what Kubernetes is and its history. For a deeper understanding of its history, I’d recommend the listeners listen to show 246, which was John Wilkes on Borg and Kubernetes, back in 2016. So, what is Kubernetes?
Nigel Poulton 00:01:30 So, elevator pitch: it is an orchestrator of microservices applications, and don’t we just love a buzzword in the industry, “orchestrator of microservices applications.” Let me just try and boil that down briefly, and look, we can go into more detail if you want, but I think the modern way of developing applications is different to what it was certainly 10 years ago. So we’re building these applications that are built of lots of small specialized parts that talk to each other via APIs, and in the modern business world we want those applications to be able to scale up and down based on demand. We want them to be able to self-heal. We want to be able to push rolling updates and things to them during the business day, without having to wait for long weekends and do loads of planning for, I dunno, maybe one update a year. So, you know, small moving parts in our applications: scale up, scale down, self-heal, push updates all the time. And we need something to, without using a buzzword, orchestrate that, or to oversee or manage that. And that’s effectively what Kubernetes does.
Gavin Henry 00:02:36 Is it new?
Nigel Poulton 00:02:38 It feels new, but it isn’t really. So we’ve had it since, I think, the summer of 2014, and I’ve been using it since then. It is a big project, or a big platform or tool. I think the core parts of it, like the core objects that are required to build and deploy applications on Kubernetes, are really nicely baked and generally available and stable and all of that goodness. But of course, then you’re going to get, not edge cases, but sort of newer features in the platform, things that are trying to push the boundaries, that are a bit newer. So I think the core components of Kubernetes are nicely baked and generally available, but of course, if you want to go to the bleeding edge with some of the newer features, then those parts are newer, whereas other parts are super trustworthy and reliable. I mean, people are deploying applications to Kubernetes in production left, right, and center these days, so it’s obviously stable enough for a lot of people and it’s mature enough. I’m always careful when I say, you know, something is “enterprise ready” or is “mature,” because terms like that mean different things to different people in different organizations. So of course you always have to do your own research and your own testing, but look, people are using it and, by and large, loving it.
Gavin Henry 00:03:58 Thanks, Nigel. I think you touched on this in your answer to my first question, but what problem does it solve?
Nigel Poulton 00:04:06 I’ll give you an analogy here, and this is something that I use in trainings and things that I do all the time, right? Let me compare a modern microservices application to a football team. That can be American football or soccer; it doesn’t really matter. Okay, but that team is made up of lots of individuals, and each individual has a different role to play. So in soccer, you know, somebody is a goalkeeper, other people are strikers, and other people are defenders. Or in American football, you know, there’s somebody whose job is just to kick, and then you’ve got a quarterback and a wide receiver. It’s all these different things, but they have to come together to work as a team. And a microservices application is very similar. So it might have a web front-end, a data store on the backend, some authentication, and some middleware, and on their own those elements are kind of great, a little bit like a goalkeeper in football or soccer is great, but on their own they’re not going to accomplish very much in a game.
Nigel Poulton 00:04:59 So the individuals on the team have to come together with a plan and play as a team. Well, in a microservices application you’ve got all these different moving parts that need to come together to give an overall useful application experience to an end user. Now, in the football or soccer analogy, you have a coach who comes along and organizes everybody into positions and arranges the tactics. If somebody gets hurt or injured, they swap them out with somebody else. If you’re down towards the end of the game and you need to press forward more, attack more, you’ll change the tactics, you’ll change the personnel on the pitch. Kubernetes is very similar for a microservices application. Yes, all those pieces have to talk to each other, but they also have to be able to respond to business demand. So if it’s Black Friday or something, right, and you’re increasing demand on your system, well, Kubernetes can ratchet things up so that you have more instances of your web server and your backend to cope with demand.
Nigel Poulton 00:05:54 And then later on in the month, when things calm down, it can cool the application down and remove instances and things like that. But also then, if things break, Kubernetes does what the coach on the soccer or football team does: it takes the broken part of the application out of play, replaces it with a new part, and keeps things ticking over. So basically, modern microservices applications have to be able to respond to business needs and have to be able to self-heal, and all of that stuff Kubernetes does. And if you know anything about VM sprawl in, like, the VMware world or the virtualization world, you get the same with containerized applications or microservices applications. You get huge sprawl, and you need tools to be able to help you manage that, and Kubernetes does all of that as well.
Gavin Henry 00:06:39 Why is this a hard problem? And why is Kubernetes the best solution? Was there an earlier generation of something trying to address that hard problem?
Nigel Poulton 00:06:49 So I’ll tell you the reason that it’s hard. Back in the day, we would tend to deploy an application as a more monolithic application, and excuse me if I’m being a bit high level and not going into detail here, but let’s just call it a traditional monolithic application. You would have everything that that application did, like the front-end, the authentication, the logging, the database (remember, I’m being super high level here, okay), but it would ship as a single binary or a single installer. Meaning that, you know, if you ever had to perform an update on, say, the reporting aspect of it, quite often you’d have to take the whole thing down over a long weekend. High risk, everyone in the office, pizzas and coffee all weekend, and it was kind of painful. So we moved to this more services-based model where we take all of those different features of the application and break them out into their own pieces, so that we can have different development teams working on them.
Nigel Poulton 00:07:43 We can update the reporting system without having to touch the web front-end or the database or whatever, and it’s a much better model. It allows us to scale one element without scaling another part, and update one element without updating another part. But because they’re all talking via APIs over a network, it suddenly becomes more complex. So you get a lot more capabilities from your modern applications, and they’re much better suited to modern business needs and requirements, but gluing it all together is way harder. Keeping them all talking is way harder. When you do that at scale, you can’t have a human being doing it; you need a system to do it, and that system is Kubernetes. Now, to the second part of the question, and I’m not sure whether I’m necessarily answering your question here, but there were alternatives. So Docker Swarm and Apache Mesos, things like that, can do very similar things to Kubernetes, and still do.
Nigel Poulton 00:08:43 But the reason that Kubernetes grabs all of the air time, it seems is that it has so much momentum and so much almost universal backing from all of the cloud providers, from all of the enterprise data management companies, you said scares you, HPS NetApps, people like that. Everybody is behind Kubernetes. So there are alternatives out there, and I’m not here to say that one is better than the other, but Kubernetes is looking like the safest bet. Like if you’re a business and you’re like, we want to do this and we have to pick the right technologies Kubernetes as a pretty nailed on safe bet. We know it’s going to be here in the future and we know it’s developing and it’s got a great ecosystem because everybody is on board with it. I hope that kind of answered the question.
Gavin Henry 00:09:32 Yeah, perfect. I think that probably answers my next three, which are: why is it winning, what is driving adoption, and how rapidly is it being adopted? Just because, like you said, with my business hat on, you get a bit tired trying to assess everything that’s out there. You know, you get to a certain point and it’s like, well, why is everyone using it? And, you know, the big names tick the boxes. Would you agree with that, that that’s what’s driving adoption? Everybody’s just talking about it.
Nigel Poulton 00:10:05 And that is certainly a big part of it, for sure. I think if we just wind the clock back a little bit, though, to give some sort of context, at least from my opinion, and I’ve been in the ecosystem, in the industry, you know, containers and stuff, for a while now.
Gavin Henry 00:10:19 What do you think is driving adoption, then?
Nigel Poulton 00:10:22 Yeah. So if you think back to when Amazon Web Services was new, it came in from left field and started eating the lunch of your Microsofts and your IBMs and your Red Hats and your HPEs and people like that. And they didn’t like that, and they needed an answer to Amazon Web Services. Now, I think at first it’s fair to say that we all thought, or hoped, that was going to be OpenStack, and OpenStack tried very much to be like an open-source version of AWS that would go toe to toe, feature to feature, but be something that you didn’t have to be tied into a cloud provider for. And we did a lot of hard work around OpenStack, and unfortunately, no disrespect intended here at all, it didn’t end up becoming what I think a lot of us hoped it would be. And when I say “us,” I mean, like, you know, a lot of those traditional enterprises, and Cisco and Microsoft and people, really needed something to compete with AWS, but OpenStack didn’t become that. And then, again from left field, and I refer back to the episode that you had in the past on Borg: out of Borg and Omega, Google launched Kubernetes into the market.
Nigel Poulton 00:11:25 Now, you’ll hear people say that Kubernetes is the operating system of the cloud, and this, for me, is at the crux of why it’s gaining so much popularity, right? When I say the operating system of the cloud: if you think about Linux or Windows being the operating system that sits on top of Dell servers and HP servers and Cisco servers and Supermicro servers, yeah, one of the jobs of the operating system is to abstract the hardware below, so that the user of an application doesn’t have to care, or even know, whether the application is running on a Dell versus a Compaq or an HPE server. Yeah? Like, why would they want to care? They don’t. And as long as your application will run on Linux or Windows, you can move that application to different server hardware infrastructure, as long as it’s running the same operating system, all Linux or all Windows. Well, Kubernetes does the same thing.
Nigel Poulton 00:12:11 It says: we’ll take your cloud infrastructure, AWS, Google Cloud Platform, Azure, all of the others, or even virtual machines or physical servers on premises. We’ll take all of that, and you slot Kubernetes on top of it. And as long as your application will run on Kubernetes, the consumer of the application doesn’t have to care whether it’s AWS at the bottom or your on-premises virtual machines, right? And by the same token, you can take the application that runs on Kubernetes and run it on Kubernetes somewhere else. So it does that abstraction that lets you run your code anywhere, and potentially, and look, it’s not a silver bullet here, but it does make life easier when it comes to things like migrations as well. So as an end user, it’s got a lot of things that are advantageous, but also the ecosystem needed something that was going to abstract Amazon Web Services.
Nigel Poulton 00:13:02 And Kubernetes potentially does that, and does it for a lot of people, so that it almost doesn’t matter whether it’s AWS or Azure or GCP or IBM Cloud or Linode or wherever, right? It’s abstracted. And it makes it easy for you to pick one cloud today, realize that you made a mistake next month and you want a different cloud, and move to it much more easily than you could if you didn’t have Kubernetes. Now look, I have a habit of waffling, I speak for a living, so I’m going to cut myself short there, and by all means tell me if that didn’t make sense.
Gavin Henry 00:13:33 No, that’s good. I’ve just got a few bits left in the intro to sort of smooth it off, but yeah, that was a great answer. Thanks, Nigel. It’s just some nitty gritty: what’s Kubernetes written in?
Nigel Poulton 00:13:44 Predominantly Go.
Gavin Henry 00:13:46 Okay, Golang, yeah. So Kubernetes itself is an application written in Go. What does it actually run on?
Nigel Poulton 00:13:53 Yeah. So it will run on Linux or Windows, on anything underneath: cloud instances, virtual machines in your data center, physical machines in your data center, as long as you have Linux or Windows. Now, for the most part it’s been Linux, but there’s catch-up being played by Windows as well.
Gavin Henry 00:14:12 So you could have, like, an infinite mirror where you’ve got Kubernetes running on Kubernetes, running on Kubernetes, running on virtual machines, and so on. You might not want to do that, though.
Nigel Poulton 00:14:25 So what I would say is, like, if you want to build it yourself: say you have a bunch of hardware in your data center, let’s just take an on-premises example, but it could be cloud instances. You install Linux on that, let’s say, and let’s say you’ve got six machines. And then on those six machines you deploy Kubernetes, and you would probably, and I’m maybe getting into too much detail here, deploy three manager nodes, that’s the control plane that manages how the whole cluster works, and then three worker nodes where you run your user applications. But it’s hardware, be that physical tin in your data center or cloud instances or virtual machines, because of course they emulate hardware. You install an operating system, Linux or Windows, and then you go to work installing Kubernetes on top of that, and Kubernetes forms a cluster, or a substrate or whatever, that you then deploy applications to.
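[Editor’s note: the three-manager, three-worker layout Nigel describes can be tried locally with the kind tool (Kubernetes in Docker), which isn’t mentioned in the episode; this config file is purely an illustration of that topology.]

```yaml
# Hypothetical kind cluster config: three control-plane (manager)
# nodes and three worker nodes, matching the six-machine example.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
```

You would create it with `kind create cluster --config cluster.yml`; kind runs each node as a Docker container and fronts the three control-plane nodes with a small load balancer.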
Gavin Henry 00:15:13 Excellent. So would a safe bet be some type of virtual machine platform which Kubernetes runs on top of?
Nigel Poulton 00:15:20 Yeah, absolutely.
Gavin Henry 00:15:23 Okay. So you’re still in the virtual machine world for some of those types of things. So where can we find it? How do we get our hands on Kubernetes?
Nigel Poulton 00:15:33 So, lots of ways, and it’s got a lot simpler these days. Let me run you through maybe two or three scenarios. Let’s say you’re a developer, or just somebody who wants to play around with it on your laptop: hit Google and search for either Minikube or Docker Desktop. Personally, and it’s just me, they’re both great, don’t get me wrong, but I prefer Docker Desktop because it’s click, click, click, GUI, and two minutes later you’ve got Kubernetes up and running on your laptop. A single-node cluster, so don’t get me wrong, it’s not for production, but you get to play around on your laptop. Just Google Docker Desktop, download it, macOS and Windows 10, and ten minutes later you’ll be in business. If you want to deploy it to the cloud and you’re relatively new, the easiest way is a hosted Kubernetes service.
Nigel Poulton 00:16:19 Basically, that’s where you go to your trusted cloud provider, and all the big cloud providers, in fact just about every cloud provider, has hosted Kubernetes. You go there and you say: I want a Kubernetes cluster with three manager nodes for high availability, and let’s say five worker nodes to run my applications. Build the worker nodes with this number of cores and this amount of RAM, and use this version of Kubernetes. Then you hit the build button, and pretty much two minutes later you will have a Kubernetes cluster that you can deploy applications to. And your cloud provider hides all of the hard parts: how do you make that control plane highly available and high-performance, and how do you manage upgrades of the control plane? With hosted Kubernetes, your cloud provider does all of that for you. So, an easy on-ramp. And then the third way would be to build it yourself, either in the cloud or in your own data center, and that’s what we talked about before: you build your infrastructure, which can be physical machines, virtual machines, or cloud instances, lash an operating system on top, and go and install Kubernetes.
Gavin Henry 00:17:17 Sounds good. Yeah, I had a play with Minikube early last year sometime, and then I uninstalled it because it just sat with high CPU on my machine doing nothing, but that was probably me. I’ll put some show notes in for Docker Desktop. And what would be a good distro to use in the Linux world to run it on?
Nigel Poulton 00:17:38 I try and shy away from recommending different products, okay? It’s different for different companies. I tend to go with vanilla upstream Kubernetes as much as possible, but Red Hat has OpenShift, and there are others out there as well. And I’m personally quite a fan of Rancher. If you look up Rancher, I mean, they do great things with Kubernetes.
Gavin Henry 00:18:00 Yeah, Rancher, that’s kind of taken off now, I think. Okay, that’s perfect. I’ll just squeeze us on to the middle section after this last question. So, given all the greatness of Kubernetes, what shouldn’t we use it for?
Nigel Poulton 00:18:16 Oh, right, yeah, that’s a really good question. So Kubernetes is definitely not for everybody. I think going forward, by the way, it will be the right choice for more and more people. But if you’re a small shop with just a small handful of applications, Kubernetes is very much potentially overkill. It has a learning curve, and you have to be able to keep up with the updates and things like that. And in no way should your infrastructure, and I’m referring to Kubernetes here as infrastructure, right, demand more of your time than your applications. If that’s the case, then it’s probably a big sign that it’s not the right choice for you. So if you’re deploying something smaller, and you’re a smaller team and you’re just testing the water with this, I highly recommend Docker Swarm. It does pretty much what Kubernetes does, but it’s way easier to install and way easier to wrap your head around. But the chances are that you will outgrow it in the future, if you grow your business or as you want to do more advanced things, because what Docker Swarm gains in the simplicity and ease-of-deployment category, it loses in the extensibility category. So use it to get started, and then if you think, yes, microservices applications are for me and we’re going to deploy more and more in anger going forward, then you might want to say, well, let’s start moving to Kubernetes.
Gavin Henry 00:19:42 Yeah, that’s where we’re at in my business. We’ve been on Docker Swarm for the past two or three years, so I’m looking at all this stuff. Yeah, I mean, you just said it: off you go, and it connects great with all the source code repositories and their CI/CD tools. I just love it. But like you’ve just said, you get to a point where you want to do sort of half-and-half upgrades and, you know, stuff like that. So there’s a little bit of logic: we have to check if things are working before the next step gets rolled out, and things like that. And I think Kubernetes just does that much nicer. So we’re going to move into Kubernetes terminology. We’ve covered its history, and if listeners want to dig in a bit more, we’ve done a whole show on its history. A nice high-level overview for us there, Nigel, thanks. So I’d like to discuss the terminology we should use, what we should have in our heads when we’re thinking about the example app we’re going to talk about in the last section, which we’re going to deploy on Kubernetes. I’ve got around 10 questions for this second section; don’t worry if we mix it up a bit. What are the main components of Kubernetes?
Nigel Poulton 00:20:52 So I would say, at like a hundred thousand feet: Kubernetes is a cluster that you deploy applications to, and that cluster is made of masters and workers. The masters, generally speaking, you shouldn’t be deploying user applications to. You should be reserving those to basically oversee the cluster and oversee your applications, and quite often I will refer to the masters as the brains of the cluster. So this is where all the logic runs: the scheduler runs there, and the data store runs there that holds the state of the cluster and the state of your applications. It makes all the decisions as to which of the worker nodes to deploy application instances to, and it’s where all the logic sits to do the scaling and the self-healing and the updates and things like that. Then you have your worker nodes, which are Linux or Windows, and you can mix and match these in a cluster. So you can run Linux and Windows applications side by side in the same Kubernetes cluster, should you want to or need to, but those worker nodes all talk back to the control plane, reporting the status of the applications that they’re running. So generally speaking, you’d want a highly available control plane, probably three or five nodes, and spread those across different availability zones or regions in your data centers or cloud, you know, just so that if one of your regions or data centers goes down, you don’t lose the whole thing.
Gavin Henry 00:22:14 Usually an odd number, isn’t it? It’s usually three, five, or seven.
Nigel Poulton 00:22:19 Three, five, sometimes seven, just to avoid split-brain scenarios. And if you’re interested, it’s because, I mean, there’s more to it than this, but it runs the Raft consensus algorithm in the background, which has a preference for odd numbers, yeah.
Gavin Henry 00:22:30 We’ve done a show on Raft and some other versions of Raft earlier this year, so I’ll link to those. If you want to drill down, they’re really good.
Nigel Poulton 00:22:39 And then I’d say, as well, you want to deploy your worker nodes across different infrastructure zones too, because of course, as important as Kubernetes and the brains of the cluster are, the real meat of everything is your business applications, so you want those spread across infrastructure as well. And I would say that that’s like a 50,000-foot view of Kubernetes: it’s about masters, it’s about nodes. But then when you come to applications, and I don’t want to steal some of your other questions potentially, it’s things like declarative manifests, and desired state versus actual state or observed state, and things like that.
Gavin Henry 00:23:13 That’s perfect. And when you talk about the threes and the fives of masters and workers, and then your different, let’s call them availability zones, would you have three masters in each availability zone? Or do you split them, one in each?
Nigel Poulton 00:23:32 Yeah. So generally speaking, let’s say you are running a data center or a cloud setup where you’ve got two availability zones just to keep it simple. Yeah,
Gavin Henry 00:23:43 Yes.
Nigel Poulton 00:23:43 Yeah, or even two areas within Europe or within the US. I would say if you go with three masters, put two in one and one in the other; or if you go with five masters, three in one zone and two in the other, so that you keep
Gavin Henry 00:23:59 The quorum.
Nigel Poulton 00:24:01 So, as long as it can achieve quorum. Let’s say you’ve got those two availability zones, and you’ve got two masters in one and one master in the other, and the network connectivity between the two goes down. Now, both sides, we’ll call them zone one and zone two, knew that there used to be three masters. Zone one can only see two, and zone two can only see one. They both knew there used to be three. Well, the side that can see two knows that it has a majority, so it can form what we call a quorum. The other side knows that it doesn’t have a majority, so it would effectively put itself into read-only mode. The applications in that zone will still run, but you can’t perform updates against them, because that zone knows that it doesn’t have the majority of nodes. It doesn’t know whether the other two are up or not, so it says: I’m going to be careful, I’m not going to write any updates to the cluster config or to our applications, just in case those other two are still running and they’re writing updates as well.
Gavin Henry 00:25:06 Nice. So we’ve got masters that hold all the information, and workers that do the work. So we’ve done masters and nodes. What is the declarative model?
Nigel Poulton 00:25:17 Okay, and this is a good one, I think, because at least for me, right, and I know it is for other people, it can be a really hard one to wrap your head around, especially if it’s super new to you. So the idea of a declarative model is that you say to the platform, and for us that’s Kubernetes, what you want, and you let Kubernetes do the hard work. So we might say we have an application, and we’ll go dead simple, right, because we haven’t got diagrams to help us: it has a frontend and a backend, and I want five instances of the web frontend running, and I want two instances of a data store on the backend to keep it highly available. That is what I want, Kubernetes. And you put a little bit more in than that, don’t get me wrong.
Nigel Poulton 00:26:02 You say the web front end should be based on this container image. And the data store backend should be based on this container image. But I want five of the front, 10, two of the backend. You pop out any Jamo, five for Kubernetes and you post it to Kubernetes an HTTP post. Yeah. Cause all API driven Kubernetes looks at it and says, right, I’m supposed to have five front end and two backend at the moment when I observed the cost of I’ve got zero on the front tendency on the backend of this config. So I’ll go and do the hard work of building 510 web servers and two backend data stores in a high availability formation. So Kubernetes goes away. And it does all the hard work of pulling down container images, starting containers, joining them to networks, allocating network poets. Kubernetes does all of that for you.
Nigel Poulton 00:26:46 Cause we’ve just said now manifest, you know, I want five, right from and service. You tell them what port you want it on and what image. And then Kubernetes gets those up and running. And it stores that in the cost of store as its desired state, this is what we want. And then it kind of sits back, puts its feet up and it watches the costume, the applications. And it’s kind of polling it all the time. Let’s just call it piling it. It’s not quite, but, and it says, it’s checking the cluster. I’ve got five on the front, two in the back, happy days, five on the front, two in the back, happy days, then let’s say something happens and we lose a worker node. And let’s just say to keep it easy. And we have lost two instance of the web front-end. So Coopernetics looks at the costumes.
Nigel Poulton 00:27:26 Post are fine. Whoa, hang on. I’ve got three , you know, general quarters, all hands to battle stations. We’re supposed to have five, we’ve got three. We need to spend another two up. So it spends another two up and brings it back to that desired state defined. And it does all the kinds of stuff of making sure, you know, as long as you’ve got it configured properly, you know that it doesn’t stick those two new instances on the same note, that’s already running another two. If it’s going to know that it’s running non in a different availability down, it will, you know, can be covering up the governance, spread them out like that. So that’s the declarative state. You don’t put together massive scripts that say, run this command to pull an image, run this command, to start the container, run this command, to expose it on this port. None of that you just say to Kubernetes I want this go and make it happen. And by the way, if the state changes and I haven’t told you to change it, you do whatever hard work is required to get it back to what I’ve asked for. And that’s the model.
Gavin Henry 00:28:24 So my model is what I want the application to be?
Nigel Poulton 00:28:30 Let me give you a quick example, right? So let’s say you’re building an extension on your house and you contract a builder to do it. It’s unlikely, I mean, some people might, but it’s unlikely that you’re going to say: right, in this extension I want the foundations to be three feet deep, or whatever, with this type of cement, and then I want 17 courses of bricks with, you know, whatever insulation you have. You generally don’t do that. You will go with a more high-level plan and say: look, I want an extension with three walls; on the back wall I want a big window facing the garden; and I would like an oven that’s got five hobs, or whatever. It doesn’t matter. You don’t go into the detail telling the builder how to do the work. You just say: look, this is almost a picture of the kitchen extension that I want; go and build it and make sure it conforms to that. Instead of you giving the builder a massive long list saying go to this tradesman and get this type of brick, go and get this type of insulation. You don’t. You just say, effectively, to Kubernetes: here’s a picture of what I’m after. Make it happen. Make it so.
Gavin Henry 00:29:34 Yeah, so it’s not quite a blueprint. It’s just a level above that.
Nigel Poulton 00:29:38 It's a level above that. Now don't get me wrong, we do go into a bit of detail, like telling it exactly which images to use and which ports to use and things like that. But it is such a difference. I think the key is that when things break, and here's an example, when things break, Kubernetes knows what it's supposed to have, so it goes and does the work to fix it back to that. Whereas if you don't take a model like this and you've built it with scripts, you're then logging on and manually firing up two new instances, and you're choosing which nodes to put them on, and you're working out which scripts to run. There's none of that. You tell Kubernetes what to do. If it breaks in the middle of the night, you sleep through it, and you wake up in the morning and you're like, cheers for fixing that for me.
Gavin Henry 00:30:20 Cool. So the desired state is what the masters are constantly trying to figure out?
Nigel Poulton 00:30:26 Constantly trying to work towards that desired state.
Gavin Henry 00:30:29 And is that our desired state, how we want it to be or how
Nigel Poulton 00:30:35 So, as I say, we're kind of in charge. We say we want five web front ends and two database back ends. That's my desired state. Kubernetes will store that in the cluster store; it's effectively a record of intent. That's what I want. And then Kubernetes is constantly observing the cluster, making sure that what it observes matches what we've asked for, matches our desired state.
Gavin Henry 00:30:56 So my model there is, I want a web server and a backend, this is what they're made up of, and the desired state is how many I want and what they should look like when they're live and up. Yeah, absolutely. These are all in YAML, so, yeah, yet another markup language. How would you test and verify that it works, the YAML you've got?
Nigel Poulton 00:31:20 Yeah, don't get me wrong. I mean, I do love YAML now, but...
Gavin Henry 00:31:26 This was a question from one of the other hosts, who's obviously got something going on at the moment, and he's like, oh, can you make sure you ask how you test?
Nigel Poulton 00:31:37 So Kubernetes does let you do dry runs of configs that you want to have as your desired state, that you're wanting to push to the cluster, and it will show you, you know, what would happen if you were to apply it. It will feed back to you if you've got the YAML syntax and stuff wrong. There are tools out there that will help you with that as well. I'm actually not an expert on what those tools are. I'm not an expert on YAML, but I do absolutely love it these days. I used to hate it, but I think once you wrap your head around it, it's actually quite a friendly language, in my opinion. I much prefer it over full-on JSON.
Gavin Henry 00:32:12 Yeah. It's good just to have something that everybody uses, you know, so you don't have to think too much. And I think that's what it forces, too.
Nigel Poulton 00:32:21 As well, right? Because we're defining our applications in this YAML file, or YAML files, we're almost forced into self-documentation. Like, we're documenting to Kubernetes what we want, what the application should look like. So we've automatically got a document that, let's just say, developers can give to operations. You know, operations is always saying to developers, document your apps, tell us what they're comprised of, what do you need? Well, because we're forced to do that to tell Kubernetes to run it for us, we can also give that YAML file, or those YAML files, to operations and say, this is our application, and operations can read it, because the YAML is not hard to read. Don't get me wrong, we can get the indentation wrong, and I do all the time, but, you know, just reading it as a human is relatively easy.
Nigel Poulton 00:33:03 It's easy to look at a YAML file and say, okay, we've got a web front-end service here based on nginx, version whatever, and we're asking for five replicas of it. Oh look, it's on port 8080, and we're pulling it from this particular container registry. Operations can easily look at that as well. You can onboard a new team member and say, you know, go and start working on this application. Oh, well, what's the application comprised of? Read the YAML file. And in that way it's like a super powerful tool, I think, for some of those more-or-less technical jobs. You can onboard people, you can talk to operations and say, hey, here's the config of our app. Yeah.
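As a concrete sketch of the kind of YAML file Poulton is describing here (names, image, and registry are hypothetical, not from the episode), a deployment asking for five replicas of an nginx-based front end might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend            # hypothetical name
spec:
  replicas: 5                   # the desired state: always five instances
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: registry.example.com/web-frontend:1.0  # which image, from which registry
        ports:
        - containerPort: 8080                         # the port the app listens on
```

Posting this to the cluster (for example with `kubectl apply -f web.yaml`) records the intent; running `kubectl apply --dry-run=client -f web.yaml` first checks the file without changing anything, which is the dry-run facility mentioned above.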
Gavin Henry 00:33:43 That's very true. One question I didn't ask when we were talking about desired state is, when the masters are trying to get things into that shape, what would, in your experience, be some of the reasons that maybe one of the web servers just disappears? Would it be your application's done wrong, or the hardware goes, or what's common?
Nigel Poulton 00:34:07 Yeah. So you can have hardware failures of the infrastructure, like the server or whatever that's hosting the virtual machines. You can have network-related issues, and I'm fond of blaming the network, I always used to do that in my previous jobs. The Kubernetes agent, the kubelet, that runs on each worker node could also have crashed for whatever reason. I guess any reason could cause any of your worker nodes to become unavailable. There's also the other thing as well: when you're trying to deploy or to scale an application, if you're doing it the right way, and the right way is to say, every time you deploy an application or a microservice, you tell Kubernetes what its resource requirements are. You know, so the minimum in order for it to run, and what its limits are, because that stops it from taking over the whole system.
Nigel Poulton 00:34:57 If you're doing that with all of your applications and your cluster gets full, and you, say, deploy a new instance of an application or a new application, or you manually try to scale part of an application up, and it demands, or requires, more resources than your worker nodes have available, then you will start to see parts of your application go into the pending queue. So I don't think Kubernetes does anything special here, or I don't think it introduces any new types of faults or problems, but those are generally the types of things you will find from an infrastructure and an application perspective.
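The resource requirements and limits Poulton describes live in the container spec. A minimal hypothetical example (image name is an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend
spec:
  containers:
  - name: web
    image: registry.example.com/web-frontend:1.0  # hypothetical image
    resources:
      requests:             # the minimum the scheduler must find free on a node
        cpu: "250m"
        memory: "128Mi"
      limits:               # the cap that stops the app taking over the whole node
        cpu: "500m"
        memory: "256Mi"
```

If no node has the requested CPU and memory available, the pod sits in the Pending state, which is the "pending queue" behavior described above.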
Gavin Henry 00:35:32 Perfect, thank you. Two last bits in this section before we move on to the example application. We've managed to not mention pods or sidecars up until this point, and that would tie into my last one, which would be: what is a deployment? So I think those three are probably best discussed together. Go for it. So can you take us through what a pod is, a sidecar, and I suppose the deployment would be deploying our intended application to Kubernetes. Yep.
Nigel Poulton 00:36:01 So I'll just wind it back a little bit, right? You take application code, written in your favorite language or whatever, and then you have its dependencies, just like you would if you were at the point of virtual machines or whatever. It's application source code and its dependencies. You package those two together, your app code and dependencies, into a container, or container image. Now, Docker runs containers natively on its own platform. Okay? However, Kubernetes does not let you run a container directly on Kubernetes. And this can be confusing: even though Kubernetes is an orchestrator of containerized applications, you can't run a container directly on Kubernetes. In order to run, it has to be wrapped in a pod. So it's quite often easy just to think of a pod and a container as being the same thing. The real difference is that a pod is just a very thin wrapper around the container that has a bunch of metadata that Kubernetes uses.
Nigel Poulton 00:36:57 So you write your application and you package it as a container. But like I just said, Kubernetes really likes you to be able to say, well, what resources does this container require, and what resources should I cap it at on the node? And that's what you put inside the pod that sort of wraps around the container. So Kubernetes, it's got high moral standards: it does not allow naked containers to run on it. You must dress your containers in a pod, but it's such a super thin, lightweight wrapper. Now, there are instances, as you become more advanced, where you will deploy an application, but that application needs a helper. Look, let's use a service mesh as an example. Okay? Let's say you're deploying an application as a container, but you want to be able to extract telemetry, and let's say you want to automatically encrypt traffic that goes into that container and comes out of it.
Nigel Poulton 00:37:53 Kubernetes has a very powerful model called the sidecar model, and there are different types of sidecars, but we'll keep it pretty high level. A service mesh, which is a buzzword these days, allows you to inject another container into your application that then, in the service mesh model, sits between your main application and the network and intercepts all traffic coming into your application and all traffic that goes out of it, allowing you to get great network telemetry and things like that, but also enabling you, without touching your application code, to say, I'm going to force traffic in and out of this application to be encrypted. And you can do that because the sidecar is in between the application and the network. So the sidecar encrypts everything, and your application doesn't even have to know. So, in a service mesh model, you can take an existing application that you've got, not have to touch that application code, and be able to add network telemetry and mutual TLS for authentication and encryption without having to touch your app, because you just throw the service mesh sidecar into your application and it does it all transparently for you.
Nigel Poulton 00:39:06 Now, Gavin, tell me if you don't think that was well explained and we can go at it again.
Gavin Henry 00:39:11 No, that's perfect. That allows you, as I understand it, to add new things to a pod that's got a container within it, you know, different versions of it. I always understood, I think it was from your book, that the main thing you'd use it for is to go and fetch some new information, or to update something that shipped with the container. You know, that was one of the examples.
Nigel Poulton 00:39:35 That's maybe a better example, right? More of a simple example. You might be running a static web server that you've built as a container. Okay? But you update the content that web server is serving, and every time you make an update to that, you don't want to repackage, retest, and redeploy your container. So you would just run that web server container and have a sidecar container alongside it that says, I am periodically pulling new content from GitHub or from a file server or wherever, and I pull it to the local file system that the web container is serving its content from.
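A hypothetical sketch of that content-sync sidecar pattern: two containers in one pod sharing a volume, with the sidecar writing content the web server reads (the sync image name is an assumption, not a real published image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  volumes:
  - name: content              # shared scratch volume, lives as long as the pod
    emptyDir: {}
  containers:
  - name: web                  # main container: serves the content
    image: nginx:1.21
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html
  - name: content-sync         # sidecar: periodically pulls fresh content
    image: registry.example.com/content-sync:latest   # hypothetical image
    volumeMounts:
    - name: content
      mountPath: /content
```

Because both containers are in the same pod, they are always scheduled together and can share the volume, which is exactly what makes the sidecar model work.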
Gavin Henry 00:40:09 So like your own sort of mini content distribution network, a CDN. Yeah, that works. Cool. So what would our container choices be? It doesn't just use Docker, does it? Kubernetes can use anything that's OCI-compliant.
Nigel Poulton 00:40:27 So in the early days, Kubernetes heavily relied on Docker to do all of the low-level container functions, like pull container images, start containers, join them to networks, things like that. But Kubernetes now has a pluggable container runtime layer called the CRI, the Container Runtime Interface, which basically says you can swap in pretty much any container runtime that you want to do your low-level pulling of images and starting and stopping of containers. Now, Kubernetes has just announced that it will not be supporting full-fat Docker going forward.
Gavin Henry 00:41:02 Yeah, I saw that thread last week.
Nigel Poulton 00:41:06 It's going to take a couple more releases. This has really caused a storm. So, we've been moving towards this for a long time, because Docker, I'll call it full-fat Docker, full-fat Docker does, yes, pull images and start and stop containers and all that stuff, but it does a whole bunch more, a bunch more that Kubernetes doesn't need it to do. So Kubernetes has always been moving towards a more cut-down version of container runtime that just does what Kubernetes wants it to do. So for a long time now Kubernetes has used something called containerd, which is actually a Docker technology donated to, I think, the CNCF, and Kubernetes has allowed you to run containerd instead of Docker for ages now, just doing those low-level functions. And a lot of people have been using containerd instead of Docker for Kubernetes and it just works.
Nigel Poulton 00:41:58 But the point is, right, you can switch and swap whichever container runtime is your favorite. And the reason, or one of the reasons, you might want to do that is, out of the box, Docker and containerd containers are not as secure as virtual machines. They share the same host kernel. It's much easier to escape the container and get access to the host than it is from a virtual machine. So there are other low-level container runtimes out there that have a different workload isolation model from Docker and containerd, and you might need that for certain applications or for certain business units. So the ability to say, for actually running my containers, Docker is not a good fit for me, or containerd is not a good fit for me, maybe, I don't know, Kata Containers or gVisor or one of the others is a better fit for this application, oh, I can easily swap it out.
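Swapping runtimes per workload is done with a RuntimeClass object. A hedged sketch, assuming the nodes already have an isolated runtime such as Kata Containers installed and wired into the CRI config (handler and image names are hypothetical):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: kata                    # must match the runtime handler configured on the nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: sensitive-app
spec:
  runtimeClassName: sandboxed    # this pod runs under the stronger isolation model
  containers:
  - name: app
    image: registry.example.com/sensitive-app:1.0   # hypothetical image
```

Pods that omit `runtimeClassName` keep using the cluster's default runtime, so the two isolation models can coexist on the same cluster.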
Gavin Henry 00:42:50 Yeah, as long as it's OCI-compliant. Absolutely, yes. So just to draw a line under the Docker deprecation, for anyone that is familiar with it, it's a deprecation of the Dockershim, isn't it? That doesn't matter for most developers?
Nigel Poulton 00:43:06 I would say, I mean, of course, always do your testing, especially if you're running in production, but it is such low-level plumbing that most of us don't even need to know. Don't get me wrong, for important business applications I would never recommend saying you don't need to know, but it's the kind of thing that most people won't even know has happened.
Gavin Henry 00:43:31 Great. So in relation to pods and sidecars, what is a deployment?
Nigel Poulton 00:43:36 Well, a deployment is a higher-level object that wraps around a pod. Remember, a pod wraps around a container, and the reason for that is the pod adds metadata that Kubernetes uses, maybe for scheduling and resource requirements and things. Well, then, wrapping around a pod is a deployment object, and that brings things like the self-healing and the scalability. So the deployment object is where you say, I want 10 instances of this always to be running, getting back to the desired state that we talked about before. And it's the place where you can say, I want to update the version of the image that's being used, and the deployment will allow you to do that as, and there's a buzzword here, I do apologize, a zero-downtime rolling update. So let's say you've got 10 instances running, or 10 containers or 10 pods, and we'll update one at a time.
Nigel Poulton 00:44:28 We'll wait five minutes in between each update, we'll run tests in between. The deployment object has all of the logic for that. So I like to think that that's where the magic happens. Generally speaking, you will always be deploying via a higher-level object like a deployment. Now, I'll say one thing: a deployment object is the right way to deploy stateless microservices. If your microservices are creating and managing state or persisting data, then you will want to use a different object, maybe like a stateful set or something. But the point is, it's higher-level than a pod, and it brings magic to the game: self-healing and updates and rollbacks and things.
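The rolling-update behavior Poulton describes (one pod at a time, waiting between updates) is declared in the deployment spec. A hypothetical sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 10
  minReadySeconds: 300           # treat a new pod as ready only after 5 minutes up
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1          # update one pod at a time
      maxSurge: 1                # allow one extra pod during the rollout
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: registry.example.com/web-frontend:1.1   # bump this tag to roll a new version
```

Changing the image tag and re-applying the file triggers the zero-downtime rollout; `kubectl rollout undo deployment/web-frontend` is the matching rollback.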
Gavin Henry 00:45:08 So you'd find the desired state within that deployment object, and the declarative model is just the name for the approach?
Nigel Poulton 00:45:16 Yeah, the declarative model is just a buzzword for that, to say, I want five of this, make it so and make sure it always looks like that.
Gavin Henry 00:45:23 The desired state. We could actually look inside the deployment config and see that. Okay, let me quickly summarize, and stop me if I'm wrong. The main components of Kubernetes are masters and nodes, worker nodes, and there's an OCI spec to help people pick container runtimes that would run containers. Containers do not run directly on Kubernetes; they live inside a pod, which is basically like when you go into your IT department and say, my phone's not working, and they need some more context. So the pod gives you the context. The sidecar helps you bolt things on, or refresh things that are within that container that lives inside the pod. And the desired state is really what we want everything to be when it's live, which is contained within the deployment object, but there are other types of objects depending on how you want that desired state to be. Yes.
Gavin Henry 00:46:19 Excellent. So for the last section I've just made up an app. I was going to ask you to give us an example of an app that would run on Kubernetes, and I think we've done a bit of that with a web server and backend, and that was actually my example too. I was going to ask to be taken through a public-facing API written in Golang, so it's a nice binary, it serves and saves some JSON over HTTPS, and it has a bit of a trickier backend, which is a PostgreSQL database backend. So there's some state in there. So my question would be, how would this be packaged, which would relate to a deployment object, so that'd be my first question. Then my second would be, how do we deal with the persistent state of the database?
Nigel Poulton 00:47:07 You've got two distinct elements of the application: the stateless frontend and the stateful backend. The frontend would be built from your source code into a container, wrapped as a pod, and then wrapped into a deployment that says, I want however many of it. You've got a bunch of metadata that will help Kubernetes pick the right availability zones and all that kind of stuff, and you would push that deployment to Kubernetes. Kubernetes will store it as a record of intent in the cluster store and deploy those, let's call it five, instances of the web front end, and then it will sit in a loop watching the cluster, making sure that it's always got five. If something breaks, Kubernetes fixes it. Lovely. The second component of your application is a stateful component. Now, what we maybe weren't clear about before is that a deployment object only defines one pod, one application microservice. So we've got two microservices here, a web front end and a database backend. They can't be defined inside the same deployment object, because if we were to say, I want five instances of it, what do you get? Five instances of the web front end bit, or of the database backend bit? So if, let's say, you've got five different application components, you would need five different deployment objects. So whether it's a deployment object or a stateful set object, each service in your application is wrapped by its own higher-level object. Am I making sense?
Gavin Henry 00:48:33 Yeah, because otherwise, if you put them both in the same deployment, you'd have an API speaking to a database, and they'd all have different things they'd be serving up when they scale.
Nigel Poulton 00:48:42 Now, you can define them both, both separate objects, in the same YAML file, with three dashes between the objects, but they are different objects, even though they're in the same YAML file. Now then, for your database backend, it depends on which database and how you deploy it, but that would be deployed via a higher-level object as well. It could be a deployment, and it could be a stateful set as well. And then you've effectively told Kubernetes what you want, we'll just say five of the frontend and two of the backend. Post those config files, those YAML files, to Kubernetes, and Kubernetes deploys them and watches them. Now, within your applications, you need to have done the work to configure them to talk to each other. So obviously the API gateway, the frontend, will be exposing itself on a port. You've got to code that into your application, and you've got to put it in your pod spec as well.
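A hypothetical sketch of that multi-document YAML file, with the stateless Go API as a deployment and the database as a stateful set, separated by three dashes (all names and images are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-frontend             # five stateless Go API replicas
spec:
  replicas: 5
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: registry.example.com/json-api:1.0   # hypothetical image
        ports:
        - containerPort: 8443
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres                 # database replicas with stable identities
spec:
  replicas: 2
  serviceName: postgres
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13
        ports:
        - containerPort: 5432
```

One `kubectl apply -f app.yaml` posts both objects, but each keeps its own replica count and lifecycle, which is the point Poulton makes about one object per microservice.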
Nigel Poulton 00:49:38 So that Kubernetes knows which ports to expose it on. You will also need to tell it how to talk to the database on the backend. What port is the database listening on? Do I have any credentials that I need to present to the database in order to be able to write to it and read from it? Now then, the model here is that you don't code that runtime config into your application; you define it as a separate object. And the reason for doing this is, instead of coding all of that into your web front end, so that it knows the ports to expose itself on, the ports to talk to the database backend on, and, more importantly, any secrets that are required to talk to the database, you don't want that shipping with your web server, because you package that as a container image and you store it in a repository somewhere.
Nigel Poulton 00:50:32 Now, of course, you can secure these repositories with RBAC and all that kind of stuff, but it's just not a good model to put that kind of configuration data into your production application. I mean, what if you need to change the port, or what if you need to change a secret? You then have to repackage your entire application, run it through dev and test and QA and all that kind of stuff, and then redeploy it. It's a bit of a pain. So what we do is we keep the application as simple as we can. It's a web server, it posts some data, it exposes some port, and it talks to some database on some backend with some credentials. We put that config and those credentials in Kubernetes objects. So, generally speaking, we would put the config of the port and where things may be in a config map object, and any sensitive information, like passwords and usernames, into a secret object. And then at runtime, as part of that YAML configuration, we mount those config maps and those secrets into the web server so that it knows how to talk to the backend. Now, don't get me wrong, this is so much easier to explain when you've got diagrams and things like that, and I'm trying my best.
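A hedged sketch of the ConfigMap/Secret split Poulton describes, injected into the API pod as environment variables (all names and values are hypothetical placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
data:
  DB_HOST: postgres              # the service name of the backend
  DB_PORT: "5432"
---
apiVersion: v1
kind: Secret
metadata:
  name: api-credentials
stringData:                      # stored base64-encoded in the cluster store
  DB_USER: apiuser
  DB_PASSWORD: changeme          # placeholder only, never commit real credentials
---
apiVersion: v1
kind: Pod
metadata:
  name: api-frontend
spec:
  containers:
  - name: api
    image: registry.example.com/json-api:1.0   # hypothetical image
    envFrom:                     # inject config and credentials as environment variables
    - configMapRef:
        name: api-config
    - secretRef:
        name: api-credentials
```

Changing a port or rotating a password now means updating the ConfigMap or Secret and restarting the pods, with no rebuild of the application image.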
Gavin Henry 00:51:43 No, that's fine, and it fits nicely. In show 409 this year, we did a whole show on the twelve-factor app, which talks about putting your secrets and things into the environment, and then your containers can see the environment. So I think that fits nicely with the config bit there. Okay. So the real tricky bit here is to make sure you map what port needs to be public, how your apps are broken down, and really visualize how each bit speaks to the others.
Nigel Poulton 00:52:16 Yeah. So I think you kind of develop your application just as you normally would, but you've got to think with that twelve-factor application mindset of, I don't store my config or my secrets in the application. You write your application in the normal code that you're writing, in your favorite languages, using whatever the standard requirements and dependencies are. The only difference is, you use different tools to package it as a container, and you store your config and your secrets as Kubernetes config map and secret objects, and then a little bit of YAML to bind them all together. But, I mean, don't get me wrong, the first time you do that it is probably going to be a bit of a head-exploding moment, but it's so simple once you've done it.
Gavin Henry 00:53:03 It does force you into the right way to do things, doesn't it?
Nigel Poulton 00:53:08 And once you've done it, you're unlikely to want to go back and do it the other way.
Gavin Henry 00:53:12 Once we've done all this, and we get it onto Kubernetes, just to refresh that bit, we're posting a physical file to the master nodes, aren't we? And then it's just HTTP stuff, with like curl or whatever?
Nigel Poulton 00:53:26 You would use the kubectl command-line tool, but it's all an HTTP RESTful API. So, yeah.
Gavin Henry 00:53:35 It could be built into whatever source control tool you're using that does a POST. Okay. And I mentioned that it would be an API with an HTTPS front end. Would we do that in the binary and run the HTTPS there, or would we put something in front of Kubernetes to do all this, or does that depend on your model?
Nigel Poulton 00:53:58 I'd say that depends on your model, and you can do either.
Gavin Henry 00:54:02 Okay. And where does DNS fit into all this, and TLS?
Nigel Poulton 00:54:06 I mean, so many different layers. So obviously, if connections are coming externally from the internet, you will need to have the correct DNS records out there in public, but also within your Kubernetes cluster. Kubernetes runs its own internal DNS service that it uses for service discovery. So in our example, we've got two services, a web front end and a backend. You are going to configure your web front end to talk to that database backend using a name. We don't hard-code IPs, so that doesn't change with Kubernetes. So you would include that name in the config map for your web front end, but then how does it get resolved to the containers or pods that are actually running that database backend? Kubernetes has an object called a service object that does automatic service and DNS registration, and does all of the name lookups and things like that for you. So it is automagic, as long as you've configured it properly.
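A minimal sketch of the service object Poulton mentions, giving the database backend a stable DNS name (names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres               # this name becomes the DNS name the frontend uses
spec:
  selector:
    app: postgres              # routes to whichever pods currently carry this label
  ports:
  - port: 5432
    targetPort: 5432
```

The front end then connects to `postgres` (or the fully qualified `postgres.default.svc.cluster.local`), and the cluster DNS resolves it to healthy backend pods no matter how often the pod IPs change.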
Gavin Henry 00:55:11 Okay. And is the TLS, like the mutual TLS, automatically there between things, or does it only do what you tell it to do? So you have to think about it?
Nigel Poulton 00:55:20 It only does what you tell it to do within your application and stuff. Generally speaking, the preferred model these days is the service mesh model. It allows you to inject those sidecars that we talked about before, that sit between your application and the network, and will automatically do the mTLS for you without you having to touch your application.
Gavin Henry 00:55:38 That makes it much easier, because it's just off the shelf.
Nigel Poulton 00:55:41 Now, don't get me wrong, a service mesh requires an amount of knowledge and expertise to deploy and manage itself. But, again, it is the right model. Instead of you touching your application code, and having the developer of one of the services have to fudge some TLS stuff into code, which they're not necessarily wanting to do, or very good at, a bad model, let's decouple the mTLS service, let's call it, from your application and just have it as a commodity that you can throw into your infrastructure.
Gavin Henry 00:56:16 Perfect. And just to touch on persistent state again. Say we've deployed it with kubectl, the K-U-B-E-C-T-L command-line tool, the masters are figuring out what to do, we've got the five front-end Golang API bits, and we've gone for a three-node PostgreSQL setup. Now, the data that actually gets written to the database lives on what would, in a virtual machine or a physical server, be the hard disks. What do we decide about that? You know, how do we make sure that the next instance of Postgres that's spun up has access to that data?
Nigel Poulton 00:56:53 Gotcha. Okay. So you would configure Postgres, or whatever it is, and I'm not a Postgres person, but you would configure that into its own high-availability formation or configuration, and you would configure your web front end to talk to the live node, or the master, or however it works, the primary node. And Postgres will then take care of its own availability and things like that. The storage that backs that, increasingly going forward, you're going to use, I'm just going to call it, external storage. If you're on a cloud platform, that can be Google's persistent disks if you're on GCP, or it can be Elastic Block Store if you're on AWS, or it can be whatever their NFS offering is, because Kubernetes now has the ability to consume external storage systems. And look, that can be your NetApps and your EMCs and things, or it can be your cloud's own storage, and you have that exposed into Kubernetes, just like you would expose external storage area network, SAN, or NAS storage into virtual machines in the past. So it's decoupled from your server infrastructure, meaning that no matter which node you spin an instance up on, that external storage can be connected into that node.
Gavin Henry 00:58:12 So it's another abstraction layer that you can just document in the YAML, and it will take care of it.
Nigel Poulton 00:58:19 Abstraction is one of the key words of modern applications and modern infrastructure going forward. So Kubernetes has a model now called the Container Storage Interface, the CSI, which allows you to take external storage systems, have the life cycle of the volumes they create be managed independently of your applications, and have those storage resources connected up to any application running on any node in your cluster. Dead easy.
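From the application's side, that abstraction shows up as a persistent volume claim; the CSI driver behind the storage class does the provisioning. A hypothetical sketch (the storage class name depends on your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard     # hypothetical; maps to a CSI driver (EBS, GCE PD, NFS, ...)
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres-0
spec:
  containers:
  - name: postgres
    image: postgres:13
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: postgres-data   # the same data follows the claim, whichever node runs the pod
```

Because the claim outlives any individual pod, a replacement Postgres pod scheduled onto a different node reconnects to the same volume, which answers the question about the next instance seeing the old data.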
Gavin Henry 00:58:46 It just helps you grow nicely as well, because as things change, you might need to redeploy stuff, and in the traditional world you're like, okay, we're going to have to touch storage, touch the network, all this type of stuff. But because it's abstracted, it makes it easy to not worry about that right now. Okay. And we're getting close to the end. Is there anything you want to mention about the network requirements for Kubernetes?
Nigel Poulton 00:59:10 Yeah, so I'll take two things, right? From a network perspective, when you deploy Kubernetes, for the most part you will create a pod network on your cluster, and it's a VXLAN-based overlay network that is extended across all Kubernetes nodes in the cluster. And by default it is wide open, right? So any application you deploy to Kubernetes goes on this pod network, and it's a free-for-all: they can all talk to each other, which is great. I mean, it makes life easy, right?
Gavin Henry 00:59:44 Yeah. And a VXLAN is a virtual LAN. Like, if you imagine a physical switch with 48 ports on it, anyone that plugs into that switch can listen to all the traffic, because it's broadcast around. So you really want to put some segregation between each port.
Nigel Poulton 01:00:01 Well, it lets you take hosts, or nodes, that might be on different physical underlying networks, and it lets you create a virtual network that tunnels through those. But it gives you effectively a wide-open network that all of your applications can talk over. So you want to start then looking at network policies, to start saying, we're going to start locking things down a bit. Because while a wide-open network is great when you're developing and testing, you know, it's one less hurdle to jump over when you're building something to play with, in production it's not what you want. So you need to start securing things, and network policies are a great place to start. The other thing from a security perspective I want to talk about, and it's not mTLS related, is the fact that pods, or containers, you might be running 50 on a node.
Nigel Poulton 01:00:47 They all share that node's operating system kernel. So if one of them is compromised and gets access to the host kernel, then all 50 containers on that node are at risk, and that's scary stuff in production, right? So I just want people to be aware, and this is a little bit advanced, okay, but as you're starting to deploy to production, you need to be looking at some of the other technologies that Kubernetes supports to start securing access to your host's kernel. And there are things like AppArmor and SELinux, if you know Ubuntu and Red Hat Linux. Okay? You can also do capability dropping: drop the capabilities of the root user in your containers so that they don't have full root access in your container. Then you can also filter the syscalls, we're a bit low-level here, but the requests that the applications in your containers can make to the host's kernel. And once you start locking stuff like this down, then it can be super secure. There's just an amount of work and an amount of testing that you must be prepared to do in order to really fully secure something in production.
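The two hardening steps discussed here, locking down the pod network and restricting what a container can ask of the host kernel, can both be sketched in YAML (names and image are hypothetical):

```yaml
# Default-deny: no pod in this namespace accepts any ingress traffic
# until explicit allow rules are added alongside it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}                # applies to every pod in the namespace
  policyTypes: ["Ingress"]
---
apiVersion: v1
kind: Pod
metadata:
  name: locked-down-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]            # capability dropping: strip root's extra powers
      seccompProfile:
        type: RuntimeDefault     # filter syscalls with the runtime's default seccomp profile
```

Note that network policies only take effect if the cluster's network plugin enforces them, and each deny rule needs matching allow rules for legitimate traffic, hence the testing effort Poulton warns about.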
Gavin Henry 01:01:58 Yeah. So you can go as deep as you need to there. I’ll need to wrap us up now, Nigel; we could do another couple of hours, I’m sure. Robert, the show editor, did a nice show with Rob Skillington on high cardinality alerting and monitoring. Obviously you want to monitor your app and check that it’s healthy and everything. Would I be right in saying the key bits to monitor on Kubernetes would be the masters and the nodes, or is there something off the shelf we can look at for that?
Nigel Poulton 01:02:31 So you definitely want to be monitoring your cluster; I would just refer to that as infrastructure, that’s your managers and your nodes. And then of course there’s monitoring applications, which needs application intelligence. Security and monitoring are the two least interesting topics in the world for me, so I’m not an expert at them, but I will say that Prometheus is a great tool to start with. I’m not a Prometheus expert, but it’s a CNCF project that’s massively used in the Kubernetes ecosystem. So you get the fact that it’s a well-developed project, that it’s being constantly worked on, and that what it can do is constantly being expanded. So in the world of monitoring, Prometheus is a great place to start.
Gavin Henry 01:03:17 Perfect. Thank you. Yeah, we’ve done a show on Prometheus, so I’ll put that in the links. So I’m going to wrap up now. Obviously Kubernetes is a very flexible beast, but it doesn’t have to be too scary. If there was one thing software engineers should remember from our show, what would you like that to be?
Nigel Poulton 01:03:34 Kubernetes, for all intents and purposes, looks like it is nailed on as the future of infrastructure for anything that wants to be cloud native or running in the cloud. But if it’s too complex for you and too much to bite off, take a look at Docker Swarm. Okay?
Gavin Henry 01:03:52 Okay. That’d be a good idea for another show, actually. Perfect. Was there anything we missed that you would have liked me to mention, or that you thought should be mentioned?
Nigel Poulton 01:04:00 I don’t think so. I mean, like you say, we could go on.
Gavin Henry 01:04:05 Yeah. I mean, the title is Kubernetes Fundamentals, so I think we’ve done a good job. Yeah, I think that’s it. Where can people find more about you? You’ve got your Twitter account, but, you know, what’s the best way to reach out?
Nigel Poulton 01:04:17 Pretty much @nigelpoulton everywhere: Twitter, GitHub, LinkedIn. I’m more than happy to engage with people. I mean, I can’t be free technical support for hard questions because I’m super busy, but, you know, I make a living out of helping make Kubernetes easier for people, so by all means, please reach out @nigelpoulton. For all of my stuff, my books, my video training courses, to book me for a company live stream and things like that, just go to nigelpoulton.com. I’m going to finish by saying it’s not hubris that it’s nigelpoulton.com or anything like that. It’s just, I used to be a Windows guy, and then I was a Linux guy, and then I was a storage guy, and I’ve been a network guy. Now I’m a containers guy. I don’t want to be, like, Kubernetes Nigel somewhere, because in five years’ time I might be something different. So I just keep it easy; I just go with my name everywhere that I am. I’m done.
Gavin Henry 01:05:09 And so what you’re saying is Kubernetes isn’t going to be here in five years, right?
Nigel Poulton 01:05:12 No, what I’m saying is, me as an individual, I’m a techaholic, right? And the best thing about technology for me is the mystery of it, and I love learning things. So once Kubernetes becomes super stable and no longer of interest to me, then I’ll be onto the next interesting, groundbreaking thing.
Gavin Henry 01:05:33 Yeah, I get you. Perfect. I’d like to thank you for coming on the show; it’s been a real pleasure. This is Gavin Henry for Software Engineering Radio. Thank you for listening.
[End of Audio]
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected].