Yechezkel Rabinovich

SE Radio 591: Yechezkel Rabinovich on Kubernetes Observability

Yechezkel Rabinovich, CTO of Groundcover, speaks with host Philip Winston about observability and eBPF as it applies to Kubernetes. Rabinovich was previously the chief architect at the healthcare security company CyberMDX and spent eight years in the cyber security division of the Israeli Prime Minister’s Office. This episode explores the three pillars of observability, extending the Linux kernel with eBPF, the basics of Kubernetes, and how Groundcover uses eBPF as the basis for its observability platform.

Show Notes

Related Episodes

Related IEEE


Transcript brought to you by IEEE Software magazine and IEEE Computer Society. This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number.

Philip Winston 00:00:18 Welcome to Software Engineering Radio. My guest is the CTO and co-founder at Groundcover, which provides full-stack observability for Kubernetes. He was previously the chief architect at the healthcare security company CyberMDX and spent eight years in the cybersecurity division of the Israeli Prime Minister’s Office. He holds degrees in electrical engineering, physics, and biomedical engineering. First, I’d like to ask you to pronounce your name and if there’s anything you’d like to add to your bio.

Yechezkel Rabinovich 00:00:49 Hey, so it’s Rabinovich, but you can call me Chaz. It’ll be easier. Yeah. I’ve mainly worked around Linux, embedded software, and distributed systems in the last 10 years or so.

Philip Winston 00:01:01 Great. This episode is going to be at the intersection of three technologies: observability, eBPF, and Kubernetes. I’m not going to go into too much depth on any one of these, so I’m going to list three episodes here that covered them in more detail: Episode 455, Jamie Riedesel on Software Telemetry; Episode 446, Nigel Poulton on Kubernetes Fundamentals; and Episode 445, Thomas Graf on eBPF. Let’s start with observability. What is observability?

Yechezkel Rabinovich 00:01:34 So it’s common to think about observability in three pillars of data. One is logging, the text messages we’re creating from our applications. The other one is metrics, which are basically counters and gauges that our applications are creating. Think about the speed of a car; that’s a gauge, right? And so is the amount of fuel you have left in your car. And the third one is tracing, which is samples of data that represent interactions between two services. So if you’re making an HTTP request, that will be a trace, or a span, which is part of a trace. And observability is the ability to query all those three in a very meaningful way to troubleshoot or understand the state of an application. It could be for security, it could be for performance investigations. Basically everything that developers are interested in.
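As a toy sketch of those three pillars, with made-up field names rather than any particular vendor's schema, the data shapes might look like:

```python
# Toy models of the three observability pillars (field names are illustrative).
log = {"timestamp": "2024-01-01T00:00:00Z", "level": "ERROR",
       "message": "payment service timed out"}

# A counter only goes up; a gauge can go up or down (speed, fuel level).
counter = {"name": "requests_total", "type": "counter", "value": 1042}
gauge = {"name": "fuel_level_percent", "type": "gauge", "value": 63.5}

# A span records one interaction (e.g., an HTTP request); spans sharing
# a trace_id together form a trace.
span = {"trace_id": "abc123", "span_id": "s1", "service": "checkout",
        "operation": "HTTP GET /cart", "duration_ms": 12.4}
```

Querying across all three, joined on time ranges and shared labels, is what turns the raw pillars into observability.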

Philip Winston 00:02:24 From my experience, observability can make a huge difference to sort of the quality of life of the developer trying to get to the bottom of some problem. Can you give an example of a system that didn’t have good observability that was a struggle to work with, and contrast that with something that had sort of full observability and how that accelerated the debugging or the investigation?

Yechezkel Rabinovich 00:02:49 Oh yeah, actually I have a good example; I think it was the trigger for me to start Groundcover. We had a problem with our platform where customers experienced data loss, and we had a complex pipeline of data: 30 microservices all talking to each other with message queues and Redis and API calls and everything you can think of from a modern application. So, you know, where do you start, right? We got this lead from our customers saying, you know, we are missing data, but where do you start? At that time we had a lot of logs that we paid a lot for, but you can’t really read 20 or 30 million log lines and understand what’s going on. So we decided to instrument our application with a lot of counters that would represent the pipeline of the system.

Yechezkel Rabinovich 00:03:45 It took us somewhere around two months to instrument. We had somewhere around 1 million counters, and then we could finally detect where the leakage was and start solving it. But this process was a nightmare, right? We worked two months just to see where the problem was. On the other hand, when we started Groundcover, we knew that we had to set an example for how you observe and monitor production. So we built our entire stack from day one to have performance monitors and meaningful logs and traces that would help us troubleshoot and investigate any performance issue. It feels like the difference between using a paper map and a navigation app; that’s the difference for me. Something guided: you don’t need to search, you just see answers.
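The idea of counters at each pipeline stage can be sketched in a few lines. This is a hypothetical toy (the stage names and numbers are invented), but it shows why per-stage counters beat reading millions of log lines: the diff between adjacent stages points straight at the leak.

```python
# Invented per-stage counters for a data pipeline; each value is the number
# of records that made it through that stage.
counters = {"ingested": 1_000_000, "parsed": 1_000_000,
            "enriched": 999_200, "stored": 999_200}

# Compare each stage to the next; any drop pinpoints where records vanish.
stages = list(counters)
leaks = []
for upstream, downstream in zip(stages, stages[1:]):
    lost = counters[upstream] - counters[downstream]
    if lost > 0:
        leaks.append((upstream, downstream, lost))

print(leaks)  # the leak sits between "parsed" and "enriched"
```

In a real system these counters would be exported as metrics and diffed per time window, but the troubleshooting logic is the same.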

Philip Winston 00:04:36 Yeah, it’s a major difference. Let’s move on to eBPF. eBPF is a technology that’s used for many different purposes. We’re going to talk about observability, but let’s just talk a little bit about the technology first. So what is eBPF?

Yechezkel Rabinovich 00:04:53 eBPF is a technology, as you said, that allows you to dynamically change and adapt the Linux kernel. The first thing a lot of people who work with Linux might say is, okay, that was easy before: you could write a kernel module to extend the kernel. But the reality is that writing a kernel module to extend the kernel is a very complicated task, and it could also be very harmful if you’re making a mistake. eBPF is a technology that allows you to extend the kernel in a very safe way, safe in two important aspects. One is the performance aspect: the eBPF engine that runs your program guarantees that your program runs very efficiently and that you cannot harm the application itself. And the second is that you are running in a read-only mode. Of course there are different eBPF program types, and I’m not sure we should get into that, but in terms of observability, you can run programs in read-only mode that are guaranteed by the kernel not to change the state of the application. Those two allow you to develop your modules very efficiently and very fast.

Philip Winston 00:06:09 So some people say eBPF is to the kernel what JavaScript is to the browser. Do you feel that’s a good analogy, or how would you phrase that?

Yechezkel Rabinovich 00:06:19 Yeah, I’ve heard that many times before. I think it’s a great analogy, because when you write JavaScript, you don’t really worry about whether you’re going to run on an Intel machine or a Mac, or about the amount of resources you’re going to have. You’re running in a sandbox, and the browser guarantees that your code is going to run. And I think that’s the magic of eBPF: you can run your program without worrying about affecting Linux itself or the hardware itself. And this is something very, very powerful in terms of speed of development, because developing for the kernel today is almost as slow as developing hardware. So I really do agree with this analogy with JavaScript, which basically sped up web development. eBPF does that for the kernel. Yeah, definitely.

Philip Winston 00:07:11 So that’s interesting. Which languages can you write eBPF in? And are you saying that whatever language you use, it compiles down to a bytecode that works on any architecture, or do you have to compile it again for x86 and ARM? And I guess, which languages do you prefer to use for eBPF?

Yechezkel Rabinovich 00:07:33 So as you said, eBPF is a bytecode itself; there’s a VM running your instructions and translating them to the proper instruction set. So you don’t need to worry about running on x86 or anything else; the kernel will translate it. And I know some people write in different high-level languages that compile to eBPF, but I think the majority of software is written in C, because that’s the easiest way to write an eBPF program. And don’t get me wrong, learning how to write C code that can translate into eBPF is a hard process, because the eBPF verifier is very picky in choosing what programs you can run. So you kind of need to learn how to work with the compiler to verify that your programs are safe to run and very efficient. So most people write in C, which translates into eBPF bytecode and runs on this VM with an abstraction of the hardware itself.

Philip Winston 00:08:39 So how about the eBPF verifier? Does that run when you compile the code, or does it run when the code is loaded into the kernel? And can you give an example where you did have to kind of fight with the verifier? What did you have to do to your code to appease the verifier, or to make it load your code?

Yechezkel Rabinovich 00:08:59 So the verifier runs when you try to load the program, and it also depends on the kernel version, because as the kernel gets more sophisticated with eBPF, the verifier gets smarter and smarter and allows you to do more sophisticated things. Again, the purpose of the verifier is to make sure your program is safe. In terms of things we had to do, you don’t want to know; it can be a hassle. Even sometimes just copying data from A to B and making sure you don’t have an out-of-bounds copy, you really need to work at making the verifier happy. Some people at Groundcover call it knowing how to dance with the verifier. There are a lot of examples, and every eBPF programmer probably has these kinds of stories every day. It’s a great technology, but still, in order to make it safe and to make sure you can process so much data in a safe manner, the verifier is picky. You need to know how to please it.

Philip Winston 00:09:57 So a big part of eBPF is hooks, and I read that you can hook system calls, function entry and exit, kernel tracepoints, and network events. Can you give kind of an overview of the different types of hooks and indicate a little bit what they’re used for?

Yechezkel Rabinovich 00:10:16 There are two major hooks that you can use, and they’re separated by the space they’re in. If you want to trace kernel functions, you’ll use kprobes, and if you need some kind of user-space information, you’ll use uprobes. So why would you care? Kprobes are a lot cheaper, because you’re already running in kernel space, so you don’t need to worry about context switches and interrupting your application. So if the information you’re after can be retrieved from the kernel, you probably want to do that, and there are a lot of books about that; I think Brendan Gregg wrote a lot about it. But sometimes your information cannot be retrieved from kprobes. I’ll give you a simple example. Imagine you want to trace SSL data: SSL for most runtimes is user-space encryption, and you have to get the information from uprobes. So you’ll use uprobes on the libssl library. Of course it’ll cost you a little bit more, because it’s a user-space application and now you kind of have to interact with it, but it’s the only way to get the data, so you need to use it wisely.

Philip Winston 00:11:33 How about network events? I think eBPF originally stood for extended Berkeley Packet Filter, although I read on the eBPF homepage that they’re trying to kind of drop that acronym and just call it eBPF itself. But I imagine network events are still a primary domain for eBPF. I’ve read about dropping packets; it wasn’t immediately obvious to me why you’d want to drop packets at the eBPF level. Can you give an example of filtering, which I guess is the original use?

Yechezkel Rabinovich 00:12:07 Yeah, sure. Imagine you’re writing a firewall and you want it to run at high speed and enforce policies, for instance that you cannot access an IP external to the subnet. eBPF can do that with specific program types, so you can load specific programs that can also drop packets. I think the majority of these use cases are probably security and high-throughput processing. At Groundcover we do not use these kinds of programs, because we don’t want to change the flow of information; the opposite, actually. We’re trying to extract as much information as we can without interfering, and to guarantee that. So that’s the common use case for dropping packets. Also, in the last few releases, you can do signaling from the kernel; that’s also really powerful from a security perspective. You can signal a process, even sending termination signals to processes if they’re doing something wrong. So I think security is a major use case for these kinds of programs. I’m sure Thomas probably talked a lot about that, and the Cilium project uses all those kinds of programs.

Philip Winston 00:13:17 So to be clear, with the uprobes, even if you’re in a read-only mode, that is a lot of access in the sense that you can look at the memory of any running user process. That’s pretty alien to me as an application developer. I just don’t think about being able to look inside processes, but that’s true. So I guess you really want to make sure no one has loaded an eBPF program that you don’t know about.

Yechezkel Rabinovich 00:13:44 Yeah, so you need to have the right permissions in order to load eBPF programs. You need to have specific capabilities from the Linux perspective; you probably need an elevated user or a privileged container if you’re running in a Kubernetes environment. So yes, definitely. It’s amazing technology. When you get a lot of information, you need to use it wisely. So when you’re running these kinds of programs, you need to make sure you understand what they’re doing and who’s using them, and why.

Philip Winston 00:14:14 So maybe just two more about eBPF, and then we’ll move on to Kubernetes, and then on to observability with Kubernetes and Groundcover. So there are eBPF helper functions and there are eBPF maps. I’m just wondering, how do those two relate, and in particular, how do eBPF maps work? It sounds, again, unusual to an application developer, so it’d be interesting to hear about that.

Yechezkel Rabinovich 00:14:42 So in order to write programs, we need to have some kind of data structures. We’re actually pretty used to them as application developers: a list or an array or a map, those are the kinds of data structures that we’re all familiar with and love and need in order to write our applications. Those are the same for eBPF, but because eBPF needs to make sure we do not access arbitrary memory, the data structures that eBPF programs need are created by the kernel. So you work with functions that give you the equivalent features of the data structures that you have in your application. Maps, arrays, TTL caches, or an LRU cache: all of those data structures exist in the eBPF ecosystem, and we need them in order to write efficient and meaningful applications.
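To make the LRU idea concrete, here is a toy Python stand-in for the eviction behavior of an LRU map like eBPF's `BPF_MAP_TYPE_LRU_HASH`: when the map hits its capacity, the least recently used entry is evicted. This is an analogy, not real eBPF code; in the kernel, the map is created and managed for you, and you access it through helper functions.

```python
from collections import OrderedDict

class LRUMap:
    """Toy LRU map: capacity-bounded, evicts the least recently used key."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.data = OrderedDict()  # insertion order doubles as recency order

    def update(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)  # writing refreshes recency
        self.data[key] = value
        if len(self.data) > self.max_entries:
            self.data.popitem(last=False)  # evict least recently used

    def lookup(self, key):
        if key in self.data:
            self.data.move_to_end(key)  # reading refreshes recency too
            return self.data[key]
        return None
```

For example, with `max_entries=2`, after inserting `a` and `b`, reading `a`, and then inserting `c`, the stale `b` is the one evicted.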

Philip Winston 00:15:40 So I guess one of the things the verifier is probably doing is limiting the amount of CPU time that an eBPF program can have, and you’re talking about limiting memory access too. What are those limits? Are they in milliseconds, or how does that get measured?

Yechezkel Rabinovich 00:15:59 So it’s configurable, but I think milliseconds are out of the healthy zone. When you write eBPF programs, we’re usually talking about hundreds of nanoseconds; that would be a good direction. And it also changes with the kernel version itself. The community is trying to increase the allowed complexity of eBPF programs because of the demand: a lot of products are trying to push some of their application logic down into eBPF, so they’re also trying to increase the complexity and the number of instructions that an eBPF program can run. But it’ll be around microseconds, not milliseconds.

Philip Winston 00:16:37 Wow, that’s interesting. Let’s move on to Kubernetes. Kubernetes is a huge topic and we are only going to touch on a few elements to sort of set the scene for the rest of the episode. But let’s start with what is Kubernetes?

Yechezkel Rabinovich 00:16:52 So that’s a good question, because a lot of people describe it differently. Some will tell you it’s an orchestration system for containers, and some, me personally included, think it’s more than that. It’s a modern operating system, because you have storage, network, scheduling, all the primitives that we used to know from other operating systems, but in a way that’s native to containers and to the cloud. So to me it’s a cloud-native operating system that allows you to run and orchestrate containers and manage the resources that each needs.

Philip Winston 00:17:29 I like the term cloud-native operating system. When I first moved into cloud development, I did feel that all of the individual systems reflected operating system primitives that we had on a single machine. And so I think that’s a good way to think about it. How about the relationship between Kubernetes and microservices? If you’re using Kubernetes, does that imply you are using microservices, or is Kubernetes broader than that?

Yechezkel Rabinovich 00:17:56 Not necessarily, but if you’re using Kubernetes, using microservices will become much easier, because microservices require some kind of orchestration. For instance, networking: every microservice needs to be able to reach and find the resources it needs from the others. Kubernetes will take care of that. What do you do when you upgrade? How do you upgrade? What is your upgrade strategy? Kubernetes has primitives for that, to maintain the relationship between microservices. So you don’t have to use microservices when you are using Kubernetes, but it will be easier, and Kubernetes will probably be a pretty good decision if you are heavily using microservices.

Philip Winston 00:18:43 So I think a big part of Kubernetes is deployment, in the sense of running the containers or systems that you want with the configuration that you want. Is it true that it’s mostly about deployment? Or once your application or your backend is up and running, what role does Kubernetes play, sort of moment to moment?

Yechezkel Rabinovich 00:19:04 So to me, the promise of Kubernetes is the reconciliation loop. As developers, we know that things can go wrong: something can shut down, some machines will break, some storage will be corrupted. And one of the fundamentals of Kubernetes is the reconciliation loop. Basically what it means is we have the actual state and the desired state, and what Kubernetes promises is to do everything it can to bring the current state to the desired state. So if a process crashed because of, say, a network outage, Kubernetes will detect that and recreate it and make sure it’s healthy on another node or another network. And to me that’s what’s so great about Kubernetes. It doesn’t really say bad things won’t happen to your production; it’s all about high availability. It’s more like: things are going to happen, but we are going to do everything we can in order to fulfill the promise that you asked of us.
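The observe/diff/act pattern behind the reconciliation loop can be sketched in a few lines. This is a toy model, not a real Kubernetes controller (real controllers watch the API server and act on richer state), but the shape is the same: compare desired state to actual state and emit the actions that close the gap.

```python
def reconcile(desired_replicas, running_pods):
    """Return the (action, pod) pairs needed to reach the desired state.

    desired_replicas: how many pods the spec asks for.
    running_pods: names of pods currently observed running.
    """
    actions = []
    if len(running_pods) < desired_replicas:
        # Too few pods: create enough to reach the desired count.
        for i in range(desired_replicas - len(running_pods)):
            actions.append(("create", f"pod-{len(running_pods) + i}"))
    elif len(running_pods) > desired_replicas:
        # Too many pods: delete the surplus.
        for pod in running_pods[desired_replicas:]:
            actions.append(("delete", pod))
    return actions  # empty list means actual == desired, nothing to do
```

A controller runs this comparison continuously, so a crashed pod simply shows up as a deficit on the next pass and gets recreated.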

Philip Winston 00:20:09 So in Kubernetes there’s a sort of hierarchy between these elements: containers, pods, nodes, and clusters. Can you talk about these and mention which of these the eBPF program operates on, and just sort of how these different levels relate to observability?

Yechezkel Rabinovich 00:20:30 Clusters are an abstraction over a group of resources, a group of nodes if you want. Nodes are the machines, so that’s the physical representation of all your compute. So we take nodes, and now we need to schedule workloads: deployments or replica sets or whatever you’re using from the Kubernetes ecosystem. So now the Kubernetes control plane has those resources, the nodes, and the desired state, which will manifest as, say, ten pods that will hold two containers each, and it’ll start to allocate resources. The containers will run on those nodes, and if one node crashes, it’ll transfer them to another. And when you scale up your cluster, more nodes will spin up and more pods will scale up. So when you monitor these kinds of environments, you want to separate between two planes. One is the infrastructure.

Yechezkel Rabinovich 00:21:32 So we want to make sure all of the nodes are healthy, not overbooked with high memory or high CPU usage, and that the network is fine. A lot of those things can be easily achieved by eBPF, right? We just talked about eBPF being part of the kernel; each node has its own kernel, and therefore the eBPF program will cover that. And on top of those you’ll have the application itself. Even if your infrastructure is fine but your application is returning broken APIs, serving broken APIs to customers, that’s also not a good thing. But it’s important to separate between those two planes, because usually different people are responsible for them. You have the platform team or the infra team, or whoever is responsible for the infrastructure, and you have the R&D team that actually creates the application and is responsible for that. So it’s really important to have the ability to separate between those two. And actually, when you think about it, eBPF is also helping with that, because each and every event can be correlated to who is responsible for it. You can detect whether it’s a pod that’s triggering this API or it’s a node, whether it’s a kernel thing or a user-space thing. So you want both, and you want to be able to separate them in order to find the right people to help with an incident.

Philip Winston 00:22:58 I’m going to ask about Kubernetes events in a little bit, but first: Docker seems like the most predominant container system, but I read that Kubernetes is not necessarily bound to using Docker. Can you say from your experience what container systems you have seen, and do they pose a challenge for observability?

Yechezkel Rabinovich 00:23:22 Yeah, there are a lot of new containerization environments; I can’t really keep up. But at the end of the day, for Kubernetes it’s not about how you containerize, it’s about using containers. So from the Kubernetes perspective, it’s like plug and play, right? We talked before about scheduling pods; eventually it’s running a container, which has an image or manifest, and getting it scheduled in the Linux ecosystem. And from there it’s up to the container runtime layer how that resolves into Linux primitives. So from the Kubernetes perspective, it doesn’t really matter what container platform you use.

Philip Winston 00:24:05 I did see that Kubernetes allows you to plug in different systems depending on your needs. It sounds very configurable. How about networking? I think networking is a big aspect of Kubernetes. I imagine there’s communication going on between pods, but then also between nodes, and then also with the outside world. Can you talk just a little bit about Kubernetes networking? If I’m setting up, say, a new system, what are the different network paths that I might care about?

Yechezkel Rabinovich 00:24:36 You have a few objectives that you want to achieve with networking in Kubernetes. First, you want to be able to discover your other microservices, and remember that Kubernetes is a flexible system: one service can hold a hundred pods behind it at this moment, and tomorrow it’s going to have 400. So you need to keep track of those IPs: how can you reach the microservice that you’re after? And this process can be really expensive in terms of compute, finding the right IPs and maintaining them. eBPF is obviously one of the solutions for that, and Cilium is a great project that tries to push down all these translations and rewirings of packets to eBPF to do it in a very optimized way. That’s one challenge with Kubernetes. The other one is how you expose some services to the external world. You want to wire it up with cloud providers, you want to connect it to NLBs or ALBs, and you want everything to be native, to be as Kubernetes as you can, even though it’s actually interacting with a cloud provider, which is opinionated.

Yechezkel Rabinovich 00:25:47 So that’s another thing that cloud providers are doing. They’re using the pluggable ecosystem to join resources between Kubernetes and the cloud. So you can provision an ingress controller that will eventually wire to an AWS NLB, or the equivalent in GCP or Azure, and it’ll work out of the box in the Kubernetes-native way. That’s a pretty big one, because in the past you had to rewire your application in a proprietary way. But today, because Kubernetes is becoming the de facto standard, the cloud providers are doing it for you and making it as Kubernetes-native as possible.

Philip Winston 00:26:25 Let’s talk about APM in the context of Kubernetes. So I think that’s application performance monitoring. Can you give us a little overview of that?

Yechezkel Rabinovich 00:26:35 APM is a general name that represents taking all three pieces of observability and tying them together to make some meaningful insights. Specifically, APM is related to traces, because this pillar is a bit harder to get right: you need to either instrument your application, which could take a lot of time, or use some kind of eBPF sensor, which just recently became available. So APM is a general name for gathering traces and tying them together with metrics and logs.

Philip Winston 00:27:10 When you say traces, I think of a stack trace, which is a single program, function calling function calling function. Is that what you mean? Or is there sort of a broader type of trace that would involve multiple programs?

Yechezkel Rabinovich 00:27:23 It’s the extension of that. You’re right, a stack trace used to be the chain of events that happens: A calls B, which calls C. But then we started using microservices, so now A and B are no longer in the same process, and it’s really hard to get a stack trace for a few microservices talking to each other. So each and every call is called a span, and the chain of spans is called a trace, which traces this business transaction from A to Z. You can investigate it to see where the bottleneck is, where you got an error from, and everything you need in order to investigate an incident.
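A minimal sketch of that idea: spans from different services that share a trace ID form one trace, and sorting the trace by duration surfaces the bottleneck. The field names and services here are invented for illustration, not taken from any specific tracing library.

```python
# Invented spans from three services, all belonging to the same trace "t1".
spans = [
    {"trace_id": "t1", "service": "gateway",  "operation": "route request", "duration_ms": 20},
    {"trace_id": "t1", "service": "orders",   "operation": "load order",    "duration_ms": 40},
    {"trace_id": "t1", "service": "payments", "operation": "charge card",   "duration_ms": 190},
]

def bottleneck(spans, trace_id):
    """Return the service with the slowest span in the given trace."""
    trace = [s for s in spans if s["trace_id"] == trace_id]
    return max(trace, key=lambda s: s["duration_ms"])["service"]
```

Real tracing systems also record a parent span ID on each span so the trace can be reconstructed as a tree; this sketch only groups spans by trace ID.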

Philip Winston 00:28:04 So I guess the key with APM is that these are applications, user applications or backend applications, as opposed to the system itself. When you think of observability with Kubernetes, how does that divide in terms of what technologies are focused on the system versus the applications, or is it basically the same technology?

Yechezkel Rabinovich 00:28:26 So we do have those two planes we talked about earlier: the infrastructure plane, which is agnostic to the applications running on top of it. You want to make sure the infrastructure is healthy, that you have enough resources in order to fulfill all the requests that you get, to deploy those microservices, to do an update, and so on. So this is one plane that you want to monitor, and obviously if that’s broken, everything else is going to be broken. But still, even if that’s healthy and all the nodes are healthy, that doesn’t mean that your application is running smoothly. There could be network errors between applications, there could be wrong configuration, different passwords; it could be a lot of things. So the other plane is the application itself, and that’s harder to monitor, because in order to do that you need to either instrument each and every application (and you didn’t write the code for all of them; there are third parties you’re not aware of, and so on), which is really hard; or you can use eBPF as a sensor, which basically automatically detects all those traces and all those spans and applications, and gets all this data automatically, regardless of whether you instrumented the application.

Philip Winston 00:29:46 How about the term service level objectives? Is that basically the amount of uptime that your services have, or could there be other objectives?

Yechezkel Rabinovich 00:29:57 Yeah, so each and every platform will define what its SLOs are: what do you promise to your end users or customers? What are the standards that you are achieving when serving their applications? Usually we tend to see a latency promise. So, you know, P95 of all requests will be under 200 milliseconds, for instance; that’s a very common metric. And of course, I think the golden signal will be error rate: what is the error rate that we can guarantee to you, as an end user or customer, that our platform will provide? And measuring those is not an easy task. It sounds simple, but in order to measure those kinds of signals without being involved in each and every process, you either need to use some kind of eBPF sensor, or you need to have a dedicated team that will make sure that everything is implemented, all the metrics are instrumented, and check that there is no drift. Because sometimes we think we’re done, and then a new developer adds a new API and we forget about that. So now not only did we break our SLO, we don’t even know it. That’s not a nice situation to be in.
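The two SLOs mentioned here, an error-rate budget and a P95 latency target, can be checked over a window of requests with a few lines. This is a simplified sketch (the thresholds and the nearest-rank percentile are illustrative choices, not a standard from any SLO tool):

```python
def check_slo(requests, max_error_rate=0.01, p95_target_ms=200):
    """Check a window of requests against two illustrative SLOs.

    Each request is a dict with "status" (HTTP code) and "latency_ms".
    """
    errors = sum(1 for r in requests if r["status"] >= 500)
    error_rate = errors / len(requests)

    latencies = sorted(r["latency_ms"] for r in requests)
    # Simple nearest-rank style P95: the value 95% of the way up the sorted list.
    p95 = latencies[min(int(0.95 * len(latencies)), len(latencies) - 1)]

    return {"error_rate_ok": error_rate <= max_error_rate,
            "p95_ok": p95 <= p95_target_ms}
```

In production these windows would be computed continuously from telemetry rather than from an in-memory list, but the check itself is this simple once the data is reliably collected.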

Philip Winston 00:31:15 You mentioned P95. Can you explain what that is, and why you use metrics like that as opposed to, say, average, which maybe is a simpler way to put it?

Yechezkel Rabinovich 00:31:28 Yeah, so P95 is a way we measure a distribution. It is the latency that 95% of our API calls will be less than. Why is it important? We’re actually seeing a phenomenon where companies are focusing on P99 or P99.9, because it really depends on your SLO. Think: if you are a bank and you have, you know, a P99 of a hundred milliseconds, but then you have this anomaly that takes 30 seconds, one out of a million, you won’t notice it in the average, right? It wouldn’t change the average at all, because it’s one out of a million, but it could break a trade with a timeout. So those numbers can be very crucial. It depends on your industry and your specific needs, but we’re seeing more and more, especially in the gaming industry, in ad tech and FinTech, that those anomalies, those one-out-of-a-thousand cases, matter. And for those you need to measure P99 and P95, and not only the average.
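A quick numeric illustration of why the average hides the tail, using made-up latencies: one 30-second outlier among a thousand 100 ms calls barely moves the mean, but it appears as soon as the percentile is high enough to reach it.

```python
# 999 fast calls and one pathological 30-second call (invented numbers).
latencies = [100.0] * 999 + [30_000.0]

average = sum(latencies) / len(latencies)  # ~129.9 ms: looks almost fine
ordered = sorted(latencies)

def percentile(ordered, q):
    """Crude percentile: the sample sitting a fraction q up the sorted list."""
    return ordered[min(int(q * len(ordered)), len(ordered) - 1)]

p95 = percentile(ordered, 0.95)    # 100.0 ms: the outlier is still invisible
p999 = percentile(ordered, 0.999)  # 30000.0 ms: the outlier finally shows up
```

Production systems usually estimate percentiles from histograms or sketches rather than sorting raw samples, but the lesson is the same: pick a percentile at least as deep as the tail you care about.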

Philip Winston 00:32:37 That makes sense, that there are going to be a few exceptional events that are much slower than the rest. How about Kubernetes events? We mentioned those early on. Can you give specific examples of Kubernetes events? Are they related at all to eBPF hooks or events, or is it just a different topic?

Yechezkel Rabinovich 00:33:00 It’s a different type of event. Kubernetes is trying to expose some of its state in order to help us developers troubleshoot. So it generates events, normal events and warning events, just to give you a clue of what it’s doing and where the problems are. Those are actually really important, because you can find events such as ImagePullBackOff, for instance, which means Kubernetes is trying to fetch a container image and can’t find it. That will probably indicate a serious problem. It can also have events such as CrashLoopBackOff, which means your container is crashing and you probably need to check what’s going on. So those events can be meaningful, but they can also create a lot of noise. It’s really important to know that Kubernetes generates a lot of events all the time, because that’s how it works. So they can be really important, but you really need to understand, digest, and dissect what the key events are that you want to track.
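The "digest and dissect" step often amounts to filtering the event stream down to a shortlist of high-severity reasons. Here is a toy sketch; the event dicts only loosely mimic what `kubectl get events` shows, and the shortlist is an illustrative choice, not a Kubernetes-defined set.

```python
# Illustrative shortlist of reasons worth alerting on; everything else is noise
# for this sketch's purposes.
CRITICAL_REASONS = {"ImagePullBackOff", "CrashLoopBackOff", "FailedScheduling"}

# A mix of routine and problematic events (objects and reasons are made up).
events = [
    {"type": "Normal",  "reason": "Scheduled",        "object": "pod/api-1"},
    {"type": "Warning", "reason": "ImagePullBackOff", "object": "pod/api-2"},
    {"type": "Normal",  "reason": "Pulled",           "object": "pod/api-1"},
    {"type": "Warning", "reason": "CrashLoopBackOff", "object": "pod/worker-3"},
]

# Keep only the events that indicate something is genuinely wrong.
alerts = [e for e in events if e["reason"] in CRITICAL_REASONS]
```

In practice you would watch the events API with a Kubernetes client and apply the same kind of filter continuously, tuning the shortlist to your own failure modes.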

Philip Winston 00:34:02 You mentioned pulling an image. I did read about deployment changes such as images, config, and dependencies, and I didn't really think about that in the domain of observability. But I imagine with microservices, if you have a large system, there could be many different dependencies, and figuring out which ones were running when something bad happened is non-trivial. Can you say a little bit more about that? How are we tracking what is actually running at a specific point in time?

Yechezkel Rabinovich 00:34:35 You're definitely right, especially these days, when companies are trying to push more and more versions each day. It used to be one version a week, so you could basically estimate what was running at a given time. But today we're seeing companies pushing releases 10 times a day, maybe 20 times a day. So what we do at Groundcover, because we generate all that data, is tag it with those specific labels. Each and every piece of information, each and every trace or log or metric, you want to tag with the right container image or commit hash, in order to make the troubleshooting and investigation much more meaningful. Because that way you can see, oh, those broken APIs started with this commit; it wasn't broken before, and I can see it because it's all tagged. So it's really important to achieve that. You can obviously do that by adding labels to your instrumentation, so you can add a little bit of annotation on your deployments or in your code. It's really important to maintain that, in order to get a clear sense of what the version was, what the exact state of the code was at that point.
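The tagging idea can be shown with a tiny Go sketch. The record shape and label names here are hypothetical, chosen only to illustrate attaching version metadata to every signal so a regression can be pinned to a release:

```go
package main

import "fmt"

// LogRecord is a hypothetical enriched telemetry record: every signal
// carries the workload's version metadata.
type LogRecord struct {
	Message string
	Labels  map[string]string
}

// enrich tags a message with the deployment context it was emitted from.
func enrich(msg, image, commit, namespace string) LogRecord {
	return LogRecord{
		Message: msg,
		Labels: map[string]string{
			"container_image": image,
			"git_commit":      commit,
			"namespace":       namespace,
		},
	}
}

func main() {
	rec := enrich("checkout failed: timeout",
		"registry.local/checkout:1.42.0", // placeholder image tag
		"9f3ab12",                        // placeholder commit hash
		"prod")
	// Querying by git_commit now answers "did this break with that release?"
	fmt.Println(rec.Message, rec.Labels["git_commit"])
}
```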

Philip Winston 00:35:47 How about the Kubernetes API itself? Is this giving us information about all of these different things, or is the API specific to the cluster? How does that work?

Yechezkel Rabinovich 00:36:00 Those, to me, are the most important things, because if your control plane, you know, the infrastructure, doesn't really work, if your node is getting errors from the Kubernetes control plane, that's really bad and could be a sign that the system is going down completely. So there are those APIs which are really hard to observe, because they're not instrumented; the kubelet is not instrumented. So it's really hard to detect. It can be detected with logs, if you are monitoring your control plane logs, and you can do it with metrics, if you are gathering metrics from the control plane, which I advise doing, because those are pretty important.

Philip Winston 00:36:42 So are you talking about eBPF's ability to see these Kubernetes events? It raises the question, is Kubernetes aware of eBPF at all? Is there any sort of synergy between the two, or when you're writing eBPF, are you forced to kind of figure all that out yourself?

Yechezkel Rabinovich 00:36:59 So when you write an eBPF program, you need to sort out the Kubernetes relationship by yourself. You're using Linux primitives like namespaces and cgroups in order to actually get to the exact container, exact code, exact deployment. And it's really interesting and challenging work, but it's a must if you want to translate the kernel layer into the application layer. And with eBPF, I think what's great about it is that it sees the data from the control plane as if it were from an application, because at the kernel level it's just an API. So when you translate it to the Kubernetes layer, you can actually detect, oh, this API call is from the kubelet, so it must be a lot more important than others. You can do that, but you have to navigate through the Kubernetes layers in order to find out which part is responsible for this API.

Philip Winston 00:37:54 You said kubelet. That's the utility, I guess, you use to interact with Kubernetes? Or what was that phrase?

Yechezkel Rabinovich 00:38:01 So the kubelet is the process responsible for managing the node. Each node in the cluster has a kubelet on it, which is responsible for scheduling containers and reporting the status of the deployments back to, you know, the control plane. That's the Kubernetes unit inside the node.

Philip Winston 00:38:19 One more question, and then we'll get into Groundcover itself, talking about that in detail. Prometheus and Grafana seem to play a role in observability. I don't think it's specific to Kubernetes, but can you just kind of say what those two are, and maybe say what their role is relative to Groundcover, and whether they are still used or not?

Yechezkel Rabinovich 00:38:40 Yeah, so Prometheus is a time series database. It's a lot of things, actually: it's a time series database, it's a format to transfer metrics between endpoints, and it's also a query language. I think Prometheus is one of the biggest building blocks of time series databases. It's also open source, and so is Grafana. Grafana is an open-source project built, I think, originally around metrics and specifically Prometheus. It's a visualization tool, so you can visualize metrics fetched from Prometheus in Grafana, with, you know, all the visualizations that we know, like trend lines and bar charts and pie charts and gauges, in order to make sense of all those metrics. Nowadays Prometheus has competitors like VictoriaMetrics, which I think is the next-generation time series database, but it still speaks the same protocol and query language as Prometheus. So you can still use the Prometheus query language to fetch from other time series databases, because they all implement the same protocol. It's really common to see Prometheus deployments in Kubernetes, because a lot of the metrics are stored there, and it's the first thing to set up when you deploy a Kubernetes cluster and you don't exactly have a plan for how you want to monitor it.

Philip Winston 00:40:02 So let's get more into Groundcover, and we can come back to Prometheus and Grafana and their relationship to Groundcover in sort of daily usage. At the beginning of the episode you mentioned some of your motivation for starting Groundcover. Can you get a little more into the story of how it was started? How did you get your initial customers? What technology did you have at the beginning? Did you start with eBPF? Just kind of give us a sense of how things got ramped up.

Yechezkel Rabinovich 00:40:31 So we started with a problem. It was right after the story I told: I had spent two or three months with the team implementing all those metrics and all those traces and establishing dashboards, and, you know, I was really proud of the team. It was really hard work, and it worked really well. And then the CTO came and said that with the vendor we used, the observability cost had increased by five times. So he asked us to remove all those metrics and all those traces, and for me, I felt like it was a failure for me as a chief architect that I could not understand the cost of observability. At that time I was trying to understand, is it me, or is there something broken in the ecosystem? And from there, Shahar and I, we're good friends, we sat together, and he had similar experiences. So we felt like, okay, maybe we're onto something, and we talked to more and more VPs of R&D and chief architects, and we saw this was a common problem.

Yechezkel Rabinovich 00:41:34 So we saw there are two major problems. One is that companies don't have enough information, and they're working really hard to get more and more of it, spending time on that and not on deploying new features to customers. And the second one is that the companies that do have the information cannot pay the price of holding it. So they're actually reducing some of the information in specific environments, or, you know, they don't deploy it on development environments, and then they miss bugs or anomalies in their dev and staging environments. We realized we needed to do something different, to solve it better. And we kind of walked around with this idea for a few months, you know, wondering how we could do it in a frictionless way. We realized we had to do observability in a frictionless way: no friction for onboarding, no friction with payments.

Yechezkel Rabinovich 00:42:26 And then we encountered eBPF, and we said, okay, the first thing is, if we use eBPF, the onboarding is going to take five minutes. That's done. We get a lot of information, even more information than we had in the past, but now we actually have a bigger problem: now the amount of data is even bigger. And we understood that you have to leverage stream processing. So you can think about stream processing as pushing down some questions, common questions that you want to ask, to the source of the information. So you don't need to ship all this data from A to B just in order to ask the question. You can ask the question continuously where the data originates, and save a summary of it. That way you can reduce the volume of data that you need to store, and reduce costs with that. So once we realized those two things could be solved, we started Groundcover, which is the first frictionless observability platform.
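The stream-processing idea, answering the common questions at the source and shipping only summaries, can be sketched in Go. The types and field names are illustrative, not Groundcover's actual pipeline:

```go
package main

import "fmt"

// Span is one observed request. In Groundcover's model these would come
// from eBPF capture, but the aggregation idea works for any source.
type Span struct {
	Endpoint  string
	LatencyMs float64
	Error     bool
}

// Summary is what actually leaves the node: a per-endpoint rollup
// instead of every raw span.
type Summary struct {
	Count  int
	Errors int
	SumMs  float64
	MaxMs  float64
}

// aggregate answers "how many, how slow, how broken" per endpoint at the
// source, so only the small summaries need to be shipped and stored.
func aggregate(spans []Span) map[string]*Summary {
	out := map[string]*Summary{}
	for _, s := range spans {
		sum := out[s.Endpoint]
		if sum == nil {
			sum = &Summary{}
			out[s.Endpoint] = sum
		}
		sum.Count++
		if s.Error {
			sum.Errors++
		}
		sum.SumMs += s.LatencyMs
		if s.LatencyMs > sum.MaxMs {
			sum.MaxMs = s.LatencyMs
		}
	}
	return out
}

func main() {
	spans := []Span{
		{"/checkout", 120, false},
		{"/checkout", 95, false},
		{"/checkout", 30000, true}, // the one-off anomaly survives as MaxMs
		{"/health", 2, false},
	}
	for ep, s := range aggregate(spans) {
		fmt.Printf("%s count=%d errors=%d avg=%.0fms max=%.0fms\n",
			ep, s.Count, s.Errors, s.SumMs/float64(s.Count), s.MaxMs)
	}
}
```

Note the trade-off: the rollup keeps count, error rate, average, and worst case, but drops per-request detail, which is why choosing which questions to push down matters.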

Philip Winston 00:43:29 So I think one of the features of Groundcover is doing the processing in the nodes, maybe, instead of sending it back to a backend at your company. Is that different from how other observability tools work?

Yechezkel Rabinovich 00:43:41 Exactly. So the first part is the stream processing. Instead of sending all the data somewhere else and then querying it, we're doing the processing at the node level and reducing the cost of sending it elsewhere. Even in terms of egress, which means sending data outside of your cluster, it could be really, really painful to send all that data. We're talking about terabytes a day; that's the volume we're talking about in modern companies, and it could be very, very expensive. But you touched on another point, which I think is something fundamental in the observability world: vendors try to convince you to send as much data as you can. They charge by volume, so they want you to send as much data as you can, and on the other hand, the customer wants to save money, so they want to send as little as possible.

Yechezkel Rabinovich 00:44:37 So you have this paradox between the vendor trying to get you to send as much as possible and the customer trying to, you know, filter as much as possible, and we realized modern observability vendors should be on the same page as the customer. That's why we decided early on that we're not going to charge by volume. It puts us on the same page; it's a kind of alignment between us and the customer, where we do not encourage you to send data that you don't need, and we agree that you're going to have all the data you need and not data that you don't. I think it's important to be aligned with your customer's needs.

Philip Winston 00:45:19 When talking about eBPF, we said it has very tight time limits on how much processing it can do. I'm guessing this stream processing cannot run inside of eBPF. So are we saying eBPF collects the data, and then you have a different process running on each node that filters the data, or monitors the data, I guess?

Yechezkel Rabinovich 00:45:42 Yeah, that's correct. That's the trade-off between what to do inline inside the eBPF programs, you know, how much we can push the boundary of doing everything we can in eBPF, because it will be much more efficient, and how much we offload to do out of band, in another process that takes this information. And I think we are constantly trying to understand where the boundaries are. We do a lot inside eBPF to make sure we are sending only the data we need, in order to reduce the total computation cost. The big important thing to remember is that the expensive part is sending data off the node itself, because that will involve serialization, deserialization, network traffic between availability zones, and so on, and that will be expensive. So as long as you push filters down as low as you can, to the eBPF layer, and on top of that do the out-of-band stream processing without interfering with the application, I think that's the way to go, and that's the way to make it as frictionless as possible in terms of affecting the application.

Philip Winston 00:46:50 So if I'm using Groundcover, I could imagine three situations. One, I'm using stock eBPF programs from you, or you're installing them for me. Two is I'm using your eBPF monitoring, but I've configured it in some way; I've told it what to look for. And three, I don't know if this ever happens, but as a customer I actually write my own eBPF. I'm just wondering, with Groundcover, which of those is most common, or which is possible?

Yechezkel Rabinovich 00:47:20 The first and second options are pretty common. We see customers, you know, just trying to make it run. We have this number, five minutes, at Groundcover: it should be running in under five minutes, no matter the cluster size. So we're doing everything we can to make it as fast as we can. No unnecessary questions; it just works. But obviously more advanced users can configure the eBPF programs and tracing. So say you don't want to capture, you know, the Redis protocol, for instance; some customers are not interested in that, and some customers are really interested in it. So it's a matter of preference. We allow you to write filters in order to, you know, drop some namespaces, some workloads, some protocols. That would be a more advanced usage of Groundcover. We still do not support ad hoc eBPF programs, and, to be honest, I have not gotten any requests for that, because I think most people don't really want to write eBPF programs. They just want it to work. But it could be an interesting idea to make it dynamic, so maybe adding eBPF programs dynamically, on demand. Definitely something to think about.

Philip Winston 00:48:30 You mentioned the five-minute setup period, I guess for the trial. What's actually going on there? How is the eBPF program loaded on all of my nodes if I have a lot of nodes running? What's distributing the actual program to those nodes?

Yechezkel Rabinovich 00:48:46 Well, it's not a trial, because your first cluster is free forever. But what we do is use the DaemonSet primitive, which basically tells Kubernetes that you want to run this pod on each and every node. So we kind of ask Kubernetes to take on the responsibility of managing it for us, because what we want is to have our own eBPF programs running on each node. So we use a DaemonSet for the sensor, and from there we guarantee that each and every node has this eBPF program running and loaded into the kernel.

Philip Winston 00:49:19 You're saying DaemonSet. Is that a Kubernetes API, or is that something...

Yechezkel Rabinovich 00:49:24 Yeah, so that's a primitive from Kubernetes, a primitive that makes sure you get a copy of this application on each and every node. The use cases for that would be monitoring solutions, security solutions, maybe infrastructure. So imagine you want a DNS server on each and every node, or you want a CNI plugin, or any network-related or storage-related drivers on each and every node. It's like the daemons on our old operating systems.
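A generic sketch of what deploying such a sensor as a DaemonSet can look like; the names and image are placeholders, and this is not Groundcover's actual manifest:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ebpf-sensor          # hypothetical name, for illustration
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: ebpf-sensor
  template:
    metadata:
      labels:
        app: ebpf-sensor
    spec:
      hostPID: true          # lets the sensor see host processes
      containers:
      - name: sensor
        image: example.com/ebpf-sensor:latest   # placeholder image
        securityContext:
          privileged: true   # loading eBPF programs needs elevated privileges
```

Kubernetes then schedules one copy of this pod on every node, including nodes added later, which is why the "five-minute" rollout scales with cluster size for free.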

Philip Winston 00:49:55 Right, something that's always running. Yeah. So maybe touching back on Prometheus and Grafana, what sort of dashboard and visualization features are inside of Groundcover, and in what cases do I still pipe the data to one of these open-source platforms?

Yechezkel Rabinovich 00:50:14 So we decided really early that we really love Grafana. We did not want to build custom dashboards. We focus on the eBPF and the databases behind it, the data model, and the performance of that, so we did not want to build another dashboarding system. We made an architectural decision to use Prometheus as our query language from the beginning. So you can basically take your Grafana and point it at our platform, and fetch all the data from our platform as if it were your Prometheus. We also host Grafana as part of our solution, in order to eliminate that friction too, if you want. So we also have all the common visualizations, you know, heat maps and trend lines and stacked bars and pie charts, all the regular things, because once we decided to use Prometheus as the query language, we could reuse a lot of components that already know how to work with it.

Philip Winston 00:51:15 I think that makes a lot of sense. How about, this is a little behind the scenes, but you're using C, or a C-like language, to write the eBPF programs. What other languages are used at Groundcover, say, to implement the stream processing? What languages do you use?

Yechezkel Rabinovich 00:51:32 We are using Golang as our major programming language. We had a long discussion between Golang and Rust and decided to go with Golang because of the ecosystem. We also took a lot of inspiration from projects like VictoriaMetrics, which took Golang to another level of performance and memory management, so we're taking a lot of concepts from them as well. So yeah, Golang and C. And behind the scenes we're using VictoriaMetrics as the database for metrics, and ClickHouse for logs and traces and events. Both are amazing technologies, really inspiring pieces of technology, and we're trying to squeeze them to the max for our use cases.

Philip Winston 00:52:14 Can you say a bit more about VictoriaMetrics? I hadn't run across that one.

Yechezkel Rabinovich 00:52:19 Yeah, VictoriaMetrics is an open-source time series database. I think it's somewhere around five years old. It definitely won every benchmark we did, and they have a lot of white papers and benchmarks of their own. For me, it's the next level of time series database. They took the Prometheus concept and brought it to a new era, a new level of implementation. I'd definitely encourage everyone to check it out, especially if you are using Prometheus today or something compatible. I think it will probably be a pretty easy drop-in replacement and save you a lot of time and money.

Philip Winston 00:52:55 Speaking of open source, how about Groundcover itself? I know the whole project is not open source, but there may be some open-source components that you've been developing.

Yechezkel Rabinovich 00:53:05 Yeah, we're trying to contribute to open source wherever we can. We released Murre, which is a CLI tool for Kubernetes CPU and memory monitoring. We also released Caretta, which is an eBPF sensor that demonstrates some of what you can do with eBPF. So that could be a good starting point for people, for engineers who want to play with Kubernetes and eBPF specifically. It's a pretty cool project; you can get a lot of meaningful information from it, a lot of network metrics. So we are contributing to open source wherever we can. I think the world of software is pushed forward by open source, and we're trying to play our part wherever we can.

Philip Winston 00:53:46 Going back to Go for a second, I think Kubernetes itself is written in Go. Is that true?

Yechezkel Rabinovich 00:53:52 Yeah, the core of it, yeah, the Kubernetes components themselves. Yeah.

Philip Winston 00:53:56 How about this case study you had on your blog? You have a pretty extensive blog; I found it interesting to read through. There was a company, or a project, called Common Ground, and I know they used Go on AWS but Python for machine learning on GCP, which is a split that I've seen myself. When you use both cloud providers, what observability challenges are there with this sort of dual-cloud setup?

Yechezkel Rabinovich 00:54:23 So I think whenever you have different deployments on different regions, or even different products, for instance Kubernetes and serverless, it will be really challenging to get a full picture of your production posture. So obviously having two cloud providers will not be an easy thing to monitor. What I believe is that you probably want to have a monitoring cluster, a dedicated cluster for your monitoring solutions, and then configure all the other clusters, on all the other clouds, to send data to this cluster. That way you guarantee that this cluster will remain available even if you have some kind of disruption, and you also get a full picture that's agnostic to the cloud it's running on. So you want to push all the data to one place. You probably want to use some kind of standard format, like OTel or Prometheus remote write or other open standards, to consolidate the data, and then you need a strong observability backend that will also be cost-efficient, in order to hold this data and retain it for the retention period you need.

Philip Winston 00:55:35 I had kind of forgotten about serverless in this conversation. That does seem like it presents a challenge. So Kubernetes seems to mostly deal with long-running containers, but I imagine a specific system might have long-running containers but also use serverless, maybe specifically Lambda. I guess in that case you're going to observe it when it makes that call? Or how do you even know that it's made a call to a function like that?

Yechezkel Rabinovich 00:56:04 So when you're using eBPF sensors, you only need one side of the conversation, instead of needing both sides as with traditional instrumentation. So as long as your Lambdas, at the end, are, you know, talking to something orchestrated by Kubernetes, you are fine. But there are cases where the entire stack is Lambda, for instance, and when you have that situation, you kind of need to instrument your application anyway. That's why, specifically in Groundcover, we have these endpoints that allow you to ingest data from external sources. So if you're using any open standard, you know, Zipkin or Jaeger or OpenTelemetry, you can just ingest the external data, and we make sure to treat it as a first-class citizen in the platform itself.

Philip Winston 00:56:52 So, going back to that five-minute setup process, what do I get at the end of those five minutes? What turns on? What do I start seeing? Is it the data flowing into my Grafana instance? How do I know I've succeeded in the setup process?

Yechezkel Rabinovich 00:57:08 After you deploy Groundcover, you'll go to the platform itself, the Groundcover app, and you immediately see all your workloads, with all their golden signals, monitored by the platform. You don't need to do anything for that; it will be created automatically with your account and your email, or the registration API key. From there you can explore. We have a, I think, pretty cool map view, so you can visualize all your production workloads in a map, and that's pretty impressive, to see, you know, what you created in a map view. All of a sudden you detect some kind of weird things that you never thought you'd see; you see that some third-party applications are actually reporting things to some vendors, and it's really interesting to see. And of course from there you can also integrate with other third parties. You can integrate via OpenTelemetry ingestion, you can integrate with your Grafana, you can integrate with our alerts and Slack, and set it up as your primary solution for observability.

Philip Winston 00:58:09 Two things there, and then we'll start wrapping up. You mentioned golden signals. I have down here latency, traffic, errors, and saturation. I'm not sure what saturation means, but are those the signals, or does it just mean whatever signals are important to you?

Yechezkel Rabinovich 00:58:25 So I think golden signals, traditionally, yes, it's those signals. There's an equivalent of that called the RED signals; it's the same thing without the saturation, which is a bit more complex a topic. But yeah, those are the signals, the golden signals, which, according to a lot of SREs, are the signals you want to start monitoring when you have no idea what is important to you.

Philip Winston 00:58:46 So does saturation relate to CPU usage or is it a networking thing?

Yechezkel Rabinovich 00:58:50 It could be; it depends on what resource your application is actually consuming. Saturation will tell you how much of the resources you require you are maxing out. So if you're maxing out your resources, you are soon going to hit your limit. Whether your limit is bounded by CPU or by network or by disk, that is what affects your saturation rate.
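That definition of saturation reduces to a simple ratio of usage to limit, sketched here in Go with made-up numbers:

```go
package main

import "fmt"

// saturation is usage relative to the configured limit: values near 1.0
// mean the resource is maxed out and the workload is about to be
// throttled (CPU) or killed (memory).
func saturation(usage, limit float64) float64 {
	if limit <= 0 {
		return 0 // no limit configured; saturation is undefined here
	}
	return usage / limit
}

func main() {
	// e.g. a container using 450m CPU against a 500m CPU limit:
	fmt.Printf("cpu saturation: %.0f%%\n", saturation(0.45, 0.5)*100)
	// and 1.9 GiB RSS against a 2 GiB memory limit:
	fmt.Printf("mem saturation: %.0f%%\n", saturation(1.9, 2.0)*100)
}
```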

Philip Winston 00:59:14 And then one final clarification. You mentioned getting a map of your system. That's not an eBPF map; that's like a visual, boxes-and-lines type of map, or...

Yechezkel Rabinovich 00:59:25 Yeah, sure. Map could be a word that represents different things in this talk, but yeah, I'm talking about a visual map that will show you where data is flowing inside your system, and outside it, honestly.

Philip Winston 00:59:38 Great. Let's start wrapping up. I think when this episode airs, we'll be looking forward to 2024. So it might be a little bit far ahead, but what do you see as the future of eBPF in the industry and at Groundcover? What are you going to do in 2024, or looking forward to doing in the future?

Yechezkel Rabinovich 01:00:02 I think eBPF will eventually be on each and every server that we use. So if you have a production platform that runs servers, you're probably going to run eBPF programs for a lot of tasks, because we've just started seeing what we can do with eBPF in terms of security, in terms of performance, in terms of observability. So eBPF is definitely the future, and we're just starting to see it, and all the vendors in the security and observability domains are going to leverage it. That's exciting for me, because we are constantly working on pushing the eBPF boundaries as far as we can. And I think 2024, as we all know, is also going to be interesting in terms of AI, and how we make sense of observability data with AI. As you know, the volume of data is getting crazy, and with that it's getting really hard to make sense of it. So there's this promise of AI guiding us through that.

Philip Winston 01:00:58 How can people learn more about Groundcover or follow you online?

Yechezkel Rabinovich 01:01:02 So you can follow me on LinkedIn or Twitter. Also, we're writing a lot of blogs; all the engineers at Groundcover are really interested in writing technical blogs. So you can go to the website and go to the blog, and feel free to send me a message regarding eBPF for observability.

Philip Winston 01:01:20 Great. I’ll put links to those in the show notes. Thanks for joining me today, Chaz. This has been interesting.

Yechezkel Rabinovich 01:01:26 Thanks for having me. It was really cool.

Philip Winston 01:01:28 This has been Philip Winston for Software Engineering Radio. Thanks for listening.

[End of Audio]
