In this episode, Ori Mankali, senior VP of engineering at cloud security startup Akeyless, speaks with SE Radio’s Nikhil Krishna about secrets management and the innovative use of distributed fragment cryptography (DFC). In the context of enterprise IT, ‘secrets’ are crucial for authentication, providing access to internal applications and services. Ori describes the unique challenges of managing this sensitive data, particularly given the complexities of doing so at scale in large organizations. They discuss the necessity of a secure system for managing secrets, highlighting key features such as access policies, audit capabilities, and visualization tools. Ori introduces the concept of distributed fragment cryptography, which boosts security by ensuring that the entire secret is never known to any single entity. The episode explores encryption and decryption and the importance of key rotation, as they consider the challenges and potential solutions in secrets management.
Transcript brought to you by IEEE Software magazine and IEEE Computer Society.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.
Nikhil 00:00:18 Hello and welcome to Software Engineering Radio. This is your host, Nikhil, and today I have the pleasure of welcoming Ori Mankali. Ori is a senior vice president of engineering at Akeyless, a leading cloud security startup. Prior to his current position, he served as the VP of research and development at the same company for nearly four years. Ori’s professional strengths include cybersecurity, IT operations, and architecture, with particular proficiency in embedded Linux, internet protocol suites, debugging, multithreading, and Unix. Before joining Akeyless, Ori held significant roles at several major companies. He was a director of software development at DriveNets and a manager of software development at both Amazon Web Services in Germany and Compass Networks. Ori holds a master’s and a bachelor’s degree in computer science from Bar-Ilan University in Israel. Today we’ll be talking to Ori about secrets management using distributed fragment cryptography. So welcome to the show, Ori. Is there anything that I missed in the bio that you would like to add?
Ori Mankali 00:01:28 No, I think it was pretty accurate. And thank you for hosting me today, Nikhil. I’m delighted to be here on this show and answer questions related to secrets management and talk about cryptography and anything that interests you.
Nikhil 00:01:42 Perfect. Okay, cool. So let’s just jump right in. Right. So we said that the title of the show is Secrets Management, and let’s start from there. So could you explain what secrets are and why are we calling them secrets versus passwords versus keys, or whatever other terms that we use for these kind of things?
Ori Mankali 00:02:02 Yeah, I think it’s a good starting point because there is a lot of confusion around terminology and the differences between keys and passwords and secrets. Typically, we call ‘secrets’ any kind of sensitive information that is used mostly for authentication by applications. So for example, if you have some piece of code written in whatever language, Java, anything similar to that, and your piece of code needs to authenticate to a remote service. It can be a database or another service, I don’t know, Kubernetes clusters, anything of that nature. Then it needs to identify itself. The application needs to identify itself in order to be authenticated and later on be authorized to access remote services. Historically, this sensitive information was stored in an insecure place like a configuration file or even inside the code, hard-coded. So all those types of sensitive information we bundle together under the name ‘secrets.’ Passwords is a term that we normally use for human access.
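As a minimal illustration of the hard-coding problem Ori describes, an application can read its credential at runtime instead of embedding it in source. This is a sketch only; the variable name is invented for the example, and in practice the value would come from a secrets-management platform rather than a plain environment variable.

```python
import os

def get_db_password() -> str:
    """Fetch the database credential at runtime instead of hard-coding it.

    Reading from the environment (or, better, from a secrets-management
    API) keeps the sensitive value out of source control and config files.
    """
    password = os.environ.get("DB_PASSWORD")  # hypothetical variable name
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return password

# The anti-pattern this replaces:
# DB_PASSWORD = "s3cr3t"  # hard-coded, visible to anyone with repo access
```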
Ori Mankali 00:03:10 So very similar to password managers. You know, a lot of browser extensions, mobile applications, et cetera. So all types of human access are considered, again terminology-wise, passwords. And for keys, we normally call ‘keys’ anything that uses cryptographic keys. They can be symmetric keys or asymmetric keys that are used for different purposes. Normally, symmetric is used for encryption, typically but not only, and asymmetric keys are mostly used for signing operations. And that’s the distinction between the different names, but eventually they’re all part of the same world of protection.
Nikhil 00:03:51 So yeah, it’s the same world of sensitive information that needs to be, I like the way you’ve differentiated. So secrets can be primarily looked at from the lens of, okay, this is usually something that you want to look at from an application or machine to machine interaction perspective. Whereas passwords are usually when there is a human involved. So moving on to the next word which is, management. So can you talk about secrets management and why it is important?
Ori Mankali 00:04:19 Yeah, I think everything at the end relates to scale. And let me elaborate on what I mean by that. Imagine that you have a single application, and the only thing this application is doing is connecting to a database. So you have a single application with a single secret, and it’s not too hard to manage. You can even wrap it in a way that would be considered somewhat secure, in the sense that the sensitive information will be encrypted, but that’s just one. And now imagine that you have a large organization with, I don’t know, hundreds or thousands of services and applications.
Nikhil 00:04:55 A modern microservices architecture.
Ori Mankali 00:04:56 Exactly, exactly. Scaling out, like if you’re running on top of Kubernetes, you have tons of that. And now you have different kinds of applications that need permissions to different kinds of secrets. So it’s not just one, it’s millions of secrets. And you need to have some kind of access policy. Like, how would you differentiate between one application and another, between one human and another? You need to have some kind of auditing, right? You need to be able to see which application accessed which secret, or which human accessed which secret, in order to be able to retrospect and remediate in case of security hazards, et cetera. So this is becoming a big problem. It’s not enough just to protect the secret using some kind of an encryption key. You need a system to facilitate the access to secrets, to configure different kinds of authentication methods, different ways to authenticate to the platform, and to be able to fetch secrets and configure them.
Ori Mankali 00:05:56 You need a good and solid access policy, or access roles as we call them, because we implemented role-based access control. You need to be able to integrate with external identity providers, so you will have single sign-on authentication to the platform; an audit log that can be queried and searched, and maybe even forwarded to existing log systems, because many organizations have their own log systems, be it Splunk or Syslog or Elasticsearch, you name it. And in many cases, you also need some kind of visualization for auditors. Like if you have a CISO or security officers in the organization, they would like to have a visual view, or overview should I say, of the activities, of access, et cetera. So all that requires …
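The access-policy and audit ideas above can be sketched in a few lines of Python. Everything here is invented for illustration (the role names, secret paths, and policy shape are assumptions, not Akeyless’s actual model); the point is simply that every access is checked against a role’s policy and recorded.

```python
import fnmatch

# Hypothetical role definitions: each role lists the secret paths it may read.
ROLES = {
    "payments-service": {"allowed_paths": ["/db/payments/*"]},
    "auditor":          {"allowed_paths": ["/audit/*"]},
}

AUDIT_LOG = []  # every access attempt is recorded for later review

def can_read(role: str, secret_path: str) -> bool:
    """Return True if the role's policy covers the requested secret path."""
    policy = ROLES.get(role, {"allowed_paths": []})
    return any(fnmatch.fnmatch(secret_path, pat)
               for pat in policy["allowed_paths"])

def read_secret(role: str, secret_path: str) -> str:
    """Check the policy, record the attempt, and return the secret value."""
    allowed = can_read(role, secret_path)
    AUDIT_LOG.append({"role": role, "path": secret_path, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{role} may not read {secret_path}")
    return f"<value of {secret_path}>"  # stand-in for the real lookup
```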
Nikhil 00:06:44 And even in the basic case also, right, you probably want to be able to change the key. You need to be able to delete the key, add new keys, and when people leave the organization, you want to refresh the keys, et cetera, et cetera, right?
Ori Mankali 00:06:58 A hundred percent, right? It’s the life cycle of a key or a secret: maintaining versions of keys, being able, as you mentioned, to create new ones, to delete existing ones, to update them, et cetera.
Nikhil 00:07:09 So obviously this is a lot of things, but there are existing systems to manage a lot of data, right? So data management, that’s pretty much every business application. You can, you do a CRUD application, it is create, read, update, delete, you can do this for so what are the unique challenges that secrets management faces that makes it kind of unique, that doesn’t kind of allow it to be fitting into a regular application management flow?
Ori Mankali 00:07:38 Yeah, I would say that not every type of data is classified as sensitive. So you could store, I don’t know, different kinds of strings, and you don’t necessarily have to go and encrypt them, depending on the use case. So protecting secrets is a mission that comes with more responsibility in terms of security, because potential hackers and malicious users want to find those secrets inside the organization in order to get access to other types of systems, and then do what we call lateral movement, starting to expand their knowledge about the organization, about sensitive information. So it’s a good target for malicious activity to learn about the organization. So secrets need to be protected in an extra-secure fashion. And obviously you can write your own layer on top of traditional data storage like databases, et cetera, but then it means that you need to somehow reinvent the wheel. Every company, every organization would have to go ahead and invest engineering time, define the security standards, and comply with certain security certifications, et cetera. And that’s something organizations would prefer not to invest in; it’s not their business.
Nikhil 00:08:53 Yeah, it’s not a core competency, right? It’s not something that they do day to day, and it’s probably something that they would want to pay for. Great. So I think that’s a good overview of secrets and why we manage secrets. Maybe we can move on to the second part, which is: can you give us an introduction to distributed fragment cryptography, and give us a high-level overview of what that is?
Ori Mankali 00:09:18 Sure. So that’s probably the foundation of Akeyless. That’s where Akeyless started about five years ago. It was an idea that came from one of our founders, our CTO, his name is Rafael Angel, and he was working at the time for a FinTech company. And he realized that the basic question that people need to ask themselves is not how data is protected, because encryption algorithms have been with us for many, many years. The algorithms are well known; everything is considered to be secure. The main question is: where do you store the key that is used for the protection, for the encryption? The common answer in many cases is that the seal key or the root key is stored in an HSM, a hardware security module, which is technically a physical box, a physical machine with a certain set of security requirements. It goes through certifications, et cetera.
Nikhil 00:10:20 I seem to remember it was something that Intel had introduced, right? As part of the CPU architecture, SGX, I think it was called.
Ori Mankali 00:10:27 That’s something particular to CPUs, as you mentioned. But this box that I’m talking about can be like a pizza box, a physical machine. It’s more of a system, not just a box. I can also tell you that cloud providers are offering that as a service, so you have cloud HSM solutions. First of all, they’re very expensive, and secondly, because of their hardware nature, they’re not easily scaled.
Nikhil 00:10:55 You need a different hardware box for every secret.
Ori Mankali 00:10:58 So the idea that he came up with is, instead of trying to protect this root key, this initial key, in some kind of a physical location that would be hard to penetrate (that’s the assumption behind using an HSM), let’s not keep any key in any single location. So he decided to manage the keys in a different way, and he built the DFC technology. This is our own proprietary, patented technology. It stands for Distributed Fragments Cryptography. So instead of having a single key in a single location, in memory or persistent storage, it doesn’t matter, of a specific application, you have X number of fragments. And those fragments compose one logical key, but this key is never brought together. So the fragments remain in those locations, physical locations, different servers, even different regions. And you can use the key, but you cannot get the full key as a whole, not even in memory.
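To make the fragment idea concrete, here is a toy sketch using additive secret splitting: the logical key is defined as the sum of the fragments modulo a large number, so it exists only implicitly. This is an illustrative assumption for the example, not Akeyless’s actual patented DFC algorithm, and a real deployment would never combine the fragments in one place as `logical_key` does here.

```python
import secrets

MOD = 2**256  # toy modulus; real schemes are defined over specific groups

def make_fragments(n: int) -> list[int]:
    """Create n random fragments whose sum (mod MOD) defines the logical key.

    Each fragment would live on a different server or region; any n-1 of
    them reveal nothing about the key, which is never assembled.
    """
    return [secrets.randbelow(MOD) for _ in range(n)]

def logical_key(fragments: list[int]) -> int:
    # Shown only to demonstrate the math; a production system never
    # performs this combination in a single location or in memory.
    return sum(fragments) % MOD
```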
Nikhil 00:12:05 That’s interesting because I think that’s the unique difference between you and secret sharing, right? If you want to do like a multi-signature kind of a deal where you have your…
Ori Mankali 00:12:17 Talking about Shamir’s secret sharing, right?
Nikhil 00:12:18 Shamir’s secret sharing, or other cryptographic methods of that kind, which I’m aware of, where basically you have this idea of multiple keys held by multiple people that need to be combined to get access to a secret. And so this is different from that, correct?
Ori Mankali 00:12:34 With secret sharing, at some point in time as part of the algorithm, you combine the fragments or pieces together and do some kind of cryptographic operation, and later on you can split the key again, or do whatever with the data. With Akeyless, that’s not the case, because as part of the algorithm we’re not bringing the fragments outside of their locations. Okay? So they remain in their locations. We have a microservice representing each one of them, fragment managers, we call them. And there is some kind of an algorithm that allows us to communicate with these microservices and get a request served for each and every operation.
Nikhil 00:13:18 So one other thought that comes to mind then is that, okay, you have these key fragments distributed across multiple places. What happens if one of them goes down? Is there a requirement that all of them be up for the purposes of signing, or for the functioning of this cryptography? Or is it kind of like threshold signatures (that’s another, similar technology in multiparty computation), where you just need K of N, right? Maybe if it’s three of five signatures, you can say you need only three of them in order to do the encryption or the decryption. Is it similar for DFC?
Ori Mankali 00:13:54 Not entirely. So DFC does not allow you to use only a subset, or K of N, of the fragments. However, for resiliency, we have designed our system to replicate fragments to different geographic locations. So instead of having just one instance of the fragments in one region, we replicate them to at least two other locations. One of them is inside the region, so another availability zone, and another one is in another region. So the likelihood that both the same region and another region will be unavailable at the same time is lower. But you still need all N. So you have more places to get access to the N fragments, but you still need access to all of them.
Nikhil 00:14:40 So you’ll still need access to all of them. How do you actually handle refreshing? So when I refresh a key, or if I want to change the key because somebody left the organization for whatever reason, I want to refresh the key. Does it mean that all the fragments need to be refreshed or is this kind of something that happens independently? How does that work?
Ori Mankali 00:14:58 Yeah, so it’s a great question. In terms of terminology, again, I know a lot of terminology is involved in our talk, we call what you just described ‘rotating the key.’ The reason we call it that is because we also have an interesting refresh mechanism, and I will touch on that in just a bit. As part of our patented scheme, rotating a key essentially means that you create a whole new key, as a new version. Okay? So we create a set of N new fragments that represent this new key. It’s associated with the previous one in terms of the ID of the key; it’s known to be the successor of that previous key, et cetera. We also allow administrators to configure a periodic rotation. So if they want, let’s say, once a week or once a month or once a year to have a new version of the key, that’s also doable.
Ori Mankali 00:15:48 The refresh mechanism that I talked about is another interesting part of our patent. Let’s say that we have N locations, five locations for example, that hold the fragments. If I’m a malicious hacker, and I know that I need to have access to this logical key in order to be able to decrypt all the data, and I have an infinite amount of time, I can try to hack a certain location, and then after a while try to get access to another fragment, and another fragment, and slowly and gradually maybe get access to all N fragments, if that’s possible. We have implemented a mechanism that basically changes the mathematical value of each fragment in a synchronous way without changing the overall sum of the key. So let’s imagine that the key had some numeric value, in bits, whatever, it doesn’t matter. The fact that we changed the values of the fragments did not change the sum of the key, so you can still use it seamlessly. And we do that in a synchronous fashion because this operation needs to be coordinated; otherwise it would change the value of the key. And we do it periodically, completely seamlessly from the user’s perspective.
Nikhil 00:17:02 Okay, so it’s part of the algorithm itself?
Ori Mankali 00:17:05 It’s part of the algorithm.
Nikhil 00:17:06 Yes. Yeah.
Ori Mankali 00:17:06 Okay. And the big value of it is that now it’s not enough to get one fragment after the other. Now you have to do that simultaneously. And that’s much harder for a malicious user to do because, as I mentioned, there are different locations. Doing that all at once is a very difficult task.
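The zero-sum refresh Ori describes can be illustrated with the same toy additive model (again an assumption for illustration, not the real DFC scheme): every fragment receives a random delta, and the deltas are chosen to cancel out, so each fragment changes while the logical key does not.

```python
import secrets

MOD = 2**256  # toy modulus for an additive-sharing illustration

def refresh(fragments: list[int]) -> list[int]:
    """Re-randomize every fragment without changing their sum (the key).

    Each fragment i gets a delta d_i, with the deltas chosen so that
    sum(d_i) == 0 (mod MOD). An attacker who stole old fragments one by
    one now holds values that no longer combine to the current key.
    """
    deltas = [secrets.randbelow(MOD) for _ in range(len(fragments) - 1)]
    deltas.append((-sum(deltas)) % MOD)  # last delta cancels the others
    return [(f + d) % MOD for f, d in zip(fragments, deltas)]
```

In the real system this re-randomization is coordinated across the fragment locations, which is why Ori stresses that the operation is synchronized.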
Nikhil 00:17:25 Right. Okay. Cool. That sounds really powerful. So does this mean that the key the algorithm comes up with is simply a large numerical value, not like a UUID where you have numbers and…
Ori Mankali 00:17:42 No, it’s a cryptographic key.
Nikhil 00:17:44 It’s a cryptographic key.
Ori Mankali 00:17:46 Yeah, it’s identical. Yeah, it’s a bit value. It’s identical to any other key in terms of length and structure and shape, and it is generated locally. Everything is pure standard, standard cryptography. We’re using standard encryption algorithms, signing algorithms; everything is pure standard.
Nikhil 00:18:06 So what are the standard cryptographic algorithms on which this is based?
Ori Mankali 00:18:09 For symmetric, we support AES, which is probably the most common standard, in two different flavors and two different sizes: 128 bits and 256 bits, CBC and GCM. And for asymmetric, similarly, we support RSA from 1K length up to 4K at the moment. And we also support different flavors. So we basically have some kind of a seal key for any other type of key, like elliptic curve and other algorithms that are not yet supported by DFC; we basically protect that key using another DFC key. So it’s not limiting you from using any kind of encryption or signing algorithm.
Nikhil 00:18:54 So we talked about DFC; we talked about the fact that underneath, it’s just a standard key. It can be used for public key, private key, symmetric and asymmetric. We also talked about the fact that you have different fragments in different places. So do the fragments all reside on your machines, or is it kind of like a combination where some of the fragments have to be on the client and some on the server? Is it a flexible kind of thing?
Ori Mankali 00:19:24 Yeah, it’s a great point because I think one of the main concerns, mostly from our large corporates and enterprises, is: who has access to my sensitive information? That’s a question that they’re being asked a lot for security, certification, and compliance reasons, et cetera. And today, with many cloud-based solutions, they need to somewhat trust the vendor that their data is safe. If the vendor has access to the key, it means that they also have access to the data. So because of the nature of our algorithm, our patented technology, and the fact that we can create as many fragments as we want, we allow our customers to have an optional fragment that is created locally in the customer’s environment. And it’s also a part of the key, so it’s yet another fragment of the key. So let’s say that you have, just for example, five fragments.
Ori Mankali 00:20:22 So maybe four of them are created and stored and managed by us, by Akeyless, on our cloud subscription. And one of them resides locally in the customer environment, where we, Akeyless, don’t have access to it. This means that all the cryptographic operations in any case will always happen from the customer’s environment, simply because, without having access to all the N fragments as we discussed before, you could not do any kind of operation. So the operations are happening locally. The customer has full privacy and access to their data in a way that no third party, including Akeyless, can access their data. That’s a huge piece. Yeah.
Nikhil 00:21:08 So, but the downside of that, obviously, is that now again, it depends on the client being up all the time, right? So if there is a network partition between the client and Akeyless, then your secrets… I mean, you’ll not be able to sign anymore because the client is down, for example, right?
Ori Mankali 00:21:26 It’s not entirely how we implemented it, and I can share two main concerns. One of them is what you just mentioned, but the other one is how to facilitate it for many different kinds of clients, right? You can imagine that you have different applications; now how do I bring this fragment to each application?
Nikhil 00:21:45 Yeah, yeah.
Ori Mankali 00:21:45 And that’s becoming like an operational hassle. So what we’ve designed is a centralized component in the customer’s environment, which we call the Akeyless Gateway. As the name implies, the traffic to retrieve secrets, to create secrets, and to modify them, basically any operation to our platform, goes through this gateway. And this gateway is the one storing and managing the fragment. It requires only outgoing traffic; no inbound traffic is required, no need to modify your network topology, et cetera. This allows you to have seamless encryption, decryption, and signing operations through the gateway without physically getting access to the customer fragment. Another advantage, which covers the topic you raised, is about network connectivity. What happens if, for some reason, the customer is unable to communicate with the backend service? We optionally allow a caching service on the gateway, basically storing the secrets on the gateway in two different modes. One of them is called opportunistic caching, which means that only if you requested a secret before will you have access to it. And the second one is proactive caching, which means that we store all the secrets that the client has access to in memory, of course in a protected fashion. So in case there is a temporary network outage or something like that, you can still get access to your secrets. You can still perform operations with your internal workloads and applications in a seamless manner; your application would not even notice.
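The two caching modes can be sketched roughly as follows. The class and method names are invented for illustration and are not the Akeyless Gateway’s actual API; the sketch only shows the behavioral difference between caching on first use and pre-loading everything.

```python
class SecretCache:
    """Toy gateway-side cache with opportunistic and proactive modes."""

    def __init__(self, fetch, mode: str = "opportunistic", all_paths=()):
        self._fetch = fetch  # callable that talks to the backend service
        self._cache = {}
        if mode == "proactive":
            # Pre-load every secret the client is entitled to, so a
            # network outage does not interrupt workloads at all.
            for path in all_paths:
                self._cache[path] = fetch(path)

    def get(self, path: str) -> str:
        try:
            value = self._fetch(path)
            self._cache[path] = value  # opportunistic: cache on first use
            return value
        except ConnectionError:
            # Backend unreachable: fall back to the cached copy, if any.
            return self._cache[path]
```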
Nikhil 00:23:23 Cool. So moving on, I just wanted to lightly touch on the question of standards. Obviously, this is cryptography, and there are usually standards bodies. What are the certifications that DFC has? Are they significant? Do you feel you can share them?
Ori Mankali 00:23:41 Sure, for sure. I agree with you regarding standards. I think that nobody wants to reinvent the wheel in that aspect of cryptography. There is a lot of mileage, a lot of eyes, a lot of experience that was gained throughout the years. Akeyless as a company started from the very beginning to do different kinds of certifications. So we are SOC 2 Type 2 certified, ISO 27701 and 27001, and we also have FIPS 140-2 certification, which is the security certification by the US NIST and is considered pretty much the strongest and most well-known standard in the industry for that. I think we’re one of the few vendors certified for FIPS 140-2 for secrets management. For other realms, key management and HSM, that’s very common; for secrets management it’s not yet very common. So this is one of our differentiators from our competitors.
Nikhil 00:24:42 I think that’s a good overview of DFC and its capabilities. Let’s move on to the application perspective. So suppose I’m a client and I have an existing business, I don’t know, a standard e-commerce business. How would I adopt DFC? I’ve signed up with Akeyless. What is available? Are there any kind of guides, and what is the method by which I can integrate Akeyless into my architecture? And maybe we could discuss a simple architecture, like an e-commerce architecture, just as an example so people can understand.
Ori Mankali 00:25:17 I think one of our main objectives is to make our platform easy to consume, easy to use, okay? And easy means that our customers need to invest as little as possible to integrate with it in different use cases. It wouldn’t be a major task to onboard with us from an operational perspective. So we support a lot of interfaces. When I say interfaces, that can be human interfaces like a web UI through your favorite browser. It can be a CLI, a command line interface, both for humans who prefer to work from the terminal, from the shell, and for small scripts and applications that would like to execute the CLI; as well as a RESTful API; SDKs in many programming languages, including Java, Python, Go, C#, Ruby, and many, many others; and a lot of plugins that can be used directly from DevOps tools as part of your DevOps toolchain.
Ori Mankali 00:26:22 It can be CI/CD platforms, the most common platforms you can imagine. We support a large variety of configuration management tools, orchestrations of different kinds. So you can basically choose which interface is the most suitable for your needs. For example, if you have a homegrown application, something you developed in-house, then you would most likely prefer to work with the SDK. If it’s an application that was created by a third party and you have little to no control over it, you would probably want to use one of our plugins, I don’t know, for Kubernetes, to inject the secrets seamlessly into the application without an explicit API call.
Nikhil 00:27:03 So typically, usually businesses now are in one of the main clouds, right? It’ll be either AWS or Google Cloud or Azure. Do you have plugins for these three as well? I mean, can I take my AWS IAM system and just use that with Akeyless, and how does that actually work?
Ori Mankali 00:27:22 Yeah, so I’m decoupling between the interface, the way to communicate with the platform, and the authentication method, how the client identifies itself. Of the two, what you discussed is about authentication. We support all three major cloud providers, using their native IAM or the identity of the machine in order to authenticate to Akeyless. So for example, you mentioned AWS: you have a workload running on EC2, and you can use your AWS IAM to authenticate to Akeyless seamlessly without having any kind of initial secret, or secret zero. The same goes for Azure AD and GCP IAM. And if you’re running on Kubernetes, you can use your Kubernetes identity to authenticate to Akeyless, or JWT authentication, which is very common for CI/CD platforms today, and many other authentication methods. We have about a dozen different kinds of authentication methods. Most of them are done seamlessly, relying on the underlying infrastructure signature, and that’s considered very secure.
Nikhil 00:28:29 To make this a little bit clearer in my head: imagine I’m running a Kubernetes application, right? So it’s an e-commerce site: an NGINX web server, there’s an order management, I don’t know, Python server backend, and then there is a Postgres database, all running on this Kubernetes cluster. I’m using Kubernetes Secrets for the secret management, very basic, and I’m deploying this onto AWS’s container management solution, I forget its name. So this is mostly self-managed; it’s just hosted in AWS, right? So you had mentioned that we can put a gateway in, right? Can you set up the Akeyless Gateway as a gateway on my internal Kubernetes cluster and then manage the secrets there? What would you recommend for this kind of an architecture?
Ori Mankali 00:29:24 Yeah, the gateway is definitely a good choice. And as I mentioned, it’s running in your environment, in the customer’s environment. It’s basically provided as a container image. So you can run it either as a standalone container, for example Docker on a VM, or, more recommended actually, on some kind of orchestration: ECS, EKS, anything of that nature, to allow easy autoscaling to meet your needs and also to have built-in monitoring and high availability. In case one of the containers goes down for some reason, there will be some kind of monitoring able to spin up a new container as a replacement. It’s typical to have one gateway per network segment. So if you go back to the cloud architecture, it’s very common to have different kinds of VPCs, where each VPC has access to the network resources inside the VPC. So you can spin up a gateway per VPC.
Nikhil 00:30:25 Okay, so you’d deploy it at the VPC level?
Ori Mankali 00:30:27 Exactly. Okay, to serve the workload inside the VPC. And coming back to your question, you can then use the cloud-native IAM to authenticate through the gateway to the Akeyless cloud and consume secrets as a replacement for Kubernetes Secrets or any other secret store that you use today.
Nikhil 00:30:45 Right, right. And that’s where the SDK comes in. So I would just use the Python SDK to directly bypass Kubernetes?
Ori Mankali 00:30:51 That’s one of the options: to use the Python SDK. Another option, to do this seamlessly, is to use one of our Kubernetes plugins, and we support a large variety. The most common one is the mutating webhook, which is basically a pod installed on your cluster that receives events whenever a new deployment takes place. Then, based on annotations, it can inject either an init container or a sidecar into your existing deployment. This allows us to fetch secrets seamlessly for your application. So the application is not aware that the secret was fetched, and you can provide it either as an environment variable to your application or as a mounted virtual file system. In both cases, it happens inside the pod, so it’s considered application-level decryption. It’s not visible outside of the pod, or in etcd, or anywhere else. So it’s very secure and, most importantly, it’s seamless. Okay? So if you used to read secrets from a file or from an environment variable, you continue to do that without knowing that the security level just took a step up.
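On the application side, consuming an injected secret might look like the sketch below: the app checks an environment variable first and falls back to a mounted file, exactly the two delivery paths Ori mentions. The variable name and mount path are hypothetical examples for illustration, not fixed Akeyless conventions.

```python
import os
from pathlib import Path

def read_injected_secret(name: str) -> str:
    """Read a secret the way an injected init container or sidecar delivers it.

    The application stays unchanged: it looks in an environment variable
    first, then falls back to a file on a mounted virtual file system.
    """
    value = os.environ.get(name)
    if value is not None:
        return value
    mounted = Path("/var/run/secrets/app") / name  # hypothetical mount point
    return mounted.read_text().strip()
```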
Nikhil 00:32:02 So actually that’s an interesting point. So since you’ve injected the sidecar, does the algorithm run inside the sidecar itself?
Ori Mankali 00:32:12 Everything, as I mentioned. Everything related to cryptography, encryption, decryption, all that is happening in the customer’s environment. So basically we built a small, minimal container image with a low footprint that runs this algorithm inside that container.
Nikhil 00:32:28 So then performance is no longer a problem, because you’re not worried about the network. It’s happening inside the sidecar itself.
Ori Mankali 00:32:34 And another reason it’s not considered a big issue is that in many cases the secret is required for establishing the session. You need access to the secret in order to connect to the remote service, to the remote database, and then the operations are done in a different fashion after you authenticate. So it’s not in the critical path of the data path.
Nikhil 00:32:55 I think that’s a great example, and it helps me mentally picture how I would integrate this. Thank you so much. So we talked about scenarios where DFC can be a solution, where Akeyless can be applied. Are there any places, any business applications, where you would not recommend Akeyless? I’m going for the negative case here: are there specific areas where you think it’s not a great fit?
Ori Mankali 00:33:28 So I think it’s a matter of where we fit best. Anything that is a modern environment is kind of like our forte, mostly cloud environments, but not just: we also support on-premise environments using our gateway, because the gateway needs to have access to your resources. But environments that are fully air-gapped are not yet suited to our solution. It’s not that we wouldn’t be able to do that in the future, to provide the backend services to run in the customer’s environment; that’s technically doable. But at the moment we decided to run as a SaaS service on our cloud subscription. So environments that are fully air-gapped would not be a great fit for our solution at this point in time.
Nikhil 00:34:10 Right. Okay. And another one that occurred to me might be, you know, where you have packaged software being shipped on CDs that runs independently, right? Where you don’t need any kind of network connection for it to work?
Ori Mankali 00:34:27 Yeah, so again it comes down to the air-gapped environment, anything that doesn’t have continuous connectivity to the outside world. It doesn’t have to be direct, by the way: the gateway can run in something like a DMZ, and then the application needs access to the gateway, and the gateway is the one that needs outgoing traffic to our backend services. So it can be indirect, also going through HTTP proxies if you’d like. Even SSL inspection, if the organization works in that manner, is all supported. It’s all doable, but a full air gap is still something we do not support.
Nikhil 00:35:05 What are some of the best practices that you would recommend for secrets management and the use of DFC? I mean, other than obviously contracting for and buying your solution, of course. In terms of practices, how do I make sure that I’m doing the right things?
Ori Mankali 00:35:21 We didn’t talk about that, but one of the biggest, most advanced features that secrets management has to offer today is called Just in Time access, or in other words, dynamic secrets. Meaning that instead of having a username and password to a database, or an API key, or some kind of token stored on a secrets management platform, where the risk is that at some point in time this value may be exposed to an unauthorized person or application, the concept is to generate just-in-time credentials for any application that you want. These would be ephemeral, which means that after a certain amount of time that the administrator defines, the credentials are revoked or removed from the remote systems. So they’re not long-lasting, and they also don’t require rotation because they simply disappear.
Ori Mankali 00:36:21 Yeah. And that’s also in alignment with the zero standing privileges concept, meaning that at steady state, if nobody’s connected to a remote server or remote machine, there will not be persistent users or persistent identities on any remote system. And that’s the ultimate goal, I think, in terms of security. Secondly, it’s applying the principle of least privilege, meaning that you grant access based only on what the application or human needs. So if you need access to two specific secrets, those are the permissions that you need to define. Nothing more, nothing less. Combining the two together brings a very high standard of security: least privilege plus Just in Time, or a limited amount of time. So even if somebody at a specific point in time had access to your application, to your CI/CD pipeline, or to your application memory, there will still be nothing that they can grab and use elsewhere for a long duration of time.
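To make the just-in-time idea concrete, here is a rough Python sketch of what an ephemeral credential could look like: a random, short-lived user/password pair with an expiry, after which the backend would revoke the account. All names here are hypothetical; the real platform creates and removes the user on the target system itself, which this sketch does not model:

```python
import secrets
import string
from datetime import datetime, timedelta, timezone

def issue_ephemeral_credentials(ttl_seconds: int = 300) -> dict:
    """Generate a random just-in-time user/password pair with an
    expiry timestamp. After expiry the backend would revoke the user."""
    alphabet = string.ascii_letters + string.digits
    user = "tmp_" + "".join(secrets.choice(alphabet) for _ in range(8))
    password = "".join(secrets.choice(alphabet) for _ in range(24))
    expires_at = datetime.now(timezone.utc) + timedelta(seconds=ttl_seconds)
    return {"user": user, "password": password, "expires_at": expires_at}

def is_valid(cred: dict) -> bool:
    """A credential is only usable before its expiry."""
    return datetime.now(timezone.utc) < cred["expires_at"]
```

Even if such a credential leaks from a pipeline log or from process memory, it stops working once the TTL elapses, which is the zero-standing-privileges property Ori describes.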
Nikhil 00:37:26 Right. It’ll be an ephemeral thing. Let me ask you about that. It’s a best practice, and I agree, but isn’t it a little hard to implement in practice? For example, with a database connection: databases are set up, I guess for legacy reasons, around this concept of a username and a password with long-running sessions. Does Akeyless have any kind of solution that handles that?
Ori Mankali 00:37:55 We’re not modifying the way that systems work today. So there is a programmatic way to create a user on a database.
Nikhil 00:38:03 So, yeah. Okay. So that would be something that the client would have to build in order to follow the…
Ori Mankali 00:38:07 Not necessarily. Again, not necessarily. Let’s talk about the use case again with the Kubernetes cluster, where we inject something into a specific pod seamlessly, okay? And let’s say that this pod is something that runs for a short duration, does a certain set of jobs, terminates, and then spins up again, and so on. In that case, instead of injecting a persistent username and password, you can simply inject just-in-time credentials. That’s a great fit for that use case. And even if it’s, let’s say, a long-lasting service that once in a while needs access, imagine that you have our sidecar as part of your deployment, and then every, I don’t know, five minutes it renews this Just in Time user or rotates the password of that user.
Ori Mankali 00:38:55 And the only responsibility of your application is to be able to re-read the file, to reload it, again using a programmatic interface, either periodically or even event-driven, using some mechanism like inotify that tells you when the file is modified. Then you can reload the credentials, and that’s taking it to the next level, again without modifying the actual software, the piece of code that consumes the secret. From your application’s perspective, it’s just a username and password. You don’t know that those are ephemeral, that they are temporary and about to be deleted very shortly.
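The reload pattern Ori sketches, where the application re-reads a mounted credentials file whenever it changes, can be approximated in Python with a simple modification-time check. A real deployment might use an event mechanism such as inotify instead; the class below is purely illustrative:

```python
import os

class CredentialFile:
    """Return the current contents of a mounted credentials file,
    reloading it whenever its modification time changes. A plain
    mtime poll stands in here for an event mechanism like inotify."""

    def __init__(self, path: str):
        self.path = path
        self._mtime = None
        self._value = None

    def read(self) -> str:
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:  # file was rotated or rewritten
            with open(self.path) as f:
                self._value = f.read().strip()
            self._mtime = mtime
        return self._value
```

The application keeps calling `read()` and always gets the freshest credentials, without knowing that a sidecar is rotating them underneath.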
Nikhil 00:39:33 Very cool. Yeah, I think that’s a great use case for this. Keeping an eye on the time: we’ve had a pretty good discussion about DFC, its application in an example scenario, and what Akeyless brings with its secrets management. Is there anything that we did not cover that you think we should in this episode?
Ori Mankali 00:39:55 Yeah, just a few points, because I think a lot of our listeners today are thinking to themselves: hey, I already have secrets management in my Kubernetes cluster, and I have secrets management in my CI/CD platforms, et cetera. Why not use them? It’s already there, it’s built in, it’s most likely free. And the answer that I give them is that maintaining those secret silos, so to speak, is yet another task, mostly because each one of them has different security standards. I think it’s a known fact that for Kubernetes, the secrets by default are not even encrypted; they’re Base64-encoded. And for others, you need to configure different kinds of access policies. Sometimes you have the same secrets in different stores and now you need to synchronize them. Sometimes you need a holistic view, a global view of all the activity, of what’s going on; you need audit logs, and this is something those secret stores often don’t provide. So having a lot of secret stores and secret silos is just another fancy way of maintaining secret sprawl. One of the benefits of using Akeyless, or another centralized secrets management solution, is to have a centralized place, a single source of truth, that holds the data, protects it, and has disaster recovery and high availability standards and procedures. And that’s, again, a huge benefit to large organizations.
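Ori’s point about Kubernetes Secrets is easy to verify: by default, the value stored in a Secret manifest is only Base64-encoded, not encrypted, so anyone with read access to the manifest (or to etcd) can recover the plaintext without any key. A short Python sketch with a made-up password:

```python
import base64

plaintext = "super-secret-password"  # hypothetical secret value
# What a Kubernetes Secret manifest stores by default: Base64, not encryption.
encoded = base64.b64encode(plaintext.encode()).decode()
# Reversing it requires no key at all:
recovered = base64.b64decode(encoded).decode()
assert recovered == plaintext
```

Encoding only obscures the value from casual view; it provides no confidentiality, which is why a dedicated secrets store with real encryption and access policies matters.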
Nikhil 00:41:25 Yeah, better visibility and more control is kind of the key. That’s great. So I just wanted to take the opportunity to thank you, Ori. This was a great chat, and I think we got some very interesting and valuable insights into secrets management. So thank you once again.
Ori Mankali 00:41:43 Thank you very much. It was a big pleasure to be hosted by you. I enjoyed it, it was fun. So thanks again, and I look forward to hearing more episodes of your show.
Nikhil 00:41:53 Absolutely. Thank you.
[End of Audio]