
SE Radio 606: Charlie Jones on Third-Party Software Supply Chain Risks

Charlie Jones, Director of Product Management at ReversingLabs and subject matter expert in supply chain security, joins host Priyanka Raghavan to discuss tackling third-party software risks. They begin by defining different types of third-party software risk and then take a deep dive into case studies where third-party components and software have had cascading effects on downstream systems. They consider some frameworks for secure software development that can be used to evaluate third-party software and components – both as a publisher and as a consumer – and end by discussing laws and regulations, with final advice from Charlie on how enterprises can tackle third-party software risks.


This episode is sponsored by WorkOS.




Transcript

Transcript brought to you by IEEE Software magazine and IEEE Computer Society. This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number.

Priyanka Raghavan 00:01:02 Hi, this is Priyanka Raghavan for Software Engineering Radio. And today I have with me Charlie Jones, director of product management at ReversingLabs and a subject matter expert in supply chain security. He was formerly a consultant at PwC and has about 10 years of experience delivering strategic transformation initiatives, specializing in cybersecurity, third-party risk management, and IT audit programs for various companies, across all the lines of defense. Today we are here to discuss the topic, Tackling Third-Party Software Risks, and as you all have heard in the couple of episodes that we’ve done on software supply chain risks, I think this is going to be a very exciting show. So welcome to the show, Charlie.

Charlie Jones 00:01:48 Thank you very much for having me. I’m excited to dive into third-party risk today.

Priyanka Raghavan 00:01:52 Okay, great. Is there anything else that you would like all our listeners to know about you that I haven’t mentioned?

Charlie Jones 00:01:58 No, I think you did a great intro. That was perfect.

Priyanka Raghavan 00:02:01 Okay, perfect. So let’s jump right in. But first things first, I thought I’d ask you for some definitions. So what are third-party software risks? Are they commercial off-the-shelf components or open-source components? Can you please define that for us?

Charlie Jones 00:02:19 Yeah, I think that’s a really good foundation to start with, and I think the simplest way to understand this is by first defining and understanding software ownership. So I’d like to break that down into three distinct categories: first, second, and third-party components. So first-party components are any part or module of software which you custom develop in-house as an organization; it’s often referred to as proprietary software. Second-party is also considered internally developed, but maybe it comes from a different part of the business which is legally separate. So part of your business that operates in a different country or region of the world, or part of your business that is owned by or operated as a wholly owned subsidiary. And then you have third-party components, and that’s anything that’s truly external to your business. So maybe that’s software developed by an open-source maintainer as you mentioned, or maybe some type of other third-party contractor or vendor.

Charlie Jones 00:03:14 Now Commercial Off-the-Shelf Software, which you’ll sometimes hear people refer to by the acronym COTS, is software that can be made up of any combination of those types of components from a second or third party. But the distinction that makes COTS unique is that it’s made available for purchase through some type of public marketplace, which is why it’s referred to as commercial, but also that it’s ready for use without any intensive manual modification or coding, which is where the term off-the-shelf comes from. So essentially, it’s software that anyone can buy and use almost immediately without any major customization.

Priyanka Raghavan 00:03:51 That’s great. In fact, I think I learned something new today. I didn’t realize that even the components that we use from another subsidiary could also be considered external, but that does make sense. So thank you for that. Now the next question I have is, do you have any numbers for us on what percentage of an enterprise’s inventory is made up of third-party components?

Charlie Jones 00:04:14 This is a tough question, and I think you’ll start to see my consulting background emerge here in this answer. But ultimately it depends, and I say that because it depends on the strategic direction of the business that is publishing or operating this software. I’ve worked with a number of companies who have formally adopted this kind of build-first mentality, in which they believe they have the technical resources and know-how internally to develop their own software that’s tuned to their very specific business requirements, and they truly believe that their own development will actually drive a competitive advantage in the market. I’ve also worked with organizations who have adopted a buy-first mentality, and that essentially means that they have a desire to get their product or service to market as quickly as possible. And so they’ve strategically decided upfront, we’re going to buy whatever software our business needs to accomplish that market presence faster.

Charlie Jones 00:05:07 So I’ve seen organizations broadly on both sides of the spectrum, some with 90% COTS, some with 90% custom developed or open-source. So I don’t have an average number for you, unfortunately. But that being said, we do have some insight into the makeup of a traditional COTS package that is actually used within enterprise environments. So Synopsys does this report every year, called their OSSRA report. It’s the State of Open-Source Security and Risk Analysis. And in last year’s report they analyzed a number of commercial codebases across 15 or more industries. And what they found was that 96% of all modern software packages contained at least one open-source component and, even more interestingly, 76%, so more than three quarters of the modern application, is actually made up of open-source software. So even within commercial software we still see this heavy reliance on open-source, which is ultimately being bundled into commercial products which are being sold.

Priyanka Raghavan 00:06:05 That’s very interesting. I’ll make sure that I add a link to that reference in our show notes. The next question is, why is third-party software supply chain security important and why should I care about it?

Charlie Jones 00:06:19 I think the importance of third-party software security comes back to the simple fact that, like we just talked about, whether you decide to buy it or whether you decide to build it yourself as an enterprise, if you are entrusting a piece of software with some type of sensitive information, that could be your own intellectual property. It could be PII, so Personally Identifiable Information of your own employees, or even PII or sensitive financial information of your customers. You as a business are solely responsible for that decision of putting that data into that application and protecting it accordingly. And so in the event of a breach of that software and the data you put into it, yes, you can maybe point the finger at a vendor or an open-source maintainer and say it was their fault, but ultimately you as a business will face the downstream impact of that breach.

Charlie Jones 00:07:08 So that can come in the form of a regulatory fine, an insurance claim, or brand and reputation damage. And that’s not just me saying that. All of these risks we’ve seen come to realization in a number of recent attacks. The one I like using as a recent example is the MOVEit software breach. We don’t remember that attack campaign because of the publisher. I think a lot of people aren’t actually aware of the vendor that publishes the MOVEit software. We remember it because of the hundreds of downstream customers that were impacted, whose brands were plastered all over the front page of every media outlet because they trusted this third party with their own company or even customer data for secure file transfer in that case. And so that’s why it’s so important to have these robust third-party risk processes to ensure that the software that you rely on and put your data into is secure, not only before you deploy it but also throughout the entirety of its lifecycle. So any time that software changes as well.

Priyanka Raghavan 00:08:06 Yeah, so I think while you were giving us this example, I also remembered, when I was doing the research for this show, that I was reading this article from Gartner which predicts that by 2025, 45% of organizations worldwide will have experienced attacks on their software supply chain. So yeah, I think that is something that you’ve just kind of brought to light with that example. So in this regard, you talked about risk management. How important is the risk management function for an enterprise?

Charlie Jones 00:08:37 Risk management is a very broad term, right? Every enterprise has a risk management function, and we’ll get into it maybe a little bit later about what that entails. But for now, if we stay in the context of securing third-party software, I’d say risk management functions are broadly responsible for a couple of things. First, at a very high level, they need to establish policies, procedures, and controls that govern third-party applications throughout that lifecycle of use like I mentioned, so prior to onboarding, prior to deployment, and for future releases of that software. The second would be ensuring adequate technology is actually made available to the first line of defense, so those security practitioners that are defending the business every day, to ensure that software that’s actually purchased by the organization is abiding by those policies and procedures that were established in the first place. And then finally, and a lot more generally, they’re responsible for making sure that any risks found through that policy creation and testing are managed within the bounds of the risk appetite or risk thresholds that the enterprise risk management function sets in accordance with the business’s risk appetite as well.

Priyanka Raghavan 00:09:50 Yeah, that’s actually quite a lot, and it seems quite interesting as well. Also, a lot of companies seem to have this program called third-party risk management. So I guess this is what they do, right? This function is essentially looking at all the third parties coming in. Okay.

Charlie Jones 00:10:07 Exactly. And a lot of people will refer to it as Third-Party Risk Management, TPRM, if you hear that term as well. And the easiest way I like to explain TPRM is through the saying that you’ll often hear, no man is an island, right? No business can operate in today’s world without outsourcing certain aspects of your people, your process, or your technology. So in very simple terms, the basic function of TPRM is to understand and manage the risk that’s presented by any external party which your business relies on to operate. And the one thing to keep in mind when talking about TPRM is that the way TPRM programs traditionally identify and measure security risk is through the kind of notorious vendor questionnaire, right? I’m sure a lot of your corporate listeners have likely dealt with it from one side or the other, where you have an Excel sheet that’s shared over email or through a GRC tool that no one wants to receive, right?

Charlie Jones 00:11:00 It’s probably anywhere from 50 to 250 questions. But the aim of those questionnaires is mainly to understand the security posture of policies, procedures, and controls within the vendor environment. And that’s where it gets really interesting in my opinion, especially in the context of today’s discussion. Because when you talk about third-party software, TPRM teams have kind of viewed the vendors of commercial off-the-shelf software products as actually outside of their own oversight remit. So they don’t believe that they’re responsible for overseeing them. And that’s because commercial off-the-shelf software isn’t operated within the vendor environment, which is what a questionnaire would capture. It’s actually handed over in binary format to the enterprise that purchases it, to be deployed and independently managed in their technology stack. And beyond that, the vendor doesn’t have access to customer data and they don’t have connectivity to the corporate network.

Charlie Jones 00:11:55 So in a vacuum that vendor relationship seemingly holds no risk. So there’s no reason for TPRM to oversee it. Now we know that’s not true, right? Because we’ve seen all these products exploited successfully, between SolarWinds, Codecov, and many others. So for the past couple of years we’ve known that there is this clear hole in the way that software supply chain risk has been managed, especially from a TPRM capability. And unfortunately, as a result, a lot of risk has slipped through the cracks. So TPRM is very easy to define as a function. It’s, in my opinion, very difficult to successfully deliver, especially when you’re considering the intricacies that are posed by the software supply chain.

Priyanka Raghavan 00:12:37 That’s really interesting. So what should companies protect, say, when I’m evaluating a third-party software risk? Is it the package, to make sure that that’s well protected, or?

Charlie Jones 00:12:48 Well, it ultimately depends, right? There are a number of risks that are presented by a third party. Cybersecurity isn’t the only risk. There’s privacy risk, there’s ESG risk, there’s financial viability risk, right? The risk that one of your vendors goes insolvent and can no longer provide that product or service to you. Now all those things need to be considered when deciding whether to outsource a function. So the risk presented to an organization will ultimately depend on the third-party type, and so will the way that you manage that risk. So I’ll give an example: a commercial off-the-shelf software supplier is very different from a professional service provider like a consultancy, because a consultancy is providing augmented staff or additional people and a software vendor is providing an actual product. So the biggest piece of advice I give when discussing how to effectively manage third parties is to resist the natural urge to adopt that one-size-fits-all mentality, and make sure that the risk management activities that you actually perform on that vendor are specific to the actual product or service that you consume.

Charlie Jones 00:13:56 So for software suppliers, like you said, yes, that means looking at the security and integrity of the software at the actual binary level, because that’s where the risk resides for that vendor, versus that consultancy example we talked about, where maybe you’re looking deeper at hiring or background check or termination processes because the service you’re consuming is people, it’s augmented staff. So the point is, risk is going to differ across every different vendor that you operate with. So the oversight and risk management activities that you perform should be unique to the product or service that the vendor provides.

Priyanka Raghavan 00:14:33 Okay. That’s very interesting. And you’re right about the one size fits all. I think there are a lot of places where we try to take that approach because it’s easy to set it up that way.

Charlie Jones 00:14:42 Yeah, and it’s not just me saying that; we have oversight bodies that say that too. So I’m based out of London and we have the National Cyber Security Centre, the NCSC, and they have guidance on vendor risk management, and one of the things that they say is exactly that: don’t apply a one size fits all; create a third-party risk program which is catered to supplier types. So it’s not just me on my high horse saying it, there are other oversight bodies saying that as well.

Priyanka Raghavan 00:15:08 Okay. So let’s move on to the next section where we do a little bit of a deep dive into some of these concepts. So now when you have this kind of software supply chain, are there like personas, like a producer and a consumer, and what are their responsibilities maybe that you could kind of define for us?

Charlie Jones 00:15:24 Yeah, sure. So I’d say there are probably two main stakeholder groups that traditionally come up when we talk about software supply chain security, especially when we’re talking about it within the context of enterprise security risk management. The first would be publishers of software. Those are organizations that develop and sell software. So think about the Microsofts, the Oracles, the Adobes of the world. And then there are enterprise consumers of software, so organizations who procure and deploy software to operate a certain aspect of their business. Now in reality, yes, there are a number of other stakeholders, between open-source maintainers and resellers and distributors, that make up that broader ecosystem, but publishers and consumers are, in my opinion, the main stakeholder groups because they often are the only ones that have formal requirements levied upon them by bodies like legislators and regulators as well.

Priyanka Raghavan 00:16:19 And so in this case I wanted to ask you, what is the process for identifying these security supply chain risks? So, you have a publisher producing something, so how do I go ahead, what’s the process for identifying these things? Is this something that’s run from the TPRM program based on the guidance, or how does one start managing this?

Charlie Jones 00:16:41 I guess it depends on the persona, right? So for publishers you have everything from the creation of source code and making sure that source code is securely built and designed. You have the integration of third-party components through your CI/CD pipeline, making sure that the sources that you’re pulling those components from are secure and that they’re not fake components masquerading as legitimate ones. You have the compilation of all that source code into binary format and the inclusion of additional components that get added, making sure it’s safe before you publish it. So doing a final build exam, essentially, on the publisher side. And then you have the release of it to the customer or the consumer of the software. And then on the consumer side you have a separate set of checks. So rather than a final release exam, it’s a final deployment exam.

Charlie Jones 00:17:35 So before I bring this software into my environment, I need to test it to make sure it’s secure and offload my responsibility. And then for every update thereafter, I need to make sure that that update is secure. So all the patches, hot fixes, feature releases, those kinds of things. And then I need to continually monitor that software throughout its lifecycle, looking for either emerging risks and threats in new components that are added, or old components that were legitimate and safe at one time that are no longer safe, the Log4js of the world and whatnot. So it really depends on the persona that you’re talking about and how they need to manage it, and the stage of the lifecycle that it sits in.
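To illustrate one small, mechanical piece of the consumer-side check described above, here is a minimal Python sketch (not from the episode, and not any vendor’s tooling) that diffs the component inventory of the currently deployed release against a candidate update, so anything added or changed can be re-reviewed before the update is deployed. The component names and versions are hypothetical.

    # Minimal sketch: flag component changes between the deployed release and a
    # candidate update so they can be re-reviewed before deployment.
    # The inventories below are hypothetical examples, not real product data.

    deployed = {"libfoo": "1.4.2", "openssl": "3.0.13", "zlib": "1.3.1"}
    candidate = {"libfoo": "1.5.0", "openssl": "3.0.13", "newdep": "0.9.1"}

    added = set(candidate) - set(deployed)
    removed = set(deployed) - set(candidate)
    changed = {name for name in set(deployed) & set(candidate)
               if deployed[name] != candidate[name]}

    for name in sorted(added):
        print(f"ADDED   {name} {candidate[name]} -> needs review before deploy")
    for name in sorted(changed):
        print(f"CHANGED {name} {deployed[name]} -> {candidate[name]} -> re-review")
    for name in sorted(removed):
        print(f"REMOVED {name} {deployed[name]}")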

Priyanka Raghavan 00:18:15 So I was just wondering, one of the things we could do now is probably look at some case studies where things went wrong. We had another show, Episode 535, where they went through the SolarWinds attack as well as Codecov. There’s this other attack on 3CX; is that something that you can take us through and tell us what happened, and how do organizations maybe protect themselves?

Charlie Jones 00:18:38 Yeah, 3CX is a really interesting one because it’s probably the first example we’ve broadly seen in the industry of what we’re calling a cascading supply chain attack. For those who aren’t familiar with 3CX, they’re a software publisher, and they had one of their flagship products, a desktop phone software, breached. Mandiant was brought in after the attack to perform an investigation to find out what happened. What they found was that the initial breach of 3CX actually occurred because an employee had, without permission, downloaded some third-party software package that wasn’t relevant to their job role. It was an application called X_TRADER. That application had a backdoor embedded within it, which allowed malicious actors into their environment. Once breached, attackers then infiltrated the build pipeline of 3CX, inserted malware into the product, and then used their release process as a way to distribute that malware down to all the downstream customers of 3CX.

Charlie Jones 00:19:33 So in a vacuum, some may view that as a failure by the publisher to secure the SDLC process, if we talk about controls between the publisher and the consumer. But in my opinion, I think it’s more important to recognize that the root of the attack actually stems back to the importance of making sure that all software is tested before it’s either allowed into your network or onto an enterprise asset like an employee desktop or a developer desktop in this case. So it shows that yes, even if you’re a publisher, those publisher controls are important, but if you’re a publisher you’re likely also a consumer of software. So it’s just as important to protect your third-party and commercial software estate as it is the software that you are developing in-house.

Priyanka Raghavan 00:20:21 That’s, I think, very insightful. So you said two things: for an organization to protect themselves from this type of attack, one is of course protecting what you’re building, and also, I think, looking very closely at your developer machines and things like that and making sure that attack vector is also plugged. So the term you used, cascading attack, could you again explain that to me? This is an example, but is there a way to define it, and are there other examples?

Charlie Jones 00:20:51 It’s a relatively new term, so I’d probably summarize it as a double supply chain attack, where the initial entry point of an attack is through the consumption of third-party software. Like in 3CX’s case it was this X_TRADER application. So it’s targeting a single user, which in turn results in the compromise of a much more widely used package, with the ultimate goal of reaching thousands or hundreds of thousands of downstream customers. So it creates this, I know I’m using the term again, but cascading attack path that’s expanding downstream within the software supply chain.

Priyanka Raghavan 00:21:25 So one of the things is, of course, there’s this Secure Software Development Framework that was created by NIST to handle supply chain risks. Is that something that you can give us an overview of, and should organizations be looking at it and adopting it?

Charlie Jones 00:21:42 Yeah, so NIST actually has a special publication series which provides technical guidance on a number of specific security domains and topics. So SSDF is NIST Special Publication 800-218, the Secure Software Development Framework. It was published last February, so just over a year ago now. It provides a set of 40 best-practice controls, which they call tasks, to enable secure software development. And so for any organization that provides software to a US government agency, they must now complete a written attestation. So they must say, on paper, that they meet the standards which are defined within SSDF, all 40 of those practices. And that’s applicable to any new software that’s built, any major version changes to existing software that’s already provided, or in the case of a contract renewal with the US government. And if they don’t meet those requirements, it doesn’t mean that they can’t continue to provide that software; it just means that they need to provide their plans to address and remediate any of the shortcomings that exist against that framework.

Priyanka Raghavan 00:22:49 I listened to your talk from RSAC 2023, and you made a comment on this Secure Software Development Framework, that there’s a certain limitation: you said the presence of a vulnerability does not indicate that a software package has been compromised or presents an immediate threat to the publishing or acquiring organization. Can you explain that?

Charlie Jones 00:23:10 Yes. It’s an issue that I’ve been pretty public about with SSDF, and we’ve actually shared it with NIST directly through various feedback mechanisms, but it’s this kind of hyper focus that not only SSDF but other frameworks out there have on vulnerability detection and vulnerability management. And I pick on SSDF because more than 50% of those practices that we talked about focus on either the protection against, the identification of, or the remediation of vulnerabilities within software. And although yes, detection and remediation of vulnerabilities is absolutely critical to uplifting the security posture of software, the presence of those vulnerabilities doesn’t provide a really strong indication that a package has been compromised or, like you said, presents an immediate threat to a publisher or a consumer. I don’t want to mix up my words, because there’s no doubt that vulnerabilities are an important piece of the puzzle to solving the software supply chain and I don’t want to argue against that, but I think that the overly targeted prioritization of vulnerabilities doesn’t quite match up with the reality of the threat landscape, because we see a number of these techniques being actively leveraged in successful attacks which aren’t leveraging known CVEs, like SolarWinds and the 3CX example we just talked about.

Charlie Jones 00:24:31 And so in my personal opinion, I think if NIST really wants to take a risk-based approach to managing software security, they should shift their focus from vulnerability identification to identification of kind of known malicious components or known malware strains that exist within software, which is a much better indicator that a breach has occurred or that an attack is ongoing within an organization.

Priyanka Raghavan 00:24:56 It’s interesting, because while you were talking, I was thinking about the time when the Log4j patching happened, right? And I think a lot of the teams were really confused, because the communication that came out just kept rapidly changing. You didn’t know if this patch worked or the next patch would work, and finally there were certain cases where some teams were not even affected by the vulnerability, but they just had to patch, and there were a lot of problems because of that. So what should have been like a three-day affair actually turned out to be like 15 or 20 days of work.

Charlie Jones 00:25:28 And that’s honestly why I think vulnerabilities get so much attention and so much of the frameworks are made up of vulnerability-related controls: because you have these celebrity vulnerabilities like Log4j that get amplified in the media when, in reality, yes, it is a massive problem and it had a huge impact across the security community and organizations. But if you look at the number of attacks that are occurring: our threat research team internally did a study last year where they were looking at software supply chain attacks within the open-source community specifically, within a few of the major package managers (I think it was npm and PyPI), and the number of targeted or malicious attacks was almost double the number of attacks that used a CVE or a known vulnerability as the initial attack vector. So yes, celebrity vulnerabilities get a lot of attention, but it doesn’t mean that they cause the majority of issues; we’re seeing the majority of issues actually come through malware and/or targeted attacks. So it’s worth spending time and effort in your security program to protect against them too.

Priyanka Raghavan 00:26:34 Okay, thank you. So let’s move to another framework called the SLSA framework or SLSA. Can you tell us about that?

Charlie Jones 00:26:43 Yeah, so SLSA stands for Supply-chain Levels for Software Artifacts. It’s another framework, published and maintained specifically by the OpenSSF. It’s already been in use by Google for several years, but it’s specifically designed to help protect against software supply chain attacks. So it similarly has a number of requirements and controls like SSDF, but it’s actually presented in a tiered model. So the idea is it’s intended to promote security progression over time. But the reason I like SLSA is that, in addition to its requirements, it also has this kind of visual threat model, and it helps really demonstrate the breadth of attack techniques that could be used across the software supply chain. So it covers everything from typosquatting attacks in the open-source ecosystem to tampering with a build environment to secret leakage. So it does a really good job, once again coming back to the main issue I have with SSDF, of showing that yes, vulnerabilities are important, but they’re truly just one mechanism out of many that can be used in an attack.

Priyanka Raghavan 00:27:45 I think that’s a good segue into my next question, which is that one of the biggest assumptions when using open-source or, say, a third-party component is the trust that you place in it. So is there some guidance on how to trust? For example, can you make a vendor prove that they use one of these frameworks, and is that enough?

Charlie Jones 00:28:05 Well, trust is something that everyone deals with, right? First, there’s no shortage of frameworks out there which cover supply chain security; SLSA and SSDF, which we just talked about, are just the very tip of the iceberg. Forcing your vendor to prove compliance with one of those frameworks, I think, is a different thing. I think of trust just like in real relationships: digital trust isn’t something that can just be obtained or imparted on someone. It has to be earned over time. And unfortunately, in the enterprise environment, the ability to validate that concept of trust at the starting point of a relationship really depends on how much leverage you have. So for example, the US government is now enforcing SSDF, like I said, for any software vendors of theirs. They can do that because they have the grand power of legislation, and they also have a lot of buying power.

Charlie Jones 00:28:53 And then you also have some private sector entities that have very mature security programs and also significant buying power, if you think of the large financial institutions for example. So they also find ways of enforcing it. A lot of times they’ll amend their standard contracting terms to require the organizations that they work with, if they want to have their product purchased, to maintain higher development standards. And they’ll include things like SSDF or SLSA in that, or they’ll include the requirement to say, I don’t care what frameworks you abide by, I want the ability and the right to test the software myself before I purchase it. So right now there’s very little concrete enforcement in the industry, but we’re very quickly seeing expectations start to ramp up because of not only legislation and regulation that’s emerging but also, independently, the private sector as well.

Priyanka Raghavan 00:29:49 Okay. So I think maybe we’ll cover that bit a little bit more when we do the next section on legislation. I have one more question to ask, which is related to this thing called transitive dependencies. So when you use a third party, and that depends on many other components, and there’s a problem in the component usage of those other third parties, then you are kind of affected by that. So how does this affect supply chain attacks and what can you do? Because you can have a direct relationship with the component that you’re using or even the vendor that you’re using, but they are affected by somebody else. So how can you actually have a say in that? That always, yeah, gets my goat.

Charlie Jones 00:30:31 Transitive dependencies are where it starts to get very complex. I like to think about transitive dependencies very similarly to the way that we think about managing a fourth, fifth, or what we call nth-party risk in the third-party risk space. It’s basically an indirect dependency relationship, which I know is a bit of a mouthful, so the basic way I like to explain it is by walking through an example. If you have three components, A, B, and C, with component A depending on component B and component B depending on component C, then you can say that there’s a transitive dependency between A and C, as they’re indirectly linked through that middle component B. Now managing that is another thing. I think it ultimately requires a very, very granular understanding of not only the components and dependencies which are embedded or contained within a software package, but, like you said, also understanding what downstream dependencies those components actually rely on to operate when the software is being executed and running in multiple environments, which is a tough problem to manage. But I think ultimately it comes back to having a very comprehensive software inventory for not only all the components you’re using internally but externally as well, and then regularly monitoring that inventory to manage the risk of transitive dependencies when you become aware of new intelligence that may pose a risk or may pose a threat.
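To make the A, B, and C example above concrete, here is a minimal Python sketch (not from the episode) that resolves transitive dependencies from a simple dependency graph and flags which components are indirectly exposed to a risky one. The graph and the risky component are hypothetical.

    # Minimal sketch: compute every direct and transitive dependency of a
    # component from a simple adjacency map, then check whether a risky
    # component is reachable. The graph and the "risky" entry are hypothetical.

    deps = {
        "A": ["B"],        # A depends directly on B
        "B": ["C", "D"],   # B depends on C and D, so A depends on them transitively
        "C": [],
        "D": ["E"],
        "E": [],
    }

    def transitive_deps(component, graph):
        """Return all components reachable from `component` (depth-first walk)."""
        seen, stack = set(), list(graph.get(component, []))
        while stack:
            dep = stack.pop()
            if dep not in seen:
                seen.add(dep)
                stack.extend(graph.get(dep, []))
        return seen

    risky = "E"
    for root in ("A", "B", "C"):
        reachable = transitive_deps(root, deps)
        if risky in reachable:
            print(f"{root} is exposed to {risky} through {sorted(reachable)}")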

Priyanka Raghavan 00:31:55 Yeah. So can you talk a little bit about that piece on visibility? How do you build that visibility of all of that? Is it through the SBOM, that inventory management? Is that a good tool?

Charlie Jones 00:32:07 Yeah, so visibility is a big issue we hear about, especially when we talk about commercial software in the security industry. A lot of organizations struggle to gain visibility for two main reasons. One, unlike open-source software, they don’t have access to the underlying source code. So there are very few tools in their arsenal that they can use to perform testing and gain insights into that package. The second is they actually don’t have the contractual leverage to force a vendor to provide them any evidence or any insight into that software. And that second one is important because we often forget that software contracts are very different from those which traditionally govern a normal enterprise relationship. They’re structured differently; they’re written in the form of a shrink-wrap agreement or end-user license agreement. So they don’t have standard terms like the right to audit, which would enable an enterprise customer to perform proper due diligence and understand what’s in the software and whether it poses a risk.

Charlie Jones 00:33:05 So the question becomes, if you don’t have any visibility into the vendor’s security processes or the software itself, what can you do? And that’s where we really see this concept of binary analysis emerging as a really powerful option for enterprise consumers. Because it essentially allows you to not only generate an SBOM but also analyze the risk presented by all those components and dependencies within the SBOM, using just the binary itself, so without any access to the underlying source code needed. And that’s extremely powerful, because then you can start shedding that dependency on your vendor to provide evidence, and you can start empowering yourself to independently evaluate whether you can trust not only that software but all the components and dependencies which are listed in the SBOM which make it up.
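For listeners who haven’t worked with SBOMs, here is a minimal, hedged sketch of consuming one. It assumes a small CycloneDX-style JSON document (the contents are hypothetical) and simply builds the name-to-version inventory that a risk program would monitor over time; it is not the binary analysis itself, which is done with specialized tooling.

    import json

    # Hypothetical, minimal CycloneDX-style SBOM, e.g. as produced by a binary
    # analysis tool; a real SBOM would be loaded from the file the tool emits.
    sbom_json = """
    {
      "bomFormat": "CycloneDX",
      "components": [
        {"name": "openssl", "version": "3.0.13", "purl": "pkg:generic/openssl@3.0.13"},
        {"name": "zlib", "version": "1.3.1", "purl": "pkg:generic/zlib@1.3.1"}
      ]
    }
    """

    sbom = json.loads(sbom_json)
    inventory = {}
    for component in sbom.get("components", []):
        name = component.get("name", "unknown")
        version = component.get("version", "unknown")
        inventory[name] = version
        print(f"{name} {version} {component.get('purl', '')}")

    print(f"{len(inventory)} components to monitor for new advisories or threats")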

Priyanka Raghavan 00:33:52 Okay, that’s interesting. So the thing I wanted to ask you is, obviously sometimes your dependencies could run into hundreds or maybe thousands, so then doing this binary analysis becomes a bit expensive, right? So would automation help in such cases?

Charlie Jones 00:34:09 Yes. So automation is absolutely needed. I think when you think about the breadth of the supply chain and the complexities of the modern commercial software package, which, like you said, could be made up of not tens of components but hundreds of thousands of components and dependencies, the task of understanding it, securing it, and managing it is simply too large to achieve manually. So if you want to effectively manage that risk at scale, it has to be done in some automated fashion. So I think relying on technology to help offload basic things like risk analysis, rather than manually reverse engineering, is vitally important if you want to achieve success in a security program, especially when talking about the software supply chain.

Priyanka Raghavan 00:34:57 Yeah, so maybe now I can ask you: if I were to dive back into third-party risks, what are the steps that the third-party risk team should take when evaluating this component which has so many dependencies? I guess one of the things you said is that the checklist is not really a great thing, but are there any steps that they should be following, on what they should be doing, the binary analysis or what parts they should focus on?

Charlie Jones 00:35:25 It starts with some very basic foundations, like understanding who your software suppliers are. That may seem like an obvious thing to do, but you’d be amazed by the number of Fortune 500 or Global 1000 firms that have very mature security programs and struggle to answer that very foundational question. The second thing would be, once you understand who you’re working with and what software or other vendor or third-party services you’re consuming, understanding which of those suppliers present the highest risk to your business. You can’t oversee everyone, so you need to figure out who to target. And you can do that by thinking about a number of inherent risk criteria that would pose risk to your business. So are they supporting a critical or important business process, or, for software, are they connected to or even considered crown jewel systems within your business?

Charlie Jones 00:36:17 And then finally, once you understand who your software suppliers are and you’ve put them neatly into risk buckets, critical, high, medium, low, it’s establishing some sort of consistent testing methodology that you can evaluate each of those against, based on the risk that they actually pose to your business. So maybe for software suppliers that means before deployment and after every new version of software is released, you’re testing X, Y, and Z, and then maybe you don’t deploy that software if certain things are found. Like if I find malware present within my software, it is an absolute no-go: I’ll break the build, I will not ship it or I will not deploy it. Or if I find a critical-risk vulnerability that’s known to be actively exploited and therefore posted in the CISA KEV catalog. And then making sure those issues are documented and mitigated, obviously. So to summarize, three steps: understand who your suppliers are, risk-rank them, and then apply some repeatable testing methodology based on risk.
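As a sketch of what one of those repeatable testing gates could look like in practice, here is a hypothetical Python example: it blocks a deployment if any scan finding is classified as malware or if a finding’s CVE appears in the CISA KEV catalog. The findings structure and the tiny KEV subset are made up for illustration; in practice the KEV set would be parsed from CISA’s published JSON catalog (whose "vulnerabilities" entries carry a "cveID" field).

    # Minimal sketch of a pre-deployment gate: block the release if the scan
    # found malware, or a vulnerability listed in the CISA KEV catalog.
    # The findings and the KEV subset below are hypothetical illustrations.
    import sys

    kev_cves = {"CVE-2023-34362", "CVE-2021-44228"}  # tiny stand-in subset of the KEV catalog

    findings = [  # hypothetical output of a software scan
        {"category": "vulnerability", "cve": "CVE-2023-34362", "detail": "MOVEit Transfer SQL injection"},
        {"category": "malware", "detail": "known trojanized installer"},
    ]

    blockers = []
    for finding in findings:
        if finding.get("category") == "malware":
            blockers.append(f"malware detected: {finding.get('detail', 'unknown')}")
        cve = finding.get("cve")
        if cve in kev_cves:
            blockers.append(f"{cve} is in the CISA KEV catalog (known exploited)")

    if blockers:
        print("DEPLOYMENT BLOCKED:")
        for reason in blockers:
            print(f"  - {reason}")
        sys.exit(1)

    print("Gate passed: no malware, no KEV-listed vulnerabilities")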

Priyanka Raghavan 00:37:17 Okay. So suppose now the third-party function has actually gone ahead and done these things, and they tell me that this COTS software is fit to use, or the component is fit to use as well, if I’m using it as part of something larger. Then is it good enough? Do I also need to reassess it periodically?

Charlie Jones 00:37:38 Yeah, the short answer is no, it’s not good enough. And I say that because, once again, if we think about this in the context of a broader TPRM program, software vendor relationships are very different from the traditional enterprise relationships that we may think about. It’s not like a BPO, an outsourced business process, let’s say back-office accounting for example, where the function or the nature of the service never actually changes after you onboard them; it’s the same from onboarding to offboarding. Software is very dynamic, it constantly changes. And so when a new version of software is released, the whole risk profile that’s presented by the vendor can change if the underlying components, dependencies, or versions of any of those change as well. So no, a single point in time is not enough. It should be regularly reviewed throughout the lifecycle of the package.

Priyanka Raghavan 00:38:30 And so I guess that is one thing: there is obviously a cost associated with this, and I think that’s something that we never really think about when we actually do a T-shirt sizing. Maybe that’s something that we need to also put in when we deliver software, right? So that upper management is also aware that when we use a third-party software component, there’s also a maintenance cost there.

Charlie Jones 00:38:59 Absolutely.

Priyanka Raghavan 00:39:00 Yeah. So I was going to ask you, as a consumer, if you were to look at the state of affairs today, are there any existing problems in the way we are evaluating software? Anything else that strikes you that you think businesses should know about?

Charlie Jones 00:39:14 Yeah, a few that we already talked about, things like the lack of visibility into software, which comes from a mix of the reasons we talked about: contract limitations, or the pushback on the perceived invasive nature of testing for that vendor. Things like the lack of scalability, where most traditional processes are largely manual, driven through things like vendor questionnaires or manual testing. But another thing that we really see emerging is this growing concern over the level of assurance that can actually be derived from these traditional testing methods like questionnaires, because they’re ultimately based on self-attestation from the vendor, like SSDF for example. If someone’s providing software to the US government, they have to self-attest that they’re meeting these requirements. So in other words, they are telling you how secure they are, based on interviews they provide and evidence they curate. So it doesn’t really provide a true representation of risk. And as a result, we’re seeing a lot of professionals kind of scrambling to look for other ways to achieve assurance.

Priyanka Raghavan 00:40:22 Okay. So I think one of the things I learned today was about this binary analysis. I think that’s probably a good tool to have in your arsenal, right? Apart from the questionnaire. And I’m sure there are other things, but I think that would be one good thing. Or would you say, I mean, I think that’s one thing that just stayed with me, but is there any other newer way of doing things?

Charlie Jones 00:40:41 Yeah, I think binary analysis is a really interesting one, and Gartner actually put it really interestingly in a recent analysis report they did on commercial software risk, which I’m glad to share with everyone. They ultimately framed binary analysis as a way for enterprise consumers to confirm documentation that’s provided by the vendor, which is a really interesting way, I think, of putting it. So not solely relying on what they’re telling you, but validating it yourself. And maybe I can even give a quick example of how binary analysis works for people that aren’t familiar. Essentially, it takes a fully compiled binary package, so one that you may purchase off a vendor website, and it deconstructs it without any manual prep or manipulation required. And then it takes all of the objects which it’s extracted. So yes, the components and dependencies which may be listed in a standard SBOM, but also think about all those other embedded objects which may exist within a package: device drivers, installation files, embedded images, whatever they may be.

Charlie Jones 00:41:45 And then it analyzes them for risks and threats. And it’s traditionally done not only through binary analysis but also by layering it with file reputation services, threat intelligence feeds, and other technologies like AI and machine learning. That’s when you can start to do some really cool stuff to understand whether you can trust not only a package but all the underlying objects which exist within it. And then, once again, finally putting that all neatly together in an SBOM. So it’s not just binary analysis; it’s a lot of these other types of technology that you need to layer it together with to understand what’s in the software that you publish or consume. So it’s actually useful for both personas. But then, once again, what is the risk that it presents? Because SBOMs only take you so far, right? They tell you what’s in your software; you need to understand if that presents a risk or a threat to your business or your customers.
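To illustrate that layering idea in code, here is a deliberately simplified Python sketch: it walks the objects extracted into an SBOM and checks each file hash against a reputation source. The lookup is just a local stub dictionary standing in for whatever file reputation service or threat intelligence feed an organization actually uses; the object names and hashes are invented.

    # Deliberately simplified sketch of layering an SBOM with a reputation source:
    # every extracted object's hash is checked against known-bad intelligence.
    # The SBOM entries and the reputation data are stand-in stubs, not a real feed.

    sbom_objects = [  # hypothetical objects extracted from a binary package
        {"name": "installer.msi", "sha256": "aaa111"},
        {"name": "driver.sys", "sha256": "bbb222"},
        {"name": "helper.dll", "sha256": "ccc333"},
    ]

    # Stub standing in for a file reputation service / threat intel feed lookup.
    known_bad_hashes = {"bbb222": "known trojanized driver"}

    def verdict(obj):
        reason = known_bad_hashes.get(obj["sha256"])
        return ("malicious", reason) if reason else ("no known threat", None)

    for obj in sbom_objects:
        status, reason = verdict(obj)
        detail = f" ({reason})" if reason else ""
        print(f"{obj['name']}: {status}{detail}")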

Priyanka Raghavan 00:42:39 Very interesting. So I think it makes sense, now that we’ve covered a lot of this, that we talk a little bit about laws and regulations. Can you talk us through that? What are the laws and regulations in this space that companies need to adhere to?

Charlie Jones 00:42:55 It’s funny, I get this question probably the most frequently out of any, and I feel like it’s because there’s a new law or regulatory requirement that pops up every day on the back of a new attack. So we have the obvious ones that I’m sure people have talked about ad nauseam, things like the initial Executive Order 14028 that was published by the Biden administration in the US. You also have the White House memo M-22-18 that followed that. Those are mainly focused on software suppliers providing products and services to the US government, like we talked about. But in the US you also have industry-specific regulations like those from the FDA, which enforce stricter requirements on software that’s embedded within medical devices. You also have a number of regulations emerging in Europe, things like the Cyber Resilience Act, which covers commercial software publishers in Europe. You also have regulations on the consumer side in Europe.

Charlie Jones 00:43:48 So the Digital Operational Resilience Act, which you’ll often hear referred to as DORA. That specifically applies to financial entities in Europe which are consuming commercial software, and it sets expectations for those financial entities to protect against attacks on that software. And then you also have requirements emerging in regions like APAC. So you have the Monetary Authority of Singapore; they have Technology Risk Management Guidelines, which have a whole section on their expectations around software supply chain risks very specifically. So there are a number of emerging laws and regulations, all either in effect or coming into effect very soon. The Cyber Resilience Act and DORA, for example, come into effect within the next 12 months.

Priyanka Raghavan 00:44:31 Okay. So I think what would also be interesting for our listeners is, would you have some examples, if it’s okay to share, where violations of these laws caused significant damage?

Charlie Jones 00:44:42 Yeah, so SolarWinds is probably the one that gets the most attention because of the charges of fraud and the ongoing actions that are being taken by the SEC right now. So I won’t spend too much time on that. But I think another interesting one we briefly discussed before was the MOVEit application. I didn’t go into it in too much detail, but for those listeners who aren’t familiar, MOVEit is a file transfer application published by Progress Software. It had a critical-severity vulnerability within it that was being actively exploited by a ransomware group called Clop. And they ultimately got ahold of a very large volume of downstream customer data. And as a result of that breach, Progress, who’s the publisher of the software, has not only an ongoing investigation from the SEC, very similar to SolarWinds; they have more than 20 customers that are seeking indemnification from the attack. They have insurance providers who are seeking separate compensation for Progress being essentially the at-fault party in the publisher-consumer relationship. And then on top of that they have, I think, 50-plus class action lawsuits from individuals claiming to be affected by this data extortion. So we can very quickly see that this isn’t a hypothetical threat vector that holds potential risks. It’s very real, and it can cause very substantial not only financial but reputational impact as well.

Priyanka Raghavan 00:46:04 So actually, I think this clearly demonstrates that if you don’t look at your third-party risk, even for software which you do not directly own, you are still accountable if things go wrong.

Charlie Jones 00:46:18 Absolutely. And it’s starting to be looked at as negligence, essentially, in the legislative and regulatory realm. As you said, even if it’s not your own software, it’s third-party software, it goes back to an earlier point: you have made a decision strategically as a business to not develop it in-house but to outsource it. So you’re still responsible for protecting it in that capacity.

Priyanka Raghavan 00:46:41 Yes. I think I’ll think about this the next time I pick a package where I’m just doing a pip install. Yeah, I think it does really make sense. I would actually like to close this session with maybe one last thing I want to ask you. In your opinion, what are the top three things that companies, or maybe even individuals, should do to protect themselves from software supply chain attacks?

Charlie Jones 00:47:07 Yeah, I mean, it may seem repetitive, but I think it goes back to foundational security. There’s nothing incredibly complex that you need to do. It first starts with understanding the software that you’re using, then understanding which of that software presents the most risk to you, because you’re not going to be able to oversee or test or inquire about all of it, and you can do that using some of those inherent risk characteristics that we talked about. And then finally, and most importantly, thinking about what those repeatable stage gates are for testing that most critical software, which you can deploy throughout the entire lifecycle of use, so you can actually establish trust with it before using it. And then, we actually talked about it too, making sure that last step, that testing, is actually supported by some level of technology, some level of automation, so that you can actually keep pace with the speed of business without impeding or impacting operations. We want security to be viewed as a value enabler, not a value protector, right? So how do you keep pace with business while still protecting your business accordingly?

Priyanka Raghavan 00:48:15 And I guess I did say that was the last question, but based on what you said, are there any tools that you’re aware of that can support this testing? Or is it just the things that we’ve already discussed for testing this software?

Charlie Jones 00:48:30 It’s a lot of the things that we’ve talked about. I mean, ultimately it depends on the persona you are and the lifecycle stage that you’re in as a developer or consumer, right? There are tools that look at source code, there are tools that look at the evaluation of packages as they come through your pipeline, and there are tools that can only look at the final package when you’re about to ship it, because it’s in a binary format, right? So it ultimately depends on where you are trying to get assurance over the software that you’re building. I typically suggest starting with the very final stage, right before you’re about to ship something, because that’s the simplest. But over time there’s always that concept: can you shift left into your development pipeline and find issues earlier? It’s all about understanding where you are in your maturity lifecycle and figuring that out, and then what the needs of your business are, ultimately.

Priyanka Raghavan 00:49:22 It makes one think that if you really don’t require a package, you’re probably better off just not using it; just don’t get unnecessary packages. I think that’s another thing as a developer. And I also remember we did this other episode on obfuscation, and there was an interesting question that I posed to the guest there, Prof. Ross Anderson. I remember asking him, would we be better off if we write the code ourselves rather than actually getting a third-party component? And he said yeah, maybe in a lot of cases you’re better off doing that than actually bringing it in, because there are a lot of these factors. And now, talking with you, I think I can see a lot of the regulatory aspects as well. So yeah, I think there’s lots to think about as an enterprise on software supply chain. So thank you for this.

Charlie Jones 00:50:10 Of course. It was a very fun session.

Priyanka Raghavan 00:50:12 Yeah, okay. And I guess before I let you go, there’s one question on the best way for people to reach you in cyberspace. What would that be?

Charlie Jones 00:50:20 Yeah, I’m always active on LinkedIn. I post quite a bit of educational content or thought leadership surrounding software supply chain security more generally. So please don’t hesitate to reach out and connect. I’d love to continue this conversation.

Priyanka Raghavan 00:50:34 This is great. Thanks for coming to the show, Charlie.

Charlie Jones 00:50:36 Thanks for having me. It was a lot of fun.

Priyanka Raghavan 00:50:38 This is Priyanka Raghavan for Software Engineering Radio. Thanks for listening.

[End of Audio]
