Steve Summers speaks with host Sam Taggart about securing test and measurement equipment. They start by differentiating between IT and OT (Operational Technology) and then discuss the threat model and how security has evolved in the OT space, including a look at some of the key drivers. They then examine security challenges associated with a specific device called a CompactRIO, which combines a Linux real-time CPU with a field-programmable gate array (FPGA) and some analog hardware for capturing signals and interacting with real-world devices.
Brought to you by IEEE Computer Society and IEEE Software magazine.
Show Notes
Related Episodes
- SE Radio 639: Cody Ebberson on Regulated Industries
- SE Radio 587: M. Scott Ford on Managing Dependency Freshness
- SE Radio 541: Jordan Harband and Donald Fischer on Securing the Supply Chain
Transcript
Transcript brought to you by IEEE Software magazine.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.
Sam Taggart 00:00:18 This is Sam Taggart for SE Radio. I’m here today with Steve Summers. Steve is the security lead for aerospace and defense systems at NI and focuses on the security of mechanical test systems. He has worked in the test and measurement industry for more than 25 years. In full disclosure, I personally am an NI partner and LabVIEW champion, and today Steve and I are going to talk about securing test and measurement equipment. And before we get started, we’ve talked about similar subjects on this podcast in episodes such as Episode 639, Cody Ebberson on Regulated Industries, Episode 541 with Jordan Harband and Donald Fischer on Supply Chain Security and 587 with M. Scott Ford on Managing Dependency Freshness. Welcome Steve.
Steve Summers 00:01:03 Thank you.
Sam Taggart 00:01:04 Let’s start by defining test and measurement equipment. What exactly are we talking about securing?
Steve Summers 00:01:10 Great question. When I talk to engineers, of course I talk about the ability to test products that they’re making. But if I’m talking to my grandma or my grandpa and trying to explain what we do in test and measurement, what we do is we help engineers test the products that are delivered to customers, right? When you buy a new phone, you don’t want it to come out of the box dead. If you buy a new car, you don’t want any of the parts to not work. So we’re helping to test all of those components and the systems before they deliver. Really what we are, it’s the interface between the physical and the virtual world, right? Because if you’re testing an airplane wing, you need to bring those signals into your computer somehow. And because we’re playing that interface role of connecting from the real world to the virtual world, that makes security kind of interesting and also really important because now we’re actually touching things.
Steve Summers 00:01:57 And in the test world, that means one thing, but the fact that we play that broader role of just interfacing to the real world means that in some cases we’re controlling pumps and valves and electrical circuits and electrical grids, and we’re doing solar power testing and those kinds of things. All of that is more interesting in this new security world because now if somebody can break into one of our test systems or into one of our systems that’s connected to the real world, that gives them a way to go from their malicious habitat, right, into an actual physical thing, which might be a self-driving car, it might be a picture frame as we’ll talk about. It might be all kinds of different things. So that’s what we’re trying to get to, is how do we secure those things that allow us to connect to the real world so we can do things like perform test.
Sam Taggart 00:02:42 So if I understand you correctly, what you’re saying is that the consequences can be much higher with this type of equipment as opposed to a computer system that’s just a database for a bank or something like that?
Steve Summers 00:02:53 Yeah. If you think about some of the more interesting stories we see on the news, you hear about banks and schools and hospitals being hijacked for money, and that’s really bad. I’m not trying to downplay that at all. That really stinks. But the stories that become really interesting is when they cut off our gas supply, when they cut down an electrical grid, when they interfere with our traffic lights, when they interfere with the products that we have. And so this world of operational technology is how we kind of differentiate from informational technology. So this world of operational technology is a big fat target because the consequences of it can be so much greater than just draining your bank account.
Sam Taggart 00:03:29 So when you say operations technology, is that when I hear people refer to the word OT, that’s what they’re referring to?
Steve Summers 00:03:34 Exactly. And so you’ll see in some of the government documentation, they’ll differentiate between an IT system and an OT system. And that’s what they mean is operational technology.
Sam Taggart 00:03:43 So if I wanted to understand that correctly, then IT would be something that is more informational, more databases and transferring data back and forth, whereas OT is more about interacting with the real world.
Steve Summers 00:03:54 Yeah, so think about operational technology as you can think about it as the back end of the office. So the front end of the office, all the websites and the finance systems, all of that is informational technology. And the back end is the PLCs, the robots, the automation, the field, things like valves and airports and all of those pieces. Those are all operational technology.
Sam Taggart 00:04:13 So you used the term PLCs. Do you want to say what that is, just for those who might not know?
Steve Summers 00:04:18 Yeah. So when you start getting into automating something, right? If you’re automating a production line, or if you’re automating a roller coaster, you need a controller that can control that world. And most often that is done through discrete inputs and outputs. And one very common way of doing that is with programmable logic controllers. And those are PLCs. So those are made by big companies like Allen-Bradley and Siemens, and they’re programmed through digital logic. And those are very, very common. My company, National Instruments, we don’t make PLCs, but because we’ve played this world of the interface between the real world and the virtual world, one of the interesting things that we do is that we make analog controllers that can control some of those circuits. So sometimes, rather than just looking at a gate or a door and saying, is that door open?
Steve Summers 00:05:03 If the door is open, then flash this light, which is what a PLC is great for. We look at things like how fast something is changing. Is something vibrating? Is it vibrating out of control? If so, then go turn this other pump on or turn it off. So we’re controlling analog circuits by reading analog signals. That’s a lot harder for a PLC to do. And so that’s actually something that we do really well because we come from the world of analog circuitry and doing all the other kinds of testing and the other interfacing that we talked about.
Sam Taggart 00:05:32 In general, what is the threat model for these types of OT systems?
Steve Summers 00:05:37 So that’s a good question. So the threat model, it varies a little bit by application, a lot by application, right? So we are doing everything from testing a silicon chip that’s going to be mass-produced in the millions. We’re testing some of those on semiconductor production lines. We’re testing laptops and cell phones, we’re testing medical devices, we’re testing airplanes and airplane components. And we’re controlling valves, as I was describing a minute ago, we’re controlling those other broader systems. And so that question of threat modeling is something that every engineer has to look at and think about specifically for their system. But if you were to generalize it, if you are at the end of the production line and you’re testing, that’s a juicy target for a hacker or a malicious actor to place some kind of malicious code that he can then spread in mass quantities out to the world.
Steve Summers 00:06:24 So a few years ago there was an incident with these picture frames that we’d buy and give to our grandparents for Christmas. You can put one on their wireless network, and then you can upload your photos to those photo frames. So those are cool, and I’ve got one in my house. And when those hit the end of the production line a few years ago, there was a tester in the production area, in China or wherever it was, that had a virus, and it was spreading that virus to the photo frames. And those photo frames were being delivered; they’d go to our houses. And then once a frame got on a home network, it was spreading within our homes across that into some of the computers on our network. And so that attacker, by spreading and hitting that production target, he was able to then spread his virus out to a whole bunch of homes and other network targets he may not have otherwise had access to.
Steve Summers 00:07:12 So that’s kind of an idea of what can happen in that threat model. Now imagine that he’s not targeting photo frames for grandma. Imagine that now he’s targeting controllers for an F-35 jet, right? And he wants to put some malicious software on that. If he can get to the test system that is testing an F-35 or is testing a 747, or if he can get onto the station that’s testing your cell phones, I mean that’s a pretty good target for him to get to so that he can drive his malicious code out to many, many different devices and critical devices. So I think that’s kind of the main one we think about when we think about test. When we think about these programmable controllers that we can put out there, now you’re talking about a target that may be controlling a vital asset, right? Like an electrical grid, water purification systems, big systems like that. And that target and that mechanism, the threat model there is a little bit different, but still has a pretty juicy target behind that.
Sam Taggart 00:08:01 So if I understand correctly, the OT stuff that we’re talking about, you’re kind of dividing into two groups. So there’s the test group, and in that case the target is often whatever you’re testing. And the other group is more of like industrial type control systems or something along those lines. And in there the actual system that’s being controlled could be the target.
Steve Summers 00:08:19 That’s right. Okay. And there’s a lot of industrial control out there and there’s so much industrial control that when government regulators and security experts think about operational technology, they’re primarily thinking about industrial control systems. My point here is the other half of OT is something we don’t think about a lot, but it’s the test systems, it’s the testers. And so securing those testers is a really important thing that we have to also have threat models and defenses set up in order to protect that because we touch so many different devices coming out of those testers.
Sam Taggart 00:08:50 And I also imagine that could scale really well as well. If you have, for example, a factory producing iPhones, how many iPhones can they produce in a week or a month?
Steve Summers 00:09:00 Exactly. Yeah.
Sam Taggart 00:09:01 That’s a lot of targets.
Steve Summers 00:09:02 Yeah. And some of them are pretty smart devices, right? So, a valve turns on and off and you can do some things, but most consumer products that are made have some kind of a controller inside. And so if they can get to the operating system, the firmware that’s down in those systems, and embed something, they not only have breadth in what they can expand to, but there’s a lot that those devices are capable of, and the world is going more and more in that direction, right? So as we expand now more into this IoT world and your refrigerators, your toasters, your cars, all of those things become more connected to each other. That just opens up the gate now for more of these attacks to come in and hit those things.
Sam Taggart 00:09:41 It’s interesting you mentioned firmware because I talked to a lot of test engineers and part of the test sometimes is making sure that the device that they’re testing has the latest firmware, so they’re writing firmware to the device, in which case if somehow somebody maliciously injected something in there, it would get into the device.
Steve Summers 00:09:56 Yeah. Or a lot of these test lines, they’ll put some test firmware down on the device and then remove that and then download the final firmware. So yeah, most, or not most, but a lot of test systems have access to the firmware to write that software down. So an attack there could be lethal.
Sam Taggart 00:10:13 Another big challenge with a lot of the test and measurement systems is that many of them are programmed using a language called LabVIEW and perhaps another tool called TestStand. Do you want to talk about what those are a little bit and how they work and some of the challenges?
Steve Summers 00:10:25 Yeah, and those are challenges for us specifically because those are our products, right? LabVIEW is a great engineering tool. It’s a programming language. It’s a programming language that allows you to program graphically. So as a programmer, we often think in terms of flow and how a program might flow. Like first I’m going to collect some data from this device, so I’m going to record the temperature coming off of this device and then I’m going to evaluate that temperature. And based on that I’m going to make a decision and then I’m going to output some signal. And each of those is kind of a step. Well, in LabVIEW, you actually just draw with icons, you draw that flow out. And so there’s an icon that acquires the temperature and there’s an icon that does some kind of math and there’s an icon that puts that on a chart.
Steve Summers 00:11:05 There’s an icon that evaluates that against some limits. And it’s cool software because for somebody who doesn’t know how to program, you can just drop that down and you have access to all of the programming tools that programmers have. And around the world there are thousands and thousands of LabVIEW developers, and I personally love LabVIEW because it’s fun to program in. But I also get to do things that I don’t really have to have a degree to be able to do. If you are a good software developer and you have good software engineering skills, you can bring those into the LabVIEW world and you can really leverage those. So for example, the fact that it’s graphical means that in one picture you can draw two different loops that are operating at different speeds. And so now you’ve got a multi-threaded application without doing any kind of thread handling.
Steve Summers 00:11:49 And all of that happens naturally inside of LabVIEW. And you can just have these different loops doing different things at the same time. So it’s a pretty fun world to be able to do this stuff in LabVIEW. LabVIEW, though, does present unique challenges for security because the industry has developed a lot of standard tools around text-based languages to evaluate the security of a text-based language, right? So if I write an application in Python or in C++, there’s a lot of tools that I can use to go and scan my code. When I write code in LabVIEW, it’s graphical and I don’t really have access to those same tools. And so the approach that you take for evaluating your LabVIEW code is a little bit different than in other text-based languages because we just don’t fit into that broader ecosystem of text-based languages.
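The two-loops idea Steve describes, parallel loops running at different rates, can be sketched in a text-based language too, though it takes explicit thread handling that LabVIEW hides from you. A minimal Python illustration (the loop bodies are hypothetical stand-ins, not NI APIs):

```python
import threading
import time

def acquisition_loop(stop, results):
    # Stand-in for a fast loop, e.g. reading a sensor at ~100 Hz.
    count = 0
    while not stop.is_set():
        count += 1              # pretend this reads a temperature
        time.sleep(0.01)
    results["fast"] = count

def display_loop(stop, results):
    # Stand-in for a slower loop, e.g. updating a chart at ~20 Hz.
    count = 0
    while not stop.is_set():
        count += 1              # pretend this redraws a chart
        time.sleep(0.05)
    results["slow"] = count

stop = threading.Event()
results = {}
threads = [threading.Thread(target=acquisition_loop, args=(stop, results)),
           threading.Thread(target=display_loop, args=(stop, results))]
for t in threads:
    t.start()
time.sleep(0.5)                 # let both loops run for half a second
stop.set()                      # signal both loops to exit
for t in threads:
    t.join()
print(results)                  # the fast loop iterates several times more often
```

In LabVIEW, drawing the two loops side by side on the block diagram gives you the same concurrency without the `threading` boilerplate, which is the point Steve is making.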
Steve Summers 00:12:33 Now the other thing you mentioned was TestStand. TestStand is a sequencing engine. So if you think about when you run a test, let’s say you’re going to test a printer, you’re going to run through and test maybe a hundred different functions of that printer to make sure that they all work, right? So you’re going to rotate one of the wheels and make sure that it turns the correct amount. You’re going to look at the torque on that wheel and make sure that that wasn’t out of line or whatever. So you’re going to run maybe a hundred, maybe a thousand tests. And as a programmer, when I write my tests, I have to think about writing the individual step and how I’m going to access the real world, right? How do I record the torque on that wheel? How do I record the amount of turns that it turned when I told it to turn?
Steve Summers 00:13:13 How do we record the voltage going into the wheel motor? That kind of thing. That’s the step function. But then there’s also how do I pass data from one step to the next, and how do I put that into the report? How do I manage the user that is logged into everything? And that’s what we would call the test executive functions, right? So it’s managing those steps that you write. TestStand is written to do all of that for you and allow you to write those steps in any language that you want, and you can mix and match those. So if you have a team of developers, some of them use Python, some of them use C or C#, some of them use LabVIEW, they could each write their code and combine those back together. And the executive functions, so stepping from step to step and writing the report, all that stuff is done for you inside of TestStand. And then there’s testing TestStand for security.
Steve Summers 00:14:00 The challenge there is that most testers, most security experts, don’t really understand that differentiation between running an actual step and a sequencer. So when they want to look at, like, where’s the code? Well, TestStand is not code; TestStand holds code. So how do you test the container? And again, that’s not a real mature security market. So we’ve had to kind of develop our own approaches to those and then work with security experts to train them to say, hey, this is what you’re looking for and this is how well it works, and just kind of work with them to make that happen.
Sam Taggart 00:14:32 So if I understand correctly, then TestStand’s kind of like a meta language. So in TestStand, I define these are the test steps that I want to run and this is the order, and maybe these repeat and these loop around and these go in the database and these don’t. I define all that at the TestStand level. But then the individual steps are all small chunks of code that reach out to the real world.
Steve Summers 00:14:53 That’s right. So you can execute and write those small snippets of code really quickly without worrying about how it’s going to fit into the overall piece. How am I going to sequence you, like you said, looping around? Because sometimes you want to hit a step and then loop several times before you jump out of that loop and go to the next step. And sometimes you want to loop until it fails a certain number of times. So all of that logic is what I’m calling the test executive functions. And yeah, TestStand does all of that separate from the individual code. What that means is you have to think about your security at a couple of different levels. You have to think about the security of my code, right? What I’ve written in C++ and the components that I’ve used to make that step work, versus the TestStand environment and how it’s sequencing through and whether or not anything is exposed there to any malicious actors.
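The executive-versus-step split Steve describes can be sketched as a toy in Python. This is not TestStand's actual API, just an illustration of the idea: the executive owns sequencing, retry logic, and the report, while each step is a small independent callable that could be written in any language:

```python
def run_sequence(steps, max_retries=3):
    """Toy test executive: runs named steps in order, retrying a
    failing step up to max_retries times, and builds a report of
    (step name, passed, attempts used)."""
    report = []
    for name, step in steps:
        passed = False
        for attempt in range(1, max_retries + 1):
            passed = step()         # the step itself does the real work
            if passed:
                break
        report.append((name, passed, attempt))
    return report

# Hypothetical steps; real ones would talk to instruments.
readings = iter([False, False, True])            # a flaky measurement
steps = [
    ("power_on",       lambda: True),            # passes first try
    ("measure_torque", lambda: next(readings)),  # passes on the third try
]
report = run_sequence(steps)
print(report)  # [('power_on', True, 1), ('measure_torque', True, 3)]
```

The security point follows from the split: the lambdas (your step code) and `run_sequence` (the executive) have to be evaluated separately, because a vulnerability could live in either layer.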
Sam Taggart 00:15:35 So you’ve kind of got two security fronts to work on.
Steve Summers 00:15:38 Yeah.
Sam Taggart 00:15:39 You mentioned analysis tools for security that exist for other programming languages. One I’ve heard a lot of is, I think it’s SAST versus DAST, which is static versus dynamic code checking. What do LabVIEW and/or TestStand offer in those areas?
Steve Summers 00:15:54 You’re right. So there are a couple of ways to look at testing your code, right? SAST and DAST, or just static and dynamic. And in the dynamic world, it’s not much different testing LabVIEW code versus any other kind of code. Because in the dynamic world, you’re looking at, as it’s running, what does it look like, right? And what’s open? How is it using and swapping its memory and doing all that kind of stuff. And the way that LabVIEW does that is the same way that any other language does, right? It all gets compiled down to assembly and it does its thing. So the tools that look at the dynamic testing are really no different for LabVIEW than they are anywhere else. So that part’s easy. The hard part is in the static testing because it is this graphical language.
Steve Summers 00:16:35 So when people come in and they want to do this static analysis, they’re asking, how do I scan my code and look for malicious code or bad code? And the problem with that is that static testing is so huge; it’s a huge, vast field. So if I were to come and ask you to go and look at your code that you’ve written in C and to tell me that there are no security vulnerabilities in it, how would you do that? You might start by looking to see, did I make any calls that are known to do bad things? Did I make any calls that allow me to overwrite memory? But attackers know so many different ways to attack our code. So we have to be thinking about how am I going to protect against all those different things. So security protection in something like C++ or C is a wide-open field.
Steve Summers 00:17:21 You have to just account for every possible way that somebody can attack you. And that’s what these large static analysis tools do is they’ve got experts that sit around and think all the time about how would I find the ways that people attack code? So for example, we know that one of the common ways that people attack code is that they will issue a database command into like a password field or something, and it will take that field back when it’s supposed to take it to the database. And instead of taking it to the database, it’ll execute that function. So the way that you block that is that you verify any of the commands that you send into your database to make sure that it’s sending what you think it’s sending. Like if you’re supposed to send a username, you only send the username and you strip off any other database commands from that.
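The database-command attack Steve describes is SQL injection, and the defense he outlines, making sure only the intended value reaches the database, is usually implemented with parameterized queries rather than manual stripping. A minimal sketch using Python's built-in `sqlite3` (the table and payload are illustrative):

```python
import sqlite3

# Tiny in-memory database standing in for a real user store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "x' OR '1'='1"   # classic injection payload typed into a username field

# Unsafe: string formatting splices the payload into the SQL itself,
# so the OR clause matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % malicious).fetchall()

# Safe: the ? placeholder makes the driver treat the whole string as
# a literal value, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()

print(unsafe)  # [('alice',)] -- injection succeeded
print(safe)    # []           -- payload matched nothing
```

Static analysis tools for text-based languages flag the first pattern automatically; the LabVIEW equivalent, as Steve explains next, still leans more on guided manual review.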
Steve Summers 00:18:03 So that’s something that a static tool will go and look for. But malicious actors are coming up with new attacks all the time. So people have to continually be updating those static analysis tools to keep looking for those things. In the LabVIEW world, there’s a couple of things that make that a little bit harder. One is we don’t have the huge user base that you have. We’ve got thousands or tens of thousands of users of LabVIEW, but we don’t have the millions of users that you have with Python or C. So we don’t have the volume of people that are looking at this problem and creating these mature tools that can do everything, right? So that just makes that naturally harder. And also the fact that we are a graphical language makes that harder. So we have to create scanning tools and we do have scanning tools, but we make those scanning tools and we allow you to program those scanning tools, go look for things inside of LabVIEW, designing that to go look for every possible attack that the other people are looking for in the text-based tools.
Steve Summers 00:18:56 It’s a huge undertaking, a huge task, and we haven’t been able to do that fully to this point. So we’re behind them on that, which means that if I’m a LabVIEW developer, I am probably going to have to do some manual checks, right? So I manually have to think, is there a place in my code where I’m calling a database, and have I done anything there that would expose the database call to something that the user enters, or am I blocking that? And so we’ve created some of those kinds of guides to say, here are the top security things to look for. And if you are creating some LabVIEW code, then you need to look at whether or not you’ve implemented these things correctly. We have some automated tools that can help with that, but at this point it’s going to be a combination of doing some of the automated work mixed with some manual review to make sure that your code is secure.
Sam Taggart 00:19:38 Yeah, I was going to say, in my experience, that’s what it’s been: the automatic review flags certain things and then you have to go and double-check them.
Steve Summers 00:19:47 Yeah. And, to be safe, we probably would have to over flag things and say, hey, you’re making a database call here, did you do it right? And over here you’re calling the command line and what are you doing that for? And so just checking and having you flag that as a developer to say, yes, I know what I’m doing here and I’m controlling for the inputs to that.
Sam Taggart 00:20:04 Both of us have been working in the test and measurement industry for several decades. What changes have you seen over that time in terms of security, particularly people’s attitudes towards security and maybe some major attacks or regulations or things that have happened over the past decade or two?
Steve Summers 00:20:21 Yeah, that’s a great question because things have changed a lot for us over the years, right? If I think back to when I started, which was back in the ’90s, people were really more concerned about just getting data into my computer. And then over the next 10 years there was more of an effort to say, how do I use that data now that I’ve got it in my computer, right? So if I’m producing a part of a car, over time I wanted to look at not just did this unit pass or fail, but let’s look at how many of my units are passing and failing and why are units on this line passing more often than units on that line? So how can I become more efficient? And that required that we started to network our test stations together so that we could see and share and use that data.
Steve Summers 00:20:58 And now in the last year, the last few months, it’s become a lot more important to say, hey, how can I take all of my data and pull all of that together so that I can start running AI on that, to have AI identify some trends and things that are happening inside my test station. That’s really interesting to be able to do all that. But it does require that you network all of those stations together. When we started to see engineers putting things together to create these networked systems and sharing data among their systems, we started to see this conflict, or at least this friction, arise between the test teams and the IT teams. So the IT guys always controlled the networks; they always controlled all the computer stuff. And now these test guys were bringing in these new systems, and these new systems were now going to connect to each other and do things.
Steve Summers 00:21:46 And when the test team came to the IT team and said, we’re going to drop stuff on your network, the IT guys said, hey no, we don’t even understand what that stuff is. Don’t put that on my network. So the test teams set up their own networks and those networks really didn’t need to have any kind of connection to the outside world. So they created a network, but they, as they called it air gapped that from the rest of the network. So they had their own little network, just an intranet so they could share data among those different devices, but they didn’t really care about security because they weren’t connected to the real world. And there was no reason to really worry about it because we just weren’t talking about security generally for these test systems. And as time has gone on, two things have happened.
Steve Summers 00:22:26 Number one, those isolated networks have now needed to become not isolated anymore. As you implement AI tools and you need to connect to these models and do all kinds of other stuff, and you want to report your data out, they now do need to connect to the corporate network to share that data in and out. And that creates a surface that you can attack through. And now the IT guys say, hey wait, if you’re going to put this on my network, security becomes really important now. Then the other thing that we’ve seen happen is that over the years we’ve seen attacks on those air-gapped networks. So even though we hoped that nobody would ever figure out how to attack an air-gapped system, people have figured out how to do that. And I think the most famous example of that is the Stuxnet thing that happened over in Iran, where they were processing uranium, and those centrifuges were controlled by PLCs, and those PLCs were attacked, and a virus got to those PLCs that made the results of those centrifuges off a little bit, and that delayed their uranium enrichment.
Steve Summers 00:23:27 And in this case we might be rooting for that, with Iran and getting nuclear weapons and all that kind of stuff. But the thing that was really important to notice about that is that those systems that they had inside that factory were air gapped, and they were able to get the virus spread among those by walking in with a USB stick and somehow getting that USB stick plugged into that intranet, which even though it was air gapped was now sharing that virus among its different units. So today, if we look at those units and we say, hey, I have an air-gapped system, it’s probably safe; well, we know that it’s probably not safe. There are other ways to get to that air-gapped network that could affect it. And we’ve seen that with a number of other systems over the years too, where we’ve seen some of the gas pipelines and some of the other attacks that have happened; several of those have happened on systems that we thought were safe because they were air gapped.
Steve Summers 00:24:12 So over the last, I’d say, three years, we’ve seen a really big push from the IT and security teams to go back to the test teams and say, hey, that system that you have that is air gapped, it still needs to comply with all these security requirements, and we still need to make sure that it’s locked down, and we still need to make sure that it’s going to keep us safe. And that has put these test teams kind of in a defensive position to figure out how do we update our systems so that we’ve got zero trust, so that we’ve got controls at the boundaries, we’ve got controls inside of these, to make sure that any attacks are going to be protected against and defended.
Sam Taggart 00:24:47 That brings up another question I hadn’t thought of until now. How do you deal with aging control systems? Because I imagine some of these systems have been around for 15 or 20 years and they’re probably still running really old operating systems and things like that. How do you handle that?
Steve Summers 00:25:03 Not very well is honestly the answer. If you look at the way that many of these test projects have been funded, and this is true from making little toys for little kids all the way up to big Department of Defense projects, the way that they get funded is that when you have a project and you’re going to make a new car, right? We’re going to make this version of this car. The company funds that project and they fund the test system as part of that project and they really don’t like to put any money in for continuous maintenance and continuous upgrades on that system. So they kind of like to just lock it and leave it right where it’s at. And that’s true on cars where that lifetime might be five years, 10 years. But it’s also true on airplanes and military airplanes where the lifetime is 20, 30 or 40 years.
Steve Summers 00:25:47 And so we have had customers come to us and say, I want to buy your equipment, but I want you to tell me that this exact build of hardware and this exact build of software are going to be available to me for the next 20 years. And that’s really difficult to do for all kinds of different reasons. But now with this new emphasis on security, it’s not only hard to do, it’s a bad idea to do because one of the top priorities in doing security is continuous upgrades. You’ve got to keep your system up to date and if you’re not keeping your system up to date, then you are falling behind. And malicious actors can go and attack you with old technologies or attack your old technologies with new and innovative ways to get around that. So it’s a real challenge in the test industry because we don’t get the funding that we need to do continuous maintenance, but we’ve got to figure out how to do it. Because if we don’t, then the systems, and again, the military systems are some of the most critical systems. They fall farther and farther behind and become more and more exploitable by malicious actors. It’s not something that’s been figured out in the industry so far.
Sam Taggart 00:26:51 Currently a lot of regulations seem to apply to government purchases and military expenses and things that are export controlled. What effect do you see these regulations having on regular commercial products?
Steve Summers 00:27:03 Yeah, that’s a good question because in the US we seem to be hesitant to try to regulate commercial products. There’s a little bit of oversight, you can get a UL stamp, but it’s not really required on anything. Maybe there’s some industries where that’s not true, but the US doesn’t roll out broad regulations for commercial products when it comes to security. So the US government can control that in the way that they buy. With any of the government contracts, they can say, if you’re going to sell this to the government, it has to meet these security requirements. It has to be safe in this way, it has to be safe in that way, etc. And so we have seen over the last couple of years new regulations come in from the US government that apply to US government purchases.
Steve Summers 00:27:43 And so the big one is coming through the Department of Defense, and that is this program called the Cybersecurity Maturity Model Certification, or CMMC. And CMMC says that if you’re going to sell to the government, or even just communicate with the government, your products have to meet these requirements. And there are 110 requirements that are laid out in a document from NIST called NIST SP 800-171. And if I’m going to handle government data as part of my transaction with the government, I have to show that I can protect that data to all 110 of those requirements, including my production line, right? So if I’m producing, I don’t know, an ignition system for an F-35 jet or something, I have to show that the test system is going to meet all of those requirements so that it’s not going to be attacked and end up in the results we talked about earlier.
Steve Summers 00:28:31 But the government can only really roll that out through the government contracting system, which means it applies if you’re selling something to the government, and the biggest part of the government that buys stuff is the Department of Defense. So that’s kind of leading the charge when it comes to that in the US. For commercial things, I haven’t really seen much of a protection there. There’s a little bit that maybe gets rolled into medical devices, but those are more quality initiatives, less so security. So it’s kind of up to the companies. And so some of our customers, and I’ve seen it from some of the automotive manufacturers and some of the electronics manufacturers, they come to us and say, if you’re going to sell it to us, your products need to meet a certain standard of security. But there’s not a broad regulation that requires that. Now if we switch, we can talk about Europe, and that’s a little bit different. But I want to pause there and see if you have any questions about the US system first.
Sam Taggart 00:29:19 No, that all makes sense to me. So let’s go ahead and talk about Europe.
Steve Summers 00:29:22 So Europe is taking a different stance, and they are a little more controlling when it comes to commercial devices. And they have used the CE mark pretty effectively for I don’t know how many years now, right? So if you’re going to sell something into Europe, you’ve got to have a CE mark that shows that you meet a certain level of quality, which will include some of the materials that you use, the emissions that come out of it, the electromagnetic radiation that comes out of it, those kinds of things. So if I’m going to sell into Europe, I’m going to get that CE mark, and we’re all used to that. If you turn over most of your electronics, you’ll see a CE mark on the back that shows this product can be sold in the US, but it could also be sold in Europe.
Steve Summers 00:29:56 Now Europe in 2023 rolled out a new regulation that was finalized in 2024 and takes effect at the beginning of 2025. And then we have two, almost three years to enact all of the things that are in that regulation. The regulation from Europe is called the European Cyber Resilience Act, which we call the CRA for short. The CRA says, if you’re going to sell any kind of digital product, that’s how they frame it, and a digital product is anything that connects to something else and has a digital interface, so if it runs software, if you’re going to sell a digital product into Europe, it’s going to have to get a new CE mark. And that new CE mark has behind it a bunch of cybersecurity regulations. Those include things like developing the product with a secure development framework in mind. It includes basic cyber hygiene, like not shipping devices with default passwords, the way a network router might come, those kinds of things.
Steve Summers 00:30:54 And it includes that if you sell software, the firmware that’s on a device, into Europe, it has to be delivered with no known exploitable vulnerabilities. And so, as software goes along, say Log4j, which came up a couple years ago, it’s this component that was affecting a lot of us. The European regulation says that if you’ve got a vulnerable Log4j in your device, you can’t sell the device into Europe. You’ve got to remove that and make sure that it’s not in there, and you’ve got to have a full analysis done before you can do that. So this new CE mark for shipping things into Europe is going to force lots and lots of us to have really good cyber hygiene in our development systems, in our test systems and in the devices that we make so that we can continue to ship those into Europe. The full ban on that comes into play at the end of 2027.
Sam Taggart 00:31:44 So now I’d like to pivot a little bit and do a deep dive on a particular product that NI sells called a cRIO. Can you tell me a little bit about what a cRIO is?
Steve Summers 00:31:54 Yeah, cRIO, or the full name is CompactRIO, so I’m probably flipping back and forth on the name. But a CompactRIO device is cool. It’s an input-output device, that’s kind of how it started, and it’s a rugged input-output device. It’s a modular system. So imagine a chassis about the size of, I don’t know, a football maybe, that has either four or eight slots in it. And there are modules you can put in, and each module will give you an interface to a different kind of sensor. So you’ve got a thermocouple module, we’ve got a microphone module so you can acquire data from accelerometers or microphones, there’s digital lines, there’s high-voltage and low-voltage lines. And so, as I said earlier, where we interface to the real world, these are the modules you interface to the real world with.
Steve Summers 00:32:39 That’s what you connect those sensors into, these different modules. And the first version of this, which we call CompactDAQ, connects those modules back through Ethernet or USB to your computer, and then your computer tells it what to do, tells it to acquire the data, and then it makes the decisions. Well, we took a real-time processor, and we’ve used a couple different variations, but we’re using Intel chips right now, and we pushed that Intel chip down into the chassis itself, and it runs a real-time operating system. So you can write your code, push it down into that, have it run locally, disconnect the cable and leave it doing whatever it’s going to do out there, kind of running its own thing. So you can think about it as like a Raspberry Pi, except it’s got way more capability, because you can plug in these different modules and it’s running a much more powerful processor than that, but it is running a Linux operating system.
Steve Summers 00:33:29 But that Linux operating system is based on a real-time kernel of Linux. And so it gives us real-time performance, which gives us determinism and very low jitter and high reliability, so you can trust that system to run really well. So that’s one of the cool things that we do with CompactRIO. And then the other cool thing we do with CompactRIO is we push an FPGA chip down there, and you can program that FPGA chip. So we should talk about that FPGA chip too. But let me pause there, see if you have a comment or question about that.
Sam Taggart 00:33:55 Yeah, no I wanted to talk about both parts. I think let’s talk about the RT Linux first. So this is a very specific distribution of Linux that NI maintains.
Steve Summers 00:34:05 That’s right. It’s open source. We have the distribution on GitHub, but it really only runs on the NI platforms because it’s pretty tied into the actual hardware that’s there. We’ve got a lot of magic that’s in the backplane of these chassis, including timing chips and other things. And so it’s pretty specific to that platform. So I can plug in these different modules and then I’ve got this real-time operating system. If you log into it, it looks and feels like Linux, because it is a version of Linux, but it is a real-time version, so it’s missing some of the bells and whistles and the user interface things. It’s missing that in order to maintain the high level of determinism that we need for a real-time controller. So we maintain that distribution and we put that on GitHub. Right now we’re about to release a version based on Linux 6.6, and we’ll start working on an update to that kernel that will come out again in another year. So we continue to upgrade those to take advantage of features, but also to remove some of the vulnerabilities that pop up in the stack.
Sam Taggart 00:35:02 What is different about securing an RT Linux installation as opposed to just a regular Linux desktop or server?
Steve Summers 00:35:09 A lot of it’s the same. In fact, we are able to leverage a lot of the same tools. So I have customers that call me, just today a customer asked me, how do I store certificates in your Linux Real-Time system? And the answer to that we found by looking at the way that Red Hat Linux does their certificate storage, because it’s just standard Linux stuff, it’s a certificate distribution. So anyway, we found that solution, tested it on our system, and it works the same. So a lot of it works exactly the same. Where it’s different is that we’ve had to optimize NI Linux somewhat to meet our own model, what our customers are trying to do. And specifically, one of the things we do is make it possible to program this target using LabVIEW.
Steve Summers 00:35:53 So I can program using my graphical icons, I can program this thing and then I can download my code. And we tried to really simplify that experience for our customers so that they can develop their code and deploy it without doing a lot of extra work. And that makes it highly usable, but it does make it more vulnerable overall, because the users don’t have to log in to get into that system. So making a CompactRIO system secure means that you have to go in and disable some of the things that we’ve turned on to optimize ease of use, and you have to disable those things to optimize the security of the system. And so we’ve actually had to spend time over the last couple of years documenting exactly all the ways that you can convert one of these CompactRIO systems from its standard configuration, optimized for ease of use. And we created, it’s about 30 or 40 steps of things that you turn on and that you turn off in order to optimize it for security. But it’s Linux. So the cool thing about that is it’s really easy to write a script that runs through and does all that for you. So we created a script we posted on our GitHub repository that will go through and basically convert your CompactRIO from optimized for use to optimized for security. And it changes your interaction with it a bit, but it does make it secure.
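NI’s actual lockdown script is specific to its own targets, but the general shape of one such hardening step is easy to sketch. The following is a hypothetical illustration, not NI’s script: it audits an sshd_config-style text for a few settings a security lockdown would typically enforce. The option names are standard OpenSSH settings; the checklist itself is an assumption for the sake of the example.

```python
# Hypothetical hardening-audit step (not NI's actual lockdown script):
# flag SSH settings that differ from the values a security pass would
# typically enforce.

# Desired values a hardened configuration would typically require.
SECURE_SETTINGS = {
    "PasswordAuthentication": "no",
    "PermitRootLogin": "no",
    "PermitEmptyPasswords": "no",
}

def audit_sshd_config(config_text: str) -> list[str]:
    """Return findings for settings that differ from the hardened
    values, or that are missing entirely."""
    current = {}
    for line in config_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        parts = line.split(None, 1)
        if len(parts) == 2:
            current[parts[0]] = parts[1].strip()
    findings = []
    for key, want in SECURE_SETTINGS.items():
        have = current.get(key)
        if have is None:
            findings.append(f"{key}: not set (want {want})")
        elif have.lower() != want:
            findings.append(f"{key}: {have} (want {want})")
    return findings
```

A real lockdown pass would chain dozens of checks like this (services, ports, file permissions) and apply the fixes rather than just report them, which is what makes scripting the conversion so attractive on a Linux target.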
Sam Taggart 00:37:08 So if I understand correctly, there would be a development mode or settings or configuration where it’s easy to develop with, and it’s easy to move files back and forth and do all the stuff you need to do, and then when you go to deploy it, you would lock it down before you ship it off somewhere.
Steve Summers 00:37:23 Yeah, one of the ways that you can see what’s happening on it is we have a little web server that runs there and reports to you through a graphical interface, what’s running, how it’s running and all that stuff. And when you go to deploy it, you need to turn that off because the way that we get into that is through a web server that’s not as secure as it needs to be. So we turn all of that off when we go to deploy it and that makes it secure. We have customers using these devices in some very secure areas and doing some pretty cool stuff with it. But we do help those customers to make those secure so that they can’t be attacked.
Sam Taggart 00:37:54 Speaking of security, you mentioned updates to NI Linux RT. How do you get updates to the cRIOs? Do they have like a package manager or something?
Steve Summers 00:38:03 So, there’s a couple ways, because the thing with our CompactRIO in the Linux Real-Time world is we have two types of customers, two customer bases. There’s the Linux people that are looking for a highly powerful, highly capable system, and those guys, they know too much for their own good and they like to get in and really do stuff. And then there’s my customers that come from the Windows world, and they’re programming, and this is just a device that we’ve told them they can download their LabVIEW code to, and they don’t even want to know that it’s Linux down there. They don’t want to know any of that magic that’s down there. They just want it to be magic. And so we have to figure out how to cater to both of those groups. And so if we have a script that they can just run and update things with, and we say go log in as root and do all this stuff, half of my customers will do that and they’ll love it.
Steve Summers 00:38:49 But the other half of my customers, they’ll have no idea what I’m talking about. They haven’t seen a text-based prompt on an OS since Windows 3.1, right? So that’s kind of confusing to them and so they don’t want to deal with it that way. But the other ones, the ones that use my package manager, they’ll deploy that and they’ll update their system like it’s a connected device and they’re just right clicking and updating the firmware and that’s how they want it to feel and they won’t really know how it’s happening. For some of my Linux guys that drives them crazy not knowing what’s going on down there. And so both parties, we have to cater to both of those. And so yeah, we have both ways. You can go to GitHub and you can download a package and you can update that and you can make all the command calls that you need to make to update the system or you can update it from Windows with a couple of right clicks on a graphical interface.
Sam Taggart 00:39:29 So while we’re speaking of package managers, there’s a package manager that runs on the cRIO that handles the Linux updates, but there’s also two other package managers involved in the LabVIEW ecosystem as well, correct?
Steve Summers 00:39:42 Yeah, so there’s, yeah, there’s a couple different package managers and a couple different things you have to keep updated because we’re talking here about the LabVIEW software, we’re talking about the Linux Realtime OS software, there’s also some drivers mixed in there. And so balancing all of that means you have to become an expert in the workflow for our products. And again, that workflow varies based on if you’re coming to us from the Linux world or if you’re coming to us from the LabVIEW world. But we have to try to support those different things. I honestly don’t even remember off the top of my head the names of all the different package managers. But yeah, there are a couple different ones in there that help you out.
Sam Taggart 00:40:13 I know a big topic in cybersecurity in general recently has been package managers and supply chain security. Have there been any incidents of any of that in the NI ecosystem? How does NI work to prevent that?
Steve Summers 00:40:27 I have a lot of customers worried about that. Fortunately I have not had any customers come to me with an actual case where they’ve said this has happened. I don’t have any cool stories to tell you, and I’m glad that I don’t have any cool stories to tell you about that. So customers come in, and the whole supply chain, because supply chain is a topic of several of the requirements in NIST 800-171, and that applies to both software and hardware. So how do you ensure, like if a company comes to me and they buy my software and they download it from the web, how do they ensure that what they received from us over the web is what we intended for them to receive? So they’ll ask me several questions. They’ll ask me, during your build process, how do you protect the code so that the final product that gets built is what you think you were building?
Steve Summers 00:41:10 And then once you have those bits done and you go to put those on the web, how do you verify that those bits made it to the web and that nobody else interfered with that and put the wrong bits on the web? And then when I download those bits from you, how do I verify that what I received is what you posted there for me to receive? And the way that we do all of that is through hashes and checksums. So we’re constantly creating those as we make handoffs from one place to the next, and especially when we put that on the web, we put up two different hashes, two different checksums that are done two different ways. So when my customer downloads those installers, they can verify those checksums to make sure that what they downloaded is what we had intended for them to download in the first place. And it’s really hard for a malicious actor to spoof that checksum to make it pass, and to spoof two different ones is practically impossible. So that’s how we do that.
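The two-checksum handoff can be sketched in a few lines of Python. This is a generic illustration, not NI’s tooling, and the SHA-256/SHA-512 pair is an assumption; vendors publish whichever algorithms they choose.

```python
# Generic sketch of dual-checksum download verification (not NI's
# tooling): a file passes only if every published digest matches.
import hashlib

def file_digest(path: str, algorithm: str) -> str:
    """Compute a hex digest of a file with the named hashlib algorithm."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Read in chunks so large installers never need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, expected: dict[str, str]) -> bool:
    """Pass only if *every* published digest matches. Forging a file
    that collides under one hash is hard; colliding under two
    different algorithms simultaneously is far harder."""
    return all(
        file_digest(path, algo) == digest.lower()
        for algo, digest in expected.items()
    )
```

A customer would call `verify_download("installer.exe", {"sha256": "…", "sha512": "…"})` with the digests copied from the vendor’s download page, fetched over a separate channel from the file itself.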
Sam Taggart 00:42:01 Is that a manual check, or does that happen automatically at some point?
Steve Summers 00:42:06 It’s a manual check, but there are automated tools that help you to do that. So that kind of gets into the next thing, which is, now that my customer has downloaded the code, how does he verify that nothing has changed on his system after he’s downloaded and installed it? Right? Because I could set it up and have my code, and every day come in, start up that computer, start the code and run it on my production line. But a malicious actor could come in and swap out one of the DLLs in the middle of the night, and how would I know that he did that? And so there are file-checking mechanisms for doing that that run automated: you can point one at a folder and say, hey, run this, and you should see this checksum every day or every time you run. And if that checksum ever changes, it means that somebody changed that file. Now, you don’t want to do that on a data file that you’re writing to, because then you’ll constantly be alarmed by that. But for static files that should never change, it’s a good idea to put this file checking in place so that you’re constantly checking that checksum and making sure that the file doesn’t get changed.
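The file-integrity idea reduces to a baseline-and-rescan loop. Real tools (AIDE, Tripwire and the like) do this with far more care; the folder layout, function names, and the choice of which suffixes count as "data files" below are illustrative assumptions.

```python
# Sketch of a file-integrity monitor (real tools like AIDE or Tripwire
# do this properly): record a baseline of checksums for static files,
# then re-scan later and report anything that changed.
import hashlib
from pathlib import Path

def snapshot(folder: str, skip_suffixes=(".log", ".csv")) -> dict[str, str]:
    """Hash every file under `folder`, skipping data files that are
    expected to change (otherwise the monitor alarms constantly)."""
    baseline = {}
    for path in sorted(Path(folder).rglob("*")):
        if path.is_file() and path.suffix not in skip_suffixes:
            baseline[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return baseline

def changed_files(baseline: dict[str, str], folder: str) -> list[str]:
    """Return files whose checksum differs from the baseline, plus any
    files that were added or removed since the baseline was taken."""
    current = snapshot(folder)
    return sorted(
        path
        for path in set(baseline) | set(current)
        if baseline.get(path) != current.get(path)
    )
```

In practice the baseline itself has to be protected (stored read-only, or off the machine), since an attacker who can swap a DLL can also rewrite an unprotected baseline.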
Sam Taggart 00:43:05 Do these cRIOs have any kind of secure boot technology to make sure that whatever kernel boots is what NI intended?
Steve Summers 00:43:12 Yeah, so as we boot up, we will do some kind of a checksum. That’s actually a thing that we’re improving right now, because we haven’t had a TPM chip on the CompactRIO in the past. And so maybe we need to stop and talk about what a TPM chip is good for, real quick.
Sam Taggart 00:43:28 Just real quick,
Steve Summers 00:43:30 TPM is Trusted Platform Module. The simplest way to think about what TPM chips let you do is that they are a storage place for secret information like passwords and keys. So if I have code that I’m going to run on startup, and I can take a checksum of that code and check it to make sure that it’s correct, that’s going to make sure I’m running the right code. Well, where are you going to store that key to check against? The best place to put that is in a TPM chip, in hardware that’s locked down. And that’s the whole point of a TPM chip: it’s really difficult to change those keys. So when I start up, I can check and say, does this software that’s running check out against the key that’s stored inside my TPM? If it does, great, everybody’s happy.
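The measured-boot pattern behind this can be illustrated in pure software. A TPM’s Platform Configuration Register (PCR) is "extended" by hashing the old register value together with each new measurement, so the final value depends on every component measured, in order. The sketch below only models the arithmetic; a real TPM does this in tamper-resistant hardware (on Linux, typically driven via tpm2-tools).

```python
# Software model of TPM PCR extension (the arithmetic only; a real TPM
# does this in tamper-resistant hardware).
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """PCR extend: new_pcr = SHA-256(old_pcr || SHA-256(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot_chain(components: list[bytes]) -> bytes:
    """Fold every boot component into one PCR value, starting from a
    register of all zeros, as happens at power-on."""
    pcr = bytes(32)
    for blob in components:
        pcr = extend(pcr, blob)
    return pcr
```

Firmware compares the final PCR against a stored "golden" value: swap any component, or even reorder them, and the result changes, which is how a swapped-out hard drive or tampered kernel gets caught at boot.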
Steve Summers 00:44:12 So you use TPMs in a lot of different ways, right? Windows uses the TPM on boot just to check and make sure that your hard drive didn’t get swapped out and that it’s the right hard drive. But there are lots of programs that allow you as a user to access the TPM and store other kinds of information. So you can store your keys, you can store web certificates, whatever you want to store there, you can do that. And so we are adding those. We have a version of our CompactRIO now that has a TPM chip, so customers can do that check against their software, but right now it’s a little bit more manual, and we’re working to make that more automated.
Sam Taggart 00:44:44 Great. We have about 10 more minutes, and I’ve got two more topics I want to talk about. All right, the first one you had mentioned earlier is FPGA. What does that stand for and what is an FPGA?
Steve Summers 00:44:55 Yeah, so this is kind of a cool technology. Look at what it takes to make an integrated chip, right? If you open up your laptop, you have all these chips that have digital logic inside of them. And the problem with an integrated chip is that to make one, it takes a long time to create everything and you have to send it off to some fab; I mean, it literally can cost like a million dollars to create a new chip. And so an FPGA is what’s called a field-programmable gate array. And the important part of it is those first two letters: it’s field programmable, which means it’s an integrated chip, but instead of being fixed in its personality, it’s full of a bunch of hardware gates, and you can program those gates to take on any digital personality that you want to download to it.
Steve Summers 00:45:40 So I can program it and then use that in devices. And we see those in a lot of lower-volume devices. So if you’re not going to make a million of a device, it doesn’t really make sense to go and create custom ICs for that. Instead, you can buy these FPGAs and program them. We don’t make FPGAs; you go to companies like Xilinx and they make those FPGAs. But what we’ve done that’s innovative is that we created some hardware, because again our whole goal is to interface to the real world. We made some hardware that has these FPGAs on it, behind some of our analog circuitry, so that you can program that FPGA to do whatever you would program the board to do, so that it can make decisions and do things that a chip would do without even involving your CPU and your computer.
Steve Summers 00:46:28 And so we have a few different products that use those FPGAs, and we have a version of LabVIEW that lets you graphically program that FPGA. So most programming for those FPGAs, for Xilinx or the other companies, is done in an HDL, a hardware description language targeted at FPGAs, and that’s a highly unique programming style. I’m sure some of your listeners are HDL programmers. But with LabVIEW you can program graphically, and we will compile that down into the HDL code and download it to the FPGA chip. And we put one of those chips on that CompactRIO device. So now that CompactRIO device has really three elements to it: it has the modules, it has the real-time processor, and it’s got a programmable FPGA chip on it. And we expose that to you as a user.
Steve Summers 00:47:14 So now when I’m architecting my application, I can decide what functions I want to have running on the real-time operating system. And with that I’ll get performance where I can run loops that are like 10 microseconds, or maybe a couple of microseconds. If I’m controlling a valve or something, that’s plenty of speed. But I can also use that FPGA, and on that FPGA I can download and run things at hardware speeds, where I can do things much, much faster. So I can do inline processing of some of the signals, or I can count things, I can run control loops. Now if I do a control loop on the FPGA, I can close that control loop in somewhere around five or 10 nanoseconds, as opposed to five or 10 microseconds. So I can go many times faster than I can with the processor. And both of those will go a lot faster than what I can do with the Windows processor on a Windows computer.
Steve Summers 00:48:02 So it gets to where I can architect things really, really well. But the interesting thing about FPGAs is people don’t really understand them, especially security people. And so I’ve had some of my customers whose security teams have come to them and said, I have a notice here from the NSA that says you cannot use FPGAs because they’re not secure. And we have to stop and say, hold on, what do you mean by not secure? When you turn power off to an FPGA, all the gates open and it’s clear, and you can write things to the FPGA if you feel like you need to scrub it. And we have those kinds of routines to help clear an FPGA. So we’ve met with customers to explain to their security teams how an FPGA works, and then explain to them how to clear it.
Steve Summers 00:48:48 And then we also work with our test teams to explain to them how to use that chip securely. So if you think about some of the concepts we’ve talked about in the last 15 minutes, the most secure way to use an FPGA, at least the way that we’ve architected ours, is to leave the FPGA open, and when you boot up from the drive on the real-time system, check that the FPGA bit file has not changed on disk, and then download that bit file to the FPGA, so that the FPGA is now running the code that you downloaded and nobody can come in and play with it and change that code.
Sam Taggart 00:49:22 Another question that popped into my mind, you mentioned that the LabVIEW code gets compiled down into VHDL. Does that make it easier to do some static analysis on the VHDL code? Are there any tools for that or does that not really exist?
Steve Summers 00:49:35 There are. Even with LabVIEW for Windows and on the real-time side, we compile the code down into assembly, so you’ve got a bunch of bits. And so there’s analysis code that runs against text-based source, looking at the words that you and I speak, right? It’s looking for the if and then and the other programming commands. But there are also static analysis tools that look at the binary files, and they try to look for whether there is something there, and that way they can find things that are deeply buried inside the code. The problem with that is it seems to miss a lot and you get a lot of false positives. And so customers that run against the binaries will contact us and say, hey, we ran against your binary and we think we found this thing, because it had some detectable pattern.
Steve Summers 00:50:16 And when we look into it, sometimes it’s right. Sometimes it’s like they found something that does not exist, and we have to work with them on that. It’s got some kind of a match, but it’s not a really good match. And then we look at the reporting and go, we know that there are other things in there that they should have seen that they didn’t see. But the binary check is an okay way, it’s maybe like a third way to look at things. It’s not a guaranteed way to make sure that your code is not running any vulnerable components.
Sam Taggart 00:50:42 I have one last topic I want to hit on, and I think this is a good one because it does help differentiate IT versus OT. A lot of OT devices are connected to industrial communications networks. Can you talk a little bit about what those are? What makes a difference from regular networks and maybe some of the challenges of trying to secure those?
Steve Summers 00:51:03 Yeah. When I think about industrial networks, I think about communication protocols like Modbus or CAN or PROFINET and PROFIBUS. There’s a lot of different ones that have different advantages based on what you’re trying to do. So some of them are used in wastewater treatment plants, some of them are used in power grids, some of them have faster or slower response times, some of them can handle more or less data than other ones. And in a way they can be more secure than other network devices, because people don’t understand how you’d hack into a Modbus network. But on the other hand, a lot of these networks over the last 20 years have migrated away from running on RS-485 serial buses or other kinds of weirder connections between them. They’ve migrated over to running on Ethernet and on the TCP/IP network.
Steve Summers 00:51:50 So Modbus has become mostly now Modbus TCP, where it runs on that network. So what kind of made them different before has kind of gone away, and they’re on that same network. And I wouldn’t trust that a malicious actor just doesn’t know how to use it as a good security block, right? So I think you have to think about how we block that. The hard thing about those is that some of those protocols were made before security became a primary concern. And so a lot of them are made without thinking much about how to protect the devices on this particular network. And so security has become kind of a secondary thought: either they haven’t layered security into it, or the security feels like it’s layered on top. For example, the security measure might be, block all of your ports except for this one where Modbus is being passed through. And that’s not the greatest overall security, but it’s what a lot of our infrastructure around the world is based on, for wastewater treatment and gas and everything else. And so they have layered a lot of security on top of that that I’m not that versed in, but it does present a unique challenge, because you have to think about those devices in their own networks and not as part of your Windows and IT infrastructure.
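Part of why these protocols are so exposed on a shared network is how simple they are on the wire. Modbus TCP is a small binary protocol carried over ordinary TCP (standard port 502), which is exactly why Wireshark can decode it alongside regular traffic. The sketch below builds a standard "read holding registers" request (function code 0x03) by hand, with no network involved; note there is no authentication field anywhere in the frame.

```python
# Build a Modbus TCP "read holding registers" request by hand to show
# how simple (and unauthenticated) the wire format is. No network I/O.
import struct

def read_holding_registers_request(
    transaction_id: int, unit_id: int, start_addr: int, count: int
) -> bytes:
    """Return the MBAP header + PDU for a Modbus TCP read request."""
    # PDU: function code 0x03, starting address, register count.
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0 = Modbus), length of
    # the rest of the frame (unit id + PDU, in bytes), unit id.
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu
```

Anyone who can reach port 502 on a device can send frames like this, which is why the "block every port except the Modbus one" posture Steve mentions still leaves the protocol itself wide open on that port.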
Sam Taggart 00:53:00 Interesting. A question that popped into my mind: you mentioned Modbus TCP, so can that run on the same network cable that runs my normal TCP/IP traffic, and if I plug in Wireshark, will I see those packets going right next to my other packets?
Steve Summers 00:53:16 Yes. If you’re running a big facility, then you don’t do that, right? You run dedicated cables for doing that. But if I have a small facility where I’m just, I want to go and grab the data from that pump over there and bring it back and it only speaks Modbus, then yeah, it would just be on your regular network and you would see that with your Wireshark.
Sam Taggart 00:53:34 Okay. Very interesting. Well, thank you for joining us today and talking about security.
Steve Summers 00:53:39 Yeah, it’s fun. Thanks for inviting me.
Sam Taggart 00:53:42 For SE Radio, this is Sam Taggart. Thanks for joining us.
[End of Audio]