Charles Weir

SE Radio 584: Charles Weir on Ruthless Security for Busy Developers

Charles Weir—developer, security researcher, and Research Fellow at Security Lancaster—joins host Giovanni Asproni to discuss an approach that development teams can use to create secure systems without wasting effort on unnecessary security work. The episode starts with a broad description of the approach, which is based on Weir’s research and on a free Developer Security Essentials workshop he created. Charles presents some examples from real-world projects, his view on AI’s impact on security, and information about the workshop and where to find the materials. During the conversation, they consider several related topics including the concept of “good enough” security; security as a product decision; risk assessment, classification, and prioritization; and how to approach security in startups, greenfield, and legacy systems.



Transcript

Transcript brought to you by IEEE Software magazine and IEEE Computer Society.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.

Giovanni Asproni 00:00:19 Welcome to Software Engineering Radio. I’m your host, Giovanni Asproni, and today we’ll be discussing ruthless security for busy developers with Charles Weir. Charles is a developer and security researcher with over 30 years of experience in the software industry. He’s currently a Research Fellow at Security Lancaster, where he leads research on how to improve the security of software delivered by development teams. He was a technical lead for the world’s first smartphone, the Ericsson R380, and was the security lead for the world’s first Android payment app, EE Cash on Tap. And he’s the author of the Developer Security Essentials package, a set of workshops that help developers understand and apply security principles in their work. Charles, welcome to Software Engineering Radio. Is there anything I missed that you’d like to add?

Charles Weir 00:01:08 No, except that I’m a fan of Software Engineering Radio, and I was on a very, very early episode.

Giovanni Asproni 00:01:15 Yeah. Was that the one about Small Memory Software, perhaps?

Charles Weir 00:01:20 That was it.

Giovanni Asproni 00:01:21 Okay. But let’s talk about ruthless security today, yeah? So can you give us a brief overview of what ruthless security for busy developers is about and why we should care?

Charles Weir 00:01:32 Well, security’s hard work; it’s expensive. It requires a lot of effort. In fact, you could allow it to take all of your time developing, and then some. So that’s clearly not going to be practical, and we all have to make choices about what security we implement. What ruthless security promotes is making those choices using hard data: ruthlessly avoiding the things that you simply feel might be good, and doing the right things instead.

Giovanni Asproni 00:02:13 Okay. So it’s a way of prioritizing risks.

Charles Weir 00:02:16 Indeed. That’s how we do it.

Giovanni Asproni 00:02:18 And deciding which risks you want to take and which ones you can forget about. Am I correct?

Charles Weir 00:02:23 That’s correct.

Giovanni Asproni 00:02:24 Okay. I’d also like to ask: when talking about security, we often end up talking about privacy too, and sometimes conflating the two. Now, this is perhaps not entirely related to ruthless security, but it’s a distinction that I’d like to ask you about. Can you tell us the difference between them and how they relate to each other?

Charles Weir 00:02:45 Well, security is usually about someone else. It’s about bad things that we don’t want to happen happening through our software. It’s often divided into confidentiality, integrity, and availability, the CIA: people don’t find out things you don’t want them to know, people don’t change things you don’t want them to change, and people don’t stop the system working. So the sorts of risks we’re talking about are financial loss, physical harm, loss of service, that sort of stuff. Privacy is about personal information: what we’ve done, what our birthdays are, whatever it might be. Examples of problems might be surveillance, or accidental leaks of my personal information to the world, or somebody deliberately sharing it where it shouldn’t go. And the bad things that can happen as a result are fraud, because somebody’s got information they oughtn’t to have; embarrassment, which could be very severe; or people being manipulated in different ways.

Giovanni Asproni 00:03:58 And what is the relationship between the two? Because I guess for privacy, a level of security can be necessary.

Charles Weir 00:04:05 There’s a big overlap, yes, and frequently people just conflate the two. But privacy is often about things that aren’t, strictly speaking, problems with the software. They’re not bugs; they were just in the design, something that hadn’t been thought about or hadn’t been thought important. So privacy is, in a way, a more subtle one, and you can’t tackle it easily using the techniques we’ve got from the military of hardening and stopping vulnerabilities.

Giovanni Asproni 00:04:44 In your security workshops, you mention three vital ingredients that you say software projects need: risk assessment, risk information, and development integration. Can you tell us a bit more about those ingredients, why they’re vital, and how they relate to secure development?

Charles Weir 00:05:01 So it sounds huge; in fact it’s probably not. These are three things that we believe every development should at least think about. Risk assessment is looking at what could possibly go wrong and thinking about how you might deal with it, how likely it is, and how bad it would be if it happened. The risk information is what you might need in order to do those calculations: it might be someone else’s previous assessment, or even your own previous assessment; it might be industry knowledge; it might be other people coming in and advising you on what could possibly go wrong. And the development integration is what makes the other two happen: it’s how and when you are going to do this. Again, we would recommend building it in so that you at least spend a little time on this, perhaps a meeting at least every six months or so, but possibly more often depending on your project.
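To make those three ingredients a little more concrete, here is a minimal sketch, in Python, of what a lightweight risk register might look like. The structure, field names, and example entries are hypothetical illustrations, not taken from Weir’s workshop materials.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    """One entry in a lightweight risk register (hypothetical structure)."""
    description: str            # risk assessment: what could possibly go wrong
    impact: str                 # how bad it would be: "low" | "medium" | "high"
    likelihood: str             # how likely it is:   "low" | "medium" | "high"
    source: str                 # risk information: where this estimate came from
    decision: str = "undecided" # mitigate / accept / out of scope
    next_review: date = field(default_factory=date.today)  # development integration

# A couple of invented entries, recorded so they can be revisited
# at the next scheduled session.
register = [
    Risk("Leak of customer emails via a verbose error page",
         impact="medium", likelihood="medium",
         source="published breach statistics", decision="mitigate"),
    Risk("Targeted attack on the build pipeline",
         impact="high", likelihood="low",
         source="team judgement", decision="accept"),
]
```

The point of keeping even this much structure is the audit trail: each entry names the decision taken and when it should be looked at again.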

Giovanni Asproni 00:06:19 And another general question about security. Security can be a very broad subject, and when thinking about it in the context of software development, in my experience at least, many developers seem to think about the OWASP Top 10 or aspects around the design and implementation of their own systems: how are we going to do authentication and authorization in these particular APIs, things like this, but not necessarily the surrounding context. Some examples I can give: the environment where the software will be used or run; or, if we’re installing this in the cloud, say in AWS, how are we going to store our secrets and keys in a way that is good for our security purposes? So when talking about secure development, from your point of view, what is in scope and out of scope?

Charles Weir 00:07:13 My answer to that is: it’s all in scope to consider, but it’s certainly not all going to end up in scope to implement. As software developers, we need to think about things like what could happen, what the hardware will be, and how it all works as a system. You’ve got things like social engineering, as you say. Often, as developers, or as a development team including the product owner and the other people involved, we have to say: well, actually, we can’t do anything about that. So you often need to document that in some way, to say: well, if it happens, it’s not our problem, we believe it is this other party’s problem, or it’s going to be an act of God because there’s nothing anybody can do about it. One of the ways of dealing with a security problem is to have a boundary and say: here’s the limit of our concern; this is what we’re going to deal with; this is what is not our problem. It might be the purchaser’s problem, it might be the end user’s problem, it might be any other stakeholder’s problem. But so long as we have defined that, we have done a responsible job.

Giovanni Asproni 00:08:42 I see. So if I’m understanding correctly, to put it shortly: it’s highly context dependent, somehow. What you’ll do about it is context dependent, but you want to consider all possible aspects, at least at the beginning, to decide where the boundaries are in the first place.

Charles Weir 00:09:01 Yes. That’s it.

Giovanni Asproni 00:09:03 On your secure development site, you state that good enough security is essential for almost every software development. Can you expand on what you mean by good enough, and how to judge whether we’ve reached that level or not?

Charles Weir 00:09:19 I suppose one might actually phrase it the other way round and say that for some software developments, good enough security is none. I would suggest there are quite a number of examples of software development where, frankly, you don’t have to do any security or any privacy, but it’s worth having a quick think first about whether that’s actually true. An example might be: you do some complex thing on a spreadsheet, some sort of sophisticated analysis. All right, who’s going to attack that? Nobody’s going to know it exists. Has it got a privacy implication? Probably not, but it could do; you have to just think a little bit, depending on what’s on the spreadsheet. Is there any other issue? Probably not. So I would suggest that for anything more than putting together a quick script, you might well want to have a quick think: have I got any responsibilities here in security and privacy? But for quite a lot of mundane applications the answer will be: nah, not really.

Giovanni Asproni 00:10:35 And for the applications where security does matter, which seems to be the majority of the applications out there, if we exclude, I don’t know, the scripts I may write to do some automation at home, or my own spreadsheets for my home accounting, stuff like this. For all the other applications that need security, how do we decide when good enough is enough? I mean, how do we decide the level of security we want?

Charles Weir 00:11:07 That is a business decision; it’s a management one. If you wanted to study it, that would be a management-style study: it would be about benefits to your organization or yourself and whether the costs are worth it. We might think about whether there could be metrics, and this discussion comes up whenever developer-centered researchers come together, both industry and otherwise. The answer is, we don’t know of any metrics that really capture this. In my research, I’ve used metrics to see whether development teams are improving their security as a result of workshops, or improving their thinking about security, and that is measurable and useful. But to actually be able to say anything useful about the outcomes is very hard, because it all becomes involved in other business concerns and marketing concerns, and it is really very difficult to put a number on it that would be of value to anybody else. Now of course, if 20 years have gone by and for very little money you have managed to defend your internet-facing app that’s doing something highly sensitive, and you’ve never really had any problems, then I would say: okay, that’s a really good metric, good, we’ve got it. But that kind of outcome is very hard to put a number on.

Giovanni Asproni 00:12:55 The way I’m seeing this now is that the risk assessment and the risk information you mentioned before actually have a very important role to play in deciding good enough. Because I would imagine that at this point part of the risk assessment and risk information comes from the business itself: the decisions they make, the trade-offs they want to make. Is that correct?

Charles Weir 00:13:18 Indeed. And this is very business dependent: it depends on the culture of your business, the risks, the nature of the industry you are in, the personalities of those involved. Really, it’s a very human thing to make these calls. What we can do as developers is to provide information to help with those calls, so that we’re talking about evidence-based decisions rather than guesswork.

Giovanni Asproni 00:13:52 So we are already saying here that these decisions are not for the teams alone to take. It’s really the development teams talking with the rest of the business, which includes whoever makes business decisions: product managers, maybe in small startups the CEO, other people, to decide how to proceed and to agree on a good enough definition that will work for their own context.

Charles Weir 00:14:18 Indeed, and I learned this for myself. One of the early teams I worked with as an academic, they were very good. They found a whole list of OWASP Top 10 problems, and they had them on the backlog, and they came to me and said: Charles, you know about security, tell us which ones we should deal with. And I looked at it and I thought: yep, that’s a product owner question. No matter how much I know or what my opinions are, I have absolutely no right to tell a development team how to prioritize security. That is the product owner’s job.

Giovanni Asproni 00:15:02 Why should we settle for just good enough and not the best we can possibly get?

Charles Weir 00:15:07 The particular story that comes to mind for this one is something called the Risk Management Framework, which was invented by the American authorities, I’m guessing perhaps in the 1990s. It was a list of pretty much all the possible things you could do to make your software system secure, and for a long time it was mandated: any important US government project had to satisfy this Risk Management Framework. Eventually they had to abandon it, because the projects had become so expensive that nobody could afford to commission software. I did hear someone muttering that the abandonment of this was a communist plot, you know, an enemy thing, and I thought: no, the communists, or the enemy, would’ve been delighted at the American authorities spending all that effort doing completely useless work. So that was an example of where completely good security was just far, far too expensive. There’s nothing wrong with the framework; it has lots and lots of good stuff. It was just too expensive to do as a whole, with everything, every time.

Giovanni Asproni 00:16:46 So expensive, I guess, both in terms of money and time as well.

Charles Weir 00:16:50 Indeed, yes.

Giovanni Asproni 00:16:52 Have you got any examples you can give us from real projects you worked on where the good enough approach was followed?

Charles Weir 00:17:00 Well, the first one that I was involved with was the e-cash app, the first mobile money app, that we worked on. We were novices at security, and there was very little information available out there, though we did consult a couple of nationwide experts. What we particularly did was a risk assessment with impact and probability, and based on that we decided to tackle some of the possible security threats and not others. We had an audit trail, indeed we shared it with our clients, of what we had decided to tackle and what we hadn’t, and also, as I mentioned earlier, what we’d said was out of scope and might well be the priority of another component in the same system. But in terms of the good enough approach, in a sense almost every project automatically follows the “well, we’re not going to do everything” line.

Charles Weir 00:18:11 So really what good enough is talking about is saying: well, let’s be really selective about which things we do. And I’ve worked on that with several projects recently. There was a lovely one, a piece of software for government that needed to be really, really secure; it was a kind of risk-management sort of piece of software, and there was no question of not doing every single thing they could to implement security. But what they realized after doing the analysis was that there was a time-based element to this: they could do certain important things earlier than others, such that their customers (and here we’re talking agile software development) could provide feedback and understand what was going forward before they needed to pile on all of the other detailed security features. So those are two different kinds of good enough security: good enough for now, and good enough forever.

Giovanni Asproni 00:19:36 Actually, is it even possible to have something that is good enough forever, in your view?

Charles Weir 00:19:41 You are quite right; of course it isn’t. Indeed, an important part of this, as I mentioned earlier, is revisiting the risk assessment, because quite a few things may change. The probabilities of things may change: you might have a whole outbreak of some new sort of attack, and you read about it in the papers and you think, we’ve upped the likelihood of that. Or it might be that your software is doing, or planning to do, new stuff, and you’ve got a range of new possible threats or possible things that could go wrong. Or you might find that as the database expands, or the whole scope of what you are doing expands, something that you’d considered unimportant with a relatively small number of people has suddenly become really important. So there are lots of dimensions of change that could end up needing a rethink. But the rethink needn’t be huge and involve a lot of people for a long time; it’s a matter of: we need to revisit this; might we need to set a different priority or start dealing with some new risks?

Giovanni Asproni 00:21:05 So you gave us an example where security was really important, this government project. But are there in general some domains you can think of, or have worked with, where good enough means actually quite a lot?

Charles Weir 00:21:18 I’ve done a recent project in the health domain, which was a new one to me: health devices, health IoT. And it was interesting because, yes, when you’re talking about people’s lives, it’s really, really important that things don’t go horribly wrong. But the approach is very similar. It’s just a matter of, I say just, it’s a matter of deciding which, in this particular context, are the security things and the privacy things we need to worry about.

Giovanni Asproni 00:22:04 Now, moving on a little bit. In my experience, security and privacy are often treated as less important than features when developing a software system, yeah? Nobody will tell you that; everybody will deny it. But this is what I’ve seen in practice in many, many projects. On your site you say that good security and privacy are actually important selling points. Can you expand on that?

Charles Weir 00:22:27 Yes, they have become so. Not in every case, by any means, but the archetype, I suppose, is Apple. Take a look at Apple: security is now their big selling point. They are up against Android, and what is the reason Apple give for why you should go for Apple? It’s because they implement, in some ways, better security. It’s working very well for them; they do get sales because of that, and it really does work. There’s another company I’ve worked with who do tracking devices for valuable things that you are shipping around the world.

Charles Weir 00:23:14 There’s a lot of competition for tracking devices, but if you are shipping a valuable thing around the world, you probably don’t want your adversaries, the people who might steal that valuable thing, to be able to access the information about exactly where it is right now. So that security is an important selling point. It’s not just the ability to track that thing; it’s the ability to track that thing without anybody else being able to know where it is except the people who should. So there’s a couple of examples. I find when I’ve done workshops that almost every group of developers is able to identify some aspect of the security or privacy they’re implementing that actually is a potential selling point.

Giovanni Asproni 00:24:09 What can developers do to convince decision makers that security and privacy are important selling points? You’re telling us that they are, but we know as developers that sometimes we’re told: these features come first, we’ll look into those other things later on. This often happens. So how can we convince people, how can we sell the importance of security to, I don’t know, product managers, other managers, other decision makers in the company?

Charles Weir 00:24:38 So this requires an important insight, and I’m going to highlight this one. Product owners and people in that role think in terms of positive selling points. They are not terribly interested in bad things that might happen; it’s just not their way of thinking about it, and quite rightly not. So if you want to make security and privacy into some kind of selling point, you need a way to turn it round so that it’s positive. Indeed, the very word security is a classic of this. What is security? Security is seen as something positive. You know, you’ve got good security on your house; what does it mean? Underneath, it’s a negative thing: nobody can break in. So somebody at some point, some forerunner of Chubb or the locksmiths or whoever it might have been, has thought: we’re not really selling people not being able to get in, we’re selling security, we’re selling this positive thing.

Charles Weir 00:25:52 So the important thing is not to try to sell it by emphasizing how bad the bad thing would be. Just as a side observation: most senior management have got a whole list of the horrible bad things that might happen, and they just say, oh, add it to the pile. We don’t worry about that stuff; we just carry on plowing into the future, thinking about the good things that might happen. But if you offer a positive aspect of the security of what you’re implementing, then that becomes something that a product owner can easily match against the other things they have to decide on. And I’m certainly not saying that security features should always, or indeed normally, be the first thing. This is very much a business decision, and it’ll be different even with two organizations doing almost exactly the same product; they will take different decisions. But if they are comparing it as a positive thing against other positive features they could do, that decision becomes manageable and sensible, and it seems to work quite well as an approach.

Giovanni Asproni 00:27:24 So if I understand correctly, what you’re saying is: think about why you want to implement security in that way and find something positive. Maybe saying your customers will be more confident in your product, or it will be easier for them to take the decision to buy it because the competitors simply don’t offer this nice aspect, something along those lines. So putting a, well, I was about to say a spin, but it’s more about highlighting the good things that may happen if security is implemented in some way.

Charles Weir 00:28:04 Yes, and the good thing, though, is as presented to the potential buyer. So we’re not trying to persuade the product owner that good things will happen; we’re giving them a little label, a little kind of advertising slogan, that they can then take to the outside world. And best of all, of course, is if the product owner is actually involved in the creation of the good thing. There was a company I worked with, an excellent company, that was doing a range of different sorts of websites. They had to add security, it was costing the customers, and the customers were a bit concerned about this: you know, can’t you just do it? Why are you charging extra for the security stuff? And the team thought about this and thought: why don’t we do gold, silver, and bronze security? Then people can see what it is that they’re buying.

Charles Weir 00:29:12 They can match the level of security against how risky this particular web service is. So if somebody says, I want to create this highly sensitive website that collects very vital data, you can say: yep, that sounds good, it needs gold security, and that will cost you yay much. And if they come back and say, no, we only want bronze security, you have got something to discuss with them. That way of simply promoting the distinction and the amount of effort they had to make was, to my mind, brilliant. I loved it.

Giovanni Asproni 00:29:58 Now, when talking about security, like any other system quality, conventional wisdom tells us that it is difficult to retrofit into software systems, sometimes borderline impossible depending on the system. But when I was doing some research for this interview, I came across a book called Blitzscaling Security by a pen tester called Sparc Flow. And he says that in situations where the success of a product or a service is still in doubt, think of a young startup short of cash, maybe they don’t have the market fit yet and they’re trying to find customers and investors, it’s better to forget about security to avoid delays, potential losses of market share, and the associated costs. So he suggests making the product work first. What do you think about this position?

Charles Weir 00:30:53 Actually, I agree. I agree with Sparc Flow, I think, and even if I didn’t, any startup that doesn’t do that is probably not going to get very far. So yes, you certainly want to go ahead without worrying too much about implementing security and privacy features, but that is not the same as not thinking about them. I would urge that same startup to think, and do a risk assessment, right at the beginning, to keep that risk assessment, and to know what it is that they are going to do when the boat comes in, when they’re successful and they’ve got plenty of money to spend on it. Because the things that are really difficult to change are the interfaces with the outside world, how people use the product, and what it is you are selling. If you get that right at the beginning, with just dummy versions of all the security stuff you were thinking of putting in, or none at all, so long as you’re aware that you’re going to do this, you will make slightly different decisions than if you just go ahead, cobble something together, and hope for the best.

Charles Weir 00:32:27 Also, I would suggest that a security risk assessment is quite a good thing to give a funder. It doesn’t cost an enormous amount, and it looks good. They’ll all be aware of the importance of security and privacy; they’ll all be worried about it. If you can say: yes, I know we’ve not done it, but here’s where it goes, then you have a good story for those investors.

Giovanni Asproni 00:32:53 So to me your answer boils down to: as long as the decision is a conscious one and you’ve got a plan, you’ll be fine.

Charles Weir 00:33:05 I always think of Zoom in this context. Zoom, as a relatively unknown company, advertised their security because they had HTTPS connections, but they suggested they had end-to-end encryption, which they didn’t. It was only when they were really successful that, I noticed, they bought up a company that had that particular functionality available, so they could offer the story that they’d always told. So to me, yes, they had good enough security at all points. We know of problems, but none of them were related to that technical susceptibility. And then, when it was big enough to be a serious threat, they could deal with it.

Giovanni Asproni 00:33:57 What are the most common mistakes that software architects and developers make when architecting or implementing security in their systems? I’m thinking of situations where they end up with not enough security, or maybe too much, so they spend too little or too much. Are there any typical mistakes that you see people making?

Charles Weir 00:34:19 My impression is that the interesting and the big problems tend to be in the functionality. The simple and conventional answer to your question of what common mistakes developers make is: go and look at the OWASP Top 10 list of bad mistakes that developers make, and don’t make them. But actually, even the OWASP Top 10 is only of interest to people with hacking expertise, and there are other errors you can make, and they tend to be related to functionality. I’m grasping here for examples, but my general impression is that errors around functionality, ones that allow people to make mistakes, or cause data to be shown where it shouldn’t be shown, or make files available where they shouldn’t be available, are, though not strictly speaking technical errors, actually the most damaging ones and by far the most common.

Giovanni Asproni 00:35:46 Now, talking about risk assessment. In some descriptions of the workshops you presented at various conferences, you wrote that security and privacy risk assessment is easy and need not take long. Can you expand on that?

Charles Weir 00:36:04 I was astonished by this. I come from an agile background, so my first instinct is to do the simplest thing that could possibly work. When I started doing risk assessment workshops, I had in the back of my mind: oh, I’ll probably have to teach risk assessment, I’ll have to give examples of all the things that might go wrong, and there’d be an awful lot of work involved. But as an experiment I tried just asking a competent team: what could possibly go wrong? What are the risks associated with security and privacy that could go wrong with this? And to my surprise, most developers could come up with a really good set of problems if they concentrated on just that. I’m not saying it’s a complete set of problems, probably not, but it was a very good set of problems, and it didn’t really take very long.

Charles Weir 00:37:09 Typically now it’s an hour and a half to two hours; some teams went back and did a more complete risk assessment on their own for two hours the next day. We’re not talking huge investments of effort, and we’re talking a good deal of learning by the development team as they do it. A lot of the benefit of doing this as a group is that everybody comes away having seen and thought about what the problems might be. So even though it may ultimately come back to being a product owner issue, once you’ve done that risk assessment, everybody is aware of what could happen and they have a context for some of those security discussions, and that is tremendously valuable.

Giovanni Asproni 00:38:03 Are there situations for these assessments where actually having a security expert would be a must?

Charles Weir 00:38:11 A must? I can’t think of any situation where failing to do it would be better than doing it without a security expert, and that’s no reflection on security experts. It’s great if you can get one; they’re just difficult to get hold of. Similarly, we experimented with product owners as well: we had workshops with and without security experts, and we had workshops with and without product owners. And yes, it helps to have a product owner there. They will have a view on what’s happening; they’ll probably be much more aware of the likelihood of security problems and the importance of them, because that’s the world they live in. But if you don’t have them, it’s much better to do the workshop, or let’s say the risk assessment, than not.

Giovanni Asproni 00:39:14 So my understanding is that security experts would help but are not indispensable for these kinds of assessments.

Charles Weir 00:39:22 Exactly. Very much so.

Giovanni Asproni 00:39:24 We can reach a good enough level usually with just the development team, and maybe with a product owner as well. It’s better if the product owner is there, but even if the product owner is not available, which is quite common in many agile teams, these workshops can still give a very good return on investment in terms of risk assessment.

Charles Weir 00:39:46 Indeed, and in terms of learning for the team, yes.

Giovanni Asproni 00:39:50 Talking about cybersecurity problems: you claim that they are less likely to happen than cybersecurity experts tend to suggest, and that other problems are more likely. Can you expand on that? And also, have you got any data to support this?

Charles Weir 00:40:09 This was mainly based on looking at the Information Commissioner’s Office data. There’s astonishingly little information about how likely cyber problems actually are. There’s a superb survey that surveys a lot of United Kingdom companies about cyber, and you can get hold of the full data and analyze it, and we’ve done that. From that we can see that although some companies have had really nasty cyber problems, such problems actually are not common at all, even amongst the organizations involved. And the ICO data shows us that what is very common amongst the leaks is things caused by human error, or by humans needing to get on with their jobs and security getting in the way, and that having some bad outcome. Classic things like sharing logins and the like: yeah, you have to do it sometimes, but it may have bad effects.

Giovanni Asproni 00:41:28 You also talk about using an industry-wide risk model; you say that it helps with the discussion. What is an industry-wide risk model in this context?

Charles Weir 00:41:38 So the world is remarkably short of security and privacy risk models. I’ve come across one or two; there’s a recent one from NIST, the American standards institute, an analysis of an embedded device to go in your body, which is interesting. But there’s surprisingly little available, because any kind of risk assessment is usually seen as company confidential or organization confidential. So the actual knowledge that is available about what the real problems and likelihoods are for a given industry is quite small. The banks deal with it by having informal meetings between the security experts of different banks. No names, no pack drill: they meet up and they discuss, you know, oh, I had a lot of problems with that, and we’re seeing an awful lot of this. So they do have a fair bit of sharing between the leading players there.

Charles Weir 00:42:58 It’s not common elsewhere. The NCSC, the UK cybersecurity outfit, is doing its best to pull in more information, but again that only appears as warnings about new trends. So it is very difficult to find out, if you are, say, in the health sector, what things you should be particularly worried about. One thing we did notice, though, when we did this for a particular sector, the health sector, was that the numbers didn’t seem to vary enormously between industries. So a kind of generic threat model for applications generally was a great deal better and more useful than none at all. Does that sound right?

Giovanni Asproni 00:43:58 Yes. Let’s talk a bit now about implementing security. Say we have a team that wants to start looking at implementing security in their system. What approach would you suggest to them?

Charles Weir 00:44:15 So this is where we use risk assessment. It’s sometimes called threat assessment, but threat modeling is usually used to mean looking more at existing software and seeing where there could be problems. Risk assessment is looking at the whole system and thinking: why would there be problems, and therefore where would they be, and what do we need to worry about? The classic way to do risk assessment is very well understood in other fields: health standards use it a lot, as does anybody doing heavy engineering or almost anything that has to adopt a risk-based approach; they may employ people who do nothing but risk assessments and risk analysis. So this is well-known stuff. You divide risks into impact and likelihood. The impact is a range, or usually you just put a number on it, to say how bad it would be: what the cost would be in money, lives, whatever.

Charles Weir 00:45:29 That’s if this bad thing happened. And the likelihood is how often you would expect this to happen per year, first if we don’t do anything about it, and then of course if we deal with it. So you have the before and after, but to start with we only deal with the before: what would you expect if you don’t do anything much about this? The normal way to do that is to divide into bands: low, medium, or high. We do suggest that if you’re doing this, you have an initial discussion about what you mean by low, medium, and high, and put some sort of numbers on them, so that when people say this is low, this is medium, there’s a shared understanding of what that means. But the key thing to say about it, and this has been a surprise to me, is that low, medium, and high are actually different orders of magnitude in almost every case.

Charles Weir 00:46:36 So typically we find, whether we’re talking about probability or about impact, that if we say low, that might be 10; if we say medium, that might be a hundred; if we say high, that might be a thousand. And this is really important, because it means that when you do a step up, that’s 10 times as likely, 10 times as much overall welly that risk has, so almost everything else fades into insignificance. It wouldn’t matter if you had five different lesser ones, because the top one is still by far the more impactful, the higher risk for the organization.
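To illustrate the arithmetic behind that point, here is a small Python sketch assuming the order-of-magnitude band values Weir mentions (low = 10, medium = 100, high = 1,000); the risks listed are invented for the example.

```python
# Band values an order of magnitude apart, as described in the episode.
BAND = {"low": 10, "medium": 100, "high": 1000}

def score(impact: str, likelihood: str) -> int:
    """Relative risk score: impact band times likelihood band."""
    return BAND[impact] * BAND[likelihood]

# Hypothetical risks, for illustration only.
risks = {
    "credential stuffing against login":   ("high", "medium"),
    "insider accidentally shares a report": ("medium", "medium"),
    "targeted zero-day on our API":        ("high", "low"),
    "office burglary takes a laptop":      ("low", "medium"),
}

# Rank risks from highest to lowest score.
for name, (impact, likelihood) in sorted(
        risks.items(), key=lambda kv: -score(*kv[1])):
    print(f"{score(impact, likelihood):>8,}  {name}")

# The top item scores 100,000 against 10,000 for the next one down:
# a single step up in band dwarfs several lesser risks combined.
```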

Giovanni Asproni 00:47:26 My understanding of what you’re saying is basically: when you have low, medium, and high, if we go with these three levels and we’re talking about likelihood, low is something that is not very likely to happen; it would be quite unusual if it happened. Medium is already something a lot more likely, since you’re saying it’s a different order of magnitude of probability. And high seems to be almost certain to happen, so we’d better do something about it.

Charles Weir 00:48:05 Yeah, or high could be: it will happen to one company in 10 this year. That would be a typical high for a security kind of thing. As an example, in the statistics we’ve been using in the Hipster workshop, the latest project I’ve been working on, I worked out the likelihood of a typical person suffering a burglary or fraud. Burglary turns out to be about one in a hundred, and fraud turns out to be about one in 10. So fraud is high, that’s high likelihood; burglary you might say is medium; and being struck by lightning is ultra low, two or three orders of magnitude down.

Giovanni Asproni 00:48:55 And now I’d like to ask you for some recommendations for existing systems. So far, I think what we talked about works quite well for greenfield projects. We also mentioned that startups that don’t yet have market fit may say: okay, make the product work first and then take care of security. Or otherwise you do your assessment, involve the business, they set the boundaries of where security is important for the particular project, and you get on with it, yeah? But have you got any specific recommendations for legacy systems, where security was not considered at all in the original design?

Charles Weir 00:49:35 This is less common now than it probably was 10 years ago. It’s an interesting problem, but ultimately, from an engineering perspective, it’s the same. You still do the risk assessment; you then think about what risks you are dealing with, where you are threatened. And then you think about the mitigations, the things you’ll have to do and change in your existing system, or often a layer you put in front to defend it, in order to make that work. After that, it’s engineering. The reason, I should say, why I’ve concentrated on this kind of risk assessment and working with product owners is that my experience has been that software developers are very good at solving that kind of problem, and I’m a software developer by background. If you say: well, we’ve got this problem that we need to implement this kind of security, or we don’t want the things coming into the database to be able to corrupt it because there’s code in them, then a good developer will find a way of doing it. There’ll be some technique or other, or you can look it up on the web; there will be a way of dealing with it. The important thing is to know that the problem is there.
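As one concrete instance of the kind of engineering fix alluded to here, keeping “code” arriving in the data from corrupting a database, below is a minimal Python sketch using parameterized queries instead of string concatenation; the table and the hostile-looking input are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

user_supplied = "alice'; DROP TABLE users; --"  # hypothetical hostile input

# Risky pattern: splicing user input into the SQL text itself,
# so crafted input can change the meaning of the statement:
#   conn.execute(f"SELECT * FROM users WHERE name = '{user_supplied}'")

# Safer pattern: the driver passes the value as data, never as SQL code.
rows = conn.execute(
    "SELECT name, email FROM users WHERE name = ?",
    (user_supplied,),
).fetchall()
print(rows)  # [] -- the hostile string is just an unmatched name
```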

Giovanni Asproni 00:51:08 Are there any security measures that you always recommend for all teams and systems? Things that can be done, or maybe should be done, that have very low cost but potentially a high impact anyway. Is there anything like that?

Charles Weir 00:51:24 Only risk assessment. The answer might be a very quick risk assessment: we don’t see any risks in this. But that is the only thing I would say is worth doing for almost any project; we’ve talked about the ones that are small enough not to worry. After that, no. But quite often, in terms of just sheer hygiene, you may well find that keeping the components up to date is pretty important. You might find that automatically checking and fixing bad things in the code, which you can get tools to tell you about, might be useful. But there’s very little point in getting those tools and paying for them, or paying for the effort to install them, if you’re not going to actually do anything with the results. And that’s not an unusual outcome, unfortunately. Or maybe fortunately: it might be that not doing anything about the problems is, again, the right decision. This is a business trade-off.

Giovanni Asproni 00:52:32 We are going back to the risk assessment again.

Charles Weir 00:52:34 Indeed. And that’s what the risk assessment’s for.

Giovanni Asproni 00:52:37 The next question is about implementing security tasks and prioritizing them. How can teams with tight project deadlines, which as far as I know are most of the teams out there, actually prioritize the security tasks to implement without access to security experts? Most teams don’t have this access, so for them it could be a bit difficult to decide what to do first and what is more important.

Charles Weir 00:53:09 Well, again, this lies between the product owners and the development teams. What is unusual, though not unique, about security and privacy issues is that they are functional requirements that tend to be identified by developers, or certainly technical people, unlike most requirements, which tend to come from the customer. So the problem, as we’ve already looked at, becomes selling these new things, these new problems that have come up, and involving the product owners so that they know and understand why they’re likely to be important to the customer, using “customer” in its widest possible sense. If it’s not important to the customer, I would argue that’s probably a don’t-do-it. But if it’s likely to be, then we need the two techniques that we’ve talked about: we need the risk assessment, and we need to turn the discoveries round into positive things for the customer so that the product owners can make those difficult judgements. And it turns out, again to my surprise, that security experts are not necessary for either. They’re useful, but they’re not necessary.

Giovanni Asproni 00:54:44 And what is the best approach to keep security good enough and in line with the evolution of the system and the changing needs of its users?

Charles Weir 00:54:53 So this is about revisiting the risk assessment, and how often depends entirely on the nature of your project; typically people seem to talk about every three to six months. But you should also be keeping a weather eye on the assessment with each new story or task you take on, to see whether there is a security implication for that task that’s already in the list of threats you’ve got.

Giovanni Asproni 00:55:30 Okay, I think now I’ve got some mandatory questions about AI, which is all the rage nowadays. In your view, does artificial intelligence have the potential to raise the bar for deciding a good enough security level?

Charles Weir 00:55:48 I’d say it doesn’t really work like that. The way it works is that good enough would be related to the current trends in what’s going on. So if you read in the newspapers that attackers are using AI to make a certain sort of attack automatic, and therefore it’s happening to almost everybody instead of just certain targeted outfits, then you might say: all right, we’re going to have to make sure we are not vulnerable to that one. Or you might not; it might not matter to you. So it becomes part of the background: the likelihood of certain sorts of attack might well be changed by the advent of AI being used by hostile people.

Giovanni Asproni 00:56:37 Yeah. You already said that teams don’t necessarily need security experts. I mean, they are good to have, obviously, but they’re not indispensable. Now, according to some, artificial intelligence will make developers redundant in four or five years. What will happen to security experts? Can AI replace them as well?

Charles Weir 00:57:00 I am surprised at the suggestion that AI will make developers redundant in four to five years. It will make their jobs different in one to two years, yes; redundant, I’m not sure that I see that one very soon. Similarly with security experts: yes, AI as we see it might make their job a bit easier. Indeed, companies like Darktrace have been using different sorts of AI to help address threats for years now. I’m not sure that I can see the likes of ChatGPT making a huge amount of difference to security, because so far as I can find out, the actual amount of knowledge on the web about this is relatively small, and therefore ChatGPT and Bard and the like have really got no information to play with. They can’t create knowledge that isn’t already there, and the knowledge they do create tends to be, at the very least, in need of a certain amount of checking. So I’m not sure that I can see that sort of AI really helping very much, though I could well be wrong. I’m actually interested in, and concerned about, the possibility of AI use having security issues, and that is again going to be part of a whole new aspect to the risk assessment. You’ll find a great deal in the media at the moment about the potential hazards of AI use. Yeah, indeed.

Giovanni Asproni 00:58:54 I guess that both developers and security experts can relax a bit for a few more years then; I hope you are right on this. I think we’re at the end of our interview, and we’ve done a great job of introducing this ruthless approach to security for busy developers. Is there anything that you’d like to add or mention?

Charles Weir 00:59:15 We should say that the materials for two different styles of workshop are available for free on the web, and you are very welcome to them. Please get in touch with me if you have ideas and suggestions; I’d be delighted.

Giovanni Asproni 00:59:33 Thank you, Charles. I can confirm that we’ll put the links to the website and to the materials in the show notes. People, I guess, can follow you on Twitter; how else can they get in touch with you?

Charles Weir 00:59:48 If you Google “Charles Weir Lancaster”, that’ll find me.

Giovanni Asproni 00:59:53 Okay Charles, thank you very much for coming to the show. It’s been a real pleasure. This is Giovanni Asproni for Software Engineering Radio. Thank you for listening.

[End of Audio]
