
SE Radio 658: Tanya Janca on Secure Coding

Tanya Janca, author of Alice and Bob Learn Secure Coding, discusses secure coding and secure software development life cycle with host Brijesh Ammanath. This session explores how integrating security into every phase of the SDLC helps prevent vulnerabilities from slipping into production. Tanya strongly recommends defining security requirements early, and discusses the importance of threat modeling during design, secure coding practices, testing strategies such as static, dynamic, and interactive application security testing (SAST, DAST and IAST), and the need for continuous monitoring and improvement after deployment.

This episode is sponsored by Codegate.


Transcript

Transcript brought to you by IEEE Software magazine and IEEE Computer Society. This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number.

Brijesh Ammanath 00:00:54 Welcome to SE Radio. I’m Brijesh Ammanath and today our guest is Tanya Janca. Tanya is the author of Alice and Bob Learn Secure Coding, Alice and Bob Learn Application Security, and Cards Against AppSec. Over her 28-year IT career, she has won multiple awards, including OWASP Lifetime Distinguished Member and Hacker of the Year, and is a prolific blogger. Tanya has trained thousands of software developers and IT security professionals via her online academies, SheHacksPurple and Semgrep Academy, and her live training programs. Today we are going to talk about how to integrate secure coding into the software development lifecycle. We have covered secure coding concepts in Episodes 475, 568, 541, and 514. Let’s get started with fundamentals. Tanya, what are some fundamental security concepts that you feel every developer should know?

Tanya Janca 00:01:50 I really want everyone to know the idea of “least privilege” — the idea that we only grant exactly what a user or a person needs, so they only have access or permissions, or they can only see or do the things they actually need to instead of just opening the door all the way when we don’t need to. Another concept that I think is really important is usable security. Making sure when we design secure concepts that they’re not terrible for the end user because users are really smart and tricky, and they will get around them. And so if we make our security features more pleasurable to experience, it’s a lot more likely that users will do what we want and make the secure choices. I could go on. I’m wondering how deep you’d like to go on this question?

Brijesh Ammanath 00:02:43 We’ll dig deeper into each of these principles or the concepts that you mentioned as we go through the podcast. For the immediate next question, I wanted to ask you about trust and why it’s critical to stop assuming trust in systems and data.

Tanya Janca 00:02:59 Yes. So usually what I do is I explain the concept of implied trust. So users, human beings, actually in general, we trust; we’re very trusting compared to other animals. So if you look at panthers, if they see each other, they usually fight or they have a baby panther. And there are lots and lots of different animals in the animal kingdom that just have zero trust. When they see another of their kind, they try to kill them. Whereas human beings, we’re very trusting, and as a result, we have an amazing society, right? We’re able to travel all over the planet. I’m able to send you money and you’re able to go buy a thing and then mail it back to me, right? That’s incredible. And so when we design our systems, we tend to design them with implied trust. So for instance, we used to design our networks where someone would get onto our network, we would make sure they’re the right person and they are allowed there.

Tanya Janca 00:04:00 But then once they were on the network, they could go anywhere and do anything. And that assumed trust. It assumed that this person knows, oh, well I’m not a database administrator so I shouldn’t go on the database servers. When in fact it turns out not every person is trustworthy. And so we need to not trust any sort of input or connection or integration to any of our systems. So if we’re getting input from a user, whether it be Tanya enters something into a search bar of your web app that you made, or there’s a hidden field and someone could have changed it, there’s something in the URL parameters. We got something from an API, we got something from the database. That’s all input to our system. And if we could validate that it’s what we’re expecting and that it’s okay to use before we make any decisions or do anything, we would avoid a lot of vulnerabilities.

Tanya Janca 00:04:58 Let me tell you. Same with connecting to things and integrating with other things. So we’re calling an API, are we sure this is the API, we meant to call, or maybe we are the API. It’s, is this front end allowed to call us? Is this a friendly front end? Is this another API calling us? Should it be calling us or is this actually a malicious actor? If we could not trust by default and always verify before we take our next step, so before we use that data or we open the connection or we allow them to touch our database or access our database, I feel like at least half of all vulnerabilities would just disappear overnight.

Brijesh Ammanath 00:05:40 Do any real-world examples where assumed trust caused failures come to mind?

Tanya Janca 00:05:45 So as an example, just SQL injection. You get something from the user. So let’s say you are filling out the form, you seem nice, but I would still validate data from you. So you put something, let’s say we’re logging in somewhere, and so there’s the username and there’s the password. Let’s say because we’re not doing passwordless, we are not fancy. And you put into the username field a bunch of code instead of your actual username, right? So instead of putting whatever your username would be, you put in a space or a letter or something and then a space, and then a single quote. And you add on the classic injection code, which would be or one equals one, space, dash, dash. So you put the two hyphens at the end of the SQL code, and you’re like, I don’t need to see the rest of this.

Tanya Janca 00:06:39 I don’t want to be syntactically correct, just end the statement. And then it goes through. And I am trusting. So instead of using parameterized queries and instead of validating that data, I take it, I concatenate it to my select statement, and I just add it all together and ask the database to execute it. So instead of checking that input to see if it is just letters and numbers like it should be for a username, because that would be not trusting, right? Making sure that’s the correct thing. Then I concatenate it together and send it to be executed. So I’m trusting there’s no code in there. If I was not trusting, I’d use a parameterized query, because it takes those parameters on the database server, whether it’s NoSQL, SQL, whatever query language you’re using, and it removes any power they have. It says this can only be treated as data. But I’m just super trusting.

Tanya Janca 00:07:36 And so I execute it directly against my database. And on top of that, if I wanted to really do full trust, I would do it with database owner permissions, because I’m such a trusting person, right? And then bad stuff happens. And so there are many, many stories of different breaches that I’m thinking of where there is assumed trust or there is some sort of assumption that everything’s going to be fine. I feel like there was one about a year ago that we called MFA fatigue. So basically a malicious actor kept sending multi-factor authentication challenges to the system administrator over Christmas, I believe it was the Christmas holidays. And they just kept sending them randomly over and over, and the person was like, something’s broken, but guess what’s closed? The help desk, right? And so they couldn’t say, hey, could you turn this off?

Tanya Janca 00:08:33 And so eventually after hours and hours and even days of constantly receiving alerts, the person just put yes. And then the malicious actor was in. And this was part frustration, but part also just, I’m sure it’ll be fine. I can trust my systems to protect us. I’m sure this is just broken. I just need this alert to stop. And I mean, what would I have done if I had received literally the 200th alert in a row over Christmas day? I mean, probably turn off my phone, right? But I feel, oh my gosh, almost every single hack, if you look at it, a lot of times there’s an implied trust or there’s trust where there shouldn’t have been like every single phishing attack that’s ever happened. It’s a person who’s being tricked into clicking a link or opening something that they should not. And it’s because they trust that it is okay. Because they’re looking at it and they’re like how could someone possibly know this much information about me? Of course I should click this link. It’s unfortunate because it plays on part of what makes human beings wonderful and makes us so successful. And us constantly trying to train users to be less trusting, I feel is not a winning battle. I feel we need to have technical controls for this rather than just training. As a person who sells training.
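To make Tanya’s SQL injection walkthrough concrete, here is a minimal sketch of the same scenario in Python with an in-memory SQLite database. The table, data, and payload are invented for illustration; the point is the contrast between concatenation and a parameterized query.

```python
import sqlite3

# Set up a throwaway in-memory database (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# The classic injection payload Tanya describes: a quote, OR 1=1, two hyphens.
malicious = "x' OR 1=1 --"

# Trusting: concatenating user input into the statement.
# The payload rewrites the query, and the trailing -- comments out the rest.
query = "SELECT * FROM users WHERE username = '" + malicious + "'"
rows_trusting = conn.execute(query).fetchall()
print(len(rows_trusting))  # 1 -- OR 1=1 matched every row, no password needed

# Not trusting: a parameterized query treats the input purely as data.
rows_param = conn.execute(
    "SELECT * FROM users WHERE username = ?", (malicious,)
).fetchall()
print(len(rows_param))  # 0 -- no user is literally named "x' OR 1=1 --"
```

The `?` placeholder is SQLite’s syntax; other drivers use `%s` or named parameters, but the principle is identical.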

Brijesh Ammanath 00:10:00 What is the CIA triad and how does it help in defining secure systems?

Tanya Janca 00:10:08 Oh, so classic. So CIA stands for Confidentiality, Integrity and Availability. And it is our charge, as the information security or IT security team, and that includes the AppSec nerds like me, to protect the confidentiality, the integrity, and the availability of the systems and the data that are under our care. And generally for a lot of companies, availability is the most important one. So are our systems up? If you sell something online, you want that page up, right? If you have a store, you want the store to be open. Availability tends to be number one for a lot of businesses. But when it comes to, for instance, healthcare, integrity is pretty darn important as well, because if we gave the wrong amount of medicine, if we operated on the wrong organ, if we operated on the wrong person, that would be catastrophically awful.

Tanya Janca 00:11:10 When we think of a person with integrity, it’s, is this person trustworthy? Is this value, is this data, is this system trustworthy? And then confidentiality is, is it a secret? Have we kept the secrets we are charged with keeping? And confidentiality is still important, don’t get me wrong, but it tends to often be the least important when it comes to businesses. Compared to, for instance, a governmental agency that is keeping state secrets, or the tax office, which doesn’t want everyone to know everyone else’s financial data. That’s where confidentiality would really come into play.

Brijesh Ammanath 00:11:47 We’ll move on to the next phase, which is a focus on the secure software development lifecycle. And we’ll get started with the basics. So what is a secure software development lifecycle and how does it differ from a traditional SDLC?

Tanya Janca 00:12:01 Fantastic question about my favorite thing. So the system development lifecycle is the methodology that you follow to build software. If you are not following one, then you will not necessarily have great software at the end, and you probably won’t have adequate documentation. You won’t be sure that you are going to create a good piece of software each time. And so a secure system development lifecycle is taking whatever methodology the people use where you work. So let’s say they’re doing DevOps, they’re doing Agile, they’re doing Waterfall, and you as the security person, you add security steps, ideally to every phase of the system development lifecycle. In my opinion, and I’m super biased as a person who’s obsessed with securing software, and that is my job and career, I think every single phase needs at least one security activity. And so as an example, whether you’re doing DevOps or Agile or Waterfall, you still at some point have a list of requirements, right?

Tanya Janca 00:13:09 And so I would want there to be security requirements. For instance, know there’s going to be a pen test before we go to prod, let’s say, or there’s going to be a secure code review at this point in the project. We’re going to have a threat model at this time. We’re going to use these security tools in our IDE to check our code. We’re going to follow our secure coding guideline or standard as it may be. Let’s say you’re building a web app with a beautiful front end that’s in a very nice JavaScript framework. And then you have a whole bunch of backend APIs and some of those APIs call a couple of serverless apps. And then there’s a database, and then it also connects over to a sister company that you have over to one of their APIs and sends data three times a day.

Tanya Janca 00:14:00 So you would want to have in your requirements: these are the things you have to do to secure the API, these are the things for the front end, these are the rules for connecting to a third-party API, this is the API gateway we use, the serverless app should follow this, we use this type of serverless app, et cetera, et cetera. So really getting specific, not kind of specific, on what you want to see. And then up next would be design. And so if you’re doing Agile, you might be designing the main part of the app first, and then you might be designing additional beautiful, amazing features that go on after. But during your design phase, perhaps you do a threat model on the main part of the app. And then whether or not you have time to threat model the other things, perhaps you do a whiteboarding session.

Tanya Janca 00:14:54 That’s one of my favorite things. So I combine the threat modeling and the whiteboarding. So threat modeling is, I’m friends with Adam Shostack, who’s very, very famous for threat modeling. And I know this annoys him. So Adam, if you’re listening, I apologize, but I like to think of it as evil brainstorming. So basically you get together and you talk about this is what we’re doing and what could go wrong. And you brainstorm all the different threats that there could be to your app, and you basically make a list of all the threats. And then you think about, okay, so which ones of these are we actually worried about? Because for instance, an asteroid could hit planet Earth and take down your data center, but I don’t feel any design considerations I make in my app can help with that. So I’m going to leave that risk off and just accept that risk.

Tanya Janca 00:15:43 Versus a definite threat could be, could someone do a replay attack against this app? Do we have defenses against that? And because it’s transferring money from one gift card to another gift card, we want to make sure that someone can’t replay that transaction. And then if we don’t have a double check to make sure that there’s money on the other gift card, if we allow it to just run the transaction again without a double check, this could be a problem. Right? So that’s a threat. And then of course you come up with defenses for the threats that you find disconcerting. And so I like combining the evil brainstorming session with a great big, huge whiteboard and you just draw out the design and I just ask a ton of questions and ask them to tell me about their app. And I just keep drawing and drawing. And I’m not an artist. You do not need to be an artist, but I find that so many things come out in that conversation. And sometimes the developers discover issues that aren’t security issues, but just issues with the design. It’s, oh wait, you thought it was going to work like that? Oh no, this is what I envisioned. And so talking all the things out can really help, and documenting. I could go on, I could give examples for every single phase, but I feel I’ve talked a lot.

Brijesh Ammanath 00:17:02 No, I think that’s very good. So at a very high level, a secure SDLC means incorporating security into each phase of the development lifecycle. And what we’ll do is we’ll double-click into each of those phases. We’ll start with requirements and then go into a bit more detail on each of those phases. So for requirements, how can teams effectively define security requirements alongside functional requirements?

Tanya Janca 00:17:27 You’re really good at this. I mean, that’s why you’re a podcast host. I feel development teams shouldn’t have to bear the brunt of this entire responsibility themselves. I feel that security teams should be providing a list of default requirements for each project based on technology and based on policy. And I’m going to explain both of those. And then they should meet with the team to talk about specific requirements. So by default, every API just needs certain things. It just does. Every web app, frontend needs certain things, every serverless app needs certain things, IoT, et cetera. And so ideally, the way I used to word it when I was doing AppSec full-time, instead of speaking and teaching about AppSec full-time, is I would say, okay, so we have your requirements basket. What technologies are you using? And I’m, oh, you’re using Java. Great. So I’m going to want you to follow the Java secure coding guideline.

Tanya Janca 00:18:28 So that is a thing that’s in your basket now of requirements. Oh, you’re building a web app. Is it a monolith, is it a microservice architecture? Et cetera, et cetera. And I just keep asking questions and I just keep putting things in their mythical basket. And what I’m doing is planning to add it to the requirements document. And then we would talk about what does your app do? What is it going to do? And so for instance, is it going to handle some health data? Because guess what? We have a policy and there is a law in many countries that health data must be accessed and protected in certain ways, right? Are you going to touch credit cards? Okay, so now we have to do PCI compliance, et cetera. So those would be policies and or legislation. So you might have a policy that states everyone follows the secure coding guideline, or brand-new web apps, have a pen test or whatever other rules that you might have.

Tanya Janca 00:19:26 And so you would add all of those as well. And then as a security nerd, I would want to read over any functional requirements that exist and see if any of them have a partner security requirement, if that makes sense. So sometimes, there are functional requirements that just make it clear to me that there’s a security control needed. So functional requirements are usually things that the business has asked for, the product owner has asked for, and this is kind of similar to threat modeling. Because you’re looking at, so this is what they want and this is the mission or the main purpose that this system is being built. And it’s, how can I help you protect that mission and make sure you succeed? And so that needs to be more of a conversation. And then ideally you give them this list and it’s not a thousand years long, right? It needs to be a realistic list. I also usually try to classify the app of how sensitive it is at this point, right? So is this app mission critical to our business or our organization? Does it hold extremely sensitive data? Because then it might be a high-risk app and or project, whereas it might not be, it might be medium or low risk. So there’s more or less security requirements as a result.

Brijesh Ammanath 00:20:44 Got it. We can then move into the design phase. And you’ve already talked a lot about threat modeling, but I’d like to take a step back and help explain to our listeners what is threat modeling?

Tanya Janca 00:20:58 So the idea of threat modeling is to identify design flaws within your system by talking about threats that could take advantage of flaws. So it’s if you just met up and you’re, hey, what flaws could there be in this system? Generally the people that designed it don’t think there are any, right? Because otherwise they wouldn’t have made it that way. And saying, oh, are there any flaws here? It sounds weird, but that’s very difficult. But if instead you say, if you are going to hack your app, how would you go about it? Or to the product owner, what keeps you up at night? What are you worried about? What would be the worst thing that could happen with this system? And they might say, so let’s say it’s a system that gives medication, it gives the wrong medication or a dose of the medication that’s wrong and it hurts a patient.

Tanya Janca 00:21:52 That’s the worst thing in the world that could happen, right? And so then you immediately start making sure that can never happen, versus if you’re like, well, what could be flaws in the system? That’s a harder question, if that makes sense. So there are different methodologies for threat modeling. I use STRIPE, which is based off of STRIDE. It’s a very popular methodology where each letter stands for something; it’s an acronym to help guide you in questions to uncover threats. And so STRIDE is Spoofing, Tampering, well, I could go through the whole thing, but basically for each one of the letters, the idea is you want to figure out, can someone elevate privileges? Is there an integrity problem here? Et cetera. And I changed it to STRIPE with a P for privacy because although quite often security folks aren’t in charge of privacy, it’s really easy to add privacy in at this phase and make sure it’s covered properly as opposed to making privacy engineering an entirely separate topic.

Tanya Janca 00:23:00 And most organizations aren’t big enough to have a privacy department. And to be quite blunt, I really care about my users’ privacy and my privacy and my loved ones’ privacy. And so I saw a really smart lady named Kim Watts talk about this at a conference. Ever since then, it’s just, okay, would this affect the privacy of our users? Would this protect the privacy of our staff? Because sometimes the users are your staff, right? My teammates matter to me, I’m sure they matter to you. And so you walk through each one of these letters and each part of your system. If you could bring a data flow diagram, that would be awesome, or an architecture diagram or a design diagram, but an architecture diagram is great. You go through each different part, so this part talks to this part, right? Okay. So repudiation, which is a security word, but basically, how can we make sure, are we keeping track of who did this?

Tanya Janca 00:23:56 Is there a way this person could deny that it was them? Could someone else go do these transactions? That would be spoofing. Could someone else do a transaction and pretend it’s me and charge my account, right? What could happen here that could go wrong? What are you worried about? And I feel having this discussion, including, so generally you invite a security representative, you invite a product representative, so the product owner, business rep, whoever, and then at least one technical person. I feel you really open people’s eyes when you have a threat modeling conversation. And I find that those developers, they design differently after a threat modeling conversation, especially if you threat model the mission of your organization, if that makes sense. So if you start with that conversation as training, they look at everything differently from then on. So for instance, when I worked at Elections Canada, we threat modeled the election, and it’s, what’s the worst thing that could go wrong?

Tanya Janca 00:24:59 And for every democracy, there are two things that they’re very worried about. And one is voter suppression. That is people tricking people into not voting or scaring them or preventing them from voting when they legitimately should be able to vote. And the other is that the public do not fully believe the results. Because that is a nightmare. It’s a nightmare for your country, it’s a nightmare for the elections department, et cetera. And so how many different ways can we assure that neither of those ever happen? And so then every single system from then on, you have that, those two threats in mind no matter what the system is that you’re modeling, if that makes sense. And so threat modeling’s educational, but I’m just going to be a little biased here, it’s so fun. It’s really a fascinating activity. I really enjoy it. And just to be clear developers, if you’re listening and you go to your first one and you’re not good at it, that’s okay because this is a muscle and it is your evil muscle, and you have spent your whole career figuring out how to make things work and how to satisfy customer’s needs and solve amazing complex problems.

Tanya Janca 00:26:08 But now you need to take off your developer hat, as my mentor used to say to me, and put on your malicious actor evil hat and think about how you could undo all the greatness that you did, which is really hard at first, but once you do a few threat models, it’ll be hilarious. You’ll be at the movie theater and you’re, this security is pathetic. I could so see 12 movies for free if I wanted to. It sounds funny, but a lot of security, especially physical security, really isn’t that good. It keeps out the honest people. And when you start doing threat modeling, you start seeing flaws in systems everywhere and you design better systems, flat out, you just do.

Brijesh Ammanath 00:26:53 Right. Moving on to the Coding phase, what are the most common secure coding guidelines developers should follow?

Tanya Janca 00:27:01 So I’ve written some books, and my first book had the most basic secure coding guideline ever. It’s one anyone can start with for web apps. And it’s like when you go on a roller coaster when you’re little and you have to be a certain height or you’re not allowed on: if you want to put an app on the internet, you must do these 17 things or you’re just not good enough. And the first one is you need to validate and then sanitize or escape all input. So you validate that it is what you’re expecting to see. So you validate the size and the type and the range. So let’s say it’s a date of birth. Guess what, that date of birth better be in the past. And it probably shouldn’t be more than 150 years ago, and it should probably be an actual date that someone submits, right?

Tanya Janca 00:27:52 And it should be in the date format that you’re expecting. And if it’s all those things, you’ve validated it and it’s good and it’s safe to use. But let’s say it’s a search term. Well, that’s a lot more complicated, right? Imagine Stack Overflow, they have to accept code. It’s so hard, right? So you would validate, let’s say, that it’s no longer than 150 characters, maybe that’s as long as you’re allowing search terms to be. And then you want to make sure there’s probably one or more characters in a search term, probably more than one, but let’s say it’s one. So you validate that, but then you’re like, gosh, I have to accept a lot of really dangerous characters. So I’m going to go through, and you can either sanitize them, and that means taking out the scary characters and replacing them with something else, or just even removing them completely depending upon what you’re doing, or you escape them.

Tanya Janca 00:28:45 And so you generally just add a backslash in front of any bad characters. And so that’s number one, just validating every single input to your app and making sure that it is reasonable to use. And then sanitizing or escaping any special characters you must accept. But if it does not validate, you reject, you do not fix it. You’re like, I’m sorry, no one is 500 years old, science is not that good yet. Please try again. You just reject it. Bad input. We’re expecting a date range between this and this. Please try again. Here’s the format we’re looking for, please try again. The second thing would be that all output to the screen for web types of applications must be encoded. And depending upon if you’re a bit of a cowboy and you’re doing inline JavaScript all throughout your HTML, then you might have to do a whole bunch of different types of encoding.

Tanya Janca 00:29:38 You might have to nest it quite a bit, but ideally we’re not doing that because life is easier then. If you output encode everything that goes to the screen, then we’ve turned off the possibility of cross-site scripting between those two. Well, we’ve generally prevented cross-site scripting; there are more protections for that. The third one would be always using parameterized queries and never, ever, ever doing inline or dynamic SQL. That is a recipe for injection. And same with NoSQL, so if you’re using MongoDB, it’s still very injectable. So no matter what the type of database is that you’re using, use whatever version of their parameterized queries. So prepared statements, stored procedures, there are so many different names for them. But database servers are very powerful, and parameterized queries will take away all of that input’s superpowers. And fourth, I definitely feel developers should use security headers.

Tanya Janca 00:30:37 So HTTP headers that instruct the browser to perform certain security functions for you. So the content security policy header is the most powerful, amazing one, especially for stopping cross-site scripting. But I want us to use all of them. That makes sense, right? Almost all of them are worth using. I created a security header cheat sheet that you can get from my website. So if you go to newsletter.SheHacksPurple.ca, there’s a resources tab, and I’m adding more resources there all the time. But basically there’s a cheat sheet that you can get that tells you what every single header does and when you need to use it. And spoiler alert, for most of them it’s you have to, and then you could just copy and paste the configuration. So the content security policy header, there’s some work there, but for most of them, there’s almost no work. Like HSTS, or HTTP Strict Transport Security in its long form. It just makes sure that if someone tries to connect to you with HTTP, it just redirects them to HTTPS. And it never, ever allows anyone to connect unencrypted. There’s no need for that anymore, right? The internet is lightning fast. We’ve discovered many ways that people can abuse HTTP. And so it just makes sure that there’s never a mistake, right? And it’s so effortless. It’s one line of code to just make absolutely sure. I’ll talk about security headers all day if you allow it.
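As a rough illustration of the headers Tanya describes, here is a minimal baseline set one might attach to every HTTP response. The values are common defaults, not her cheat sheet verbatim, and a real Content-Security-Policy needs tailoring to the app:

```python
# A baseline set of security response headers (illustrative values only).
SECURITY_HEADERS = {
    # HSTS: force HTTPS for two years, including subdomains.
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    # Only load scripts, styles, etc. from our own origin.
    "Content-Security-Policy": "default-src 'self'",
    # Don't let browsers guess (sniff) content types.
    "X-Content-Type-Options": "nosniff",
    # Don't allow this site to be framed (clickjacking defense).
    "X-Frame-Options": "DENY",
    # Limit what the Referer header leaks to other sites.
    "Referrer-Policy": "strict-origin-when-cross-origin",
}

def apply_security_headers(response_headers: dict) -> dict:
    """Merge the baseline security headers into an outgoing response."""
    merged = dict(response_headers)
    merged.update(SECURITY_HEADERS)
    return merged

print(apply_security_headers({"Content-Type": "text/html"}))
```

In practice a framework middleware or the web server configuration would add these once, globally, rather than per handler.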

Brijesh Ammanath 00:32:13 I’ll make sure that we add a link to the cheat sheet in our show notes. But to summarize, to make sure that I’ve got everything that you mentioned, the top four in your mind from a secure coding guideline would be to ensure that we validate and escape the inputs, we encode the outputs, we use parameterized queries, and we use security headers.
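The first guideline, validate then reject, can be sketched with Tanya’s date-of-birth example. The function name and the exact YYYY-MM-DD format are assumptions for illustration; the size, type, and range checks are hers:

```python
from datetime import date, datetime

def validate_date_of_birth(raw: str) -> date:
    """Validate, never fix: reject anything that isn't a plausible DOB."""
    # Must be a real date in the exact format we expect (assumed: YYYY-MM-DD).
    try:
        dob = datetime.strptime(raw, "%Y-%m-%d").date()
    except ValueError:
        raise ValueError("Bad input. Expected a real date as YYYY-MM-DD.")
    today = date.today()
    # A date of birth better be in the past...
    if dob >= today:
        raise ValueError("Date of birth must be in the past.")
    # ...and not more than roughly 150 years ago.
    if dob.year < today.year - 150:
        raise ValueError("Sorry, no one is that old yet. Please try again.")
    return dob

print(validate_date_of_birth("1990-05-17"))  # accepted: 1990-05-17
```

Note that invalid input is rejected with a message, never "repaired", which matches the do-not-fix rule above.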

Tanya Janca 00:32:35 Absolutely.
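The second guideline, output encoding, can be sketched with Python’s standard html module; the comment-display scenario is invented for illustration:

```python
import html

# Attacker-supplied input that would execute if written to the page raw.
user_comment = "<script>alert('xss')</script>"

# Output encoding: special characters become harmless HTML entities,
# so the browser displays the text instead of executing it.
encoded = html.escape(user_comment)
print(encoded)  # &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;
```

Most template engines do this automatically for HTML body context; the extra nested encodings Tanya mentions only come into play when output lands in JavaScript, URL, or attribute contexts.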

Brijesh Ammanath 00:32:36 Okay, great. How does code review change when we adopt secure coding practices? Should a security professional be part of the code review process?

Tanya Janca 00:32:46 Ideally, because there are way fewer security people than there are software developers, ideally you’ve trained your software developers that are doing the code review on secure code review. So essentially you have some sort of secure coding guideline, or you give them some sort of guidance, and it’s these are the things that we want you to look for when you’re reviewing code. So you give them secure coding training, and I actually have a free secure coding course on the internet, and if we could link to that, that might be helpful. And it covers the 17 things.

Brijesh Ammanath 00:33:19 We’ll add a link to that.

Tanya Janca 00:33:20 Awesome. Basically, if you could give them a secure coding course and say, when you review code, look for these things. And even better if you could give them a checklist. I’m huge on checklists, and so all my courses have checklists because that’s how I like to work. And so if you can give them a checklist for when they’re reviewing code, then they know what to look for. And so as an example, whenever there’s input to a system, it’s like you need to check that there’s input validation and either escaping or sanitizing, and you need to make absolutely sure that it happens before you do anything with that input. So we don’t want to take the input, make our query to the database, and then validate it after. We must do it before we do anything else with it. And so going through and explaining to the people reviewing code, these are the things we want you to look for, and this is what it looks like when it’s good.

Tanya Janca 00:34:20 And this is what it looks like when it’s bad. Because if you think about it, if they don’t know what it looks like when it’s bad, it would be easy to miss. For security controls, bad looks like missing, in the wrong place, or incorrectly implemented. Missing is the most common, where someone has not implemented, let’s say, an anti-CSRF token; they just haven’t done it at all. Or they’ve implemented it, but incorrectly. I’ve seen an anti-CSRF token being passed manually when, for instance, .NET does it for you, so there’s just no need for you to also pass one. You need to validate it, but you don’t have to manually create one and pass it. It does it for you, which is awesome. Good job, .NET. A bunch of frameworks do it and a bunch of them don’t, right? So you make sure to show: this is what it looks like in .NET when this happens, and this is where you should validate it.

Brijesh Ammanath 00:35:22 Sorry to cut you off, Tanya, but what’s an anti-CSRF token?

Tanya Janca 00:35:26 Yes, I’m so sorry. CSRF stands for Cross-Site Request Forgery. When we perform a transaction on the internet, we want to also pass a token back and forth. It sounds weird, but it can totally be in clear text; it doesn’t even matter, it’s just a random value. We pass it back and forth, and when we do the final transaction, we check that the anti-CSRF token is still correct, that they’re giving us the right token. And we do this because of phishing. I don’t know about you, but I’m currently logged into Amazon and probably a ton of other sites that I use regularly. And I’ve clicked “remember me” and all of that, because I trust my own computer and my home network. But what if I clicked on a phishing link that was set up to buy a great big TV and send it to you instead of me, right?

Tanya Janca 00:36:21 So I click on this phishing link that you sent me (you’ve become evil you, by the way, in this scenario). You send me an email, I’m having a bad day, I don’t think, and I click on the link. When it goes to Amazon.com, Amazon says, hey, where’s your anti-CSRF token? And you aren’t going to have it as the phishing person, right? Because it’s stuck in my browser going back and forth. And then Amazon can tell this is a CSRF attack, and the transaction does not go through. Whereas on my computer where I’m logged in, I have the anti-CSRF token. And if for whatever reason it needs to refresh, or it’s expired or whatever, it just says, hey, is this actually Tanya? And I re-authenticate, and then it lets me buy my theoretical giant television. So there are several frameworks that will do that for you and several that do not.
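The token dance Tanya describes can be sketched roughly like this in Python. This is illustrative only; as she notes, frameworks such as .NET do this for you, and the plain dict here stands in for real server-side session storage:

```python
import hmac
import secrets

# Minimal anti-CSRF token sketch (synchronizer token pattern).

def issue_token(session):
    token = secrets.token_urlsafe(32)  # random value; it can travel in clear text
    session["csrf_token"] = token      # remembered server-side for this user
    return token                       # embedded in the form as a hidden field

def check_token(session, submitted):
    expected = session.get("csrf_token")
    if not expected or not submitted:
        return False  # a forged cross-site request arrives with no token
    return hmac.compare_digest(expected, submitted)  # constant-time comparison
```

A phishing page can make my logged-in browser send my cookies to the site, but it cannot read the token embedded in the site’s own pages, so `check_token` rejects the forged transaction.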

Tanya Janca 00:37:15 And so first of all, informing everyone: yes, it does this for you, so don’t worry about it. Chill out, you’re all good, you don’t need to review for that. Or: it does do it, but you need to do the final check on the backend. For instance, there’s a lot of really cool JavaScript front ends that will create one and pass it to you, but if you’re not validating it on the other end, there’s no protection, right? So telling the people doing the code review these things, and where this would happen, and what it would look like; that’s what I find is best. So secure coding training, essentially. The way I teach: we talk about a thing, I give a lot of examples, and we look at some of the syntax. But then I say, here’s some code, and this code is bad, and I want you all to tell me exactly why it’s bad. And usually it’s missing something, or it’s in the wrong place, or I’ve done a terrible job, or whatever, right?

Tanya Janca 00:38:10 And then I’ll improve it: okay, so this code’s better. Why is it better than what we saw? And then sometimes I say, this code’s the best code, and usually I’ve incorporated multiple things that we’ve learned at this point into it. And I ask: what’s good here? Am I missing anything? Why is this code the best of the three, right? Doing that review together and talking about it, it sounds weird, but we’ll go through and highlight things, and I keep asking, but why? I’m super annoying with the why question, because they know, and I know, but I want to know that they know. And so we have a discussion. Even if you’re in the class and you didn’t know why, you hear your colleague hit that light bulb and go, oh, because we took it and then we used it and then we validated it.

Tanya Janca 00:39:00 Oh crap, that’s what we did in the wrong spot. Yeah, we have the right security control in the wrong location. And then we go through, and of course at the end it’s in the right location, right? So I feel walking through and discussing code review can really help. And also, to be quite blunt, using code review tools. Conflict of interest alert: I work at a company that sells a static analysis tool, but all static analysis tools are very helpful. You can use a static analysis tool to help you look for implementation issues, like where you’ve incorrectly implemented a security control. It will also help you see a lot of places where you’ve missed a security control. And most of them, or at least half, will allow you to write your own rules that you can put into the tool.

Tanya Janca 00:39:55 And so they’re usually called custom rules. Some marketing teams are calling them secure guardrails. But basically if you have a secure coding guideline and the stack analysis tool isn’t picking up all the things you want it to pick up, you can write your own rules to pick up the things that you need it to do. So often the security team does this, but the Devs can do this too, right? because they’re just writing patterns and Devs are amazing at patterns. And so basically you can do this to enforce anything in your coding guideline. So that could mean we all use camel case, no one uses snake case. It could mean we name our variables this way, or we all use the security header and if we’re not using it, I want it to flag it. And so you can write rules and kind of customize things for yourselves, especially if you are using a language that doesn’t have a great rule set. So like Elixir or something where maybe your SaaS provider only has 10 things at checks, but there’s way more that you want it to check. Or C and C++. A lot of SaaS tools aren’t really strong in that area. And so you could write your own usually with the help of the security team. But there are developers that are, get out of my way, I’ve got this. So it depends. But I find manual code review partnered with automated or basically static analysis, you’ll get the absolute best results, definitely

Brijesh Ammanath 00:41:26 Perfect. The SAST tool discussion allows us to nicely move on to the next phase, which is around testing. What are the key types of security testing that should be included in the SDLC?

Tanya Janca 00:41:38 Depending upon what your system does: performance and stress testing, which are not quite the same but are often done by the same person at the same time, just making sure that you can handle a huge load and perform well under heavy loads, because availability is really important to the security team, and well, everyone. It’s important to everyone. Although technically people don’t usually consider that a security test, I consider it a priority for the security team, depending upon what the system does. I would say doing some sort of final static analysis check, making sure that there are no obvious security bugs. And I scan my code for secrets. A secret is something that a computer uses to authenticate to another computer: an API key, a hash, a certificate, a password, a connection string. There are many, many types of secrets, but it’s computer to computer instead of human to computer.

Tanya Janca 00:42:37 And so I scan my code for secrets, because I don’t believe secrets should be in code. I believe they should be in a secret management tool or another place that’s safe. Some frameworks offer you basically a secret store, a safe place where you can put it and access it programmatically, but most of them don’t, and a secret management tool can help with that. So I scan for secrets because I don’t want to give my secrets away. I would also do linting for code quality. I don’t consider a linter technically a security tool; however, if you are ensuring you have good code quality, you’re building a better, more reliable application, and that generally also means better security. So I am very pro linter. And then dynamic analysis. There are several different types of dynamic analysis tools.
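A secret scanner mostly boils down to pattern matching over the code. A minimal sketch, assuming just two illustrative patterns; real scanners (gitleaks, trufflehog, and the like) ship hundreds of tuned rules plus entropy checks:

```python
import re

# Two example secret patterns: an AWS access key ID and a password embedded
# in a connection string. These are illustrative, not production rules.
SECRET_PATTERNS = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "connection-string-password": re.compile(r"[Pp]assword=[^;\s]+"),
}

def find_secrets(text):
    """Return (pattern_name, matched_text) for every suspected secret."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Running something like this on every check-in, or in a pre-commit hook, catches a secret before it lands in the repository history, where it is much harder to remove.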

Tanya Janca 00:43:31 Dynamic analysis means your app or your API or your serverless function or whatever is running. It can be on a Dev server or a test server somewhere, but it’s running. These tools interact with your app live, and they can make a mess, so usually the security team runs them. An example would be Burp Suite or Zap. There are also tools that are specific to APIs, because a lot of the super automated DAST (Dynamic Application Security Testing) tools really suck with APIs. They’re good with a big monolithic web app, but when it comes to a microservice architecture they get really lost, or with a SPA, a Single-Page Application, they’re just terrible. So you would want to use something more specific for an API, and I don’t know of a good dynamic tool for SPAs yet.

Tanya Janca 00:44:24 Then, depending upon the system and the budget, if you can, have a penetration test done. That’s where a security expert comes and interacts with your application live. They usually use something like Burp Suite or Zap or both, plus a whole bunch of other tools, and they will manually test your app. They’ll have scripts run, they’ll try to brute force things, they’ll fuzz every input. Fuzzing is really important. Fuzzing is where you test every single part of the input validation of every single field. I remember the first time I saw a fuzzer run: it put the letter A into the field, and I thought, okay, this is pretty boring. Then it put 50 of the letter A, then 500, and then 5,000 of the letter A. And it goes through and tries all these special characters and sees what it can get.

Tanya Janca 00:45:18 And then it tells the tester: I put these characters in and it acts weird; please go destroy this app. You use this information to eventually create an exploit, and you figure out where there are flaws in the input validation. If you are doing proper validation with an allow list, and you’re doing it on the server side, the fuzzer won’t get anywhere. But almost everyone that has errors uses a block list, or they’re doing it in the front-end JavaScript instead of on the backend where they’re supposed to, or they’ve made a mistake and put it in the wrong place, and then the fuzzer will show you your errors. It’s really a powerful tool, but it can make a gigantic mess, so generally the security team runs dynamic tools, including fuzzers, if you can.
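The fuzzing loop Tanya describes, growing runs of the letter A plus special characters, can be caricatured like this. The block-list validator is a deliberately buggy stand-in built for the example, and real fuzzers are far more sophisticated:

```python
# Toy fuzzer sketch: throw a fixed payload list at a validator and report
# what it accepts. Suspicious accepted payloads are the findings.

def naive_validator(value):
    # Deliberately incomplete block list: rejects quotes, nothing else.
    # This is the kind of validation a fuzzer exposes.
    return "'" not in value and '"' not in value

PAYLOADS = ["A" * n for n in (1, 50, 500, 5000)] + [
    "'; DROP TABLE users;--",
    "<script>alert(1)</script>",
    "%00",
]

def fuzz(validator):
    """Return every payload the validator accepted."""
    return [p for p in PAYLOADS if validator(p)]
```

Here `fuzz(naive_validator)` accepts the `<script>` payload because the block list never considered angle brackets, which is exactly the class of error an allow list on the server side would have prevented.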

Tanya Janca 00:46:12 So this is a weird one. It’s called testing, but I wouldn’t put it in the testing phase. You put it out into production, or you put it in during all your tests and then again in production. It’s called IAST, Interactive Application Security Testing. It’s a binary that goes up inside of your application and does static and dynamic analysis as your app runs. But it only works if your app is being actively used. If you have it in your app just on the Dev server, well, I don’t know about you, but I don’t do super thorough testing on the Dev server. I’m kind of kicking it around and playing with it a bit, but it’s not the same as having 2,000 users on it every day, right? So you generally deploy it during a penetration test and QA testing and then in production, and it tests your app from the inside out.

Tanya Janca 00:47:05 IAST is quite expensive and causes a bit of latency. And it is a ton of work to install; installing it is so complicated it has its own name, instrumentation. So generally I only see IAST at banks or really super mission-critical systems where there’s a lot of money involved. I would say maybe 1% of all my clients use IAST. But it’s still really cool technology; it’s very interesting, let’s be clear. So those are the types of tests that I want to do, manual testing and automated testing. Oh, and I missed one, oh my gosh: I want to secure my supply chain. There are two things I would do. One is use a Software Composition Analysis (SCA) tool to check all my dependencies and see which ones have vulnerabilities in them.

Tanya Janca 00:48:00 And then ideally it also checks: if I have a dependency and it has a vulnerability, does my code call the vulnerability? Is it reachable from within my app, or is there no path in the code that ever gets there? If it’s not reachable, I might fix it later. If it’s really, really high risk, then I might fix it quickly. But generally, if it’s not reachable, I’m not that concerned. Yes, it’s theoretically a time bomb in your app, but if you have a math library, are you doing every single type of math? Are you doing derivatives and calculus and geometry? Probably not, right? And if you are doing geometry and the vulnerability is in, I don’t know, the calculus area, your app’s not going to suddenly need to do calculus, probably. So if it’s not reachable from anywhere in your code, it’s not usually exploitable, and then I just leave it.
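The reachability idea can be sketched with Python’s `ast` module: given a known-vulnerable function in a dependency, check whether our code ever calls it. The `mathlib` module and function names are hypothetical, and real SCA tools build full call graphs rather than scanning one file for direct call sites:

```python
import ast

def calls_function(source, module, func):
    """Crude reachability check: does `source` directly call module.func()?"""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            callee = node.func
            if (isinstance(callee, ast.Attribute)
                    and callee.attr == func
                    and isinstance(callee.value, ast.Name)
                    and callee.value.id == module):
                return True
    return False
```

If the advisory says the flaw lives in `mathlib.derivative` and this check comes back False across the codebase, the finding can usually be deprioritized, which is Tanya’s point about the geometry app never needing calculus.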

Tanya Janca 00:48:56 But the other thing for securing your supply chain: ideally, as part of the requirements phase of your project, there’s a checklist for your supply chain. These are the security settings that we want for our CI, these are the security settings that we have for any sandbox area, these are the security settings or the rules for releasing code in the CI, here are the people that have approvals, here are the people that are notified, et cetera. People forget, but it took you a while to set up your IDE just right. Backing that up, or even just writing down the plugins you have and would want to use if your laptop got ransomware and you had to set everything up again; just knowing that, and being able to set everything up again very quickly, is really important.

Tanya Janca 00:49:46 But you would probably just need to do that once for your supply chain for the project; just make sure that you’re following all the policies or the rules or the checklist, whatever it is that your organization does. But for software composition analysis, I would run it every time I check my code in, just in case I’ve unfortunately upgraded a dependency to something that’s not secure, or a new vulnerability has been found since the last time I checked in, and oh, this is not very good, I should do something.

Brijesh Ammanath 00:50:18 That’s quite an exhaustive list. So you’ve covered manual and dynamic and automated tests. You’ve covered performance tests, secrets using of linter, you’ve covered SAST, DAST, IAST, and supply chain securing the supply chain as well.

Tanya Janca 00:50:35 I’ve done a lot of security testing in my life.

Brijesh Ammanath 00:50:40 I do have a ton of questions on each of them, but we won’t be able to cover all of that. But in terms of tools which actually run on production, say IAST: does that not impact the performance of the system? Don’t users see degradation when you’re running the test?

Tanya Janca 00:50:56 For IAST? There is latency, there absolutely is. And do users see it? The latency, of course, according to the people that make IAST, is very small; I would say that’s something you really need to validate for yourself. With all of these systems, or all the security testing tools anyway, you can turn off a bunch of tests if you want to, so they go faster. All of them are designed that way, knowing Devs want to move fast. And the security team wants you to be able to move fast too, or I would hope any decent security team knows that’s a priority; because it’s the developers’ priority, it should be their priority too. So with IAST, or anything that you want to test in production, quite often you can just remove a lot of tests that you don’t think are that important if it’s going too slow.

Tanya Janca 00:51:52 I also often suggest testing in off hours if that’s a possibility. I used to work for the Canadian government, and although Canada has five time zones, because we’re ginormous, there are still many hours per day where theoretically no one, or almost no one, is at work, right? So we would schedule as many things as possible to run during that time. But if you are, for instance, running an online marketplace, it needs to be open all the time, probably, right? And so then that’s a lot more difficult. But yes, you’re right, it totally could cause latency, and that’s one of the reasons that IAST is not as popular and is used so rarely. I would say though, no matter what, if you are going to have a production system that has any importance to you, I’d want to have monitoring and logging turned on. And although that does cause a small amount of latency, I want to know that my app is down before anyone else knows. I don’t want my customer to call me and tell me it’s down. I want it to already be back up before they get through on the phone.

Brijesh Ammanath 00:52:56 Yeah, makes a lot of sense. Also, can you expand on any security considerations developers or the team should think about post go-live, in terms of maintenance and continuous improvement?

Tanya Janca 00:53:09 Yes, this is a weird one, because when I go to do application security at different places, I like to spend 50% of my time on apps that are already in prod, which I call legacy. I do not mean to offend, just to be clear; I know if your app came out six months ago, you don’t feel it’s legacy, but I have to have a name for it. So whatever you want to call that, let’s say I’m calling it the same thing as you. A lot of workplaces say, no, just focus on the new apps. But most organizations, unless they’re a startup, have more apps in prod than they’re currently developing, right? And with older applications, we knew less about security when they were developed. Unless they’ve had a big update or a refactor or rewrite or a lot of security attention, they’re often not in a great state.

Tanya Janca 00:54:01 And so I try to spend half my time on those, and I try to set up automated testing on all of them. An easy thing you can do on your code repository is set up a static analysis tool, a secret scanner, and a software composition analysis tool, and set them to scan every Sunday or whatever day works for you. They can’t hurt anything because they’re all static; they just need read-only access to the code. Then just go check the reports every Monday, right? That would be one thing you could do. We do this because the tools get updated with new types of tests; the tools are learning. And we do this because software ages very poorly. The longer it is out in production, the longer there is a chance for a malicious actor to figure out something wrong with it, right?

Tanya Janca 00:54:53 You could set up dynamic testing. Pen testers always say it must be production, or the test isn’t perfectly accurate. But I gently disagree; I’d rather have a pre-prod or staging environment that is a perfect mirror of production, except there’s not as much power behind it, right? So the performance isn’t as good because it’s staging, which is fine, but if every other thing matches, which I feel it should, then you can do a fantastic test there. So run dynamic tests there maybe once a month, or more if you have the cycles; you can automate them to run regularly. With dynamic testing, there are API tools that can just run all the time, checking the requests and responses to the APIs and telling you if they see something disconcerting. So I would like to have a lot of automated security testing happening, but on top of that, I need logging turned on.

Tanya Janca 00:55:53 I talk at length in both my books about logging, because I’ve had to do incident response to security incidents at a lot of places I’ve worked. And if I get there and there are no logs, or really not very good logs, there’s no evidence for me to press charges, no evidence for me to figure out what happened, no evidence for me to figure out how to prevent this from happening again. It’s just like when you’re trying to troubleshoot something: if there are no logs, how am I supposed to troubleshoot this? It’s very similar, except I can’t even debug it, right? Because it happened in the past, it’s not like I can put a ton of breakpoints in the code and run it and see what happened. If there are no logs, I’m literally completely unable to investigate.

Tanya Janca 00:56:42 And so logging’s really important. So if we have monitoring, turn on, we find out if our system, hopefully we find out if we’re being attacked, we find out if our system’s down, we find out if our system’s struggling, with logging, we can go and investigate, see what’s happened. And some, sometimes it’s just a coding problem, right? It’s a regular bug, it’s not a security attack. That’s fine. I still want to know. I still want us to be able to fix it and have visibility there. On top of that, on all of those are some newer tools called observability tools and they help us investigate and they are super nifty observability focus on, let’s detect what’s happening right now, where logs are, what happened in the past, right? And observability focuses on, so I am detecting an incident happening, right? An attack is happening right now so that you can take action right now if you have a cloud provider and your apps are in the cloud, you can also have the cloud detect certain things.

Tanya Janca 00:57:46 I believe Azure calls it threat protection. You can create a logic app, and with that then call a serverless app or instruct the cloud to take certain actions. This is more advanced, and this is something the security team would generally do, but: if you detect something that looks like injection, send an email or phone the security team immediately and block that IP address permanently. Or, this looks like a DoS attack, a denial of service rather than a distributed denial of service attack, which is much more difficult to respond to. We’re seeing this one IP with a ton of traffic, so we’re just going to block it right away. No legitimate customer is going to behave that way, so we feel confident automatically blocking it and notifying the security team.

Tanya Janca 00:58:38 Those are things the security team would generally set up for you, but ideally they’re going to talk to the developers about them, because they don’t want to break stuff. I really don’t want to be the security team that is the threat to availability, right? That’s a bad look. So ideally they’re going to ask advice and guidance from the developers and work with them on these things. So: logging, monitoring, and if you can, have your app send alerts as well. I talk about this a lot. When you get to, for instance, the global exception handler, that means all your tries and catches have failed, right? Everything has gone wrong. If the global exception handler gets called, maybe there should be an alert that goes to the Dev team that says, hey, the global exception handler got called; maybe you need to figure out what went wrong here and look into this.

Tanya Janca 00:59:29 Or maybe someone has tried to log in 10 times in under one second. That seems very wrong, and maybe an alert should be sent. And this is, again, something the security team would work on with you: when you would want to trigger an alert, and where that alert goes. Is the alert an email? Is the alert a phone call? Because I didn’t know the cloud could phone you. Now I know, because when I worked at Microsoft, Azure phoned my boss to tell on me for checking a secret into production. However, I checked a pretend secret into production so I could make a demo of what you’re not supposed to do. But Azure reacted and phoned my boss, and my boss was like, whoa, did you know Azure could make phone calls? I did not.

Tanya Janca 01:00:15 He’s also, what the heck are you doing? And I explained and then we made fun of Azure. But anyway, I feel the security team would work with you on these things. And so what does an alert look like? Does an alert go to your Security Information and event Management system, your SIM? If so, what format does that look like? Does the SOC, the Security Operation Center know what this alert means and know what to do? So I feel this is different for each organization, but I like it when an app can call for help when it needs it.

Brijesh Ammanath 01:00:50 Yep, makes sense. I think we have covered, or double-clicked into, each of the phases within the SDLC and seen what specific security measures should be considered in each of those phases. Are there metrics or KPIs, Key Performance Indicators, that teams can track to ensure security is integrated effectively? How do they measure success?

Tanya Janca 01:01:11 Oh, I love this question. I’m a big fan of metrics and gathering data and then using data to improve. Generally, when I run an AppSec program, or I’m part of one, we choose a specific security posture that we want to be at. Different apps have different risks and therefore need different postures. By posture, I mean how secure it is: how tough and rugged it is, how many tests we’ve done, how many layers of protection we’ve used. For instance, I did counter-terrorism at one point in my career, and we did every single thing you can think of. And when I was the CISO for the election in Canada, we did every single thing you can think of at least twice, literally twice. But I’ve also written apps that don’t need very much of anything. The super famous example I use is from when I used to run a lunch and learn program.

Tanya Janca 01:02:08 I ran a community of practice for my dev team for many years, and it got very popular, and eventually we streamed it across the Canadian government to all 70,000 software developers. And we just had this little web app with the schedule. It is very low priority if it goes down; it is not important, the data inside is not important, and the system was not connected to other systems. It was just a hard-coded database with what I put into it; no one else accessed it, and it was just select statements, right? So the risk is low, and I don’t need to do a bunch of security testing on this; this is fine, right? And it was just within my governmental department, so only 2,000 people could see it, et cetera, et cetera. The risk is just so low, right?

Tanya Janca 01:02:52 So I would say that I create goals for my program and certain security postures for each system, and then I measure myself against those. My first goal every time I start somewhere is to do an inventory of all my web apps and APIs and serverless apps. I need to know where the code is, where it lives in every environment, what team it belongs to and how to contact them, a brief description of what it does, and its sensitivity rating. Usually I rate one to three or one to four: this is a four, I need to do the works; this is a one, I don’t have to do very much. And then any documentation, just links to documentation. If I can figure out how it fits into the larger architecture, that’s even nicer. But the main thing is just doing an inventory.

Tanya Janca 01:03:41 Then I want to be able to run whatever scanners I have on 100% of those apps and look to see which ones are in a bad state. I prioritize them, and I figure out what state I want them to be in. That is the start. Then I take all of those results and shove them into Excel, because Excel’s the best security tool ever made, Excel and browsers. And I mash all that data up and figure out what our top security concerns are, the mistakes we keep repeating, and I educate on those immediately. I tell all the Devs, I’m really worried about these two or three or four things, and I start to try to get action on those big things immediately. And after I do that for 90 days, I remeasure everything. So: yes, I did complete the inventory, or I’m half done, or whatever.

Tanya Janca 01:04:30 I have rated the apps, or I have not. And especially when you re-scan three months later, instances of the things I’ve been educating on went down, or they’re the same, or it’s worse, in which case I’m a total failure. Usually they go way down. Then I can see: okay, this is where I’m at, this is how much traction I can get with the developer teams right away, this is how close I am to a security posture I feel is responsible and reasonable for our organization. And then I set better goals. That’s just my crash first 90 days when I start somewhere; I came to that over many years. But if you already have a security program, your goal might be: all the Devs hate our static analysis tool. This happened to me. I went somewhere and we’d signed a three-year contract with a big company, and all the Devs had disabled the tool everywhere.

Tanya Janca 01:05:24 They hated it, and they’d had bad experiences with it, so it didn’t matter that I could implement it in a new way that was nicer. They were just: we hate it, no. So I ripped it all out and did proofs of concept with a bunch of other ones, and we found one that they liked, and I rolled it out everywhere. That was my project for 90 days: how well am I doing against this project? And dev feedback was part of my rating of myself and my project. Are they satisfied with this new tool? Are they using it? When I started seeing them use it without me, I was just, oh my gosh, oh my gosh, it’s working. So I feel your security team needs to set goals and then measure against those goals, as opposed to: oh, last quarter we had 200,000 vulnerabilities and now we have 199,000 vulnerabilities.

Tanya Janca 01:06:18 Are those vulnerabilities even a concern? Just because some automated system picked something up, it doesn’t actually mean it causes business risk, right? I met with a company a few weeks ago, and they asked, well, how many bugs per app is reasonable? Are they even really bugs? They said, we don’t have time to look at that. I’m like, well then, we have a problem. If you don’t have time to even look at it, why would you want a Dev to take the time to fix it?

Brijesh Ammanath 01:06:50 Excellent. We have covered a lot of ground here, but before we wrap up, Tanya: what’s one piece of advice you’d give to developers or teams looking to get started with a secure SDLC today?

Tanya Janca 01:07:01 I have two pieces of advice, and one is really cheap. If you are going to look up how to do something online (this is just general advice), look up how to do it securely, because whatever is rated at the top on any website ever is the least secure way to do it. It’s unfortunate, but it’s extraordinarily common: if something is at the top on Stack Overflow (and I love Stack Overflow), often all the security features have been turned off in order to make it work in every instance. So please look up the most secure way. Now that I’ve gotten out the advice I really want people to know, I would say, and I’m quite biased, that I have a class that I made that is free and online, that we can link to, that will teach you how to build your own secure system development lifecycle.

Tanya Janca 01:07:50 And it’s completely free. There’s no upsell. The idea is that I got some grant to host all my courses for free as part of the acquisition deal, because that’s what I wanted was for them to be free. Because I want people to have more secure SDLCs. And so it’s called Application Security Foundations, and it will teach you about every single step that you can do. And then it helps you build your own program. And I was teaching that live to companies and helping them build their programs as a Consulting gigs. And then I was like, how can I make this so everyone can do it themselves? How can I teach a person to fish? And so it starts off with telling you all the different activities that exist, all the different types of tools that exist, all the different parts of your program that you could have.

Tanya Janca 01:08:39 And then as you learn each one, it’s like so how would you apply this where you work and what would make sense for your org? And then you learn about policies. So what policies could support these things? What guidance could we give? How could we teach developers about this, et cetera, et cetera. How can we scale this program in the most effective way? And it builds and builds on your program over the three courses, and every single course is free in the academy. There’s no charges. And the idea is that at the end you have this nine-page plan to launch a full AppSec program or to improve upon the program that you have. And I did that because I really want everyone to build better software. I just do. And so, you could start by taking that class, but if you don’t want to take a class, that is okay.

Tanya Janca 01:09:29 I would start with creating a secure code guideline. Think about the coding that your organization does and start with that. If you have no guidance for developers whatsoever, a coding guideline can really help. And you build it and then you get feedback, and then you update it and then you get more feedback and then you update it because your first copy, trust me on this is not going to be great. I know I’ve built some not great ones and I’ve worked and worked and worked to create better and better. And once you have it, and people agree it’s pretty good, you want to teach it, you want to socialize it and make sure that everyone at your organization knows it exists. They know where to find it. And ideally, you’ve literally taught it to them. That would be the absolute best. That has been a large part of many of my AppSec jobs, is coming up with a guideline and teaching it so that developers know what we want from them. And the guideline can include, we use the SaaS tool, or this is the secret scanner, or what whatever tools you expect them to use. It could just be four things to start. If that’s all the traction that you think you can get, that’s okay, but you really, really, want to start somewhere and that might be a good spot.

Brijesh Ammanath 01:10:43 Perfect. Thank you, Tanya for coming on the show. It’s been a real pleasure. This is Brijesh Ammanath for Software Engineering Radio. Thank you for listening.

Tanya Janca 01:10:51 Thank you so much for having me.

[End of Audio]
