Chad Michel, Senior Software Architect at Don’t Panic Labs and co-author of Lean Software Systems Engineering for Developers, joins host Jeff Doolittle for a conversation about treating software development as an engineering discipline. They begin by discussing the need for engineering rigor in the software industry. Chad points out that many developers lack awareness of good engineering practice and are often unaware of resources such as the Software Engineering Body of Knowledge (SWEBOK). Among the many topics explored in this episode are design methodologies such as volatility-based decomposition and the work of David Parnas, as well as important topics such as quality, how to address complexity, designing for change, and the role of the chief engineer.
This episode is sponsored by ClickSend.
SE Radio listeners can get a $50 credit by following the link below.
From the Show
- Guest twitter: @chadmichel
- Lean Software Systems Engineering for Developers
- Software Engineering Body of Knowledge (SWEBOK)
- Righting Software by Juval Löwy
- Code Complete by Steve McConnell
- On the Criteria to Be Used in Decomposing Systems Into Modules by David Parnas
- Volatility Based Decomposition by Juval Löwy (video)
- Framework Laptop
- Exploring Requirements by Gerald Weinberg
- Design Stamina Hypothesis
- Andon Cord (wikipedia)
- W. Edwards Deming (wikipedia)
- Conway’s Law (wikipedia)
- Information Hiding (wikipedia)
- Don’t Panic Labs
From IEEE Computer Society
- Is software engineering really engineering?
- An introductory software engineering course for software engineering program
- Professional Engineering and Software Engineering
- Putting the “Engineering” into “Software Engineering”
- Teaching systems engineering to software engineering students
- Architecture-centric software engineering
- Software Engineering: A Profession in Waiting
- Envisioning the Future of Software Engineering
- Lean Software Startup Practices and Software Engineering Education
- Teaching Complex Software Engineering Concepts through Analogies
- Exploring the Role of Creativity in Software Engineering
- Design Engineering: A Curriculum on Design Thinking
- Design Thinking
- Decoding Software Design
- Design Thinking in Practice
From SE Radio
- Episode 520: John Ousterhout on A Philosophy of Software Design
- Episode 518: Karl Wiegers on Software Engineering Lessons
- Episode 470: L. Peter Deutsch on the Fallacies of Distributed Computing
- Episode 407: Juval Löwy on Righting Software
- Episode 359: Engineering Maturity with Jean-Denis Greze
- Episode 331: Kevin Goldsmith on Architecture and Organizational Design
- Episode 132: Top 10 Architecture Mistakes with Eoin Woods
Transcript brought to you by IEEE Software magazine.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.
Jeff Doolittle 00:00:47 Welcome to Software Engineering Radio. I'm your host, Jeff Doolittle. I'm excited to invite Chad Michel as our guest on the show today for a discussion about systems engineering for software developers. In 2000, Chad graduated with a degree in computer engineering. He then graduated from the University of Nebraska-Lincoln in 2003 with a Master's in Computer Science. After college he worked for multiple software companies before ending up at Don't Panic Labs. Chad is a lifelong Nebraskan. He grew up in rural Nebraska and now lives in Lincoln. Chad and his wife have a son and a daughter, and Chad is the co-author of Lean Software Systems Engineering for Developers, the book that we'll be talking about on the show today. Chad, welcome to the show.
Chad Michel 00:01:26 Hi, how’s it going?
Jeff Doolittle 00:01:27 It’s going great. Glad you’re here.
Chad Michel 00:01:29 Thank you.
Jeff Doolittle 00:01:30 Well let’s dive right in. The book starts out with a distinction between software engineering and software development. So, give us your perspective on what distinguishes those two concepts.
Chad Michel 00:02:28 You know, I really wasn't around the rigor of software engineering early on. The number of people in the software industry that have heard of SWEBOK and actually know what it means probably isn't that many. It's just because they're not really thinking about things as building software systems or doing software engineering. We sometimes use an analogy: in some fields there's the physics part of it, and then there's the engineering, like the civil engineering that's using that physics. Computer science is kind of like that science part, but we don't really focus enough on the engineering part: the rigor about how to use what we've learned from computer science to actually build real software systems that achieve what our customers want. It takes a lot more to do that. It takes a lot more rigor, a lot more practice, and a lot more discipline than we typically see in software development. We can all see how much discipline it requires; just try to get software developers to task out their work in any sort of tool, Azure DevOps, Jira, whatever. You see it takes a lot of effort just to get that rigor around: we want to build things, we want to know whether we're going to be done on some sort of schedule, when are we going to be done? All of those things are part of that rigor around software engineering.
Jeff Doolittle 00:03:41 Yeah, so you've used the word rigor a few times, which I think is clearly a critical piece of this, and I like that distinction you made between the science side and what you might call the applied side. And of course it's a scientific rigor as well. I don't think you mean to say that there's no rigor there, but a different rigor. It's an engineering rigor. Am I understanding that correctly?
Chad Michel 00:03:59 Yeah. And going back to that metaphor, and I don't know who this metaphor particularly comes from, but the scientist builds so they can learn something, while in the engineering fields we're hopefully learning how to build things, really that applied piece of it. And there are some good resources out there. I mentioned the Software Engineering Body of Knowledge, SWEBOK. Our book helps with that. There's Juval's book, Righting Software; I think that's a great reference. I've always loved Steve McConnell's Code Complete. That's a book that I'm always trying to shove down people's throats even though it's a little old at this point. There are a lot of good sources out there, but it is kind of a soft area that's not really talked about nearly as much.
Jeff Doolittle 00:04:41 Yeah, well in my experience sometimes the old ways are best. I mean, in 1972 David Parnas wrote his classic paper with a wonderfully long title, On the Criteria to Be Used in Decomposing Systems Into Modules. And yet for 50 years now I think we've still been kind of dancing around what he discovered. Does your experience gel with that?
Chad Michel 00:05:03 100%. I think maybe a good test of whether someone's leaning down that path of software engineering versus software development is just: are they aware of that paper?
Jeff Doolittle 00:05:12 Yeah, if they know who David Parnas is.
Chad Michel 00:05:14 Yeah, if they know who David Parnas is, there's a good chance they're headed down the right path. Obviously an engineer, he didn't come up with a really good, clean, short name for it. With maybe a little bit of a marketing person, we all would've gotten there faster.
Chad Michel 00:05:26 Maybe too much of an engineer, like many of us. I think we are pretty good at coming up with a long, very descriptive name. But just having that kind of reference for how to break things down, how to decompose things in a way that we can live with for a long period of time. That's one thing we're very focused on as an organization at Don't Panic Labs: we want to build things that we feel we can live with for a long period of time. People aren't building bridges just so they can have the bridge for two weeks and then throw it away. They're building bridges for a long time. We want to build our software systems so we can live with them for a long time. None of us are probably going to build things that last as long as some bridges that are out there.
Chad Michel 00:06:03 But we want to shoot for more than building a system for a year or two and then throwing our software away. We want systems that live for a long time, and part of that is that engineering rigor, that discipline. And back to your earlier point, I do think those sciences are very rigorous. I wasn't calling them not rigorous; there's just a particular rigor to being a software engineer. As an organization, and myself, and I know Doug, who I wrote this book with, we're very focused on wanting to elevate that game for software professionals. We want to see ourselves more as engineers, software engineers, as opposed to just people who get some requirement, throw something out on a webpage, and make it look like it works. We want to move as an industry, I think, from that to something that's better seen as a true engineering discipline that can actually have predictable outcomes.
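The information-hiding idea behind Parnas's paper can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the episode or the book; the OrderStore names are invented. The point is that a volatile design decision (here, how orders are stored) hides behind a stable interface, so the decision can change without disturbing callers.

```python
from abc import ABC, abstractmethod

class OrderStore(ABC):
    """Stable interface: callers depend on this, not on how orders are stored."""
    @abstractmethod
    def save(self, order_id: str, data: dict) -> None: ...
    @abstractmethod
    def load(self, order_id: str) -> dict: ...

class InMemoryOrderStore(OrderStore):
    """The hidden decision: orders live in a dict for now.
    Swapping to a database later changes only this class, not its callers."""
    def __init__(self) -> None:
        self._orders: dict = {}
    def save(self, order_id: str, data: dict) -> None:
        self._orders[order_id] = dict(data)
    def load(self, order_id: str) -> dict:
        return dict(self._orders[order_id])

def place_order(store: OrderStore, order_id: str, items: list) -> None:
    # Caller code knows only the interface, never the storage decision.
    store.save(order_id, {"items": items, "status": "placed"})

store = InMemoryOrderStore()
place_order(store, "A-100", ["widget"])
print(store.load("A-100")["status"])  # prints "placed"
```

The module boundary sits around the decision most likely to change, which is the criterion Parnas proposed instead of decomposing along the steps of a flowchart.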
Jeff Doolittle 00:06:52 Do you face any resistance to that idea? Because you could take what you described a second ago, and there are methodologies and movements that very much seem to promote: just start code slinging, throw things at the wall, see what sticks, and iterate toward your solution. So I'm curious if you face any resistance to the idea that we need rigorous engineering principles in the software industry.
Chad Michel 00:07:15 100%, we see resistance. Now, I will call out that internally at Don't Panic Labs I think everybody's pretty on board, but when we work with other organizations we can get some resistance. Part of it comes from this word agile. People will throw "well, that's not agile" at you. We get that occasionally when we're working with other organizations. We usually have to back them all the way up and explain that we want agility in our project, not necessarily to be something that's quote-unquote agile. We want agility, and part of that agility is actually having a plan, and we're not going to be able to have a plan if we don't take some time and think things through. So I do think there's a resistance there.
Chad Michel 00:08:00 There's also a resistance in that people really want to just get going. The very first thing people want to do is: we've got some idea, get fingers going on keyboards. And that's also kind of a trap, because once you start down a path you get a lot of inertia in one direction, and it can be really hard to move that inertia in another direction. That's why we really encourage people, before writing software, to do your designs on a whiteboard, sort things out on a whiteboard, maybe do some work in a Word document, or my personal favorite, a text file. I don't really care how someone does it; just don't go straight to code. Try to document the steps you're going to take to do things.
Chad Michel 00:08:43 What are the potential things you think you may run into? Are there any spikes or a little bit of research you need to do to prove you can do this? Get all those things documented. Document how these pieces are going to interact within the system as a whole. All of those things can be critical to knowing how we're actually going to deliver this software. But back to the original question, do we get resistance? Yes. And it all comes back to people really wanting to get everybody going. I think sometimes it's from the business side; they really want to get something stood up for a conference or something. I've run into some resistance where people ask, can we just not do testing this time? Those sorts of things do come up.
Chad Michel 00:09:24 Luckily, I will call out that we work with a lot of really good customers in the Lincoln ecosystem. Maybe we coach them a little bit, but I think in general most of our customers come on board pretty quickly with this idea. So I don't think we get as much resistance as some probably do. But I will say we do get resistance; it just isn't as bad as you would think. People love the idea of predictable outcomes, so if you can ever show them predictable outcomes, I think that wins a lot of people over as well.
Jeff Doolittle 00:09:53 Yeah, and that speaks a lot, I think, to the fact that at the end of the day most of the software we're building is supposed to serve some business purpose, and the language of the business is schedule and cost and risk. If you can't speak the language of schedule, cost, and risk, then you really are going to have trouble communicating with the business and actually solving their problems in an efficient and effective way.
Chad Michel 00:10:16 Actually, you mentioned schedule and cost. There was kind of a movement there for a while, and I don't honestly know if it's a big thing right now, but there's this notion of "no estimates," where they weren't even going to try to estimate things. And I'm a huge fan of estimates. Well, I don't necessarily love doing them, by the way. I'm not going to claim that I love doing estimates, but I do think that forcing yourself to do estimates is a really good check on how much you've really thought through the individual pieces. It forces me to look at the work with a more critical eye, or just look at it slightly differently: how would I build this, and how much time would it take? Those sorts of activities I think are a critical part of building software, and again, they really help to create more confidence and more success.
Chad Michel 00:11:03 You mentioned that you have to talk in that kind of language. Almost everyone has to make trade-offs. There are very few companies out there that effectively have unlimited budgets, and almost no one has unlimited time. They need this product to hit some market. They maybe don't need it exactly June 15th or some very particular date, but they probably need it within some sort of timeframe for a variety of reasons. So if you can't deliver it in that timeframe, they may literally need to make different choices as a business, and not being able to provide them any level of certainty is a huge risk to the business. And if it's going to end up costing 10x what they thought, they also need to know that as well, because they may make other decisions about what to do with that money.
Jeff Doolittle 00:11:44 Yeah. And they often don't find out until they've already spent the 10x, or they're halfway to the 10x. This reminds me of the old adage: if you fail to plan, you plan to fail. And it's similar to a related one from economics: markets fail, use markets. So I would say the same thing: plans fail, use plans. It doesn't mean you don't plan; it just means the plans are imperfect. But a good enough plan can be better than no plan at all.
Chad Michel 00:12:11 That reminds me of the famous Eisenhower quote: plans are worthless, planning is essential. There's a lot you gain just by the act of planning, and I don't think anyone ever assumes your plan is going to go perfectly. I think that's kind of a myth; sometimes people think when you're doing planning you're assuming it's going to go perfectly, and you're not. But it is going to give you a better shot of actually having some sort of win. It's actually giving you a target to hit, and you have done critical thought. A very big thing we talk a lot about, in the book and as an organization, is critical thought: actually thinking about what you're going to do. Just that act of planning can be huge, and thinking, can we actually achieve these goals?
Chad Michel 00:12:51 One thing I think we benefit from a little bit as an organization is that we do a lot of projects for outside customers, so they are almost demanding some of that critical thought from us, around resources, budget, and time. I think that in some ways makes it easier for us because we have to present that back to them. It's probably a little harder if you're an internal organization to make the argument, but internal teams have to make those decisions too. If we spend that time making those calls, they should as well. I'm not arguing they shouldn't. I just think in some ways it's easier for us because our customers are often demanding that of us.
Jeff Doolittle 00:13:23 Yeah, you have the natural forcing function, and I think as professionals we should expect the same rigor internally: treat our employer's money like it's our own money, and don't treat it differently just because it's somebody else's. I think that's an important thing. So let's continue on to the next section of the book, where you talk about complexity, and there are three kinds of complexity you discuss. Maybe you can dive into those a little bit with us: objective complexity, requirements complexity, and solution complexity. Let's talk about those three, and then let's dig into how software engineering helps address those different areas of complexity.
Chad Michel 00:13:58 Yeah, I'm going to hit what I think was the second one you said first.
Jeff Doolittle 00:14:01 Okay. Requirements.
Chad Michel 00:14:03 Requirements complexity. Agile tools, and we talked a little bit about this, really do some good things there and really help with that requirements complexity piece. I think as an industry we've improved there quite a bit. I think we still have a way to go, and if I look at us as an organization, that's one area where I want us to continue to improve. But that really focuses on improving that requirements complexity area. There's not been a lot in the industry really focused on how we manage solution complexity. Now, there are things out there. There's volatility-based decomposition, which I know you and I are both huge fans of, but there are other people trying other options there as well to rein in some of that solution complexity. Then there's also what we call objective complexity.
Chad Michel 00:14:45 It's this notion of what the goals for the customer are related to this work. We often try to document the big impacts our customer is trying to have and work backwards from those impacts: what are the measurables that would actually show we achieved those impacts with this software development work? It allows us to not lose sight of that, because it's pretty easy on a software project to get down to some sort of requirements list and think the goal of the project is to build those requirements. And it is, to a degree. From our perspective, if we deliver those, we've technically done what we said we were going to do with that customer, but that's maybe not why they came to us. It was that objective complexity: what were they really trying to achieve? It's good to have that, because every once in a while you may want to jump back in and look: are we actually trending in that direction, or have we for some reason lost sight of it?
Jeff Doolittle 00:15:38 Yeah, I think that's good too because requirements very often are duplicated or inconsistent or even contradictory, and very often there are also solutions masquerading as requirements. And so going back to ask the fundamental question, what objectively are we actually trying to solve, can help provide some constraints on your requirements, because if requirements run amok, you might end up building the wrong thing.
Chad Michel 00:16:02 And I loved your other example that's in there, one that I think every software developer or engineer has run into: two requirements that mean the exact opposite. You can't do both. I love those moments when you get to something and say, we can't do both of these, can we? That's always a great moment in requirements.
Jeff Doolittle 00:16:18 Yeah. And you usually learn something again about the nature of the business, which I think is again where software engineering sits, between software development and the business. Are we actually understanding the nature of what we're trying to accomplish here? Or are we just code slinging and crossing our fingers and hoping that before the heat death of the universe we arrive at a viable solution?
Chad Michel 00:16:39 Sometimes I wonder, though: are we just hoping that heat death comes sooner?
Jeff Doolittle 00:16:46 Oh man. Well, taking those three areas of complexity, you basically say in the book that agility comes from managing both requirements and solution complexity. I think that's an interesting summary. Talk a little bit about what you mean there and why it's your belief that agility comes from managing, because I don't think people usually put the words agility and managing together. So when you manage requirements and solution complexity, why does that lead to agility?
Chad Michel 00:17:14 Actually, it is interesting. I hadn't really thought of the fact that we had those combined. But if we're not managing things, things are just happening, right? We're just kind of running around and we don't really have any control. If we want to have agility, we're going to need to have some control over how things are going. Part of that involves having control over our requirements, at least knowing what's going on with them, and also having control of our solution. If we don't, we're just headed down some sort of path. It shouldn't be a problem for us if we're halfway through a project and the business comes to us and says, we need some other feature really quickly in this work. Right? That shouldn't be a problem. We want to be agile.
Chad Michel 00:17:54 That does potentially mean things are going to change, and potentially we're going to have to change what scope we do. We're going to have to change some dates. But ideally, if we have a good plan in place, we can move things around to accommodate that need that comes up later. I think we shouldn't be afraid of things changing on projects. It's going to happen. Businesses are going to learn new things. Now, hopefully we don't go from being an e-commerce solution to being a CRM solution all in one project; there are limits to it. But I think we should encourage businesses learning new things about what they need, and we should be able to manage those changes and provide that agility for them. And part of that, again, we've hit it a couple of times: we've got to provide good guidance. We've got to have the rigor to tell them, well, if we bring this requirement in, it's either going to move our schedule out, or we're going to have to change some scope, or we may have to make some sort of design change to help facilitate it.
Chad Michel 00:18:49 And a design change may require some extra time to handle. But we want to be able to have those conversations with our customers. What we don't want, and this is along that solution complexity side, is to be in a situation where someone comes to us with a new requirement and we think, oh, we have to rewrite the whole thing, or oh no, we don't want to go back into this area again. We don't want those conversations. One new requirement ideally shouldn't require massive changes to the system. It should be fairly encapsulated, to where it only causes changes to a few modules or services within that system. If we can achieve those goals, we should be able to continue to provide that agility for our customers for a long time. Because that is what software engineering is largely about: making sure we can keep maintaining and growing the system for a long time.
Jeff Doolittle 00:19:41 Yeah. In your experience, do people sometimes struggle when you try to apply real-world analogies to software engineering, though? For example, when I think about systems in the real world, I tend to think about simple things that people are familiar with, like say a water heater. Maybe you have a gas water heater, but you want to replace it with an electric one because we're electrifying now. And I don't have to bust out the drywall, redo all the plumbing, and change all the fixtures in the entire house. I just go to the place where the water heater is, I pop it out, and I pop in the new one, and maybe there are some fittings and some adjustments and some adapters. But generally speaking, I can just do that. And I see you nodding your head, but the listeners of course can't see you nodding your head. So help me out with that: how do you overcome that kind of resistance when people say, yeah, but that's a water heater, that's not software, Chad?
Chad Michel 00:20:32 That's true. A water heater's more physical. You can see it, right? You can feel it. I do think that helps people understand things a little bit; there is some sense of being able to feel things and touch them that maybe helps. Seeing software is difficult because it is almost like seeing thought, which is difficult. We're seeing a lot of text; that's difficult. But at the end of the day, we want our software to behave much the same way. We almost want there to be a water heater service that has some sort of contract, some sort of rigid definition, and if we need to replace this one for some reason, we should be able to write a new one and put it in there. That is a large goal of what we try to do in building our software projects, with different levels of success.
Chad Michel 00:21:19 A water heater's a pretty good example because as we bring it in, and I like this part of your example, it's never going to fit perfectly. Now, you went from gas to electric; that's going to be an easier transition because you don't have the venting. But if you did want to go from electric to gas, and I don't know why someone would do that, but if they did, then you do have to start thinking about venting the gas out or something. There are potentially going to be some other things, but largely the function of that water heater, the cold water, or whatever 54-degree water, coming in and the hot water coming out, that part of the contract is the same. There is going to be a little bit of difference in getting the fittings aligned.
Chad Michel 00:22:00 That's something that's going to happen. And in your case, going from gas to electric is easier because you don't have to worry about the venting. So those sorts of examples, I do think they are similar to what we're trying to achieve in software, especially from an engineering perspective. I've heard it said numerous times that engineering is the art of good enough. We're trying to get stuff to good enough. We could try to create a world where you could literally just take a water heater, almost like a plug, and you're just unplugging one and plugging a new one in, right? There would be a world in which we could have tried to do that. That would've been a high bar for everyone to achieve, and…
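The "water heater service" Chad describes can be made concrete with a small sketch. This is a hypothetical illustration in Python (the class and method names are invented, not from the episode): both implementations satisfy the same contract, cold water in, hot water out, so the caller can swap one for the other without changing anything else.

```python
from abc import ABC, abstractmethod

class WaterHeater(ABC):
    """The contract: cold water in, hot water out. Fuel source stays hidden."""
    @abstractmethod
    def heat(self, inlet_temp_f: float) -> float: ...

class GasWaterHeater(WaterHeater):
    def heat(self, inlet_temp_f: float) -> float:
        # Burner and venting details would stay inside this class.
        return 120.0  # thermostat setpoint, in Fahrenheit

class ElectricWaterHeater(WaterHeater):
    def heat(self, inlet_temp_f: float) -> float:
        # Same contract, different implementation: no venting needed.
        return 120.0

def run_faucet(heater: WaterHeater) -> float:
    # The house (the caller) knows only the contract, so swapping units is easy.
    return heater.heat(54.0)

print(run_faucet(GasWaterHeater()) == run_faucet(ElectricWaterHeater()))  # prints True
```

The fittings-and-adapters point survives the analogy too: each implementation absorbs its own installation quirks internally, so the contract the caller sees never changes.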
Jeff Doolittle 00:22:39 Yeah, it might be too expensive, right? But maybe there's a case for that. There's a company making laptops nowadays called Framework, and I'll put a link in the show notes for listeners. They're extremely modular, so you can pop out a USB plug and pop in something else, like a media port or a disk or more memory, that kind of thing. And to achieve that requires significant amounts of design, to be able to achieve that modularity. I think the same thing is true in software: if you really want modularity, you never get there by just oopsing your way into it. It requires design.
Chad Michel 00:23:17 Yeah, 100% agree. I did not know there was a company doing that. That's interesting, because on the laptop side, sorry to go on a tangent here, I am recording this on a MacBook, and Apple's gone the exact opposite direction. I think right now there is nothing customizable about this machine. I don't know if that's true, but I think it's true.
Jeff Doolittle 00:23:36 I have the same situation, right? Yes, I'm sitting here with my MacBook Pro and it's highly non-modular. Although I imagine there's some modularity in the aftermarket, and there's modularity in the build process, and so I imagine there's probably a bit more modularity in the factory than there is in the aftermarket, and that's a design decision. Apple's a design company, so they've just made some decisions about who gets to tweak their design and who doesn't. And we as the lowly consumers don't get to tweak the design; we just get to use it.
Chad Michel 00:24:11 No, that's a good point. There is quite a bit of change, at least in terms of different GPUs. Well, actually, now with the new ones, maybe not so much from the…
Jeff Doolittle 00:24:19 GPU.
Chad Michel 00:24:19 Right. But the memory, that is somewhat customizable, and the hard drive is still customizable. So yeah, there's still some. At least now they have good keyboards, so I don't know if I really want to swap that anyway. It's got a good keyboard.
Jeff Doolittle 00:24:31 Yeah. So now we can take that back to software and say, if we want modular software, how do we get there, and modular for whom? Is it modular for the end user? Is it modular for the developer? I think a similar analogy could apply there.
Chad Michel 00:24:46 Yeah, that's a good one. As an organization we're often trying to create situations where our software can be changed by developers, and it's important to us that it's not just changeable by us as a software development company. If we're working with someone, we are often trying to say, hey, this is how this is all set up. We try to do some training on it as offboarding if the other company is going to take it over. We do both models, where companies take it over and run with it for a long time, and where we keep it. Either way, we always try to make sure they have the knowledge, so they know how we intended that design to change.
Jeff Doolittle 00:25:20 Right. And you just said "how we intended that design to change," which reminds me of a phrase that's pretty common in the book: design for change. So talk about that a little bit and then we'll continue on through the book. Design for change: how does that relate to what you're doing with Don't Panic Labs and software engineering generally?
Chad Michel 00:25:39 I think this might be the key principle. It's all wrapped up in that critical thought: we want to design our solutions assuming they're going to change. Many of us at Don't Panic Labs didn't get to work on great software our entire careers, by the way. We worked on a lot of other things. Sometimes it was things we took over from others that weren't wonderful in previous lives. Sometimes it was things we helped build ourselves that we saw age in ways we didn't want. And a little bit of that was, it probably had a good design at the time, or at least seemed like a good design, but it really couldn't handle the changes to the business requirements. Over time the system slowly degraded to where every new feature became that much harder to add.
Chad Michel 00:26:27 And many of us at DPL came from prior organizations, and that was something we really didn't like about those solutions, because it didn't feel like we could just keep adding things. Every new thing added a weight to carry. So with the solutions we're building, we've been very focused on trying to make sure we design for change. We assume the system is going to change a lot over time, and we think through: if it changes in this way, how are we going to handle that, and will our solution be able to live with those changes? So we're not designing just for what the requirements are; we're designing for the seams, the volatilities, of that system. We know this area of the system is going to change, so we want to be able to make changes to that one module or service without having to destroy everything.
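One common way to express the seams Chad describes is to hide a volatile concern behind a stable interface, in the spirit of Parnas-style information hiding. The sketch below is illustrative only, not from the episode or the book; the `NotificationSender` names are hypothetical.

```python
from abc import ABC, abstractmethod


class NotificationSender(ABC):
    """Stable interface that hides a volatile concern: how users get notified."""

    @abstractmethod
    def send(self, user_id: str, message: str) -> str:
        ...


class EmailSender(NotificationSender):
    def send(self, user_id: str, message: str) -> str:
        # A real implementation would call an email gateway here.
        return f"email to {user_id}: {message}"


class SmsSender(NotificationSender):
    def send(self, user_id: str, message: str) -> str:
        # A later business change (email -> SMS) lands only in this module.
        return f"sms to {user_id}: {message}"


def signup(user_id: str, sender: NotificationSender) -> str:
    """The signup use case depends only on the stable interface."""
    return sender.send(user_id, "welcome!")
```

Swapping `EmailSender` for `SmsSender` touches one module; `signup` and everything layered on top of it stays untouched, which is the "change this one service without destroying everything" property.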
Jeff Doolittle 00:27:11 How do you do that if somebody's going to say, yeah, you aren't gonna need it, or somebody's going to say, that sounds like big upfront design? And obviously, you mentioned before somebody could change from e-commerce to CRM, but I don't think you're saying design for any and all change. So there have got to be some constraints here, because this is engineering and engineering is all about constraints. So tell us a little bit about how you do that. How do you design for change without trying to design for any and all change, and also how do you avoid speculating about things that maybe aren't actually likely to change?
Chad Michel 00:27:44 That's a good one. I like "you aren't gonna need that"; that is a good test. From our perspective, I always like to start with getting a good sense of what the requirements are from the customer. We spend a good amount of time getting a good sense of those. But then we try to boil it down to what are the core things the system's really going to do. The e-commerce system is going to be fairly different from the CRM; we're not going to end up with the same core set of things. And then we are really designing for the core things the system needs to do, the most important features or use cases of this system.
Chad Michel 00:28:21 What we do is we usually go through and design each use case individually. By the time we get all of those designed, and have a sense of what pieces we would use to build each one, we roll that up and look at what our overall system design looks like. Usually it's somewhat iterative. We go, well, this doesn't make sense, or this service here is being used by every single thing in the system; that's maybe a sign of a problem. And we try to see how these things are interconnected. It's usually iterative, I would say. We never get the design right the first time; we've probably never done that. We look at it, iterate, and keep going.
Chad Michel 00:29:01 I really like to do that first version on a whiteboard. It's really fast. You can do things quickly, go, oh, this makes sense, and change and change and change. By the time you get done, I often like to ask myself a few questions. Through those conversations I've heard the customer hint that we might be going this direction, or that these are the future states of things. We try to very intentionally pick up how that customer envisions this changing, because usually they have decent insights there, and we think, well, if that change came through, what would I do to this? And then there are probably some other changes that are common.
Chad Michel 00:29:42 Like I know data access is going to potentially change over time, and I know some of the workflow is going to change over time. Thinking through, well, we have this signup flow now; what if we completely change that in a later version? How badly is that going to affect the system? Once we get our design done, I like to really poke at it and see how it holds up. The other thing I really benefit from as an engineer at Don't Panic Labs is bringing in someone else, like Doug, and letting them poke at it a little bit as well. I think it's really nice to present your design to someone and have someone else try to really find the holes, or find the spots where, if this change comes through, it's going to be a nightmare.
Chad Michel 00:30:26 And I also think it's good to have other engineers look at it as well, because people a little closer to the code might go, well, that's going to be really difficult because of X, Y, or Z. It's nice to get that feedback. When we're doing design it's often iterative; you are looking for those reasons why not to do it this way or that way. And at the end of the day, you mentioned it earlier, we have to do this under some sort of time constraint. I think it's really important to get this done in a reasonable amount of time. We hold ourselves to getting all of this done in a week or less. If you don't, I think you're just going to keep iterating and never get something better. There's an old quote, perfect is the enemy of good, and I think it's really easy in any sort of design to start shooting for perfect. I know I'm 100% guilty of this myself. If you gave me a year to design something, it would be perfect and unbuildable. If you give me a week, it's way better.
Jeff Doolittle 00:31:26 Yeah, yeah. "Good enough by definition" is another way of putting that quote. Absolutely. And I like what you said about presenting to others. Engineering really is a team sport, and you're describing how you do this with software. I'm working currently for a construction software division of a large conglomerate, right? So I think construction, and I think, well, what happens? The architect creates blueprints. That's a design based on constraints, and those are real-world constraints: physical constraints, material constraints, all kinds of things. And then what do you do? You hand that to the engineers, and the engineers are going to tell the architect whether the design holds water or not. I mean that figuratively speaking, but if you're building a dam they may literally tell you whether it's going to hold water. And what's great is that can impact the original design. You mentioned before that design is iterative, and I think a lot of people miss that when they talk about design. They think the big W word, Waterfall: we're going to get all the requirements, do the design, and we can never revisit it again. And what you're saying is no, rigorous engineering means you absolutely need to change the design if you discover things in these other processes that are going to have an impact on it.
Chad Michel 00:32:37 And I would say this too: as we go through those reviews, almost guaranteed something changes. It's not always the same type of thing, but someone is almost always going to bring up something that could be better, or, beyond shooting for better, an actual problem. Someone could find it literally won't hold water. Immediately, we're not building a lot of water-holding software, but the equivalent in our world might be that we won't be able to handle enough requests with this sort of design for some reason. We often like to think through how we intend on hosting a service as part of our design, and someone could say, hey, you're never, ever going to get the volume you want out of that service.
Chad Michel 00:33:20 And those sorts of things are great to have people poking at. We'd prefer to learn those upfront, because it's very expensive to make one of those changes after it's up, or potentially even worse, after our customers or our customers' customers are using it. You really don't want to find out after users are on it that, well, we can only scale up to a few thousand users. That's just not going to be acceptable, and making those changes later, that rework, can kill software projects: coming back and just constantly churning on things. The other thing I always like to point out about rework is that, one, it's really bad for the business, coming back and constantly redoing things, but I actually think avoiding it matters just as much to engineers. I don't think engineers like redoing things either. They like to do something and move on to the next thing. Constantly churning on something isn't fun or good for anyone.
Jeff Doolittle 00:34:11 Yeah. And I think, too, when you think about rework in the real world, the impact is evident. If I'm constantly tearing out the walls and redoing them, eventually the structural integrity of the house starts to be suspect. The same is true in software, it's just less visible. This is where things like good abstractions can help you, because at least the abstractions aren't changing. In a poor design, you're also going to be constantly changing the interconnections between these things, and then your system becomes more brittle and more fragile with time as the structural integrity of your software becomes suspect.
Chad Michel 00:34:47 That's something we usually don't talk too much about, but yeah, if we're coming back and making too many changes over time, it's like a house. You're going to eventually have some sort of structural problem if you're constantly messing with it.
Jeff Doolittle 00:34:58 Yeah. And who hasn't seen the code where you can tell three different people have worked on it, and they're all variations on who's more creative or more terse or whatever. They've got their special way of doing it, and you see the different perspectives without some overarching design that says, this is how we do it. When we build a house, it's always 18 inches on center. Sorry if you're somewhere other than the United States, I don't know what you do there, but you have a standard that you follow, and that's where engineering comes in. You say, this is just how we do it here. We're not going to recheck every single assumption every single time. Some things are standardized.
Chad Michel 00:35:34 You're absolutely right. I think that's critical. Sorry, you got me thinking about this: what is the standard in Europe for studs? Now I'm going to have to look. We use inches here, so what else could it be? I never really thought about it, but it has to be something.
Jeff Doolittle 00:35:48 Yeah, yeah. But
Chad Michel 00:35:49 We definitely have to have some of those standards, just common ways of doing things. It's critical for us to be successful. I want to hit on one of your comments from a few seconds ago, too. Something I always find amusing in software is when you're looking at a project and, as you mentioned, you can see where every developer has been. I always view it as going through geological layers, like going through the Grand Canyon. You can see all those layers. Sometimes in software you can see pretty much that same set of layers: this was this person for a while, then it was this person for a while, then this person for a while. And you can see how those change over time with projects. It's not a good thing. I'm not calling it a good thing. It's not like going to the Grand Canyon and being in awe of something. But you'll see that in software as well.
Jeff Doolittle 00:36:32 Yeah. It's software archeology, in a way. And then you run git blame and you realize you used to write it that way, but now you write it this way, and you're even making it hard on your future self, which is also a challenge, right?
Chad Michel 00:36:45 Yes. That's always a horrible one, when you do a git blame on something and it was you. You're like, oh, how did I do that? I think we've all had that as engineers. We've all made that mistake or had that moment where we're like, who wrote this code? And then you look into it and you're like, oh, that was me.
Jeff Doolittle 00:37:02 That’s right. That’s right. I call that the Homer Simpson principle. Homer’s got vodka and mayonnaise and Marge says that’s not such a good idea. And he says, ah, that’s a problem for future Homer. Boy I don’t envy that guy. So don’t hate your future self.
Jeff Doolittle 00:37:15 So you've mentioned rework, and on page 31 of the book you talk a little bit about sources of defects and rework, and you say something very interesting there: requirements and design are implicated in a large percentage of bugs, with some studies showing them responsible for more than 50% of all defects. Now that's interesting, because I have a feeling a lot of people, when they think about bugs, are only thinking about software bugs. But you seem to be implying here that you can have bugs and defects in your requirements and in your design. Talk about that a little bit.
Chad Michel 00:37:46 That one I feel pretty confident in. We mentioned some studies and whatnot, and I feel pretty confident most of those issues, especially the ones that cause a large amount of rework, were actually introduced all the way back in the requirements. Someone didn't get some requirement right about how something should be stored or where something should be stored, or we designed something wrong. All of those sorts of problems are where you get some software developer getting a ticket two weeks before the product has to ship and then spending a whole week fixing something, and it probably wasn't because of some small code bug they put in. It was probably some sort of requirement that was wrong early on. And those sorts of problems kill you.
Jeff Doolittle 00:38:28 Yeah. Maybe it's the screen that the developers assumed needed to be built because the requirements were unclear, and they spend two weeks building it and then the business says, we don't need that screen. Or vice versa: where's the screen we said we need? To your point, it's missing, it's not there, and now you're spending the time. I think that's an interesting concept. Gerald Weinberg talks about this in his wonderful book called Exploring Requirements, and he says the cheapest place to find a defect is actually in the requirements. I don't think a lot of people in our industry think about bugs and defects that way, and yet he shows from his studies that it can be a thousand to 10,000 times more expensive to find the bug in production than it was to find it in the requirements. I think that's what you're saying as well here.
Chad Michel 00:39:12 Yeah. One, I think if you find it in production, the stress level of that find is also so much higher.
Chad Michel 00:39:19 One thing I'm always focused on, at least a little bit, is trying to create an environment that's really enjoyable to work in, and I don't think finding bugs in production is good for anyone. The developers don't like it, and maybe even more importantly, the customers don't like it. It's a problem for everyone. So the sooner we can find those problems, the better. And while we're going through requirements, I just want to point out, there's usually very little energy required to fix issues in that requirements phase. Oh yeah, that is wrong; we just fix it. There's maybe a little more as we go into design, because if we have a design and it changes, there's a little bit of work to go through and fix it, but usually not much.
Chad Michel 00:40:00 But by the time something's in code, fixing something like that can be a lot of effort, even on a pull request. If someone comes in and goes, well, that's not right, that whole requirement was wrong, a developer could easily lose an entire week of work having to go back and fix it. And by the time you're in production, you're often in the world of having to do some sort of hot fix to work around it before really fixing it. You often have to play that game of, we've got to get something out so they can keep doing what they need to do, and then really fix it, which is very expensive at that point. You mentioned a thousand times more; potentially you're fixing it multiple times, doing the quicker version that works around the issue and the bigger version to back that out to how it should have been in the first place.
Jeff Doolittle 00:40:45 Yeah. And then if you’re not testing, the odds are that you’re probably introducing new problems while you’re trying to solve that problem. So now you’re literally playing whack-a-mole.
Chad Michel 00:40:54 Yeah. No one likes to play whack-a-mole, and it's pretty easy to get yourself into a situation where you fix one thing, release something, and then have to fix it again with the next build. You get that comment no one wants to hear: we're going to need another hot fix. No one wants that.
Jeff Doolittle 00:41:09 Yeah. Although it seems like the word refactoring, which you have not used, gets bandied about a lot, but you have used rework. So what's your perspective on the difference between refactoring and rework?
Chad Michel 00:41:23 I think rework is inevitable. We're never going to get it to zero, right? But we do want to trend it as low as possible, and the reason is that if we're spending 50% of our time on rework, projects are effectively going to cost almost double, because we're losing half our time not even working on what we want to be working on. On refactoring, there's this notion, and there are some books on it, of an evolutionary architecture strategy: you just get stuff going and you keep refactoring. That is the same thing as what we talked about earlier, not planning; you're just going to go and rework it into something that's usable. My big counter there is that what we see in software is that things don't tend toward order. They have a natural tendency to go toward disorder, and with software systems it takes a lot of effort just to keep them ordered.
Chad Michel 00:42:12 So if we don't start ordered, I think it's really hard to ever refactor them back into something that's great. That's why it's so important to get off on a good start. One thing we bring up a lot is Martin Fowler's concept of the design stamina hypothesis; I don't know if you're familiar with that or not. The chart basically shows a good-design line; you can get onto that good-design line and hopefully stay there. But it also shows a no-design line that goes up really quickly. You potentially can show value faster on that no-design line, but very quickly it tips over and you're not able to continue adding features at the rate you want. If there's one thing that's been drawn on whiteboards at Don't Panic Labs the most, it is that chart, because we use it a lot to say, hey, we're trying to get you on that good-design line.
Chad Michel 00:43:00 We don't want to be on that line where six months later every new feature is really painful. And if we aren't doing a good job getting that design upfront, we're going to be in that rework cycle where, with every new thing that comes through, we're going to feel like we need to refactor all the time. That's just not a good situation to be in. In general we want to avoid having to make big refactors to systems to support one new requirement or change. Now, every solution has variable amounts of that. Sometimes some new feature or big change comes through and you do have to change a couple of services. Back to your water heater example: maybe the pipes don't quite line up. We may have to make somewhat bigger changes at times, but we want to keep that as small as possible if we
Jeff Doolittle 00:43:45 Can. Yeah. And sometimes bad things happen. When I was a kid, we had a house that was plumbed with these faulty connectors. Instead of using copper piping for the hot water, they used PVC pipe, and these connectors had a tendency to fail. When they started failing, we literally had water coming down from the ceiling to the downstairs. The company that designed this had a massive insurance claim from a class action, so they had to not only replace all the plumbing in the entire house, they had to redo all the drywall, redo everything. So there's an example of a bad design, and in software we don't always see it. What I was gunning for with refactor versus rework: my understanding of refactoring, from the original people who came up with the idea, guys like Martin Fowler and Joshua Kerievsky, is that refactoring is what happens when you have tests and the abstraction doesn't change. That's it. If the tests are changing, the abstraction is changing, and you're now doing rework. If you're improving the algorithm without changing the tests and the abstraction, that's refactoring, and I think there is a time and a place for that. The problem is, we use the word refactor when what we really mean is we're redoing the tests and the abstractions. Or there are no abstractions and everything's coupled to everything, and that's like the definition of rework.
Chad Michel 00:45:01 No, from that perspective, I like refactoring. If all we're doing is changing the code in one service or something, that in general should be okay, especially if we have tests in place, so it should pass the same set of tests.
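A minimal sketch of Jeff's distinction, using a hypothetical function rather than anything from the episode: the test pins the abstraction, and swapping the algorithm underneath it without touching the test is refactoring. If the test itself had to change, that would be rework.

```python
def total_below(prices: list, limit: float) -> float:
    """Sum every price strictly below `limit` (the abstraction the test pins)."""
    # Original version: an explicit accumulation loop.
    #   total = 0
    #   for p in prices:
    #       if p < limit:
    #           total += p
    #   return total
    # Refactored version: same behavior, same signature, tidier algorithm.
    return sum(p for p in prices if p < limit)


# The unchanged test is what makes the change above refactoring, not rework.
assert total_below([5, 12, 7, 20], 10) == 12
```

The function name and the numbers here are illustrative; the point is only that the contract (signature plus test) stays fixed while the implementation improves.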
Jeff Doolittle 00:45:13 Absolutely. Let's talk a little bit about quality, because we have a lot of quality assurance groups in software, although in my experience, quality assurance in software is usually actually quality control, which is like, we've already built the car, now we're going to crash-test it and push the buttons and check. But quality assurance is different. So talk a little bit about the difference between quality control and quality assurance, and then let's talk about what you say in the book, that we should aim for institutionalized quality, and what that means.
Chad Michel 00:45:42 I'm going to step back and answer the last part of that first. Where we want to be is with quality as something that everybody buys into, super essential for everybody's role: project managers, quality assurance professionals, software testers, software engineers, the people writing the requirements. We all have to have this quality mindset. It has to be on everybody at all times. Quality is one of those things where if any one piece of that chain falls below the bar, it creates a huge gap and we're going to end up with product quality problems. So it has to be institutionalized. Everybody has to buy in, and if we don't, we're going to end up with problems. It's the example of that andon cord: everybody has to have the ability to pull it, and everything stops while we fix something, going back to those examples.
Jeff Doolittle 00:46:32 Actually explain that a little bit more. Because listeners may not all be familiar with the Andon Cord, so talk about that a little bit if you don’t mind.
Chad Michel 00:46:37 The Andon Cord goes all the way back, I think it was Toyota, back in the day with their factories. They had an Andon Cord so that if anybody saw problems with the line, as they were running things through their very well-oiled machine of a factory, anybody could pull it, and they'd all stop and fix the problem, whatever it was, right then and there. Interestingly, and I always thought this was the more interesting part of the story we all forget, they also saw someone pulling that cord as a good thing. I think anybody who hears this immediately thinks, whoa, don't pull that, because we've got to keep things going. They actually considered it an advantage: if someone brings up a quality problem, we're all going to stop and fix it. In software we often almost assume quality is going to be lower than we want by default. Coming back to that institutionalized quality, we all have to see that we want a really, really high bar for quality. Everybody wants things that work. Nobody really wants things that don't work. And it's so much harder to take things from mediocre quality to high quality than I think we realize. It seems easy to just get that last little bit, but it takes a lot of effort, and we can't just wait until the end and go, oh, quality time, and get it there.
Jeff Doolittle 00:47:48 Well, we can try.
Chad Michel 00:47:49 We try a lot.
Chad Michel 00:47:52 And your example was really good earlier: we get almost to the end and then it's one marble in, one marble out. When you're waiting until the end, that's the experience you get. You're going to do some weird fix to fix something, and then some other thing's going to break. We want to shoot for a high bar right from the start, and that comes down to everybody having that same quality mindset, everybody understanding how important it is. And you can't just have one process. If there were one thing you could do to solve this, we would all do it and it would be over. But unfortunately, it requires everybody to have this mindset. We have to have engineers writing good code; we have to have designers designing good systems.
Chad Michel 00:48:34 We have to have UX designers designing good user experiences, and we have to make sure we have all those things in place. We have to have, in our case, a dev lead doing a good job with the code review, because even when developers are doing their best, they're still going to miss things. Someone's got to catch those things. And we have someone, we use our dev leads for this, who also makes sure: does this fit within the requirements of the architecture? It's really easy to start breaking the architecture when you have a good design but no one's reining things in to that design. They have to own keeping people within that design. As we get to deployment, we have to have people testing it there. We have to have people going through and doing a good job with UAT.
Chad Michel 00:49:12 It's not just about having software that works; it has to work well for those end users as well. So it's a lot of things to really get over that hump. If you really think about how many big bugs can come from small things in software, it is a really daunting problem. And that's why I wanted to focus on institutionalized quality: it's the only way to get there. No one person can do it. You can't just hire a software test engineer and call the problem solved. It's a big problem to solve, and software is not very forgiving. We're dealing with zeros and ones; computers aren't trying to help us. They're just objects that do things. We have to put all the quality in ourselves. It's a big challenge in our industry.
Jeff Doolittle 00:49:54 Yeah, I agree. And that culture of quality. What's interesting is you mentioned Toyota, and that's back from the 1950s. If listeners aren't familiar with W. Edwards Deming, I would encourage them to read about him, because he was the one who came up with the concept of total quality management, which ultimately became lean manufacturing. A lot of those concepts have been adopted by software people trying to do agile methodologies and things like this, and yet very often there's no Andon Cord. What's worse, if there is an Andon Cord, people will say disparaging things like, well, that's just QA overhead. It's like, well, hold on a minute. I like what you're saying: call it overhead if you want, but the point is, everyone is empowered to pull the cord and stop the line.
Jeff Doolittle 00:50:35 And what's funny is Toyota made amazing progress; they wrecked the competition in the fifties. In fact, there's a whole story about how the auto manufacturer CEOs from the US went over there to figure out what they were doing right; they were just slaying it. And I think it's ironic that in software we think, well, if we can pull the cord, we'll never ship. Actually, if you want to ship quality and you want everybody else to say, hey, how are you doing it over there? Maybe there's something to that culture of quality.
Chad Michel 00:51:02 And they were outperforming people who weren't focused on quality. I think a lack of focus on quality appears as though you're going to get things done faster, but it's very shortsighted. You may create one car faster, but you're not creating a bunch of cars faster, and potentially all of those cars are going to have so many defects that you're going to be dealing with other issues as a company as well.
Jeff Doolittle 00:51:23 Yeah. It's the whole measure twice, cut once. I'll joke when I'm woodworking: I cut this board twice and it's still too short. Well, that's what you get.
Chad Michel 00:51:36 That's a tough one, too. If you're cutting it and it's too short, you're in a really bad situation.
Jeff Doolittle 00:51:40 That's right. I'll cut it again. It's still too short. Cut it again. Let's talk about the role of chief engineer. I have a feeling this is another one of those that might raise people's hackles: oh, we don't want to call somebody chief, or things like that. But setting that aside, I tell people all the time, I say software architect, forget the title, right? There are certain things that software architects do, and if you're doing those things then you're architecting. So in a similar way, share with us what you mean when you talk about a chief engineer, and how that fits with the analogy you use of a football team. Maybe that'll help listeners understand what we're talking about with the chief engineer.
Chad Michel 00:52:14 Yeah, a hundred percent agree with your notion about titles. For one, they're very confused in this industry, so they're somewhat meaningless; they have such wide differences between companies that I think they can really only have meaning within a company. But what we're talking about here is having someone really understanding and thinking through everything as a whole. It's really common in software to have everybody in their own little silo. This person's thinking about their requirements gathering, this person's thinking about the project management piece, this person's thinking about how we're hosting it in the cloud or something. We really need someone thinking about this thing as an overall process and design and system, someone everybody can go to for those answers: this is how we're going to do this right here. And to me, a good portion of this ties back into, you mentioned a coach. That person also has to be somewhat teaching others how to do things. No software team is completely stable, just as no football team is completely stable. New people are going to be coming in. Even if you have a team operating perfectly, the next season you're going to have different people to some degree.
Jeff Doolittle 00:53:25 Injuries,
Chad Michel 00:53:26 Injuries. Yes. And in software, I mean it’s probably not injuries, right? Could happen.
Jeff Doolittle 00:53:32 I don’t know, maybe.
Chad Michel 00:53:34 Yeah, it could be injuries, but it’s probably more likely people leave. Or in some cases, like us as an organization, we’ve grown quite a bit in the last few years. So if someone’s bringing on new people, they’re not going to be familiar with how we want to do things. That chief engineer needs to help guide or teach or educate those people as well. We need someone that can say, hey, this is what we’re seeing from you, we need you to do things in a different way. Education, I think, is a large part of that role. And if you’re a coach, that’s often a large part of what they’re doing. To get into your football analogies: this is how you get into your three-point stance. No, you’re too high, you’re too low. Working on those sorts of things.
Chad Michel 00:54:12 No, you’re a receiver at the edge, you should have started with this foot in front and taken this step. Or you’re a DB: no, don’t step like that, start with your left leg back so you can see down the line better, or something like that. All of those sorts of details. Someone needs to know how that’s being done and orchestrate that and coach those people. That’s one of the big things we’re thinking with that chief engineer, and again, why the coaching analogy I think rings fairly true. Because a large portion of it is education. No matter where you’re at now, not all your staff are going to be on the same page. And even if you do get them to where everybody’s naturally seeing the same things, your team’s never going to be stable. It’s going to be changing. You’re always going to be bringing new people on. Hopefully not continually, you don’t want to just be constantly churning, but there are going to be changes. There’s probably a good analogy here of some sports team that completely got destroyed between seasons. We don’t want those situations. That’s tough. But new players come on board, you’re always drafting new players, and you have to get them up to speed.
Jeff Doolittle 00:55:17 Yeah. And it’s interesting because design for change, I think until now I was thinking of that mostly in terms of the software. And yet interestingly, when we talk about chief engineer, we’re talking about how you design for change with the people. And as I’m often apt to say, we’ve discovered the problem. The problem is people, and the solution is people, and that’s the problem. Which is kind of a riff on Mel Conway, who has taught us that organizations will build systems that reflect their own communication structures. So in a similar way here, it’s let’s recognize things change, people come and go. What can we do about it? How can we have consistency of communication, complexity containment, things like this? And your proposal is, let’s have somebody whose job is to have that holistic view, onboard people to the project, and keep them consistently pulling in the same direction.
Chad Michel 00:56:06 Yeah. And it’s really important. Because, as you mentioned, at the end of the day it’s always a people problem, back to institutionalized quality. You’ve got to have everybody moving in the same direction. And you have to understand not everybody’s going to come in with necessarily a lot of experience engineering software. Especially if they’re coming from another organization, there’s a high probability that their role was: some requirement came through the door, I put some stuff on a webpage with maybe some sort of backend that did it, and we shipped it. There’s a good chance that was their world. And you can’t expect that overnight, like they literally started working with you on Monday and by Tuesday they’re writing software the way you want. That’s not going to happen. Even if you have good processes in place and good training as they come on board, once pressure starts hitting them, they’re going to start defaulting back to other behaviors. It’s going to take someone working with them over the course of years to really get them to where they’re writing software the way you want. It’s not something where you just flip a switch in someone and go.
Jeff Doolittle 00:57:05 Yeah. And you mentioned earlier in the book objective complexity, requirements complexity, and solution complexity. And I think maybe it’s part of requirements complexity, but I think expectations complexity is another challenge too. 20 years ago you could just throw a webpage together, a little PHP page with a little MySQL database behind it, and just start going. But boy, people have supercomputers in their pockets now and on their wrists, and their expectations are so high now that good luck meeting those very, very high expectations by just slapping something together and shipping it.
Chad Michel 00:57:38 You mentioned that; we don’t actually have expectations complexity in the book, but that’d be a good one to maybe add to the list. What people want out of their software projects has gone up a lot over time. What we got away with in 2005, no one would tolerate nowadays. We have an app we’re building for a customer that, just as a little side feature, has a full chat piece to it. It’s just a side thing of that application, not really that critical. 15 years ago that was an app in and of itself.
Jeff Doolittle 00:58:08 Right. And now that’s like an afterthought.
Chad Michel 00:58:11 Yeah. Like, oh, we need to add chat to this. Okay. It’s just not the same as it would’ve been 15 years ago. And 15 years from now, everybody’s going to expect that chat to be some sort of AI communicating with you back and forth and telling you stories while you’re doing it. Who knows? I don’t. I’m just making stuff up here.
Jeff Doolittle 00:58:28 No, but who knows, right? And I think, again, we have to use real-world analogies because we’re dealing in abstractions and with text and code all day. But I think about what the expectations of a hunter-gatherer were 20,000 years ago: a cave with a fire and some furs were sufficient then, and they’re woefully insufficient for us now. And I’m sorry to those of you who are still slapping together and shipping PHP apps with a MySQL backend, but essentially, it’s like a nice comfortable cave. Yeah, it’ll protect me from the elements, and it does solve the core use case, which is I need to survive to live another day. But maybe my expectations for my domicile are a bit higher now in the 21st century.
Chad Michel 00:59:05 If you’re still doing PHP in MySQL, maybe at least consider Laravel.
Jeff Doolittle 00:59:09 Yeah. And I don’t mean to pick on PHP. I’m just thinking about my early days in the nineties when I was literally doing that slapdash stuff.
Chad Michel 00:59:16 We all were.
Jeff Doolittle 00:59:17 Yeah. Before I discovered that software engineering does not equal software development necessarily.
Chad Michel 00:59:22 100% agree. And one good thing, though: you mentioned expectation management and things changing. Over time things keep changing. That’s one of the things I love about this industry. What we’re doing now won’t necessarily be what we’re doing 10 years from now, but hopefully we can build software systems now that we’ll still want to work on 10 years from now.
Jeff Doolittle 00:59:43 Yeah. And as the book comes to a close, you talk about what it will mean for software development to be an engineering discipline, and specifically you talk about transformation of individuals and teams. So maybe as we start to wrap up here, share a little bit with people who are interested in finding out more about how we apply rigorous software engineering principles in our context. What can we do to assess the current maturity of our teams and start providing paths for people who want to accelerate their software engineering journey?
Chad Michel 01:00:12 I think probably a good starting point is getting some really good leaders in your organization who really do value software engineering. Again, you could use what we talked about earlier, the Parnas test: do they know who David Parnas is? Beyond getting some good leaders in your organization, we’re actually, as an organization, actively working right now on an assessment, a way to assess developers on some of those competencies, tying all the way back to the SWEBOK. We’re not really ready to talk about it yet or put it out there completely, but we’re trying to get an assessment of where people are at and provide training and education to get them there. And not every Friday, but a lot of Fridays this summer, we’re actually doing internal workshops for our developers, taking them through some of these core concepts and trying to level them up.
Chad Michel 01:00:57 I mean, for one, everybody gets some training in situ while they’re working on things. But we’re very intentionally trying to make time to take people through our workshop series where we’re going through things. On Friday this week, we’re going through information hiding; it’s part two of information hiding we’re going through with people. Now, some of the people that have been with us for a long period of time are probably a little bit like, yeah, I get this. But we have to remember, we have an organization that has changed a lot over time. So for some of them, it’s going to be like, oh, I had no idea who David Parnas was. I think we hit the first part of it last Friday, and for some of them that was maybe their first introduction to him. So it’s providing those steppingstones and keeping this going.
Chad Michel 01:01:34 And even for the people that have been in it quite a while, it’s sometimes nice to rehear things after a while. It’s pretty common, at least for the way I think many of us learn: you can learn something once, but when you rehear it six months or a year later, it takes on a different, deeper meaning. The first time, it was like, well, I heard something, but it isn’t until you really get it the second time or the third time that it actually starts to sink in.
Jeff Doolittle 01:02:02 Absolutely. And I think information hiding is a good example of that. You say those words and immediately people think they know what you’re saying. But if you’re not doing volatility-based decomposition, you’re not doing information hiding. And I know Parnas doesn’t call it volatility-based decomposition, but that’s essentially what he’s talking about: find the areas of significant change and encapsulate them in services, or modules as he calls them. And this was 50 years ago, right? And I think that’s part of the challenge too: words struggle to carry the weight of the meaning they’re trying to convey. So we have to repeat ourselves, and we have to keep growing and maturing our teams as they are constantly evolving and changing.
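[Editor’s note: to make the idea concrete, here is a minimal sketch in Python. The notification example and all of its names are hypothetical, not from the episode or the book; it simply illustrates encapsulating a volatile decision (here, which delivery channel to use) behind a stable interface so that changing it doesn’t ripple through callers.]

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """Stable interface: callers depend only on this contract."""
    @abstractmethod
    def send(self, recipient: str, message: str) -> str:
        ...

class EmailNotifier(Notifier):
    # One volatile implementation choice, hidden behind the interface.
    def send(self, recipient: str, message: str) -> str:
        return f"email to {recipient}: {message}"

class SmsNotifier(Notifier):
    # Another channel; swapping it in requires no changes to callers.
    def send(self, recipient: str, message: str) -> str:
        return f"sms to {recipient}: {message}"

def notify_user(notifier: Notifier, recipient: str, message: str) -> str:
    # Client code is written against the stable Notifier interface,
    # so the volatile channel decision is encapsulated in one place.
    return notifier.send(recipient, message)
```

If the channel later changes from email to SMS (or to some future chat or AI channel), only a new `Notifier` implementation is added; `notify_user` and everything above it stay untouched, which is the "encapsulate the volatility" payoff being described.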
Chad Michel 01:02:42 And probably the last piece there, and this is one that I know we care a lot about: we’ve tried to help a lot in our community, the Lincoln, Nebraska ecosystem, trying to help grow others within that area as well. Because we’re not trying to make ourselves look wonderful by not helping others. We want everyone to benefit, and we can learn a lot from others. Along those same lines, just because we’re doing something really well here, we’re still going to learn by interacting with others: oh, they’re doing this, and it’s working out really well. We’ve all got to work together for all of us to improve. Because none of us have it all figured out. We’re all on a journey of trying to get better, and at times you’ll feel like, oh, I’ve got this really well figured out.
Chad Michel 01:03:25 But sometimes just hearing someone else’s different take on it can be pretty informative. Going, oh, this person’s trying something different, and it might have better results for us. Because software development is hard. I think it was a Donald Knuth quote, “software is hard,” and it is a difficult discipline to be part of, and software engineering is a lot of effort. But if we all work together and are part of the ecosystem, things like this podcast also have a lot of value. Because it helps spread the word about software engineering itself, not just software development, and gets people thinking more about how to write software we can live with, how we can encapsulate volatilities, as opposed to just what new features are available in what new framework. You get a lot of that kind of content out there. Not that that content’s bad, there’s some great content along those lines, but I think software engineering has an even bigger impact on the world, and there’s just not as much of that content out there. There’s way more focused on what’s available in this new framework right now.
Jeff Doolittle 01:04:27 Yeah, I agree. I tend to say that people focus on tools and technologies, and there’s nothing wrong with being a great technician, but that’s not systems design and that’s not software engineering. Right? Being really good at using a circular saw, which is really awesome and cool, doesn’t mean you know how to design skyscrapers. And that’s not insulting the guy who is a whiz with the circular saw. It’s just a different discipline. And I think a similar thing is true here: people don’t rise to the occasion, they fall back on their training. So when your training is in these tools and technologies, you’re always going to try to jump to coding, or jump to spinning up a Kubernetes cluster, or jump to spinning up a Kafka whatever. Right? Again, those aren’t bad things, but what purpose are they serving more broadly?
Jeff Doolittle 01:05:08 And that’s where I am really excited that we’ve had you here on the show, because you can’t fall back on software engineering when you haven’t learned how to be a software engineer. So if you want your training to be what you fall back on, then I think books like yours are a great introduction for people. And there’s a whole lot more in the book that we didn’t have nearly enough time to cover in our brief time. But hopefully this gives people a sense of you and what you are about, Chad, and about Don’t Panic Labs, and people can grab the book and start diving a little more into what it means to do software engineering.
Chad Michel 01:05:41 Thank you so much for this time. This has been a lot of fun. Good to meet you. And always enjoy talking software engineering with people. Yeah, a lot of people want to focus more on the tools and technology. You’re going to find a lot of people that want to talk about that. But I really enjoy these talks about software engineering and thanks for this opportunity.
Jeff Doolittle 01:05:57 Yeah, glad you’re here. And if listeners want to find out more about what you’re up to, it’s Chad Michel, M I C H E L, on Twitter. And they can also find Don’t Panic Labs at dontpaniclabs.com, which is the company that Chad and his co-founder, Doug, are operators or owners of, or whatever your titles are. Grand Poobah, I don’t know.
Chad Michel 01:06:16 There’s six of us that own Don’t Panic Labs.
Jeff Doolittle 01:06:19 Okay.
Chad Michel 01:06:20 There’s four others. But yeah, Doug and I wrote this book, Lean Software Systems…
Jeff Doolittle 01:06:23 Awesome. Awesome. Well, hey Chad, thank you so much for joining me today.
Chad Michel 01:06:27 Thank you so much, Jeff.
Jeff Doolittle 01:06:28 All right. This is Jeff Doolittle for Software Engineering Radio. Thanks so much for listening.
[End of Audio]