
SE Radio 577: Casey Muratori on Clean Code, Horrible Performance?

Casey Muratori caused some strong reactions with a blog post and an associated video in which he went through an example from the “Clean Code” book by Robert Martin to demonstrate the negative impact that clean code practices can have on performance. In this episode, he joins SE Radio’s Giovanni Asproni to talk about the potential trade-offs between performance and the qualities that make for maintainable code, these qualities being the main focus of Clean Code.




Transcript

Transcript brought to you by IEEE Software magazine and IEEE Computer Society.
This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number and URL.

Giovanni Asproni 00:00:16 Welcome to Software Engineering Radio. I’m your host Giovanni Asproni, and today we’ll be discussing the tradeoffs between code quality and performance with Casey Muratori. Casey is the host of Handmade Hero, an educational programming series, and the lead programmer at Molly Rocket on 1935, an upcoming interactive narrative game about organized crime in 1930s New York. His past projects include the Granny Animation SDK, Bink 2, and The Witness. Casey’s past research contributions include the n-way linear quaternion blend, the immediate-mode graphical user interface (IMGUI), and the first geometric optimization for the Gilbert-Johnson-Keerthi algorithm. His current research interests include interactive fiction, concurrent computing, and operating system architecture. Casey, welcome to Software Engineering Radio.

Casey Muratori 00:01:02 Thanks so much for having me on your program.

Giovanni Asproni 00:01:04 Well, thank you for coming. Let’s start. Now this interview was actually prompted by your blog entry, with an associated video, entitled Clean Code, Horrible Performance, published at the end of February this year, which caused, let’s say, some strong reactions. Yeah?

Casey Muratori 00:01:19 Yes, certainly did.

Giovanni Asproni 00:01:19 With many agreeing with your points and many others disagreeing very strongly as well. So you polarized the audience a bit.

Casey Muratori 00:01:26 From what I could tell in many ways I seem to have given the audience an excuse to argue amongst themselves more than anything. There were so many arguments between people that did not involve me that came out of this video that that seemed to be the biggest actual result or the thing that it caused the most was other arguments that I was not even aware of.

Giovanni Asproni 00:01:49 Okay so can you actually give us a very brief summary of the main points of the video for the listeners that maybe haven’t seen the video yet?

Casey Muratori 00:01:56 And I can also perhaps give a little bit more context to the video, because one of the things that is interesting about the video is it comes from a course that I made, and nobody really knows the context of it if they haven’t looked at that course. They’re just seeing this little slice, right? So I have a Substack, it’s at computerenhance.com, and on there there’s a table of contents that has the listing of what is in this course. And the first thing that it has in this course is a prologue. And the prologue is not designed to teach anyone any of the concepts yet. I should say that the course is about understanding performance, or being aware of performance. The prologue is just designed to teach people, effectively, what are the things that can affect performance?

Casey Muratori 00:02:44 And I’ve got a thing in there about IPC, about multi-threading, about cache, you know, that sort of stuff. And this material for Clean Code, Horrible Performance was something that I basically included as a bonus video that didn’t quite make the cut for the prologue, because I was like, yeah, you know, virtual functions are kind of this subset of a different thing we talked about, which was this sort of idea of waste, which is making the CPU do things that you really didn’t need it to do. And so this is just like a little tiny snippet from this larger context. And of course it’s the only thing that anyone has really seen outside of the people taking the course, which is a paid course. So I don’t blame people, like, they don’t want to go take the course, totally fine by me.

Casey Muratori 00:03:25 But this video has to be understood as something that is part of the larger context of things you need to be aware of when you’re thinking about performance. So in this particular video, what I talk about is the fact that in a lot of the sort of traditional textbooks that people cite when they talk about clean code, including, I don’t say this overtly in the video, but including the book that is literally called Clean Code and is the first hit on Google if you search for the words clean code. So if a novice who does not have their own idea about what clean code is were to type clean code into Google and hit return, they will find that book, and they will also find right on that front page summaries of the rules from that book, which include the rules I use in the video and so on. So in this video I go over what some of those rules are, and specifically what some of the rules are that actually are directly related to code architecture. Because some of the rules, when people talk about clean code, have nothing to do with code architecture. They’re things like, what do we name variables? Well, that’s about readability to the programmer, but it will not affect the program at all.

Giovanni Asproni 00:04:32 So with the code architecture, you mean things that actually influence the runtime behavior of the program somehow after compilation maybe.

Casey Muratori 00:04:39 Yes. Or that influence literally what the structure of the program is in any analysis, even separate from the compiler, because you are making choices about what code can and cannot go together. To give a simple example, there’s a book on refactoring by Martin Fowler. There are literally parts that say, oh, we were doing these two things inside one loop and instead I’m going to break it into two loops and do one thing in each loop. That is a transformation of the code where, assuming those two loops are not analyzable and fusible by the compiler anymore because you’ve moved them apart, you have literally just written a different program. The compiler can never generate the same program from that thing that you’ve done. Right? So when I say that they affect the architecture of the program, what I mean is the two source code sets can no longer generate the same program by any compiler that we currently have.
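To make that loop example concrete, here is a minimal C++ sketch of the kind of split being described (illustrative only; this is not code from the book or the course):

    // Original: both pieces of work happen in a single pass over the data.
    void update_single_pass(float *positions, float *velocities, int count, float dt) {
        for (int i = 0; i < count; ++i) {
            velocities[i] *= 0.99f;               // apply drag
            positions[i] += velocities[i] * dt;   // integrate position
        }
    }

    // After the refactoring-style split: the same work, but in two separate loops.
    // Unless the compiler can prove the loops are fusible, the data is now walked
    // twice, and the two source files simply describe different programs.
    void update_two_passes(float *positions, float *velocities, int count, float dt) {
        for (int i = 0; i < count; ++i)
            velocities[i] *= 0.99f;
        for (int i = 0; i < count; ++i)
            positions[i] += velocities[i] * dt;
    }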

Casey Muratori 00:05:28 Maybe some future compiler could undo that change because it’s way smarter. You know, maybe the new AIs descend and they can analyze what you’ve done and say, oh, this person broke those two loops apart, I’m going to fuse them to get it right. But currently the technology that we have doesn’t do these transformations. So I took the rules I could find in there that were specific. They were ones that were like, you should do these transformations, like prefer this to that. That had to do with architecture. And I was just looking at them and going, like, you know, these are basically just really bad for performance. In most languages, if you apply these rules, they’re telling you to do things like using inheritance hierarchies and using dynamic dispatch a lot of times. And when you actually look at what those transforms do to code, it all just slows it down.

Casey Muratori 00:06:12 It never speeds it up. It can only slow it down and it slows it down in ways that most people don’t even appreciate. And again, this is stuff we cover later in the course. This video isn’t meant to give you a full understanding of why, but most people look at that video and because I guess, I don’t know, this is just what they think in their head is happening or something, they think that I’m only talking about the cost of calling through a V table, but that’s not the only cost. There are also a lot of optimization costs you pay because when you call through a virtual function in languages like C++, if the compiler doesn’t directly know exactly what type it’s dealing with, it cannot optimize through that virtual function call. And that is by far the biggest cost. Not the dispatch.

Casey Muratori 00:06:54 The dispatch can be bad as well, but it’s the optimization cost. So I was trying to just show, like I showed with the other things in the prologue of this course, hey, these are all real things. Like when I’m talking about this, I’m not talking about abstract things. Like when I said some of the things in there were multi-threading or IPC, I didn’t want to just throw out the terms. I want to show you little, easy to understand snippets of code where all I have to do is one little tiny change and, oh, it gets a lot slower. Right? And I did that for everything. I did that for IPC, I did that for multi-threading, I did that for caching, and this was just one that I did for, like, hey, virtual functions: really bad, right? So I can understand why people, when that’s all they’ve seen, they think maybe that I’m saying that, oh, virtual functions are like the only thing that’s wrong with their code. It’s like, no, it’s just one of many things that are bad techniques. And so that’s, you know, kind of I think what incited these sorts of large amounts of reactions to it.
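A minimal C++ sketch of the distinction being drawn here, between the dispatch cost and the lost-optimization cost (illustrative only; not code from the course):

    #include <cstdint>

    // Virtual version: if the compiler cannot prove the concrete type behind the
    // reference, it must emit an indirect call each iteration and cannot inline
    // or fold anything through it.
    struct Counter {
        virtual ~Counter() = default;
        virtual uint64_t step(uint64_t x) const = 0;
    };
    struct AddOne : Counter {
        uint64_t step(uint64_t x) const override { return x + 1; }
    };
    uint64_t run_virtual(const Counter &c, uint64_t n) {
        uint64_t x = 0;
        for (uint64_t i = 0; i < n; ++i) x = c.step(x);  // opaque call every time
        return x;
    }

    // Enum-plus-switch version: everything is visible to the optimizer, so the
    // body can be inlined, the branch hoisted out of the loop, and the whole
    // thing often reduced to something far cheaper.
    enum class Op : uint8_t { AddOne };
    uint64_t run_switch(Op op, uint64_t n) {
        uint64_t x = 0;
        for (uint64_t i = 0; i < n; ++i) {
            switch (op) {
                case Op::AddOne: x = x + 1; break;
            }
        }
        return x;
    }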

Giovanni Asproni 00:07:49 Okay. I have a few questions about some of the aspects. Okay. So I don’t really want to focus on the video per se; I want to focus here on the trade-offs, if you like. So we’ll talk now about performance, then later maybe a bit about what, let’s call it, good code looks like, because clean code seems to be a trademark; you know, talking about clean code, it seems to have some kind of specific meaning nowadays. So let’s say good code. Yeah. The first question I had is that I noticed that in the video you focused a lot on CPU cycles, optimizing for CPU cycles. Now many modern applications actually are network or I/O bound. So in this context, what is your view about applying clean code principles? So let me give you an example. You said how virtual functions can be very expensive, but when you are I/O bound or network bound, probably that cost is not that much in the grand scheme of things. So I’d like to understand your position on the clean code principles, whether they are still good, bad, I don’t know, even in contexts where the performance is actually influenced by other aspects.

Casey Muratori 00:08:51 Totally understand the question and I think for me, I always have sort of a difficult time with the specifics of that particular analysis. So personally I’ve never seen a program that has those characteristics. Meaning I’ve never actually seen a program that is actually I/O bound. I have seen programs that were written in such a way that they become I/O bound. But the actual nature of the thing the program is doing is almost never I/O bound. And the reason I say that is because when I say never, I mean in a current context. Obviously if we rewind the tape to 1970, it’s pretty easy to find something that might be bound by a spinning tape drive or something like this. But modern I/O subsystems deliver massive throughput. Modern network architectures deliver massive throughput with extremely low latency. I mean my ping times from my home in rural Washington are sub-10-millisecond ping times to major servers.

Casey Muratori 00:10:00 So when people say that they’re I/O bound, usually when they say that it’s because the way they structured their program artificially made it I/O bound. And there’s a lot of ways that this happens. So for example, if you structure your communication with a server so that you’ve basically just not really thought about how you’re going to request things and you haven’t really thought about which side is going to do which computations and things like that, you will often see network-based programs, for example, doing linear sets of dependent I/O operations. They’ll send one small thing over to the server and the server will send some small thing back, the client will do a little bit of work and then send another thing over, wait for the response and so on. Right? You can actually see this if you open up a lot of webpages that load slowly.

Giovanni Asproni 00:10:46 You mean, basically applications that are very chatty in a way. So lots of small requests back and forth.

Casey Muratori 00:10:53 They’re serial dependency chains, right? They’re a serial chain of things that it has to wait on the I/O for. And if you look at a lot of webpages, you’ll see this is why they’re slow. There’ll be these long diagonal lines of basically network waits, right? Now if you actually go to look at what it’s doing, there’s usually no actual reason that the programmer had to have structured it that way. They were in control of both sides and they merely chose to do it this way. So then they say, well, it doesn’t matter how slow my JavaScript stuff is here, because most of my time is waiting on the server. It’s like, well, no, that’s not actually true. That’s just because you also didn’t fix the way you were doing your I/O. Right. Does this make sense?
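A minimal sketch of the pattern being criticized, using hypothetical request helpers (these are not any real API), just to show the shape of a serial dependency chain versus a single batched round trip:

    #include <string>
    #include <vector>

    // Hypothetical blocking helpers; each call below is one full network round trip.
    struct User    { int id; int account_id; };
    struct Account { int id; std::vector<int> order_ids; };
    struct Order   { int id; std::string summary; };

    User    fetch_user(int user_id);        // hypothetical
    Account fetch_account(int account_id);  // hypothetical
    Order   fetch_order(int order_id);      // hypothetical

    // Chatty version: a serial chain of dependent requests. Total latency is the
    // sum of every round trip, so the client looks "I/O bound" by construction.
    std::vector<Order> load_orders_chatty(int user_id) {
        User user = fetch_user(user_id);
        Account account = fetch_account(user.account_id);
        std::vector<Order> orders;
        for (int order_id : account.order_ids)
            orders.push_back(fetch_order(order_id));   // one round trip per order
        return orders;
    }

    // Batched version: one request that asks the server to do the joins and return
    // everything at once, so total latency is roughly a single round trip.
    std::vector<Order> fetch_orders_for_user(int user_id);  // hypothetical batched endpoint
    std::vector<Order> load_orders_batched(int user_id) {
        return fetch_orders_for_user(user_id);
    }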

Giovanni Asproni 00:11:38 I understand the point, but what I’m thinking also, so for example, my experience is mostly with enterprise systems, if you want to call them that: big services running behind the scenes for some big banks or big companies, that are used by lots of people and usually have to deal with quite big databases. So in those situations, sometimes queries are not as fast as you would wish them to be, even if you optimize the database as much as you can, because there is simply a lot of data. Yeah? So in those situations you really become I/O bound. Of course sometimes things are not done properly, as is often the case, but even when they’re done properly, the difference between the CPU performance and the performance of the I/O can be really dramatic there. So these are the kind of systems I’m thinking of.

Casey Muratori 00:12:25 But I guess I don’t know why a database system would be I/O bound in that sense, right? Like, I mean are you talking about the database does not have the ability to be distributed or do memory caching or any of those things for some reason?

Giovanni Asproni 00:12:37 No, I’m talking about from the interaction with the database from the point of view of the service running, you know, running the query. So my service needs to read something from the database, runs a query and receives the data back. This operation is actually very often time-consuming depending on the amount of data you have. Of course also the set-up matters. There are lots of things, but in general the performance of these operations is much, much slower than what you have at the CPU level. Yeah.

Casey Muratori 00:13:09 Well so I obviously don’t disagree with the fact that if you have two things that need to communicate and one of them is performing slowly, that the other one then has the ability to perform more slowly, if you like, without actually affecting the runtime of the program. However, what you have done, if that’s the design philosophy, in other words, if your philosophy is, well, we’ll take whatever the slowest piece of our system is and just design everything else to be as slow as that thing is, now you’ve basically guaranteed that you can’t really optimize your database. Because if you now decide, well, you know what, we want to speed up this system, so we go and we optimize the database, and then for some reason nothing gets faster, why didn’t it get faster? Well, because all of our other things were taking advantage of the fact that that piece was slow. So what we’ve done there is we’ve solidified low performance into our architecture, which I again wouldn’t approve of.

Giovanni Asproni 00:14:09 Let’s see if I’m understanding what you’re saying. Basically you’re saying: if you say that the database is the bottleneck, yeah? And because of that you take less care of the code of your service. It’s like, well, then if you try to improve things on the database, they won’t give you an advantage, because your service is sluggish anyway. So you’re saying, in other terms, that’s no excuse to write code that is slow on your service to start with.

Casey Muratori 00:14:38 Yes. And I would go you one better and say we have examples of this in the wild, so this isn’t hypothetical. A recent video I did is called Performance Excuses Debunked, I think. And what I do in that video is I just go through all of the performance announcements that have been made at, like, Facebook, Uber, Twitter, where they announce, hey, we improved performance by this much and here’s what we did. If you go look for example at Facebook’s performance announcements, you’ll actually see announcements where they’re like, we tried to speed up the server but we quickly hit a limit to how much we could speed up the server, because our web front ends were so slow that it didn’t matter how fast we turned around the responses, right? So the case you’re talking about, it literally happened, and we have documented proof of it happening. And then all you end up doing is you have to flip around and go, all right, now we’ve got to rewrite our front ends because they were all written to be slow, and so when we sped up our backend, nothing happened. Right?

Casey Muratori 00:15:39 So the way I look at it is, when people make the claim that they are I/O bound, or that they are bound by something where it doesn’t matter how they architected their software, even if they’re right, which oftentimes I think they say that and it’s not really true, like they haven’t actually analyzed that program to prove that that’s true. But let’s say some people have and they’re right. I still think it’s a poor choice, because all you’re really doing is making it harder for the future team that’s going to have to improve this system to actually make those improvements. You’re going to upgrade the hardware on your server and you’re not going to get any faster. And the reason is because you had all these slow front ends that were wasting a bunch of time. Right?

Giovanni Asproni 00:16:19 Okay, let’s move on. Now, one of the rules you break in the video is that code should not know about the internals of the objects it’s working with, which is basically the modularization mechanism devised by David Parnas in 1972. Yeah? Which nowadays is a kind of staple of good system design, for many reasons: you know, flexibility, comprehensibility, shortening of development time if you have different teams working on different modules, stuff like this. In the video, by doing that you get a big improvement in performance at some point. So I have a couple of questions related to this. The first one is: what do you do in case these objects you depend upon are actually external libraries or frameworks you depend upon? I mean, do you go there and check and try to depend on implementation details that can give you an advantage, or do you kind of live with it and just use the public API even if you go a bit slower? What is your approach to that?

Casey Muratori 00:17:19 Well, I mean, I don’t know that I would say that I have a particular approach one way or the other, but what I would say is people do both of those things. So sometimes, if you can live with whatever the performance is that you have, then you’re like, well, I’m not going to touch this particular library’s internals, because they’re going to change it or they’re going to update it and I don’t want to have to create work for myself every time they do. And that’s a tradeoff you can totally make. Other times, when performance is important, people literally do exactly what you said; in fact people have done exactly what you said with libraries I’ve written, because, and we can go into this at some point if you want to, obviously I used to ship literally commercial libraries. That’s what my job was at RAD Game Tools; hundreds of game developers had to use these libraries and so on.

Casey Muratori 00:18:04 In general what I tried to do is make actual contracts for data types. So I would try to basically say to people, okay, anytime we have a data type that we’re very confident we don’t need to change, we will expose it to you. And the reason was specifically that it makes it much easier for developers to work with those data types, because now they can have their compiler see them. They don’t necessarily even have to touch them, but the fact that they’re exposed means that the code gets faster when they compile, because their code doesn’t have to call accessor functions that it can’t see. It can just do those accesses inline and it gets much faster. But occasionally we would have some where we’re like, these are probably going to change, and we would keep those internal. We wouldn’t expose the definition of those.
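A minimal C++ sketch of that contrast, using invented names (this is not the actual RAD Game Tools API):

    #include <cstddef>

    // Opaque version: the definition lives only inside the library. Callers must go
    // through an out-of-line accessor they cannot see into, so the compiler cannot
    // inline the access (absent link-time optimization).
    struct lib_mesh;                                              // forward-declared only
    std::size_t lib_mesh_vertex_count(const lib_mesh *mesh);     // out-of-line accessor

    // Exposed version: the type is part of the contract, so callers read the field
    // directly and the compiler sees everything it needs to optimize.
    struct mesh {
        std::size_t vertex_count;
        // ... other fields the library has committed to ...
    };
    inline std::size_t mesh_vertex_count(const mesh *m) { return m->vertex_count; }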

Casey Muratori 00:18:45 And sometimes, because, I mean, in games obviously there’s tight performance budgets, especially back in those days and maybe not so much anymore, you would have people who would even go and take the internal definitions sometimes to use them, and we’re just like, that’s fine, as long as you know that we might change these in the future. So you’re going to have a little more work to do if you want to roll that forward. But in general, I would say the more important point in all of that is that I think the idea that you’re supposed to hide as much as possible from the people using your code is actually not a very good idea. And it’s not a very good idea for three reasons, really; performance is only kind of one of them. The first reason is because if you can solidify on a type and say this is actually what it’s really going to be, then you can expose that type and get good performance benefits for people.

Casey Muratori 00:19:33 It allows the compiler to do a lot of things that it couldn’t have done if it was hidden. So that’s just kind of a good thing. But number two is it also aids in people’s understanding of what the code is actually doing and how. So the reason that I bring this up is because a lot of times people who advocate for clean code, they say something that I don’t think is actually true in practice, which is that hiding implementation details from people makes it easier to understand what the code is doing. I don’t actually think that’s true because most of the time when I need to actually read people’s code, it’s because it is not doing something that I thought it should be doing. Meaning if I’m ever to go looking at what a library is doing in the first place, then that means it didn’t just do what I expected because if it just did what I expected, then I just call the API and I never really need to look at the code or understand what it’s doing. Right?

Giovanni Asproni 00:20:20 Wouldn’t that be either a bug in the library or just a wrong expectation on your side?

Casey Muratori 00:20:26 Both. It could be either, right? That is usually when I am reading the code, I’m usually reading the code either because there’s a bug in the library or because I thought it was doing something that it wasn’t doing and the documentation maybe doesn’t say or whatever. When I go to read it, the more layers of abstraction and the more layers of encapsulation that that library has put between me and what’s going on actually make it take longer for me to find the problem. And this is why I feel that the claim that it improves readability, I wouldn’t claim it to be false, but I would say that it’s dramatically overstated because a lot of times I find it actually makes it harder for me to do the work I need to do.

Giovanni Asproni 00:21:03 Isn’t it actually two different definitions of what readability means, or understandability means? Because my understanding of information hiding is that I expose what should matter to you. Yeah? This is, you know, this is the contract, this is how the module works. And so you basically say, okay, you do what I need, fine; you don’t do what I need, I may look for something else. What you’re saying instead is, well, if it doesn’t do what I expect, I’m curious to see what it’s actually doing, because I may get a better understanding and maybe get it to do what I want. But it seems to be a different approach. I don’t know.

Casey Muratori 00:21:40 Well, you don’t really have the luxury of not using it, right? I mean, that’s only at the very start of a project. If you are two years into something and you use this database or this module or whatever, and your whole program is built around that module and you’re shipping it in production, if you call some function and it’s not working, or the latest update doesn’t work or whatever, you don’t have the luxury to be like, well, we’ll just do something else. That’s not real programming, right? That is not the reality of our lives. Our lives are: oh crap, this thing is broken, somebody needs to go figure out why. Is it something we are doing? Did we change something in our code that maybe broke what the library thought was happening? Or maybe the performance is now a problem for some reason. You know, that sort of thing.

Giovanni Asproni 00:22:24 Okay, let’s move on. Now I have the second question I mentioned before about this information hiding. So in the video, when you say that you broke the information hiding principle basically to gain a performance advantage, looking at the video, from my point of view you actually cheated, because you didn’t really break anything. You looked at the problem from a different angle, from what I can see. So from my point of view, when I was seeing the code you wrote, you were just modeling the problem in a different way to satisfy different criteria, in this case higher performance. Yeah? So, can you comment on that?

Casey Muratori 00:22:57 So I guess I would say I don’t know that I have any particular comment on it. I could talk about, I guess, why I don’t see it that way, but I don’t have a problem with someone who wants to think about it that way. So I wouldn’t necessarily disagree with that opinion. If you think about things in terms of there being certain ways of looking at a problem, and you thought it was a different way of looking at it, that’s fine with me, but I can elaborate more if you’d like.

Giovanni Asproni 00:23:20 Okay, no, that’s fine, because it’s simply the way I saw that. Of course, I invite the listeners to look at your video. The link will be in the links section of the podcast.

Casey Muratori 00:23:31 Here’s the comment I guess I could make on that. So if you look at what happens in typical code bases that follow the polymorphism, like don’t use ifs, don’t use switches, use class hierarchies sort of design pattern, which is actually quite common in the wild. One of the things that I didn’t think was a very accurate comment that people have made about this video is that somehow people don’t do that. Lots of people do that. I assure you, I can give you tons of examples of people doing it; it’s a very common thing. I agree that some regions of the programming sphere don’t do it, but it’s not like nobody does that anymore. It’s very common still. But anyway, when you are presented with a design like that, and I think I said this in the video, when someone goes and needs to optimize this code because they’re like, this is running too slowly and we want to speed it up, if what they see is a switch statement, it’s very easy for you to basically turn a switch statement into what I would call either a vectorized computation or a merged case, meaning something that just does one computation that does all the computations at once.

Casey Muratori 00:24:39 And this is a very common technique in vectorized programming. We do it all the time. When people aren’t used to programming that way, this is a very foreign thing, so I can understand why it’s something that people would be like, I have no idea what this is, it’s very weird. But it’s very standard; it’s just standard vectorization stuff. If you’ve ever done GPU programming, it’s just a normal thing that you do. Switch statements make it very easy to see. You’re like, oh, okay, these cases need to do this kind of multiplies, these cases need to do these kinds of lookups, and this is the most straightforward merge I can do. And this will compute all these different quote-unquote object types in one block of code that doesn’t have any branches in it. Which again is crucial for vectorization.

Casey Muratori 00:25:17 So it’s a standard thing that you would do. If instead what you are presented with as an optimizer is a class hierarchy, this analysis takes way longer. You have to go through and untangle this class hierarchy, figure out what all the different possible leaf functions are, you don’t even know what files those might be in and you probably have to use a tool, et cetera, et cetera. And so what I was trying to show in the video was, if someone now needs to improve the performance of this thing (because, again, this comes from a course on performance-aware programming), it is very simple to improve the performance of a switch statement. It is much more time consuming to do the same transform on a class hierarchy.
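A minimal C++ sketch of that kind of merge, in the spirit of the shape example from the video but not the video’s actual code: a switch over shape types collapsed into one branch-free, table-driven computation.

    // Switch version: easy to read, and easy for an optimizer to merge.
    enum ShapeType { Square, Rectangle, Triangle, Circle };
    struct Shape { ShapeType type; float width; float height; };

    float area_switch(const Shape &s) {
        switch (s.type) {
            case Square:    return s.width * s.width;
            case Rectangle: return s.width * s.height;
            case Triangle:  return 0.5f * s.width * s.height;
            case Circle:    return 3.14159265f * s.width * s.width;  // width holds the radius
        }
        return 0.0f;
    }

    // Merged version: if squares and circles store their side or radius in both
    // width and height, every case becomes coefficient * width * height, so one
    // small table handles all the types with no branches at all.
    float area_merged(const Shape &s) {
        static const float coefficient[4] = { 1.0f, 1.0f, 0.5f, 3.14159265f };
        return coefficient[s.type] * s.width * s.height;
    }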

Giovanni Asproni 00:25:51 Okay, I’ve got another question here instead. This is not related to this particular aspect of the code, but is related to some of the principles you touched on and some of the others you didn’t really touch upon in your video. So there is some evidence that these clean code principles, at least quite a few of them, have some positive effect on developers’ cognition. That means when you read code or write code, somehow it is easier to understand or easier to do stuff with the code, you know, change it, extend it. I put in the links a book that is called The Programmer’s Brain by Felienne Hermans. Now, what is your opinion about those principles, looking at them just from that perspective of understandability and extensibility? So let’s say performance was a non-issue, which is obviously an abstract thing at the moment, but if it was, with those principles, is there anything that you still don’t quite like, or would it be, okay, fine, it’s a matter of tradeoffs? I want to understand your perspective on that.

Casey Muratori 00:26:47 I would say that it really depends on what the comparison is, or what the baseline is, I guess is what I would say. So when I have looked at examples from people who advocate these particular clean code principles, when they do a comparison and they’re like, okay, this is more readable when I’ve pulled it out into this class hierarchy or something like that, usually when I see those, my first thought is: you did not actually try to make the highly readable and still quite performant procedural version that doesn’t use the class hierarchy. So I would say I have two different opinions about this particular thing. If you are comparing the virtual function class hierarchy version to the messiest possible sort of old school version, tons of duplicated code, lots of #ifs, you know, that sort of thing, and sorry, I probably shouldn’t say this, but let’s just say the GNU Make code base as an example, right?

Casey Muratori 00:27:50 Which is really hard to read through, and it’s just got so much stuff through and through it. I agree, I agree that the class hierarchy version looks better. On the other hand, there are easy ways, in my opinion, to make procedural code very readable. You pull out code that’s used multiple times into functions that do what they need to do. You name those functions sensibly. The code is very readable at the end of that process. And in my opinion it’s no different in terms of cognition from the class hierarchy version, but it also performs much faster. So to me it’s like, yeah, I agree that there are code bases that you could take and say, hey, the class version is much more readable. But I also would’ve said, well, there’s a better way to have written that anyway, that has nothing to do with classes. So why are we claiming the advantage comes from the classes, as opposed to just not doing all of these other bad things, you know, that were present?

Giovanni Asproni 00:28:41 Another question related to this, so you seem to take issue especially with the class hierarchies more than anything else. Am I correct?

Casey Muratori 00:28:49 Absolutely. Because that’s what the video is about, right?

Giovanni Asproni 00:28:51 Okay, now what is, in your view, actually good code? You know, clean code is one view; as we said before, maybe clean code has a kind of specific meaning nowadays, so let’s assume it’s the meaning that we find in the Clean Code book, and let’s change the term. Yeah. So clean code is one view of what constitutes good code. What is good code for you, in your specific opinion?

Casey Muratori 00:29:12 If I had to say the general principles that I look for, I would say number one is that the code in general reads roughly like what the program actually does. So I tend to think that good code reads like its series of operations. It’s also one of the reasons I don’t super like class hierarchies: they tend to hide a lot of what’s actually going on in the program, and I don’t tend to like that, right? I tend to like code that reads like what it does, because when I think about code readability, I don’t want to just think I know what the code does. I want to actually know what the code does. So I like code where, if I read it, I am not surprised by what I then find out the code was really doing behind my back, right?

Casey Muratori 00:29:57 So I’d say that’s thing number one. Thing number two is that the code minimizes the degree to which changes I might make in how the code operates require equivalent changes somewhere else that I don’t know about. So for example, if I have a piece of code, and this is a very simple example, but hopefully people can extrapolate to more complex ones, and it’s got a constant in it that’s like, this is the radius of the earth or whatever. If I want to change what I’m considering the radius of the earth, I don’t want to find out later that there’s somewhere else in the code that has to agree with this part of the code, that also has the earth radius encoded somewhere else. And you can imagine lots of different versions of this that are more complicated, that aren’t just a constant: some function that gets called, something about how a database is locked that has to be maintained.

Casey Muratori 00:30:47 I don’t want there to be times when I think I’m changing the only piece of code that does something and find out later there are these other things that the programmer secretly had to keep in agreement, that I had no way of knowing about. Number three would be that the code in general is not doing anything that precludes me from making sensible optimizations to it later. Meaning it has not been, I guess the word factored is probably what most people would use, it has not been factored in such a way that precludes the possibility of future optimization without a rewrite. And I know that’s a very hard one; that one is almost what my course is about. It’s about how to write code that doesn’t prevent a future optimizer from actually making the code run quickly. And so that one would probably be my third one, because that I think is probably the most difficult thing of all to know how to do, because it requires you to understand what people might need to do to optimize it in the future. But that to me is another component of good code, because otherwise you get in these cycles, and I documented this in my Performance Excuses Debunked video, you get in these bad cycles of people having to constantly throw away entire systems and rewrite them because they were written in such a way that precludes optimization. So I think those would probably be my three, roughly, if I had to say.
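As a tiny sketch of that second point, with invented names, showing one shared definition versus the hidden duplicate being warned about:

    // One shared definition: everything that needs to agree on this value uses it.
    constexpr double EARTH_RADIUS_KM = 6371.0;

    double surface_distance_km(double angle_radians) {
        return EARTH_RADIUS_KM * angle_radians;
    }

    // The failure mode described above: another file quietly encodes its own copy,
    // so changing the constant no longer changes the whole program's behavior.
    double orbital_altitude_km(double distance_from_center_km) {
        return distance_from_center_km - 6371.0;   // hidden duplicate that must agree
    }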

Giovanni Asproni 00:32:02 Okay, follow up on this one. So looking at your videos, you are keen on objectively measurable criteria. So what objective criteria can you apply to assess the code goodness according to your preferences?

Casey Muratori 00:32:15 So one that I’m sure will be, again, since my other video was already very controversial, I don’t mind saying another thing which is controversial, which is that something that I actually use actively all the time when I am actually programming is I check my lines of code and I don’t do this because I’m trying to optimize my lines of code down to the smallest actual number because that actually isn’t very useful. I don’t want to be like doing weird tricks, right? And I use lines of code only because I don’t have a convenient tool that measures something better, like number of operations or those sorts of things. But I check my lines of code fairly religiously. And the reason for that is because it’s the best objective measure that I’ve found for that first point that I said, which is that the code reads like what it does.

Casey Muratori 00:33:00 Because I find that a lot of times, if you start adding lots of code that doesn’t really do anything, like infrastructure, like, oh, I’m going to create these class hierarchies or I’m going to create these sort of extraneous things that I’m using to sort of abstract the code or do things like that, I find that they tend to balloon the LOC count of things. A good example of this would be like the LLVM code base or something, where it takes 10,000 lines of code to do four lines of code’s worth of work or something, right? And I try to keep that close, and to learn sort of the correspondence, if you will, of roughly how much LOC in a particular language I think corresponds to general features, to keep myself from going down paths where it’s like, wait a minute, am I just massively overcomplicating what this thing actually has to do?

Casey Muratori 00:33:46 So I feel like that’s a good objective metric. Kind of like features per line of code is something I actually want to keep a good handle on. And there’s two reasons for this. One is because I find that it’s easier to read the code, meaning it’s easier if there’s less code: I can read it, I can understand it, I can keep it in my head. The second one is because it’s easier to modify and optimize the code, oftentimes, if there’s just not that much of it. The more code I have, the harder my job is if I have to go change something, because, oh my god, there’s so much of it; obviously that means that if I want to change this feature, I’m going to be touching 10,000 lines of code instead of 100 lines of code. So I try to keep an eye on that.

Casey Muratori 00:34:22 And then for the optimization one, in general, I’m always measuring the performance of the code. I just keep that as a constant thing. So whenever I’m running a program, my programs always have performance metrics on the side just by default. And I always watch those performance metrics because I always want to make sure they are reasonable. If your current performance metrics are pretty reasonable, then you’re usually in good shape for eventually someone optimizing something. Whereas if your current performance metrics are awful, it’s probably a sign that you’re doing some things like those big, long serial dependency chains; you’ve added this really inefficient way of doing something that is going to be very hard for someone to optimize, and so on. So when I look at my performance metrics, I always want to see, okay, if there’s a performance hotspot showing up or some slowness showing up, it’d better be in a routine that I know is purposely some n-cubed thing that I know I can just replace, or something like that. If it’s showing up diffusely throughout the program in general, that’s a big no for me and I would fix it. So I do have some objective things. However, I don’t know that there’s objective ways of measuring really things like readability, right? It’s very hard to measure those things.
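A minimal sketch of that kind of always-on metric (not Casey’s actual tooling, just one simple way to keep a number visible while a program runs):

    #include <chrono>
    #include <cstdio>

    // Wrap a unit of work and print how long it took, every time it runs, so a
    // performance regression shows up immediately instead of at the end of the project.
    template <typename Work>
    void timed(const char *label, Work &&work) {
        auto start = std::chrono::steady_clock::now();
        work();
        auto end = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(end - start).count();
        std::printf("%-24s %8.3f ms\n", label, ms);
    }

    // Usage (load_assets is a hypothetical function):
    //   timed("load assets", [&] { load_assets(); });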

Giovanni Asproni 00:35:29 Well, I mean, you can measure that with experiments, as they’re…

Casey Muratori 00:35:32 Yes, but they’re hard to conduct routinely.

Giovanni Asproni 00:35:34 Okay, and then a question related to what you were saying before. A friend of mine, a few days ago, was actually mentioning this interview, and we were chatting about potential questions, and he said: after all these years, I think in terms of sending messages to delegates; I can open the box and build functions, but my brain thinks methods. So this to me seems to be about familiarity, somehow, the way we are used to doing stuff. So what do you think about the link between the perception we have about good code and our familiarity with specific technologies? Is there a link there, in your view?

Casey Muratori 00:36:07 Well, I mean, I think there has to be. I think it’s undeniable that there is some link there. And the reason that I say that is just, again, to go back to the language analogy: I mean, there’s just no question that I’m never going to speak another language as well as I speak English. It’s just not going to happen, right? It’s too late at this point. Now, if when I had been raised I was taught Italian and English at the same time or something, then maybe I could have an honest opinion about those two that has nothing to do with my familiarity with them, right? But in general, the languages that I know, or the paradigms that I know, the idioms that I know, I’m obviously going to be more effective in than the ones that I don’t know.

Casey Muratori 00:36:49 I have a lot of experience with object-oriented programming. One of the reasons I feel very comfortable critiquing it is because I’ve written large systems with it; the first version of our character animation system that I shipped was object-oriented. One of the big reasons that I don’t like it is because when I actually went through and analyzed all the problems, they all basically had to do with that. And when we removed it for the second version, it was a massively more successful product with much higher performance that was trivial to maintain, whereas before all of those things were the opposite, right? And so it comes from my experience with these things, and being fairly confident that there are just much better ways to do it inside of these particular paradigms. But I think once you step outside of those paradigms, we just don’t know. Like, I have no ability to comment on functional programming, right? I just don’t, because I don’t have enough experience with it to say that this or that would be the right way to do it.

Giovanni Asproni 00:37:45 Okay. Now let’s move on and talk a bit more about performance itself. Yeah? I suspect I know the answer to the first question I have in mind, but I’ll ask it anyway; I think you touched upon this before. Sometimes, and I see this very often, many say computers are quite powerful nowadays, so why should we be worrying about performance? You know, if our code is readable, we use these techniques that make it much more readable, we use lots of inheritance, and for us this makes our code better in our particular context, and the result is acceptable to end users. Why should we care about performance if all these things are true?

Casey Muratori 00:38:32 So, if the actual statement were true, meaning that the result is fine for the end users, then you actually don’t have a problem, and I certainly don’t have a problem with it. However, it’s never the case. So in that video, Performance Excuses Debunked, that is one of the ones I tackle. And I literally show that Google, Microsoft, Facebook have presented this consistently. They are like, we have research showing that no matter how fast our software gets, if we make it faster, users are more engaged with our products, they do more operations, et cetera, et cetera. Right?

Giovanni Asproni 00:39:11 That’s fine. So when I say acceptable, it’s kind of, the users are willing to live with that. You know, it’s kind of, okay, it’s fine for us.

Casey Muratori 00:39:19 Then you’re missing a huge business opportunity, is what I would say. Because the research is 100% conclusive on this: the more responsive your software is and the faster it performs, the more engaged users are, the more likely they are to spend more money using it, buying more things for your software, getting more contracts, adding more seats. Or, in the case of what is today’s software model, there’s oftentimes a lot of upsell to it, right? I mean, if you look at consumer-facing software as opposed to business software, you are actually trying to get users to look at ads or go to a particular thing or buy a premium version, all of those things. No one, as far as I know, has suggested that it is not true that higher performance equals more money. It’s just straight across the board. Even Adobe says this, right? And they are shipping basically monopoly apps into a creative space. They’re just like, performance equals money, you want it. So I would say that the day has not yet arrived when performance is so blazingly fast, no matter what code you type in, that we’ve maxed out. That would be a great day, because then we could stop having podcasts like this, or me posting videos, because all software would be instant. But we’re nowhere close right now. We’re nowhere close to that.

Giovanni Asproni 00:40:39 Now, this is more about the network, I guess, more than I/O. Many modern enterprise systems, let’s call them like that, you know, with big companies offering services maybe to other companies or to end users, many of them nowadays are basically a mesh of services talking to each other via the network.

Casey Muratori 00:41:01 You’re talking about like microservices kind of sorts of things.

Giovanni Asproni 00:41:03 Microservices kind of, or distributed monolith more frequently, this kind of stuff. Have you got any suggestions on how to approach designing for performance in those situations?

Casey Muratori 00:41:16 Well, the question’s a little bit potentially broad, because performance tends to be something that’s somewhat specific. So it really depends on which one of those you’re talking about. So if it’s an actual microservices model, we already have examples of people switching away from microservices to a monolith and getting faster. The reason for this is obvious: hey, the network is slower than memory. So if you combine things together, they can process more quickly. This is not surprising, right? The way I like to say this is, microservices are not a software engineering solution for the computer. They’re a software engineering solution for Conway’s Law. They’re something that is there to solve a problem that a business has, of the form: we have one team that’s going to do this microservice, and we have one team that’s going to do this microservice, and having a cut between them is our way of solving an org chart problem, which is Conway’s Law.

Casey Muratori 00:42:18 So microservices, in my opinion, never were a thing you would do because you wanted to write good software. They’re a thing you do because of your org chart; it makes that a more logical thing for you to ship, because it naturally breaks down the program in the same way your org chart is broken down. Again, this is just Conway’s Law. I have another video on Conway’s Law, in fact. If a listener’s never heard of it, they can go watch that one, and I explain it there, but it’s one of the most true laws in software architecture. So it’s not surprising that you can merge two services together and get optimization. It’s exactly like my Clean Code, Horrible Performance video: when you consider things as having to be artificially separate, it gets slower, almost always. The only time it doesn’t get slower is if you know that the amount of processing work that they will share is zero; there’s no overlap, right? But if there was ever a time when these two things could be processed more efficiently by considering them together, you are strictly making the code slower by separating them. Microservices are literally that, just in the network structure as opposed to in your data structures in a program. You had a separate one, which was the distributed monolith.

Giovanni Asproni 00:43:36 Well, I’m saying that it’s the one that you find more often. Yeah? Many teams want to go to microservices, but they end up with a mesh where the services actually depend very strongly on each other. Yeah. To the point that…

Casey Muratori 00:43:50 So yeah, I would say I don’t know that I have any specific recommendation other than I think it’s not a great way to solve an engineering problem, but I understand why it happens, right? Conway’s Law is actually a real problem. And I think the way I put it in the video that I did on Conway’s Law was that, as long as we understand that what we’re doing is making the code worse, but it’s the only way that our org can function, if you keep that mindset, it’s not necessarily a bad thing. Meaning you can make a decision to separate something and know that the code is going to be not as good as a result, but that was the only way you could figure out how to structure your organization to get it done, then that may be true, right? We’re not miracle workers, right? We can’t just suddenly say that we’re going to engineer all this stuff perfectly, because that simply might be too difficult. And that’s in the original Conway’s Law paper.

Giovanni Asproni 00:44:46 What about the impact of the choice of programming languages on system design and performance? Do you think there are aspects due to the languages themselves that you need to consider?

Casey Muratori 00:44:57 Absolutely, yes.

Giovanni Asproni 00:44:58 Would you suggest that everybody use only C++ and avoid Java? Or, yeah, what do you see as the impact? I can maybe also give you an example.

Casey Muratori 00:45:11 I understand what you’re asking. I can give a sort of broad answer to that, sure. So I think this again really just comes down to what I hope people would do more of, I guess is what I’d say, that they haven’t been doing. It’s not that I want everyone to start programming in C++. In fact, I don’t really like C++ very much, to be honest. I program in a very light C++; it’s more C-like, right? Because I’m just like, ah, I don’t know about this. And there are new languages now, right? Like some people really like Rust, for example. There are new languages coming out that people can consider and whatever. So I don’t want to suggest that C++ is somehow a great language; I really don’t think it is.

Casey Muratori 00:45:51 But either way, I would just like people to understand that the performance implications of a language choice can be very large. For example, if you were to program in Python and run it uncompiled, like you’re literally just running Python as Python and you don’t go get Mojo or something or try to use Cython or something like this, it’s literally 100x slower than the same code written in C. Not 2x slower, not 10x slower; it’s like a hundred times slower to go through an interpreter for Python, because it just doesn’t JIT, right? And that doesn’t mean you can’t use Python. What it means is that if you’re going to actually use interpreted Python in a project, you should know that fact. So, what I object to is people saying they don’t have to know that or care about that, because they do.

Casey Muratori 00:46:48 If they know that, then they can make an informed decision. They can be like, oh well realistically how much code in this project really is going to be written in Python? Is it mostly going to always call out to other libraries that are not written in Python? Am I sure that there’s not going to be feature creep and we end up with way more Python than we think we’re going to have? Right? Can you make these kinds of guarantees, and maybe they can. And then, no harm done. They knew it was a 100X slower. They actually did the work to verify that that was not going to be a problem for this project. And then they made the project and it was fine. Right? The thing I object to is the not knowing and claiming that it will be fine. It won’t be; we have so many examples of this.

Casey Muratori 00:47:34 Like I said, that performance excuses debunked video, I go through example after example, after example of exactly that choice. PHP, it will be fine, just use it. No it won’t, right? And then you have to go through all of these. In fact, I guess you might say one of the things that is at least a bright spot here is that people are learning this. They have now optimized PHP a lot, right? Python is now getting things like Mojo and things like Cython to try and make it not as big a deal, right? So we are learning, but what I would like people to do is to be aware of these things before choosing a language. Because you don’t want to choose a language and go in blind. You don’t want to go, I’m sure it will be fine if you haven’t actually had the performance experience and knowledge to know that it will be because you are basically setting yourself up for one of these big rewrites where someone has to come in and rewrite all the PHP code or, like Facebook had to do, make a special compiler just for them to speed up their PHP code because it was too slow, right?

Casey Muratori 00:48:40 Those are huge engineering undertakings that you’re basically foisting on someone else by making that bad decision upfront.

Giovanni Asproni 00:48:46 Okay. And I guess similar considerations may be made about other technology choices like frameworks, libraries, the kind of hardware, anything like that; I guess we can make the same kind of considerations. So the programming language is one aspect, but then if you use some particular libraries, as you were saying with Python, if you use libraries that are actually native C libraries, we may be fine, depending on what we need to do. If we use, well, even pure Python ones, we may be fine, but on some occasions they may be a bit too slow, because a hundred times slower is quite a bit, depending on what you need to achieve.

Casey Muratori 00:49:26 Exactly. It’s really just about, this is literally why I made the course. I feel like the thing that’s missing these days is not actually optimization work, because that’s a separate thing. What it is is just awareness. It’s awareness of how much it costs to do these different things. Because if you know how much these things cost and can actually account for them, then you may make a decision that is quote-unquote bad for performance. Meaning it will yield worse performance. Yes. But if you know the bounds of that, you can really decide to do it because you are actually making a real trade-off. What you cannot do is not know and claim you’re making a trade-off. If you don’t know how bad the performance will be from your decisions, you aren’t making a trade-off. What you’re doing is just ignoring a problem, right? So when people say like, oh, I’m going to use this particular coding practice and I’m not going to worry about the performance of it because it’s a trade-off. Well, if the person really knows exactly how much performance they’re giving up there, then yes, that was a trade-off and it may very well be a valid one. But on the other hand, if they don’t actually know and they don’t measure these things and aren’t familiar with how much performance they’re giving up, that wasn’t a trade-off that was just a random choice. Right?

Giovanni Asproni 00:50:50 Yeah. Okay. Now that you mention tradeoffs, I actually have questions about trade-offs, since you introduced the subject.

Casey Muratori 00:50:57 Perfect segue.

Giovanni Asproni 00:50:58 Perfect.

Casey Muratori 00:50:58 Happy to help.

Giovanni Asproni 00:51:00 So, actually, we were just mentioning techniques and things just now, when you said people don’t really know the trade-off. So what are the common misconceptions or pitfalls related to good code that developers should be aware of, specifically regarding performance?

Casey Muratori 00:51:16 I can give you the number one. There is a glaring one which I would like to make a video about, and I just haven't quite gotten around to it yet. But there is literally one which is the biggest, most damaging one ever. So at one point Donald Knuth, the Stanford professor, of course famously wrote basically the reference work. I see it on your bookshelf; it's right behind you, in fact.

Giovanni Asproni 00:51:43 Yes. Yeah, yeah, yeah. I've got the full set.

Casey Muratori 00:51:46 Yes. He at one point was quoting Sir Tony Hoare when he said, premature optimization is the root of all evil. Right. This is a very common phrase. I’m assuming you’ve heard it.

Giovanni Asproni 00:51:59 Well, actually, I can tell you that Donald Knuth wrote that in a paper in 1974.

Casey Muratori 00:52:05 Yes. And it was a quote; he was quoting somebody else.

Giovanni Asproni 00:52:08 It's actually the paper “Structured Programming with Go To Statements”. I can give you the entire quote, because I had a question about that quote. So you are preempting me. Off you go; oh, continue.

Casey Muratori 00:52:17 I apologize. I didn’t mean to steal your thunder.

Giovanni Asproni 00:52:18 You don't have to apologize. It's fine. Go on.

Casey Muratori 00:52:22 So if you read the context of that quote and what they were actually talking about, both of them, it's very clear what they meant. And by the way, I happen to agree with it; it would be kind of foolish not to agree with them. They are two of the best, most famous programmers who ever lived. What they meant was that when you write code, there are optimizations you can make, like assembly-language-level optimizations, that will make the code quite brittle and also quite difficult to read. And I've had to write code like this in the past once or twice, when you have some routine that is absolutely critical to the performance and you know you can squeeze an extra 20 or 30% out of it if you go in and hand-optimize the thing really carefully. Right?

Casey Muratori 00:53:18 But it's going to be a mess after that, because you've done all this: you maybe had to reorganize some data, you're using all kinds of stuff, nowadays you're using SIMD, and you've got all these kinds of things going on in there that make it very hard for anyone to read, and you have to comment it so you remember what the heck was going on in there. Right? And what they were talking about in that quote was, they were basically saying: look, don't do that to code before you know that that code actually runs often and actually accounts for performance that matters to you. Right? Don't just do it by default. That's what they meant. I'm paraphrasing a bit, but that's the context they were talking about it in.
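
[Illustrative aside, not from the episode: a minimal C++ sketch of the kind of hand-optimization being described. The scalar loop is the plain, readable version; the SSE-intrinsics version can be meaningfully faster on suitable hardware, but it is harder to read and maintain, tied to x86, and it even changes the order of the floating-point additions. All names here are purely illustrative.]

#include <immintrin.h>
#include <cstddef>

// Straightforward, readable version: sum an array of floats.
float SumScalar(const float *Values, size_t Count)
{
    float Result = 0.0f;
    for (size_t Index = 0; Index < Count; ++Index)
    {
        Result += Values[Index];
    }
    return Result;
}

// Hand-optimized SSE version: processes four floats per iteration. Faster on
// suitable hardware, but harder to read, x86-specific, and the additions now
// happen in a different order, which can change the rounded result slightly.
float SumSSE(const float *Values, size_t Count)
{
    __m128 Accum = _mm_setzero_ps();
    size_t Index = 0;
    for (; Index + 4 <= Count; Index += 4)
    {
        Accum = _mm_add_ps(Accum, _mm_loadu_ps(Values + Index));
    }
    float Lanes[4];
    _mm_storeu_ps(Lanes, Accum);
    float Result = Lanes[0] + Lanes[1] + Lanes[2] + Lanes[3];
    for (; Index < Count; ++Index)   // remaining 0-3 elements
    {
        Result += Values[Index];
    }
    return Result;
}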

Giovanni Asproni 00:53:59 Yeah, I can tell you that in the full quote from Knuth, basically he said that we should forget about small efficiencies, say about 97% of the time, and that premature optimization is the root of all evil, yet we should not pass up our opportunities in that critical 3%.

Casey Muratori 00:54:18 Yes. And I think this is literally what he was talking about, right? He was saying that in some small percentage of the code, hand-optimizing is going to matter and you should do it, but don't just hand-optimize some routine because you're into hand optimization, because you're making life worse for everybody else: now they have to deal with this really intricate code you've written that is harder to maintain, harder to read. All the things that clean code people claim about the code they're replacing are actually true about that kind of code. It's very hard to work with. And I say that as someone who has had to write it; I really do think that you don't want to do that to code unless you know that you need to. Right? However, the way this quote is now applied is that you basically don't need to think about optimization till the end of your program.

Casey Muratori 00:55:06 What people think they meant by that is that you don't need to think about performance when you write your code. You just write whatever you want, and then at some point in the process, when you then go measure, there will be a few hotspots, and you do some optimization on those, or someone else does, and you're done. They did not mean that. Right? That is totally not what they meant. And in fact, Rico Mariani, I believe, posted a thing where he actually talked to Tony Hoare about this, and he was very upset that people were taking it that way. I mean, Donald Knuth has all of his stuff in assembly in his books. Do you really think this person is someone who would say don't think about optimization or performance when you're designing your program? It's like, of course not.

Casey Muratori 00:55:57 They would be horrified, I think, to know that people were interpreting that quote to mean that you don't have to think about performance when you architect your program. I would love to hear from them directly; they don't really speak in public much anymore, but I'm pretty sure they would say no, of course when you architect your program you should be thinking about performance. If you don't think about performance when you architect your program, you're going to get into a situation where no amount of hotspot optimization will ever improve its performance. And so to me that is the number one worst folly that we have nowadays: people thinking that they don't have to architect for performance upfront. You absolutely do. What you don't have to do is hand-optimize the code. That is something that you only need to do in those very specific hotspot cases. But architecting for performance needs to be done from day one.

Giovanni Asproni 00:56:49 So let's see if I understand what you mean here. Architecting for performance from day one, as I see it, is basically using the appropriate data structures for the problem you are solving, for example. I've come across code in the past where people should have used sets but instead used vectors in C++, causing a horrible slowdown of the system, and things like this. So, if I understand you correctly, architecting for performance is not about shaving an instruction here and there; it's more: let's do the right thing, the kind of obvious, readable thing. And then if we need to do something specific that requires some hand tuning and possibly introduces some unreadable code, there has to be a very strong reason for that. This is the way I'm interpreting it.
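
[Illustrative aside, not from the episode: a minimal C++ sketch of the data-structure point raised here. Both functions answer the same membership question and read about the same at the call site, but the vector version is a linear scan while the set version is a logarithmic lookup, which matters once the collection is large and the check sits in a hot path.]

#include <algorithm>
#include <set>
#include <string>
#include <vector>

// Linear scan: O(n) per membership test; fine for a handful of elements,
// a problem if this runs in a hot loop over a large collection.
bool ContainsName(const std::vector<std::string> &Names, const std::string &Name)
{
    return std::find(Names.begin(), Names.end(), Name) != Names.end();
}

// Ordered set: O(log n) per membership test; the call sites look the same.
bool ContainsName(const std::set<std::string> &Names, const std::string &Name)
{
    return Names.count(Name) != 0;
}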

Casey Muratori 00:57:34 Yeah, I would say yes, but I would also add something to that which is that you want your architecture to naturally force things into segments that you know will need to be optimized. Meaning, when you are architecting a program, it’s critical for you to have an awareness of all of the different things that probably will cause you performance problems. So, you need to kind of be aware like where am I processing large numbers of things? Where am I doing things that could have serial dependency chains, like I do one thing and wait for the result kind of stuff. Those are almost the more important things because if someone used vector instead of set for example, I agree that’s bad because now I have to go through and change all of these places that used it and it’s a pain.

Casey Muratori 00:58:18 You don't really want to have to do that. So I agree, it would be great if we didn't have that problem, but to me that's almost not as bad a problem as: we designed this whole thing so that, I don't know, I have this object that I just make a function call on and it gives me back this thing, and unbeknownst to me that does a network request every time, or something. Right? Because now all of my code is using these objects and it's all written serially, and now someone comes and has to optimize it, but the only way to optimize it is to aggregate those requests, and that's basically a complete rewrite, because the code is serially waiting on them. So there's no way for that object to even batch them up, because the callers need the result right away and you don't know how to break that. Right?
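
[Illustrative aside, not from the episode: a minimal C++ sketch of the serial dependency chain Casey describes. FetchPrice and FetchPrices are hypothetical stand-ins for a remote call, stubbed here so the sketch is self-contained; in real code each FetchPrice would be a network round-trip, so the serial loop pays that latency once per item, while the batched version pays it once in total, and getting there usually means restructuring the callers.]

#include <map>
#include <string>
#include <vector>

// Hypothetical remote calls, stubbed so the sketch compiles and runs.
// Imagine FetchPrice is one network round-trip, and FetchPrices is one
// round-trip for the whole list.
double FetchPrice(const std::string &Id)
{
    return static_cast<double>(Id.size());   // stand-in for a server response
}

std::map<std::string, double> FetchPrices(const std::vector<std::string> &Ids)
{
    std::map<std::string, double> Prices;
    for (const std::string &Id : Ids)
    {
        Prices[Id] = static_cast<double>(Id.size());
    }
    return Prices;
}

// Serial version: each iteration waits on its round-trip before the next
// one can start, so total latency grows linearly with the item count.
double TotalSerial(const std::vector<std::string> &Ids)
{
    double Total = 0.0;
    for (const std::string &Id : Ids)
    {
        Total += FetchPrice(Id);   // hidden round-trip per call
    }
    return Total;
}

// Batched version: one aggregated request regardless of item count, but the
// calling code had to change shape to make the aggregation possible.
double TotalBatched(const std::vector<std::string> &Ids)
{
    double Total = 0.0;
    for (const auto &Entry : FetchPrices(Ids))
    {
        Total += Entry.second;
    }
    return Total;
}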

Giovanni Asproni 00:58:59 The problem is that most people seem to ignore this aspect. They seem to think that all these things come for free: you do that, and there is no price to pay. And this is the biggest problem I think you have with the approach. Am I correct?

Casey Muratori 00:59:13 It's basically the only problem I'm having with the approach, because in general, like I said, I don't think we have objective measures of code quality beyond performance. We can measure the performance directly, obviously; otherwise it's very hard to measure these qualities objectively. So if someone says that this is a more readable style, or that they can write more quickly in this style, they may well be right. I'm not in a position to tell them that they're wrong about that. Even if they are wrong about that, I can't prove it. So really what I'm talking about is just the fact that there are objective things we can measure: the runtime costs of these things and the cost to the structure of the program in terms of performance. You need to know those, and if you do know those and still think it's a good choice to use whatever other structure it is that has a negative in that objective metric, if you are aware of that and make that choice with full knowledge, then that is truly a trade-off and you can call it a trade-off, and that's valid.

Casey Muratori 01:00:08 If you don't know those things, then really what you're doing is making an excuse. You're not making a trade-off, because you don't know what you've traded; you're just making an excuse not to know this other thing, and that's very bad and can have very bad results.

Giovanni Asproni 01:00:20 Okay. And so I had another question to ask, but I think I probably know the answer. This was more about the trade-offs between developers' time, computing costs, and these kinds of things. Nowadays, most of the time, developers' time is actually much more expensive than computing costs. So I guess that favoring practices that help cognition seems to be often a good trade-off for many companies.

Casey Muratori 01:00:45 I guess I would disagree. There are two ways I would disagree with that, though: I have one meta disagreement and one practical disagreement. The meta disagreement is that a lot of people say that they're doing things that improve cognition, but I have not seen the evidence that it improves cognition. So I might dispute the premise of many people's claims that that is what they're doing.

Giovanni Asproni 01:01:10 Okay, so, yeah, I was basing this also on, you know, that research I mentioned before, which seems to show that at least many clean code practices actually help cognition in some way. So I was basing it on that.

Casey Muratori 01:01:20 Compared to what though, right? That’s the other problem. Compared to what? Right?

Giovanni Asproni 01:01:24 Well, compared to not using them and doing something else, I would imagine. So people apparently...

Casey Muratori 01:01:28 But what was that something else, right? In other words, that something else may not have been the best example of what you can do with code that is more performant. Right?

Giovanni Asproni 01:01:37 That is a fair objection. Yeah.

Casey Muratori 01:01:40 Yeah. But anyway, we can put that aside. Let's assume that it actually is helping developer cognition. The problem that I have with that is that it doesn't actually work in the real world. This is what I tried to show in that performance excuses debunked video: people claim that they're saving development time, but then you look at their history and they've had to throw out their entire system, not once, not twice, but oftentimes three or four times in the space of 10 years, for performance. So they claim that they were saving developers' time, but they're spending years rewriting entire systems. So I dispute any claim that says we are saving developers' time, because the only way you're saving developers' time is if you then don't have to do these huge performance rewrites. But you are doing them. So I don't think the actual facts on the ground support a claim of saving time, because maybe you saved some time the first time you wrote it, but then you wrote it three more times.

Casey Muratori 01:02:40 So I don't know, man; I don't think that was a savings. If it had taken us twice as long to write it the first time, but we actually didn't have to rewrite it several times after that, that would have been a net savings. Now, what I will say is, let's say that being sloppy gets you to market fast the first time, right? This is a thing that might happen. We need to get to market quickly, we've got to ship this thing in six months, we're just not that good at architecting for performance, maybe we don't even have anyone who knows how to do performance architecture in the first place. That's a real scenario that can happen. My hope is, like I said, one of the reasons I made the course is that I would like to educate people about this stuff so that that's not a situation people find themselves in, so that they can just naturally make systems that are better for performance and they don't have to make that choice.

Casey Muratori 01:03:28 But let's say you do have to make that choice; that's valid. If you just have to do something to get to market now, and you kind of know it's not ideal, but it's like, we just have to do it, that is a valid position, and it is borne out by the facts on the ground. There are plenty of companies that had to do that, and then pretty much what you'll see is, two or three years after that, they announce they had to rewrite everything for performance. But that could be a strategy, right? That could be a valid strategy, because when we look at the history of successful software companies, we see this pattern a lot, of having to rewrite. What I would say to that is: to me, though, that's still an opportunity. It says to me that we could improve software engineering by figuring out how to make the default way that people program more performant, by education, by making languages more efficient.

Casey Muratori 01:04:15 By default, you're not giving people a really slow PHP at the outset; you're making the decisions so that PHP, when it's released, is faster, because in general we have more of a culture of performance. So they're not ever going to have languages sitting in front of them that are slow by design, or something like that. So to me, even though I do think it's valid to take the approach of a slow, poorly performing version one and try to follow it up with a version two that's fast, and that happens a lot in practice, I still think of that as an opportunity, an opportunity for us to fix something about the way we're developing software, because I don't think it has to be that way. I just think that our current programming practices and tools encourage that, and we can shift that. We can make it better than that.

Giovanni Asproni 01:04:58 Okay. And now we've got, actually, the last question, which is really related to this: what is a good approach, in your view, to include performance considerations when designing a system? So a team is starting on a greenfield system; what should they do so they don't forget about performance?

Casey Muratori 01:05:13 It's pretty tough. If I had a short, one-sentence answer to that, I wouldn't have made the course. I think that the only real answer is that you have to educate yourself about how modern performance actually works. You need to know something about how CPUs work and what they can and can't do quickly. You need to know something about how compilers work and what they can and can't transform into efficient code. And you need to know something about those serial dependency chains, so that you know you can't just hide network requests and wait for return values, because it simply won't ever be fast no matter what someone then tries to do to it; you have to do these things in parallel. And so, understanding this, what I would say is that I'm pretty sure the only answer to this is not "here's an approach."

Casey Muratori 01:05:57 It's rather that programmers have to include performance education as a core thing that they do. When we consider someone qualified to do computer science, whether they come out of a degree program or they're working in a company, it should just be considered a thing that they need to know: how to think through the performance issues that they might face. And it's not that hard. Compared to learning all of some modern language like C++, with its thousands of features and all that stuff, it's not that hard, because CPUs actually have to be kind of simple. They're way more complicated than they were, but they're nowhere near as complicated as a modern language, right? So it's not that hard to learn: okay, here's how these I/O subsystems work, here's how the CPU works, here's how the compiler works.

Casey Muratori 01:06:47 I basically get it, and now when I program, I can keep in the back of my head a vague understanding of what's going to have to happen as a result of my decisions. And then I think people would just naturally move away from big serial dependency chains on I/O and lots and lots of waste, like, you know, big interpreters in the way, or language features that can't be optimized. As an industry we'd start to move away from that, and then it would get a lot easier for everybody, because if we're all operating with that knowledge, you're not going to get slow frameworks anymore, right? The frameworks are going to be fast because the people who made them were thinking it through. So I think we just kind of need a cultural shift. I don't think there's a "here's an approach."

Casey Muratori 01:07:34 It’s like, no, you kind of have to learn it. And once you learn it, the approach kind of becomes self-evident because you just kind of know like, oh, all right, if I do this, I’m obviously creating a huge problem for my CPU, so I’m not going to do that. Right? And I don’t know that there’s another way around it because it’s easy to take any piece of code and make it very slow just by making a few bad decisions and without understanding why those things are happening at the CPU level, it’s pretty hard for you to guess what they’ll be. But once you know what the CPU has to do, it’s actually quite easy.
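
[Illustrative aside, not from the episode: a minimal C++ sketch of how one small decision can make identical work much slower at the CPU level. Both functions add up the same grid; the first walks memory in storage order, so the cache and prefetcher help, while the second strides across rows, so on a large grid most accesses miss the cache and the loop can run many times slower.]

#include <cstddef>
#include <vector>

// Row-major storage walked row by row: consecutive accesses touch
// consecutive memory, which the cache and hardware prefetcher handle well.
double SumInStorageOrder(const std::vector<double> &Grid, size_t Rows, size_t Cols)
{
    double Total = 0.0;
    for (size_t Row = 0; Row < Rows; ++Row)
    {
        for (size_t Col = 0; Col < Cols; ++Col)
        {
            Total += Grid[Row * Cols + Col];
        }
    }
    return Total;
}

// Same data walked column first: each access jumps Cols elements ahead, so
// for large grids almost every access is a cache miss and the loop is far
// slower, even though the arithmetic is identical.
double SumAgainstStorageOrder(const std::vector<double> &Grid, size_t Rows, size_t Cols)
{
    double Total = 0.0;
    for (size_t Col = 0; Col < Cols; ++Col)
    {
        for (size_t Row = 0; Row < Rows; ++Row)
        {
            Total += Grid[Row * Cols + Col];
        }
    }
    return Total;
}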

Giovanni Asproni 01:08:04 Okay. Thank you. So, well, I think we've given our listeners quite a bit of food for thought. It's been quite an interesting conversation for me, but I have a very quick question, the last one, very short answer on this one. If there was one thing you'd like a software engineer who is listening to this podcast to remember from this show, what would that be?

Casey Muratori 01:08:31 I think it would be that performance really matters in practice and we have copious data that says that that’s true. So I would encourage everyone to actually learn about performance because the idea that you can ignore it, which is kind of very culturally common nowadays, is simply not borne out by what we actually see happening in business. I mean, so that’s what I would say. Please look at the actual evidence and see how much performance really matters to lots of companies because it does matter and I think that it would be better if engineers in general took the time to really understand it.

Giovanni Asproni 01:09:13 Okay. Thanks, Casey. So where can people find out more? I think they can follow you on Twitter; you've got a Twitter handle. But also, is there any other way to get in touch with you?

Casey Muratori 01:09:25 The course and the stuff that I mentioned, and just my public posts, are on computerenhance.com. You can just type in computerenhance.com and it goes there, and that's where I post anything that's of any value.

Giovanni Asproni 01:09:38 We'll put everything in the links section of the podcast.

Casey Muratori 01:09:41 Then that’s it. That’s everything.

Giovanni Asproni 01:09:43 Okay, thanks a lot Casey, for coming to the show. It’s been a real pleasure.

Casey Muratori 01:09:48 It’s been my pleasure, thank you for having me.

Giovanni Asproni 01:09:49 Thank you very much. And this is Giovanni Asproni for Software Engineering Radio. Thank you for listening.

[End of Audio]
