Jason C. McDonald, author of the book Dead Simple Python, speaks with host Samuel Taggart about leveraging quantified tasks to improve estimation, particularly across projects. They discuss the origin of the concept and its relationship with story points, and Jason offers examples to show how quantified tasks can capture nuances in software tasks that are often lost with story points. He also points to the ability to compare them across projects as a major advantage of quantified tasks. Among other topics, they also consider how to use quantified tasks to analyze the stability of a codebase. Brought to you by IEEE Computer Society and IEEE Software magazine.
Transcript
Transcript brought to you by IEEE Software magazine and IEEE Computer Society. This transcript was automatically generated. To suggest improvements in the text, please contact [email protected] and include the episode number.
Sam Taggart 00:00:19 For Software Engineering Radio, this is Sam Taggart and I'm here today with my guest Jason C. McDonald. Jason is a software development manager, speaker, and the author of Dead Simple Python, among some other books. He's the founder of MousePaw Media, an open-source software organization where he trains software development interns. So Jason and I are here today to talk about something he's developed called Quantified Tasks, which is a replacement or alternative for story points. So Jason, let's start with talking about story points: what they are, where they came from, what people use them for.
Jason C. McDonald 00:00:53 Sure. Yeah, happy to be here, Sam. So story points come out of the Agile methodology, specifically Scrum. And the idea was to be able to estimate how long it was going to take to accomplish a particular task or a release. As soon as I say how long, of course, all the Agile purists are screaming, no, it's not about time. And that is the goal: it's not supposed to be about time. But in practice, when you implement Scrum, when you implement story point estimation in an organization, that is what it becomes about, because managers want to know how long is it going to take us to ship this code? So story points were devised as a way of measuring how much effort it was going to take to accomplish a certain goal, a certain task. But because we don't have something like inches or pounds or hours, some sort of handy metric that we can use to measure the size of a task, they instead decided to measure it, kind of, in the imperial way.
Jason C. McDonald 00:01:57 If you think back before we had the standard metrics, everything was measured relative to something else, usually the king's foot, throughout Europe, ergo the imperial measurement of the foot. So story points are a way of measuring tasks relative to other tasks. You come up with some fairly arbitrary scheme of numbers. You have one task, you give it a number, and it almost doesn't matter what number you give that first task, and then you measure everything else in relation to that task. Is it bigger or is it smaller? Which is an interesting idea, but like with the foot in imperial times, it lacks standardization, it lacks repeatability, it lacks objectivity. As soon as you change kings, or in this case teams, your measurements are suddenly very different.
Sam Taggart 00:02:49 Yeah, well I remember reading the book The Agile Samurai, by Jonathan Rasmusson, I think that's the right author's name. And it had something in there about how, from psychology, we're really good at looking at two things and telling which one's bigger, right? So that relative measuring we're pretty good at. But the absolute measuring of saying, okay, this one's bigger than that, but is this one 1.9 and this one 2.1? It's a lot harder to do that.
Jason C. McDonald 00:03:14 Yes, exactly.
Sam Taggart 00:03:15 So I think that, yeah, that's some of the psychology behind it. So what scales do people usually use for these? I've seen all kinds of stuff: numbers, Fibonacci sequences, t-shirt sizes. Have you seen anything else? Or are those pretty much…
Jason C. McDonald 00:03:29 Those are basically it, in a nutshell. Anytime you're dealing with a numeric scale, what's preferred is something with some sort of exponential, or at least non-linear, growth. That's why Fibonacci is so popular. And it's funny when we say Fibonacci, because it's not really Fibonacci; it's usually a modified Fibonacci sequence, where they've adjusted some of the numbers, I can't remember exactly which, to be a cleaner exponential curve. The idea being that the larger the number is, the more distance there should be between that and the next number. The reason for that being, the larger the task, the more likely it is there's something uncertain in there, something you don't know or don't understand. So the difference between a task that'll take you 10 minutes and a task that's going to take you an hour, between a Fibonacci one and a Fibonacci two, is very small.
Jason C. McDonald 00:04:20 But the difference between a task that's going to take you, and again, I know everyone's cringing that I'm using time here, but that's what it ultimately boils down to, a task that's going to take you a week versus a task that's going to take you two weeks, there's a huge difference between the two, because you don't know what you don't know. The more complicated the task, the more effort involved, the more unknowns there probably are. Probably, massive asterisk here, which is again where things get a little bit wibbly wobbly, timey wimey.
Sam Taggart 00:04:54 Yeah. When I talk about estimates and stuff, I always try to plan on short intervals. The shorter the interval you plan on, the more certainty you can have, right? Like if I wake up in the morning and say, this is what I'm going to do today, I can be pretty certain about that. If I say this is what I'm going to do next week, well, there's a lot of time that passes between now and then, and I could get interrupted. My plans could change completely.
Jason C. McDonald 00:05:14 Right. Short cycles being one of the key principles of Agile.
Sam Taggart 00:05:19 So my next question is what do the current approaches fail to capture? Like you created this alternative, why did you create this alternative?
Jason C. McDonald 00:05:28 Well, I have to go off on a brief tangent here. When I created this system, I did not create it on the basis of replacing story points. In fact, when I created this originally, I didn't even know story points existed, because I conceived of this while reading the book Dreaming in Code by Scott Rosenberg, back during my first year as a software engineer. I read this book, and there was this remark in there by Scott Rosenberg that there are certain tasks, which he refers to as black holes, which just suck up inordinate amounts of time. It doesn't matter how much work you throw at one, it never seems to get done. And he said every project more than likely has one of these, and the more of these you have in a project, the less likely the project's ever going to get finished. Most famously in his book he pointed to the disaster of the FAA trying to get a new air traffic control system built that never did get built, because it was full of those black holes.
Jason C. McDonald 00:06:22 But he made an interesting remark: that most software engineers can tell you whether or not a task is a black hole within the first 10 minutes of working on it. So that planted the seed in my mind that, okay, there's something about a task that we're capable of assessing. There is something objective here that can tell us whether or not this task can even be completed in a reasonable span of time, if at all. And so that began the process of me developing what would iteratively become quantified tasks. What became evident as I started working on teams that used Scrum, that used story points, was that what story points were failing to capture was this probability that the task was a black hole. It was being sort of indirectly captured by the size of the Fibonacci number, but not reliably. Maybe that eight is a black hole, maybe not.
Jason C. McDonald 00:07:24 All of this thought was going into those numbers. I was hearing all these conversations about, oh, well, there's a lack of documentation on this, or, oh, that code is really bad, or, we're not sure how this particular technology works, or, that requirements document isn't fully finished. So there were all these factors going into the sizing conversation, but none of that information was being captured. So you would have two tasks that would each be an eight, and each eight would've been arrived at by completely different criteria. One just lacks documentation, maybe a lack of precedent. And the other had some sort of snarly complexity that we didn't know how to untangle. And yet none of that information was in the estimate, which made it very hard to tell how accurate that eight was, or who should even take on the task.
Sam Taggart 00:08:14 Yeah, I think that probably stems from the problem of different people doing the estimating than doing the task. Or maybe if you have a big backlog, you estimate, you throw it in the backlog, and then it's several weeks or months before you actually get to it.
Jason C. McDonald 00:08:28 Mm-Hmm, exactly.
Sam Taggart 00:08:30 I could see that causing problems.
Jason C. McDonald 00:08:31 Because estimation is very subjective, and it's individual. So every team has their own scale, as it were. You could have six teams that all use the exact same Fibonacci scale, and you can't compare their tasks. You can't compare across, because they're using different criteria, and chances are they never wrote down what those criteria are. And even if they did, they're probably not going to agree: well, we use a different scale; an eight means something different to us because of this unique facet of our team. And that can even happen within a project. You lose half of your team, gain some new developers, and now your estimations are completely different. So you can no longer even size against the earlier tasks in your own project.
Sam Taggart 00:09:17 Yeah, I’ve heard someone once say like, if you change a member of your team, you have a whole new team, which I thought was a very interesting comment.
Jason C. McDonald 00:09:25 Yes. My staff mentor at Canonical once remarked to me that every time you add someone to a team, the team's shape changes, because the individual and the team both adapt to the unique attributes and traits of the individual you're adding.
Sam Taggart 00:09:43 Mm-hmm. Great. So quantified tasks have these multiple numbers, right? Whereas in story points you just have one number, which fails to capture this stuff. So the different numbers in quantified tasks, what are they and how do they capture this nuance?
Jason C. McDonald 00:09:58 Right. So there are three facets of quantified tasks, I have to clarify that upfront. There's a triad of numbers that are related to planning, and then there's a pair of numbers that are related to stability and bugs, and we can come back around to those. But the triad of numbers that I'm focusing on here for estimation are distance, friction, and relativity. The thing to understand is that you capture these numbers individually, but then there's also a very simple formula that is used to derive an energy point score, and energy points are the drop-in replacement for story points. So I'm going to come back around to energy points. The three numbers, though, are valuable even by themselves. They're kind of the three ways of measuring the effort involved in a task. One of the challenges with story points is that we're measuring implicitly relative to our own skill; we look at it and we think, well, that would take me a day.
Jason C. McDonald 00:10:59 Well, the problem is you may not be the one working on that. In fact, you may not even be at the company anymore by the time someone is working on that. So we can't rely on that subjective, personal, this-is-this-size-to-me-because-of-my-skill-level. We have to factor our skill out of this. And that's why I have these three specific numbers. Distance is, roughly, how long would it take me to complete this task if I knew everything? And that little clause is what takes subjective skill differences out of the equation. If you knew everything, how long would it take you to complete this task? This separates out raw work, fingers on keyboard, just implementing, from closing any sort of knowledge gaps that you may have, either as a developer or as a result of having to invent. So it isolates just that raw work variable, and it's always measured relative to your sprint.
Jason C. McDonald 00:12:00 So whatever size sprint, whatever size development cycle your company uses, that is what distance is relative to, because time is not really the point here. It's raw work that you're measuring. The second number is friction, which is how many resources, or I guess you could say what lack of resources, there are to help you complete this task. Again, it's not how much do I have to learn; it's what resources exist, because it's going to take significantly less developer effort to complete a task that involves reading some well-maintained documentation and a tutorial versus having to read between the lines of an API document that hasn't been maintained in six years and inventing the rest. There's a massive difference in the amount of work involved there, the amount of effort involved. Which, by the way, means measuring friction has the upshot that it automatically surfaces all of your low-hanging fruit, because low-friction tasks are great first contributor tasks. Someone brand new to the team, someone who's brand new to the language, can pick that up, because there are more resources for them to figure it out.
Jason C. McDonald 00:13:14 And then your more senior members can take the higher-friction tasks, because they're going to need that expertise to get it done. The third number, then, is relativity. There's this saying in story pointing: make your best estimate and multiply by three. And that works because that multiplying by three is accounting for everything you don't know. Well, how much you don't know is something you can usually figure out upfront. If you look at a task, you can go, you know what, we have no idea how this tool works, this is a complete invention here, and I'm not even sure this algorithm is possible in this language. Now you have some unknowns. So relativity becomes this measurement of, kind of, a ratio of how much you know versus how much you don't know about the task itself: how much is obvious versus how much is discovery and invention.
Jason C. McDonald 00:14:09 And this becomes your multiplying factor when you create your energy points. So energy points you get by adding your distance and your friction scores, which, by the way, I forgot to mention, are all on a scale of one to five. So you add your distance and friction scores together, and then you multiply by relativity, which is again one to five, and that is going to give you an energy point score. That score, by the way, because you're using three fixed and reasonably objective scales for measuring, is repeatable across teams. So that energy point score you're getting is comparable not only to every other task estimated with quantified tasks in your project's history, but to any other issue estimated with quantified tasks that you've ever worked on. You actually can form a personal relationship to that number, which makes it directly useful.
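To make the arithmetic concrete, here is a minimal Python sketch of the formula as Jason describes it: three one-to-five scores, with energy points derived as (distance + friction) × relativity. The function and names are illustrative, not an official Quantified Tasks API.

```python
# Hypothetical sketch of the energy-point formula described above.
# Each input is a 1-5 score; energy points = (distance + friction) * relativity.

def energy_points(distance: int, friction: int, relativity: int) -> int:
    """Derive an energy point score from the three estimation scores."""
    for name, score in (("distance", distance),
                        ("friction", friction),
                        ("relativity", relativity)):
        if not 1 <= score <= 5:
            raise ValueError(f"{name} must be 1-5, got {score}")
    return (distance + friction) * relativity

# Example: moderate raw work (3), poor documentation (4), a few unknowns (2)
# scores (3 + 4) * 2 = 14 energy points.
print(energy_points(3, 4, 2))  # 14
```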
Jason C. McDonald 00:15:09 Because, for me personally, I typically can complete about 23 to 26 energy points per sprint if I know the stack. That holds true regardless of the team I'm working on. So that allows me to tell you how long it's actually going to take me to clear six story points. That's actually a number I can give, and that allows for better selection of work, better distribution of work, and better time estimates for the managers, because we have this score that is repeatable, it's objective, and we've recorded the factors that went into it: the amount of work, the amount of research, and the amount of innovation that's going to go into it.
Sam Taggart 00:15:51 So for teams that are using story points now, is dropping in quantified tasks just simply changing the way they come up with their number? Is that basically it, using these three numbers? Or is there more to it?
Jason C. McDonald 00:16:03 Basically, yeah. That literally is it. And the interesting thing is, all the teams I've implemented this on, and all the teams I've talked to about this, have remarked that this is basically the conversation that they were trying to have about story points anyway, but they hadn't boiled it down to those three questions concretely, so things wound up drifting off into different priority levels. This takes the question of what's more important, what do I assign more weight to in my estimate, out of the equation, thereby removing that subjectivity that varies from team to team, because we have a formula and we have three one-to-five numbers that really constrain how you do estimations. So now estimation becomes a team asking those three questions: how long would it take, relatively, if you knew everything? How many resources are there: documentation, health of the code, availability of subject matter experts, yada yada? And how much do we know versus not know? Those three questions wind up guiding the conversation during estimation, and it sounds like that should take longer than story pointing would, but in practice it actually tends to go faster, because you've now laser-focused your conversation, and everyone can throw out their estimates and specifically narrow in on what they maybe disagree on and find the numbers that everyone is comfortable with.
Sam Taggart 00:17:33 Yeah, I can imagine spending less time arguing I gave this a three, you gave it a five, why? Right? And then you could explain it a lot easier.
Jason C. McDonald 00:17:42 Exactly.
Sam Taggart 00:17:43 Very good. So for people who are using some common tooling, like maybe Jira, or GitLab issues, or GitHub issues, or whatever, do these tools support using quantified tasks?
Jason C. McDonald 00:17:55 Because quantified tasks are a fairly new method, you're not going to be able to fire up a tool and just find this sitting there waiting for you. But most issue trackers do have some form of custom fields. Jira especially works really well with this, because you can add the custom fields for distance, friction, and relativity, and I actually have [email protected] if anyone wants to follow the setup process, because Jira integrates really tightly with this. So you set up the fields for distance, friction, and relativity, and then you set up a Jira automation task, and once you fill in those three fields, the Jira automation task will run, calculate your energy point score off of those, and automatically fill in that field for you. I actually used this on a team earlier this year that I was leading, and it worked very well for us, because it would usually take us about 30 seconds to estimate a task. We would just fill in distance, friction, relativity from the dropdowns, and then it would propagate our energy points, which would then show up throughout the system as a story point estimate would. If you're using something that doesn't have custom fields, it does get a little bit more challenging.
Jason C. McDonald 00:19:11 So like GitLab issues: unless you're paying for the Ultimate tier, you're not getting custom fields or any of that jazz; you're not even getting story pointing. In that situation, if you're finding yourself limited by a lack of custom fields, I like to use labels. So GitHub, GitLab, Discourse, I actually have used Discourse as an issue tracker, it actually works pretty well, at that point you create tags or labels for your different distance, friction, and relativity scores. But above all, if you are missing everything, if you have no way of making this work, you can still use whatever you have. If your tool just has a spot for story points and you have no way of adding custom fields, just put the estimate at the top of the description, and there's an official notation for this. So you could put, say, D3 F2 R3, which would be distance three, friction two, relativity three, equals, and then whatever your energy point estimate is, and then copy that estimate into your story point field. That works as well.
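For illustration, here is a small Python sketch of how you might pull that notation back out of a ticket description for reporting. The exact official notation lives on the Quantified Tasks site; the regex pattern and field names here are assumptions based on the "D3 F2 R3" style Jason mentions.

```python
import re

# Hypothetical parser for a "D3 F2 R3 = 15" style estimate at the top of a
# ticket description, for trackers with no custom fields. The regex and field
# names are illustrative assumptions, not the official notation spec.
PATTERN = re.compile(r"D(?P<d>[1-5])\W*F(?P<f>[1-5])\W*R(?P<r>[1-5])", re.IGNORECASE)

def parse_estimate(description: str):
    """Return the scores embedded in a description, or None if absent."""
    match = PATTERN.search(description)
    if match is None:
        return None
    d, f, r = (int(match.group(g)) for g in ("d", "f", "r"))
    return {"distance": d, "friction": f, "relativity": r,
            "energy_points": (d + f) * r}

print(parse_estimate("D3 F2 R3 = 15\nMigrate the auth service to OAuth2."))
# {'distance': 3, 'friction': 2, 'relativity': 3, 'energy_points': 15}
```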
Sam Taggart 00:20:18 Yeah, there has to be a place to put a description or something in there somewhere. At least it’s still there captured somewhere.
Jason C. McDonald 00:20:23 Exactly. And really that’s the point.
Sam Taggart 00:20:25 Yeah. So when you look at the one number, then you actually can figure out how it was derived. I think that would be useful.
Jason C. McDonald 00:20:30 Exactly.
Sam Taggart 00:20:32 Yeah. Which goes back to the point you mentioned at the beginning: they've got two eights, but why are they eights? Yeah.
Jason C. McDonald 00:20:38 And the other upshot of capturing this information, by the way, is that it puts stakeholders, decision makers, et cetera, especially non-technical decision makers, in a position where they can actually be useful. Managers try to make sense of story points, and again, they tend to do that by equating them to time or to budget or whatever. But because energy points come from three clearly defined scales, and those estimates are actually captured in the task, what winds up happening is that if you have, say, a manager asking, how can we speed up development? Well, now you can actually look at those numbers. You can say, oh, you know what, the reason things take us so long is because our friction is routinely very high, but we don't have a lot of seniors on the team. We need more seniors in the technology we're using if we're going to move faster, or else we need to bring down the friction; maybe we need to use some technologies that are documented. So it gives a starting position for a conversation with your non-technical contributors, because they don't have to understand code or Agile to understand how quantified tasks work. It's documented, it's written down.
Sam Taggart 00:21:59 Yeah, those are really good examples. I quite like that. So one thing I see, teams that are using story points, they often like to talk about velocity. What are your opinions on that? How does that work with quantified tasks? Is that even a useful metric?
Jason C. McDonald 00:22:15 It can be. And I got to thinking a lot about the value of velocity and quantified tasks on the heels of that rather controversial McKinsey article about measuring developer productivity. One of the reasons I'd come up with quantified tasks in the first place was that there was no way of measuring developer productivity, because there was no way of measuring how much energy went into a task. Because you can't measure the work they're doing, you can't measure how productive they are. It's a bit of a QED. So if we understand the amount of energy that goes in, then it's easier to understand how productive developers are being. The limiting factor in software engineering is not time, it is not money, it's rarely even technology. It is always developer energy. If you are out of steam, no amount of money, time, or additional team members is going to increase the amount of energy you have.
Jason C. McDonald 00:23:16 You are done, you are out of energy for the week. This is why, by the way, the saying never ship on Friday is so wise. It's not because there's something special about Friday. It's because people are out of steam. They have been working all week, their brains are tired, and therefore they're more likely to make mistakes. So because energy points are directly linked to developer energy, velocity becomes a useful way of measuring productivity, in that you are seeing how much energy your developers are actually putting into things, how much they have available to them. You can start budgeting, because you know how much is available to your developers. You can say, well, I usually get done with about 20 energy points a week. Well, now you have a budget. You know, okay, I can't throw 40 energy points at them and expect them to get it done.
Jason C. McDonald 00:24:09 They're going to be out of steam. They have 20 energy points on average to work with. So those numbers become a way of looking at the amount of energy your team has over time. And if you're noticing a sudden drop-off in productivity, a sudden drop-off in velocity, now the question is, where's the developer energy going? Is it going into meetings? Is it going into non-coding related stuff? We know what their budget is, and they're consistently way below budget right now. What happened? Why is that? And that becomes the beginning. Numbers are never the end of the conversation; they're only the beginning of one. Hmm, I see you're trending way below your average. What's going on? Where's the energy going? How can we help you?
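As a rough illustration of that kind of budget check, here is a short Python sketch that compares recent sprint throughput in energy points against a longer-run average and flags a drop-off worth a conversation. The three-sprint window and 25% threshold are arbitrary assumptions, not part of quantified tasks itself.

```python
from statistics import mean

# Illustrative sketch of the energy-budget check described above: compare a
# team member's recent sprints (in energy points) to their historical average
# and flag a large drop-off as a conversation starter, not a verdict.
# The 3-sprint window and 25% threshold are assumptions, not part of the method.

def energy_dropoff(sprint_totals: list[int], window: int = 3,
                   threshold: float = 0.75) -> bool:
    """True if the recent average falls well below the long-run average."""
    if len(sprint_totals) <= window:
        return False  # not enough history to compare against
    baseline = mean(sprint_totals[:-window])
    recent = mean(sprint_totals[-window:])
    return recent < baseline * threshold

history = [24, 26, 23, 25, 24, 15, 14, 12]  # energy points per sprint
print(energy_dropoff(history))  # True: time to ask where the energy is going
```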
Sam Taggart 00:24:58 Yeah, those are very useful conversations. So the next question I have is, and we've kind of hit on this a little bit, but what problems have you noticed that quantified tasks solve that story points just, like, miss? I think part of it is what we just talked about, but are there other things that you've noticed?
Jason C. McDonald 00:25:16 The main thing really is that repeatability, moving the conversation onto developer energy. I guess the biggest thing is that story points don't mean anything. And when I say that, I'm actually quoting some Agile practitioners who say, well, the nice thing about story points is they don't mean anything. Well, that doesn't make them particularly useful for planning.
Jason C. McDonald 00:25:39 I can't just tell you that my window is 47 flerbs wide. That means nothing to you. You can do nothing with that information. If I told you it was, say, three feet wide, and you were going to fix my window, that would mean something. So if you're going to use a measurement, it needs to mean something, and it needs to mean the same thing to each person using that measurement. It can't change from company to company or team to team or individual to individual, because then you just have chaos. Imagine if every engineering firm had their own proprietary tape measure with their own proprietary measurement system: the chaos that would ensue. We would never be able to build anything. So the unitlessness of story points really works against us, not only because it's not useful at that point, but also because human beings, especially managers, love to derive meaning out of things.
Jason C. McDonald 00:26:39 Humans don't like things that mean nothing. They assign meaning, and if there is no meaning, they will invent a meaning. So story points are begging to be misunderstood; that's the problem. They're crying out to be interpreted, but when there's no interpretation, every interpretation a manager comes up with is going to be wrong. So a system that has a meaning is also going to resist misinterpretation, because you can't pretend energy points are anything other than a measure of developer energy, because that's what they are. They're not going to be as readily misunderstood as a time commitment or a budget, because they're about developer energy, and that's capped. But since managers like time and money, because those are the things they have control over, this number becomes useful to them, because they can go, hmm, we're not moving very fast, but that's because this team doesn't have enough energy to go faster. If we added a couple of senior engineers to this team, maybe they would. Or if we removed some meetings from their calendar, or if we put some money into finding ways to improve how much energy they have available to them, maybe shorter working days, for example. And so they can start using the tools they have, money, time, policy, to try to optimize how much developer energy is available, which is honestly what we've been wanting them to do all along, right?
Sam Taggart 00:28:11 Yeah, that’s great. So my next question is, can you talk about a specific project where they were using story points and you came in and you switched over to Quantified Tasks and how did that go, and did you learn any lessons there?
Jason C. McDonald 00:28:25 Yeah, I'm trying to think. What's funny is I can think of a couple of projects where people didn't want to switch. And this was a little bit earlier on. I actually remember one project where I had proposed this because the problem they were running into was chiefly that their estimates were both inaccurate and useless. And this is actually the person who said to me, well, I like story points because they don't mean anything. And I'm like, what? I think the problem comes when story points become the end instead of the means. But that was literally the case here, because this person was actually publishing their velocity charts online publicly as marketing data, as a way of marketing their company. I'm like, no, you don't do that.
Sam Taggart 00:29:12 Well, that's odd, at the same time that he says it's meaningless. Which is just weird.
Jason C. McDonald 00:29:17 Well, it is meaningless. And so he was able to artificially inflate the numbers and make it look like they were more productive than they actually were by misrepresenting velocity. So I think that's…
Sam Taggart 00:29:29 So I think that must be a consulting house or something, right?
Jason C. McDonald 00:29:31 Yeah, it was. And it was just scary. So I think one of the interesting sides of this is that quantified tasks are not arbitrary, and the system is hard to manipulate because of its limitations, its deliberate limitations. It's hard to manipulate inches, for example; an inch is an inch. You can't really manipulate that. So I think it helps expose some zombie Scrum mindsets, where they're using Agile for things other than Agile, and I think that's where you see some resistance. The team where I rolled this out, I was able to roll it out at the beginning of the project, so it's not like we switched from story points to energy points, but everybody there had been using story points up to that point. So they came into this project with a story point mentality, and everyone was a little bit skeptical. It's like, this is weird, I'm used to Fibonacci numbers, why are we doing this? But they agreed to give it a try, and within one sizing session, one planning session, every single person was sold, because it's like, I know what this number means now. The funny thing is, energy points follow almost the same curve and range as the Fibonacci numbers, so you could drop them in, and at a casual glance at your board, you would not notice the difference.
Sam Taggart 00:30:54 Along those lines though, when we talk about pushback, do you tend to get more pushback from management or from developers or is it fairly equal?
Jason C. McDonald 00:31:01 I think it's fairly equal, because managers want information. I remember one manager saying, I don't want to use this, because I would rather people write better descriptions. And I was trying to explain, well, people aren't going to write better descriptions, because developers are lazy. Mmm, sorry, but that's what you literally pay us to do, is to be lazy. We weaponize our laziness into saving the company money. That's why we're here. That's why you hired us. So developers are lazy. They hate writing things down, they hate documentation, they hate writing good descriptions on tickets. I rarely see good descriptions on tickets on any team, because no one wants to waste time, or at least they view it as a waste of time. So they don't do it. So part of this is about limiting how much writing they actually have to do, by capturing the things that they should capture every time.
Jason C. McDonald 00:31:50 So I think that's where this manager didn't understand how this would help, because he just wanted more text and wasn't really thinking about, well, what are we capturing? Developers, I think, are generally pretty open to it, unless they are members of the cult of Agile, as I refer to it, because there's the Agile methodology and then there's the cult of Agile. And the cult of Agile is where it becomes a religion, like I mentioned earlier. It is the end in itself. The outcome of Agile isn't the point; Agile itself is the point. We do standups because we do standups, we do retros because we do retros, and no one's thinking about the outcomes anymore. And where you have that, there can be a lot of resistance as well, because this goes against the rituals. It's going to the same end; in fact, it's getting closer to the end that you want. But because it requires a bit of a departure from the religious adherence to Fibonacci numbers or t-shirt sizing or what have you, there can be some lack-of-familiarity kind of resistance, because people don't like new things; they don't like change. But beyond that, honestly, I haven't really run into that much resistance, because if people are doing story pointing for the goal of sizing, then this just becomes a way of doing that quicker and more efficiently.
Sam Taggart 00:33:09 So you mentioned something earlier about how quantified tasks can be used for planning and estimation, but also something to do with bugs and volatility and stability. Do you want to comment on that?
Jason C. McDonald 00:33:19 Yeah, absolutely. So leading into this, one thing I want to point out about quantified tasks is that every number in there, at least every number you set directly, is a one to five; energy points is a calculated number, so it's constrained like that. But every scale you set is a one-to-five scale, and fives are always special, by the way. In this system, a five always means that there's something you need to pay attention to. For distance, it means this is going to take longer than a sprint: break it down. For friction, it means we know nothing about this, this is just way too complicated: we're going to need to spike it to produce some of the resources we're lacking. A relativity five means this can't be completed in the history of the universe; we know absolutely nothing.
Jason C. McDonald 00:34:07 This is a black hole. So fives are always special. And I bring that up because it applies to volatility as well. Now, I can't take credit for the idea of volatility. This actually goes back to a gentleman I was talking to on the dev platform dev.to, who wrote a very interesting article about how you can predict the stability of a project by how long a bug remained uncaught. Because if you think about it, we catch a very small fraction of the bugs that are in our project. So if you catch 10 bugs, it's safe to assume there are at least 10 more floating around. The reason that you can determine this from how long a bug has been living in your project is because it indicates when a quality control gate, as I like to call it, has failed.
Jason C. McDonald 00:34:59 We have these throughout the software development lifecycle. You have the design process, where there should be some discussion going on as you're writing the requirements, et cetera. There's the coding process, where you should have code review and static analysis tools and tests. QA should have additional testing, additional linters, and what have you. And then you finally get to production, and by that point you're relying on your users to catch it. So if you have a bug that appears in design and makes it to production, what that means is that your requirements review failed, your static analysis tools did not catch it, your testing didn't catch it, your linters didn't catch it, your CI failed to catch it, your manual review failed to catch it, and now it has landed in your user's lap. So you have multiple points of failure at this point.
Jason C. McDonald 00:35:46 So using that same one-to-five strategy, what you wind up doing is that when you report a bug, you record only two numbers, and again, they're one to five. You record the origin, which is at what point the bug emerged, which is surprisingly easy to figure out, and the caught, which is when it was first detected. You subtract those two numbers, and the resulting number tells you which quality control gates failed. When you subtract the two, that gives you volatility, and the higher that average volatility score is, the more failures you have in your quality control system, and the more likely you have bugs that are making it into the wild.
Sam Taggart 00:36:34 Okay. So for these numbers, what does the one to five represent? One is like it appeared in the design phase, and then five is like my users caught it? Is that the kind of scale?
Jason C. McDonald 00:36:46 Exactly. So they map to the software development lifecycle. One is planning, two is design, three is implementation, four is verification, and five is production. So if you catch a bug at the same phase it originates, you're great. You have zero volatility, because bugs are always going to show up, but that means your quality control is flawless, or at least darn near close. But if you have a high average volatility, that is going to tell you that you've got no quality control, all your gates have failed, and any mistake you make in planning is going to make it all the way out to the users.
Sam Taggart 00:37:19 Huh, that’s interesting.
Jason C. McDonald 00:37:20 You can also combine this with one of the planning metrics, impact, which I haven't talked about yet. Combining it with impact tells you the kind of severity that you're looking at, because caught minus origin, times impact, gives you the actual volatility score. That's going to tell you not only if your quality control gates are failing, but also how serious the bugs are that are manifesting. Because, honestly, if you have a bug that manifests as a button that is one shade of blue off, that is nowhere near as important as the bug where the user presses save and it deletes everything in the record. So that impact also factors into the volatility score.
Sam Taggart 00:38:10 Okay. So if I understand this correctly, it's almost like a risk score. The first number is like the likelihood of something getting through, right? Because that's how long it existed. And you're multiplying that by the impact, so it's very much like a risk: probability times impact, or probability times consequence.
Jason C. McDonald 00:38:27 Exactly. Yep. So then you take the average across all the bugs in your project, which, by the way, Jira can do. If you take your average across all your bugs, you've got what's called the solution volatility. If that's greater than 10, basically your servers are on fire. You have some major problems; you should probably stop shipping features and figure out what the devil is wrong with your pipeline, because nothing is reliable at that point.
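Putting the pieces together, here is a small Python sketch of the volatility arithmetic as described: lifecycle phases one through five, per-bug volatility as (caught − origin) × impact, and solution volatility as the average across all bugs, with 10 as the alarm threshold Jason mentions. The names and sample data are illustrative.

```python
from statistics import mean

# Sketch of the volatility scoring described above. Phases are 1-5 across the
# SDLC; per-bug volatility is (caught - origin) * impact (impact also 1-5);
# solution volatility is the average across all bugs, and anything over 10
# suggests the quality control gates are badly broken. Names are illustrative.

PHASES = {1: "planning", 2: "design", 3: "implementation",
          4: "verification", 5: "production"}  # e.g. PHASES[2] -> "design"

def volatility(origin: int, caught: int, impact: int) -> int:
    """Per-bug score: how many quality gates a bug slipped past, weighted by severity."""
    if caught < origin:
        raise ValueError("a bug cannot be caught before it originates")
    return (caught - origin) * impact

def solution_volatility(bugs: list[tuple[int, int, int]]) -> float:
    """Average volatility across all recorded bugs in a project."""
    return mean(volatility(o, c, i) for o, c, i in bugs)

# (origin, caught, impact): e.g. a design-phase bug caught in production, impact 4
bugs = [(2, 5, 4), (3, 3, 2), (1, 4, 3)]
score = solution_volatility(bugs)           # (12 + 0 + 9) / 3 = 7.0
print(score, "ALERT" if score > 10 else "ok")
```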
Sam Taggart 00:38:54 Okay. So I have one last question. Is there any time that you wouldn't recommend this approach, where you'd recommend just sticking with story points?
Jason C. McDonald 00:39:03 I have to be totally honest, I don't think I would ever recommend sticking with story points, because that would be like recommending someone measure everything relative to their own foot versus using a ruler. That said, this is not a panacea. This is a piece of the puzzle, but it's never going to be the overarching do-everything solution that fixes all of your problems. And that's not because quantified tasks are somehow insufficient for any given situation; it's because they aren't the whole solution. They're a piece of a solution. If an engineering firm has lots and lots of problems with quality control and planning and how they do things, but they switch from measuring everything relative to the foreman's foot to using a ruler, that's still an improvement, but it's not going to fix the rest of the problems in the organization.
Jason C. McDonald 00:39:56 So it's like with Agile in general. A lot of companies erroneously think you can just throw Agile ceremonies at a project and it'll magically fix all the problems. But what has to change first is the mentality, not only of the developers but also of the managers: switching from this waterfall technique of let's just plan everything and make everything predictable, to an iterative Agile approach. Then Agile is useful. But without that mindset shift, it's really not going to do much if any good; in fact, it'll probably do harm. And quantified tasks are the same. Yes, you can implement them in place of story points, but if you're already doing zombie Scrum, if you're already misusing Agile, this is not going to fix all of your problems. So I think teams need to look very carefully at why they're having problems, really take a hard look at where the mindset needs to shift, and then use this as one of the tools to help shift mindset, to help shift approach. But it needs to be secondary to that introspection, not first.
Sam Taggart 00:41:11 Okay. Now, would implementing quantified tasks perhaps help indicate some of those problems, like point some of them out? Or is that…
Jason C. McDonald 00:41:20 It can in theory, but again, like I mentioned earlier, it can indicate when you're using zombie Scrum because there's this general resistance to it. But it can be misused like anything else. Goodhart's law still stands: as soon as you make a metric a goal, the metric ceases to be useful. So if a company were to adopt quantified tasks and suddenly make lowering the energy point estimates on everything the goal, then this is not going to be useful, because Goodhart's law is going to completely undermine it, and estimates are still just going to be a meaningless exercise in futility as opposed to a planning tool. So you still have to look at where your mindset is off as a team, as a company, before you start throwing solutions at it. I think it's true as a rule: we never fix a car by throwing a wrench at it. You can throw every wrench in your toolbox at a car, and it's not going to fix the car if you haven't identified what the problem is. Once you know what the problem is, wrenches are very useful. In fact, it's probably impossible to fix many problems without a good wrench. But if you don't know what the problem is, the wrench is just going to get in the way.
Sam Taggart 00:42:42 Okay. Yeah, that’s a great explanation. Thank you very much. So if people want to learn more about this, there is a website, right? Quantifiedtask.org. Is that correct?
Jason C. McDonald 00:42:49 Yes, absolutely. I try to post articles periodically on how to implement this, but if you go to that website, right along the top are the tabs for planning, estimation, and stability, and those explain how to use each section of quantified tasks. And don't feel like you have to implement this whole thing right now on your team. One of the cool things about this is that you can take just one piece of it, one part that your team is going to find useful, implement just that, and you're still going to get value out of it. Don't feel like you have to rebuild your entire issue tracker tomorrow to implement this. You can go gradually. I really recommend using this in the context of that iterative Agile approach. Let your team take ownership of it, because if you just enforce it from the top, no one's going to like it.
Sam Taggart 00:43:40 Yeah, that’s a very good principle to live by. So thank you very much, Jason.
Jason C. McDonald 00:43:45 Thank you.
Sam Taggart 00:43:45 For Software Engineering Radio, this is Sam Taggart. Thanks for joining us.
[End of Audio]