Bill Curtis

SE Radio 262: Software Quality with Bill Curtis

Venue: Dagstuhl Castle
Sven Johann talks with Bill Curtis about software quality. They start with what software quality is and then discuss examples of systems which failed to achieve their quality goals (e.g. ObamaCare) and the consequences. They then move on to the role of architecture in the overall quality of a system and how to achieve it in an agile environment.
Bill proceeds with the steps needed to move an organization from chaos to innovation; the relation between Lean Management, quality improvement and CMM; how CMM can improve your organization where CMMI cannot; and the difference between CMM and CMMI.
They continue with the practices “hot” companies like Google use to achieve high quality; getting a management infrastructure in place which doesn’t make bad commitments; how to get a quality improvement program started and running; the importance of collecting and analyzing data about your process; modern development compared to the Team Software Process; extreme programming and Scrum and their close relationship to building in quality; the importance of fast feedback and learning; the power of formal inspections; prototyping to get the requirements right; and the role of good requirements.

Show Notes

Related Links:

Jim Collins “Built to Last: Successful Habits of Visionary Companies”
Team Software Process
Personal Software Process
Scott Ambler, Disciplined Agile Delivery

Bill Curtis’ Twitter: @BillCurtis33


Transcript brought to you by innoQ

This is Software Engineering Radio, the podcast for professional developers, on the web at SE-Radio brings you relevant and detailed discussions of software engineering topics at least once a month. SE-Radio is brought to you by IEEE Software Magazine, online at

*   *   *

Sven Johann:                        [00:00:34.23] This is Sven Johann for Software Engineering Radio. Today I have with me Bill Curtis, who is the Chief Scientist of CAST Software and Director for the Consortium for IT Software Quality. He is best known for leading the development of CMM and People CMM at the Software Engineering Institute. In 2007 he was elected a fellow of the IEEE.

Today I will be talking with Bill about software quality. Bill, welcome to the show.

Bill Curtis:                               [00:00:59.19] Thank you.

Sven Johann:                        [00:01:00.24] Did I forget to mention anything about you?

Bill Curtis:                               [00:01:03.21] No, it’s pretty good.

Sven Johann:                        [00:01:05.25] We’re here in the wonderful Dagstuhl Castle; it’s the first interview we do in a castle, it’s pretty cool. What is software quality?

Bill Curtis:                               [00:01:17.10] A simple definition that fits with the way people normally think about quality, we have a system that does what people expect it to do, and behaves in ways people expect it to behave, and meets the requirements and needs of the users. Traditionally, we’ve described it in a way that sounds more like the functional requirements – it does the things that users want it to do. The biggest problem is that users never state non-functional requirements. They assume you know those things, because you’re a professional, so they won’t tell you that you shouldn’t have SQL injection opportunities, buffer overflows and you should [unintelligible 00:01:46.01].

[00:01:48.28] If you don’t do those things, the system behaves in ways that the user suddenly doesn’t expect – it fails, people break in and steal proprietary information, the performance degrades and so on – and the user expects you to know those things. So for the system to continue to behave in ways that meet the user’s needs and expectations and satisfy the requirements, you really need to have this professional engineering knowledge about software, so you build it properly. It meets the functional requirements, it does what it’s supposed to, but it does it in a way that continues to support proper use of the system and proper delivery of the service the system offers to the users.

Sven Johann:                        [00:02:28.27] What are some famous projects which failed to address that function?

Bill Curtis:                               [00:02:34.09] A lot of people right now know about the Obamacare mess (certainly in the United States), where the Obamacare system was brought online on October 1st and people were supposed to get on it and order insurance, because every American was now required by law to have insurance. People logged on and nothing happened. They just sat there and sat there waiting and waiting… Waiting for Godot, and wondering what happened. The system was just not working. What we found out is that the people who had built it really didn’t understand how you build web-based systems, and they just downloaded every possible thing that a user might want at any point in their lives, and it just swallowed up all the bandwidth. It was a really badly designed user interface.

Sven Johann:                        [00:03:17.17] How did they get the project in the first place?

Bill Curtis:                               [00:03:19.00] The problem was that once they got the law passed, they had to go build it, but the Obama Administration didn’t want to build it too quickly, because they wanted to get past the elections, so the Republicans couldn’t use it as an issue and kill it. They started late, and they kept adding things and changing requirements late into development, and they had only two weeks to test the system. This is a massive system that integrates your insurance companies with federal information about what you qualify for. It was an enormous system and they had two weeks to test it, so it was a complete disaster. They could never simulate hundreds of thousands of people trying to get on this thing at the same time.

[00:04:03.09] As experts looked at it, they realized they’re just trying to pass too much stuff through the internet, and they don’t need all that. The users had a basic set of things they needed, and other things they could call for later. Looking at the system – we could look at some of the Javascript code, and we found a security hole in the thing. We sent a little e-mail, very quietly and privately, to the people in Washington, and said, “Here’s a little problem. It’s known to be a potential security issue and you might want to fix it.” And they did. A week later it was fixed. In doing it, they added an even larger security problem. Basically, they had people that didn’t know how to build these kinds of interfaces.

[00:04:44.13] To get it fixed, the Obama Administration went back to the folks that had built their websites for getting donations during the political campaigns, who did a pretty good job, but even they couldn’t fix this one. They finally went to Google and got Google to send them some absolutely top geniuses, and they finally got the thing fixed, because they were the gurus.

That was a classic one that everybody in the country saw. It was on the news, I was on TV four times, trying to explain what had happened and why, and what they could fix. It was a major embarrassment. Not only nationally, it made the world news.

[00:05:21.13] Another one I was involved in was back in my days with ITT, in the early ’80s, when they were the world’s fifth largest company. Half the company was telecom, and half was a number of other businesses. If you remember back in the early ’80s – most of the folks that are listening probably weren’t even born then – that’s about the time we were moving from analog systems into digital systems. Everything was starting to move over to computers, microprocessors and what have you. ITT committed the entire telecom business to microprocessors, to move out of relays and into a digital world.

They wanted one architecture they could deliver anywhere in the world, as opposed to having separate national units, each of which would build its own switch, for its own country. We would have one generic switch with a generic architecture, that with minor modifications we could deliver to meet any country’s particular telecommunication needs.

They charged ahead on this thing, they hired an absolutely brilliant system architect that understood these kinds of systems and designed one. In fact, they had to negotiate with the Swedish government, we heard. I think he was declared a national resource in Sweden.

Sven Johann:                        [00:06:26.03] It was someone from…

Bill Curtis:                               [00:06:30.03] From Ericsson, of course. He was a brilliant guy. He had this beautiful architecture. He took the telecom system and broke it up into independent components that could sit on different chips and interact through messages.

Sven Johann:                        [00:06:46.22] It sounds like microservices.

Bill Curtis:                               [00:06:47.24] It sounds like microservices today, yes. There’s nothing new in the atmosphere, is there?

They started building this thing, but the problem was those were 8086-class chips. They didn’t have big memories and they didn’t have big processors. For the initial design it was fine, but they kept adding on more and more stuff. They wanted this to be the ultimate telecom system, that could do everything you could even think of, years into the future. The functionality started exploding; they went from a basic 45-50 messages up to 450-500 messages – a factor of ten growth in the bandwidth between these things. The code was growing and growing, and when they started trying to integrate the system, you could just see the code flow over the sides of the chip and onto the floor. They were in trouble, and they knew it.

[00:07:35.03] Three months before the first competition, which was going to be in Stuttgart, to win the German market – because we were competing with a number of other companies – they asked me to go to Europe (because I was a measurements guy), collect some data, come back and tell us if we have a system to deliver in three months. [unintelligible 00:07:50.26] So I went over there, I started looking through the data, and the data was a mess. It was incomplete, and it made no sense.

Sven Johann:                        [00:08:02.02] Which data?

Bill Curtis:                               [00:08:03.23] Management data. I’ll tell you why, because there was nothing I could find that would give me any progress report. So I said, “Well, I’ll look at the system integration…”

Sven Johann:                        [00:08:13.20] Nobody knew about the state of the project?

Bill Curtis:                               [00:08:16.25] Actually, some people knew, they just weren’t telling anybody. I’ll tell you more about that in a minute. What I saw was the amount of code being poured into the integration tests, which was going up exponentially. They were just shoveling modules into the integration tests. I said, “Well, let me just lay out what’s going in and what’s going out, and then trace all that.” There were all these module numbers going into the integration test that never came out. And there were all these module numbers coming out of the integration test that never went in. In fact, there were twice as many of them. I said, “What on earth is going on here?”

[00:08:50.26] They had gotten to rearchitecting the entire system in the middle of the integration test, because they finally realized the code was too big to fit on the chips. They now had to carve the system up and redistribute things across all these different chips, and in doing so they lost control of the generic architecture completely.

Sven Johann:                        [00:09:05.16] Why did something like this happen?

Bill Curtis:                               [00:09:07.28] Because they weren’t measuring the size of the load modules; they weren’t measuring the size of what they were creating after they compiled the code. It was blowing up on them. All the German authority required was plain old telephone service, while we had all this other stuff on top of it, for every future capability you could dream of.
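The measurement that was missing here – compiled load-module size versus the memory available on the target chips – is the kind of check that is cheap to automate today. A minimal sketch, with module names, sizes and the memory budget all invented for illustration:

```python
# Check compiled load-module sizes against a per-chip memory budget.
# All names and numbers are hypothetical; the point is that the check
# is trivial and catches the problem before integration testing does.

CHIP_BUDGET_BYTES = 64 * 1024  # illustrative memory limit per chip

def modules_over_budget(module_sizes, budget=CHIP_BUDGET_BYTES):
    """Return the modules whose compiled size exceeds the chip budget."""
    return {name: size for name, size in module_sizes.items() if size > budget}

sizes = {
    "call_setup.o": 48_000,
    "billing.o": 91_000,      # grew past the chip's memory
    "diagnostics.o": 70_000,  # so did this one
}

for name, size in sorted(modules_over_budget(sizes).items()):
    print(f"{name}: {size} bytes exceeds {CHIP_BUDGET_BYTES}-byte budget")
```

Run on every build, a report like this would have surfaced the “code flowing over the sides of the chip” problem long before the integration test did.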

Sven Johann:                        [00:09:29.11] Over-engineering…

Bill Curtis:                               [00:09:30.03] Over-engineering, and gold-plating the thing. The fellow that ran our German unit — because in the earlier years we had national units in each country, and that unit would service that country and build a switch for that country. But now the theory was, “We will have one switch globally. One architecture that will serve all national governments”, and you can then adapt that architecture and make modifications to meet the local requirements.

Sven Johann:                        [00:09:57.22] Sort of a product line.

Bill Curtis:                               [00:09:59.06] Yes, exactly. A product line that you could tailor – add a few things on, take some things off. That was the theory. Well, the theory was now blown to shreds, because we’d lost control of the generic architecture and it was split up all over the place. But when it came time to deliver in Stuttgart, we rolled in a couple of small racks of microprocessors and our competitors would bring in these big hunks of large relay things, and it was a nightmare.

The system worked. The problem was, we realized, the amount of code being dumped into the integration test was going up exponentially, and the amount of code coming out was absolutely linear. We were strictly limited by the number of people that we had in the German facility fixing defects.

[00:10:46.09] The senior executives of ITT said, “Okay, every man, woman and child in the United States who can write CHILL goes to Stuttgart and fixes defects until the delivery day.”

Sven Johann:                        [00:10:56.04] CHILL is a programming language…?

Bill Curtis:                               [00:10:57.02] CHILL was a programming language in telecom. It was an object-oriented language, kind of like C++, but for telecom.

Sven Johann:                        [00:11:03.16] I remember a friend was talking about it.

Bill Curtis:                               [00:11:05.16] At any rate, we moved a lot of people over there, they stayed there for months, fixing defects. The system worked. The day we delivered the thing, it worked. It did plain old telephone service, they [unintelligible 00:11:16.18] and they never took it out, because it worked so well. We won the market, and it was great. Everybody was thrilled, we had big parties, we got roaring drunk.

Then we won another market. I think we won Mexico, Italy, China and several others, and all of a sudden we realized, “Oh my gosh, we’ve lost control of the generic architecture. Now the cost of modifying the system to meet each set of national requirements exceeds the profit. We’re out of business.”

[00:11:43.13] It came down to this: they had to win 45% of the American market against the 5ESS from AT&T to even survive. The architecture was such a mess at this point that they couldn’t get it fixed in time. 5ESS locked up the American market, and ITT was basically done. They sold the system off to Alcatel. All the R&D money was already spent, so Alcatel made money on it. It was already developed, they kind of had it working. But it was a major problem, and they weren’t measuring. They weren’t measuring size, they weren’t measuring a number of other things. They weren’t reporting management results.

[00:12:17.09] But here’s the story of the management aspects. Nobody in New York, which is headquarters for ITT, knew what was going on. They sent me over, but they didn’t know if they had a system to deliver. And I couldn’t find any data that would tell me anything.

Sven Johann:                        [00:12:32.18] Any data, you mean…?

Bill Curtis:                               [00:12:33.11] Any data on progress – how far we’d come, what kind of progress we were making. I’m just sitting there and wondering what on earth is going on. I’m in this office at the end of this hall – there’s not many people in that hall – and I said, “This data is garbage. I might as well just look at the garbage in the trash can.” I was so frustrated, so I just started digging through the trash cans: “I wonder what they throw away here.” Lo and behold, it’s the project management report for the European development centers, a report they made sure never went to New York, because they didn’t want the execs from New York coming over and bothering them.

[00:13:13.13] This had the progress report, all the progress – how much code was going in, the linear amount coming out…

Sven Johann:                        [00:13:19.27] But you could trust that…?

Bill Curtis:                               [00:13:21.23] It was only shared among the European managers at the development centers. They would not allow it to go to anyone else. It was their private report, so they could track it, but they wouldn’t let anybody in New York know, or the European headquarters, or anywhere else. Lo and behold, that’s where I figured out what was going on, looking at that.

The problem was they were losing control of the architecture; they lost control of the company. They had to sell their whole telecom business off, and cut the company in half, because they weren’t measuring simple things like the size of the load modules.

Sven Johann:                        [00:13:57.09] It should be clear that if you don’t have quality, it’s embarrassing. You’re on television, you’re on the front page of the New York Times, or even worse, you go out of business.

Bill Curtis:                               [00:14:12.03] Well, that happened.

Sven Johann:                        [00:14:13.11] When I talk to people about projects with bad quality, usually the first thing they say is, “Let’s do more testing.” But testing is not enough.

Bill Curtis:                               [00:14:27.12] Here’s a short story… The worst one most recently was Knight Trading in New York; it was a high-speed stock trading company down on Wall Street.

They were upgrading a system, putting the new release in, and all of a sudden it activated a lot of dead code. Dead code is code you thought nothing could reach – you don’t need it anymore and you should take it out of the system, but they just left it in: “What the heck?” All of a sudden, this new version somehow activated the dead code, and it made 430 million dollars of bad trades within thirty minutes, and they were bankrupt. Thirty minutes, out of business.
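The episode doesn’t go into the mechanics, but the Knight failure is commonly described as a repurposed configuration flag reactivating a retired code path on servers still running the old binary. A hypothetical sketch of that general hazard (all names and behaviors invented):

```python
# Hypothetical sketch of the dead-code hazard: a flag once used by a
# retired code path is later reused for a new feature. Any host still
# running the old version interprets the flag as a request for the
# retired logic.

def normal_algo(order):
    return ("normal", order)

def new_retail_algo(order):
    return ("retail", order)

def legacy_test_algo(order):
    # Old test logic that e.g. ignores fill limits -- harmless in a lab,
    # catastrophic in production.
    return ("LEGACY-UNLIMITED", order)

def route_order_v1(order, flags):
    if "special_mode" in flags:
        return legacy_test_algo(order)   # "dead" path, never removed
    return normal_algo(order)

def route_order_v2(order, flags):
    if "special_mode" in flags:          # flag reused for a NEW feature
        return new_retail_algo(order)
    return normal_algo(order)

# Mixed fleet: one host got the new deploy, one did not.
flags = {"special_mode"}
print(route_order_v2("buy 100 XYZ", flags))  # new behavior, as intended
print(route_order_v1("buy 100 XYZ", flags))  # stale host runs the dead code
```

Removing `legacy_test_algo` when it was retired – instead of leaving it reachable behind a flag – would have made the flag reuse a harmless no-op on the stale host.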

Sven Johann:                        [00:15:04.09] Wow.

Bill Curtis:                               [00:15:05.10] Yes. A lot of these hacks you hear about now – the big one in the U.S. was SQL injection. We’ve known about SQL injection since the late ’90s; how could these things be in the systems? We need – especially in security – to know about certain kinds of violations and weaknesses. Why don’t we have the same kind of effort we had with Y2K? “We have to get that out of the system, or everything will fail!” Well, we have to get these out of the system, or the hackers will take all your credit card numbers, and they get evenly distributed over the provinces of Russia.
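For listeners who haven’t seen SQL injection up close, here is the classic shape of the vulnerability and its standard fix, sketched with Python’s built-in sqlite3 module standing in for any database API (the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice-secret')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob-secret')")

def lookup_vulnerable(name):
    # Attacker-controlled input is pasted straight into the SQL text.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "nobody' OR '1'='1"
print(lookup_vulnerable(payload))  # dumps every secret in the table
print(lookup_safe(payload))        # returns nothing: no such user exists
```

The fix has been one line – bind parameters instead of string concatenation – for as long as the attack has been known, which is exactly Bill’s point.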

Sven Johann:                        [00:15:44.11] What can these companies do to improve quality and make things work?

Bill Curtis:                               [00:15:47.06] There’s a number of things they have to do – and I’m going to come back to some other things in a minute – but they have to look at the architecture upfront. You can’t refactor a bad architecture. You can throw it out, and that’s a smart thing to do. The notion that it’s going to emerge and you can refactor it – in the case of large systems that’s just nonsense; it doesn’t happen that way.

We found the highest quality is in systems where the method used to develop them was a hybrid between a traditional waterfall, upfront approach, and an Agile approach at the back, where they’re building and testing, building and testing, so they get rapid feedback on the quality of the implementation. We get the architecture right – not complete, but we know we’re on the right path with it, and we’re not likely to buy the farm later.

[00:16:39.27] Then we have the rapid sprint implementations where we’re testing it to know, “Okay, is the code we’re building good code? Does it meet the requirements?” That seems to give the highest structural quality.

Sven Johann:                        [00:16:53.11] I heard an interview a while back with somebody who is deeply involved in the Agile community, and he said the Agile community did a little disservice to the rest of the community by not focusing on architecture, or not focusing enough on architecture. Because you cannot refactor those properties in at the end. You need a decent…

Bill Curtis:                               [00:17:18.02] You’ve got the real leaders in that community, guys like Scott Ambler, and they’re absolutely clear: you need a sprint zero. It could be of some substantial length, where you really get to a point where you’re comfortable you’re on the right track with your architecture, and then you start the rapid three-week, four-week sprints to get the code built. That works, and we’ve got hard data from analyzing systems that shows that that’s the best approach in terms of structural quality.

[00:17:44.10] What else can you do? The whole CMM was designed to solve a set of problems, and they were problems that were killing systems. If you remember back in the ’80s, it was never on time; it was way over budget, and half the time it didn’t work when you got there, or it was riddled with defects. A lot of us thought, “Well, we just need better languages and better tools”, and that’s what we were working on. One fella said, “No, you’re all wrong. That is not the problem. You will not solve anything with better languages and better tools, until you get management under control.” Of course, that was Watts Humphrey.

[00:18:13.16] Watts said, “Look, the problem is that when executives or project managers come up with schedules that can’t possibly be achieved, then the developers just start having to work at breakneck pace; they barely have time to get the code written, they don’t have time to evaluate or test it, so they’re just cranking out garbage. And they know it. They’re frustrated, and they don’t feel like they can work like a professional, and they can’t, because you never give them the time to do it right.”

Level two in the CMM was designed to get control of commitments and baselines. If you don’t do that, the quality is going to be terrible.

Sven Johann:                        [00:18:47.15] What is CMM?

Bill Curtis:                               [00:18:51.03] The Capability Maturity Model was a model developed at the Software Engineering Institute at Carnegie Mellon to help an organization improve its process in several stages that build toward a world-class software development organization. It was done in stages, and the first stage was: you must stabilize the projects. You can’t have organizational-level [unintelligible 00:19:11.16], “Everybody has to do this.” It’s not going to work if the projects aren’t stable, if guys are working at breakneck pace and not having time to properly design or properly test the code.

Sven Johann:                        [00:19:22.17] A stable project is a project where…

Bill Curtis:                               [00:19:24.13] Two things: it has commitments that it believes it can meet, and it controls the baselines. It has control of the requirements baseline and control of the product baseline. That doesn’t mean you can’t make changes, but you then go back and say, “Okay, does this affect schedule, or our costs?” and you make adjustments. You can add people, or you can cut functionality, or whatever you need to do. But you don’t want people working at a schedule they can’t achieve, because they’ll never have time to work in a professional way.

[00:19:56.12] That was the first thing we had to fix, and that’s a local fix at the project level. So level two is about projects and getting people into an environment where they can behave like professionals. They have a fighting chance to do work in a professional way. Once you have the projects stable, then you can look and see what practices are working best. This team has a great set of design practices, those folks have some really good testing practices, over here they’ve got some good estimation practices – let’s integrate all this together into a common organizational way of doing it. We can tailor it for big projects and small projects, and new technology and legacy stuff, but the bottom line is we have a set of practices that we know work in our environment, and work well. And we know how to tailor them to fit different situations.

[00:20:39.10] Then I really get an economy of scale. We have an organizational software culture. When that happens, you see it’s the developers that carry it; not the managers, the developers. It’s like, “Look, stop doing that, because you’re screwing up the way we build software. It’s just going to delay us and it’s going to mess up the system.” So they will be the first ones to react when somebody starts violating your good practices, because they already know what’s going to happen when that takes place.

The developers really come to carry the culture, with a lot of pride in the quality of what they’re producing. Quality goes up dramatically in this case. They know what it takes to do the work, they can make estimates that are reasonable, they know what it takes to control it, they know what practices work well. That’s level three. Higher up, it goes into statistical management, innovation and so on.

Sven Johann:                        [00:21:27.27] Does a higher level mean a better organization?

Bill Curtis:                               [00:21:30.17] It means a better organization if it’s done right. A lot of people just wanted to buy a level five, not really achieve it. They just wanted to get the box checked. But the people that really go about it and really use statistics and measures as the way they manage the software – the truth is in the measures. They use that to control what they’re doing and adjust as they need to. We’ve seen quality improve a hundredfold at higher maturity levels, when it’s done right, and when it’s an end-to-end process. When it’s managed with data on what’s going on, they can adjust much earlier, and they learn how to tweak practices to work more effectively. It’s really much like the LEAN concept.

Sven Johann:                        [00:22:10.27] Exactly, I was about to ask.

Bill Curtis:                               [00:22:13.08] It’s really statistical process control, but it’s different, because this is an intellectual artifact. Traditional LEAN and statistical process techniques were developed in manufacturing, where I’m doing exactly the same thing over and over, a hundred times an hour. Software is not that; I never do the same thing twice.

We have to think about how we use data in an intellectual environment, as opposed to a physical environment. Because this is an intellectual artifact, and in truth, the measurement of intellectual artifacts like software, or even text, is in its infancy.

[00:22:42.22] We’ve tried a bunch of things, and sometimes it worked and sometimes it didn’t. But there’s a lot we don’t know. We used to think – back in the ’50s, for instance – “Well, we just measure the structure of a paragraph, and that will tell us how understandable it is.” Then we found out that doesn’t work, because how understandable it is depends mostly on how much you know about the topic, not on the structure.

That’s why in an intellectual domain it’s done [unintelligible 00:23:05.27] according to physical laws, and we have to understand measures as indicators, not as absolute as a law of physics would be. But you can do it properly, and it will work.
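“Doing it properly” with measures-as-indicators often looks like a classic control chart: establish limits from a stable baseline and flag new measurements that fall outside them, treating a flag as a prompt to investigate rather than an absolute verdict. A toy sketch (all the defect densities are invented):

```python
# Toy statistical-process-control check: flag new measurements (e.g.
# defect densities per sprint, in defects per KLOC) that fall outside
# the 3-sigma control limits computed from a stable baseline.

from statistics import mean, stdev

def out_of_control(baseline, new_points):
    """Return the new measurements outside the baseline's 3-sigma limits."""
    m, s = mean(baseline), stdev(baseline)
    lo, hi = m - 3 * s, m + 3 * s
    return [x for x in new_points if not (lo <= x <= hi)]

# Invented defect densities from past, stable sprints:
baseline = [4.1, 3.8, 4.4, 3.9, 4.2, 4.0, 4.3, 4.1]
# Two new sprints: one looks normal, one needs investigation.
print(out_of_control(baseline, [4.2, 5.3]))
```

The flagged sprint isn’t “bad” by any physical law; it is an indicator that something about the process changed and is worth looking into, which is exactly the distinction Bill draws.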

Sven Johann:                        [00:23:19.20] Let’s go back to the “don’t do it properly” part… That all sounds fantastic, but when I studied CMM, I had this negative impression: “Oh, it’s heavyweight… You have to produce only documents…”

Bill Curtis:                               [00:23:39.00] CMMI is too heavyweight; it’s way overdone. It’s because they lost control of the architecture. The folks that built the CMM were not involved in building CMMI.

Sven Johann:                        [00:23:47.03] What’s the difference between CMM and CMMI?

Bill Curtis:                               [00:23:50.11] They wanted to integrate systems and software, number one. CMM was a software model, and the system engineers had their own, and they wanted to integrate both of those. CMM was what’s called a staged model. It builds your organization level by level, and the model is “This is how the organization works.” There’s something called the continuous model, which is not an organizational model, it’s a practice model. It says, “Okay, the practice I care about is testing, so I’m going to take testing up to level five.” Well, if you don’t take design up to level five… You just take the practices you’re interested in and work on them, and it doesn’t reflect trying to grow an organization as an organization, so you don’t see the same culture change.

[00:24:36.08] You see it with CMM. CMM fundamentally transforms the organization in a series of stages: stabilize, standardize, optimize, innovate. At level two it’s about projects; it’s almost tribal, in a sense. Each project has its own way of being stable, but then out of this I have to build a common culture and a common process that works for our organization. So now I’m moving up to the level of a civilization; it’s city-states becoming a country, in a sense. In doing that, you really get to a whole new level of how the organization functions. A great description of how powerful this is – it doesn’t take the CMM approach – is in Jim Collins’ books, Built to Last and Good to Great. He talks about how the power of really great organizations is in their system, in their process, in how they operate as a company.

[00:25:28.27] They have a common way of doing it, everybody understands it and buys in, and therefore the system operates very effectively. That’s where we’re trying to get to with CMM. CMMI… It treats those two models – the organizationally-based staged model and the practice-based continuous model – as two representations of the same thing, and they aren’t. They’re theoretically not the same, and they’re practically not the same. But to make them into the same model, they built this architecture that was way overdone. CMMI got huge because they basically didn’t understand the theory and lost control of the architecture. There were all these practices, and it got very expensive to do the assessments, there was tremendous duplication of practices, so ultimately people just said no: “We’ll buy the book because there’s a lot of good guidance in there, but we’re not worried about doing the assessment, having to set and follow these practices.”

[00:26:21.22] It was unfortunate, and hopefully with the next generation of CMMI they’ll bring it back to something that’s more appropriate for most businesses. LEAN is the issue here. We want something that’s LEAN and only includes the practices that you really must have.

Sven Johann:                        [00:26:38.04] That means you need to know all the best practices and you need to understand which ones are important for your company. What are companies like Google or Netflix doing? Are they following the CMM models?

Bill Curtis:                               [00:26:53.06] They would probably never tell you they’re doing CMM, or CMMI; they wouldn’t want to. But if you went in and looked at what they’re doing, they’re probably doing something very much like that.

Sven Johann:                        [00:27:05.22] Exactly. They don’t have to say they are doing it, but probably they pick out…

Bill Curtis:                               [00:27:08.10] The greatest piece of software ever built was space shuttle avionics. It was probably the most elegant, beautiful and high-quality system ever created. That was the model for what a level five was. They were incredible.

I visited and spent days down there, and I’d never seen anything like it. When I went there, they said “Look, we know we’re not level five, because there are things we don’t do. But this is what we do do, and we know it works. We know that when we deliver software to NASA, we believe there are no defects in there that can affect safety.” That basically means, since nobody knows what [unintelligible 00:27:40.28] It was very close to defect-free. You couldn’t prove it was defect-free, and every once in a while they’d find something, but most of the stuff they were finding had been a defect ten years before; their process was very good at delivering very elegant code.

[00:27:58.13] They would never claim to be level five, but they were just rigorously doing these practices because they know it served their need, which is basically almost zero defect software.

A company has to decide what level of quality it needs. If I’m in telecom, I’m allowed a certain amount of downtime every year. They’re not going to pay me to go to zero, so I need to adjust my practices to get to that level, and that’s fine; it meets my business need. That’s the critical issue here.

[00:28:29.17] There’s a lot of data now; the higher the maturity – if you’re really doing it to improve the way you build software, defect rates go down dramatically, productivity goes up… People start being able to reuse stuff, because they trust it now. They didn’t do that before. So there’s all kinds of advantages that we see when it’s done right.

Sven Johann:                        [00:28:47.09] But what if I don’t want any certification for CMM level five?

Bill Curtis:                               [00:28:55.10] Look at that and see if it meets your needs. We know that people who will really implement a serious program like that have lower defect rates, and it costs less to build software; there’s good data out there that says that. You’re building it for fewer dollars per function point, per line of code, or whatever your favorite measure is, because you’re not wasting time.

In a low maturity organization, 40% of the effort is spent on fixing mistakes. In the LEAN world, that’s waste. What you want to do is eliminate the amount of wasted time you have fixing mistakes. You want to catch them early and get them out of the system, so that you’re not spending ten times more effort on the backend, undoing stuff, rebuilding stuff, reintegrating stuff and all that.
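Bill’s figures above, roughly 40% of effort going to rework and late fixes costing about ten times more than early ones, can be turned into a back-of-the-envelope calculation. This is only an illustrative sketch; the defect counts and cost ratios below are made-up numbers in the spirit of his remarks, not data from the episode:

```python
# Illustrative numbers, not measured data: Bill cites ~10x the cost to
# fix a defect on the back end compared to fixing it when introduced.
EARLY_FIX_COST = 1.0   # relative effort to fix a defect early
LATE_FIX_COST = 10.0   # relative effort to fix it on the back end

def rework_effort(defects, fraction_caught_early):
    """Total fix effort for a batch of defects, given how many are caught early."""
    early = defects * fraction_caught_early
    late = defects - early
    return early * EARLY_FIX_COST + late * LATE_FIX_COST

# 100 defects, catching 20% early vs. 80% early:
print(rework_effort(100, 0.2))   # 820.0
print(rework_effort(100, 0.8))   # 280.0
```

The gap between those two totals is exactly the waste that the LEAN view wants to eliminate by catching mistakes early.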

Sven Johann:                        [00:29:43.16] Especially architectural…

Bill Curtis:                               [00:29:45.09] It’s a critical part of the architecture, yes.

[00:29:48.15] Commercial Break [00:30:34.09]

Sven Johann:                        [00:30:33.08] It’s interesting that whatever method you invent… The first thing you read is that you don’t have to apply everything; just take the stuff which makes sense for you.

Bill Curtis:                               [00:30:49.14] Here’s the problem – most people believed that before they had CMM. They weren’t doing the right stuff. They believed, “All I have to do is have better design practices, better testing practices and better tools.” They didn’t touch management, because they were thinking about engineering. They weren’t thinking, “The fact that we’re doing a crummy job at engineering means we don’t have the time to do it right.” Until somebody said, “No. The first step is you must get control of management. You must control your commitments, you must control your baselines.”

[00:31:19.01] You need guidance, and that’s what CMM provided. It’s saying, “Look, these are the important elements you must have to solve this set of problems. Once that’s solved, here’s the next set of problems.” It organized an approach to solving problems and it didn’t let you leave out some of the critical parts. That’s why I’m no fan of the continuous model, because it lets you leave out critical parts.  I could focus only on engineering and not focus on management, and I’m back to schedules I can’t achieve.

Sven Johann:                        [00:31:46.28] You’re saying there is a relation between CMM and LEAN? For me it was always like CMM and Agile…

Bill Curtis:                               [00:31:55.15] It should be, but the problem is the way CMMI and its assessment method worked, you had to have evidence that you performed every single practice. That wasn’t true in CMM. You had to achieve the goals in CMM. The practices are guidelines for what you might do to achieve those goals, but if you have a better way to do it, or you don’t have to do them all, and you achieve the objectives of the goals, then fine. But CMMI said, “Every practice has to be right”, and that’s [unintelligible 00:32:23.10] LEAN. Because LEAN would say, “It may be a good practice, but if it doesn’t add value, then don’t do it.” So with the SCAMPI assessment method, that’s where we ended up butting heads with the LEAN concepts.

Sven Johann:                        [00:32:39.22] You mentioned the continuous model. That is continuous delivery, dev ops…

Bill Curtis:                               [00:32:43.29] No, not at all. Continuous says, “I can pick any design practice and take it up and down a level. I want to be level five in design, but I’m only going to stay at level two in management.” It’s practice-based. Everything is this practice and that practice, but it’s not about the organization. The essence of the CMM, as we originally conceived it, is that the staged model is an organizational transformation. I’m integrating this collection of processes to fundamentally change the way the organization works. It’s not about design or about planning, it’s about getting a management infrastructure in place that doesn’t make bad commitments and is able to control the baselines and control change, so it doesn’t throw you into chaos.

[00:33:30.14] Then it’s about getting things organized so that I have a common way of doing it that I can tailor to different projects. But it’s the organization’s way of doing it, it’s not “I just have a good design technique” or “I have a good testing technique.”

I’m not a fan of the continuous model. A lot of people are; I’m not. But if I were going to rearchitect these things, I would split them. Putting them together, the architecture you have to have creates a lot of duplication, a lot of excess that doesn’t need to be there.

If you want a continuous model, have it, but don’t try to force the staged model to fit with it in a common way. Because what happened is they took the staged model and way overburdened it with practices; there was duplication…

[00:34:09.19] I would break them apart. Some additions and changes are needed, but the basics are there. But I would certainly break them apart and change the architecture to reduce the duplication.

They basically made a matrix. There’s this set of practices, then there’s that set of practices; one’s on the rows, one’s on the columns, and you have to fill in every cell. No good designer works that way, building a matrix, filling in all the cells, and then asking, “Which cells are important?”

Sven Johann:                        [00:34:40.04] What’s on the matrix now?

Bill Curtis:                               [00:34:43.05] They have specific practices on one axis. Those are the unique practices you would perform for management, design, testing or whatever. Then they have practices on the other axis, which they call generic practices. Those have to be applied to every single process area, in order to institutionalize it. One of those is configuration control. Well, I’ve got a process area called configuration control, so now I’m sitting here saying, “I’m gonna have configuration control on my configuration control.”

[00:35:13.04] When they got to level four, it blew up. I’m gonna have statistical process control of my statistical process control? That just didn’t make any sense. The theory blew up, but they didn’t do anything about it. I’d just get rid of the generics, because I know how to build all of that stuff into the basic model so you don’t have redundancy. It doesn’t increase the basic model that much, but it cuts the overall model by at least 40%, and almost 50% in some cases, in practices. It makes it much more consistent with the original intent that Watts, the design team, and I had when we built the CMM.

Sven Johann:                        [00:35:46.29] Now if the U.S. Digital Government would ask you to design a new maturity model in terms of dev ops and continuous delivery…?

Bill Curtis:                               [00:35:59.09] You add those practices in. Level two is about getting the project stable, so I’d want the dev ops operation – if that’s going to be my process – to at least be properly estimated. Then we understand what we’re going to get done by when, so we can track, “Are we making progress, or are we not making progress at the right rate?” If we’re not making progress and we’re falling behind, then we need to take corrective action.

[00:36:22.01] It doesn’t guarantee your estimates are going to be right, but it does guarantee you’ll know they weren’t, so that you can take some corrective — maybe you cut functionality, maybe you extend the schedule, or whatever. But you don’t say, “I’m going to deliver Obamacare on October 1st and only test it for two weeks.”

Sven Johann:                        [00:36:39.09] [unintelligible 00:36:37.18] with one project, and then you tried to stabilize that one project…

Bill Curtis:                               [00:36:42.27] Yes, and you move it out across the organization, project by project. I don’t have to have project one finished before I start the next; I get these guys working on it, then I get the next guys, then the next… And what we found is the best way to do that is called project launch workshops. Up front, at the very beginning of the project, the consultants (or whoever the experts are; if they’re internal, that’s fine) work with the project to build a plan, to understand, “Do we have the requirements pretty well set? Are there a lot of things that haven’t been stated that we need to go back and ask about? Let’s make sure we understand what we’re trying to do, and we’ve got good stories.”

[00:37:19.10] Then we’ve got an estimate for how much work we can get done by such and such date, if it’s date-constrained. If not, then we look at what we have to build and say by when can we get this done. You then have a plan that you can track, and if you’re wrong, that’s okay, but you’ll learn from it. As long as you’re tracking, you’ll realize, “We way underestimated the amount of time it would take to do that. Next time we make that adjustment, and now we need to make a change in our plan.” You deliver less functionality, or you extend the schedule, and so on.

Sven Johann:                        [00:37:47.05] Is it hard to convince companies to try that out?

Bill Curtis:                               [00:37:51.00] No. Not after a disaster.

Sven Johann:                        [00:37:56.26] I am a consultant, and I just wonder if I have to wait until a big disaster happens to…

Bill Curtis:                               [00:38:03.00] No, you have to wait until the executives want to make improvements. If the executives don’t back it, it’s not going to work. You have to have executive management’s support to make the process improvement program work. Once you’ve got that, you’re on your way, because they’ll say, “We are willing to do these things”, and managers that don’t want to do these things will no longer be managers. But they have to be willing to remove managers who won’t go along. The successful ones do, and you see some people continue on as managers, because they understood what management was, and other guys who just wanted to have a good time end up doing something else. That’s critical.

[00:38:37.08] Let’s see about the other part of your question. You have to have management support, you roll it out project-by-project, with these project launch workshops. Then whoever the consultant is comes back periodically and says, “How are you doing? Are you having any issues?” etc. So you don’t have to have a disaster, you just need to have an executive say, “We need to get this under control.”

[00:39:01.10] The way I’ve convinced them of the value of it, you [unintelligible 00:39:05.21], you ask them one simple question: how many good project managers do you have? They start counting on their fingers and realize, “I don’t even need one hand to count the good ones.” If they’re a low maturity organization, they’ll suddenly realize they don’t really have good managers in place, and these guys aren’t trained. They were probably just promoted because they were good coders. You then say, “Look, as you start this program, the first thing I’m going to give you is project management. These people will learn what it means to manage, to plan projects, to track projects, to take corrective action, to work with their people, and what you will come out with is project managers who can deliver projects. Ultimately, they’ll learn what it takes to do good estimating and deliver on time.” And they suddenly say, “Okay, I get that.” Because if they’re executives, (hopefully) they’ve been successful at management, and they understand what they’re not seeing and need to see, and that really helps them convince themselves they need to launch this and enforce it.

Sven Johann:                        [00:40:00.17] Yes, but it’s probably not easy. Even if you’re convinced you need better project management, where do you find these guys?

Bill Curtis:                               [00:40:07.19] You’re going to have to train them, you’re going to have to find people with an orientation towards it. It’s not always the best programmers that have an orientation towards being managers. So that’s part of it. They’ll be trained, and that’s what the launch workshops do; you work with the [unintelligible 00:40:20.28] manager, to learn what it means to manage. They learn how to build a plan, they learn how to evaluate requirements to make sure they know what’s complete, what’s incomplete, and what they need to go back and ask about. They learn that they need to get some plan for configuration management, how they’re going to control the configurations and the versions, and they get that in place.

[00:40:41.29] Often, there’s early resistance from the developers, because “You’re gonna put me in a straitjacket and make me march like a robot.” The thing to explain to developers is, “Listen, you’re already in a straitjacket, and you are marching, not like a robot, but like a crazy rabbit, because you’ve got schedules you can’t achieve, right?”, and they say, “Yeah.” “How much time do you have to sit down and really think through your design issues?” “None. I just have to get code out.” So they suddenly realize, “If you can get control of this, I can act more like a professional. It’s not about me adhering to some idiotic process; it’s basically freeing me up to engage in professional practice and follow a process that we know, and were taught in school, would actually work.”

Sven Johann:                        [00:41:24.02] It’s also about creating some awareness of these things, because what we see is architecture… A lot of people don’t think about architecture, but if you do a workshop and you say, “You have to look at that, you have to think about all the [unintelligible 00:41:37.20] and things like that.” It’s like, “Oh, okay… That’s interesting.”

Bill Curtis:                               [00:41:42.17] The other issue is that the next body of resistance comes from middle management, if you’re in a big organization. They’re not project managers now, they’re higher up, and they’ve gotten to where they are by whatever it is they were doing. They knew how to manage the relationships and push guys to work nights and weekends and bring in carryout pizza and all that. Now their whole way of existence is threatened. They can’t use these motivational techniques anymore, and they sometimes feel like this is an invasion of their private domain and the way they want to run their business.

Sven Johann:                        [00:42:14.06] Why is it threatened?

Bill Curtis:                               [00:42:15.24] Because things are going to change below them, and there’s an executive above who’s putting pressure on them and they don’t really have complete control over what’s going on below them, although they’ll have a lot more control if they get good management in place. A lot of them have had a way to run their business that was like, “I’ve gotta make these commitments, and I motivate people to meet them; they’re up here nights and weekends.” They can’t do that anymore.

Sven Johann:                        [00:42:38.19] How does it work at Google? Let’s stick to that example.

Bill Curtis:                               [00:42:44.24] I can’t tell you how Google works internally, because I’ve not been inside Google. I do know this – several things go on. Number one, they have a common code tree. They’re all working against this common code tree.

Sven Johann:                        [00:42:57.27] Against what?

Bill Curtis:                               [00:42:59.00] A common code tree. It’s where all the software…

Sven Johann:                        [00:43:00.22] Oh, they have one repository.

Bill Curtis:                               [00:43:03.26] Yes, and everybody works against it. They do buy some commercial tools, from what we’ve heard, but they frankly build a lot of their own tools, because they’re way beyond what commercial tools can do, and their requirements for the tooling are just out there. The future of software engineering is being developed in companies like Google, Amazon and Microsoft, because they really are at a level of usage – you know, how many billions of people sign on to Google, Amazon and MSN and all this. They have stresses that nobody else has yet, but a lot of people will have. They’re having to develop the tools to manage the way they go about this. They’re having to build processes where they can update systems in-flight… How do you repair an airplane when it’s 35,000 ft. in the air? They’ve learned how to do that with software, while they’re in operation.

[00:43:51.21] There are specific things they do, and they have rules. There are processes that they have to pretty rigorously follow, to be able to do the things they’re doing at the scale they’re doing it, and under the performance pressure they’re doing it.

Sven Johann:                        [00:44:03.08] Performance pressure in the sense of innovation, or…?

Bill Curtis:                               [00:44:06.23] No, performance pressure in the sense of, “When I get on Google, I want an answer. When I get on Amazon, I want to see a product and I don’t want to wait five minutes; I don’t want to go and have a cup of coffee, come back and see if they figured out what the question was.” So they have that kind of performance demand, which is extraordinary. They’ve got these huge server farms and everything else to help manage that demand, but that itself is a body of software infrastructure that’s really quite something.

[00:44:31.14] They also have to innovate, because everybody’s competing against each other. Those three are competing to see who’s going to own the cloud, and everybody wants you to use them as the cloud service. There’s high competition and they really do have to innovate, but at the same time they can’t lose control of what they’ve got, because we expect it to perform 24/7, 365, at almost microsecond speeds. That requires you to have a very rigorous internal process. They couldn’t care less about CMM, but if we went in there, we’d probably find an awful lot of the practices they’ve put in place would meet the requirements, because they have to, to perform at that level. And they’ll use measurements a lot.

Sven Johann:                        [00:45:20.01] That’s really interesting. That’s for me the bridge to software process improvement, that — Google, I don’t know, but Spotify, they measure everything.

Bill Curtis:                               [00:45:30.20] Yes, most all of them do.

Sven Johann:                        [00:45:31.17] There is a measure for everything. They don’t push anything further if they don’t have the data that proves the thing works.

Bill Curtis:                               [00:45:42.18] Yes, absolutely. The whole concept of LEAN startup is get data on what works and what people want, go down that path and cut off this path that they didn’t care about. They really do have to be constantly innovating, just to compete with the other guys that are constantly innovating. That’s a high-maturity… Level fives have got constant innovation. Having a standard process, you can bring that back into an update and then learn what the new measures, thresholds and baselines are.

[00:46:13.17] It’s pretty sophisticated stuff, and the maturity model gives you a good guide for how to get there. Doing it because you want the certification is not the reason. Do it because you want to compete like the dickens, and you really want to get to where you’re able to innovate, you’re able to compete at a very high level; you’re not wasting a lot of time fixing defects and trying to refactor architectures that you shouldn’t have to refactor. That’s the objective with this.

Sven Johann:                        [00:46:40.12] Follow processes…

Bill Curtis:                               [00:46:41.11] Follow processes, and one thing I recommend people look at is Humphrey’s Team Software Process, because the data on that is extraordinary. It basically says, “As a developer, I want to develop myself the way an athlete would develop himself.” With athletes, everything’s measurement. They measure everything they do.

What Watts said is that if people would measure how they make mistakes, how long it takes to do things, their own personal process, then as they get together in a team they can say, “This is what I can do, this is what I can’t”, and the team makes better commitments and starts learning at a much faster rate. They learn why they make mistakes, so they avoid that in the future. The data on the Team Software Process is absolutely extraordinary in terms of quality levels and productivity levels.

Sven Johann:                        [00:47:25.12] I can imagine. I’ve only read the book on the Personal Software Process; it was already pretty interesting, because they said stuff like, “Please check how you work. If you go to the toilet, make a note. If somebody asks you something, make a note.” What it showed me was that I rarely work more than 20 minutes concentrated on something. And once we have this awareness…

Bill Curtis:                               [00:47:52.16] Not everybody goes to the extent that Watts did. Watts was a unique individual. Watts had statistical process control over how he balanced his checkbook. He literally measured how long it took, how many mistakes he made, how long it took to fix them, and he got to where he never made mistakes balancing his checkbook; he knew exactly how many pennies he had… He was an extraordinary individual. Not everybody has to get to that level, but if you want to be a truly great software engineer, you need to know what you’re able to do, so you can spot the weaknesses in your own personal way of doing software development, correct them, and learn why you make certain kinds of mistakes and correct that.
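As a rough illustration of the kind of self-measurement Watts advocated, here is a minimal, hypothetical personal log in Python. The activity names and entries are invented for the example, and the real PSP forms are far more detailed:

```python
# A toy PSP-flavored personal log (hypothetical format, not Humphrey's
# actual forms): record work intervals and interruptions, then total
# them so patterns in how you spend time become visible.
from dataclasses import dataclass

@dataclass
class Entry:
    activity: str   # e.g. "code", "fix-defect", "interruption"
    minutes: int

def summarize(log):
    """Total minutes per activity across the log."""
    totals = {}
    for e in log:
        totals[e.activity] = totals.get(e.activity, 0) + e.minutes
    return totals

log = [Entry("code", 18), Entry("interruption", 5),
       Entry("code", 12), Entry("fix-defect", 25), Entry("interruption", 10)]
print(summarize(log))   # {'code': 30, 'interruption': 15, 'fix-defect': 25}
```

Even a log this crude makes the point Sven raises above: it shows concretely how short the uninterrupted stretches of concentrated work really are.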

Sven Johann:                        [00:48:28.02] And as a team — at Spotify they have the squads, and all kinds of measures on them. Like, what is the customer value they produce?

Bill Curtis:                               [00:48:41.29] That sounds interesting.

Sven Johann:                        [00:48:42.24] What’s the [unintelligible 00:48:43.04] to see in software process? What do you have to do?

Bill Curtis:                               [00:48:47.24] I need to know my data before I can integrate with your data. I need to know how I perform before I can tell you what I can do and what I can’t do, and how we coordinate. If I come into the team launch meeting and we’re trying to figure out how much can we get done in a reasonable amount of time, what are our commitments, we each have to be reasonable about what we can get done. Traditionally, the smart guy in the room would say, “Oh, I’ll do that. And I’ll do that. And I’ll do that, and I’ll do that”, and they would get grossly overloaded and they would become a bottleneck, and the team would slow down. Because nobody else could get stuff done, because they’re waiting on this guy to finish stuff, and it becomes a mess.

[00:49:21.22] Each person needs to know what they can reasonably expect to get done in a reasonable length of time, without working nights and weekends, and not overload themselves, but spread the work across the team so the team can perform at a sustainable level and make commitments that they believe they can meet, and then they can learn why they didn’t meet them. If I’ve got the data and I know, “Okay, I overestimated this, because I had to do this little thing”, so then it becomes much more disciplined.
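The idea of knowing your own data before committing can be sketched in a few lines. This is a hypothetical simplification of how a TSP-style team launch might use estimate-versus-actual history; the numbers and the simple ratio correction are illustrative, not part of the actual method:

```python
# Hypothetical sketch: correct a raw estimate using your own
# estimate-vs-actual history, so team commitments are grounded in data.

def correction_factor(history):
    """history: list of (estimated_hours, actual_hours) pairs.
    Returns the overall actual/estimated ratio; > 1 means chronic
    underestimation."""
    estimated = sum(e for e, _ in history)
    actual = sum(a for _, a in history)
    return actual / estimated

def corrected_estimate(raw_estimate, history):
    """Scale a new raw estimate by the historical correction factor."""
    return raw_estimate * correction_factor(history)

history = [(10, 14), (8, 12), (12, 16)]   # estimated vs. actual hours
print(round(corrected_estimate(10, history), 1))   # 14.0
```

With this kind of record, a developer can walk into a launch meeting saying "my 10-hour tasks historically take 14" instead of volunteering for everything and becoming the bottleneck.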

Sven Johann:                        [00:49:55.23] When I was in my first Extreme Programming and Scrum project, exactly these things happened. Scrum follows these practices.

Bill Curtis:                               [00:50:07.04] Scrum when done properly is very powerful.

Sven Johann:                        [00:50:09.10] What is Scrum done properly?

Bill Curtis:                               [00:50:11.27] I remember seeing one of the developers of Scrum at one of the Agile Alliance conferences. He stood up, gave a long session, and in the middle of it said, “You know, 70% of the companies I visit are doing Scrum-but.” I said, “What?” He said, “We’re doing Scrum, BUT we don’t hold daily stand-ups. BUT we don’t do daily builds. BUT we don’t…” – no, they’re not doing Scrum. They’re doing Scrum-but. They’re not truly agile, they’re not doing the method; it’s just [unintelligible 00:50:35.10]

In too much of what was called Agile, people just use that as an excuse to do what they wanted to do, whereas Scrum is a disciplined method, and if you do it properly, you’ll get very good results. A number of the other Agile methods are that way. There are various Crystal methods, that — I wanted to say Alistair…

Sven Johann:                        [00:50:57.03] Alistair Cockburn?

Bill Curtis:                               [00:50:57.17] Yes, that Alistair Cockburn developed. His concept was that you can’t apply the same method to every type of project – huge projects are different, because you’ve got multiple teams – so he adjusted the practices at each level of the method to work with the kind of organization and the kind of challenge you’re facing in that kind of project. He tailored the process to fit the nature of the project. He had a very good sense of that. A lot of good thinking by some of the top guys went into what a truly Agile program is.

[00:51:30.05] The one thing they threw out that was in many cases a very powerful practice – the data is absolutely clear on this one; it’s one of the best practices in software engineering – was formal inspections. It is remarkable in its ability to capture and remove defects very early.

Sven Johann:                        [00:51:44.21] Formal inspections in the sense of pair programming, [unintelligible 00:51:47.21], design reviews…?

Bill Curtis:                               [00:51:49.17] No, pair programming is not formal inspection. Formal inspection is, “I’ve built/designed my component. I want two-three other people to review it.” Formal inspection means they will go off and spend time to review it, and then we’ll have an inspection meeting where they’ll come back and say, “These are the things we’ve found.”

I may have one guy looking for security problems, one guy looking for functional problems, so different people will take different perspectives. It was incredibly powerful. That’s gone, because that type of work doesn’t fit in the three-week cycle that you’d have on a Scrum project, or four-week cycles, within a single sprint.

One of the things that can replace that, if it’s done well, is structural analysis: static analysis and maybe some dynamic analysis. Because if you can do analysis on the system as built – not just the code units with an IDE tool, but really look at it at integration time, integrate the software and then run a static analysis at the system level – then at the code level you find code problems, at the system level you find system problems, and many of these are the same problems you would have found in a formal inspection.

[00:53:01.22] If you can use the advanced tools that are out there now for static analysis, dynamic analysis, behavioral emulation and some of these other things, you’ll gain back some of the power that was lost when formal inspections were dropped.
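As a toy illustration of the code-level end of what Bill describes (nothing close to a commercial system-level analyzer), a few lines of Python using the standard-library `ast` module can flag a classic finding that a formal inspection would also catch, a bare `except:` that silently swallows every error:

```python
# Minimal static-analysis sketch: parse Python source into an AST and
# flag bare `except:` clauses, which hide errors instead of handling them.
import ast

def find_bare_excepts(source, filename="<src>"):
    """Return a list of findings for bare `except:` handlers in source."""
    findings = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        # An ExceptHandler with no exception type is a bare `except:`.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"{filename}:{node.lineno}: bare 'except:'")
    return findings

code = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(code))   # ["<src>:4: bare 'except:'"]
```

Real tools of the kind discussed here go much further, analyzing data flow and cross-component structure over the integrated system, but the principle is the same: mechanical review of the code as built.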

There are things that can make Agile very powerful, because you have much faster feedback, which is critical. If teams have time to take the feedback and absorb it, and then make their decisions about what they’re going to do about it, and what’s wrong with the system, they learn. They learn about parts of the system they didn’t know about before, they learn about assumptions they had that weren’t correct, they learn about certain kinds of structural issues that were inefficient, and that just makes them a much more powerful team.

Sven Johann:                        [00:53:43.14] But usually I should always do some sort of inspection? When I collect the requirements, then somebody should do that?

Bill Curtis:                               [00:53:50.19] It would be a really good idea to have some kind of formal inspection to see, “Are these complete? Have some of them been left so fuzzy that we really don’t know what they mean?” And the same thing with stories. They’re not necessarily requirements – they’re like requirements, but they’re not exactly requirements. So if I’m going to live off stories, I need to understand what assumptions are being made that aren’t being stated. What are the aspects of this system that you’re not going to get by reading a story, by getting a scenario? That needs to be thought out, so that you don’t go down the wrong path with your architecture.

Sven Johann:                        [00:54:26.19] My problem with inspection is it’s already too late. When I have to do a larger review, I’m like “Oh, god… So many details.” I just don’t know anything about it, and lots of things get lost. I’m just wondering if it’s a healthy back-and-forth of…?

Bill Curtis:                               [00:54:47.04] It is, if it’s done in a well-run waterfall. The Agile world is always [unintelligible 00:54:51.12]. Some of the finest software ever built was built that way, especially avionics software; the best I’ve ever seen was a rigorous space shuttle avionics system. It’s probably the prettiest piece of software ever. It was built with a rigorous waterfall model. Most people aren’t doing waterfall with any discipline or rigor. They just get some idiotic date thrown at them and are told, “Go build it.” There’s really not the discipline required.

[00:55:23.29] With the waterfall, it was the bad management and the poor discipline that led to a mess, and they said, “Look, if we could at least get faster feedback…” Because for one, the customer doesn’t know what they want, so I really need to be able to give them a piece of the system – a prototype, if you will – get them to react to it, and if they like it then we can build the next piece on top of that. But I want to give them something that works, that they can look at and decide, “Okay, what do you want next? What stories make sense to you?” So there was that aspect.

The rule with the space shuttle was, “We build prototypes all the time. NASA wasn’t sure about the requirements, so we build a prototype and they react to it.” But the rule was that we throw out the prototype, because it was not built with a level five process, and it probably contains defects we don’t even understand.

Sven Johann:                        [00:56:05.28] So the prototype just verified…

Bill Curtis:                               [00:56:08.13] It was just to get the requirements straight.

Sven Johann:                        [00:56:11.28] Requirements straight, and a prototype to prove non-functional properties.

Bill Curtis:                               [00:56:18.24] You probably wouldn’t prove the non-functional properties with the prototype, unless you were aiming to figure out a certain structural problem. Because you’re not building it to be structurally strong, you’re building it to demonstrate functionality, to get the user to react.

One of the problems that I’ve seen with Agile is that the goal is, “I want to respond to the customer, I want to be Agile, so as they learn more about what they want in their system, I have the ability to change direction and get it closer to what the customer really wants.” We assume the customer doesn’t always know what they want, but here’s the worst case: the worst case is going to a bank. How do banks grow? They grow by buying other banks. And each bank has its own process, its own way of doing things, and they don’t want to change. So if my representative on the Agile team is you and you’re from division A, you don’t care what division B wants. And division B is not necessarily going to want all the things you wanted. I’m just getting your view, but if I’ve got 20 different banks that have been acquired, they all want a different way of doing it.

[00:57:21.26] Now, all of a sudden, the requirements are getting massively complex, some of them conflict, so I can’t constantly respond to whatever one user tells me. I have to be able to do all this stuff, because that’s what the users are requiring. It’s a case where somebody has to provide adult supervision and say, “Folks, we have got to have a common business process here, because the IT systems that we’re going to create are going to be monstrosities. They’re going to be impossible to maintain. We will not be one bank, we’ll continue to be 30 different banks; there’s no economy of scale, and we’ll never get the ROI.”

Sven Johann:                        [00:57:52.25] Somebody has to do it, but mostly this person doesn’t show up, right?

Bill Curtis:                               [00:57:56.05] Exactly. The bottom line is that part of what Agile needs to do, when they start getting a lot of conflicting stories from different people and there are a lot of different processes out there, is to say, “Look, you need to go straighten out your business process and get some consistency here, or else we’re going to build a monstrosity. We can’t keep responding to you if you’re telling us 87 different things.”

[00:58:16.24] We did an analysis on a bank one time, and had a very brilliant guy, a famous computer scientist, do it. They had 183 processes for their credit card division. They had 18 different ways to open an account, one for every different credit card company that they serviced. By the time he was finished – he only spent two months – he had cut it down to 53 processes, and three ways to open an account.

Everybody does it a little bit differently, but here are the generics: get down to these basics and allow small variances on them. He just cut their process down and made it much simpler, much more understandable to the people who have to do the work, and the IT system then could be much simpler.

Sven Johann:                        [00:58:59.10] One claim I came across here these days was that 20% of the defects come from bad code, and 80% come from bad requirements. What does that mean?

Bill Curtis:                               [00:59:14.08] That number varies based on what study you look at, but a large number of problems do come out of bad requirements – requirements that weren’t stated completely, or were fuzzy and whatnot. [unintelligible 00:59:25.23] it’s a lot of that; they just didn’t state the requirement, and it was called a defect, but really they just didn’t do something the user wanted. Or there were conflicting requirements. One person said this, another person said that, and they weren’t the same. It never got worked out, so now the system’s got this problem.

Sven Johann:                        [00:59:47.27] How do I fix that?

Bill Curtis:                               [00:59:48.26] You get them to sit down and work it out until they come to an agreement on the way the system has to work. You shouldn’t make that decision as an IT person or as a developer; it’s not your responsibility, it’s theirs. We have to stop taking the blame in software development for problems the business has. They need to get their business straight, get their business process simplified and organized, and not have 87 different ways to do everything and then expect us to build that into a clean, easy-to-maintain IT system. It’s just not going to happen.

[01:00:21.20] When you discover the problem, you need to go back to the customer and say, “Here’s a problem. I can add these 87 features, but it seems to me there’s only a basic set of 20 here. This will be massively complex for you and me. You said this and you said that, and those lead to completely different algorithms”, or something.

Sven Johann:                        [01:00:42.18] From a contract point of view – I’ve been in this situation quite often, where I said, “Look, this makes no sense”, and the answer was, “Okay, but we have fixed requirements. Just do it, and when you’re done, we’ll make new requirements to build it out”, and that is really frustrating. What can you do about that?

Bill Curtis:                               [01:01:01.17] It’s really frustrating. If the customer says, “That’s it. These are the requirements. I don’t want to argue anymore about it”, you have two choices. One is to build it exactly as the customer stated, deliver exactly what the customer said in the requirements, and then negotiate all the change requests, on which you will make a small fortune.

The other option – if you’re a true level five, and it’s a real disaster and you can see it’s not going to work, it’s going to be a nightmare and they’ll be mad in a year and a half – is to just say, “Gentlemen, we want you to go to one of our competitors and see what they’ll say. We’re happy to let this business go to a competitor”, and you just back away from it.

[01:01:41.09] If you truly are a high-maturity organization, you’re not going to have to worry about business. People want you because you’re reliable, you’re less expensive, and you can walk away from contracts you know are going to be disastrous.

The people behind Obamacare claimed to be level five, and clearly they weren’t. If I were the manager in a level-five organization, I would have walked away from that contract, saying, “If we cannot start by this date, we will back away. You can’t test this thing; we cannot guarantee it’s going to work with two weeks at the end of the process to test this massive system. You need to find someone who thinks they can do that. We know we can’t.”

If you’re a high-maturity organization, you have tremendous power to negotiate, because you have a record of having succeeded and delivered high-quality systems on schedule. “We know – we are the professionals. We know what it takes. If you want to do it in half the time, be my guest, but not with us.”

Sven Johann:                        [01:02:40.05] These days I only work for small and medium-sized companies, because the leaders are mostly also the owners of the company. They really care. If you explain a problem to them – “Everything we did here… Oh, we forgot stuff” – there is one person above everything who is responsible and wants everything to turn out well, so I can go to this person. But if I go to a large, 50,000-person company, it somehow gets lost in the whole…

Bill Curtis:                               [01:03:13.27] If they’re immature, yes. If they’re mature, not necessarily. Here’s the biggest problem you have in a small company (especially a really small one), particularly if they’ve put out an IPO and have public ownership, or they’ve got money from venture capitalists: how many of those CEOs actually last more than three years, or even two? The problem is they know how to get something started, but they don’t know how to get it to the next level of development. They don’t know the processes and disciplines.

[01:03:45.07] You see these guys get the thing started, it’s a massive success, but they don’t know how to take it to the next level. They don’t know how to scale it, because they’ve never had to manage. So they take that founder out and put a more experienced manager in. That happens all the time.

It’s exactly what happens in process improvement. You find middle managers and project managers who can’t move in this direction. They can’t bring in the discipline required, and you can’t scale a business without it. That was a fundamental lesson in Good to Great and Built to Last – you have to have a disciplined process in order to be able to scale.

[01:04:22.01] The Toyota Production System – the quality system and process from which Lean came – starts out with the premise that you have a standard process. If you don’t, you can’t improve. [unintelligible 01:04:32.03], then you’re nowhere.

Sven Johann:                        [01:04:37.22] I heard that from David Anderson. He was asked, “Introduce kanban to my organization”, and he said, “I can’t. If you don’t have a process already, I cannot do that. If you have a process, I can come in with kanban and we can improve the way you work. But if everything is still chaos, we first have to introduce something like a process.”

Bill Curtis:                               [01:05:04.22] One of the fundamental concepts in kanban is that it’s a way to manage the amount of work you’re taking on. You can’t add work if there’s not a slot for it. It is a disciplined way to control commitments, and to learn how much you can actually get done, because you’ll see a lot of things blocking if you’re overcommitting. So yes, it’s a good process we’re looking at.
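[Editor’s note: the work-in-progress (WIP) limit Bill describes – no new work enters a stage unless there is a free slot – can be sketched in a few lines of Python. This is an illustrative sketch only; the class and method names are invented for this example, not taken from any real kanban tool.]

```python
# Minimal sketch of a kanban WIP limit: a work stage that refuses
# new items once its limit is reached, forcing the team to finish
# something before pulling more work in.

class KanbanColumn:
    """A work stage with a fixed number of slots (the WIP limit)."""

    def __init__(self, name: str, wip_limit: int):
        self.name = name
        self.wip_limit = wip_limit
        self.items: list[str] = []

    def try_pull(self, item: str) -> bool:
        """Pull an item into this column only if a slot is free."""
        if len(self.items) >= self.wip_limit:
            return False  # no slot: the commitment is refused
        self.items.append(item)
        return True

    def finish(self, item: str) -> None:
        """Completing an item frees a slot for the next one."""
        self.items.remove(item)


in_progress = KanbanColumn("In Progress", wip_limit=2)
assert in_progress.try_pull("story-1")
assert in_progress.try_pull("story-2")
assert not in_progress.try_pull("story-3")  # blocked: limit reached
in_progress.finish("story-1")
assert in_progress.try_pull("story-3")      # a slot opened up
```

The refusal in `try_pull` is the disciplined commitment control Bill is pointing at: the board makes overcommitment visible instead of silently absorbing it.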

Sven Johann:                        [01:05:30.07] We’re already past the one-hour mark, so we should wrap up. The most important things for software are…

Bill Curtis:                               [01:05:37.25] You must measure. You must control commitments, you must control the baselines, and your managers have to back the developers. They really have to provide an environment in which developers can work in a professional way. Developers really want to produce excellent stuff that they’re proud of. If they’re thrown into an environment where they can’t work in a professional way, where they’ve got commitments they can’t possibly meet without working nights and weekends, eventually they’ll get fed up and leave.

Controlling commitments and baselines, and measuring, are critical. Then build on that at the organizational level – get to an organizational way of doing it, with a culture built around it – and it will be quite something. Defect rates go way down, it becomes much cheaper to produce software, you get stuff out much faster, and because it’s better software, the business becomes more agile. Why? Because you can make changes much more quickly.

[01:06:34.06] Business agility is strictly limited by the quality of the code. If it’s hard to make the changes the business needs to compete, they’re going to fall behind their competitors. So measure, and put discipline in place; management is responsible for providing that environment, so that developers can use a professional process.

Sven Johann:                        [01:06:53.25] And software architecture. Get the architecture right, because…

Bill Curtis:                               [01:06:56.01] Get the architecture straight or you will pay a huge price, and that price may be death. You can’t refactor a [unintelligible 01:07:04.24] architecture on an important system.

Sven Johann:                        [01:07:08.01] In the end, it’s impossible to put safety…

Bill Curtis:                               [01:07:10.27] What you can do is you stop, and typically start over. You just rip and burn.

Sven Johann:                        [01:07:16.04] Bill, thank you for being on the show.

Bill Curtis:                               [01:07:19.03] My pleasure.

Sven Johann:                        [01:07:20.02] This is Sven Johann, for Software Engineering Radio.

Join the discussion
  • Very well done. Great topic, insightful responses. Gotta love the crusty veteran who has had oodles of smoke blown in his direction! 🙂

  • Very good, informative discussion. From a production standpoint, Bill was too domineering and didn’t allow for a discussion, and Sven really struggled to shape the dialogue in that environment. I loved Bill’s message, but he was frustrating as hell to listen to.

  • Very good episode. Really liked Bill’s perspective on CMMI, agile, Lean and a host of methods for quality improvement.

  • I definitely agree that Bill dominated, but this episode stood out the most among the others. It almost seemed like Bill was predicting nearly all of Sven’s questions. He really knows his stuff, and it made this episode really enjoyable.
