
SE Radio 273: Steve McConnell on Software Estimation

Venue: Skype
Sven Johann talks with Steve McConnell about software estimation. Topics include when and why businesses need estimates and when they don’t need them; turning estimates into a plan and validating progress on the plan; why software estimates are always full of uncertainties, what these uncertainties are and how to deal with them. They continue with estimation, planning and monitoring a Scrum project from the beginning to a possible end. They close with estimation techniques in the large (counting, empirical data) and in the small (e.g. planning poker).


Transcript

Transcript brought to you by innoQ

This is Software Engineering Radio, the podcast for professional developers, on the web at SE-Radio.net. SE-Radio brings you relevant and detailed discussions of software engineering topics at least once a month. SE-Radio is brought to you by IEEE Software Magazine, online at computer.org/software.
* * *
Sven Johann: [00:01:05.19] This is Sven Johann for Software Engineering Radio. Today I have with me Steve McConnell. Steve is CEO and Chief Software Engineer at Construx Software and the author of Code Complete, Rapid Development and many other books. In 2006 he published a book on software estimation. He also served as the editor-in-chief of IEEE Software magazine.
Today I will be talking with Steve about software cost estimation. Steve, welcome to the show!
Steve McConnell: [00:01:36.03] Thanks for having me.
Sven Johann: [00:01:38.03] Did I forget to mention anything important about you?
Steve McConnell: [00:01:42.09] No, I think that pretty much covers it.
Sven Johann: [00:01:47.20] Okay, so what is an estimate?
Steve McConnell: [00:01:51.13] When we ask “What is an estimate?” we have to differentiate between how people commonly use the term, which is actually done in a lot of cases in a way that is unhealthy and unproductive, and then we have to talk about what an estimate really is both in terms of the textbook definition of estimate and also in terms of what is a healthy approach to talking about estimates.
If you look at the dictionary definition of an estimate, an estimate is an analytical prediction of how long something will take, how much it will cost or how many features can be delivered in a certain amount of time. So the notion there is of a prediction, and normally the dictionary definition will also include some concept of an approximate, a tentative or a preliminary, or something like that, so we’re talking about a preliminary projection or a tentative forecast. I think that’s right, that is the best and most productive way to think about estimates in software.
[00:02:53.11] In practice, what we find is that people on both sides of the technical and business equation tend to use the word ‘estimate’ to mean analytical prediction for sure, but they also use it to mean a commitment. Business people will say, “How long do you estimate this will take?” and the technical staff will say, “We think it will take until 15th January”, and at that point that 15th January number becomes a commitment to the business; that’s going a little beyond just an estimate.
In addition to using the word to refer to estimates and commitments, it’s also used to refer to targets. The business side might say, “We’d really like to have this done by 15th January.” As the conversation goes on, somehow we conflate the terms ‘estimate’ and ‘commitment’. This leads to all kinds of problems, and we can certainly talk about those if that’s of interest.
[00:03:51.05] To make a long story short, if we can differentiate in our conversations – and at the very least in our own minds – between estimates, targets and commitments, at least on the technical side of the equation, we set ourselves up to have far more productive conversations about estimates. We’ve seen huge value just in keeping the concepts straight, because keeping the concepts straight allows us also to keep the activities straight, so that we know when we’re estimating, when we are talking about a target, and when we are making a commitment. Keeping those terms straight helps us to avoid making commitments when we think all we’re really doing at that point in time is providing an estimate.
Sven Johann: [00:04:36.25] Commitment, target, estimate. My boss asked for an estimate, but he really wants to hear the target and a commitment from us to that target. Is that right?
Steve McConnell: [00:04:53.29] That’s right, and that’s a productive way to look at it. The back and forth here is that the business typically has something in mind that they think makes sense for the business – that would be the target. Then typically the business will come and say, “How long do you estimate this will take? How many features do you estimate you can deliver by this time?” What the business of course is thinking is “We hope that we can get a hundred percent of what we had in mind as our target.” What typically happens is people are optimistic, and so technical staff goes off and does some actual real estimation work – that I would actually call estimation work – and they typically come back, and optimism being what it is, they find out that we can’t really get close to the original business target; that it was very optimistic, and that the real capability of the organization is going to fall short of that and not be able to deliver that.
[00:05:48.17] What we’re doing there is we’re using the estimation activity to help us assess what our ability is to meet a target and whether it is sensible to make a commitment; the estimation helps us determine “Is this a commitment that we can probably make, or is this a commitment that we probably cannot perform to?” There is a notion of variability or probability in any real estimate, and there’s some wiggle room in any real estimate. We say the estimate informs the likelihood of us meeting the commitment. The estimate is not going to guarantee that we meet the commitment, but it is going to tell us we have a really good chance of meeting the commitment, or we have a fighting chance of meeting the commitment, or we really don’t have any chance of meeting the commitment and therefore we probably shouldn’t make the commitment in the first place.
Sven Johann: [00:06:46.02] When I read your book, you gave a very nice explanation of that with the suitcase when you go on holiday – packing a suitcase, how much can possibly go into that suitcase. Maybe you want to tell our listeners about that example. I think it makes it very clear.
Steve McConnell: [00:07:03.16] Yes, sure. That part of the book actually came from a column that I published in IEEE Software a little bit prior to that. The basic idea was that, if we’re going on a vacation and we’re trying to figure out what size of suitcase do we need to put all our clothes into; we’re really not trying to figure out the exact right size suitcase that is perfectly matched to the clothes, we’re trying to come up with a suitcase that is big enough to hold all the clothes that we want to take, but not so big that we’ve got a huge extra suitcase with a lot of room in it, that’s more of a chore to haul around the airport and can’t go in the overhead storage bin and so on.
[00:07:42.25] The point I make in that chapter is that what really happens when we go on a trip is we pick our suitcase, and maybe we have a few more clothes than comfortably fit in it, so maybe we have to sit on the suitcase to get it closed – but as long as we can actually get it closed by sitting on it, then we probably did okay; we’re close enough. That’s a pretty good analogy for what we’re trying to accomplish with software estimates.
[00:08:08.12] We’re not trying to say with our estimates that we’re going to hit the exact thing that we estimate exactly on the nose. Too many things change in software projects. The idea that we are going to hit whatever we estimate exactly on the nose would depend on actually delivering the exact same project that we started out to deliver, and that hardly ever happens. Too many things change along the way. But if what we estimate in the first place is close enough so that we can get to something that kind of looks like that, even if it requires some extra work or maybe a couple of weekends or evenings, those evenings and weekends are the moral equivalent of sitting on the suitcase. And if when all the dust settles we deliver more or less what we said we were going to deliver, in more or less the timeframe that we said we would, I believe that the vast majority of businesses will say, “You know what, life is uncertain, and all things considered maybe we didn’t get exactly what we thought we were going to get, but we got close enough that we consider this project to be a well-run project and a successful project overall.”
[00:09:11.14] That was the essence of that sitting-on-the-suitcase story – to illustrate that in some sense it would be great if nothing ever changed and we could predict with pinpoint accuracy exactly what was going to happen. In real life though, it doesn’t really work that way. What we’re really trying to do is get close enough with our estimates that we can close the gap through reasonable amounts of extra effort or small-scale feature cuts – small-scale changes that aren’t going to affect whether we consider ourselves to have had a successful project.
Sven Johann: [00:09:46.17] But why is that? Where is the estimation error coming from?
Steve McConnell: [00:09:53.01] I’ve been teaching an estimation class for almost 20 years now, and for at least the last ten years I’ve had students in the class go through an exercise where I say, “Alright, I want you in three minutes to brainstorm as many sources of change on your projects that invalidate the estimates on which the projects were based, and that happen on a routine basis. I’m not looking for black swan events where you couldn’t possibly have predicted it. I’m looking for the kinds of things that you don’t include in the estimates, that you probably couldn’t even get approved if you did include them in the estimates, but they happen on projects all the time.” I give them three minutes to come up with as many of those items as they possibly can, and in a group of maybe 20 people we typically end up with a list of about 30 or 40 things.
[00:10:48.20] We go through the list and I say, “Okay, I asked you not to give me the black swan events, just the kind of things that usually happen. Are these unusual things or are these common things?” and they always say, “No, these are common things. These happen all the time.” So the kinds of things that we’re talking about are things like key staff member becomes unavailable; they quit or they get reassigned to a different project. Management priorities change; the priorities you had at the beginning of the project get changed halfway through the project. Budget changes; you start the budget with ten people, but due to budget changes partway through the project you have to go down to seven people. Market changes – your competitor releases a capability that you have to respond to in the current version of the product.
[00:11:34.03] The list just goes on and on and on, and these are the kinds of things that happen on every single project. In fact, you’d be hard-pressed to find an example of a project of any size that actually stayed truly stable from the beginning of the project to the end. When I say stable, I mean the assumptions that affect the estimate remain stable from the beginning of the project. Those would be assumptions about the feature set, staffing, organizational support and so on.
Sven Johann: [00:12:06.25] That makes it really hard. I know a few execs who want an estimate, and in the end they compare just numbers. “Oh, these guys estimated a thousand person days, and it cost me 1,200. These guys are just bad.” But if I understand you correctly, since we have no chance to – or maybe it’s too negative to say no chance – how do I defend myself then? Is there any way? Do I have to write all these things down, or is it just my feeling that people are too often putting a gun to my head and saying, “Hey, you totally estimated this wrong”?
Steve McConnell: [00:12:58.20] Depending on the specifics, that scenario can come up for a few different reasons. The unhealthy reason that scenario would come up is that the people who estimated the project are just bad estimators – that in fact nothing changed, and they should have estimated 1,200 in the first place, but they didn’t; they estimated 1,000, so they overran. That would be a legitimate basis for criticizing the group. That does happen sometimes. The fact of the matter is the state of the practice in estimation is still not that good. We still mainly see teams and organizations estimating using so-called expert judgment practices. Those practices, as typically practiced, are highly error-prone. So sometimes that criticism is legitimate.
[00:13:52.09] Other times, one of the reasons that a project might have been estimated at 1,000 days and ended up being 1,200 days is essentially untracked changes to the project concept. That’s pretty common. I’ve seen many cases where project teams will start with a project concept and a set of requirements. They’ll start working on the project; requirement changes or additions will start to come in, and everyone will sit around the table – including the business partners – and look at the changes and say, “Yeah, these are good changes. We need to do these. The cost of these changes is worth it.” But what happens is, even though everyone consents to the changes, they don’t track the cumulative effect of the changes.
[00:14:40.10] I couldn’t even tell you how many times I’ve seen project teams sit around the table and agree that a change is a good idea, and agree in a local sense on the impact on the cost and schedule that change is going to have, but to have no actual ripple effect from that change into the master budget or the master schedule. So it’s very common that a project that might have been estimated at 1,000 staff days ends up at 1,200. We may very well have added those 200 things consensually, and if you were to go ask the business, “Hey, was this thing worth it? Was that thing worth it? Was the delta between the 1,000 and 1,200 worth it?” they might very well say yes.
[00:15:24.25] What we are talking about there is not any kind of technical issue; it’s a communication issue, in terms of making sure the organization understands that the organization is making a decision to increase the scope of the project. That’s not an overrun, it’s an increase in scope, and those aren’t the same thing.
Years ago, supposedly when Bill Gates was still CEO of Microsoft, he asked for status reports for projects in a specific format. I don’t know if this is urban legend or if this is true, but I like it anyway. Supposedly he asked for status reports on one page, where the top of the page gave the original schedule for the project, the second line gave the current schedule for the project, and then there was a bullet point list of deltas explaining why the new or current schedule wasn’t the same as the original schedule.
[00:16:17.07] That’s a pretty brilliant approach, because if he’s asking for reports in that form, he’s recognizing that “I have a tendency to remember the original schedule and to not remember all the stuff that changed between the original schedule and the current schedule. There probably are good reasons for the change, but in the status reporting I want a quick summary of why my memory of the original schedule has now changed and why I should be focusing on the current schedule instead of what we said originally.”
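As a rough illustration, here is a minimal sketch of a one-page report in that shape; the project dates and reasons for change are entirely hypothetical, not from the episode:

```python
# Minimal sketch of the one-page status format described above.
# All dates and reasons are hypothetical.
original_schedule = "2016-01-15"
current_schedule = "2016-03-01"
deltas = [
    "Added reporting module at business request (+4 weeks)",
    "Key developer reassigned for three weeks (+2 weeks)",
]

print(f"Original schedule: {original_schedule}")
print(f"Current schedule:  {current_schedule}")
print("Why the current schedule differs from the original:")
for reason in deltas:
    print(f"  - {reason}")
```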
Sven Johann: [00:16:49.07] Okay. I’ll try that out, it sounds good. There is a lot of uncertainty in an estimate; even if I do all that, I still can be very wrong, because when I estimate a project, especially in the beginning when the customer says, “I want to have Facebook for dogs. Please estimate that”, I know very little about this. The further I go, the more understanding I gain of the problems of the project, and then my estimates get better. I think there is a term for it, the cone of uncertainty. Maybe you can explain that.
Steve McConnell: [00:17:37.02] Sure. I talked a couple minutes ago about the state of the practice of estimation not being that great. One of the ways in which the state of the practice of estimation is not that great – even though people have been talking about this for a long time – is people are still estimating at the wrong time in projects, and putting too much weight on estimates that are created before you even know what you’re doing.
Again, every aspect of estimation is multi-faceted, in almost anything we could talk about. There is the business point of view, the technical point of view, the healthy business point of view, the unhealthy business point of view, and same for technical – healthy and unhealthy points of view.
[00:18:22.02] I don’t think there’s anything unhealthy about the business wanting to know on day zero of the project – before any work has been done at all – exactly how long it’s going to take and how much it’s going to cost. That’s a rational thing for the business to want. What’s not rational is for the technical staff to go back and pretend to give them what they want. Because on day zero of the project there are too many unknowns, as you said. We’ll have all kinds of variables – variables in how big the staff is going to be, who specifically the staff is going to be, when the project will even get started, when those staff will become available, what exact features are going to be in the feature set, the detail, the elaboration of each of those features… We have countless variables in the very early days of the project.
[00:19:11.14] To the degree that we could do an extremely rough sizing, one of the techniques that I like at that point in the project is for the organization to have a list of prior projects that they’ve completed, including the cost and/or effort and schedule, so even on day zero of the project, the business can go in and say, “Look, this new project to us feels about the same size as the [unintelligible 00:19:37.25] project, and according to our historical data on this, that specific project took 1,500 staff days and four calendar months.” That’s probably not a realistic combination, but whatever.
[00:19:55.23] Then the technical staff can come back and say, “Yes, it feels to us like the [unintelligible 00:19:58.24] project too, and for a very rough estimate, yes, I think it’s in the same ballpark. Based on further work it could be a factor of two higher or lower maybe, but for where we are right now, that’s a reasonable thing to think.” Or the technical staff could come back and say, “No, I think you’re thinking of [unintelligible 00:20:22.12] 1.0, and what you’re talking about is duplicating the scope of [unintelligible 00:20:28.15] 1.0 and 2.0 and 3.0. When you add all those together, we’re looking at something that’s more like this, and it’s dramatically higher.”
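To make that day-zero sizing-by-analogy concrete, here is a minimal sketch; the project names and historical figures are invented for illustration, not taken from the episode:

```python
# Day-zero sizing by analogy against an organization's historical project data.
# All project names and figures are hypothetical.
historical_projects = {
    "billing-1.0": {"staff_days": 1500, "calendar_months": 4},
    "billing-2.0": {"staff_days": 2400, "calendar_months": 6},
    "portal-1.0":  {"staff_days": 800,  "calendar_months": 3},
}

def rough_estimate(reference_project: str, relative_size: float = 1.0) -> dict:
    """Ballpark a new project as 'about the same size as <reference>',
    optionally scaled. Treat the result as +/- a factor of two."""
    base = historical_projects[reference_project]
    return {
        "staff_days": base["staff_days"] * relative_size,
        "calendar_months": base["calendar_months"] * relative_size,
    }

print(rough_estimate("billing-1.0"))        # "feels about the same as billing 1.0"
print(rough_estimate("billing-1.0", 3.0))   # "more like 1.0 + 2.0 + 3.0 combined"
```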
[00:20:38.12] You can have those discussions very early in the project. They don’t give you a planning number, but they can at least help to begin to set expectations in the right ballpark. Then once the project actually gets underway, the notion of the cone of uncertainty is that software projects are characterized by high levels of variability coming in from all sources – from the feature set, from architectural uncertainties, from staffing uncertainties… even from things like how much the project concept is going to keep changing over the course of the project. We have all kinds of uncertainty feeding in, and if we are doing our jobs well on the technical management side, we’re going to be trying to attack those sources of uncertainty, to buy down the variability.
[00:21:23.09] If we’re doing a really good job, we’re going to be doing that in order. We’re going to be attacking the highest sources of variability first, so that at any given point in the project we have always attacked the highest sources of uncertainty that we could have attacked, and at any given point in the project the remaining sources of uncertainty are all smaller than the sources of uncertainty we have already attacked. That activity is what makes that a cone of uncertainty and reduces variability disproportionately and early, as we work our way into the project.
[00:21:55.22] Lots and lots of project teams don’t operate that way. We see all kinds of teams do things like leave the hardest problems for the end, which is the absolute worst thing you can do from an estimation or project control point of view. So the cone of uncertainty is a way of describing what can happen on a healthy project; it is not a description of what happens on every project by any means. It’s a description of what’s possible on a well-run, healthy project.
Sven Johann: [00:22:22.16] That sounds to me like the uncertainty and risk management are very close together, or the same thing almost.
Steve McConnell: [00:22:33.28] I think they are, if you take a broad view of risk management. If your view of risk management includes what I would characterize… When we talk about risk management at our company, we typically differentiate between intrinsic versus extrinsic risk management. We always focus on the extrinsic risks, like the risk of so-and-so leaving the project, or the risk of such-and-such technical approach not working out. But on most projects the extrinsic risks are actually dwarfed by the intrinsic risks. Or we might even refer to those as generic risks.
[00:23:11.19] The generic or intrinsic risks are risks like we’re just not going to do a good job of understanding requirements, and we’re going to have to redo a bunch of work because we spend too much time building the wrong thing, and when we show it to the customers, they say “No, that’s not what we wanted. We wanted this other thing instead.”
Low quality is another super big intrinsic risk. We see a lot of project teams that do fairly hasty work, and they leave lots of the quality work to the end. They accumulate bad technical debt as they go, so they work their way through the project and they think they’re making progress, but really what they’re doing is accumulating a lot of off-the-books defect correction work, and that’s really destabilizing to the project estimates and the project control.
[00:24:05.17] It’s appropriate to focus on these extrinsic, unique risks to projects, but when we look at the intrinsic risks, the risks that really show up on pretty much every project, there’s a high overlap between what we would do to narrow the cone of uncertainty versus what we would do to manage the intrinsic risk that we see on projects.
Sven Johann: [00:24:28.10] But then something like Scrum, or agile, or highly iterative methods – they actually reduce risks… or at least a lot of the risks that you describe. If I have to deliver something which works every two weeks, and it has to be accepted by testers and other stakeholders, that takes away some of the risk of not getting done.
Steve McConnell: [00:24:57.19] I see that substantially the same way, with the huge asterisk of if it’s actually a high fidelity Scrum implementation and if we’re talking specifically about Scrum. The word agile is so overloaded and so diluted at this point, that if a team says it’s doing agile, that does not give me any confidence whatsoever that what they’re doing is any better than waterfall. If a team says that it’s doing Scrum and they actually are doing Scrum, not just saying that they’re doing Scrum, but if they have a high fidelity Scrum implementation, then I think part of what is brilliant and works so well about Scrum is that the way Scrum is set up it does in fact have the potential to attack some of those big, intrinsic risks on projects. It really depends on the team actually doing a high fidelity Scrum implementation.
[00:25:52.02] You mentioned two-week iterations – yes, if the team actually keeps its iterations short and delivers something at the end of each of those iterations, that helps in a number of ways. One, if we follow the discipline of driving the software to a potentially releasable level of quality, that helps to keep that generic risk of low quality in check. We see lots of teams doing Scrum that don’t do that; that’s one of the main failure modes that we see of Scrum – teams not having the discipline to keep the software at a potentially releasable level of quality. If they’re doing that, they’re losing a lot of the benefits of Scrum.
[00:26:32.10] The mere fact that we actually converge every two weeks gives us a better sense of progress than we would typically have in a more waterfall approach. Let’s say that we have done our requirements on a large batch basis, we’ve mostly done upfront requirements; we do our sprint planning and we map out a two-week iteration, and we think that a two-week iteration makes up about ten percent of our total requirements for the project. Well, if we get to the end of our two-week iteration and we’ve only completed half of the work – in other words, we’ve only completed 5% of the project, rather than 10% of the project – that gives us a pretty strong early data point that our project is going to be twice as long as we thought it was. It isn’t 100% guaranteed that that’s true, but it makes us start asking the question, and it makes us ask the question sooner rather than later.
[00:27:28.16] Then when we get into the next two-week iteration we can say, “Alright, well either we catch up or we still operate at half of the velocity we thought we were going to operate at, and we have another checkpoint as quickly as two weeks later.” This is great. In a traditional waterfall project that was planned to go on for 20 weeks, we could easily get 15 weeks into the project before we start to think we have any kind of problem. In a lot of cases we could get 19 and a half weeks into the project before we admit that we have a problem. But in a well-run Scrum implementation we might know 2-4 weeks into that planned 20-week project that we have a problem. That’s huge in terms of managing that generic risk of wishful thinking and under-estimation.
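To spell out the arithmetic in that example, here is a minimal sketch of the extrapolation; the planned figures are the ones from the example above, and the code itself is only illustrative:

```python
# Early-warning extrapolation from the first iteration, per the example above.
planned_weeks = 20
planned_share_per_iteration = 0.10  # we thought one two-week iteration was ~10% of the work
actual_share_completed = 0.05       # we actually finished ~5% in that iteration

# If that first data point holds, the schedule scales by planned/actual.
projected_weeks = planned_weeks * (planned_share_per_iteration / actual_share_completed)
print(projected_weeks)  # 40.0 -> roughly twice as long as the planned 20 weeks
```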
[00:28:12.07] The other thing I would say – we touched on the topic of risk management. If you break risk management down into its constituent parts – risk control, actually managing the risk, prioritizing the risk – the starting point of all of this is risk identification. I think one of the things that’s really built into Scrum, especially with the short iterations, is that if we don’t actually complete an iteration like we thought we would, it makes us ask, “Why didn’t we do that?” That naturally leads to early, frequent risk identification.
[00:28:52.16] Scrum never talks about risk identification per se, but the consequence substantively is that in a high fidelity Scrum implementation we do in fact get ongoing, early identification of risk. If we can identify risk early rather than late, we typically have way higher leverage options for addressing those risks, so the whole thing works out better. It’s not just that we identify them earlier, it’s that there’s a lot more that we can do if we identify them earlier.
[00:29:23.04] If Scrum is done by the book, in a high fidelity way, it has the potential to reduce some traditionally really significant inherent risks and sources of variability in software. The other thing I wanted to comment on is that there is an undercurrent in Scrum that is adverse to estimation. There’s a cultural overlay on Scrum, which I think is not part of the practices of Scrum; if you read the Scrum literature, including original books by Ken Schwaber, there’s such a heavy emphasis on the ability that Scrum gives you to be flexible and to respond to changes and requirements that you can easily walk away with the idea – even though it doesn’t really say it – that “Oh, the only way we should ever run a Scrum project is to not pin down our requirements upfront, not even try, but we should just identify requirements as we go, metabolize them as we go, and only do it that way.”
[00:30:32.13] If we’re going to take that approach – which might work in some cases – we’re basically giving up any idea of doing longer range estimation. When Scrum is practiced that way, particularly with requirements being done only iteration by iteration, then we really undermine our ability to estimate. I’m not saying that’s good, I’m not saying that’s bad; I’m saying that when we’re going to talk about the combination of estimation and Scrum, that’s a combination that just doesn’t work.
Sven Johann: [00:31:33.28] How would you align Scrum estimations and what we actually delivered? For example, we start a Scrum project, we do our estimations, we want to have these five features, and usually we don’t do feature by feature; we probably have two features in parallel. We estimate in story points, we have a velocity, and after a while it’s really difficult to match what we already did to the original requirements, because we never wrote down how much time we actually used for certain requirements.
Steve McConnell: [00:32:20.27] Yes, there are several questions in there. The way that I reconcile Scrum and estimation – and frankly, I don’t think it’s a huge amount of work or much of a change in Scrum to actually reconcile this – is by taking a large batch approach to requirements early in the project. That doesn’t mean that we can’t change our mind later in the project. The whole original notion of Agile, as manifested in the early days by XP, was “Embrace change.” It wasn’t “Don’t make any commitment in the first place”, it was “Set yourself up so that when the changes occur, you can respond to them and embrace them.”
[00:33:01.18] Scrum gives you a great ability to embrace change, but that doesn’t mean that you don’t plant a stake in the ground in the first place. I think you reconcile that by putting enough definition on your full set of requirements – as much as you can – early in the Scrum project, including defining those well enough that you can do story point estimates for those requirements. Then you actually do the story point estimates, you calculate how many total story points you’re going to have in your project, you map that out across the schedule (how many story points per iteration) and then you begin doing the detailed development work of the project and you track your velocity.
[00:33:43.17] This gives you a tremendous ability to monitor whether you are actually progressing according to plan, or whether you’re going to be able to meet your estimate. After two or three iterations you always have some notion of what you think your velocity is going to be early on, but after two or three iterations you’ll have real project data that you can use to recalibrate your estimates, and that project data will either tell you that your initial estimates were pretty good, or that you need to recalibrate. Either way, you’re still pretty early in the project and you have the ability to raise your hand and say, “Hey look, we thought our velocity was going to be 25 story points per iteration, but now that we’re into it, we’re only doing about 15 story points per iteration. We’re going to be about 67% over if we continue our current pace.”
[00:34:36.13] Nobody likes hearing that, nobody likes that bad news, but they sure like it better if you can tell them earlier rather than later, and Scrum gives you a great ability to do that if you define your requirements upfront and do them to a level of detail where you can actually assign story points. What happens as you get underway is you get maybe a few iterations and things don’t change too much, and you’re delivering in descending priority order and everything’s going fine, and then changes start to come in. Well, that’s great, because Scrum gives you a fantastic mechanism for observing those changes.
[00:35:13.11] We can story point the proposed changes, we can prioritize those against existing items that are already in our release backlog; we can either just add them, and then we know how much our schedule is going to expand because we’re adding and we know what our velocity is, or we can take the new features and we can displace lower priority work that’s in the backlog and keep the number of story points the same.
[00:35:40.26] One of the things that I find very frustrating is that the structure of Scrum is essentially an estimation nirvana, yet teams often don’t use it that way. It gives us a great ability to come up with a numeric, quantitative estimate early, it gives us the ability to sanity-check that estimate and recalibrate early in the project as we go, and it gives us a structured and disciplined way to talk about changes in a way that doesn’t completely invalidate the earlier estimation work once the project is underway. So we’ve got this great estimation machinery that is provided by Scrum, if the team is actually willing to look at it that way. That’s why I get a little frustrated when I talk about this cultural overlay on Scrum teams, where sometimes teams will feel like estimation is antithetical to Scrum. I think the opposite is true. Businesses mostly like to have some idea of where they’re going to end up and what they’re spending their money on, and estimation goes hand-in-glove with the idea of businesses having some idea of what they’re going to spend their money on.
[00:36:44.29] I have come to the conclusion quite strongly that one of the Achilles’ heels of agile and iterative development in general, seen from the business perspective, is that businesses would rather be wrong than vague. If we start out and say, “Here’s what we’re going to do: we’re going to build this exact feature set, and it’s going to take this amount of time, and it’s going to cost this amount of money”, and we get halfway through the project and we say, “You know what? We’ve discovered some better ideas, so now we’re going to revise our plan and we’re going to add these features and subtract those features, and now the schedule is this other thing.”
[00:37:21.12] Businesses are actually okay with changing their minds, they’re okay with being wrong, but they need something specific to start with. The pattern I describe is a viable, acceptable pattern from the business point of view. What’s not typically acceptable from the business point of view is this canonical, agile description of “Well, just give us the amount of money that you have and give us your wishlist feature set. It doesn’t have to be complete, we’ll just keep coming back to you every two weeks and asking you what’s the next most valuable thing that we could do for you. This is going to have the effect of giving you the maximum possible value for the money you spend, because we’re always working on the next most important thing.”
[00:38:03.03] There’s a logic to that, and there’s nothing wrong with that analysis. The thing that’s wrong with it is that it doesn’t match the way that businesses budget and spend money in most cases. The business isn’t going to give the money in the first place, unless they have at least a rough idea of what they’re going to get for their money. The whole idea of “Trust me, we’re always going to be doing the next most valuable thing” doesn’t really meet the business’s need to do that. That brings us right back to the topic of estimation, which serves this very useful business purpose of at least giving the business some idea of what it’s going to get for its money.
Sven Johann: [00:38:36.26] I remember many years ago being on an Extreme Programming project – they had all these early superstars as consultants. We divided the whole year into three-month periods, we roughly estimated what we could do per quarter, and that was always on the wall. As you say, we changed our mind very often, but if somebody entered the room, he or she could really see what was coming up in the next year. While we were on our journey, we could always look at where we were. We had a pretty good understanding of where we were in terms of the long-term planning, of the whole year or the whole product.
Steve McConnell: [00:39:44.13] I don’t want to make it sound like I think that purely iterative approach is never useful; I do think there are arenas where it’s useful. For example, in a real exploratory research setting.
Sven Johann: [00:39:56.29] We worked iteratively. We had a release every week, actually.
Steve McConnell: [00:40:05.12] One way to describe it – which is an unpopular way to describe it – is “waterfall requirements, Scrum development”; that’s a way of summarizing what I’m saying I think works really well in a lot of cases. But I don’t want to imply that there aren’t other environments where it’s useful to do pure iterative requirements. If I’m in an emerging space – we’re almost past this in the mobile space now, but if I was doing a mobile app five years ago, the idea that I don’t really know, that I can’t make a plan for a year because it’s too volatile… I’m going to do my mobile app, doing the next most useful thing on a two-week basis – that’s probably a pretty good approach in a market that’s still that unknown and that volatile.
[00:40:53.00] If I’m in a research setting and I have a lot of unknown unknowns – I don’t even know what the unknowns are – then trying to identify “What are the next biggest unknowns? Let’s work on those for two weeks and then we’ll come back and take another look and see what the next biggest unknowns are, after we’ve turned over all the rocks we’re going to turn over this time.” I think that can be an appropriate response. The point is we’re talking about estimation, and while that’s a valid approach to managing and controlling a project with high levels of uncertainty, those aren’t approaches that support estimation.
[00:41:25.01] Estimation is not required on every single project, and it’s always useful to ask the business, “Do you care about estimation?” Typically, we don’t even have to ask, because the business is adamantly asking for the estimates in the first place, so it’s pretty clear that the business cares about the estimates.
Sven Johann: [00:41:43.19] Even in a lean startup environment? Actually, that’s more or less what you described a few minutes ago, right? Where we have a very high rate of experimentation, just pushing out stuff – new features to the users – validating whether they really want it, and otherwise forgetting about the requirement. For this kind of thing we don’t really need an estimate.
Steve McConnell: [00:42:08.25] If you’re in a startup environment and you’re a hundred percent sure that you’ve got a great idea… Going back in time, if I’m Lotus 1-2-3 and VisiCalc is already out there, and I know that my mission is to produce the next generation spreadsheet and really truly bring this very cool, groundbreaking, world-changing product, there’s not that much that’s unknown there in terms of requirements. I need to get my next-generation spreadsheet out. And yes, there could be some details in the features, but it can be treated as a pretty waterfallish kind of project. So not all startups necessarily call for that approach.
[00:42:50.22] But there are other startups where people think, “Hey, here’s an emerging area. We think there is a lot of potential in this area. We’re not exactly sure what that potential is.” Your investors or yourself might be able to invest either money or sweat equity in exploring whether there might be a useful idea there or not. We see a lot of startups in our area (near Seattle) where people are just willing to try something for a year or two, saying “Well, we’re just going to see how far we can get”, and investors are willing to spend sometimes significant amounts of money on the basis of really just an idea, some passion, and a high-potential area.
[00:43:35.10] Once you get into established businesses it’s a little different. In an established business, if the business is going to spend a hundred thousand dollars or a million dollars, they want to have some idea that they’re going to get something for that. That’s where we get back to the whole estimation idea. They need the estimates to even talk about the business case for spending the money. There are always exceptions to everything, but the most common case here is that businesses are going to look at anything that involves a major expense and they’re going to want to do some kind of business case for it. It might be formal, it might be informal, but they want to do a business case that includes the cost and the benefit. Those are both pretty squishy numbers, we all know that, but that doesn’t mean that we’re going to completely abandon the activity altogether. What we ought to do is figure out how to get better at making those numbers less squishy.
[00:44:31.05] For the technical folks, our part of that equation is trying to make the cost numbers less squishy. What is the real delivery capability of the organization? It’s our job to be knowledgeable about what is the delivery capability of our organization, so that we can give the business a good sense of what the cost is. In an ideal world, the other part of the business would be equally good at estimating what the business potential is, so that they can do the benefits side. But that’s somebody else’s problem. Our problem is just doing as good a job as we can on doing the cost.
Sven Johann: [00:45:05.00] How do we do that? What are good and bad estimation techniques? If somebody says, “I want to have Facebook for dogs”, how should I approach my estimation?
Steve McConnell: [00:45:21.01] Well, we’ve already touched on a number of the key points. Before we can create an estimate that is really worth very much, we have to have worked our way far enough into the cone that we actually know what we’re talking about. Recognizing where we are in the cone of uncertainty and recognizing that we’re not going to have a lot of certainty if we’re still on the far left, wide side of the cone. That’s certainly a key part – don’t give off the cuff estimates, don’t treat some casual, gut feel number that somebody throws out as a meaningful estimate. Those are actually really key points.
[00:45:57.17] Another key thing we’ve talked about is make sure we’re clear about the distinction between estimates, targets and commitments. We want to make sure that we don’t confuse ourselves about whether we’ve actually done a real estimate or whether we’ve all just group-thinked our way into accepting a target or commitment without ever actually going through and doing any kind of an estimate.
[00:46:18.23] We’ve also touched on the idea of historical data, in a couple of different guises. We’ve touched on the idea of having a record of the cost, schedule and level of effort for completed past projects. That’s one form of historical data. We’ve talked about the idea of coming up with maybe an initial estimate on a Scrum project, but as soon as we can, calculating the actual observed velocity for that project, so that we have the opportunity to recalibrate. That would be another kind of historical data, and I refer to that historical data as project data versus organizational data. Project data is really the most powerful form of historical data for estimation purposes.
[00:47:02.02] If we’re doing those kinds of things, we’re going to avoid just the things I’ve talked about – the most egregious estimation errors – and get at least half of the way to where we need to be. The problem in software estimation in practice is not so much that people are making really good faith efforts and using the wrong techniques. The problem is more that people aren’t making the good faith effort, or they don’t even know what a good faith effort would look like. The first step of saying, “I’m going to do something reasonable and I’m actually going to pay attention to this as an estimation activity, and we’re going to go through and do a real estimate” – that’s the biggest step an organization takes. Exactly what they do after that – there are certainly better and worse practices, but I think that’s a secondary consideration.
[00:47:55.29] Once we do get to that second step, making use of historical data in some form or other is really the key, and there are a lot of different ways that we could go about doing that. For small projects where we have intact teams doing a super decomposed estimate with team members individually estimating [unintelligible 00:48:14.03], that can work okay for a really short, really small project. Unfortunately, one of the most common patterns we see with organizations that are trying to make a good faith effort to improve their estimates, but without really the understanding of how to do that, is that they’ll go to an extreme level of detail prematurely. It’s sad to see that, because they’re doing an awful lot of work, and the motivation is good, but they’re working too hard and the technique they’re using is not the right technique.
[00:48:48.10] Sometimes it’s hard to convince companies that they can actually get better estimates earlier by doing less work and looking at less detail. Software people tend to be extremely detailed, but in fact, as we get into projects of any size, basing the estimate on historical data, basing it on attributes of the system that we can count – those are going to be the practices that help us come up with meaningful, more accurate, early in the project estimates, unless we’re talking about a two or three-person project that’s going to last for a couple months. For projects that are larger than that, we really need to get onto more of what I refer to in my book as “counting and computing”, rather than just using judgment for a very detailed task list.
Sven Johann: [00:49:37.10] Now I’m surprised, because I thought the best way to do it is really break it down. Maybe not in total detail, but at least collect the requirements, break them down and estimate individual pieces. With counting, what would you count? Recently I got very superficial requirements from a customer, and he wanted to know how much it would cost. I thought, “Okay, let’s count. Let’s count fields, let’s count buttons…”, but afterwards I felt a little bit unsure whether that would be correct. Also, my boss [unintelligible 00:50:22.22] “I saw you counted, but…”
Steve McConnell: [00:50:30.08] Well, it’s hard to know in that case… There are cases where the things that you mentioned counting would be a pretty good indication of the overall scope of the project. I can’t say in your specific case whether that’s true or not.
Generically speaking, since so many teams are using Scrum now, story points are the most readily available thing to count. There’s a huge difference between going through a detailed feature list and estimating the effort or duration for each of those features, versus going through a detailed feature list and estimating the story points for each of those features. If we’re estimating the story points for those features, we probably have some general idea in our mind how we think a story point translates to days or hours or weeks of effort, but we’re still talking about the size in story points.
[00:51:25.09] Then, because we’ve estimated everything in story points and it’s a relative scale, and the different assignments of story points are relative to each other, once the project gets under way, we have the ability to calibrate those story points with real project data. If we’re doing Scrum, we can calibrate that pretty early on in the project. After 2-4 weeks into the project we should have a reasonable calibration. If on the other hand we’re estimating that same exact work at the same exact level of detail in effort, then we’ve got all this opportunity for wishful thinking to creep in; we’re invested in our estimate to say, “Oh, I thought this was going to take three days”, and once I start working on it I now am thinking “Oh, it’s supposed to take three days”, and I may end up taking shortcuts because I wanted to get it done in three days and then I start introducing technical debt into the project. The dynamic is just completely different, even if at a superficial level the estimation activity looks pretty similar.
Sven Johann: [00:52:31.09] But if you count story points, then you still need a good understanding of the requirements, right?
Steve McConnell: [00:52:41.27] You need a good enough understanding of the requirements to know where the story slots in on the story point scale. If you’re doing story points by the book, the scale is not infinitely divisible. It doesn’t go 1, 2, 3, 3.1, 3.2, 3.3. The typical story point scale is the Fibonacci sequence – 1, 2, 3, 5, 8, 13 and so on. So you don’t really need to know “Is this thing exactly a 5?” You can ask “Is there any way that we think this thing is as small as a 3, or is there any way that we think this thing is possibly as big as an 8?” Well, if it’s not as small as a 3 and it’s not as big as an 8, then it’s a 5. We need to know enough to have that discussion, but we don’t in our minds need to be thinking “Oh, is this a 5.5 or a 5.7?” That’s not the purpose of the exercise.
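A tiny sketch of that bracketing logic on the usual scale – the helper function is purely illustrative, not a standard API:

```python
# Bracketing on the usual story point scale: "not as small as a 3,
# not as big as an 8 -> it's a 5." Purely illustrative.
SCALE = [1, 2, 3, 5, 8, 13, 21]

def between(lower: int, upper: int) -> list:
    """Scale values strictly between the two anchor points."""
    return [p for p in SCALE if lower < p < upper]

print(between(3, 8))   # [5] -- not as small as a 3, not as big as an 8, so it's a 5
```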
Sven Johann: [00:53:40.02] But eventually I have to translate story points to weeks or days, right?
Steve McConnell: [00:53:48.27] Well, not really. Eventually, you have to translate story points into sprint planning and “How much work do we think we can get done this sprint?”
Sven Johann: [00:53:56.18] But if my management wants to know when I’m done, I have to do it somehow.
Steve McConnell: [00:54:06.00] Yes, of course. If you have an intact team that’s already done some projects – which does happen sometimes… I’m pleased to say that one of the completely unplanned and unpublicized side effects of the move towards agile and Scrum does in fact seem to be that teams get kept together a little bit more than they used to. This whole idea of breaking up a high-performing, intact team was very painful for my first 20 years in the industry. I’m seeing less of that over the last ten years, so I think that’s good.
[00:54:42.12] If you do have an intact team, they probably already have a pretty well calibrated sense of what’s a 3 and what’s a 5, so you can take their velocity from the past project they worked on. It might not match exactly, but it’s not typically going to be off by a factor of two; it’s going to be fairly close.
If you don’t have an intact team – if you’re assembling people together – then you’re probably going to have people with some different ideas about what is a 3, what’s a 5 and what’s an 8. The team needs to go through some work as a team to get synced up on what that really is. When you present that really early-in-the-project estimate to your manager and you say, “Look, our feature set is 240 story points. Our best guess is that we can do 20 story points per iteration, so that makes this a 12-iteration project”, you can certainly present that. But you can also present simultaneously the idea that “We’re going to be tracking velocity with this new team from day one on the project, and by the time we get through 2-3 iterations (4-6 weeks into the project) we’re going to have a very well calibrated idea of whether that idea of 20 story points per iteration is achievable or not. And we will come back to you no later than six weeks from now in our planned 20-week schedule and we will give you an update on whether we think that 20-week schedule is still realistic or whether we need to adjust it based on what we’re actually seeing from this team, on this project, at this point in time.”
Sven Johann: [00:56:19.03] I’m a little bit surprised, because I learned a long time ago whenever I should do an estimate, I express my uncertainty in the estimate with a range. Somebody asks me how long it takes and I say 10-30 days, or 10-13 days. And the larger the range, the more I express uncertainty. If we estimate with story points, that goes away.
Steve McConnell: [00:56:53.28] It really doesn’t go away, because the uncertainty doesn’t come from the story points, it comes from the velocity. We’re back to this distinction between estimate and commitment. If my manager is truly asking me for an estimate, then I can go back and say something like, “Look, we’ve gone through and we’ve counted story points. We have 240 story points. If we look at the track record that individuals on this team have and we look at the velocities of the other projects they have participated in, it looks to us like our velocity will be between 15 and 25 story points per iteration. We really don’t know enough yet to know for sure whether we’re going to be at 15 or 20 or 25 or some other number in there. So if you ask us to give you an estimate today, we’re going to give you a range that’s based on 240 story points applied across a velocity of somewhere between 15 and 25 story points per iteration. If you want us to commit today, we’ll commit on the basis of the low end of productivity there – we’ll commit on the basis of 15 story points per iteration. But if you give us six weeks, then we can come back and we can give you a much more specific commitment, a much narrower range. In fact, we won’t even give you a range, we will just give you a commitment. We might have a range in our mind, but at that point we’ll be back into that sitting-on-the-suitcase territory where we’ll know what size of suitcase we need. There might be some variability in terms of how hard we need to sit on it – is it going to be a cakewalk to finish this, or are we going to end up working a little bit of overtime at some point – but we’ll have a really, really good idea of where we need to be in order to deliver what the organization has said it wants to deliver.”
[00:58:49.12] We’ve worked with organizations who will have a couple different responses to that. Some of the organizations we’ve worked with will say, “Well, we really need the commitment now. If the only thing you can really commit to is a velocity of 15 story points per iteration and what that implies, we need to have that today, at this point in our planning process.” We’ve worked with other organizations that have said, “We understand that there’s some uncertainty here, and we would rather get a clearer picture, and if there is extra value to be had, we don’t want to prematurely assume less productivity than you might have. For now, we’ll plug in a planning number that we know has some variability on it of, say, 20 story points per iteration, or a schedule based on that velocity. But we’ll have a plan to update the plan in 4-6 weeks based on your observed velocity, and at that point we understand that will be a hard commitment number, but it will be a commitment number you can actually perform to, because it’s based on project data and historical data.”
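Putting the numbers from that exchange together, here is a minimal sketch of the range-versus-commitment arithmetic; the figures (240 points, 15-25 points per iteration, 20 as a planning number) come from the example above, and the code itself is only illustrative:

```python
import math

# Figures from the example above: 240 story points, velocity believed to be
# somewhere between 15 and 25 points per iteration, 20 used as a planning number.
total_points = 240
velocity_low, velocity_planning, velocity_high = 15, 20, 25

def iterations_needed(points: int, velocity: int) -> int:
    return math.ceil(points / velocity)

# Estimate expressed as a range across the plausible velocities...
print(iterations_needed(total_points, velocity_high))      # 10 iterations if velocity is 25
print(iterations_needed(total_points, velocity_planning))  # 12 iterations if velocity is 20
print(iterations_needed(total_points, velocity_low))       # 16 iterations if velocity is 15

# ...but a commitment made today is based on the low end of productivity.
committed_iterations = iterations_needed(total_points, velocity_low)  # 16
```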
Sven Johann: [00:59:59.20] We are close to the end. My last question – we talked a lot about Scrum. If we estimate in Scrum, we usually have planning poker, so many people estimate together. Is that a good idea, and if yes, why?
Steve McConnell: [01:00:17.03] Planning poker is definitely a positive approach and can work fine. Planning poker is an example of a family of estimation techniques that are known as structured group decision-making techniques, which is exactly what it sounds like. It’s getting people together in a group and putting some structure around how they make a decision together. The planning poker has certain rules that provide the structure. The rules include things like “We’re going to use the Fibonacci number scale, we’re all going to come up with our estimates individually and then we’re all going to show them at the same time.” It may include some rules about what we do if people have a range of their estimates and everybody doesn’t have the same estimate, or it may not. That’s just one way to do it, and that can work fine.
[01:01:05.13] We’ve seen teams that take the poker part of it literally and will go in a conference room and put on green eyeshades. It doesn’t really do anything for me, but if it makes a somewhat tedious activity a little bit more interesting, that’s fine.
One of the things I like about planning poker: structured group decision-making is one way of describing it, but another way of describing it could simply be “a meeting.” Meetings are just notorious for burning up time unproductively, and most people, especially most technical people, really don’t like meetings.
[01:01:44.01] One of the things I like about planning poker is that if you’re actually doing it the way you’re supposed to, it is essentially a recipe for a well-run meeting. The pace of planning poker should be pretty brisk, where you’re focused on estimating the items in the backlog just on that story point scale, and you go through the items in the backlog one after another, and it keeps the discussion moving. That’s not to say that in any given case it doesn’t get derailed a little bit, but if you’re doing it by the book it really is a nice structure for having a well-run, productive meeting. There’s no reason for anybody to get bored, because they should be very regularly and rhythmically providing their estimate for the next feature.
[01:02:22.27] Having said all that – and I like planning poker, and I’ve never tried to talk a team out of it that was using it and liked it and it was working well for them – it’s not the only possible way to do an estimate. In my company’s classes, an alternative technique that we teach is called “pin the tail on the estimate.” We have a scale that’s put on the wall – the Fibonacci numbers written very large along the wall – and we have the different stories on post-it notes, and then through a group process we have the group agree, “Okay, where does this post-it note go? Does it go under the 5? Does it go under the 8? Does it go under the 13?”
[01:03:04.05] That’s another example of a structured group decision-making technique. The details of this structure are different, but we’ve had pretty good success with that technique as well. I’m not going to say that technique is better than planning poker or worse… Some groups that we’ve worked with like that technique. For one thing, you’re standing up instead of sitting down, so some groups like that. In general, structured group decision-making techniques are another good technique for estimation in general, and certainly for assigning story points in particular.
Sven Johann: [01:03:41.04] Is there anything important I forgot to ask you?
Steve McConnell: [01:03:47.12] Well, estimation is a big topic and it’s really the original software engineering topic that got me interested in more structured and formal approaches to software engineering very early in my career. I tend to see a lot of the software development universe through the lens of software estimation. One of the reasons I find estimation interesting is that if a project team is doing poorly – if they’re performing poorly against their estimates – there could be any number of reasons why they’re performing poorly against their estimates. But if they’re performing well against their estimates, that typically means they’re estimating well, but it also means they’re also doing a lot of other things well, too.
[01:04:30.23] The estimation lens turns out to be a pretty interesting lens to view software development through in general. If you understand estimation well, that goes a long way toward understanding software development overall. That’s what’s kept me interested in the topic for almost 30 years now, and I’d love to feel like this conversation will help some other people develop an interest in estimation, and maybe increase their interest in software development practices as well.
Sven Johann: [01:04:58.13] I can imagine. Do you have materials our listeners should know about on software estimation? Besides your book, of course.
Steve McConnell: [01:05:10.12] Well, my book really captures most of the important concepts about estimation. It did come out in 2006, so at the time some of the Scrum estimation practices were nowhere near as well developed as they are today. Estimation in Scrum has advanced beyond what I describe in that book. My company has an online training class that I lead on software estimation; that might be something for people to check out. That’s at ondemand.construx.com.
[01:05:47.27] Then one of my favorite things – I wrote a blog entry several years ago now about my summer project of building a fort for my kids, and there were quite a few fun estimation takeaways from that project. If you do a search for “Steve McConnell estimation building a fort”, that’s one of the more fun blog entries I’ve ever written.
Sven Johann: [01:06:12.04] Good. We’ll provide all the links in the show notes. Thank you, Steve, for being on the show.
Steve McConnell: [01:06:18.16] Alright, thank you.
Sven Johann: [01:06:19.27] This is Sven Johann, for Software Engineering Radio.

* * *
Thanks for listening to SE Radio, an educational program brought to you by IEEE Software Magazine. For more information about the podcast, including other episodes, visit our website at se-radio.net.
To provide feedback, you can write comments on each episode on the website, or write a review on iTunes. Mention or message us on Twitter @seradio, or search for the Software Engineering Radio Group on LinkedIn, Google+ or Facebook. You can also e-mail us at [email protected]. This and all other episodes of SE Radio are licensed under the Creative Commons 2.5 license. Thanks again for your support!
