
SE Radio 24: Development Processes Pt.1

Recording Venue:
Guest(s):
Host(s): Alexander, Arno
In this episode Arno and Alex talk about the basics of software development processes. They discuss why and when software development processes are needed, and why some developers don’t like them. They cover the theories behind different processes and contrast defined and empiric processes in general. This episode is the first in a series that will later describe specific processes such as eXtreme Programming and the Unified Process.


Show Notes

Links:

Join the discussion
8 comments
  • Your defined versus empiric processes distinction is a nice one. I agree that it is hard to write down a process for a creative activity, although there are defined processes for problem solving that do work.

    I do take umbrage at your dismissal of CMM-I and believe it is based on a misunderstanding. The Capability Maturity Model – Integrated specifically acknowledges the problem you describe. And the Poppendieck reference listed with Episode 24 puts it succinctly (The Challenges of Bringing Lean to Software Development):

    “Since I am a process control engineer, I would like to call to your attention something we learned a long time ago: When you really need predictability, you create a feedback system that can respond to change as it occurs, rather than a predictive system that is incapable of dealing with variation.”

    That is precisely what the higher-maturity-level specific practices introduce: measurement and feedback, so that we can avoid the risks inherent in open-loop systems (a small illustrative sketch follows this comment).

    Of course, maturity is not determined by the performance of the specific practices. Maturity is determined by the organisational support for the practices used: in other words, by how well new staff are introduced to the practices in use in the organisation. But that is another misconception I heard in the broadcast.

    That said, congratulations on the se-radio initiative. It is a great resource and the implementation discussions are very insightful.
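
As a minimal illustration of the open-loop versus closed-loop point in the Poppendieck quote above, here is a small Python sketch (not from the episode or the comments; the velocity numbers, the drift model and all names are illustrative assumptions): with feedback, the forecast is recomputed from measured progress each week, while the open-loop forecast keeps trusting the original plan.

```python
# A minimal sketch, assuming made-up numbers: an open-loop "predictive" plan
# versus a closed-loop plan that feeds measured progress back into the forecast.

def run(weeks, planned_velocity, actual_velocity, feedback):
    """Burn down 100 units of work and print the forecast of weeks remaining."""
    remaining = 100.0
    estimate = float(planned_velocity)        # units we believe we finish per week
    for week in range(weeks):
        remaining -= actual_velocity(week)    # what really happened this week
        if feedback:
            # Closed loop: re-estimate velocity from measured progress so far.
            estimate = (100.0 - remaining) / (week + 1)
        # Open loop: 'estimate' stays at the original planned velocity.
        forecast = remaining / estimate if estimate > 0 else float("inf")
        print(f"week {week + 1}: remaining={remaining:5.1f}, "
              f"forecast weeks left={forecast:4.1f}")

# Illustrative assumption: real throughput drifts below the planned 10 units/week.
slower = lambda week: 10 - 0.5 * week

print("open loop (predictive):")
run(weeks=5, planned_velocity=10, actual_velocity=slower, feedback=False)
print("closed loop (feedback):")
run(weeks=5, planned_velocity=10, actual_velocity=slower, feedback=True)
```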

  • Sorry,
    I posted my response twice. It did not show up the first time, because the page was loaded from cache.

    Feel free to delete this response.

  • Hi Bram,

    First, thanks for the feedback! It’s always nice to discuss topics in more detail
    and not only be in “broadcast mode” 😉

    You wrote that there are working defined processes for creative work. Can you
    please point me to some, because I am really interested in this topic and have
    not found any yet (at least nothing where the process has a level of detail that
    actually helps). And as you probably heard, I really doubt a process can replace
    talent/intelligence/creativity in any form.

    For the CMM(I) part: maybe the problem starts earlier. If everybody could
    agree that CMM is not a process or methodology but only a measurement tool,
    there might be no problem with the CMM(I) levels and most probably not even
    with the philosophy (even if I still think the idea of repeatability is very
    flawed). But usually people use CMM (and especially CMMI) as a tool to improve
    their processes. So they usually try to move step by step up the maturity levels.

    And now maybe you see my point: the last thing they do is reach a level where
    the process is improved by the team itself. With another philosophy this would
    (and should) be the first step. Is it correct to have this as the highest
    maturity level? Yes! But is it wrong to have it as the last step in an
    improvement process? I think also yes. And this is, imho, one of the effects
    of the basic idea of first gaining repeatability and then self-improvement.

    Again (and I hope I said it clearly enough in the webcast): CMM(I) is not
    a stupid thing, nor should one ignore it. It captures a lot of experience and
    great techniques that any software development department/company could learn
    from. But I still (as strongly as when I recorded the webcast) object to the
    basic idea of repeatability. Use CMMI level 5 with two different teams
    and you will get two completely different results. Are teams on level 5 usually
    more successful than teams on level 1? Yes, most probably. Are teams on level 3
    usually more successful than teams on level 1? I highly doubt this!

    So I hope this points out a few more aspects/opinions I have on CMMI.
    And yes, I know that these are not necessarily the opinions of the majority of
    IT professionals, and I happily disagree with some friends on this topic, but still
    these are fun discussions and usually both “sides” can learn from them. So again
    thanks for your feedback, and I am happy to discuss anything in more detail
    if you want…

    Regards Alex

  • Hi Alex,
    Re: You wrote that there are working defined processes for creative work.

    That is not quite what I wrote. I wrote: “…there are defined processes for problem solving that do work.”

    Here are some references that demonstrate what I mean:
    1. The Goal by Eliyahu M. Goldratt and Jeff Cox (Paperback – Jul 2004)
    Not the best reference for what I mean, but a great read, and it gives enough of an idea of how process helps solve problems.
    2. The Team Handbook Third Edition by Barbara J. Streibel, Brian L. Joiner, and Peter R. Scholtes (Spiral-bound – Mar 24, 2003)
    An excellent resource for problem solving and other creative processes like: Meetings, Discussions, Decisions, Record keeping, Planning. I don’t have my copy here, or I would list the specific techniques. These are from the Amazon table of contents.

    Discipline is the common thread in the methods described. The methods do not prescribe how a problem should be solved. The methods describe a process for problem solving. Step 1, step 2… And they do work.

    For instance: Meetings. How often do you attend a meeting without an agenda? Or the agenda is distributed at the meeting. Or there is no goal stated for the meeting at any time. Or there are no action items recorded from the meeting. Any of these meetings are mostly a waste of time. Meeting discipline is easy to establish if you own the meeting. And that way you have meetings with results.

    Re CMM-I
    1. CMM-I is not a process or methodology. It’s a model.
    2. CMM-I can be used as a measurement tool.
    3. CMM-I can be used as a source for good ideas.
    4. And you can use it to keep doors open now that telephone directories are getting scarce.
    5. The choice is yours.

    Re: Use CMMI level 5 with two different teams and you will get two completely different results.
    Of course you get different results. You can walk from Sydney to Melbourne barefoot or in running shoes. The results are different, but in both cases you get to Melbourne. So what?

    You have to be less sweeping in your implication. An organisation that operates at that level of maturity will know how the results will be different. Simply stated, the performance of the team is measured on three dimensions: time, cost and quality. An experienced team will deliver a product of acceptable quality (i.e. meeting specifications in all attributes) in a particular time for a particular cost. An inexperienced team is likely to take longer (or need more effort with a bigger team), and may cost less or more depending on the hourly costs, to deliver the same quality.

    A mature organisation will know what tradeoffs they are making in advance. They may also decide to add experience in areas of high risk. And they will measure along the way and correct when there are deviations from the targets as opposed to deviations from the plan. That is Poppendieck’s assertion.

    A more prosaic description of organisational maturity is given by Gerald M. Weinberg in
    Quality Software Management: Anticipating Change (Hardcover – May 1997). Here is a summary of his software engineering cultural patterns (pp. 437–443). I only copied the process results, because they most clearly describe the different behaviours. I highly recommend the whole appendix to you.

    0. Oblivious culture: The results depend totally on the individual. No records are kept, so there are no measurements. Because the customer is the developer, delivery is always acceptable.

    1. Variable culture. The work is generally one on one between the customer and the developer. Quality is measured internally by its function (“It works!”), externally by the working relationship. Emotion, personal relations and mysticism drive everything. There is no consistent design, randomly structured code, and errors removed by haphazard testing. Some of the work is excellent, some is bizarre, and it all depends on the individual.

    2. Routine culture. The routine organisation has procedures to coordinate efforts, though its members only go through the motions of following them. Statistics on past performance are used not to change, but to prove that they are doing everything in the only reasonable way. Quality is measured internally by the number of errors (“bugs”). Generally the organisation uses bottom-up design and semi-structured code, with errors removed by testing and fixing. Routine organisations have many successes, but a few very large failures.

    3. Steering culture. They have procedures that are always well understood, but not always well defined in writing, and that are followed even in a crisis. Quality is measured by user (customer) response, but not systematically. Some measuring is done, but everybody debates which measurements are meaningful. Typical practices are top-down design, structured code, design and code inspections, and incremental releases. The organisation has consistent success when it commits to undertake something.

    4. Anticipating culture. They use sophisticated tools and techniques, including function-theoretical design, mathematical verification, and reliable measurement. They have consistent success, even on ambitious projects.

    5. Congruent culture. Here are all the good things achievable by the other cultures, plus the willingness to spend to reach the next level of quality. Quality is measured by customer satisfaction and by the mean time to customer failure (ten to one hundred years). Customers love the quality and bet their life on it. In some sense [this culture] is like [the Oblivious culture] in being totally responsive to the customer, but it is much better at what it does.

    In summary, maturity is not about predicting what will happen. It is about knowing what and how to manage to get a desired result. And the CMM-I model presents known, working practices in this area.

    Thanks for your response. As you say, it’s fun to discuss, and I will spread the gospel on se-radio.

  • Hi Bram,

    First, thanks for the book recommendations. I know The Goal but I didn’t know the other one. Nevertheless, maybe I was misunderstanding your comment; of course there are (working) processes for empiric problem solving. That’s what the whole incremental and iterative movement is all about. My initial comment in the webcast was not that there are no processes for that kind of problem, but that process design for defined problems (e.g. all the Waterfall stuff and, in most cases, the things people make of CMM(I)) will not work for empiric problems. What does not exist (at least I am still not aware of it) is a methodology for empiric problems that does not finally rely on the team’s capabilities. That in itself is not a problem (though it is not too well accepted in our industry), and actually even theoretically I don’t see a way to do it differently.

    For the rest of your comment: I have read it three times now and there is nothing I can’t agree with. Especially your 5 points are exactly what I said (or at least tried to say) about CMMI. It is not a methodology/process (but most people treat it like one) and it has lots of great, proven stuff in it. The only thing I always add (and I am not sure if we really disagree on that one) is: don’t use it as a methodology. That is imho wrong. Even worse if organizations try to improve level by level.

    One thing we might have a different opinion on is the repeatability thing: your barefoot metaphor (great one! :-)). Of course you might arrive in both cases, but first, this might not work between Munich and Rome (crossing the Alps barefoot is no fun at all 😉), so it MIGHT make a difference even for success, and also you might be faster/slower with one or the other alternative. To leave the metaphor: repeatability in defined environments means you do it 1000 times and 999 times (or whatever rate of accepted failures might be defined) the result is EXACTLY the same. Think production lines. So if you took 1000 teams and let them design the ultimate mobile phone, you would get 999 iPhones. No process will deliver that. The problem (design) is an empiric one, as is software development. Absolutely agreed that a development organisation at CMMI level 5 has a way higher probability to solve the problem, and also to solve it well. But NOTHING there will be repeatable in the defined-process kind of way.

    Wow, again a long answer 🙂 So you see, it’s an interesting topic 🙂

    Regards Alex

  • Re: What does not exist is a methodology for empiric problems that does not finally rely on the team’s capabilities.

    I agree with you: the ability to solve problems is in the team, not in the methodology. And that is self-evident for all creative activity. A million monkeys with a million typewriters will not write a single Shakespeare sonnet.

    Re: … don’t use it as a methodology

    We also agree on not using CMM-I as a methodology.

    Re: … Think production lines …

    If repeatability is what puts you off CMM-I, then I believe that is based on a misunderstanding. Repeatability in CMM-I is the same thing as repeatability in science. It is essential (for science to progress) that different scientists can repeat the same experiment and get the same result, based on the documented procedure provided by the first experimenter. At the same time each scientist will do some unique work. A new software engineer can verify the arguments, decisions and implementations of an existing implementation, i.e. can verify the results of the empirical processes. But different products will each have their own arguments, decisions and implementations, the results of their own empirical processes. CMM-I does not expect, prescribe or achieve repeatability in the creative space.

    Alex,

    I wrote a much longer answer on this point. I eventually summarized it in the paragraph above. I have left the long answer below, because it expands some of the ideas.

    Yes, we disagree here. And I do not understand the expectation that software can be written as a production line.

    Software engineering is a discipline that builds unique products. No two products are EXACTLY the same. If you want EXACTLY the same thing, you run cp thing1 thing2. But what is the use of that?

    Building software is a creative act. The result of each activity is unique. The challenge is to manage and support that activity. There is a need to build in checks and balances, so you can be sure the product will meet its requirements once it is no longer in your control. The software that controls the brakes on your car is made by an engineer. Do you trust that person? I don’t. I put my faith in the organisation and the dozens of people that have looked over this engineer’s idea. There are many engineering companies that are no longer in business because an engineer made the wrong decision and no one was ever asked to check. There was no reasoned argument written that could be verified. There was no defined process that could be checked, improved, repeated, taught.

    CMM-I certainly does not expect software to be built as a production line. It does provide practices that help guarantee that the results of hundreds of creative acts can be verified, understood, evaluated and coordinated, so that you and I can feel safe when we drive our car, fly a Boeing 777 or put our credit card in an ATM. So that the person who fires a cruise missile can be sure it won’t explode in their face. And so that over time we can build bigger and better products, because the processes we use to do this are now less wasteful than 20 years ago.

    CMM-I up to level 3 is mostly concerned with basics like having a process that can be taught, making sure bad product is separated from good product, and making sure that all the requirements, and only the requirements, are implemented. Levels 4 and 5 support the feedback loops Mary Poppendieck advocates. Much of this is common sense. But as they say: common sense is not so common. If you look around you, there are many organizations that do not have these basics in place. They don’t teach new people how things are done in their shop. They don’t separate good and bad products. “I thought I fixed that” is a common comment in those shops. And they often gold-plate some requirements and omit others.

    A mature organisation operating at the defined level (3) (Steering in Weinberg’s terms) will not exhibit these basic flaws, and so it will be more successful than a level 1 organisation. Mature organisations are very aware that engineering is a creative activity, and that engineers use empiric processes built from their life’s experience and training. At the same time, those organisations are able to support those processes, so that no idea goes to waste, no mistake goes uncorrected and all products meet the customer’s requirements.

    So I hope you can see that I do like your suggested separation between empirical processes, which are the sum of an engineer’s life’s experience, and defined processes, which are codified in a book like CMM-I.

    And I hope that you can see that in my opinion the two are complementary and support each other. Hence my concern when the broadcast dismissed the defined processes as of little consequence.

    Thanks again for the discussion.

  • Hi Bram,

    Cool answer and quite to the point! I will follow your reply pattern; it works pretty well, I think 🙂 So here we go (consider everything I do not comment on as agreed 😉).

    It is essential (for science to progress) that different scientists can repeat the same experiment and get the same result, based on the documented procedure provided by the first experimenter.

    And that is exactly what, in my opinion, will not work for software development, and for sure will not be guaranteed (but of course hindered or promoted) by (any) process. So I think that’s fundamentally the point where we have different opinions (and maybe experiences).

    The software that controls the brakes on your car is made by an engineer. Do you trust that person? I don’t. I put my faith in the organisation and the dozens of people that have looked over this engineer’s idea.

    And that is the other point where I guess we don’t agree. I put my trust in the persons (their track record and so on) and not in their processes. Again, I don’t say processes are valueless or anything. The processes are important, and the people I would trust actually would use processes I could trust in, so that’s not mutually exclusive. But the more important thing imho is the people. Everything else will fall into place.

    For the rest, absolutely agreed. And again, CMMI has its point, and I still prefer an organization that knows its CMMI stuff over one not doing anything on process improvement. But I still think CMMI is one of the things that did more damage than good (mostly because people use it the wrong way; see the process vs. capability-measuring-tool point above).

    So now I think we have pinned down pretty much the point where our opinions differ, don’t you think? 🙂 Very cool discussion. Thanks a lot!

    Ciao Alex

  • Alex,

    Agreed. 🙂

    Thanks for the discussion. And thanks again for the broadcast.

    I look forward to many more….

    Bram van Oosterhout
