Lisa Crispin

SE Radio 164: Agile Testing with Lisa Crispin

Join the discussion
  • First, thanks to the se-radio team for the consistently high quality of the podcasts over such a long time. It is a pleasure to listen to your show.

    I have been a professional software engineer for many years. I have usually worked on projects with waterfall or classic iterative process approaches. My experience with these has not been that bad. When something went wrong, it was often caused by "soft factors", like risky and unrealistic commitments to customers. The best process won't help there.

    Nevertheless, I would like to try Agile Development, just to learn how it could work for me.

    On the one hand, I have the impression that the literature and periodical articles are mostly written by consultants who usually work on more or less complex bespoke software projects. Perhaps these colleagues are more regularly in touch with the latest technology, tool, and process trends.

    On the other hand, I would assume that most software engineers in the world (myself included) work on off-the-shelf product development projects. This includes shrink-wrapped software, specialized software in obscure domains with only a few customers, and software that is part of embedded systems.

    These software systems usually live for many years, if not decades. They include immense amounts of legacy code, have rich functionality, and are often well proven in the field. They are frequently maintained by software engineers who are also domain experts of a sort, since they have worked on the same subject for years.

    In this context, Agile Development has two important aspects:

    First, there are a lot of best practices and tools. These include unit tests, test-driven development, iterative or staged development, continuous integration, and so on. These practices are not unique to Agile Development and could also be used in waterfall or other software development processes.
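    To make this first aspect concrete, here is a minimal sketch of such a unit test in Python, written TDD-style. The discount function and all its names are purely illustrative assumptions of mine, not something discussed in the episode:

```python
# A hypothetical function under test (invented for illustration only).
def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests written first, TDD-style: each pins down one piece of the contract.
def test_regular_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_zero_discount_keeps_price():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percentage_is_rejected():
    try:
        apply_discount(100.0, 120)
    except ValueError:
        pass  # expected: the contract rejects out-of-range percentages
    else:
        raise AssertionError("expected ValueError for percent > 100")

test_regular_discount()
test_zero_discount_keeps_price()
test_invalid_percentage_is_rejected()
```

    Nothing about such a test requires an agile process; it runs just as well in a waterfall project, which is exactly the point.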

    But in some respects Agile Development is fundamentally different from other approaches. First, requirements engineering does not play such an important role (especially upfront). But this is not the topic of the current podcast.
    Second, there is the claim that the software product can be released after each development cycle (e.g. every week).

    For off-the-shelf software products, this seems far from realistic to me, and the current podcast episode didn't convince me otherwise.

    Michael asked exactly the questions that are the key points for me: What about user interface testing (in many systems this is more than 50% of the code)? What about the "-ilities" and performance? What about the growing number of tests that blow up your iterations? Lisa's answers were evasive.

    Unit tests are crucial, but they don't give you sufficient certainty that your software quality is good. The product is more than the sum of its pieces. Most problems appear at the interfaces between modules or components. Often the semantic interpretation of the interface contracts is not that clear. Other problems arise from different execution paths (C2…C4 coverage tests).
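    A tiny, hypothetical example of such an interface problem (the modules and units are my own invention): two modules whose unit tests both pass in isolation, but whose combination is wrong because they interpret the contract differently.

```python
# Billing module: sums line items, result in *cents*.
def total_in_cents(line_items):
    return sum(line_items)

# Report module: formats a total it believes is in *euros*.
def format_total(total):
    return f"{total:.2f} EUR"

# Unit view: both behave exactly as specified in isolation.
assert total_in_cents([199, 301]) == 500
assert format_total(5.00) == "5.00 EUR"

# Integration view: wiring them together exposes the semantic gap.
report = format_total(total_in_cents([199, 301]))
# report is "500.00 EUR", not the intended "5.00 EUR" --
# the kind of defect only a test across the interface catches.
assert report == "500.00 EUR"
```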

    Therefore, automated end-to-end tests are required that cover your customers' most frequently used functionality. In the best case, they cover the whole system from the user interface to the database and back.

    Therefore you need a running database in a defined state; you should not merely mock it. Then you set up your environment (including automatic deployment of the runtime). Now you are ready to run your test cases. Afterwards, you have to evaluate the results. Tests that merely run but produce incorrect data have no value.
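    As a rough sketch of this cycle, assuming Python and an in-memory SQLite database as a stand-in for the real one (the schema and the "close open orders" operation are invented for illustration):

```python
import sqlite3

# 1. Bring the database into a defined state.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
db.executemany("INSERT INTO orders (id, status) VALUES (?, ?)",
               [(1, "open"), (2, "open"), (3, "shipped")])
db.commit()

# 2. Exercise the functionality under test against the real database
#    (a hypothetical "close all open orders" operation).
def close_open_orders(conn):
    conn.execute("UPDATE orders SET status = 'closed' WHERE status = 'open'")
    conn.commit()

close_open_orders(db)

# 3. Evaluate the results: assert on the resulting data,
#    not merely on "the run finished without an exception".
closed = db.execute(
    "SELECT COUNT(*) FROM orders WHERE status = 'closed'").fetchone()[0]
shipped = db.execute(
    "SELECT COUNT(*) FROM orders WHERE status = 'shipped'").fetchone()[0]
assert closed == 2 and shipped == 1
```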

    If the database contains a realistic amount of data and your product comprises several thousand function points (or millions of lines of code), the tests definitely run for more than 10 minutes, and easily more than 2 hours. For the product I am working on, the (automated) tests run for several days, and only a fraction of the whole functionality is covered.

    Usually off-the-shelf products support a wide range of platforms. You have to run the tests on different operating systems (with rich clients) or web browsers (with web interfaces). You have to test your middleware on different application servers (WebSphere, WebLogic, …), databases (Oracle, SQL Server, DB2), and languages (English, German, Japanese, Spanish, …).
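    The size of this test matrix can be sketched in a few lines of Python; the platform lists are just the examples named above, and real products typically support far more:

```python
from itertools import product

# Illustrative platform dimensions, taken from the examples above.
app_servers = ["WebSphere", "WebLogic"]
databases = ["Oracle", "SQL Server", "DB2"]
languages = ["English", "German", "Japanese", "Spanish"]

# Every combination has to be tested at least once.
matrix = list(product(app_servers, databases, languages))
print(len(matrix))  # 2 * 3 * 4 = 24 full test runs for the middleware alone
```

    Each additional dimension (operating system, browser, database version, …) multiplies, rather than adds to, the number of runs.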

    There is a lot more to consider: you might automate your GUI-driven tests with expensive and hard-to-maintain tools like SilkTest, but the usability and latency of the interaction have to be tested by humans. The same applies to test cases that cover the most frequently used permutations of the critical use cases of your most important customers (Lisa called this exploratory testing).

    Even worse are embedded systems: here you also have to consider compliance criteria, e.g. your system must be tested in defined temperature ranges. Often you also have to fulfill real-time conditions.

    Quickly, your regression test takes several months, and this does not meet the requirements of Agile Development.

    And you can't just add more servers or testers, as Lisa proposed. At some point, your managers (and the market) will no longer pay for this. The same is true for optimizations of your tool suite.
    In product development, you cannot regularly switch to the newest test tool available. You have to maintain your legacy tests. You cannot migrate gigabytes of test data once a year or so.

    Maybe you (the se-radio team) should produce an episode about this: Agile Development for off-the-shelf products. I am keen to hear how the development of complex software products (distributed or embedded systems) can be mastered with Agile Development.

  • Let me share some of my insights from large and off-the-shelf software projects transitioning to Agile. I believe that in such setups it is important not to take the goals too literally, e.g.
    – being able to release the software with every iteration might not be feasible or efficient, but it serves as a very good goal to optimize and challenge yourself towards. Improving your build and test cycles always pays off, and you might be able to reduce waste.
    – having a dozen teams that are fully self-organized, with very little coordination and synchronization, might be hard. If you have constraints, regulatory or otherwise, you might require some specialists spread across the teams, plus an additional support organization caring for the teams.
    – test sets might be very large and complex, the product might have legacy code that is not easily testable via automation, or automation efforts might not be finished. Here you have to invest in lots of manual, tedious, and long-running testing: not very agile at first sight, but necessary.

    I agree, very few of us start off with small teams and a perfect product. So I take agility and lean development more as a vision to improve towards than as a 1:1 mantra, especially when starting to adopt it. Hope this makes sense.
