Some teams using Scrum and XP hold special QA iterations every few iterations and/or before a release. While this might be acceptable during the transition to agile processes, as a rule of thumb, having QA sprints is a good indication of problems with the definition of "done". The main point of iterative development is to have a "potentially shippable" product at the end of every iteration. Planning for QA sprints essentially means that at the end of the iteration, the team does not plan to have a potentially shippable product.
Reasonable QA Sprints
I believe that separate QA sprints can be reasonable in the following cases:
1. During the transition period from waterfall. Until there is a common consensus on what exactly "done" means for the current project, and until practices and tools are stable enough to support it, there can be mismatches in the definition of "done", and QA sprints might be able to fix the situation.
2. When some types of testing take too much time and effort. For a software-hardware company, a potentially shippable increment might mean a tested hardware board that takes weeks to create. For a network product, release testing might require deploying the software on a multitude of system and network configurations.
3. When requirements unexpectedly change such that a much higher quality level is required. For example, this might happen if your company manages to sign a contract with a hospital and the software suddenly becomes health-critical. The team might then spend several sprints devoted mainly to reviews of the existing code and extra testing.
However, all the situations above are exceptional, and an agile team should try to get rid of them as soon as possible in order to become more productive and predictable. In most situations, delays in bug detection cost more than spending the time on automating network reconfiguration or adopting reconfigurable hardware prototypes.
Are there other situations in which QA iterations could be reasonable for a long period of time?
Most agile software development methods explicitly or implicitly recommend test-driven development and, more generally, the development of testable code. The reasons for the high testability demands are no secret. By the very definition of "agility", an agile project has to be ready for, and even embrace, requirement changes and the corresponding architecture changes. Every hardware engineer knows that it is easy to redesign a wired board only if all the components used are reliable and conform to their own specifications. The same holds true in the software world: it is easy to redesign a system only if its components are well tested.
There is little doubt that code testability is a good thing, but how can you measure it? Is it possible to look at two classes and tell which one is more testable than the other? How do you decide that a component is testable enough and doesn't need further refactoring?
Jay Flower proposed a score-based metric for testability. His metric is based on a set of attributes that make a class easily instantiable (is there a public constructor with no arguments, does the constructor depend on other types) and easily sensed (how easy it is to check that a value has really been assigned, how many actors participated in the action). While these attributes are definitely important, I feel that the metric is a bit too indirect and focused on low-level details. At such a detailed level there is always a possibility of omitting something important.
My measure of testability would be an integral value: the number of seconds, or the number of lines of code, required to instantiate the class (the same actually applies to modules and subsystems) in a test harness and to run and sense the functionality of interest. I believe this metric correlates well with Jay's proposal, but being an integral metric, it allows a broader view of the problem. For example, a class with ten constructor arguments will rank as well-testable if there is already a test factory for it.
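To illustrate the idea, here is a minimal sketch (all class and function names are hypothetical, invented for this example). The class itself needs six argument lines to construct, but a shared test factory brings the per-test cost down to one line, which is exactly what a "lines of code to get under test" metric would reward:

```python
# Hypothetical example: a class that is awkward to instantiate directly
# because its constructor takes six collaborators.
class ReportGenerator:
    def __init__(self, db, cache, formatter, locale, timezone, retries):
        self.db, self.cache, self.formatter = db, cache, formatter
        self.locale, self.timezone, self.retries = locale, timezone, retries

    def title(self):
        return f"Report ({self.locale})"

# A shared test factory with sensible defaults collapses the instantiation
# cost to a single line per test; each test overrides only what it cares about.
def make_report_generator(**overrides):
    defaults = dict(db=None, cache=None, formatter=str,
                    locale="en", timezone="UTC", retries=3)
    defaults.update(overrides)
    return ReportGenerator(**defaults)

# In a test: one line to instantiate, one line to sense the behaviour.
gen = make_report_generator(locale="de")
assert gen.title() == "Report (de)"
```

Measured naively, the class looks poorly testable (six dependencies); measured integrally, with the factory already in place, it is two lines away from any assertion.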
What do you think about such an integral metric? Can a set of mocks, stubs, and fakes be considered a way to raise component testability?
Agile software development methods tend to be lightweight. That is, while an agile project can produce extensive user documentation, there is usually a very small amount of internal documentation and bureaucracy. There is almost never a heavy architecture and design specification, and even when there is one, it is never produced upfront but rather evolves iteratively.
Teams don't just skip the boring documentation. There are reasons for using a lightweight process:
- Agile teams focus on adding value for the end customer.
While customers might find a user guide useful, in most cases they don't care about internal design documents. Therefore, if it is possible to work without heavy upfront documentation, doing so actually saves the customer money.
- Agile teams evolve the solution design over time
Even when the agile process used requires some amount of internal documentation (it might be especially necessary if multiple teams work on the same project in a multi-site environment), it cannot be produced at the beginning of the project, because agile teams develop software feature by feature and embrace change.
- Agile teams compensate the “missing” documentation by other means
The purpose of internal documentation is always to pin down the concrete solution, to make sure that the system is going to conform to the given technical and customer requirements. Agile teams tend to use test-driven development and automated testing for the technical part, and frequent communication with potential users for the customer part.
What do you think about these reasons? Where is the border between a too-lightweight and a too-heavyweight project? What amount of documentation is good for your team?
When talking about software development, the building construction analogy is often used. Like every other analogy out there, it is not very accurate. Unfortunately, this particular analogy is inaccurate in a very important way: buildings are artifacts of the material world, while software is non-material.
The virtual nature of software allows for transformations impossible in the physical world. In the physical world it might be possible to add balconies to an existing house during an expensive renovation, but it is impossible to add a hundred floors to a two-storey house just because the grand city architecture plan changed and it is now possible to get a bigger return on investment by building a skyscraper instead. It is impossible to change the floor plan just before roofing.
With software, similar changes can happen every day. If database A doesn't scale up to the new demand, it can be replaced with database B. If a website's category structure doesn't appeal to users, it can easily be replaced with a tag cloud.
This kind of change doesn't happen for free. In order to be easily adjustable, the quality of the already-developed software must stay permanently high. In order to be easily changeable, the software design should be kept tidy and refactored bit by bit, often. However, the opportunities opened up by permanently high design quality are enormous.
Agile software development methods are sometimes criticized as something that cannot be relied upon: agile project managers are unable to produce a fixed upfront effort-time-cost estimation. Sometimes this is even the core argument of waterfall process proponents.
In fact, the agile manifesto principles "Customer collaboration over contract negotiation" and "Responding to change over following a plan" do not state that contracts and plans are useless. The agile community recognizes the value of contracts and plans; it is just that agile developers value collaboration and responding to change more. If it were possible to create an upfront time and cost estimation, an agile team would be glad to do it. Unfortunately, in most cases it is hardly possible. However, agile approaches provide an alternative to heavy upfront requirements analysis and design: an iterative approach to estimation. Every iteration, plans can and should be reevaluated, and estimations readjusted.
The first two or three iterations, after which the estimations become rather reliable, often take less time than a traditional requirements analysis phase. Estimations can become even more reliable if, during these first iterations, the team works on the items with the least understanding, precisely in order to produce better estimations.
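The arithmetic behind this kind of iterative re-estimation is simple; here is a minimal sketch with made-up numbers. After a few iterations, the observed velocity (story points completed per iteration) gives a far better forecast than any upfront guess, and it is re-computed every iteration:

```python
# Hypothetical numbers: forecasting remaining iterations from observed velocity.
completed = [18, 22, 20]      # story points finished in iterations 1-3
backlog_points = 120          # points still estimated in the backlog

velocity = sum(completed) / len(completed)          # 20.0 points/iteration
iterations_left = -(-backlog_points // int(velocity))  # ceiling division -> 6

print(f"velocity ~{velocity:.1f} points/iteration, "
      f"~{iterations_left} iterations to empty the backlog")
```

Each new iteration appends a data point to `completed`, so the forecast self-corrects as the team learns, instead of being fixed once at contract time.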
Have you yourself noticed that agile teams arrive at reliable estimations faster than their waterfall counterparts? If you worked in an agile team that had to make hard contracts, did you manage to delay the contract details until the first couple of iterations was completed?
When agile software development is tried in a large corporate environment, it often happens that the agile pilot team (or teams) has to cooperate with the good old waterfall team around the corner. It sounds reasonable that new things (like an agile process) are tried first on projects of lesser importance that are supposed to add value to more important activities. For example, an agile team might be asked to develop a web interface to a client-server system being developed by a waterfall team.
I don't have experience with a really hard dependency, but I've been in a situation where our dependency on a waterfall team was rather weak. What happened is that the waterfall team tried to delay the integration as long as possible (it was our fault that we did not force it to happen earlier), and as a result the whole product was delayed a bit.
From limited personal experience of this kind of collaboration, and from the rumors around, I can say that most often it is the agile team that puts pressure on the waterfall partner. Since agile teams often practice test-driven development and write tests for external interfaces as well, bugs in the waterfall component are discovered much earlier and more frequently than the waterfall team is comfortable with. Since agile teams are ready to change according to changing (or newly clarified) customer demands, extra pressure is put on the waterfall team to change their plans much, much earlier than they would like. And so on.
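A sketch of what "tests for the external interfaces" might look like in such a setup (the endpoint, fields, and function are entirely made up for illustration). The agile team encodes the agreed contract as a check it runs against every drop the waterfall team delivers:

```python
# Hypothetical contract check for another team's component.
# The /status payload shape below is an invented example of an agreed interface.
import json

def check_status_contract(payload: str) -> list:
    """Return a list of contract violations for the agreed status payload."""
    problems = []
    data = json.loads(payload)
    if "version" not in data:
        problems.append("missing 'version'")
    if not isinstance(data.get("uptime_seconds"), int):
        problems.append("'uptime_seconds' must be an integer")
    return problems

# Run on every delivery: regressions in the other team's component surface
# immediately, instead of during a late big-bang integration.
assert check_status_contract('{"version": "1.2", "uptime_seconds": 10}') == []
```

Every failure of such a check is exactly the kind of early, frequent bug report that the waterfall team's schedule did not plan for.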
In virtually any case I can imagine, the agile team puts extra pressure on the waterfall team. After that, everything depends on how well the pressure can be resolved and how much politics has to be involved. The possible outcomes are:
- The second team sees at least some value in the agile team's arguments and adapts at least somewhat. For example, it could move from strict waterfall to phased delivery, which is a good first step toward agile process adoption.
- The second team (or its boss) claims that the agile team is disrupting the orderly business, and then the situation mostly depends on the political power of the team chief and the position of higher management. In extreme cases, the agile team can even be shut down.
Have you experienced this kind of agile-waterfall collaboration in real life? I wonder what happens most frequently in reality, especially in situations with strong inter-team dependencies.