Scott Ambler: TDD first

Scott Ambler in Helsinki. January 2008
A quick recapitulation of the recent workshop with Scott Ambler, the person behind the Agile Modeling methodology.

Of all the engineering practices, implement Test-Driven Development first.*

You might also enjoy Scott’s excellent video Are You Agile or Are You Fragile, in which he talks about generalized specialists, database techniques in agile teams, and other interesting and relevant agile topics.

*That’s my personal impression from the practices-related part of our discussion and might differ from what Scott was actually trying to say.
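
For anyone who hasn’t tried the practice, here is a minimal sketch in Python of what the test-first rhythm looks like, using the standard unittest module. The ShoppingCart example is invented purely for illustration; it is not something Scott presented.

    import unittest


    # Step 1 (red): write a failing test first, before the production code exists.
    class ShoppingCartTest(unittest.TestCase):
        def test_new_cart_is_empty(self):
            cart = ShoppingCart()
            self.assertEqual(cart.total(), 0)

        def test_total_sums_item_prices(self):
            cart = ShoppingCart()
            cart.add_item("book", 25)
            cart.add_item("pen", 5)
            self.assertEqual(cart.total(), 30)


    # Step 2 (green): write just enough production code to make the tests pass.
    class ShoppingCart:
        def __init__(self):
            self._prices = []

        def add_item(self, name, price):
            self._prices.append(price)

        def total(self):
            return sum(self._prices)


    # Step 3 (refactor): clean the code up while the tests keep you honest.
    if __name__ == "__main__":
        unittest.main()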

Baby steps into agile

I was watching an old movie last night that I think is hysterical, “What About Bob?”, starring Richard Dreyfuss and Bill Murray. Bill Murray plays a guy with tons of phobias. Richard Dreyfuss plays his doctor, who is teaching him about taking baby steps to overcome his fears. Bill Murray has one particular phobia about riding in elevators. At one point in the movie, he decides to finally try getting on an elevator and says “Baby steps onto the elevator…baby steps into the elevator…I’m in the elevator” (the doors close) “HELLLLLLLLLLP!!!!!”.

Seeing that movie was the perfect impetus for me to write this post. I’ve been doing a lot of thinking lately on how to take our company through the agile adoption process. I’m thinking baby steps (minus the “Help!” scream hopefully). Over the past few weeks I’ve spoken with a number of leading agile proponents about what they would do to adopt agile in an enterprise and the answer has been a resounding “baby steps”. I’m glad to hear that because that’s how our small development team first adopted agile ourselves. I don’t believe that an organization can just throw the magic agile switch and change things overnight. I think that agile has to be grown organically in a series of steps that help organizations successfully adopt and accept agile practices.

The first step our team took was just getting into the agile “flow”. We instituted some basic practices to get our team into a good rhythm and to introduce agile concepts. We decided to begin two-week iterations on a single project. We already had some half-way decent requirements definitions for the work. But, we didn’t start prioritizing, building user stories, assigning story points, playing planning poker or anything else. We just started with a simple, two-week iteration in which we tried to decide what we could complete during that iteration. We planned the iteration collaboratively and really committed to completing everything we bit off for that iteration. We also started the practice of a daily stand-up, an iteration review, and a retrospective. That’s it. We did that for quite some time until the team was comfortable with the flow of an agile, iterative development style. It was working well, and it seemed to fit for our team.

Once we became comfortable with our new agile practices, we started to slowly introduce others into the mix. We began re-writing our requirements as user stories. We started playing planning poker to estimate story points. And, we began to track our velocity. As the ScrumMaster, I started to really protect the team from distractions and did everything I could to remove impediments for them. Again, that was it for quite some time. When we were feeling good about those, we introduced our client to the idea of prioritizing his backlog of user stories (it was a fixed-scope fixed-price contract, but we thought this was a good exercise to learn from anyway). We also started to really emphasize acceptance testing. Once we were alright with this set of practices, we took a look at what we should be doing on the actual development side to become more agile. We introduced continuous integration, daily builds, and a light version of pair programming. Again, just a few small practices that helped us slowly become more agile. Our next step was release planning. Around that time, we introduced some tooling for managing the projects (Rally) and we also began doing serious defect tracking and fixes with some tooling as well. We had also decided to start implementing agile practices on some other projects that our team was responsible for.

When we felt that we had a handle on our own team, we started to evangelize to our peers and our organization. The organization caught on after some convincing and started sending other project managers to Certified ScrumMaster training. We were going to start sending members of our team out onto other project teams to help coach and guide them in their adoption of agile practices. However, we left our old organization before we got to realize the full enterprise adoption of agile, but we were well on our way to getting them there.

Now, with our new company, we get to do it again…and we’re much smarter from our previous efforts. And our new company has an organizational culture which is very supportive of agile practices. As we move through our adoption process, I’ll keep you posted on our progress, but I’m pretty sure we’ll be taking baby steps first. Baby step into the flow…baby step into agile practices…baby steps into agile!!!

Retrospectives: You live, you learn

I recently came across a quote from one of my favorite authors, Pearl S. Buck. She said:

“One faces the future with one’s past.”

Short, sweet and to the point. The quote really struck a chord with me because our development team recently learned this lesson the agile way. Last Friday, we completed the second iteration of an enterprise GIS development project and conducted a sprint review with our client. To our dismay, we seemed to have been off the mark in terms of what the client was expecting. The client seemed disappointed and our team was as well.

After the review, we conducted a team retrospective and the main topic was “Why did we miss the mark so badly?”. It was the right question to ask and sparked a very open discussion. We discovered that we had not adequately uncovered the business cases behind the user stories, that we had a communications gap with our client, and that we didn’t do a thorough review of the client’s existing application. We then asked the question “What do we do to fix this?”. We decided that at the following sprint planning session, we would discuss the user stories with our client again. We decided to hit the problem head on and increase our communications with the client and thoroughly review the existing application.

On Tuesday, we held our scheduled sprint planning session with the client. We gave our apologies for our missteps, emphasized the value of agile for helping catch this issue quickly and initiated a great, open, and collaborative planning session. To our pleasure, the client had identified the same set of issues as our development team had and came prepared to give a demo of the existing application. They were also very patient answering the questions our developers had about the user stories. In the end, everyone walked away from the sprint planning meeting feeling that we were back on track.

We also walked away with the feeling that our agile practices had done their job. They helped us raise any dysfunction to a visible level very quickly, and also allowed us to resolve those issues very quickly. If we had been developing in a traditional waterfall fashion, we could have been 6 months into development along the wrong track before anyone realized there was a problem…and by then it would have been too late. We would have had angry clients, a discouraged development team, and a failing project. Instead, we face our future with the full understanding of our past mistakes, a satisfied client, and a motivated development team. You live, you learn…

Architecture in an agile project

Something our team has wrestled with over the course of our agile adoption is how architecture and framework development fit into our Scrum process. We strive to deliver an increment of functionality to our client at the conclusion of each sprint. However, in many of our early sprints, we have to do architecture and framework development. The things we develop in these architecture/framework tasks are not really usable by our clients, and we really can’t demo them either.

So, we’ve decided that the best course of action is to strike a good balance between architecture/framework and functionality development. In earlier sprints, we’ve accepted that architecture/framework will be a big part of what we’re working on. But we’ve also become cognizant of the fact that we need to show something to the client in terms of functionality. So, we usually comb through the backlog and pick the highest-priority backlog item that we can complete early in the first sprints, while still doing the architecture/framework tasks we need to get done. In this way, we ensure that the client sees forward progress that they can touch and understand right out of the gate, and we build the architecture and framework to support future development efforts on the project. As we progress through our development, architecture/framework tasks become fewer and our delivery of functionality increases steadily. In graphic form, it looks something like this:

[Figure: architecture/framework work tapering off over successive sprints while delivered functionality steadily increases]

Product Engineering

It is now official: since the beginning of the year I have moved from the Chief Engineer role in a department doing speech recognition and synthesis to product management in my company’s internal tools department.

Reasons

For the last couple of years I was more than full-time focused on making agile software development methods work for our team, our company, and myself (a heavy amount of practical Scrum Mastering and teaching included). It was a fascinating story full of learning, excitement, and success, though not without some failures and disappointments. I still believe that Agile and Lean Thinking are among the top topics many software development organizations should be investing in at the moment.

However, the deeper I got into the topic, the more I realized that after some initial improvements within the team, the in-team practices are not where the biggest waste lies. However fast and effective the team is, it doesn’t matter if it is heading in the wrong direction. This is a common problem, or at least a major improvement opportunity, in many companies. That’s why I decided to move into a role where I can try to help teams build software that the customer really wants. Amazingly, in some agile/lean-oriented companies the person playing a similar role is actually called the Chief Engineer, so in a way I am not changing gears that much.

Effects on this blog

At the moment the only thing I can be 100% sure about is that the new job is going to be full of learning, both on the job and in the classroom. I am planning to get acquainted with my new colleagues and practices, refresh my understanding of agile requirements, and learn some product-management-specific tricks. I am not going to say much about the actual work I am doing unless some of the tools we make find their way to the general public. However, in the course of my learning, expect more coverage of the Product Owner role, product management, and the link to the customer. That doesn’t mean I or the other permanent writers are going to stop writing about team issues, practices, or the Scrum Master area, but there is going to be more about understanding the end customer’s problems. And anyway, I am not the only one writing here.

Just thought you might like to know the development plans for the site, or more precisely for the parts of the site authored by me. Feel free to adjust those plans by making your voice heard.

Estimating the learning curve

Over the past two months, our development team has gone through a lot of changes…from joining a new company and setting up a new work environment to learning new technologies to serve our new clients. In general, we were a desktop-based shop that focused on the ESRI desktop stack. However, many of our new clients require web-based mapping applications. As such, our team is currently climbing a very steep learning curve to work with a new set of technologies in a new environment. More to the point, there are way more unknowns in what we’re currently doing than we’ve ever dealt with in the past.

One of the issues this has raised for us is how valid our current user story estimates are. Right now, everything seems more complex because we are researching just about everything we’re doing. Will the story points we assign to a user story today be valid in 6 months when our tasks become commonplace? Can we use our current velocity based on our learning curve to provide estimates for new work? We have lots of questions running through our heads right now and we’re coming to grips with some of the answers.

Earlier today I was speaking with Ryan Martens, the CTO and Founder of Rally Software Development, and I asked him what he would do in this situation. Ryan presented two solutions, both of which I think are very useful and effective for resolving our estimating issues.

The 3-Month Sliding Window Approach

One way to address the learning curve is to use a 3-month sliding window approach. In this approach, recent history is more important than long-term history. Go ahead and give the user stories the story points you think they deserve based on your current level of knowledge. But, as you go forward, use the past 3 months’ worth of estimates to derive your team’s velocity. Keep that 3-month window of estimates and velocities sliding forward so that as you move ahead and learn more, your estimates and velocity are based on your most recent metrics, not on those gathered during the earlier stages of the learning curve. The further you progress through your technology transition, the more stable your estimates will become. Once your transition is complete, your estimates and velocity should be far more valid than they were at the beginning of your transition.
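
To make the arithmetic concrete, here is a minimal sketch in Python of how a team might compute such a rolling velocity. The iteration dates, point values, and the rolling_velocity helper are hypothetical, invented to illustrate the idea rather than taken from Ryan’s description.

    from datetime import date, timedelta

    # Hypothetical iteration history: (iteration end date, completed story points).
    # The numbers and the 90-day window are illustrative, not prescriptive.
    iterations = [
        (date(2008, 1, 18), 12),
        (date(2008, 2, 1), 15),
        (date(2008, 2, 15), 14),
        (date(2008, 2, 29), 19),
        (date(2008, 3, 14), 21),
        (date(2008, 3, 28), 24),
    ]


    def rolling_velocity(history, as_of, window_days=90):
        """Average completed points per iteration over the trailing window."""
        cutoff = as_of - timedelta(days=window_days)
        recent = [points for end, points in history if cutoff < end <= as_of]
        return sum(recent) / len(recent) if recent else 0.0


    # Only iterations ending within the trailing window count, so the slow
    # early learning-curve iterations eventually age out of the calculation.
    print(rolling_velocity(iterations, as_of=date(2008, 3, 28)))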

Factor Out the Learning Curve

Another way to deal with the learning curve is to just factor it out of the equation. Split the new complex user stories into two pieces: the research component and the actual development component. Add research spikes to your iterations to learn how to do whatever it is you’re getting up to speed on. Once you figure out how to do it, then estimate the user story in another iteration. The isolated user story estimate should be closer to a valid value than if you considered it together with your research task.
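
As a rough illustration only, with a made-up backlog item and purely illustrative numbers, the split might look something like this in data form: a time-boxed spike plus an implementation story that stays unestimated until the spike is done.

    # Hypothetical example of splitting one poorly understood story into a
    # time-boxed research spike and a later, separately estimated story.
    original_story = {
        "title": "Display parcels on a web map",
        "points": None,  # too many unknowns to estimate honestly right now
    }

    research_spike = {
        "title": "Spike: evaluate the web mapping API for parcel display",
        "timebox_hours": 16,  # fixed cost, deliberately not given story points
        "deliverable": "findings and a throwaway prototype",
    }

    implementation_story = {
        "title": original_story["title"],
        "points": None,  # estimated in a later planning session, after the spike
    }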

Is Iteration Zero a good idea?

As I’ve been contemplating moving agile throughout our entire organization here at Data Transfer Solutions, I’ve been considering the usefulness and effectiveness of an Iteration Zero. Many agile teams use what’s known as Iteration Zero to put the necessary systems in place to enable the delivery of value to the customer. It’s essentially the getting-started iteration. It takes place before any development begins. I think Peter Schuh described Iteration Zero very well in his book Integrating Agile Development in the Real World. Peter says:


“An iteration zero does not deliver any functionality to the customer. Instead the project team focuses on the simple processes that will be required for the adoption and use of most agile practices. From a management point of view iteration zero may include:

  • Initial list of features identified and prioritized.
  • Project planning mechanism identified and agreed upon.
  • Identification of and agreement upon a team customer, essential stakeholders, and business users and the nature of iterative planning process, such as the time of planning meetings and the length of iterations.”

I personally think that the use of an Iteration Zero is very pragmatic and I think that Peter’s idea of what Iteration Zero looks like is very realistic. I believe that in the real world of software development, customers aren’t really ready to jump right in and start on a sprint from day one. Many of them don’t entirely understand the Scrum “process”. Most of them don’t truly understand their requirements. I think an Iteration Zero can be useful in educating the customer (and coming to an agreement with them) about the agile planning process. I also think it can be used to develop the initial prioritized backlog for the project. Now, I’m not advocating heavy up front requirements gathering here, but you do need some time to create effective user stories to start working from. From a management point of view, check out Agile Support’s post on Iteration Zero as it exists within their Agile Contract Engagement Roadmap as seen below:

[Figure: Agile Support’s Agile Contract Engagement Roadmap, showing where Iteration Zero fits in the engagement]

Aside from the project management side of Iteration Zero, I think there can be a good case for the development team to engage in Iteration Zero as well. I think that as we switch between projects, we also tend to switch technologies or development platforms. To do so, development teams may need time to stand up a new database server because the new customer uses Oracle and the last customer needed SQL Server. Hopefully you have both running so you don’t need to worry about things like this, but you get the point. Sometimes the development team needs some time to get things set up to support the project before they start delivering potentially shippable product increments each sprint. For a quick idea of what another agile team has done during Iteration Zero, check out Energized Work’s post on their Iteration Zero.

I’d like to know your thoughts on Iteration Zero. Does your team use an Iteration Zero? If so, what kinds of tasks do they typically include? And most importantly, do you find them useful and effective in delivering value to your customers?

The Value of Software Testing

Lately I have noticed several discussion threads, both offline and online (the most notable online one is in the leandevelopment Yahoo! group), focusing on the value of software testing, both automated and manual. The main discussion point, as far as I can see, is whether testing is actually waste that is unfortunately needed to cope with insufficient practices and should eventually be eliminated.

Knowledge Source

Software development is not mass production but new product development – software engineers are not manufacturing goods, but rather producing a design for a compiler to manufacture something nobody has built before. Just as with any other type of new product development, there are bound to be areas of uncertainty, and a good result is virtually always achieved with the help of frequent prototyping and validation of design decisions. The main job of a software development team is therefore along the lines of creating knowledge about the system being built, its environment, design options, the quality of the current code, and so forth. In my opinion testing is no more and no less than yet another tool for creating more of the knowledge that will eventually allow for building a successful product.

Costs of Producing Knowledge

To me the question of practical importance is not whether tests add value, but how much they add, whether the added knowledge is highly relevant, and whether there are ways to get more useful knowledge more easily. That is, of course, context dependent. My experience, as well as the literature I have read, suggests that testing software produces some important bits of knowledge that are difficult to obtain in other ways. I accept, though, that not all tests are equal. I like and find it useful to discuss which kinds of testing produce the most relevant knowledge in the cheapest way, and which kinds might be cheaper to replace with, for example, code review. Still, there is value in testing. It is just that sometimes you can get some of the same useful information more cheaply.

Your experience

What do you think? Does testing add value in your environment or should we try our best to remove the need for testing altogether?

Do you do Incremental Design?

Incremental Design is one of the core practices of XP. Many favor a “rough up-front design” rather than a “big up-front design”. The days when design was a single long-term phase (as in waterfall or the V-model) are gone, and developers are embracing design as an integrated daily activity.

Software development has taught us that requirements change and that we need to live with it. When requirements change, the design changes too. We often hear that “the design needs to be flexible enough”, but that is not practical in all cases. A software design is largely a function of the designer’s technical and system/business knowledge, and it is very difficult to predict and anticipate changing requirements in advance.

With 30-day sprints and story cards, the team knows the requirements to be fulfilled and the direction the product is heading. This makes it easier for developers to concentrate on their modules and do a short, simple, rough up-front design.

Do we see Incremental Design as a step ahead of the conventional big up-front design? Does Incremental Design pose any challenges?

XP Practice: Quarterly Cycle

The Extreme Programming weekly cycle is an efficient tool against over-planning: it keeps the focus on implementing the top-priority stories and getting user feedback as soon as possible. While it is indeed important to deliver the high-priority functionality in small chunks, working permanently at a low level of detail can lead to sub-optimization: you might be polishing an already good-enough product while losing the opportunity to recognize the need for a completely new feature.

The quarterly cycle is no more than a recommendation to hold regular reviews of the high-level system structure, its goals and priorities. A quarter makes a natural period: it matches many enterprises’ financial periods, is clearly big enough not to get buried in current issues, and is small enough that the quarterly planning meetings remain forums on something that can actually be implemented relatively soon. The quarterly cycle is also a recommendation to reflect on team practices, relationships, and feelings at least once a quarter. For example, a quarterly discussion might be a good moment to consider the need to try some completely new approach to testing or version control.

This page is a part of the Extreme Programming overview