We have to be good at being wrong. Architects can help

Anssi Piirainen raises the idea that we need to be good at being wrong in order to create good software. The core idea is sound – you have to accept that your first implementations may later turn out to look poor. However, Anssi also presents a micro case study describing problems with the “just-in-time design” approach: the required changes turned out to be unexpectedly difficult for not-so-experienced programmers.

Just-in-time design is not a silver bullet. It is just another good tool that can be rather useless unless it is used with the right supporting practices. Changes are inevitable and will happen, so we had better be prepared for them. Extensive testing (or test-driven development) is one way to be protected against regressions. Another way is to run a few trial spikes before the major coding starts and to ask an experienced architect to help with building the initial architecture. This doesn’t mean that the initial architecture should be frozen – the architect’s experience simply helps you skip the first few failure points.

Mocks, stubs and fakes

I am rather new to Test-Driven Development (TDD) and to continuous testing in general, so I quite often run into terminology difficulties: what to call a fully functional alternative object, what to call an almost empty stub object, and so on. It looks like I am not alone.

Martin Fowler highlights Gerard Meszaros’s proposal:

The generic term he uses is a Test Double (think stunt double). Test Double is a generic term for any case where you replace a production object for testing purposes. There are various kinds of double that Gerard lists:

  • Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
  • Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an InMemoryDatabase is a good example).
  • Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what’s programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it ‘sent’, or maybe only how many messages it ‘sent’.
  • Mocks are pre-programmed with expectations which form a specification of the calls they are expected to receive. They can throw an exception if they receive a call they don’t expect and are checked during verification to ensure they got all the calls they were expecting.

A nice dictionary. I am going to try using it for a while. How do you like these definitions? Worth trying? Worth recommending to colleagues?
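
To make the terms a bit more concrete, here is a minimal, hypothetical sketch (in Python) of how a stub differs from a mock for an imaginary email gateway. The class and method names are mine for illustration, not Meszaros’s:

```python
from unittest import mock


# A hypothetical service that depends on an email gateway.
class OrderService:
    def __init__(self, mail_gateway):
        self.mail_gateway = mail_gateway

    def place_order(self, customer_email):
        # ... order logic would go here ...
        self.mail_gateway.send(customer_email, "Your order is confirmed")


# Stub: gives canned behaviour and may record what it 'sent',
# but makes no judgement about how it was called.
class MailGatewayStub:
    def __init__(self):
        self.sent_messages = []

    def send(self, to, body):
        self.sent_messages.append((to, body))


def test_order_confirmation_with_stub():
    gateway = MailGatewayStub()
    OrderService(gateway).place_order("alice@example.com")
    # The test itself decides what to assert on the recorded state.
    assert len(gateway.sent_messages) == 1


def test_order_confirmation_with_mock():
    # Mock: the expectation is verified afterwards; the test fails
    # if the expected call never happened.
    gateway = mock.Mock()
    OrderService(gateway).place_order("alice@example.com")
    gateway.send.assert_called_once_with("alice@example.com", "Your order is confirmed")
```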

Good design ghosts

I can recall several cases when a seemingly clean and simple class turned out to be a huge, hard-to-debug entity with loads of underlying asynchronous logic. The small, simple public interface hides the implementation details, and in line with good Object-Oriented style all the underlying details are private. But is it really good style to hide the implementation details in a pile of private methods?

As often happens, the problem comes from a good idea overused to the point where it inhibits another good idea. Encapsulation makes your classes or systems easy to use from the outside, but it should not make you forget about the internal structure of the class. As Michael Feathers puts it:

It often means that there’s some other abstraction hidden away inside the class that might be useful to pull out.

A class with loads of private methods looks like the ghost of a good design hovering over an old, dirty, dangerous grave. It is good to hide the internals of your system from external clients, but don’t forget about the next level down – keep the sublevels clean as well.
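
A tiny, hypothetical sketch (in Python) of what “pulling out” that hidden abstraction can look like: the pile of private retry helpers inside a “simple” class is really a separate abstraction waiting to be extracted. The names here are purely illustrative:

```python
# Before: a "simple" facade hiding a pile of private helpers.
class ReportSender:
    def send(self, report):
        payload = self._serialize(report)
        self._send_with_retries(payload)

    def _serialize(self, report): ...
    def _send_with_retries(self, payload): ...
    def _backoff_delay(self, attempt): ...
    def _log_failure(self, error): ...


# After: the hidden abstraction becomes its own class,
# which can now be tested and reasoned about on its own.
class RetryingTransport:
    def __init__(self, max_attempts=3):
        self.max_attempts = max_attempts

    def deliver(self, payload):
        for attempt in range(self.max_attempts):
            try:
                self._send_once(payload)
                return
            except IOError:
                self._sleep(self._backoff_delay(attempt))
        raise IOError("delivery failed after retries")

    def _send_once(self, payload): ...
    def _sleep(self, seconds): ...

    def _backoff_delay(self, attempt):
        return 2 ** attempt


class ReportSender:
    def __init__(self, transport):
        self.transport = transport

    def send(self, report):
        self.transport.deliver(self._serialize(report))

    def _serialize(self, report): ...
```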

Waterfall in schools

Grig Gheorghiu tells about the funny experience of interviewing recent graduates for a QA position. All of the candidates had been taught only the waterfall method.

He said they spent a lot of time writing design documents, and since they *only* had one semester for the whole project, they almost didn’t get to code at all. I couldn’t help laughing at that point, and I told him that maybe that should have been a red flag concerning the validity of the Waterfall methodology.

My university experience

Some 5-7 years ago I was also taught a very concrete, waterfall-like discipline of building enterprise-scale information systems (IS). It was not too bad, because some types of IS do share similarities, and in such cases a rigorously applied waterfall can deliver more or less predictable products. What was bad was the complete absence of any notion that other approaches exist. Just as Grig tells it, we were taught simply “the Software Development Lifecycle”. Nobody told us that there are different ways of, and views on, constructing a system.

What were you taught about software development? Did your teachers tell you about the variety of approaches?

Process over individuals

Individuals and interactions over processes and tools © agilemanifesto.org

Recently, on a Russian programmers’ forum, I read a story about the usual victory of a process over individuals.

One software development company had both Windows and Unix workstations. As a result, from time to time the wrong line-ending characters leaked into the repository and broke the build. There were two ways to overcome the problem:
1. Oblige everybody to check their files before committing.
2. Spend some time writing an automated checker and/or converter.

As you might have suspected, the first solution was chosen. The irony of the situation is that it was already the status quo: nobody was doing things wrong on purpose, people were simply making mistakes.
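
For the record, the second option does not have to be a big investment. A minimal sketch of an automated line-ending check, written here as a hypothetical Python pre-commit script (the file extensions and policy are my assumptions, not the forum story’s actual setup):

```python
#!/usr/bin/env python
"""Reject commits that contain Windows (CRLF) line endings in source files."""
import subprocess
import sys


def staged_files():
    # Ask git for the list of files staged for this commit.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f]


def has_crlf(path):
    try:
        with open(path, "rb") as fh:
            return b"\r\n" in fh.read()
    except OSError:
        return False


def main():
    # Only check text-like files; binary formats are left alone.
    offenders = [f for f in staged_files()
                 if f.endswith((".c", ".h", ".java", ".txt")) and has_crlf(f)]
    if offenders:
        print("Commit rejected, CRLF line endings found in:")
        for f in offenders:
            print("  " + f)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```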

Fixing the failures

Now recall what sometimes happens in “good old” software companies when a huge project ships many months later than expected and with much lower quality than needed. The big bosses demand that everyone “raise the bar”: plan more thoroughly and estimate more carefully. The irony, again, is that this is already the current practice: there is no project manager who isn’t trying to get the best estimates possible.

Unfortunately, the creation of a new software product is a complex process, and the behavior of a complex system cannot be predicted by studying all the details in advance. Rigorously following the process might help when you develop the tenth information system for yet another book shop, but it won’t help when you create brand new functionality.

Be very careful if the only result of a failure investigation is the decision to “really” follow the already existing process. It might be a good idea, but it might also just be a bad process. And even if the process is good, it may be more useful to study why it wasn’t followed in the first place.

Has it happened to you? When the reasons for a failure were not understood, did the boss simply demand more soldier-like discipline? And did it work?

Why premature optimization is the root of all evil. In management language

Program optimization is performed in order to reduce execution time, memory usage, bandwidth or some other resource. Optimization essentially means sacrificing a clear, simple and understandable code architecture. As a result, complexity increases, and debugging and restructuring become more time-consuming and more expensive.

It is simply a lot cheaper to get clear, working, tested code before optimization starts. As a side effect, a set of unit tests (if you practice them) will protect you from the regressions that optimization work can introduce.
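
As a tiny, hypothetical illustration: if the slow-but-clear version is covered by a test, the same test tells you immediately whether the optimized version still behaves the same. The names and the caching trick below are mine, purely for illustration:

```python
# Clear, obviously correct version: recompute from scratch every time.
def total_price(items):
    return sum(item["price"] * item["qty"] for item in items)


# "Optimized" version with a cache; faster on repeated calls,
# but the extra state is exactly where regressions like to hide.
_cache = {}

def total_price_cached(items):
    key = tuple((item["price"], item["qty"]) for item in items)
    if key not in _cache:
        _cache[key] = sum(p * q for p, q in key)
    return _cache[key]


def test_optimized_version_matches_reference():
    items = [{"price": 10, "qty": 2}, {"price": 3, "qty": 5}]
    # The clear version pins down the expected behavior of the fast one.
    assert total_price_cached(items) == total_price(items) == 35
```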

“If you haven’t achieved quality in your product development process, it makes zero sense to start optimizing it.” (Linda Hayes, Worksoft, Inc.) It costs too much. Tell that to your boss. He might not understand a programmer’s “feelings”, but he understands the language of money very well.

Being evil is expensive

Once upon a time, on a continent far, far away (from Europe), there was a mighty spreadsheet software vendor called Lotus Development Corporation. Its product, Lotus 1-2-3, dominated the niche with almost 100% market share. There was a negligibly small competitor product named Microsoft Excel. Microsoft’s product met user needs better, integrated well with the word processor and in general was a good product. Unfortunately, Lotus’s overwhelming user base left little chance for any competitor, however good it was.

Microsoft worked really hard to lower the entry barrier. They mailed demo packages by post, built Lotus-to-Excel converters, and even created a special version of the Windows OS that could be started without installation and could run only the MS Excel trial – pretty much everything that could reduce the effort of trying the product. And the most cunning trick of the Microsoft generals was to introduce a really good Excel-to-Lotus converter. Yes, you heard it right: Microsoft released a tool that customers could use to switch back to their good old Lotus 1-2-3 whenever they decided that Excel wasn’t worth any further trials.

Help the customer

As we all know, the results were amazing. The trick was that almost nobody actually used the converter to switch back. It was used to exchange files with friends who still used Lotus, and it also served as a backup option.

The moral of the story is that your business enemies might very well be your customers’ best friends. And a customer likes to see that you value his opinion. Don’t push your offer too hard; let your client know that you care about his preferences, not yours – it will pay off.

Does it work for you?

Are you building a product that operates on potentially competitor-generated content? Can a customer try your product in five minutes? How easy is it for a customer to switch back to your good old competitors?