
Category: testing

Acceptance Testing: What, Why and How

October 11, 2007 by Vaibhav

What is acceptance testing?

Acceptance testing is black-box testing performed on a software system prior to its delivery. It involves running a suite of tests on the completed system (ref: Wikipedia). These test suites are made up of multiple tests, or test cases. Each test case consists of a sequence of steps that emulate the use case being tested; it also contains input data (if required) as well as the expected output. The result of a test case is either a pass or a fail.
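
To make that anatomy concrete, here is a minimal C++ sketch of how such a test case could be represented and evaluated. All the type and function names are invented for illustration; they do not come from any particular acceptance testing framework.

```cpp
#include <string>
#include <vector>

// A test case is a named use case plus an ordered list of steps,
// each with its input data and the expected observable output.
struct Step {
    std::string input;     // input data for this step (may be empty)
    std::string expected;  // the output this step must produce
};

struct TestCase {
    std::string useCase;      // the use case this test emulates
    std::vector<Step> steps;  // the sequence of steps to perform
};

// The result is binary: the test case passes only if every step
// yields exactly the expected output; otherwise it fails.
bool run(const TestCase& tc, std::string (*sut)(const std::string&)) {
    for (std::vector<Step>::const_iterator it = tc.steps.begin();
         it != tc.steps.end(); ++it) {
        if (sut(it->input) != it->expected)
            return false;  // fail
    }
    return true;           // pass
}
```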

CppUTest framework. Symbian extension and example

October 2, 2007 by Artem Marchenko

Update: users of some slightly exotic SDKs reported that they could not build the example. The cause was the project file SampleTest.mmp. I have updated it, making the project file a bit redundant; the example should now compile on any S60 SDK.

CppUTest is a unit testing framework based on CppUnitLite and designed for use in embedded systems. It has recently been ported to the Symbian OS used in Nokia S60 smartphones. The framework is xUnit compatible, extremely simple, and easy to use. It can print the test results both to the console and to a JUnit-like XML file (later I am going to investigate how well that file can be parsed by standard JUnit output parsers).
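
For readers who have not used the framework before, a minimal CppUTest test looks like the sketch below. The Counter class is invented purely for illustration; TEST_GROUP, TEST, CHECK_EQUAL, and CommandLineTestRunner are the standard CppUTest API, though output options vary between versions.

```cpp
#include "CppUTest/TestHarness.h"
#include "CppUTest/CommandLineTestRunner.h"

// A trivial production class, invented for this illustration.
class Counter {
public:
    Counter() : value_(0) {}
    void increment() { ++value_; }
    int value() const { return value_; }
private:
    int value_;
};

TEST_GROUP(CounterTests)
{
    // setup() and teardown() could be defined here; they run
    // before and after every TEST in this group.
};

TEST(CounterTests, StartsAtZero)
{
    Counter c;
    CHECK_EQUAL(0, c.value());
}

TEST(CounterTests, IncrementAddsOne)
{
    Counter c;
    c.increment();
    CHECK_EQUAL(1, c.value());
}

int main(int argc, char** argv)
{
    // Runs every registered test and reports the results.
    return CommandLineTestRunner::RunAllTests(argc, argv);
}
```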

Symbian extension. Usage example

Unfortunately, the CppUTest package does not currently include a full-blown Symbian-specific example, and it lacks support for such Symbian primitives as descriptors (Symbian strings) and leaves (Symbian exceptions).

The file attached to this post contains both: a simple, heavily commented example of using CppUTest on Symbian, including cpputestsymbianextension.h, which adds support for descriptors and leaves.

The testability metrics

January 23, 2007 by Artem

Most agile software development methods explicitly or implicitly recommend test-driven development and, more generally, the development of testable code. The reasons for the high testability demands are no secret. By the very definition of "agility", an agile project has to be ready for, and even embrace, requirement changes and the corresponding architecture changes. Every hardware engineer knows that it is easy to redesign a wired board only if all the components used are reliable and conform to their own specifications. The same holds true in the software world: it is easy to redesign a system only if its components are well tested.

There is little doubt that code testability is a good thing, but how can you measure it? Is it possible to look at two classes and tell which one is more testable than the other? How do you decide that a component is testable enough and doesn't need further refactoring?

Score-based metric
Jay Flowers proposed a score-based metric for testability.

Automated testing in agile software development

December 28, 2006 by Artem

The biggest difference between agile methods and traditional waterfall is the short feedback loop. The whole concept of agility is in essence no more than "build the most important piece, evaluate, adjust, and repeat". In agile methods, automated tests serve as a very important tool for shortening the feedback loop.

In most traditional processes, automated tests mimic the manual test procedures. The tests are often written by people other than the code developers, and they usually exercise the functionality of the whole system. Test results reach the original developers quite late, and the tests don't take the latest code changes into account. Automating functional and system tests is a good thing: it makes testing a little easier and more effective, and it saves the company some money. However, it is no more than automating a small piece of the manual labor.

Costs of the legacy code

November 7, 2006 by Artem

"To me legacy code is simply code without tests."
Michael C. Feathers, "Working Effectively with Legacy Code"

Legacy code is a well-known headache for software developers. Legacy code is code that is difficult to change and that developers don't really understand. It is often inherited from developers who have left the company, and it usually has little to no documentation and little to no test code. Legacy code can slow development down to the point of real competitive problems: when it takes an eternity to make a required change, a company can hardly withstand a competitor that is able to release life-critical software every month.

We have to be good at being wrong. Architects can help

April 28, 2006 by Artem

Anssi Piirainen raises the idea that you need to be good at being wrong in order to create good software. The core idea is sound: you have to accept that the first implementation might later turn out to look not so good. However, Anssi also presents a micro case study describing problems with the "just-in-time design" approach: the required changes turned out to be unexpectedly difficult for not-so-experienced programmers.

Just-in-time design is not a silver bullet. It is just another good tool, one that can be rather useless unless it is combined with the right supporting practices.

Mocks, stubs and fakes

April 25, 2006 by Artem

I am rather new to Test-Driven Development (TDD) and to continuous testing in general. Therefore I quite often run into terminology difficulties: what to call a fully functional alternative object, what to call an almost empty stub object, and so on. It looks like I am not alone.

Martin Fowler highlights Gerard Meszaros's proposal:

The generic term he uses is a Test Double (think stunt double). Test Double is a generic term for any case where you replace a production object for testing purposes. There are various kinds of double that Gerard lists: dummy objects, fake objects, stubs, spies, and mocks.
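
To make two of those terms concrete, here is a small C++ sketch; the UserStore interface and both implementations are invented for illustration. A stub just returns canned answers that are good enough for a test to run, while a fake is a genuinely working but simplified implementation.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical production interface, used only for illustration.
struct UserStore {
    virtual ~UserStore() {}
    virtual std::string nameFor(int id) const = 0;
};

// Stub: returns a canned answer, just enough for a test to run.
struct UserStoreStub : UserStore {
    std::string nameFor(int) const { return "stub-user"; }
};

// Fake: a working but simplified implementation (an in-memory
// map standing in for a real database).
struct UserStoreFake : UserStore {
    std::map<int, std::string> users;
    std::string nameFor(int id) const {
        std::map<int, std::string>::const_iterator it = users.find(id);
        return it == users.end() ? "" : it->second;
    }
};

int main() {
    UserStoreStub stub;
    assert(stub.nameFor(42) == "stub-user");

    UserStoreFake fake;
    fake.users[42] = "artem";
    assert(fake.nameFor(42) == "artem");
    return 0;
}
```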

Why premature optimization is the root of all evil. In the management language

April 8, 2006 by Artem Marchenko

Program optimization is performed in order to reduce execution time, memory usage, bandwidth, or consumption of some other resource. Optimization essentially means sacrificing a clear, simple, and understandable code architecture. As a result, the level of complexity increases, and debugging and restructuring become more time-consuming and more expensive.

It is simply a lot cheaper to get clear, working, and tested code before the optimization starts. As a side effect, a set of unit tests (if you practice them) will protect you from the potential regressions introduced by the optimization work.
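
As a small illustration of that side effect (all names here are invented), the same unit tests can be run against both the clear reference implementation and its optimized replacement, so the optimization cannot silently change behaviour:

```cpp
#include <cassert>

// Clear, straightforward reference implementation: sum of 1..n.
long sumUpTo_clear(long n) {
    long total = 0;
    for (long i = 1; i <= n; ++i) total += i;
    return total;
}

// Optimized replacement using the closed-form formula.
long sumUpTo_fast(long n) {
    return n * (n + 1) / 2;
}

// The same checks run against both versions; a regression in the
// optimized code shows up immediately.
void testSum(long (*sum)(long)) {
    assert(sum(0) == 0);
    assert(sum(1) == 1);
    assert(sum(100) == 5050);
}

int main() {
    testSum(sumUpTo_clear);
    testSum(sumUpTo_fast);
    return 0;
}
```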

Selling Test-Driven Development to your boss

March 29, 2006 by Artem

Managers don't speak C++ or refactoring, and some of them don't trust programmers' "feelings". To sell TDD to them, we have to speak the management language of costs, deadlines, and delivery delays. Apply some management language and you'll be able to start doing TDD.

Killer argument tips:
1) Extra testing saves at least some time in the final test round
2) Extra testing improves the code quality at least a bit
3) If extra testing is applied to the most important components and interactions, it does not take a scary amount of time
Therefore, even in the worst case, the time spent on the extra testing will be partially compensated by better quality and a smaller final testing effort. And in the best case, both quality and delivery time will improve significantly.

Testability as the design metric

March 26, 2006 by Artem

"I don't care how good you think your design is. If I can't walk in and write a test for an arbitrary method of yours in five minutes, it's not as good as you think it is, and whether you know it or not, you're paying a price for it." (Michael Feathers)

If you practice unit testing, test-driven development, or whatever other method includes a lot of testing, you have to spend a lot of time on it. And most of the time you have to do the testing-related work before the problem happens and before anyone is affected. It is certainly good to catch a bug early, before it hurts anybody, but the need to do the "extra" work in advance introduces the temptation to "just tweak this small feature and test it if a problem arises later".
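
As a sketch of what "write a test in five minutes" can look like in practice (all names below are invented), when the logic under test depends on an injected abstraction instead of constructing its collaborators itself, a test double can be substituted in seconds:

```cpp
#include <cassert>

// The collaborator is passed in behind an interface,
// not created inside the method under test.
struct Clock {
    virtual ~Clock() {}
    virtual int hour() const = 0;
};

// The logic under test depends only on the abstract Clock,
// so any implementation can be substituted.
bool isHappyHour(const Clock& clock) {
    return clock.hour() == 17;
}

// A five-minute test double.
struct FixedClock : Clock {
    explicit FixedClock(int h) : h_(h) {}
    int hour() const { return h_; }
    int h_;
};

int main() {
    assert(isHappyHour(FixedClock(17)));
    assert(!isHappyHour(FixedClock(9)));
    return 0;
}
```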
