Sunday, April 29, 2007

One Team - Developers and QA

All too often, it seems natural forces encourage an antagonistic, "us vs. them" relationship between traditional development and QA teams. This is quite unfortunate, because the roles and responsibilities of these groups can and should form a much more symbiotic relationship. The good news is that the Agile movement is beginning to break down barriers and encourage more cooperation and teamwork.

Even in environments that cling to more traditional software methods, however, there is significant opportunity to produce better software more quickly and efficiently by encouraging tighter collaboration between these disciplines. Let's consider two such techniques, in hopes of sparking additional creative energy.

The two techniques I'd like to call out are in the area of automated unit testing. Unit testing is, of course, a topic worthy of its own treatment, so we'll save the nitty-gritty for another day and just hit the relevant highlights now. With this focus on unit testing, I'll make the case that Developers need their QA partners and QA needs Developers. Specifically, Developers won't produce the most effective unit tests without participation from QA, and QA won't provide the most valuable assessment of product quality without participating in the creation of the unit testing capability and fully leveraging it in the overall assessment strategy.

First, let's look at the necessity of unit testing in the Development group. All too often, I'm asked questions such as, "Isn't it QA's job to do the testing?" or "How can Development afford the time to write all those tests?" Once you take this apart a little, you find that Development MUST create "all those tests" - it's critical to their success. As I mentioned earlier, we'll talk more about the breadth and depth of unit testing another time. For right now, I hope one straightforward concept will drive this point home. I always respond by politely asking the questioner to explain how, absent automated unit tests, the Developer knows when a task is complete. I won't belabor this point, but consider how to respond to the question, "How does the developer know he's done coding?" Invariably, the answer reflects the necessity that the developer do some degree of testing to ensure his work behaves as expected. Unfortunately, this testing is all too often done in a completely ad hoc fashion, requiring spontaneous development of test cases and supporting test data. In this effort, corner cases are missed, forgotten business requirements are overlooked, important code paths are bypassed, and so on. And let's not forget that this work is often repeated from scratch the next time someone modifies the same code, wasting effort and repeating the same mistakes. I'll assert that the developer only knows he's done when all his tests pass... when the light is green.
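To make the "green light" concrete, here's a minimal sketch using Python's unittest framework. The function, its business rules, and the test names are all hypothetical, invented purely for illustration; the point is that the task is "done" only when every one of these tests passes:

```python
import unittest

# Hypothetical example: a discount calculation with a few business rules.
def apply_discount(price, percent):
    """Apply a percentage discount; reject out-of-range inputs."""
    if price < 0:
        raise ValueError("price must be non-negative")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class ApplyDiscountTest(unittest.TestCase):
    # Typical case plus the corner cases that ad hoc testing tends to miss.
    def test_typical_case(self):
        self.assertEqual(apply_discount(200, 25), 150)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(80, 0), 80)

    def test_full_discount_yields_zero(self):
        self.assertEqual(apply_discount(80, 100), 0)

    def test_negative_price_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(-1, 10)

    def test_discount_over_100_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(80, 101)

# Run with: python -m unittest <module-name>
```

Because the corner cases and error paths are captured as code, they are run the same way every time - nothing is left to spontaneous, one-off test invention.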

So, if these automated unit tests are a requirement, and the developer doesn't know he's done until they all pass, how does he ensure the tests are produced reliably and efficiently? They must be produced as part of product delivery. They must be written alongside the product code - developed just as the product itself is developed. The developer must write the tests and keep them up to date every time the code changes. When a new bug is found, the tests must be updated so the developer can always rely on the "green light" representing success... representing a completed task. But here's where we get into a little trouble. How good are Developers at recognizing potential problems or identifying failure scenarios? Granted, there are plenty of exceptions, but generally speaking, QA minds are much more adept at this sort of thing than Developer minds. By engaging QA in the definition of the automated unit tests, Developers produce much more thorough and effective unit tests and come closer to the goal of the unit testing capability - completely surrounding every component with automated unit tests that immediately, consistently, and efficiently validate component behavior... and let the Developer know when he's done!
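As a sketch of keeping the tests up to date when a new bug is found, here's a hypothetical regression test (the bug, the function, and the ticket number are invented for illustration). A QA-reported failure becomes a permanent test, so the "green light" keeps its meaning the next time someone touches this code:

```python
import unittest

# Hypothetical bug report: splitting a bill among several people
# "loses pennies." The fix distributes the remainder cent by cent,
# and the bug report itself becomes a permanent regression test.
def split_evenly(total_cents, people):
    """Split an amount (in cents) so every cent is accounted for."""
    if people <= 0:
        raise ValueError("people must be positive")
    base = total_cents // people
    remainder = total_cents % people
    # The first `remainder` shares each carry one extra cent.
    return [base + 1] * remainder + [base] * (people - remainder)

class SplitEvenlyRegressionTest(unittest.TestCase):
    def test_reported_bug_no_lost_pennies(self):
        # Reproduces the reported failure: 100 cents split 3 ways.
        shares = split_evenly(100, 3)
        self.assertEqual(sum(shares), 100)  # every cent accounted for
        self.assertEqual(shares, [34, 33, 33])

    def test_even_split_still_works(self):
        # Guards against the fix breaking the case that already worked.
        self.assertEqual(split_evenly(90, 3), [30, 30, 30])

# Run with: python -m unittest <module-name>
```

Notice that the second test protects the previously working behavior - exactly the kind of failure scenario a QA mind is quick to call out when tests are defined together.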

But the Developer is not the only one who gets something out of this. When I have the pleasure of training various audiences, including those from the QA profession, I frequently ask the question, "Do you in QA test every possible scenario or code path before releasing a software product?" These smart folks always immediately answer with something like, "Of course not!" As we know, an effective QA strategy involves careful risk assessment and prioritization of testing efforts, focusing energy on the product areas most likely to introduce defects and those where defects would have the most severe impact. So here's where we get to the "symbiotic" part of the relationship. QA involvement in test development improves unit test effectiveness, leading to better code output (QA helps the Developer); in turn, awareness of the coverage represented by the automated unit tests leads directly to better risk assessment and allows the QA team to focus their energy on higher-value activities associated with overall product quality assessment (Developers help QA). By working on bigger-picture, value-add assessment activities, the QA team is able to ensure higher overall product quality.

In this straightforward scenario, we see two disciplines cooperating with one another - working as ONE TEAM. Each discipline contributes its unique strengths, resulting in higher quality software delivered more reliably and more efficiently.
