Monday, May 14, 2007

Good Architecture Encourages Good Estimates


When it comes to project estimation, much has been written about the importance of historical context and correlations, and it only stands to reason that relevant historical data leads to more accurate estimates. As Steve McConnell points out in Software Estimation: Demystifying the Black Art1, historical data helps work around common sources of inaccuracy in estimation because it accounts for organizational influences, avoids subjectivity and unfounded optimism, and reduces estimation politics.

But do we get the biggest bang for the buck out of our historical data? While common approaches to estimation successfully leverage this data, there's still more we can do. Historical data is even more valuable when used in the context of effective architectural practices. Where it is possible to apply consistent architectural patterns and frameworks, historical data can be captured and put to use at a lower level of granularity, resulting in much more reliable estimates earlier in the project.

Let's look at an example. In my experience, the most common practices have us maintain and leverage experience at the project level. In this approach, a database of project performance attributes is built up over time, supporting the establishment of prediction models for similar projects. For example, if an organization has completed 10 "green field", web-based, Java/J2EE projects for the Health Care industry using Oracle and MQ Series in a B2B services delivery model, and these projects have taken on average 47,600 man-hours to complete (0.6 std dev), then it's reasonable to predict (for planning purposes) that a future project with similar parameters will also require 47,600 hours of effort. This is a common approach and certainly provides value, especially at the early stages of a project.
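To make that concrete, here's a minimal sketch in Java of the project-level approach. The class and field names are hypothetical (this isn't any particular tool's API); the idea is simply to filter the history for projects with similar attributes and use their average actual effort as the planning baseline.

import java.util.List;

class ProjectRecord {
    String industry;      // e.g. "Health Care"
    String technology;    // e.g. "Java/J2EE, web-based"
    boolean greenField;
    double actualHours;   // effort the project actually took

    ProjectRecord(String industry, String technology, boolean greenField, double actualHours) {
        this.industry = industry;
        this.technology = technology;
        this.greenField = greenField;
        this.actualHours = actualHours;
    }
}

class ProjectLevelEstimator {
    // The average effort of comparable past projects becomes the planning baseline.
    static double baselineEstimate(List<ProjectRecord> history,
                                   String industry, String technology, boolean greenField) {
        double total = 0;
        int matches = 0;
        for (ProjectRecord p : history) {
            if (p.industry.equals(industry)
                    && p.technology.equals(technology)
                    && p.greenField == greenField) {
                total += p.actualHours;
                matches++;
            }
        }
        return matches == 0 ? 0 : total / matches;
    }
}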

As a project progresses and the architecture is better understood and becomes more stable, the same principles can be applied at a lower level of granularity. For the sake of example (albeit perhaps a contrived one), let's imagine we get toward the end of Elaboration (assuming RUP for the moment) and are satisfied that our architecture essentially consists of a collection of typical patterns like this:

A. Web Page (JSF)
B. Service Facade (stateless session beans)
C. Business Object (pojo)
D. Data Transfer (Value Objects)
E. Persistence (Data Access Objects)

Now let's imagine a straightforward business application with, say, 20 Use Cases such as "Create Account", "View Order History", "Make Payment", etc. If the team applies the architectural patterns consistently, it's entirely reasonable to view the project as one in which the engineers will produce one (or n) of each of the "components" above for each use case. In other words, for each use case, a new Web Page will be created to talk to a new Service Facade, using a new Value Object for data transfer. The Service Facade uses a new Business Object that handles the business logic, and that Business Object in turn depends on a new Data Access Object (DAO) for data persistence.
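To illustrate the shape of that work, here's a bare-bones sketch of the component set for the "Make Payment" use case. The names are made up for the example, and the facade and page are plain classes rather than a real stateless session bean and JSF backing bean, just to keep the sketch self-contained.

class PaymentVO {                  // D. Data Transfer (Value Object)
    String accountId;
    double amount;
}

class PaymentDAO {                 // E. Persistence (Data Access Object)
    void save(PaymentVO payment) {
        // JDBC or ORM call would go here
    }
}

class PaymentBO {                  // C. Business Object (POJO)
    private final PaymentDAO dao = new PaymentDAO();

    void makePayment(PaymentVO payment) {
        // business rules (validation, limits, etc.) would go here, then persist
        dao.save(payment);
    }
}

class MakePaymentFacade {          // B. Service Facade (a stateless session bean in the real architecture)
    private final PaymentBO bo = new PaymentBO();

    void makePayment(PaymentVO payment) {
        bo.makePayment(payment);
    }
}

class MakePaymentPage {            // A. Web Page (a JSF backing bean in the real architecture)
    private final MakePaymentFacade facade = new MakePaymentFacade();

    String submit(String accountId, double amount) {
        PaymentVO vo = new PaymentVO();
        vo.accountId = accountId;
        vo.amount = amount;
        facade.makePayment(vo);
        return "confirmation";     // JSF navigation outcome
    }
}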

If we have good historical data for what it typically takes to build a Service Facade or a Data Access Object (in this architecture), then the estimate nearly becomes a mathematical exercise... 20 Web Pages at 40 hours each plus 20 Service Facades at 15 hours each plus 20 Business Objects at 52 hours each plus... and so on.
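In code, that arithmetic is about as simple as it gets. The Web Page, Service Facade, and Business Object figures below are the illustrative numbers from the text; the Value Object and DAO figures are placeholders just to complete the example.

class ComponentLevelEstimate {
    public static void main(String[] args) {
        int useCases = 20;

        // Historical averages, in hours per component (the last two are assumed for this sketch).
        double webPage = 40, serviceFacade = 15, businessObject = 52,
               valueObject = 4, dao = 12;

        double total = useCases * (webPage + serviceFacade + businessObject + valueObject + dao);
        System.out.println("Construction estimate: " + total + " hours");
    }
}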

While this is no estimation panacea, and it's only one part of the overall estimation process (for example, let's talk about the value of architecture for "impact analysis" another day), I've been able to apply this approach for a long time with very positive results. Of course, the key is designing and applying a consistent architecture and capturing the historical data at the relevant level of detail.


1. McConnell, Steve. Software Estimation: Demystifying the Black Art. Redmond, WA: Microsoft Press, 2006.

