Monday, November 22, 2010

Service Delivery Methods

(Staff Augmentation vs. Managed Services)

In new outsourcing engagements, when determining the delivery method, many companies pursue the path of least resistance because of time constraints or lack of oversight, or because they are simply testing the water and don’t want to invest the effort. Little do these companies know that the easiest route in the beginning can become the biggest drain on their critical resources down the line. I use the analogy of neighborhood yard work to highlight the differences between the staff augmentation and fixed bid delivery methods, the two most prominent delivery methods in IT outsourcing.

The charts embedded below are intended to depict the evolution of delivery models, beginning with Staff Augmentation, moving to piecemeal Out-tasking, and finally arriving at full-blown Managed Services. Each chart indicates the roles and responsibilities between the contracting company and the vendor, as well as the probable service levels that can be established.

Option #1

One option you have is to hire my 14-year-old son to do your yard work, and it’s a fairly easy process. You call him on the phone or catch him when he’s outside and ask him if he’s interested in making some extra money. The offer is usually met with a wide grin, a nod of the head, and the response, “Sure, when can I start?” This process requires little effort on your part to get started, and the work can begin almost immediately. What follows is that an inexperienced, wet-behind-the-ears young boy shows up at your house with no mower and no gas, but with a pair of gloves and a desire to get going. This also happens to be the point at which your work begins. You have to ensure that your mower still works and that you have gas. Is the blade set at the right level? If not, then you’ll also have to adjust the wheels for the right cut. You then have to follow my son around, showing him the layout of your yard and explaining where to mow and where not to mow, among other details. Also, you might have to get out the rake and find some bags, oh, and you might as well start picking up the garden hose and the chairs since you’re out there anyway. You’ll probably get some extra monitoring by me checking in to see how things are progressing and to ensure my son is doing a good job. At the end of the day you’ll have your grass mowed per your direction and the clippings bagged and placed at the curb. You will likely have paid a fair amount for the work, but certainly below market rates. The downside is that all the plans you had for the day will have been pushed out to later in the week, because you had to be engaged in the effort and part of the delivery. Also, if you want your trees or hedges trimmed, that probably means contacting another boy in the neighborhood, since my son really isn’t old enough to be working on a ladder with sharp instruments.

This scenario is the “staff augmentation” delivery method.

Pros:

  • Easy to get started
  • Pay for only the work performed
  • Flexibility of adjusting to your schedule
Cons:

  • You are part of the delivery
  • Turnover takes its toll on productivity
  • Plans are focused on short-term deliverables



Option #2

The other option you have for yard work is to engage a landscaping service to provide regular yard maintenance. This means you’ll need to research local companies that offer yard service in your area. You’ll probably want to make a list and check out any references. You may want to drive through your neighborhood, and if you see a crew working at a house, stop by later and ask the homeowner about the service. Once you narrow your list down to two or three businesses, you may want them to come out to your house for a quote. After you have made your final selection, you’ll need to meet one more time with the service provider to agree on a start date, price, and frequency of visits. You may also agree on non-recurring activity, such as tree trimming, and on what type of service you’ll need during the winter. From that point on you’ll get your lawn mowed on a recurring date, and once a month you’ll get an invoice in the mail for an entire month’s worth of service. The service is provided whether you are at home or not, so your time is freed up for other, more important activities. The downside of this option is that you’ll pay a little more and you lose some flexibility. For instance, your grass will get cut once a week whether it needs it or not.

This scenario is the “managed services” (sometimes called “fixed bid”) delivery method.

Pros:
  • No price fluctuation
  • Guaranteed delivery meeting minimum quality standard
  • Equipment and variable costs covered by service provider
Cons:

  • Difficult to start and stop service once initiated
  • Customized service becomes a challenge



Sunday, November 21, 2010

Portfolio Assessment for Outsourcing Contracts

People say that change happens in one of two ways: evolutionary or revolutionary. When considering outsourcing information technology (IT), change rarely happens by evolutionary means, since it is not a natural occurrence to intentionally decouple IT systems and move components to another entity at a different location. In these days of globalization, outsourcing usually means turning over aspects of your application portfolio to a company based outside of your own country and operating in diverse geographic locations spread across multiple time zones.

IT change is more often revolutionary, and the decision to outsource is made by a senior executive committee or even by a non-IT group, such as the Finance department. So the question presented to the IT group is not “Do we outsource?” but rather “What do we outsource?”

It’s widely known that certain aspects of the software development life cycle (SDLC) are better suited for outsourcing than others. As an example, interacting with the end users and eliciting requirements is best managed by analysts who are part of the company owning the customer relationship. On the other hand, coding and testing are easily turned over to a third party who may have a larger pool of resources from which to draw and may even have better processes supporting application development. Handing over a legacy portfolio to a qualified third party to ‘moth ball’ or simply ‘keep the lights on’ is fundamental to most large outsourcing vendors, and doing so allows the outsourcer to focus on new products or a technology refresh.




Other factors weigh in on the decision regarding what to outsource, such as product life cycle, budget, domain knowledge, and overall readiness to support an outsourcing engagement. With this in mind, the process of determining what to outsource should not be taken lightly. There is no single parameter or component that is a complete indicator of which portfolios are right for outsourcing; it is generally an entire set of factors that, combined, indicates when one portfolio may be more appropriate than another for outsourcing.


When evaluating a portfolio for outsourcing potential, I suggest a balanced approach to assessing the portfolio, so that no one factor overshadows the others or adversely influences decisions. My experience has shown that there are three categories of parameters that must be considered:
  • Business contribution of the portfolio
  • Skills required for development and support
  • Platform characteristics on which the portfolio operates
There is a simple but dynamic process for evaluating portfolios and assessing their potential for successful outsourcing. Based on years of experience managing millions of dollars in outsourcing engagements and overseeing hundreds of projects and statements of work (SOW), I have found a set of criteria that are comprehensive, yet equally aligned with the three categories. The result is a simple questionnaire that scores responses and can be used to rank portfolios by their adaptability to outsourcing, as determined by proven industry best practices.

Business factors
1. Business Continuity: what is the risk to ongoing business of a failure in this portfolio?
2. Competitive Advantage: does this portfolio provide an advantage over similar products from competitors?
3. Investment Budget: is the annual portfolio budget for development and support greater than the minimum threshold needed to justify the ramp-up costs?
4. Relationship Complexity: how complex are the contributor arrangements, such as the number of vendors or stakeholders?
Skills factors
5. Domain Dependencies: what, if any, domain knowledge dependencies exist for successful sourcing by a third party?
6. Skills Availability: how available in the open market are the skills required to support all development and operational needs?
7. Readiness: are SMEs trained and are tools in place to adequately support the sourcing ramp-up?
8. Knowledge Transfer: how long does it take an average technician to become productive on the applications?
Platform characteristics
9. Product Life Cycle: where is the portfolio (or product) in its life cycle?
10. Platform Stability: how complex are the systems and tools on which the applications run?
11. Maintenance: what percent of the portfolio budget is allocated to maintenance and production support?
12. Instances: are there multiple instances or derivatives of the applications?
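To make this concrete, here is a minimal sketch of how such a questionnaire might be scored and used to rank portfolios. The criterion names come from the list above; the 1-5 response scale, the per-category averaging, and the example portfolios are my own illustrative assumptions, not a prescribed scoring scheme.

```python
# Hypothetical scoring sketch for the twelve criteria above. Each criterion
# is answered on a 1-5 scale, where 5 means "most favorable for outsourcing".
# Scale, weighting, and example portfolios are illustrative assumptions.

CRITERIA = [
    ("Business", "Business Continuity"),
    ("Business", "Competitive Advantage"),
    ("Business", "Investment Budget"),
    ("Business", "Relationship Complexity"),
    ("Skills", "Domain Dependencies"),
    ("Skills", "Skills Availability"),
    ("Skills", "Readiness"),
    ("Skills", "Knowledge Transfer"),
    ("Platform", "Product Life Cycle"),
    ("Platform", "Platform Stability"),
    ("Platform", "Maintenance"),
    ("Platform", "Instances"),
]

def score_portfolio(answers):
    """Average the 1-5 responses within each category, then average the
    category scores so no single category dominates the result."""
    by_category = {}
    for category, criterion in CRITERIA:
        by_category.setdefault(category, []).append(answers[criterion])
    category_scores = [sum(v) / len(v) for v in by_category.values()]
    return sum(category_scores) / len(category_scores)

# Rank candidate portfolios, most adaptable to outsourcing first.
portfolios = {
    "Legacy billing": {name: 4 for _, name in CRITERIA},
    "New mobile app": {name: 2 for _, name in CRITERIA},
}
for name in sorted(portfolios, key=lambda p: score_portfolio(portfolios[p]),
                   reverse=True):
    print(f"{name}: {score_portfolio(portfolios[name]):.2f}")
```

Averaging within each category before averaging across categories keeps one heavily weighted area from dominating the ranking, which mirrors the balanced approach described earlier.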
As I have indicated, not every portfolio is easily adapted to outsourcing. Once vendors have been vetted and contracts signed, changing vendors or bringing the development back in-house can be an even larger initiative than the original outsourcing due to ramp-down and knowledge transfer. A focused assessment of portfolios prior to outsourcing can pay huge dividends in the end.

Saturday, November 20, 2010

Best Practices in Estimating Software Development Projects


"All estimates are guesses; you can only reduce uncertainty, not elminate it."

Just as with introductions, where you never get a second chance to make a first impression, you can never revise a first estimate.  This dilemma strains many commercial endeavors that involve selling complex technology solutions.  Underestimate the costs and the business opportunity falls apart, leaving a very dissatisfied customer.  Overestimate, or pad, the costs and it is likely the competitors will seize the opportunity.
So, what’s the right answer to the question, “How accurate must an estimate be?”
If you believe estimates must be accurate to within a very small percentage of the overall costs, then you’ll likely have impatient customers waiting around idle while you go about mustering all the king’s horses and all the king’s men to evaluate detailed scenarios long after the project starts.  It’s known that idle customers are generally unhappy customers, and what are they most likely to do while they are idle but shop around for alternatives?  Not to mention what happens to all the active projects while the assembled experts work on the estimate.  On the other hand, if you believe estimates can be done by a couple of senior people on the back of an envelope, then you may be heading for situations where costs are woefully underestimated and the opportunity eventually dissolves, or the customer ends up paying more for the solution, or, worse yet, your company has to absorb the overruns.
As it turns out, the answer is somewhere in between, at the point where the right balance is achieved between effort and accuracy.  In simple terms, estimates have to be “just good enough”, but the problem with just good enough is that there are no guidelines telling estimators when they have achieved the right balance.  Also, if you are in the business of developing software applications, then you need an estimation process that is repeatable and measurable.

Impact of Getting IT Wrong

Estimating is a fine balance, and realistic estimates ultimately help companies make better business decisions.  But what is the impact of inadequate estimating processes?  Essentially, it comes down to opportunity costs, in that there is a cost to a company for directing its valuable resource pool to efforts that are ill conceived or never implemented.  Improper resource allocation can also lead to misrepresentation of capacity and stifle new initiatives, because resources are otherwise unavailable to begin new work.  It’s also known that poor estimates contribute to prolonged and often revised delivery schedules.  This generally means the project stakeholders are involved in time-consuming activities to re-justify and re-prioritize development in order to keep projects alive.  Lack of rigor and standardization not only leads to bad estimates, but often to outright missed costs.  For instance, key non-programming components, such as hardware, network, and third-party expenses, can be overlooked and not factored into delivery costs.  Finally, if one or more of these situations exist, it likely means the customer is negatively impacted, contributing to declining satisfaction before a project is ever implemented.  The profile of an enterprise that does not take estimation seriously might look like the following:
  • Inefficient resource utilization
  • Capacity planning stifles new initiatives
  • Overhead associated with re-justification of efforts
  • Failure to account for non-programming efforts
  • Tribal knowledge takes precedence
  • Declining customer satisfaction
It is easy to understand that many times projects don’t fail because of poor execution, but because of unrealistic estimates that doom them from the beginning.
In the Forrester(1) 2008 research paper on estimation, author Carey Schwaber provides excellent insight into the impact of improper estimating practices and how companies can leverage a discrete set of principles to adopt industry best practices.  I have drawn on the Forrester research in describing below how companies can improve their estimates.

Estimating Best Practices

By adopting estimating as a discipline and implementing best practices, companies can ultimately provide realistic estimates and a more predictable estimating process.  Forrester’s Carey Schwaber suggests the following four best practices for improving estimates.

1. Groom estimation experts

Good estimation is a learning process.  It’s logical that the more someone researches a particular portfolio, the more of a subject matter expert they become and the more accurate their estimates.  Using the same estimators enables experts to gain the experience necessary for quality estimation, but it’s also unrealistic to rely exclusively on a core set of estimators over time, and junior staff should be used to supplement them.  When junior staff are needed for estimating, either as apprentices or when resources are stretched, a senior person should always verify and challenge the estimates.  This aids in the learning process.
More mature software development organizations are beginning to adopt estimation centers of excellence (CoE) in order to establish a core set of estimators.  These estimators are positioned to work in small groups and oversee all aspects of estimation, including establishing benchmarks, mentoring junior staff, and recommending tools.
Organizations should avoid common pitfalls when building estimating expertise, including forgetting that experts already have day jobs and expecting estimators to be experts on everything.

2. Base estimates on past results

A development organization’s past performance is highly indicative of its future results.  Thus, basing estimates on actual data leads to the most realistic estimates.  Also, when discussing aggressive timelines with project sponsors and stakeholders, using actual data adds the needed credibility to the dialog to make it productive.
Most organizations that store historical project data do so in scattered collections of spreadsheets, emails, and other disparate files.  In order for the historical data to be meaningful to estimators, it first must be accessible.  Relevant historical data should be stored in a single repository.  Storing data comes with the cost of having to maintain it, and as time goes on, the amount of data stored increases and drives up the cost.  For this reason companies should be selective about the historical data they store.  The most important data to collect is the duration of the major tasks from the major modules.
It is important that the actual resulting data from the project be stored, and not just a history of prior estimates.  If you’re only evaluating old estimates to make new estimates, then you’re not learning and you’re most likely repeating any flaws that were part of the earlier estimates.  Make sure the process includes post-implementation reviews in order to analyze the estimates against actual project performance and update the historical information.
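As a minimal sketch of the kind of single repository described above (the record fields and the median-based lookup are my own assumptions for illustration, not a prescribed schema):

```python
# Hypothetical repository of project actuals. The point is to store actual
# durations captured at post-implementation review, not just old estimates,
# and to answer "what did similar tasks actually take?"

from dataclasses import dataclass
from statistics import median

@dataclass
class TaskActual:
    project: str
    module: str            # e.g. "U.I. screens", "Reports"
    estimated_hours: float
    actual_hours: float    # captured at post-implementation review

repository = [
    TaskActual("CRM rewrite", "U.I. screens", 300, 365),
    TaskActual("Portal v2", "U.I. screens", 320, 340),
    TaskActual("CRM rewrite", "Reports", 100, 130),
]

def typical_actual(module):
    """Median actual hours for a module type across past projects."""
    return median(t.actual_hours for t in repository if t.module == module)

print(typical_actual("U.I. screens"))  # 352.5
```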

3. Crosscheck estimates using multiple techniques

No single estimating method will work for all scenarios and projects; therefore, many conservative organizations apply more than one method and reconcile the differences.  Significant discrepancies that are uncovered may suggest a need to revisit assumptions.  Some best practices employed for multiple estimation techniques are:
  • By employing structured group estimation, such as the CoE described earlier, organizations are able to keep biases from becoming routine in estimates.  A method used by some organizations is Wideband Delphi, in which a panel of estimators converges on a consensus through repeated rounds of estimating.  It is a cyclical process whereby individuals estimate working alone, then the group convenes and a moderator resolves large variations.
  • Some professional services firms compute a project’s contingency buffer using an adaptation of the “critical chain” method, whereby experts account for risk by producing two estimates for every task: a stretch estimate and a highly probable estimate.
  • Effective estimators decompose large development modules into granular, assignable tasks, known as a work breakdown structure (WBS), whereby the sub-tasks are much easier to estimate and ultimately roll up to an aggregate estimate.  This “bottom up” approach has its pitfalls in that it is often hard to quantify soft activities, such as cross-functional and external communication.
  • Some shops derive estimates by analyzing the functional complexity of projects rather than purely considering the effort expended over time.  These estimators gauge complexity through a detailed function point analysis (FPA) that is compared to a historical record to determine duration.  Don’t underestimate the difficulty of deploying FPA for estimates, as it requires enormous rigor, a detailed historical record, and extensive training.
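As an illustration of crosschecking, here is a minimal sketch that reconciles a bottom-up WBS roll-up against a top-down figure and flags a discrepancy worth revisiting. The task estimates and the 20% tolerance are my own illustrative assumptions, not a standard.

```python
# Hypothetical crosscheck: compare a bottom-up WBS roll-up against a
# top-down estimate and flag a gap large enough to suggest flawed
# assumptions. All figures and the tolerance are illustrative.

wbs_tasks = {                 # granular, assignable tasks (hours)
    "design screens": 900,
    "build screens": 1550,
    "database updates": 1900,
    "external links": 1000,
    "reports": 400,
}

bottom_up = sum(wbs_tasks.values())   # 5,750 hours
top_down = 6894                       # e.g. from a RoT-style model

gap = abs(bottom_up - top_down) / top_down
if gap > 0.20:
    print(f"Revisit assumptions: methods differ by {gap:.0%}")
else:
    print(f"Methods agree within tolerance ({gap:.0%} apart)")
```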
Experienced estimators are able to understand the nuances among the different methods and resolve the estimate discrepancies; more importantly, they also know never to discount the differences, as they could be an indication of flawed assumptions.

4. Revise estimates as knowledge improves

The earliest project estimates are by nature the least accurate.  In fact, models have been built that reflect the expected confidence level of estimates from project inception through detailed design, and they indicate estimates done at the requirements phase can be off by as much as 50%.  In other words, a twelve-month delivery schedule built from the requirements could easily be off by six months.  To avoid this risk, companies build multiple estimates into their delivery process, and companies that have made this work typically adopt a multistage approval process.  Rather than give a single green light for the entire project, sponsors assess progress through stages that can drive mid-course corrections.
These best practices don’t recognize a “fudge factor” in estimating, such as simply doubling the initial estimate due to a bad track record.  These types of estimates are baseless and lack credibility, and as previously suggested, they don’t allow for productive dialog with the project stakeholders.
Estimates should always be delivered in ranges rather than absolutes.  This is primarily because estimates are based on the known and not the unknown.  Estimates should consider key risks that are identified and prioritized in the estimate and take into account the contingency plans required to mitigate the most critical risks.  It’s the mitigation plans that may or may not be invoked that suggest the need for estimate ranges.

Estimating New Platforms and Applications

The best practices previously described assume, to a large degree, that organizations will build expertise in certain applications over time by performing recurring estimates of similar work and becoming increasingly familiar with the platform.  However, there will be instances in which a company is required to provide estimates on platforms where it has very little or no experience.  In these cases the estimating company must come up to speed on the application, as much as is feasible in a short time, and analyze functional specifications, system performance, defect volume, and support history.

In estimating new software development on unknown platforms, using a bottom-up approach such as the work breakdown structure method may not be an option due to lack of detailed knowledge.  In fact, limits on time, resources, and knowledge may restrict these types of estimates to little more than an educated guess.  These limits can be overcome by using a “top down” model based on generally agreed “Rule of Thumb” (RoT) capabilities of an average practitioner, applied to the high-level functional components identified in the preliminary analysis.  For instance, the core estimating team might agree that the RoT for designing and developing a user interface screen on most platforms is approximately 350 hours of work; if the functional requirements call for seven unique screens, then the effort is approximately 2,450 hours.  Applying this method to the other functional components found in most software applications, items such as database modifications, external links, and reports, and multiplying the RoT by the quantity of each component, results in a high-level approximation of the baseline effort.  To account for project management, integrated testing, and other project-related overhead, a generally accepted overhead percentage should be applied to the baseline effort in order to derive the overall effort.
With a high-level estimate based on RoT in hand, and to account for the high level of risk, three scenarios are constructed based on optimal, typical, and worst-case conditions.  By assigning both a relative weighting (“Typical” = 1x factor) and a probability to the three scenarios, the estimator can determine a suggested estimate range from low to high, with a target estimate at the highest probability.
The key benefit of using a model is that it not only brings consistency, but allows for ongoing refinement.  By closely evaluating key components of the model, such as RoT values, overhead percentage, weighting factors, and probabilities, or by adding other functional components, the resulting estimate can be further refined as the knowledge level increases.  Another key aspect of using a model is that it is reusable on future estimates and the results are re-creatable.  This means estimates are based on business logic that can be defended, not superficial speculation, which is critical in communicating estimates to stakeholders and sponsors.
Example:  New Platform Estimation Model

Functional Components      RoT    Quantity    Effort (hrs)
  ·  U.I. screens          350        7           2,450
  ·  Database updates       80       24           1,920
  ·  External links        200        5           1,000
  ·  Reports               125        3             375
Baseline                                          5,745
Overhead (20%)                                 +  1,149
Overall effort (a)                                6,894

Figure 1.0 – Determining overall effort using RoT

                       Optimal     Typical      Worst
(b)  Weighting             ½x          1x         4x
(c)  Range (a * b)       3,447       6,894     27,576
(d)  Probability           15%         65%        20%
     Size (c * d)      517 (e)   4,481 (f)  5,515 (g)

Estimate (e + f + g):  10,513

Figure 2.0 – Estimate range based on weighting and probability

Low: 3,447                Target: 10,513                High: 27,576

In this example the overall effort is approximated at 6,894 hours using the RoT method.  The estimate is further refined by considering three scenarios, where Optimal is ½ of Typical and Worst case is four times Typical.  Applying the weighting and probability to each scenario and aggregating the outcomes results in a range of 3,447 hours to 27,576 hours, with the highest-probability estimate at 10,513 hours.
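A minimal sketch of this RoT model follows; it reproduces the figures above. The RoT values, quantities, overhead percentage, weightings, and probabilities come from the example; the structure and variable names are my own.

```python
# Sketch of the top-down RoT model from Figures 1.0 and 2.0.

rot = {"U.I. screens": 350, "Database updates": 80,
       "External links": 200, "Reports": 125}
quantity = {"U.I. screens": 7, "Database updates": 24,
            "External links": 5, "Reports": 3}
OVERHEAD = 0.20   # project management, integrated testing, etc.

baseline = sum(rot[c] * quantity[c] for c in rot)   # 5,745 hours
overall = baseline * (1 + OVERHEAD)                 # 6,894 hours (a)

# Scenario weighting (b) and probability (d), with Typical = 1x.
scenarios = {"Optimal": (0.5, 0.15), "Typical": (1.0, 0.65),
             "Worst": (4.0, 0.20)}
ranges = {name: overall * w for name, (w, _) in scenarios.items()}   # (c)
target = sum(overall * w * p for w, p in scenarios.values())         # e+f+g

print(f"Overall effort (a): {overall:,.0f} hrs")                     # 6,894
print(f"Range: {ranges['Optimal']:,.0f} to {ranges['Worst']:,.0f} hrs")
print(f"Target estimate: {target:,.0f} hrs")                         # 10,513
```

Because the model is a few lines of arithmetic, refining it as knowledge improves means adjusting the RoT values, overhead, weightings, or probabilities and re-running it, which keeps the results re-creatable and defensible.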
As highlighted by the best practice of crosschecking, no estimate should stand on its own.  To validate estimates on new platforms, the estimator should compare the results against other methods as a sanity check to determine whether the range and target estimate are in line with expectations.  For instance, if the results of the RoT estimation indicate the effort has its highest probability at 8,000 hours, are the requirements in line with the level of effort on historical projects that consumed the same amount of effort?  If the estimates don’t reconcile or complement each other, then additional analysis may be necessary on the functional requirements or the assumptions.

Benchmarks as Components of Estimates

In order for estimates to be useful there must be a common denominator that connects them.  You can’t derive meaning from separate estimates when they are quantified using different metrics.  Many times cost comparisons are required between different development projects on different platforms requiring different skills.  How would you compare one estimate expressed in development hours against another reflecting network utilization?  The answer is you wouldn’t.  To distill effort down into dollars, a general set of benchmarks must be applied to the effort.  Likewise, if you don’t understand your current cost structure, then it’s unlikely you will be able to estimate future costs.  Development organizations should work closely with their Finance group to determine per-unit costs.  This can be done by utilizing historical data to project the output of a full-time person (a/k/a FTE) and computing it against the salary-related costs.  This produces a blended hourly rate that can be applied to the estimated effort to arrive at total development cost.  Blended rates can be maintained by permanent employee versus contractor, new development versus maintenance, or even geography; however, the rates should be easy to maintain and should not overcomplicate the process.
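As a minimal sketch of the blended-rate calculation described above (all figures are my own illustrative assumptions):

```python
# Hypothetical blended-rate calculation: salary-related costs divided by
# productive output per FTE yields an hourly rate that converts estimated
# effort into dollars. All figures are illustrative assumptions.

loaded_cost_per_fte = 120_000     # annual salary plus benefits and overhead
productive_hours_per_fte = 1_600  # from historical utilization data

blended_rate = loaded_cost_per_fte / productive_hours_per_fte   # $75/hr

estimated_effort = 10_513         # e.g. the target from the RoT example
estimated_cost = estimated_effort * blended_rate

print(f"Blended rate: ${blended_rate:.0f}/hr")
print(f"Total development cost: ${estimated_cost:,.0f}")        # $788,475
```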
The productivity metrics and blended rates are key components of the estimating process, and they should be closely monitored and retained along with other key project-related historical data.  All this data should be evaluated at least annually to ensure accuracy, as well as year over year for any negative trends that could indicate delivery problems, such as productivity declines.
As companies progress in their estimating practice and become more proficient, they should consider additional rigor in areas of the development life cycle that heavily impact estimates.
  • Focus attention on requirements definition as a critical input to estimation.  Poorly defined requirements are the greatest influence contributing to bad estimates.
  • Invest in estimation tools and training.  When looking to standardize the estimating process, there is no more effective means than adopting a comprehensive tool that can be utilized across the enterprise.  Not only does a common tool mean common usage and practice, but it allows collection of historical data that provides input into future estimates.

Summary

Estimating is a journey, not a destination.  While this is a well-used cliché, it is very true here.  By recognizing that estimating is ever evolving and that perfect estimates are never achieved, companies can implement best practices that grow with the enterprise and enable efficient and accurate estimates of business opportunities.  The objective is to reduce uncertainty by creating realistic estimates that balance effort and accuracy, and in doing so provide valuable insights that have lasting effects on customer satisfaction, operational efficiency, and the bottom line.

References

(1) Carey Schwaber, “Best Practices: Estimating Development Projects,” Forrester Research, April 22, 2008.