This post is based on the SEI report “Performance Effects of Measurement and Analysis: Perspectives from CMMI High Maturity Organizations and Appraisers” (the relevant page to download the report is here)
The SEI has published a seminal report (though it is only around 150 pages), comparing the use of statistical methods and models in high-maturity organizations based on surveys from 2008 and 2009. The work, as expected, has a lot of detail, including validation of the results using statistical analysis!
1. Process Performance Models are used extensively in the areas of defect prediction, cost/schedule performance and estimation accuracy, while use in other areas is relatively low. Interesting: Models for customer satisfaction are less frequent.
2. Many organizations use optimization techniques when building/using process performance models. Monte Carlo simulation and use of probabilistic modeling have grown. Interesting: Other techniques have reduced in popularity, while “don’t know” responses have increased!
3. Level of stakeholder involvement in measurement and analysis is along expected lines, with measurement specialists having a high level of involvement. Interesting: It is not clear if all organizations have dedicated measurement specialists or whether process engineers take on the role as needed. Customer involvement is, predictably, lower at the organizational level.
4. Organizations seem to have invested in training specialists in modeling techniques, followed by process engineers. Interesting: It is not clear who the “users” of the models are – in a software product/service organization, I expect users to be project managers and engineers.
5. 75% of managers understand the results of the models well. Interesting: The percentage itself is interesting, since many of the managers I have met do not understand how the models are built!
6. Just about 66% of those who build statistical models understand the intent behind them from the CMMI perspective. Interesting: Somehow, this does not resonate well with me. The only explanation I can think of is that the model builders are statisticians who are guided by the process engineers in identifying factors, building models and interpreting the results.
7. Documenting the models and results well is a significant differentiator for high-maturity organizations. Interesting: No surprise there!
8. Not enough expertise is the only challenge that remained constant between 2008 and 2009; other reasons have decreased! Interesting: In one year, have our problems decreased? I think in 2008, they were exaggerated!
9. 65% of managers want to use PPMs for knowing when their projects are out of track. Interesting: This is good, because having something just to gain a “high-maturity” tag is not, uh, “high-maturity” [Although, "PPMs are the way in the organization" comes a close second!]
10. There are 5 “healthy” ingredients for a good process performance model that are consistent across many research reports. When all ingredients are present, the value of the PPM to the organization is “substantial”. Interesting: The CMMI does not provide any direction on using such reports as guidance!
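Finding 2 above mentions the growth of Monte Carlo simulation in process performance models. As a hedged illustration only (the report does not prescribe any implementation, and the task estimates below are invented for the example), a minimal schedule-completion simulation might look like this:

```python
import random

# Hypothetical task duration estimates in days: (optimistic, most likely, pessimistic).
# The numbers are invented for illustration; a real model would use historical data.
tasks = [(3, 5, 9), (2, 4, 7), (5, 8, 14), (1, 2, 4)]

def simulate_schedule(tasks, trials=10_000, seed=42):
    """Monte Carlo simulation of total project duration.

    Each trial samples every task from a triangular distribution and
    sums the durations, yielding a distribution of completion times.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        totals.append(sum(rng.triangular(lo, hi, mode)
                          for lo, mode, hi in tasks))
    totals.sort()
    return totals

totals = simulate_schedule(tasks)
p50 = totals[len(totals) // 2]        # median completion time
p90 = totals[int(len(totals) * 0.9)]  # 90th-percentile completion time
print(f"P50: {p50:.1f} days, P90: {p90:.1f} days")
```

Reporting a P50 and a P90 instead of a single number is what makes such models useful for the cost/schedule predictions the survey describes.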
1. There are many responses that mention the lack of clarity in what is expected from high-maturity practices.
2. Problems like lack of accurate historical data, wide variation in the type and nature of projects, resources etc. continue to plague the industry.
3. There are no peer-reviewed, published reports on factors to be considered in process performance models. Even common ones like defect prediction do not have standard regression equations whose values/coefficients can be adjusted based on organizational performance.
4. Process Performance Models do not have enough documentation to describe the input data that was used to produce them. This causes resistance in using them well
5. The impact of people variation is not usually considered as a factor, but it often skews actual performance.
6. Experts in statistical techniques tend to forget that finding the cause of variation is notoriously difficult in software development, which is what managers are more interested in! Stating variance values often brings up the question – “what is causing the variation?” – for which the answer is “That’s what you have to find out”. Silence.
7. High-maturity organizations often have the management commitment to stay the course even in financial difficulties – they believe that having high maturity practices is a necessary element of beating the competition and hence coming out of the financial crisis. Without this belief, High maturity goals remain another management fad
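Point 3 above notes the absence of standard regression equations for defect prediction. As a sketch of how an organization might fit its own coefficients from historical data (all the data below is invented for illustration), a simple least-squares line is a reasonable starting point:

```python
# Hypothetical historical data (invented for illustration):
# size of a work product in KLOC vs. defects found in testing.
sizes   = [1.2, 2.5, 3.1, 4.0, 5.5, 6.3, 7.8]
defects = [10, 22, 25, 34, 44, 52, 61]

def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

slope, intercept = fit_line(sizes, defects)
# Predicted defects for a (hypothetical) 4.5 KLOC component:
predicted = slope * 4.5 + intercept
print(f"defects ~= {slope:.1f} * KLOC + {intercept:.1f}; "
      f"prediction for 4.5 KLOC: {predicted:.0f}")
```

The point is that the coefficients come from the organization's own history, which is exactly what the survey respondents say is missing when historical data is inaccurate.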
Read the report a few times to digest it. What conclusions did you draw? What aspects do you observe in your organizations? Did I miss something in my observations?
The most common problem in a Metrics program is defining the "why" for each of the metrics. The second most common problem is getting agreement from all stakeholders on the "what-when-where-which-how" parts of the definition. In this 2-part series, we will look at what you can do to make it easier for your stakeholders to understand what to expect from your metrics approach.
- Defining a Metrics catalog
- Creating standard data collection, reporting and presentation templates
In this part, let us take a look at creating a Metrics Catalog to hold the metrics definitions and communicate unambiguously what the metrics mean and how they will be reported.
Part 2 will look at creating standard templates for the execution, once the metrics are agreed. However, it is often best to gain agreement on the templates along with the definition, since any major change in the templates can cause changes in the mechanics of the definitions.
A Metrics Catalog
A metrics catalog is simply a table with information on the collection and evaluation of metrics. It can be a spreadsheet with columns for the different parts of each metric. The set of columns given below can be used as a starting list.
Name: Define the metric name. Be careful to make the names consistent (calling one "defect density" and another "% reduction in defects across testing cycles" doesn’t help!)
Inputs: What are the raw inputs that you will be using for this metric?
Tip: A Metric is always a relationship between two entities.
Formula: How will the metric be computed? Describe the relationship in mathematical terms.
Objective: Describe the intent of this metric – how you will interpret its values.
Tip: Use general descriptors for the interpretation – don’t say, for example, "if the value is <99%, it means we are not doing well." Rather, say "the values for this metric will help us determine how our customers perceive our services."
Data Source: Identify where the inputs will come from and if possible, who is responsible to collect this information. This is one of the fields where the more detailed the information is, the easier it is for everybody later.
Unit of Measure: Describe the units for the values of the metric. Is it %, defects/lines of code or just a number? Be very careful with this, as it will impact how you report the metric in a visual form.
Target Values: Describe the acceptable range of values for the metric
Tip: Be cautious with this if you don’t have historical data. Leave it blank for the first few periods and then fill it in with the best of the actual values. Once you have sufficient samples, you can devise a proper target value.
Tip 2: Be extra cautious with “industry benchmarks”. Unless they are really similar, don’t thrust them on your organization or you will encounter a lot of resistance to the metrics initiative.
Frequency of collection: Describe how often you will collect the inputs, compute and report the metric.
Tip: As much as possible, try to keep the frequency constant for all the metrics in the catalog. Think collection = reporting – just because data is available weekly doesn’t mean you need to collect and report it weekly.
Area: Describe which area of the product/service lifecycle this metric belongs to.
Type of metric: List whether it is a leading metric or a lagging one.
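A catalog row can also be captured as a structured record, so definitions stay machine-readable alongside the spreadsheet. A minimal sketch, with field names mirroring the columns described above (the sample entry and all of its values are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    """One row of the metrics catalog, mirroring the columns above."""
    name: str
    inputs: list          # raw inputs used to compute the metric
    formula: str          # mathematical relationship between the inputs
    objective: str        # intent - how the values will be interpreted
    data_source: str      # where the inputs come from, and who collects them
    unit: str             # unit of measure (%, defects/KLOC, count, ...)
    target: str = ""      # acceptable range; leave blank until history exists
    frequency: str = "monthly"    # collection and reporting frequency
    area: str = ""                # lifecycle area the metric belongs to
    metric_type: str = "lagging"  # leading or lagging

# Invented sample entry for illustration:
defect_density = MetricDefinition(
    name="Defect density",
    inputs=["defects found in testing", "size in KLOC"],
    formula="defects / KLOC",
    objective="Track how defect levels vary with product size",
    data_source="Defect tracker; collected by the test lead",
    unit="defects/KLOC",
    area="Testing",
)
```

A list of such records can then be exported to the spreadsheet, or checked automatically for blank targets and inconsistent frequencies.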
The first part of this series provided an overview of the PMO, types of PMOs and typical functions.
The second part looked at the role of PMO in setting up and monitoring Change Management processes and activities.
The third part discussed the Quality Management responsibilities of a PMO and provided a table of contents to a Quality Management Plan
This post shares some information and experience on how the PMO can review projects and what to focus on in such reviews.
One of the most important functions of the PMO is to periodically review projects, to be able to answer the following questions:
1. Where is the project with respect to where it should be?
2. Will the project deliver on its objectives – timelines, quality etc?
We have all worked on projects, where the status is green for weeks and even months and suddenly moves to “Red” one fine day.
The best early warning system is effective and in-depth reviews by the PMO for each project in its portfolio. The frequency of such reviews depends on:
Size of the project
If the project is large and complex, one review meeting with all stakeholders is not effective. There is usually too much discussion on some items, especially those that are over the tolerance levels, while routine ones are not given much time. Instead, multiple reviews with separate teams will provide the necessary focus and insight into that area.
Separate reviews also help you validate information provided by one team against others. With a single meeting, contradictory statements are not voiced due to fear or a desire to avoid conflict.
If the project is small or medium sized (<30–40 people and fewer cross-domain teams), a single review can be effective, as all stakeholders can present information quickly.
A typical review should not be more than 3 hours, as information overload sets in and people become mentally tired.
Criticality to business
Review depth also depends on how important the project is to the business. For example, a public-facing market solution will need to be monitored much more closely than a project for generating MIS reports.
- If the project is progressing smoothly, with intermediate deliverables on time and within quality limits, you may want to schedule a monthly meeting with offline status reports weekly.
- If the project is just about surviving, weekly reviews are necessary to tightly control the ship.
- If the project is behind on timelines or there are escalations from customers (these can be internal, such as marketing, end-users etc.), day-wise monitoring may be required.
This does not mean having long meetings every day, but you may request daily status reports to be circulated to the governance team, with meetings held twice a week.
What should you review
At the minimum, the review should focus on
- Status of tasks with respect to the plan
- Key accomplishments during the reporting period
- Key deliverables and activities during the next period, and the progress on them so far, to determine if they will still be met. A good way to do this is to ask for the estimated time to complete in-progress activities and verify it against the plan
- Dependencies for the upcoming activities, to see if there are any impacts due to external and internal dependencies (such as staff from another team, software or hardware availability etc.)
- Status of top issues and any new issues added
- Status of top risks and any updates to the Risk profile
- Change requests created/modified during the period
- Quality indicators such as defect trends, incident escalations etc
How to review effectively
- Instead of imposing a template, which can restrict information, ask the project to develop something incorporating the above. The main point is to ensure they don’t feel constrained to report in a manner they are uncomfortable with.
- The report can be simple to start with, but must be able to provide enough information for the PMO to decide on the true status of the project.
- Status is usually shown with traffic-light symbols, but this is generally not accurate, or at least not consistent. Insist on objective criteria to determine what is yellow and what is red.
- Watch for tasks that rapidly change in progress completion, especially ones that slide downwards.
- When people use vague qualifiers like “I think it should be done in a couple of days” or “I believe we are on track”, look at start and end dates of the activity to gain an idea of the effort consumed. Ask for time to complete to gain a true understanding of the remaining work
- A major factor in missed deadlines is underestimating the time it takes to solve operational issues. A solid issue management mechanism will help PMO understand the blocking issues that could impact the delivery
- How is the product doing with respect to quality? Are defects being captured accurately? Schedule and review external audits that verify this process specifically, since defects may not always be reported, under the belief that they are minor.
- Take the time to review customer feedback, if any, and see how it dovetails with the performance of the project.
- Periodically reviewing risks is one of the most important tasks of the PMO. The risk profile must be kept updated when more information is received on a subject that is impacted by a risk.
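On the point about objective status criteria: one way to make traffic-light colors reproducible is to derive them from measured values rather than opinion. A toy sketch follows; the thresholds and inputs are invented for illustration, and each PMO would agree its own criteria with the project teams:

```python
def status_color(schedule_variance_pct, open_blocking_issues):
    """Map objective measures to a traffic-light status.

    The thresholds below are invented for illustration; the point is
    that the same inputs always produce the same color.
    """
    if schedule_variance_pct > 15 or open_blocking_issues > 3:
        return "red"
    if schedule_variance_pct > 5 or open_blocking_issues > 0:
        return "yellow"
    return "green"

print(status_color(2, 0))    # -> green
print(status_color(8, 0))    # -> yellow
print(status_color(20, 1))   # -> red
```

With criteria like these written down, a project cannot stay "green for weeks" while quietly sliding toward red.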
The critical part, and I cannot overemphasize this, is that the project must feel that the PMO will do everything it can to help the project solve issues and move forward. This may mean releasing additional funds or adding experts for short durations to solve problems. If the project team feels that the PMO is only reviewing/policing, it will find ways to hide information.
You can find an example of a status report template (and some other good ones) at Derek Huether’s blog Critical Path.
A project review is a good opportunity for the PMO to demonstrate leadership to the projects. Transparent communication, accountability, decision-making and support are necessary elements to conduct a good project review.
What’s your take? What have I missed completely? Do you have something more to add?
We started off the PMO series with a basic introduction about the PMO – terminologies, the different types of PMO and some of its typical functions.
Let’s talk about one very important part of a PMO function – Change Management. Change is the only constant in life – cliched? Of course, but true nevertheless. It is also one of the biggest causes of “project death” – those projects which go on indefinitely, but always overdue and a cost sink (read an extreme example of how change in scope resulted in a 12-year project that was also a massive failure!).
In a large project/program, change management becomes very important to ensure that something remains stable, or at least manageable.
Change Management has become the norm in the industry today, and there are sometimes dedicated “Change Managers” too, but there is enough change mismanagement as well. One of the biggest reasons for this mismanagement is that change management is used synonymously with managing requirements change.
Managing change does not only mean managing changes to scope (“scope creep”, as it is called, but that is a creepy term). Architecture/Design decisions, standards and tools also must be controlled to prevent chaos. This is where most change management processes fail.
Let us look at some change management mechanisms and then we will revisit how change management can be applied.
Change Control Board (CCB):
One of the most common responses/techniques, but often underutilized. The CCB need not be a single, all-powerful entity; there can be more distributed ones at different levels. For example, for large architectural changes, there can be a high-level CCB, while smaller design decisions can be changed by a lower-level CCB. It is usually good to organize such mini-CCBs by the amount of control they have rather than by phase – this will create cross-functional teams at all levels, rather than more silos by function.
Change Request Creation and Tracking processes:
Having a formal change process is itself a barrier to most spur-of-the-moment change decisions. At the minimum, change request processes should describe how a change request is created, who reviews it, criteria for escalation, stakeholders to be involved and change closure. It also needs to tie in configuration control for effectiveness.
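The lifecycle just described (creation, review, escalation, closure) can be made explicit as a small state machine. The states and transitions below are an assumption chosen for the example, not prescribed by any framework:

```python
# A minimal change-request state machine. The states and allowed
# transitions are invented for illustration.
TRANSITIONS = {
    "created":     {"review"},
    "review":      {"approved", "rejected", "escalated"},
    "escalated":   {"approved", "rejected"},
    "approved":    {"implemented"},
    "implemented": {"closed"},
    "rejected":    {"closed"},
}

class ChangeRequest:
    def __init__(self, title):
        self.title = title
        self.state = "created"
        self.history = ["created"]

    def move_to(self, new_state):
        """Advance the request, rejecting transitions the process forbids."""
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)

cr = ChangeRequest("Change branding colors")
cr.move_to("review")
cr.move_to("approved")
cr.move_to("implemented")
cr.move_to("closed")
print(cr.history)  # ['created', 'review', 'approved', 'implemented', 'closed']
```

Encoding the flow this way is what makes "spur-of-the-moment" changes impossible: a request simply cannot jump from created to implemented.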
Incorporating change (and its consequences) into planning:
This is usually a fatal gap in most change processes. Changes to non-scope areas of the project are considered immune to schedule or cost effects, which is rather unlikely. Sometimes, the development team is asked to absorb the effect as the price for not understanding or doing it right the first time. Managers in charge of change control must resist this thought process or risk losing much more at a later stage in the project.
Stricter controls as Project progresses
At the start, change is more likely – everyone is, figuratively, feeling around in the dark, establishing sign-posts and installing lights – but as the project progresses, it is important to ensure that every change request is asked “why” several times. Any change later in the lifecycle, especially with respect to decisions, is likely to affect work products already produced and accepted. A common victim of this syndrome in an application development project is the user interface, which is thought of as a skin – easily replaced – but is it? In services, change is more tightly connected to configuration than in application development, but the principle still holds true.
Having looked at some mechanisms for managing change, let us go back on how and where to apply change management. Change Management in an application development scenario can be used at:
- Scope management
- Technology stacks
- Standards to be followed, such as branding, user interface etc
- Third-party components
- Development environments
In a services environment, change needs to be managed for
- System Software (OS, standard application software etc)
- Communication equipment
- Services and their endpoints
- Processes
- Knowledge databases
Note: In IT Service Management circles, the CCB is termed the CAB, short for Change Advisory Board (though why it just “advises” stumps me).
That’s alright, I know this stuff, but where does the PMO fit in, you ask? The PMO must provide oversight for managing change. It establishes the procedures for change control and provides necessary direction to the program on the levels of CCB (scope of change control, escalation criteria etc.). It is also the final arbiter for changes to project scope, schedule or cost.
In fact, when rescuing troubled projects, one of the first things a PMO should do is take a hard look at the project for change leaks and, based on the amount of leakage, institute an appropriate level of change control. I say “take a hard look” because it is almost guaranteed that a typical derailed project has issues managing change.
What are your experiences in managing change in your projects or services? Is there something else? Think about it and let me know.
One of the most controversial elements among IT staff is the use of checklists to verify accurate completion of activities.
Most implementations of IT frameworks, particularly in application development and IT services, strongly advocate the use of checklists as the first record of verification and/or validation. The reasoning behind it is quite simple – verify if critical activities/items have been completed and the resultant output meets the requirements.
At the outset, people create a checklist with a long list of things to be checked, in the belief that all of them are important! Over a period of time, “process improvement” adds more items, resulting in a checklist that takes more time to fill in than the activity it verifies. Result – no one uses it in spirit, defeating its very purpose.
A look at other industries shows a different trend, though. Manufacturing, construction and aviation routinely use checklists to ensure nothing of relevance is left out. Let us take a detailed look at the airline industry.
Pilots use a number of checklists before, during and after each flight – sometimes as many as 17 different checklists per flight:
- Pre-flight checklists on safety, external inspections, cockpit inspections, before engine start, starting the engines, before taxiing, during taxiing
- Take Off checklists – before take-off, line-up, during take-off, after take-off, Climbing, Cruising
- Landing checklists – Descent, before landing, going around, after landing, shutdown, before leaving aircraft.
In addition, there are other checklists for abnormal conditions, such as emergency landings, loss of cabin pressure etc.
Two things differentiate checklists in other industries from checklists in IT: first, the number of places where a checklist item applies is potentially infinite in IT; second, the items are not necessarily sequential.
Checklists in IT usually check for standards adherence and may have some questions to check for common mistakes. The problem with this is that standards have to be checked throughout the output, where violations may occur hundreds of times – which may not be humanly possible.
The best way is to automate the standards verification process, through tools such as static analyzers, scripts to check authorized installations of software across the enterprise etc. Checklists must be used only for checking logical errors, common mistakes in applying principles and other areas, where automation may not be optimal.
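As a toy example of automating a standards check, the script below scans source files for leftover TODO comments. The rule, the file pattern and the directory name are all invented for illustration; in practice a proper static analyzer would cover far more ground:

```python
import re
from pathlib import Path

# Hypothetical standard: no TODO comments may remain in committed source.
TODO_PATTERN = re.compile(r"\bTODO\b")

def find_violations(root, suffix=".py"):
    """Scan every matching file under `root` and report TODO occurrences
    as (file path, line number, offending line) tuples."""
    violations = []
    for path in Path(root).rglob(f"*{suffix}"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if TODO_PATTERN.search(line):
                violations.append((str(path), lineno, line.strip()))
    return violations

# Example usage against a hypothetical "src" directory:
root = Path("src")
if root.exists():
    for path, lineno, line in find_violations(root):
        print(f"{path}:{lineno}: {line}")
```

Once checks like this run automatically, the human checklist can shrink to the items that genuinely need judgment.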
Unfortunately, most IT frameworks mandate organization-wide definitions of standard checklists that become outdated quickly. Effectiveness can be improved dramatically by requiring that checklists be used, while letting the teams themselves define their contents. This is a COBIT-style approach, where control points are defined, but it is left to the teams to decide how they will pass each control point.
The Personal Software Process by Watts Humphrey (SEI CMU), for example, recommends that software developers create their own checklists based on their work style. For more information on PSP, visit the SEI website.