Software Development Process Metrics

Every process, not just testing, should be bound to metrics so it can be improved continuously. Ideally the metrics are set up at an early stage, so that every team member can watch them and keep improvement aligned with the final target. I only got the chance to do this near the end of the release cycle, when the hands-on work had wound down. I want to see actual results measured against defined rules, instead of only bug counts and gut feelings. That will help me decide how to improve testing coverage next time, how to reduce waste during development, how to prioritize work, and how to communicate with others with the big picture and context. My aim is an objective measure of the effectiveness and efficiency of testing. It can be a way to understand what happened before and how to improve next time. Most importantly, it sharpens my own thinking.

1. Define Metrics

·  Deciding the audience (executive team, test team)

·  Identifying the metrics which capture the status of each type of testing

·  Ensuring that all different categories in metrics are considered based on the project needs

·  Setting up easy mechanisms for data collection and data capture

·  Identifying the goals or problem areas where improvement is required

·  Refining the goals, using the “Goal-Question-Metric” technique
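As an illustration of the Goal-Question-Metric refinement, a goal is broken into questions, and each question into metrics that answer it. The goal, questions, and metric names below are hypothetical examples, not values prescribed by this process:

```python
# A minimal Goal-Question-Metric (GQM) decomposition.
# All goal/question/metric names here are invented for illustration.
gqm = {
    "goal": "Improve the effectiveness of system testing",
    "questions": [
        {
            "question": "How many defects escape to production?",
            "metrics": ["defect_removal_efficiency", "post_release_defects"],
        },
        {
            "question": "How much of the code is exercised by tests?",
            "metrics": ["statement_coverage", "branch_coverage"],
        },
    ],
}

def metrics_for_goal(gqm):
    """Collect every metric that traces back to the goal."""
    return [m for q in gqm["questions"] for m in q["metrics"]]
```

Walking the structure this way guarantees every collected metric can be traced back to a question, and every question back to the goal.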

2.  Identify the data required for each metric; if the data is not available, set up a process to capture it

·  Provide the definition for each metric

·  Define the benchmark or goal for each metric

·  Verify whether the benchmark or goal is realistic, by comparing with industry standards or with data from similar projects within the organization

·  Based on the type of testing, metrics are mainly classified into:

§  Manual testing

§  Automation testing

§  Performance testing

·  Each of these is further categorized based on the focus area:

§  Productivity

§  Quality

§  People

§  Environment/Infrastructure

§  Stability

§  Progress

§  Tools

§  Effectiveness
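The metric definition, benchmark, and classification described above can be captured in one small record per metric. The schema and sample values below are assumptions for illustration, not a standard:

```python
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    name: str         # e.g. "defect density"
    definition: str   # unambiguous formula or description
    benchmark: float  # goal value to compare actuals against
    test_type: str    # manual / automation / performance
    focus_area: str   # productivity, quality, stability, ...
    data_source: str  # where the raw data is captured

# Hypothetical example entry.
defect_density = MetricDefinition(
    name="defect density",
    definition="defects found per KLOC",
    benchmark=0.5,
    test_type="manual",
    focus_area="quality",
    data_source="defect tracker",
)
```

Keeping every metric in one explicit record avoids the vague-definition pitfall discussed later: two people reading the same record cannot interpret the metric differently.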

3. Communication with stakeholders

To ensure better end results and to increase buy-in, the metric identification and planning process must involve all stakeholders.

·   Communicate the need for metrics to all the affected teams

·   Educate the testing team regarding the data points that need to be captured for generating the metrics

·  Obtain feedback from stakeholders

·  Communicate with stakeholders - how often the data needs to be collected, how often the reports need to be generated, etc.

4. Capturing and verifying data

·  Ensure that the data capturing mechanism is set up and streamlined

·  Communicate and give proper guidelines to the team members on the data that is required

·  Set up verification points to ensure that all data is captured

·  Identify the sources of inaccurate data for each basic metric and take corrective steps to eliminate the inaccuracies

·  For each metric, define a source of data and a procedure to capture it

·  Ensure that minimum effort is spent on capturing the data by automating the capturing process wherever possible (for example, via a tool's API)

·  Capture the data in a centralized location easily accessible to all members

·  Collect the data with minimal manual intervention
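A minimal sketch of centralized, low-effort capture: every raw measurement is appended as one row to a shared store, so collection needs no manual consolidation. The column set here is an assumption:

```python
import csv
import io

# Assumed columns for the shared store; adapt to the metrics defined earlier.
FIELDS = ["date", "metric", "value"]

def append_measurement(stream, date, metric, value):
    """Append one raw measurement to the centralized store."""
    writer = csv.DictWriter(stream, fieldnames=FIELDS)
    writer.writerow({"date": date, "metric": metric, "value": value})

# In practice the stream would be a shared file; StringIO keeps the sketch
# self-contained.
buf = io.StringIO()
append_measurement(buf, "2013-05-01", "test_cases_executed", 42)
append_measurement(buf, "2013-05-01", "defects_found", 5)
```

A flat append-only store like this is deliberately simple: any team member can write to it, and the derived-metric calculations in the next step can read it without manual intervention.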

5. Analyzing and processing data

·  Once the data is captured, it must be analyzed for completeness

·  Verify whether the captured data is accurate and up-to-date

·  Define the process/template in which derived data must be captured

·  Calculate all the metrics (derived metrics) based on the base metrics

·  Verify whether the metrics are conveying the correct information

·  Automate the calculation of derived metrics from basic metrics to reduce the effort

6. Reporting

·  Define an effective approach for reporting, like a metric dashboard

·  It is advisable to obtain feedback from stakeholders and their representatives on the metrics to be presented by providing samples

·  Metrics should be presented based on the audience and in a consistent format

·  Reports should contain the summary of observations

·  Reporting should be in a clearly understandable format, preferably graphs and charts with guidelines to understand the report

·  Reports should clearly point out all the issues or highlights

·  Users should be able to access the underlying data on request

·  Reports should be presented in such a way that metrics are compared against benchmarks and trends shown

·  Reports should be easily customizable based on user requirements

·  Ensure that the effort spent on reporting is minimal; automate wherever possible (for example, with macros)
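As a sketch of reporting metrics against their benchmarks, a dashboard row can be reduced to the metric, its actual value, its benchmark, and a status flag. The metric names and numbers below are invented:

```python
def report_row(name, actual, benchmark, higher_is_better=True):
    """One plain-text dashboard line comparing actual vs. benchmark."""
    ok = actual >= benchmark if higher_is_better else actual <= benchmark
    status = "OK" if ok else "ATTENTION"
    return f"{name:<28}{actual:>8.2f}{benchmark:>10.2f}  {status}"

# Hypothetical values: coverage meets its benchmark, density does not.
lines = [
    report_row("statement coverage (%)", 81.0, 80.0),
    report_row("defect density (/KLOC)", 0.7, 0.5, higher_is_better=False),
]
```

The `higher_is_better` flag is the one subtlety: for a metric like defect density, beating the benchmark means being *below* it, and a report that gets this backwards quietly misleads its audience.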

7. Continuous improvement

·  Continuous improvement is the key to the success of any process

·  After successful implementation of metrics and after achieving the benchmark, revisit the goals and benchmarks and set them above the industry standards

·  Regularly collect feedback from the stakeholders

·  Metrics report must be accessible to everyone

·  Evaluate new metrics to capture

·  Refine the report template

·  Ensure that the effort spent on reporting stays minimal

Challenges in implementing a metrics program

Up to 80 percent of all software metrics initiatives fail within two years. To avoid common pitfalls in test metrics, the following aspects need to be considered:

· Management commitment: to be successful, every process improvement initiative needs strong management commitment in terms of owning and driving the initiative on an ongoing basis.

· Measuring too much, too soon: one can identify many metrics that could be captured in a project, but the key is to identify the most important ones that add value.

· Measuring too little, too late: the other mistake teams make is to collect too few metrics, too late in the process. This does not provide the right information for proper decision making.

· Wrong metrics: if the metrics do not really relate to the goals, it does not make sense to collect them.

·  Vague metrics definitions: Ambiguous metric definitions are dangerous, as different people may interpret them in different ways, thus resulting in inaccurate results.

·  Using metrics data to evaluate individuals: one of the primary reasons a metrics program is not appreciated and supported at all levels of the team is the fear that the data may be used against people. So never use metrics data to evaluate a person.

·  Using metrics to motivate rather than to understand: many managers make the mistake of using metrics to motivate teams or projects. This may send the signal that the metrics are being used to evaluate individuals and teams. So the focus must be on understanding the message the metrics convey.

·  Collecting data that is not used: There may be instances where data is collected but not really used for analysis; avoid such situations.

·  Lack of communication and training:

§  Explain why: a skeptical team needs to be told why you measure the items you chose

§  Share the results

§  Define data items and procedures

Key metrics for software testing

Test progress tracking metric: Track the cumulative test cases or test points (planned, attempted, and successful) over the test execution period.
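The cumulative curves can be derived from per-period counts with a running sum. The weekly numbers below are invented for illustration:

```python
from itertools import accumulate

# Test cases per week of the execution period (hypothetical data).
planned   = [20, 20, 20, 20]
attempted = [15, 22, 20, 18]
passed    = [12, 20, 19, 17]

# Running totals - the three curves plotted on a progress chart.
cum_planned   = list(accumulate(planned))    # [20, 40, 60, 80]
cum_attempted = list(accumulate(attempted))  # [15, 37, 57, 75]
cum_passed    = list(accumulate(passed))     # [12, 32, 51, 68]
```

Plotting the three cumulative series against each other shows at a glance whether execution is keeping pace with the plan and how much of what was attempted actually passed.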

Defect Metrics

1) Defects by action taken

2) Defects by injection phase

3) Defects by detection phase

4) Defects by priority
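Each of the four breakdowns above is a frequency count of defects over one attribute. A minimal sketch, with hypothetical defect fields and sample records:

```python
from collections import Counter

# Hypothetical defect records; field names are assumptions.
defects = [
    {"priority": "high", "injected": "design", "detected": "system test"},
    {"priority": "high", "injected": "coding", "detected": "unit test"},
    {"priority": "low",  "injected": "coding", "detected": "system test"},
]

def defects_by(attribute, defects):
    """Frequency of defects grouped by one attribute."""
    return Counter(d[attribute] for d in defects)
```

The same one-liner serves all four metrics: `defects_by("priority", ...)`, `defects_by("injected", ...)`, and so on, which is exactly the kind of derived-metric automation recommended earlier.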

Release criteria

If a defect falls in the shaded region, the software should not be released unless there is a good reason for the defect to be waived.

Defect severity vs. priority matrix (the shading that marked release-blocking cells in the original did not survive extraction; only the row and column headings are recoverable):

Priority \ Severity | Critical | Serious | Moderate | Minor
--------------------|----------|---------|----------|------
High                |          |         |          |
Medium              |          |         |          |
Low                 |          |         |          |

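The release rule can be expressed as a lookup over the severity/priority matrix. Because the shaded cells are not recoverable from the source table, the blocking set below is an assumed example only:

```python
# Assumed blocking (shaded) severity/priority combinations -- the
# actual shaded cells of the original table are not recoverable.
BLOCKING = {
    ("critical", "high"), ("critical", "medium"),
    ("serious", "high"),
}

def blocks_release(severity, priority, waived=False):
    """True if an open defect in a shaded cell blocks the release."""
    return (severity, priority) in BLOCKING and not waived
```

The `waived` flag mirrors the text above: a defect in the shaded region can still be released if there is a good, recorded reason to waive it.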
Defect by cause

This metric helps the development team and the test team focus on areas for improvement. For example, defects could be distributed across causes such as:

enhancement, impact not analyzed, error in existing program, not applicable, insufficient information, insufficient time, lack of experience, lack of system understanding, improper setup, standards not followed, lack of domain knowledge, lack of coordination, and ambiguous specification.

Defect by type

This metric can be a good pointer to areas for improvement. It could be distributed across types such as:

standards, test setup, detailed design, comments, consistency, not a defect, user interface, documentation, incomplete test case, incomplete requirements, functional architecture, performance, reusability, invalid test case, naming conventions, logic, incorrect requirements, planning, and others.

 


 
