Testing Basics

ARCHIVE FOR THE ‘TESTING BASICS’ CATEGORY


Difference between Re-testing and Regression testing:

Re-Testing:

  • When a tester finds a bug and reports it to the developers, and the developer fixes it, re-running only the test case in which the bug was found (with the same or different data) is known as re-testing.
  • Re-testing requires re-running the failed test cases.
  • Re-testing is planned, based on the bug fixes listed in the build notes and documentation.

 Regression Testing:

  • After a modification or bug fix, if the tester runs not only the test case in which the bug was found but also all (or a specified set of) the test cases executed earlier, it is known as regression testing. The aim of this testing is to confirm that the bug fix has not affected the previously passed test cases.
  • It is an important activity, performed on modified software to provide confidence that the changes are correct and do not affect other functionality and components.
  • Regression testing is generic and is not always specific to defect fixes.
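The distinction can be sketched in a few lines of Python; the test-case names and previous results below are hypothetical:

```python
# Outcome of the previous test cycle; case names and results are hypothetical.
previous_results = {"login": "fail", "logout": "pass", "search": "pass"}

def execute(case):
    # Stand-in for actually running one test case against the fixed build.
    return "pass"

# Re-testing: re-run only the failed case(s) to confirm the bug fix.
retest = {c: execute(c) for c, r in previous_results.items() if r == "fail"}

# Regression testing: re-run the earlier suite (including the cases that
# passed) to confirm the fix did not break anything else.
regression = {c: execute(c) for c in previous_results}
```

Note that the regression run is a superset of the re-test run: it exists to protect the previously passing cases.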

Difference between System Testing and System Integration Testing:

1. Integration testing is testing in which individual software modules are combined and tested as a group, while system testing is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.

2. Integration testing tests the interfaces between the modules; it can be top-down, bottom-up, or big-bang. System testing tests end-to-end business scenarios in an environment similar to the production environment.

3. System testing is conducted at the final level, while integration testing is done each time modules are combined or a new module needs to be bound to the system.

4. System testing is high-level testing while integration testing is low-level testing. In simple words, system testing starts on completion of integration testing, not vice versa.

5. Test cases for integration testing are created with the express purpose of exercising the interfaces between the components or modules, while test cases for system testing are developed to simulate real-life scenarios.

6. For example, if an application has 8 modules, testing the entire application with all 8 modules combined is system testing. If the application interacts with other applications (external systems) to retrieve or send data, testing it together with those other applications and external systems is integration testing or system integration testing.
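As a rough illustration, using two hypothetical modules (a tax calculator and a cart), an integration test exercises the interface between the modules, while a system test walks an end-to-end scenario:

```python
def calc_tax(amount):
    # Hypothetical module A: computes 10% tax on an amount.
    return round(amount * 0.1, 2)

def cart_total(prices):
    # Hypothetical module B: totals a cart and calls module A's interface.
    subtotal = sum(prices)
    return subtotal + calc_tax(subtotal)

def test_integration_cart_uses_tax():
    # Integration test: exercises the interface between the two modules.
    assert cart_total([10.0]) == 11.0

def test_system_checkout_flow():
    # System test: an end-to-end business scenario (add two items, check out).
    total = cart_total([10.0, 20.0])
    assert total == 33.0

test_integration_cart_uses_tax()
test_system_checkout_flow()
```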


Difference between Sanity and Smoke Testing:

Smoke Testing:

  • When a build is received, smoke testing is done to check whether the build is ready and stable enough for further testing.
  • Smoke testing is a wide approach in which all areas of the software application are tested without going into depth.
  • Test cases for smoke testing can be manual or automated.
  • A smoke test is basically designed to touch each and every part of an app in a cursory way.
  • Smoke testing is shallow and wide.
  • Smoke testing is conducted to ensure that the most crucial functions of a program are working, without bothering with finer details.
  • Smoke testing is like a general health check-up.

Sanity Testing:

  • After receiving a software build with minor changes in code or functionality, sanity testing is performed to ascertain that the bugs have been fixed and no further issues have been introduced by these changes. The goal is to determine that the proposed functionality works roughly as expected.
  • Sanity testing exercises only the particular component of the entire system.
  • A sanity test is usually unscripted, performed without test scripts or test cases.
  • Sanity testing is narrow and deep.
  • Sanity testing verifies whether the changed area meets its requirements, checking a few features in depth rather than all features broadly.
  • Sanity testing is like a specialized health check-up.
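A minimal Python sketch of the contrast, using a hypothetical `App` facade: the smoke test touches every major area once, while the sanity test probes only the changed area (search, in this sketch) more deeply:

```python
class App:
    # Hypothetical facade with one entry point per major area of the app.
    def login(self, user, pwd):
        return True
    def search(self, term):
        return ["result"]
    def checkout(self, items):
        return "order-1"

def smoke_test(app):
    # Shallow and wide: touch every major area once, with no detail.
    checks = {
        "login": app.login("u", "p") is True,
        "search": bool(app.search("tv")),
        "checkout": app.checkout(["item"]).startswith("order"),
    }
    return all(checks.values()), checks

def sanity_test(app):
    # Narrow and deep: probe only the changed area harder than the smoke test.
    return app.search("tv") == ["result"] and app.search("") == ["result"]

smoke_ok, touched = smoke_test(App())
```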

Test Plan:

A test plan is a high-level document that describes how testing will be performed. It is usually prepared by the Test Lead or Test Manager, and its focus is to describe what to test, how to test, when to test, and who will do which test.

The plan typically contains a detailed understanding of what the eventual workflow will be.

Master test plan: A test plan that typically addresses multiple test levels.

Phase test plan: A test plan that typically addresses one test phase.

Test Plan Template contains following components:

1. Introduction—

A brief summary of the product being tested. Outline all the functions at a high level.

  • Overview of This New System
  • Purpose of this Document
  • Objectives of System Test

2. Resource Requirements—

  • Hardware– List of hardware requirements
  • Software–List of software requirements: primary and secondary OS
  • Test Tools—List of tools that will be used for testing.
  • Staffing

3. Responsibilities—

List of QA team members and their responsibilities

4. Scope—

  • In Scope
  • Out of Scope

5. Training—

List of trainings required

6. References—

List the related documents, with links to them if available, including the following:

  • Project Plan
  • Configuration Management Plan

7. Features To Be Tested / Test Approach—

  • List the features of the software/product to be tested
  • Provide references to the Requirements and/or Design specifications of the features to be tested

8. Features Not to Be Tested—

  • List the features of the software/product which will not be tested.
  • Specify the reasons these features won’t be tested.

9. Test Deliverables—

  • List of the test cases/matrices or their location
  • List of the features to be automated

10. Approach—

  • Mention the overall approach to testing.
  • Specify the testing levels [if it's a Master Test Plan], the testing types, and the testing methods [Manual/Automated; White Box/Black Box/Gray Box].

11. Dependencies—

  • Personnel Dependencies
  • Software Dependencies
  • Hardware Dependencies
  • Test Data & Database

12. Test Environment—

  • Specify the properties of the test environment: hardware, software, network, etc.
  • List any testing or related tools.

13. Approvals—

  • Specify the names and titles of all persons who must approve this plan.
  • Provide space for signatures and dates.

14. Risks and Risk Management Plans—

  • List the risks that have been identified.
  • Specify the mitigation plan and the contingency plan for each risk.

15. Test Criteria—

  • Entry Criteria
  • Exit Criteria
  • Suspension Criteria

16. Estimate—

  • Size
  • Effort
  • Schedule

Difference between Load Testing and Stress Testing:

  • Testing the app with the maximum number of users and inputs is defined as load testing, while testing the app with more than the maximum number of users and inputs is defined as stress testing.
  • In load testing we measure the system's performance for a given volume of users, while in stress testing we find the breaking point of the system.
  • Load testing tests the application against its stated load requirements, which may include any of the following criteria:
    • Total number of users
    • Response time
    • Throughput
    • Parameters that indicate the state of the servers/application
  • Stress testing tests the application under unexpected load. It includes:
    • Virtual users (Vusers)
    • Think time

Example:

If an app is built for 500 users, then for load testing we test with up to 500 users, and for stress testing we test with more than 500.
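The example can be sketched with Python's `concurrent.futures`; the 1 ms simulated handler and the 50-worker pool are arbitrary stand-ins for a real system under test, not a real load-testing tool:

```python
from concurrent.futures import ThreadPoolExecutor
import time

RATED_CAPACITY = 500   # the app in the example is built for 500 users

def handle_request(user_id):
    # Stand-in for one virtual user hitting the app; returns response time (s).
    start = time.perf_counter()
    time.sleep(0.001)   # simulated server-side processing
    return time.perf_counter() - start

def run_with_users(n_users):
    # Fire n_users requests through a worker pool and summarize the run.
    with ThreadPoolExecutor(max_workers=50) as pool:
        times = list(pool.map(handle_request, range(n_users)))
    return {"users": n_users, "max_response_s": max(times)}

load_result = run_with_users(RATED_CAPACITY)        # load test: at the limit
stress_result = run_with_users(RATED_CAPACITY * 2)  # stress test: beyond it
```

In a real tool the interesting output is how `max_response_s` (and the error rate) degrades between the two runs.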


Systems Development Life Cycle (SDLC):

The systems development life cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project. To manage this, a number of SDLC models have been created: waterfall, fountain, spiral, build-and-fix, rapid prototyping, incremental, etc.

Stages or different phases of SDLC are mentioned below:

  • Project planning, feasibility study
  • Systems analysis, requirements definition
  • Systems design
  • Implementation
  • Integration and testing
  • Acceptance, installation, deployment
  • Maintenance

Project planning, feasibility study: Establishes a high-level view of the intended project and determines its goals.

Systems analysis, requirements definition: Refines project goals into defined functions and operation of the intended application. Analyzes end-user information needs.

Systems design: Describes desired features and operations in detail, including screen layouts, business rules, process diagrams, pseudo code and other documentation.

Implementation: Coding is written by developers in this phase.

Integration and testing: Brings all the pieces together into a special testing environment, then checks for errors, bugs and interoperability.

Acceptance, installation, deployment: The final stage of initial development, where the software is put into production and runs actual business.

Maintenance: What happens during the rest of the software's life: changes, corrections, additions, and moves to a different computing platform, and more. This, the least glamorous and perhaps most important step of all, goes on seemingly forever.


Waterfall Model:

The waterfall model is a sequential software development model in which development is seen as flowing steadily downwards (like a waterfall) through several phases.

The waterfall model was meant to function in a systematic way, taking the production of the software from the basic step downwards towards detail, just like a waterfall, which begins at the top of a cliff and flows downwards but never backwards.

Different Phases of Waterfall Model:

Definition Study / Analysis: During this phase, research is conducted, including brainstorming about the software: what it is going to be and what purpose it is going to fulfill.

Basic Design: If the first phase is completed successfully and a well-thought-out plan for the software development has been laid, the next step is formulating the basic design of the software on paper.

Technical Design / Detail Design: After the basic design is approved, a more elaborate technical design can be planned. Here the functions of each part are decided and the engineering units are defined, for example modules, programs, etc.

Construction / Implementation: In this phase the source code of the programs is written.

Testing: In this phase, the whole design and its construction are put under test to check functionality. Any errors will surface at this point of the process.

Integration: In this phase, the organization puts the system into use after it has been successfully tested.

Management and Maintenance: Maintenance and management are needed to ensure that the system continues to perform as desired.

Advantages of Waterfall Model:

  • The waterfall model is simple to implement, and the amount of resources required for it is minimal.
  • This methodology is preferred in projects where quality is more important than schedule or cost.
  • Documentation is produced at every stage of the software's development, which makes understanding the product design procedure simpler.
  • After every major stage of software coding, testing is done to check that the code runs correctly.

Disadvantages of Waterfall Model:

  • Real projects rarely follow the sequential flow, and iterations in this model are handled indirectly. These changes can cause confusion as the project proceeds.
  • In this model the software and hardware are frozen. But as technology changes at a rapid pace, such freezing is not advisable, especially in long-term projects.
  • Even a small change in any previous stage can cause big problems for subsequent phases, as all phases depend on each other.
  • Going back a phase or two can be a costly affair.

 

Bug Life cycle

Posted: September 3, 2012 in Testing Basics
Tags: bug life cycle, life of bug, various stages of bug

Bug Life cycle:

The bug life cycle is the journey of a defect from its identification to its closure. The life cycle varies from organization to organization.

The different states of a bug can be summarized as follows:

1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected
10. Closed

Description of Various Stages:

  1. New: Tester finds a defect and posts it with the status NEW. This means that the bug is not yet approved.
  2. Open: After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes the state to "OPEN".
  3. Assign: Once the lead changes the state to "OPEN", he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to "ASSIGN".
  4. Test: Once the developer fixes the bug, he assigns it to the testing team for the next round of testing. Before releasing the software with the bug fixed, he changes the state of the bug to "TEST". It specifies that the bug has been fixed and is released to the testing team.
  5. Deferred: If a valid NEW or ASSIGNED defect is decided to be fixed in upcoming releases instead of the current release it is DEFERRED. This defect is ASSIGNED when the time comes.
  6. Rejected: If the bug found is not valid, it is DROPPED / REJECTED. Note that the specific reason for this action needs to be given.
  7. Duplicate: If the bug is repeated twice or the two bugs mention the same concept of the bug, then one bug status is changed to “DUPLICATE”.
  8. Verified: If the Tester / Test Lead finds that the defect is indeed fixed and is no more of any concern, it is VERIFIED.
  9. Reopened: If the tester finds that the 'fixed' bug is in fact not fixed or only partially fixed, it is reassigned to the developer who 'fixed' it. A REOPENED bug needs to go through the cycle again.
  10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
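The stages above can be modeled as a small state machine. The set of allowed transitions below is one reasonable reading of the stages just described, not an industry standard; real trackers differ:

```python
# Allowed transitions between bug states; this mapping is one reasonable
# reading of the stages described above, not an industry standard.
TRANSITIONS = {
    "NEW":       {"OPEN", "REJECTED", "DUPLICATE", "DEFERRED"},
    "OPEN":      {"ASSIGN", "REJECTED", "DUPLICATE", "DEFERRED"},
    "ASSIGN":    {"TEST", "DEFERRED"},
    "TEST":      {"VERIFIED", "REOPENED"},
    "VERIFIED":  {"CLOSED"},
    "REOPENED":  {"ASSIGN"},
    "DEFERRED":  {"ASSIGN"},
    "REJECTED":  set(),      # terminal states
    "DUPLICATE": set(),
    "CLOSED":    set(),
}

class Bug:
    def __init__(self):
        self.state = "NEW"

    def move_to(self, new_state):
        # Refuse transitions that the life cycle does not allow.
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

# Happy path: the bug is found, fixed, verified, and closed.
bug = Bug()
for state in ["OPEN", "ASSIGN", "TEST", "VERIFIED", "CLOSED"]:
    bug.move_to(state)
```

Encoding the transitions explicitly makes invalid moves (say, closing a NEW bug without testing) fail loudly instead of silently corrupting the tracker.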

 


STLC (Software Test Life Cycle):

The software testing life cycle (STLC) identifies what test activities to carry out; i.e., the process of testing software in a well-planned and systematic way is known as the software testing life cycle. The STLC consists of six different phases.

1. Requirements Analysis:

In this phase testers analyze the customer requirements and work with developers during the design phase to see which requirements are testable.

2. Test Planning:

In this phase all planning about testing is done: what needs to be tested, how the testing will be done, the test strategy to be followed, the test environment, the test methodologies to be used, hardware and software availability, resources, risks, etc. During the planning stage, a team of senior-level people comes up with an outline of a high-level test plan. The test plan describes the following:

  • Scope of testing
  • Identification of resources
  • Identification of test strategies
  • Identification of risks
  • Time schedule

3. Test Case Development:

In this phase test cases are created by the tester. In the case of automation testing, test scripts are created instead. This phase also involves the following activities:

  • Revision & finalization of the matrix for functional validation
  • Revision & finalization of the testing environment
  • Review and baseline of test cases and scripts

4. Test Environment Setup:

This phase involves the following activities:

  • Understand the required architecture, environment set-up.
  • Prepare hardware and software requirement list
  • Finalize connectivity requirements
  • Setup test Environment and test data
  • Prepare environment setup checklist

5. Test Execution and Bug Reporting:

In this phase test cases are executed and defects are reported in a bug-tracking tool until test execution is complete and all defects are reported. After the developers fix the bugs reported by the tester, the tester conducts regression testing to ensure that each bug has been fixed and has not affected any other area of the software.

6. Test Cycle closure:

This phase involves the following activities:

  • Track the defects to closure
  • Evaluate cycle completion criteria based on time, test coverage, cost, software quality, and critical business objectives
  • Prepare test metrics based on the above parameters
  • Prepare the test closure report
  • Analyze test results to find the defect distribution by type and severity

Test Metrics

Posted: August 30, 2012 in Testing Basics
Tags: Fault coverage, Metrics, Tests, Variance

Test Metrics:

The objective of test metrics is to capture the planned and actual quantities of effort, time, and resources required to complete all phases of testing of the software project. Test metrics provide a measure of the percentage of the software tested at any point during testing.
Test metrics should cover basically 3 things:
1. Test coverage
2. Time for one test cycle
3. Convergence of testing

There are various types of test metrics, and different organizations use different ones.

Functional test coverage: It can be calculated as:
FC = Number of test requirements covered by test cases / Total number of test requirements
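Plugging hypothetical numbers into the formula:

```python
def functional_coverage(covered_requirements, total_requirements):
    # FC = test requirements covered by test cases / total test requirements
    return covered_requirements / total_requirements

# Hypothetical project: 45 of 50 test requirements have at least one test case.
fc = functional_coverage(45, 50)   # 0.9, i.e. 90% functional coverage
```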

Schedule Variance: Schedule variance indicates how far ahead of or behind schedule the testing is. It can be calculated as:
SV = (Actual End Date - Planned End Date) / (Planned End Date - Planned Start Date + 1) * 100

A high value of schedule variance may signify poor estimation; a low value may signify correct estimation and clear, well-understood requirements.
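The schedule variance formula can be computed directly from dates; the 10-day cycle below is hypothetical:

```python
from datetime import date

def schedule_variance(actual_end, planned_end, planned_start):
    # SV = (Actual End - Planned End) / (Planned End - Planned Start + 1) * 100
    slip_days = (actual_end - planned_end).days
    planned_days = (planned_end - planned_start).days + 1
    return slip_days / planned_days * 100

# Hypothetical cycle planned for 10 days (Sep 1-10) that ended 2 days late.
sv = schedule_variance(date(2012, 9, 12), date(2012, 9, 10), date(2012, 9, 1))
# sv is 20.0: testing ran 20% over the planned schedule
```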

Effort Variance: Effort may be measured in person-hours, person-days, or person-months. Effort variance is computed for all tasks completed during a period. It can be calculated as:
EV = (Actual Effort - Estimated Effort) / (Estimated Effort) * 100%

A high positive value of effort variance may signify optimistic estimation, changing business processes, a high learning curve, or a new technology and/or functional area.
A high negative value may signify pessimistic estimation, excessive buffering, an efficient and skilful project team, a high level of componentization and reusability, or clear plans and schedules.
A low value may signify accuracy in estimation, timely availability of resources, and no creeping requirements.
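The same formula with hypothetical effort figures:

```python
def effort_variance(actual_effort, estimated_effort):
    # EV = (Actual Effort - Estimated Effort) / Estimated Effort * 100%
    return (actual_effort - estimated_effort) / estimated_effort * 100

# Hypothetical task: estimated at 80 person-hours, actually took 92.
ev = effort_variance(92, 80)   # 15.0 -> effort overran the estimate by 15%
```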

Defect Age (in time): Defect age is used to calculate the time from introduction to detection.
Average Age = Sum of (Phase Detected - Phase Introduced) / Number of Defects
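With phases numbered in SDLC order, the average defect age follows directly; the defect data below is hypothetical:

```python
def average_defect_age(detected, introduced):
    # Average Age = sum(phase detected - phase introduced) / number of defects
    ages = [d - i for d, i in zip(detected, introduced)]
    return sum(ages) / len(ages)

# Hypothetical defects, with phases numbered 1 = requirements .. 5 = system test.
age = average_defect_age(detected=[4, 5, 5], introduced=[1, 2, 4])
# (3 + 3 + 1) / 3 defects, roughly 2.33 phases on average
```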

On-Time Delivery: This metric sheds light on the ability to meet customer commitments. On-time delivery may be tracked during the course of the project based on the actual delivery dates and the planned commitments for deliveries made during a period.
OTD = (No. of deliveries on time / Total no. of due deliveries) * 100

A low value of % on-time delivery may signify poor planning and tracking, delays on account of the customer, or incorrect estimation, or may point to a project risk having occurred.
A high value of % on-time delivery may signify good planning, tracking, and foresight with high responsiveness for immediate corrective action, a receptive customer, high commitment of the team, and good estimation.
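The OTD formula with hypothetical delivery counts:

```python
def on_time_delivery(deliveries_on_time, total_due_deliveries):
    # OTD = (No. of deliveries on time / Total no. of due deliveries) * 100
    return deliveries_on_time / total_due_deliveries * 100

# Hypothetical period: 9 of 10 committed deliveries went out on time.
otd = on_time_delivery(9, 10)   # 90.0 -> 90% on-time delivery
```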

Test Cost: This metric is used to find the resources consumed in testing.
TC = test cost vs. total system cost
It identifies the amount of resources used in the testing process.
