TYPES OF TESTING



 

Testing of software can be broadly classified into Black Box Testing and White Box Testing.

1.  Black Box Testing

 

1.1 FUNCTIONAL TESTING

 

In this type of testing, the software is tested against its functional requirements. Tests are written to check whether the application behaves as expected. Although functional testing is often done toward the end of the development cycle, it can, and should, be started much earlier: individual components and processes can be tested before functional testing of the entire system is possible. Functional testing covers how well the system executes the functions it is supposed to execute, including user commands, data manipulation, searches, business processes, user screens, and integrations. It covers the obvious surface-level functions as well as back-end operations (such as security and how upgrades affect the system).

 

1.2 STRESS TESTING

The application is tested against heavy load, such as complex numerical values, a large number of inputs, or a large number of queries, to check how much stress it can withstand. Stress testing deals with the quality of the application in its environment: the idea is to create an environment more demanding than the application would experience under normal workloads. This is the hardest and most complex category of testing to accomplish, and it requires a joint effort from all teams.

A test environment is established with many testing stations. At each station, a script exercises the system; these scripts are usually based on the regression suite. More and more stations are added, all simultaneously hammering on the system, until the system breaks. The system is repaired and the stress test is repeated until a level of stress is reached that is higher than expected at a customer site.

Race conditions and memory leaks are often found under stress testing. A race condition is a conflict between at least two tests: each test works correctly in isolation, but when the two are run in parallel, one or both fail. This is usually due to an incorrectly managed lock. A memory leak happens when a test leaves allocated memory behind and does not correctly return it to the memory allocation scheme. The test seems to run correctly, but after being exercised several times, available memory is reduced until the system fails.
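The race condition described above can be sketched in a few lines of Python; the counter and worker names are invented for illustration. Without the lock, the read-modify-write on the shared counter can interleave across threads and lose updates; with the lock, every increment survives:

```python
import threading

# A shared counter: incrementing it is a read-modify-write, so
# parallel increments can be lost unless access is serialized.
counter = 0
lock = threading.Lock()

def increment_unsafe(n):
    global counter
    for _ in range(n):
        counter += 1          # may interleave with another thread

def increment_safe(n):
    global counter
    for _ in range(n):
        with lock:            # the lock serializes the read-modify-write
            counter += 1

def run(worker, n_threads=4, n_iters=50_000):
    """Hammer the counter from several threads, as a stress test would."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n_iters,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# With the lock, every increment survives; without it, stress runs
# may (nondeterministically) report a smaller total.
assert run(increment_safe) == 4 * 50_000
```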

1.3 LOAD TESTING

The application is tested against heavy loads or inputs, such as testing of web sites, in order to find out at what point the web site or application fails, or at what point its performance degrades. Load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly. Note that load testing does not aim to break the system by overwhelming it, but instead tries to keep the system constantly humming like a well-oiled machine.

In the context of load testing, extreme importance should be given to having large datasets available for testing. Bugs simply do not surface unless you deal with very large entities: thousands of users in repositories such as LDAP/NIS/Active Directory, thousands of mail server mailboxes, multi-gigabyte tables in databases, deep file/directory hierarchies on file systems, and so on. Testers obviously need automated tools to generate these large datasets, and fortunately any scripting language worth its salt will do the job.
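As a minimal sketch of the "any scripting language will do" point, the following Python snippet generates thousands of synthetic user records (the field names are invented for illustration) and serializes them to CSV for loading into the system under test:

```python
import csv
import io
import random
import string

def make_users(n, seed=0):
    """Generate n synthetic user records for load testing."""
    rng = random.Random(seed)   # fixed seed so runs are reproducible
    rows = []
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        rows.append({"id": i, "login": name,
                     "mailbox": name + "@example.test"})
    return rows

def write_csv(rows):
    """Serialize the records to CSV, ready for a bulk import."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "login", "mailbox"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

users = make_users(10_000)
data = write_csv(users)
```

Scaled up, the same loop can populate mail stores, directories, or database tables with enough volume to surface the bugs described above.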

1.4 AD-HOC TESTING

This type of testing is done without any formal Test Plan or Test Case creation. Ad-hoc testing helps in deciding the scope and duration of the various other testing activities, and it also helps testers learn the application prior to starting any other testing. It is the least formal method of testing.

One of the best uses of ad-hoc testing is for discovery. Reading the requirements or specifications (if they exist) rarely gives you a good sense of how a program actually behaves. Even the user documentation may not capture the “look and feel” of a program. Ad-hoc testing can find holes in your test strategy, and can expose relationships between subsystems that would otherwise not be apparent. In this way, it serves as a tool for checking the completeness of your testing. Missing cases can be found and added to your testing arsenal. Finding new tests in this way can also be a sign that you should perform root cause analysis. Ask yourself or your test team, “What other tests of this class should we be running?” Defects found while doing ad-hoc testing are often examples of entire classes of forgotten test cases.

Another use for ad-hoc testing is to determine the priorities for your other testing activities. In our example program, Panorama may allow the user to sort photographs that are being displayed. If ad-hoc testing shows this to work well, the formal testing of this feature might be deferred until the problematic areas are completed. On the other hand, if ad-hoc testing of this sorting feature uncovers problems, then the formal testing might receive a higher priority.

1.5 EXPLORATORY TESTING

This testing is similar to ad-hoc testing and is done in order to learn and explore the application.

Exploratory software testing is a powerful and fun approach to testing. In some situations, it can be orders of magnitude more productive than scripted testing. At least unconsciously, testers perform exploratory testing at one time or another, yet it does not get much respect in our field. It can be considered “scientific thinking” in real time.

1.6 USABILITY TESTING

This testing is also called ‘Testing for User-Friendliness’. It is done when the user interface of the application is an important consideration and needs to suit a specific type of user. Usability testing is the process of working with end users, directly and indirectly, to assess how the user perceives a software package and how they interact with it. This process will uncover areas of difficulty for users as well as areas of strength. The goal of usability testing should be to limit and remove difficulties for users and to leverage areas of strength for maximum usability.

This testing should ideally involve direct user feedback, indirect feedback (observed behavior), and, when possible, computer-supported feedback. Computer-supported feedback is often (if not always) left out of this process. It can be as simple as a timer on a dialog to monitor how long it takes users to use the dialog, and counters to determine how often certain conditions occur (i.e. error messages, help messages, etc.). Often this involves trivial modifications to existing software, but it can result in a tremendous return on investment.

Ultimately, usability testing should result in changes to the delivered product in line with the discoveries made regarding usability. These changes should be directly related to real-world usability by average users. As much as possible, documentation should be written supporting the changes so that in the future, similar situations can be handled with ease.

1.7 SMOKE TESTING

This type of testing is also called sanity testing and is done to check whether the application is ready for further major testing and works properly at the most basic expected level. The name comes from a test of new or repaired equipment by turning it on: if it smokes, it doesn't work. The term also refers to testing the basic functions of software, and was originally coined in the manufacture of containers and pipes, where smoke was introduced to determine if there were any leaks. A common practice at Microsoft and some other shrink-wrap software companies is the "daily build and smoke test" process: every file is compiled, linked, and combined into an executable program every day, and the program is then put through a "smoke test," a relatively simple check to see whether the product "smokes" when it runs.
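A daily smoke test can be as simple as a checklist of cheap, critical checks whose failure rejects the build. The sketch below assumes a hypothetical application object exposing three operations considered critical; the names are invented for illustration:

```python
def smoke_test(app):
    """Run a handful of cheap checks; any failure rejects the build.

    `app` is a hypothetical application status object exposing the
    few operations considered critical for this product.
    """
    checks = [
        ("starts up",    lambda: app["started"]),
        ("opens a file", lambda: app["can_open"]),
        ("saves a file", lambda: app["can_save"]),
    ]
    failures = [name for name, check in checks if not check()]
    return failures  # an empty list means the build is worth testing

good_build = {"started": True, "can_open": True, "can_save": True}
bad_build  = {"started": True, "can_open": False, "can_save": True}
assert smoke_test(good_build) == []
assert smoke_test(bad_build) == ["opens a file"]
```

A failed check here corresponds to rejecting the build outright, not filing ordinary bug reports.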

1.8 RECOVERY TESTING

Recovery testing is done to check how quickly and how well the application can recover from any type of crash, hardware failure, etc. The type or extent of recovery is specified in the requirement specifications. It is basically testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

1.9 VOLUME TESTING

Volume testing is done against the efficiency of the application. A huge amount of data is processed through the application under test in order to check the extreme limitations of the system.

Volume testing, as its name implies, purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems can be transaction processing systems capturing real-time sales, or systems performing database updates and/or data retrieval.

Volume testing will seek to verify the physical and logical limits to a system's capacity and ascertain whether such limits are acceptable to meet the projected capacity of the organization’s business processing.

1.10 DOMAIN TESTING

Domain testing is the most frequently described test technique. Some authors write only about domain testing when they write about test design. The basic notion is that you take the huge space of possible tests of an individual variable and subdivide it into subsets that are (in some way) equivalent. Then you test a representative from each subset.
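The idea can be illustrated with a single variable. Assuming a hypothetical form field that accepts ages 18 through 65, the variable's domain splits into three equivalent subsets, and one representative is tested per subset:

```python
# Subdivide the domain of one variable -- an "age" field assumed, for
# illustration, to be valid from 18 to 65 -- into subsets whose members
# should all behave the same, then test one representative per subset.
def accepts_age(age):
    return 18 <= age <= 65

# One representative stands in for its whole subset.
subsets = {
    "below range": (10, False),
    "in range":    (40, True),
    "above range": (80, False),
}
for label, (representative, expected) in subsets.items():
    assert accepts_age(representative) == expected, label
```

Any other value in a subset would be expected to behave like its representative, which is what makes the subdivision useful.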

1.11 SCENARIO TESTING

 

Scenario tests are realistic, credible and motivating to stakeholders, challenging for the program and easy to evaluate for the tester. They provide meaningful combinations of functions and variables rather than the more artificial combinations you get with domain testing or combinatorial test design.

 

1.12 REGRESSION TESTING

 

Regression testing is a style of testing that focuses on retesting after changes are made. In traditional regression testing, we reuse the same tests (the regression tests). In risk-oriented regression testing, we test the same areas as before, but we use different (increasingly complex) tests. Traditional regression tests are often partially automated. These notes focus on traditional regression testing.

Regression testing attempts to mitigate two risks:

o        A change that was intended to fix a bug failed.

o        Some change had a side effect, unfixing an old bug or introducing a new bug.

 

Regression testing approaches differ in their focus. Common examples include:

Bug regression: We retest a specific bug that has been allegedly fixed.

Old fix regression testing: We retest several old bugs that were fixed, to see if they are back. (This is the classical notion of regression: the program has regressed to a bad state.)

General functional regression: We retest the product broadly, including areas that worked before, to see whether more recent changes have destabilized working code. (This is the typical scope of automated regression testing.)

Conversion or port testing: The program is ported to a new platform and a subset of the regression test suite is run to determine whether the port was successful. (Here, the main changes of interest might be in the new platform, rather than the modified old code.)

Configuration testing: The program is run with a new device or on a new version of the operating system or in conjunction with a new application. This is like port testing except that the underlying code hasn't been changed--only the external components that the software under test must interact with.

Localization testing: The program is modified to present its user interface in a different language and/or following a different set of cultural rules. Localization testing may involve several old tests (some of which have been modified to take into account the new language) along with several new (non-regression) tests.

Smoke testing (also known as build verification testing): A relatively small suite of tests is used to qualify a new build. Normally, the tester is asking whether any components are so obviously or badly broken that the build is not worth testing, whether some components are broken in obvious ways that suggest a corrupt build, or whether critical fixes that were the primary intent of the new build didn't work. The typical result of a failed smoke test is rejection of the build (testing of the build stops), not just a new set of bug reports.

1.13 USER ACCEPTANCE TESTING

 

In this type of testing, the software is handed over to the user in order to find out if the software meets the user expectations and works as it is expected to. In software development, user acceptance testing (UAT) - also called beta testing, application testing, and end user testing - is a phase of software development in which the software is tested in the "real world" by the intended audience. UAT can be done by in-house testing in which volunteers or paid test subjects use the software or, more typically for widely-distributed software, by making the test version available for downloading and free trial over the Web. The experiences of the early users are forwarded back to the developers who make final changes before releasing the software commercially.

 

1.14 ALPHA TESTING

 

In this type of testing, users are invited to the development center, where they use the application while the developers note every particular input or action carried out by the user. Any abnormal behavior of the system is noted and rectified by the developers.

 

1.15 BETA TESTING

 

In this type of testing, the software is distributed as a beta version to the users, and users test the application at their sites. As the users explore the software, any exception or defect that occurs is reported to the developers. Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

 

 

2.    White Box Testing

 

2.1 UNIT TESTING

 

The developer carries out unit testing in order to check whether a particular module or unit of code is working correctly. Unit testing comes at the very basic level, as it is carried out as and when a unit of code is developed or a particular piece of functionality is built. Unit testing deals with testing a unit as a whole. This may test the interaction of many functions, but confines the test within one unit; the exact scope of a unit is left to interpretation. This type of testing is driven by the architecture and implementation teams. Because only the details of the unit's interface are visible to the test, this focus is also called black-box testing of the unit. Limits that are global to a unit are tested here.

Supporting test code, sometimes called scaffolding, may be necessary to support an individual test. In the construction industry, scaffolding is a temporary, easy-to-assemble-and-disassemble frame placed around a building to facilitate its construction: the workers first build the scaffolding, then the building, and later the scaffolding is removed, exposing the completed building. Similarly, in software testing, one particular test may need some supporting software that establishes an environment around the test. Only when this environment is established can a correct evaluation of the test take place. The scaffolding software may establish state and values for data structures as well as provide dummy external functions for the test. Different scaffolding software may be needed from one test to another. Scaffolding software is rarely considered part of the system; sometimes it becomes larger than the system software being tested. Usually the scaffolding software is not of the same quality as the system software and is frequently quite fragile: a small change in the test may lead to much larger changes in the scaffolding.

Internal and unit testing can be automated with the help of coverage tools. A coverage tool analyzes the source code and generates a test that will execute every alternative thread of execution. It is still up to the programmer to combine these tests into meaningful cases to validate the result of each thread of execution. Typically, the coverage tool is used in a slightly different way: first it is used to augment the source by placing informational prints after each line of code; then the test suite is executed, generating an audit trail. This audit trail is analyzed to report the percentage of the total system code executed during the test suite. If the coverage is high and the untested source lines are of low impact to the system's overall quality, then no additional tests are required.
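The scaffolding idea can be sketched with Python's unittest module. The stub lambdas below are dummy external functions that establish a known environment around each test, standing in for a hypothetical network-backed quote service (all names here are invented for illustration):

```python
import unittest

def fetch_price(symbol, quote_service):
    """Unit under test: depends on an external quote service."""
    raw = quote_service(symbol)
    if raw is None:
        raise ValueError("no quote for " + symbol)
    return round(raw, 2)

class FetchPriceTest(unittest.TestCase):
    # The lambdas below are scaffolding: dummy external functions
    # standing in for the real service, so each test runs in a
    # fully controlled environment.
    def test_rounds_quote(self):
        stub = lambda symbol: 101.239
        self.assertEqual(fetch_price("ABC", stub), 101.24)

    def test_missing_quote_raises(self):
        stub = lambda symbol: None
        with self.assertRaises(ValueError):
            fetch_price("ABC", stub)

if __name__ == "__main__":
    unittest.main()
```

As the text notes, such scaffolding is disposable: it exists only to hold the unit in a testable position and is not shipped with the system.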

 

2.2 STATIC & DYNAMIC ANALYSIS

 

Static analysis involves going through the code in order to find out any possible defect in the code. Dynamic analysis involves executing the code and analyzing the output.

 

2.3 STATEMENT COVERAGE

 

In this type of testing, the code is executed in such a manner that every statement of the application is executed at least once. It helps in ensuring that all statements execute without any unexpected side effects.
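A toy statement-coverage tracer can be built on Python's sys.settrace, recording which lines of a function actually execute. In this sketch (the classify function is invented for illustration), one test input misses a statement, and a second input is needed to reach every statement:

```python
import sys

def classify(n):
    if n < 0:
        kind = "negative"
    else:
        kind = "non-negative"
    return kind

def run_with_coverage(func, *args):
    """Record which lines of `func` execute (a toy coverage tool)."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            # store line numbers relative to the function definition
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

# One call misses the "negative" statement; both calls together
# reach every statement of classify.
only_positive = run_with_coverage(classify, 5)
both = only_positive | run_with_coverage(classify, -5)
assert both > only_positive    # the second input covered new statements
```

Real coverage tools such as coverage.py work on the same principle, then report the percentage of statements reached by the whole suite.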

 

2.4 BRANCH COVERAGE

 

No software application can be written in a single continuous flow of code; at some point the code must branch in order to perform particular functionality. Branch coverage testing helps in validating all the branches in the code and making sure that no branch leads to abnormal behavior of the application.
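For example, a function with two decision points needs test inputs that take each branch both ways. The shipping_cost function below is invented for illustration:

```python
def shipping_cost(weight_kg, express):
    # Two branch points: the weight threshold and the express flag.
    if weight_kg > 20:
        base = 15.0
    else:
        base = 5.0
    if express:
        base *= 2
    return base

# Branch coverage needs both outcomes of each decision, which these
# four calls exercise between them.
assert shipping_cost(25, express=False) == 15.0   # heavy, standard
assert shipping_cost(10, express=False) == 5.0    # light, standard
assert shipping_cost(25, express=True)  == 30.0   # heavy, express
assert shipping_cost(10, express=True)  == 10.0   # light, express
```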

 

2.5 SECURITY TESTING

 

Security testing is carried out in order to find out how well the system can protect itself from unauthorized access, hacking, cracking, any damage to the application's code, etc. This type of testing requires sophisticated testing techniques.

 

2.6 MUTATION TESTING

 

A kind of testing in which small changes (mutations) are deliberately introduced into the code, and the existing tests are run against each mutated version. A mutant that causes a test to fail is "killed"; a mutant that survives all the tests reveals a gap in the test suite. Mutation testing thus measures how effectively the tests exercise the code, and helps identify where tests and coding strategy need strengthening.
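A hand-rolled sketch of the standard mutation-testing idea (real mutation tools automate the mutant generation): one operator is changed in a copy of the function, and a good test suite should pass on the original but fail on, i.e. "kill", the mutant. The function names are invented for illustration:

```python
# Original function and a mutant with one operator changed
# (>= mutated to >), as a mutation tool would generate.
def is_adult(age):
    return age >= 18

def is_adult_mutant(age):
    return age > 18          # the injected fault

def suite(impl):
    """A tiny test suite, parameterized by the implementation under test."""
    return impl(18) is True and impl(17) is False

# The suite passes on the original and "kills" the mutant, which is
# evidence the tests actually exercise the age-18 boundary.
assert suite(is_adult) is True
assert suite(is_adult_mutant) is False
```

Had the suite only checked ages 17 and 30, the mutant would have survived, exposing a missing boundary test.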

 

 

3. TESTING GENERAL INTERVIEW QUESTIONS

 

What is Acceptance Testing?
Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.

What is Accessibility Testing?
Verifying that a product is accessible to people with disabilities (e.g. impaired vision, hearing, or cognition).

 

What is Ad Hoc Testing?
A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

What is Agile Testing?
Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

What is Application Binary Interface (ABI)?
A specification defining requirements for portability of applications in binary form across different system platforms and environments.

What is Application Programming Interface (API)?
A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

What is Automated Software Quality (ASQ)?
The use of software tools, such as automated testing tools, to improve software quality.

What is Automated Testing?
Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
What is Backus-Naur Form?
A metalanguage used to formally describe the syntax of a language.

What is Basic Block?
A sequence of one or more consecutive, executable statements containing no branches.

What is Basis Path Testing?
A white box test case design technique that uses the algorithmic flow of the program to design tests.

What is Basis Set?
The set of tests derived using basis path testing.

What is Baseline?
The point at which some deliverable produced during the software engineering process is put under formal change control.
What will you do during your first day on the job?
What would you like to do five years from now?

Tell me about the worst boss you've ever had.

What are your greatest weaknesses?

What are your strengths?

What is a successful product?

What do you like about Windows?

What is good code?

What are basic, core, practices for a QA specialist?

What do you like about QA?

What has not worked well in your previous QA experience and what would you change?

How will you begin to improve the QA process?

What is the difference between QA and QC?

What is UML, and how is it used for testing?
What is Beta Testing?
Testing of a pre-release version of a software product, conducted by customers.

What is Binary Portability Testing?
Testing an executable application for portability across system platforms and environments, usually for conformation to an ABI specification.

What is Black Box Testing?
Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

What is Bottom Up Testing?
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

What is Boundary Testing?
Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
What is Bug?
A fault in a program which causes the program to perform in an unintended or unanticipated manner.

What is Boundary Value Analysis?
BVA is similar to Equivalence Partitioning but focuses on "corner cases": values at and just beyond the limits defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
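For the negative-100-to-positive-1000 example, the boundary cases translate directly into tests (the in_range function is invented for illustration):

```python
# The specification accepts values from -100 to 1000 inclusive;
# boundary value analysis tests just inside and just outside each edge.
def in_range(value):
    return -100 <= value <= 1000

cases = [(-101, False), (-100, True), (1000, True), (1001, False)]
for value, expected in cases:
    assert in_range(value) == expected, value
```

Off-by-one mistakes (e.g. writing < instead of <=) are exactly the bugs these four inputs catch.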

What is Branch Testing?
Testing in which all branches in the program source code are tested at least once.

What is Breadth Testing?
A test suite that exercises the full functionality of a product but does not test features in detail.

What is CAST?
Computer Aided Software Testing.
What is CMMI?
What do you like about computers?

Do you have a favourite QA book? More than one? Which ones? And why.

What is the responsibility of programmers vs QA?

What are the properties of a good requirement?

How do you test if you have minimal or no documentation about the product?

What are all the basic elements in a defect report?

Is "a fast database retrieval rate" a testable requirement?

What is software quality assurance?

What is the value of a testing group? How do you justify your work and budget?

What is the role of the test group vis-à-vis documentation, tech support, and so forth?

How much interaction with users should testers have, and why?

How should you learn about problems discovered in the field, and what should you learn from those problems?

What are the roles of glass-box and black-box testing tools?

What issues come up in test automation, and how do you manage them?
What is Capture/Replay Tool?
A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

What is CMM?
The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

What is Cause Effect Graph?
A graphical representation of inputs and the associated outputs effects which can be used to design test cases.

What is Code Complete?
Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

What is Code Coverage?
An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
What is Code Inspection?
A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

What is Code Walkthrough?
A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

What is Coding?
The generation of source code.

What is Compatibility Testing?
Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
What is Component?
A minimal software item for which a separate specification is available.

What is Component Testing?
See the question what is Unit Testing.

What is Concurrency Testing?
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

What is Conformance Testing?
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

What is Context Driven Testing?
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
What development model should programmers and the test group use?
How do you get programmers to build testability support into their code?

What is the role of a bug tracking system?

What are the key challenges of testing?

Have you ever completely tested any part of a product? How?

Have you done exploratory or specification-driven testing?

Should every business test its software the same way?

 

Discuss the economics of automation and the role of metrics in testing.

Describe components of a typical test plan, such as tools for interactive products and for database products, as well as cause-and-effect graphs and data-flow diagrams.

When have you had to focus on data integrity?

What are some of the typical bugs you encountered in your last assignment?

How do you prioritize testing tasks within a project?

How do you develop a test plan and schedule? Describe bottom-up and top-down approaches.

When should you begin test planning?

When should you begin testing?

What is Conversion Testing?
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

What is Cyclomatic Complexity?
A measure of the logical complexity of an algorithm, used in white-box testing.

What is Data Dictionary?
A database that contains definitions of all data items defined during analysis.

What is Data Flow Diagram?
A modeling notation that represents a functional decomposition of a system.

What is Data Driven Testing?
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
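A minimal data-driven harness in Python might look like this; the inline CSV and the add function are invented for illustration, standing in for the external file or spreadsheet a real suite would maintain:

```python
import csv
import io

# Externally maintained test data; here an inline CSV stands in for
# the file or spreadsheet a real data-driven suite would load.
TEST_DATA = """\
a,b,expected
2,3,5
-1,1,0
10,-4,6
"""

def add(a, b):
    return a + b

def run_data_driven(data):
    """Run one test action per data row; return the failing rows."""
    failures = []
    for row in csv.DictReader(io.StringIO(data)):
        a, b, expected = int(row["a"]), int(row["b"]), int(row["expected"])
        if add(a, b) != expected:
            failures.append(row)
    return failures

assert run_data_driven(TEST_DATA) == []
```

Adding a test case is then a matter of adding a row of data, with no new test code.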

What is Debugging?
The process of finding and removing the causes of software failures.

What is Defect?
Nonconformance to requirements or to the functional/program specification.

What is Dependency Testing?
Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

What is Depth Testing?
A test that exercises a feature of a product in full detail.

What is Dynamic Testing?
Testing software through executing it. See also Static Testing.

What is Emulator?
A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
What is Endurance Testing?
Checks for memory leaks or other problems that may occur with prolonged execution.

What is End-to-End testing?
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

What is Equivalence Class?
A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.

What is Equivalence Partitioning?
A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

What is Exhaustive Testing?
Testing which covers all combinations of input values and preconditions for an element of the software under test.
What is Functional Decomposition?
A technique used during planning, analysis and design; creates a functional hierarchy for the software.

What is Functional Specification?
A document that describes in detail the characteristics of the product with regard to its intended features.

What is Functional Testing?
Testing the features and operational behavior of a product to ensure they correspond to its specifications.
Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
See also What is Black Box Testing.

What is Glass Box Testing?
A synonym for White Box Testing.
Do you know of metrics that help you estimate the size of the testing effort?
How do you scope out the size of the testing effort?

How many hours a week should a tester work?

How should your staff be managed? How about your overtime?

How do you estimate staff requirements?

What do you do (with the project tasks) when the schedule fails?

How do you handle conflict with programmers?

How do you know when the product is tested well enough?

What characteristics would you seek in a candidate for test-group manager?

What do you think the role of test-group manager should be? Relative to senior management? Relative to other technical groups in the company? Relative to your staff?

How do your characteristics compare to the profile of the ideal manager that you just described?

How does your preferred work style work with the ideal test-manager role that you just described? What is different between the way you work and the role you described?

Who should you hire in a testing group and why?
What is Gorilla Testing?
Testing one particular module or piece of functionality heavily.

What is Gray Box Testing?
A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

What is High Order Tests?
Black-box tests conducted once the software has been integrated.

What is Independent Test Group (ITG)?
A group of people whose primary responsibility is software testing.

What is Inspection?
A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection).
What is Integration Testing?
Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

What is Installation Testing?
Confirms that the application under test installs (and uninstalls) correctly under the documented procedures and supported configurations, for example on a clean machine, as an upgrade, or with non-default options. (Note: recovery from unexpected events such as disk-space shortage or power loss is Recovery Testing, defined below.)

What is Load Testing?
See Performance Testing.

What is Localization Testing?
This term refers to testing software that has been adapted ("localized") for a specific locality, such as a translated user interface or locale-specific date and currency formats.

What is Loop Testing?
A white box testing technique that exercises program loops.
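As an illustrative sketch (plain Python, with an invented function), loop testing exercises a loop at its boundary iteration counts: zero passes, one pass, a typical count, and beyond the maximum:

```python
def sum_first_n(values, n):
    """Sum the first n elements of values (a simple loop under test)."""
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

# Loop testing exercises the loop at its boundaries.
assert sum_first_n([1, 2, 3], 0) == 0    # zero iterations
assert sum_first_n([1, 2, 3], 1) == 1    # exactly one iteration
assert sum_first_n([1, 2, 3], 2) == 3    # typical case
assert sum_first_n([1, 2, 3], 10) == 6   # n beyond the list length
```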
What is Metric?
A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as the number of bugs per line of code.

What is Monkey Testing?
Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.
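A minimal Python sketch of the idea, using an invented parse_age function: feed random, unstructured input and check only that nothing crashes.

```python
import random

def parse_age(text):
    """Toy function under test: returns an int age or None, never raises."""
    try:
        age = int(text)
    except (TypeError, ValueError):
        return None
    return age if 0 <= age <= 150 else None

# Monkey testing: throw random junk at the function and only
# verify that it does not crash or return a nonsensical type.
random.seed(0)
alphabet = "abc123!@# "
for _ in range(1000):
    junk = "".join(random.choice(alphabet) for _ in range(random.randint(0, 8)))
    result = parse_age(junk)                      # must not raise
    assert result is None or isinstance(result, int)

assert parse_age("42") == 42
```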

What is Negative Testing?
Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

What is Path Testing?
Testing in which all paths in the program source code are tested at least once.
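For example, a function with two independent decisions has up to four distinct paths, each needing at least one test. A hypothetical sketch:

```python
def classify(x):
    # Two decisions -> four distinct paths through the code.
    if x < 0:
        sign = "negative"
    else:
        sign = "non-negative"
    if x % 2 == 0:
        parity = "even"
    else:
        parity = "odd"
    return f"{sign} {parity}"

# One test per path, so every path executes at least once.
assert classify(-2) == "negative even"
assert classify(-1) == "negative odd"
assert classify(2) == "non-negative even"
assert classify(3) == "non-negative odd"
```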

What is Performance Testing?
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
What is Positive Testing?
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
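The two styles can be sketched in Python against an invented divide function: the positive test ("test to pass") checks a valid input; the negative test ("test to fail") checks that invalid input is rejected rather than silently accepted:

```python
def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("b must be non-zero")
    return a / b

# Positive test: valid input yields the expected result.
assert divide(10, 2) == 5.0

# Negative test: invalid input must raise, not return garbage.
try:
    divide(1, 0)
except ZeroDivisionError:
    rejected = True
else:
    rejected = False
assert rejected
```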

What is Quality Assurance?
All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

What is Quality Audit?
A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

What is Quality Circle?
A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
What is Quality Control?
The operational techniques and the activities used to fulfill and verify requirements of quality.

What is Quality Management?
That aspect of the overall management function that determines and implements the quality policy.

What is Quality Policy?
The overall intentions and direction of an organization as regards quality as formally expressed by top management.

What is Quality System?
The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

What is Race Condition?
A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
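A minimal Python sketch of the standard cure: a lock moderating simultaneous access so a shared counter stays correct (removing the lock reintroduces the race and the final count can come up short):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:           # moderates simultaneous read-modify-write access
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Deterministic only because access to the shared resource is moderated.
assert counter == 40_000
```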

What is Ramp Testing?
Continuously raising an input signal until the system breaks down.
What is Recovery Testing?
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

What is Regression Testing?
Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

What is Release Candidate?
A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

What is Sanity Testing?
Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.

What is Scalability Testing?
Performance testing focused on ensuring the application under test gracefully handles increases in work load.
What is the role of metrics in comparing staff performance in human resources management?
How do you estimate staff requirements?

What do you do (with the project staff) when the schedule fails?

Describe some staff conflicts you've handled.

Why did you ever become involved in QA/testing?

What is the difference between testing and Quality Assurance?

What was a problem you had in your previous assignment (testing if possible)? How did you resolve it?

What are two of your strengths that you will bring to our QA/testing team?

What do you like most about Quality Assurance/Testing?

What do you like least about Quality Assurance/Testing?

What is the Waterfall Development Method and do you agree with all the steps?

What is the V-Model Development Method and do you agree with this model?
What is Security Testing?
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

What is Smoke Testing?
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
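As a rough sketch, a smoke test only asks whether the major functions respond at all; deeper verification is left to later test phases. The app handle and checks below are invented stand-ins:

```python
def smoke_test(app):
    """Quick-and-dirty check that the major functions respond at all."""
    checks = {
        "starts": app.get("status") == "up",
        "has_homepage": bool(app.get("homepage")),
        "db_reachable": app.get("db") is not None,
    }
    return all(checks.values()), checks

# 'app' here is a stand-in dict, not a real application handle.
healthy_app = {"status": "up", "homepage": "<html>", "db": object()}
ok, details = smoke_test(healthy_app)
assert ok
```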

What is Soak Testing?
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

What is Software Requirements Specification?
A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

What is Software Testing?
A set of activities conducted with the intent of finding errors in software.
What is Static Analysis?
Analysis of a program carried out without executing the program.

What is Static Analyzer?
A tool that carries out static analysis.

What is Static Testing?
Analysis of a program carried out without executing the program.

What is Storage Testing?
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

What is Stress Testing?
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
What is Structural Testing?
Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.

What is System Testing?
Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

What is Testability?
The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

What is Testing?
The process of exercising software to verify that it satisfies specified requirements and to detect errors.
The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
What is Test Automation?
It is the same as Automated Testing.

What is Test Bed?
An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
What is Test Case?
Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements testing, test steps, verification steps, prerequisites, outputs, test environment, etc.
A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
What is Test Driven Development?
A testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.
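A small test-first sketch (the slugify function and its tests are invented): the tests are written before the production code, initially fail, and then just enough code is written to make them pass:

```python
import unittest

# Red: these tests are written first and fail until slugify exists.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Trim Me  "), "trim-me")

# Green: write just enough production code to make both tests pass.
def slugify(title):
    return "-".join(title.strip().lower().split())

# Run the suite programmatically; both tests should now pass.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```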

What is Test Driver?
A program or test tool used to execute tests. Also known as a Test Harness.

What is Test Environment?
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

What is Test First Design?
Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
What is a "Good Tester"?

Could you tell me two things you did in your previous assignment (QA/Testing related hopefully) that you are proud of?

List 5 words that best describe your strengths.

What are two of your weaknesses?

What methodologies have you used to develop test cases?

In an application currently in production, one module of code is being modified. Is it necessary to re- test the whole application or is it enough to just test functionality associated with that module?

How do you go about going into a new organization? How do you assimilate?

Define the following and explain their usefulness: Change Management, Configuration Management, Version Control, and Defect Tracking.

What is ISO 9000? Have you ever been in an ISO shop?

When are you done testing?

What is the difference between a test strategy and a test plan?

What is ISO 9003? Why is it important
What is Test Harness?
A program or test tool used to execute tests. Also known as a Test Driver.

What is Test Plan?
A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.

What is Test Procedure?
A document providing detailed instructions for the execution of one or more test cases.

What is Test Script?
Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

What is Test Specification?
A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results, and execution conditions for the associated tests.
What is Test Suite?
A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

What is Test Tools?
Computer programs used in the testing of a system, a component of the system, or its documentation.

What is Thread Testing?
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

What is Top Down Testing?
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
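A hypothetical sketch of the stub idea: the top-level component is tested first against a stand-in for a lower-level component that has not been integrated yet. Names are invented for illustration.

```python
def fetch_price_stub(item):
    # Stub standing in for the not-yet-integrated pricing component.
    return {"book": 10.0}.get(item, 0.0)

def checkout_total(items, fetch_price=fetch_price_stub):
    # Top-level component under test; it depends on a lower-level
    # pricing component that is simulated by the stub for now.
    return sum(fetch_price(i) for i in items)

# Exercise the top component against the stub; the real fetch_price
# replaces the stub once it has been integrated and tested.
assert checkout_total(["book", "book"]) == 20.0
```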
What is Total Quality Management?
A company commitment to develop a process that achieves high quality product and customer satisfaction.

What is Traceability Matrix?
A document showing the relationship between Test Requirements and Test Cases.
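Conceptually, the matrix can be as simple as a mapping from requirements to the test cases that cover them, which makes uncovered requirements easy to spot. The requirement and test-case IDs below are invented:

```python
# A toy traceability matrix: requirement -> covering test cases.
matrix = {
    "REQ-001 login":          ["TC-01", "TC-02"],
    "REQ-002 logout":         ["TC-03"],
    "REQ-003 password reset": [],          # no coverage yet
}

# Requirements with no covering test case stand out immediately.
uncovered = [req for req, cases in matrix.items() if not cases]
assert uncovered == ["REQ-003 password reset"]
```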

What is Usability Testing?
Testing the ease with which users can learn and use a product.

What is Use Case?
The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

What is Unit Testing?
Testing of individual software components.
What is Validation?
The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection, and reviewing.
What is Verification?
The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection, and reviewing.
What is Volume Testing?
Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

What is Walkthrough?
A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.
What is White Box Testing?
Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.

What is Workflow Testing?
Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.
What are ISO standards? Why are they important?
What is IEEE 829? (This standard is important for Software Test Documentation-Why?)

What is IEEE? Why is it important?

Do you support automated testing? Why?

We have a testing assignment that is time-driven. Do you think automated tests are the best solution?

What is your experience with change control? Our development team has only 10 members. Do you think managing change is such a big deal for us?

Are reusable test cases a big plus of automated testing? Explain why.

Can you build a good audit trail using Compuware's QACenter products? Explain why.

How important is Change Management in today's computing environments?

Do you think tools are required for managing change? Explain, and please list some tools/practices that can help you manage change.

We believe in ad-hoc software processes for projects. Do you agree with this? Please explain your answer.

When is a good time for system testing?
Are regression tests required or do you feel there is a better use for resources?

Our software designers use UML for modeling applications. Based on their use cases, we would like to plan a test strategy. Do you agree with this approach, or would this mean more effort for the testers?


Tell me about a difficult time you had at work and how you worked through it.


Give me an example of something you tried at work but did not work out so you had to go at things another way.

 

How can one file-compare future-dated output files from a program which has changed against the baseline run which used the current date for input? The client does not want to mask dates on the output files to allow compares.
Test Automation
What automating testing tools are you familiar with?
How did you use automating testing tools in your job?

Describe some problem that you had with automating testing tool.

How do you plan test automation?

Can test automation improve test effectiveness?

What is data - driven automation?

What are the main attributes of test automation?

Does automation replace manual testing?

How will you choose a tool for test automation?

How will you evaluate the tool for test automation?

What are main benefits of test automation?

What could go wrong with test automation?

How would you describe testing activities?

Which testing activities would you want to automate?

4. WINRUNNER INTERVIEW QUESTIONS

 

4.1 How have you used WinRunner in your project? - I have been using WinRunner to create automated scripts for GUI, functional, and regression testing of the AUT.

 

4.2 Explain WinRunner testing process?

The WinRunner testing process involves six main stages:

o        Create GUI Map File so that WinRunner can recognize the GUI objects in the application being tested

o        Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.

o        Debug Test: run tests in Debug mode to make sure they run smoothly

o        Run Tests: run tests in Verify mode to test your application.

o        View Results: determine the success or failure of the tests.

o        Report Defects: If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.

4.3 What is contained in the GUI map?

 

WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file has a logical name and a physical description. There are 2 types of GUI Map files. Global GUI Map file: a single GUI Map file for the entire application. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.

 

4.4 How does WinRunner recognize objects on the application?

 

WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested.

4.5 Have you created test scripts and what is contained in the test scripts? 

 

Yes, I have created test scripts. A test script contains statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner's visual programming tool, the Function Generator.

 

4.6 How does WinRunner evaluate test results?

 

Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.

5. LOADRUNNER INTERVIEW QUESTIONS

 

5.1 What is load testing? - Load testing checks whether the application works correctly under the loads that result from a large number of simultaneous users and transactions, and determines whether it can handle peak usage periods.

5.2 What is Performance testing? - Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi-user environment to determine the effect of multiple transactions on the timing of a single transaction.

1.    Did you use LoadRunner? What version? - Yes. Version 7.2.

2.    Explain the Load testing process? -
Step 1: Planning the test. We develop a clearly defined test plan to ensure the test scenarios we develop will accomplish the load-testing objectives.
Step 2: Creating Vusers. We create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.
Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using the LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve, and LoadRunner automatically builds the scenario for us.
Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the test run, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.

3.    When do you do load and performance Testing? - We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold many hundreds and thousands of users, etc. This is when we do load and performance testing.

4.    What are the components of LoadRunner? - The components of LoadRunner are The Virtual User Generator, Controller, and the Agent process, LoadRunner Analysis and Monitoring, LoadRunner Books Online.

5.    What Component of LoadRunner would you use to record a Script? - The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.

6.    What Component of LoadRunner would you use to play Back the script in multi user mode? - The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a vuser script is executed by a number of vusers in a group.

7.    What is a rendezvous point? - You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.

8.    What is a scenario? - A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.

9.    Explain the recording mode for web Vuser script? - We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: Monitor the communication between the application and the server; Generate the required function calls; and Insert the generated function calls into a Vuser script.

10.Why do you create parameters? - Parameters are like script variables. They are used to vary input to the server and to emulate real users. Different sets of data are sent to the server each time the script is run. This better simulates the usage model for more accurate testing from the Controller; one script can emulate many different users on the system.

11.What is correlation? Explain the difference between automatic correlation and manual correlation? - Correlation is used to obtain data which are unique for each run of the script and which are generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also optimizes the code (to avoid nested queries). Automatic correlation is where we set some rules for correlation; it can be application-server specific, and values are replaced by data created by these rules. In manual correlation, we scan for the value we want to correlate and use create correlation to correlate it.

12.How do you find out where correlation is required? Give a few examples from your projects? - Two ways: first, we can scan for correlations and see the list of values which can be correlated; from this we can pick a value to be correlated. Secondly, we can record two scripts and compare them; we can look at the difference file to see the values which need to be correlated. In my project, there was a unique id developed for each customer: the Insurance Number. It was generated automatically, it was sequential, and the value was unique. I had to correlate this value in order to avoid errors while running my script. I did this using scan for correlation.

13.Where do you set automatic correlation options? - Automatic correlation from web point of view can be set in recording options and correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for database can be done using show output window and scan for correlation and picking the correlate query tab and choose which query value we want to correlate. If we know the specific value to be correlated, we just do create correlation for the value and specify how the value to be created.

14.What is a function to capture dynamic values in the web Vuser script? - The web_reg_save_param function saves dynamic data information to a parameter.

15.When do you disable log in Virtual User Generator? When do you choose standard and extended logs? - Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard Log Option: when you select Standard log, it creates a standard log of the functions and messages sent during script execution, to use for debugging. Disable this option for large load testing scenarios; when you copy a script to a scenario, logging is automatically disabled. Extended Log Option: select Extended log to create an extended log, including warnings and other messages. Disable this option for large load testing scenarios; when you copy a script to a scenario, logging is automatically disabled. We can specify which additional information should be added to the extended log using the Extended Log options.

16.How do you debug a LoadRunner script? - VuGen contains two options to help debug Vuser scripts: the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within our script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.

17.How do you write user-defined functions in LR? Give me a few functions you wrote in your previous project? - Before we create a user-defined function, we need to create the external library (DLL) containing the function and add this library to the VuGen bin directory. Once the library is added, we assign the user-defined function as a parameter. The function should have the following format: __declspec (dllexport) char* <function name>(char*, char*). GetVersion, GetCurrentTime, and GetPltform are some of the user-defined functions used in my earlier project.

18.What are the changes you can make in run-time settings? - The Run Time Settings we can change are: a) Pacing - includes the iteration count. b) Log - options for Disable Logging, Standard Log, and Extended Log. c) Think Time - two options: Ignore think time and Replay think time. d) General - set whether Vusers run as a process or as multithreading, and whether to mark each step as a transaction.
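The parameterization and correlation ideas above can be sketched in plain Python (the response text, field names, and URLs are invented; LoadRunner automates the same pattern with parameters and correlation functions such as web_reg_save_param):

```python
import re

# Parameterization: run the same script body once per data row,
# emulating different users.
test_data = [{"user": "alice"}, {"user": "bob"}]

# Correlation: capture a dynamic server value and reuse it in the
# next request instead of replaying the stale recorded one.
response = "<input name='session_id' value='A1B2C3'>"
session_id = re.search(r"value='([^']+)'", response).group(1)

requests = [f"GET /account?user={row['user']}&session_id={session_id}"
            for row in test_data]

assert session_id == "A1B2C3"
assert requests[0] == "GET /account?user=alice&session_id=A1B2C3"
```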

6. SQA (Software Quality Assurance) INTERVIEW QUESTIONS

 

6.1. The top management felt that, when there were changes in the technology being used, development schedules, etc., it was a waste of time to update the Test Plan. Instead, they emphasized that you should put your time into testing rather than working on the test plan. Your Project Manager asked for your opinion. You argued that the Test Plan is very important and that you need to update your test plan from time to time; it's not a waste of time, and testing activities are more effective when your plan is clear. Using some metrics, how would you support your argument that the test plan should be kept consistently updated?

 

1.    The QAI is starting a project to put the CSTE certification online. They will use an automated process for recording candidate information, scheduling candidates for exams, keeping track of results and sending out certificates. Write a brief test plan for this new project.

2.    The project had a very high cost of testing. After looking into it in detail, someone found out that the testers were spending their time on software that doesn't have too many defects. How will you verify that this is correct?

3.    What are the disadvantages of overtesting?

4.    What happens to the test plan if the application has a functionality not mentioned in the requirements?

5.    You are given two scenarios to test. Scenario 1 has only one terminal for entry and processing whereas scenario 2 has several terminals where the data input can be made. Assuming that the processing work is the same, what would be the specific tests that you would perform in Scenario 2, which you would not carry on Scenario 1?

6.    Your customer does not have experience in writing Acceptance Test Plan. How will you do that in coordination with customer? What will be the contents of Acceptance Test Plan?

7.    How do you know when to stop testing?

8.    What can you do if the requirements are changing continuously?

9.    What is the need for Test Planning?

10.What are the various status reports you will generate for Developers and Senior Management?

11.Define and explain any three aspects of code review?

12.Why do you need test planning?

13.Explain 5 risks in an e-commerce project. Identify the personnel that must be involved in the risk analysis of a project and describe their duties. How will you prioritize the risks?

14.What are the various status reports that you need generate for Developers and Senior Management?

15.You have been asked to design a Defect Tracking system. Think about the fields you would specify in the defect tracking system?

16.Write a sample Test Policy?

17.Explain the various types of testing after arranging them in a chronological order?

18.Explain what test tools you will need for client-server testing and why?

19.Explain what test tools you will need for Web app testing and why?

20.Explain pros and cons of testing done development team and testing by an independent team?

21.Differentiate Validation and Verification?

22.Explain Stress, Load and Performance testing?

23.Describe automated capture/playback tools and list their benefits?

24.How can software QA processes be implemented without stifling productivity?

1.    How is testing affected by object-oriented designs?

2.    What is extreme programming and what does it have to do with testing?

3.    Write a test transaction for a scenario where a 6.2% tax deduction has to be applied to the first $62,000 of income.
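One way to design such test transactions is with boundary-value cases around the $62,000 cap. The sketch below assumes a simple capped-deduction rule; the function name, rounding, and test values are illustrative, not part of any stated specification:

```python
# Assumed rule: a 6.2% deduction applies only to the first $62,000
# of income, so the deduction is capped at 62000 * 0.062 = 3844.0.
RATE = 0.062
CAP = 62_000.0

def tax_deduction(income: float) -> float:
    """Return the deduction for a given income (name is illustrative)."""
    return round(min(income, CAP) * RATE, 2)

# Boundary-value test transactions around the $62,000 threshold:
assert tax_deduction(0.0) == 0.0            # lower boundary
assert tax_deduction(61_999.99) == 3844.0   # just below the cap
assert tax_deduction(62_000.0) == 3844.0    # exactly at the cap
assert tax_deduction(62_000.01) == 3844.0   # just above the cap
assert tax_deduction(100_000.0) == 3844.0   # well above the cap
```

The interesting transactions cluster at the threshold itself, since an off-by-one or wrong comparison operator in the implementation would only show up there.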

4.    What would be the Test Objective for Unit Testing? What are the quality measurements to assure that unit testing is complete?

5.    Prepare a checklist for the developers on Unit Testing before the application comes to testing department.

6.    Draw a pictorial diagram of a report you would create for developers to determine project status.

7.    Draw a pictorial diagram of a report you would create for users and management to determine project status.

8.    What 3 tools would you purchase for your company for use in testing? Justify the need?

9.    Put the following concepts in order and provide a brief description of each:

o        system testing

o        acceptance testing

o        unit testing

o        integration testing

o        benefits realization testing

10.  What are the two primary goals of testing?

11.  If your company is going to conduct a review meeting, who should be on the review committee and why?

12.  Name any three attributes that will impact the Testing Process.

13.  What activity is done in Acceptance Testing that is not done in System Testing?

14.  You are a tester for testing a large system. The system data model is very large, with many attributes and many inter-dependencies between the fields. What steps would you use to test the system, and how would those steps affect the test plan?

15.  Explain and provide examples for the following black box techniques:

o        Boundary Value testing

o        Equivalence testing

o        Error Guessing
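As a quick illustration of these three techniques, consider a hypothetical input field that accepts ages 18 through 60 inclusive; the range, names, and values below are invented for the sketch:

```python
# Hypothetical validation rule for an age field accepting 18..60 inclusive.
LOW, HIGH = 18, 60

def is_valid_age(age: int) -> bool:
    return LOW <= age <= HIGH

# Boundary Value testing: probe the edges of the valid range.
boundary_cases = [17, 18, 19, 59, 60, 61]
assert [is_valid_age(a) for a in boundary_cases] == [
    False, True, True, True, True, False]

# Equivalence testing: one representative per partition is assumed
# to stand for the whole class.
assert not is_valid_age(5)     # partition: below range
assert is_valid_age(40)        # partition: within range
assert not is_valid_age(99)    # partition: above range

# Error Guessing: inputs experience suggests are likely to break code,
# e.g. negative numbers or extreme values.
assert not is_valid_age(-1)
assert not is_valid_age(10**9)
```

Equivalence testing keeps the suite small (one value per class), while boundary value testing concentrates cases where implementations most often go wrong.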

16.  What are the product standards for the following?

o        Test Plan

o        Test Script and Test Report

17.  You are the test manager starting on system testing. The development team says that, due to a change in the requirements, they will be able to deliver the system for SQA 5 days past the deadline. You cannot change the resources (work hours, days, or test tools). What steps will you take to be able to finish the testing in time?

18.  Your company is about to roll out an e-commerce application. It's not possible to test the application on all types of browsers on all platforms and operating systems. What steps would you take in the testing environment to reduce the business and commercial risks?

19.  In your organization, developers are delivering code for system testing without performing unit testing. Give an example of a test policy:

o        Policy statement

o        Methodology

o        Measurement

1.    Testers in your organization are performing tests on deliverables even after significant defects have been found. This has resulted in unnecessary testing of little value, because re-testing needs to be done after the defects have been rectified. You are going to update the test plan with recommendations on when to halt testing. What recommendations are you going to make?

2.    How do you measure:

o        Test Effectiveness

o        Test Efficiency

3.    You have found that senior testers are making more mistakes than junior testers; you need to communicate this to the senior tester. Also, you don't want to lose this tester. How should one go about constructive criticism?

4.    You are assigned to be the test lead for a new program that will automate take-offs and landings at an airport. How would you write a test strategy for this new program?

 

 

7. TEST AUTOMATION INTERVIEW QUESTIONS

 

1. What automated testing tools are you familiar with?
2. How have you used automated testing tools in your job?
3. Describe a problem you have had with an automated testing tool.
4. How do you plan test automation?
5. Can test automation improve test effectiveness?

6. What is data-driven automation?
7. What are the main attributes of test automation?
8. Does automation replace manual testing?
9. How will you choose a tool for test automation?
10. How will you evaluate a tool for test automation?
11. What are the main benefits of test automation?
12. What could go wrong with test automation?
13. How would you describe testing activities?
14. What testing activities might you want to automate?
15. Describe common problems of test automation.
16. What types of scripting techniques for test automation do you know?
17. What are the principles of good test scripts for automation?
18. What tools are available for support of testing during software development life cycle?
19. Can the activities of test case design be automated?
20. What are the limitations of automating software testing?
21. What skills are needed to be a good software test automator?
22. How do you determine whether a tool works well with your existing system?
23. What are some of the common misconceptions when implementing an automated testing tool for the first time?

 

 

8. CERTIFICATIONS

 

8.1 International Software Testing Qualifications Board

 

The ISTQB (International Software Testing Qualifications Board) was founded in Edinburgh in November 2002. The EOQ-SG (European Organisation for Quality – Software Group) is the legal entity acting as the umbrella organization for the ISTQB.

The ISTQB is responsible for the international qualification scheme called "ISTQB Certified Tester". The qualifications are based on a syllabus, and there is a hierarchy of qualifications and guidelines for accreditation and examination. The ISTQB Foundation Level exam replaced the existing Information Systems Examination Board (ISEB) Foundation Exam as of 2006-06-01.

It is the ISTQB's role to support a single, universally accepted, international qualification scheme, aimed at software and system testing professionals, by providing the core syllabi and by setting guidelines for accreditation and examination for national boards.

The contents of each syllabus are taught as courses by training providers, which have been accredited by national boards. They are globally marketed under the brand name "ISTQB Certified Tester". Each course is concluded by an examination covering the contents of the syllabus. After the examination, each successful participant receives the "ISTQB Certified Tester" certificate (or the local variant with the added "ISTQB compliant" logo).

The accreditation process and certification are regulated by accreditation and certification regulations of the national boards in their various valid versions.

 

For MORE DETAILS

 

ISTQB Official Website

Syllabus (Version 2007)

8.2 Certified Software Tester (CSTE) certification

 

Certified Software Tester (CSTE) certification is a formal recognition of a level of proficiency in the software testing industry. The recipient is acknowledged as having an overall comprehension of the Common Body of Knowledge (CBOK) for the Software Testing Profession.

Inherent Benefits

For the Individual

·         CSTE certification is proof that you've mastered a basic skill set recognized worldwide in the Testing arena.

·         CSTE certification can result in more rapid career advancement.

·         Results in greater acceptance in the role of an advisor to upper management.

·         Assists individuals in improving and enhancing their organization's software testing programs.

·         Motivates personnel having software-testing responsibilities to maintain their professional competency.

For the Organization

·         CSTE is expected to be a 'change agent', someone who can change the culture and work habits of individuals to make quality in software testing happen.

·         Aids organizations in selecting and promoting qualified individuals

·         Demonstrates an individual's willingness to improve professionally.

·         Defines the tasks (skill domains) associated with software testing duties in order to evaluate skill mastery.

·         Acknowledges attainment of an acceptable standard of professional competency.

For further details on the Software Certification Program, visit www.softwarecertifications.org or write to [email protected]

For MORE DETAILS

8.3 Certified Software Test Professional (CSTP)

 

The International Institute for Software Testing (IIST) has been offering the Certified Software Test Professional (CSTP) certification since 1999. Currently there are thousands of people at different stages in the CSTP program.

CSTP is an education-based certification, based on a Body of Knowledge that covers areas essential for every test professional to effectively perform their job in testing projects.

Objectives of the CSTP certification

·         Help individuals develop their software testing skills through formal education

·         Establish a common skill set for software testing professionals according to a well-defined Body of Knowledge

·         Create a pool of qualified software testing professionals

·         Prepare candidates for a wider range of software testing assignments

·         Complement company in-house and on-the-job training programs

·         Provide professional recognition and career enhancement

For MORE DETAILS

 

8.4 Certified Test Manager (CTM) certification

 

The International Institute for Software Testing (IIST) has been offering the Certified Software Test Professional (CSTP) certification since 1999. Currently there are thousands of people at different stages in the CSTP program. Although CSTP has been serving the purpose of establishing a foundation of software testing and providing test professionals with the skill and knowledge necessary to perform different test activities, a gap still exists in the management skills required by test managers and test leads to effectively manage the test process, the test project and the test organization. The Certified Test Manager (CTM) certification has been created to fill this gap. CTM is based on the Test Management Body of Knowledge (TMBOK) developed by IIST through its Advisory Board.

Objectives of the CTM certification

The CTM certification was developed to fill this gap in the management skills required by test managers and test leads. Specifically, CTM aims at achieving the following objectives:

  • Help individuals develop their test management skills through formal education
  • Establish a common skill set for software test managers and test leads based on a well-defined Test Management Body of Knowledge (TMBOK)
  • Create a pool of qualified software test managers
  • Prepare test professionals, especially those who have achieved the Certified Software Test Professional (CSTP) designation, for management and lead positions in software testing projects
  • Provide professional recognition and career enhancement for those who manage test projects.

For MORE DETAILS

8.5 Certified Software Quality Analyst (CSQA) certification

 

Acquiring the designation of Certified Software Quality Analyst (CSQA) indicates a professional level of competence in the principles and practices of quality assurance in the IT profession.

Inherent Benefits


For the Individual

  • The certification proves helpful in initiating new projects and software process improvement
  • It helps in evaluating deployment options, including ROI, with a holistic perspective
  • It helps in understanding the communication skills required for interfacing with management
  • CSQAs contribute to the QA and QC functions, the SEPG (Software Engineering Process Group), and metrics-based process management

For the Organization

  • Enables management to distinguish CSQAs as professionals who can act as change agents for new or existing initiatives.
  • Helps in weeding out inefficiencies in the system to minimize cost and increase ROI.
  • Helps in identifying a competent workforce with the ability to drive quantified process improvement initiatives, using approaches like CMMI®, Six Sigma.
  • In the increasingly competitive scenario of the IT industry, Quality is the only sustainable advantage for an organization or industry sector. Organizations looking for robust processes or performance management of processes (performance scorecard), require CSQAs who are crucial for the success of this journey.
  • The CSQA examination is designed to identify an individual's capability to assist the management in improving the quality of information systems, evaluate his/her ability to apply quality assurance knowledge to practice, and provide a foundation for granting professional recognition.


For further details on the Software Certification Program, visit www.softwarecertifications.org or write to [email protected]

For MORE DETAILS

 

8.6 Certified Software Project Manager (CSPM) certification

 

The need for improved and more reliable Software Project Management calls for professionals who can effectively design, manage, test, and monitor the status of software projects. Organizations now demand high quality software, delivered on time and within budget.
Certification is recommended as a means to define the Common Body of Knowledge for the practice of software project management, and to evaluate an individual's ability to apply that knowledge to practice.

Objectives and Benefits

The CSPM program is intended to establish standards for initial qualification, and continuing improvement of professional competence. This helps to:

Define the tasks (skill domains) associated with software project management activities in order to evaluate mastery of these activities

Demonstrate an individual’s willingness to improve professionally

Acknowledge attainment of an acceptable standard of professional competency

Aid organizations in selecting and promoting qualified individuals

Motivate personnel having software project management responsibilities to maintain their professional competency

Assist individuals in improving and enhancing their organization's software project management programs (i.e., provide a mechanism to lead a professional)

 

 

Qualifications

Recipients of the CSPM designation will be those who demonstrate experience and knowledge in matters related to software project management.
The assessment is designed to certify candidates who:

Have three years of current software project management experience, OR two years of current software project management work experience with a bachelor's degree from an accredited college-level institution,

 

AND

Can demonstrate proficiency in practicing software project management.

 



For any further details on the Software Certification Program please visit www.softwarecertifications.org or write to us at [email protected]

For MORE DETAILS

 

8.7 Certified Software Process Engineer (CSPE) certification

 

The Certified Software Process Engineer (CSPE) certification aims at developing professionals who can be part of the process improvement team and can support the Quality Head/Process Improvement team to develop software processes, track and implement process improvement suggestions, and analyze and solve quality problems. The CSPE certification can be taken by fresh graduates and by individuals with work experience who aspire to develop skills and knowledge in quality tools and processes and to join an organization's process improvement initiative.

Since the mid-1980s, more and more software organizations have started using models and standards to support and improve their software engineering activities. Most process improvement programs have achieved positive results. However, many have died a silent death for lack of experienced professionals participating in these process improvement initiatives. CSPE certification is a step toward bridging the gap between the demand for and the supply of qualified software process improvement professionals.

CSPE certification demonstrates that the certified individual has the knowledge to:

  • Understand software development lifecycles
  • Understand what software process and software process improvement are, and how to implement them in an organization
  • Develop and implement software development and maintenance processes and methods
  • Understand the basic concepts related to software quality
  • Understand and use software review and testing techniques
  • Understand basics of software project management, team organization and project scheduling concepts
  • Understand the importance of measurement in software project management
  • Define basic metrics for software projects

Target Audience

  • Software professionals with 0 to 2 years of experience who want to be a part of Process Improvement Team/ SEPG (Software Engineering Process Group) members
  • Fresh graduates who intend to join quality domain as Process Improvement Team/ SEPG member
  • Professionals from non-IT domains who intend to switch to the software domain

Prerequisites

  • Education: Graduate or equivalent
  • Waiver: If a person has more than 2 years of experience in the software field, the education requirement can be waived

Recommended Reading

  • EdistaLearning Modules
    • SE 100 Series: Software Engineering Process Approach
      • SE 101: An Introduction to Software Engineering
      • SE 102: Software Process Models
      • SE 103: Common Process Framework
      • SE 104: Software Process Improvement
      • SE 105: Advanced Software Process Model
    • SE 201: Basic Concepts of Software Project Management
    • SE 202: Software Project Measurement and Metrics
    • SE 301: Basic Concepts of Software Quality
    • SE 303: Formal Technical Reviews
    • SE 501: An Introduction to Software Testing
  • Books and White Papers
    • Humphrey, W.S., Managing the Software Process, Reading M A
    • Pressman, R.S., Software Engineering – A Practitioner’s Approach, Fifth edition, McGraw-Hill Publishing Company, 2001
    • Grady, R. B., Successful Software Process Improvement, Prentice Hall, 1997
    • ISO 9001:2000, Quality management systems – Requirements, International Organization for Standardization
    • Schulmeyer, C. G., Zero Defect Software, McGraw-Hill Publishing Company, 1990, p. 33
    • Cavano, J.P. and J. A. McCall, A Framework for the Measurement of Software Quality, Proc. ACM Software Quality Assurance Workshop, November, 1978, pp. 133-139