Monday, September 3, 2007

Complete Test Cycle

Testing
Testing is a process used to help identify the correctness, completeness and quality of developed computer software. With that in mind, testing can never completely establish the correctness of computer software. In other words, testing is essentially criticism or comparison: comparing the actual value with the expected one.
There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following rote procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to that of review or inspection, the word testing is connoted to mean the dynamic analysis of the product—putting the product through its paces.
The quality of the application can and normally does vary widely from system to system but some of the common quality attributes include reliability, stability, portability, maintainability and usability. Refer to the ISO standard ISO 9126 for a more complete list of attributes and criteria.
Testing helps in verifying and validating that the software is working as it is intended to work. This involves using static and dynamic methodologies to test the application.
Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that the developer discard preconceived notions of the correctness of his/her developed software.
Software Testing Fundamentals. Testing objectives include:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.
Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects -- it can only show that software defects are present.

When Testing should start:
Testing early in the life cycle reduces the errors. Test deliverables are associated with every phase of development. The goal of a software tester is to find bugs, find them as early as possible, and make sure they are fixed.
The number one cause of Software bugs is the Specification. There are several reasons specifications are the largest bug producer.
In many instances a spec simply isn't written. Other reasons may be that the spec isn't thorough enough, it's constantly changing, or it's not communicated well to the entire team. Planning software is vitally important; if it's not done correctly, bugs will be created.
The next largest source of bugs is the design. That's where the programmers lay the plan for their software. Compare it to an architect creating the blueprint for a building. Bugs occur here for the same reasons they occur in the specification: the design is rushed, changed, or not well communicated.
Coding errors may be more familiar to you if you are a programmer. Typically these can be traced to software complexity, poor documentation, schedule pressure or just plain dumb mistakes. It's important to note that many bugs that appear on the surface to be programming errors can really be traced to the specification. It's quite common to hear a programmer say, "Oh, so that's what it's supposed to do. If someone had told me that I wouldn't have written the code that way."
The other category is the catch-all for what is left. Some bugs are blamed on false positives, conditions that were thought to be bugs but really weren't. There may be duplicate bugs, multiple ones that resulted from the same root cause. Some bugs can be traced to testing errors.
Costs: The costs are logarithmic - that is, they increase tenfold as time increases. A bug found and fixed during the early stages, when the specification is being written, might cost next to nothing, or 10 cents in our example. The same bug, if not found until the software is coded and tested, might cost $1 to $10. If a customer finds it, the cost could easily top $100.
When to Stop Testing
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. "When to stop testing" is one of the most difficult questions for a test engineer. Common factors in deciding when to stop are:
Deadlines (release deadlines, testing deadlines.)
Test cases completed with certain percentages passed
Test budget depleted
Coverage of code/functionality/requirements reaches a specified point
The rate at which new bugs are being found is too small
Beta or Alpha Testing period ends
The risk in the project is under the acceptable limit.
Practically, the decision to stop testing is based on the level of risk acceptable to management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X amount of testing done. The risk can be measured by formal risk analysis, but for a small-duration / low-budget / low-resource project, risk can be gauged simply from the following indicators (a rough sketch of this idea follows the list): -
Measuring Test Coverage.
Number of test cycles.
Number of high priority bugs.
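As a rough illustration of the idea above, the sketch below combines the three indicators (coverage, test cycles executed, open high-priority bugs) into a single go/no-go check. The function name and the thresholds are entirely hypothetical; a real project would set them from its own risk tolerance.

```python
# Hypothetical stop-testing check based on the three simple indicators above.
# Thresholds are illustrative only; real projects set them from their own risk tolerance.
def ok_to_stop_testing(coverage_pct, cycles_run, open_high_priority_bugs,
                       min_coverage=90.0, min_cycles=3, max_high_bugs=0):
    """Return True if the simple exit criteria are all satisfied."""
    return (coverage_pct >= min_coverage
            and cycles_run >= min_cycles
            and open_high_priority_bugs <= max_high_bugs)

print(ok_to_stop_testing(coverage_pct=92.5, cycles_run=4, open_high_priority_bugs=0))  # True
print(ok_to_stop_testing(coverage_pct=75.0, cycles_run=2, open_high_priority_bugs=3))  # False
```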
Test Strategy: How we plan to cover the product so as to develop an adequate assessment of quality.
A good test strategy is:
Specific
Practical
Justified
The purpose of a test strategy is to clarify the major tasks and challenges of the test project. Test Approach and Test Architecture are other terms commonly used to describe what I'm calling test strategy.
Example of a poorly stated (and probably poorly conceived) test strategy:
"We will use black box testing, cause-effect graphing, boundary testing, and white box testing to test this product against its specification."
Test Strategy: Type of Project, Type of Software, when Testing will occur, Critical Success factors, Tradeoffs
Test Plan - Why
Identify Risks and Assumptions up front to reduce surprises later.
Communicate objectives to all team members.
Foundation for Test Spec, Test Cases, and ultimately the Bugs we find.
Failing to plan = planning to fail.
Test Plan - What
Derived from Test Approach, Requirements, Project Plan, Functional Spec., and Design Spec.
Details out project-specific Test Approach.
Lists general (high level) Test Case areas.
Include testing Risk Assessment.
Include preliminary Test Schedule
Lists Resource requirements.
Test Plan
The test strategy identifies the multiple test levels that are going to be performed for the project. Activities at each level must be planned well in advance and formally documented. The individual test levels are then carried out based on those individual plans.
Entry means the entry criteria for that phase. For example, for unit testing, coding must be complete; only then can unit testing start. Task is the activity that is performed. Validation is the way in which the progress, correctness and compliance are verified for that phase. Exit states the completion criteria of that phase, checked after the validation is done. For example, the exit criterion for unit testing is that all unit test cases must pass.
Unit Test Plan {UTP}
The unit test plan is the overall plan to carry out the unit test activities. The lead tester prepares it and distributes it to the individual testers. It contains the following sections.
What is to be tested?
The unit test plan must clearly specify the scope of unit testing. In this, normally the basic input/output of the units along with their basic functionality will be tested. In this case mostly the input units will be tested for the format, alignment, accuracy and the totals. The UTP will clearly give the rules of what data types are present in the system, their format and their boundary conditions. This list may not be exhaustive; but it is better to have a complete list of these details.
Sequence of Testing
The sequence of test activities to be carried out in this phase is listed in this section. This includes whether to execute positive test cases first or negative test cases first, whether to execute test cases based on priority, whether to execute test cases based on test groups, etc. Positive test cases prove that the system performs what it is supposed to do; negative test cases prove that the system does not perform what it is not supposed to do. Testing of the screens, files, database etc. is to be given in proper sequence.
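To make the positive/negative distinction concrete, here is a minimal sketch of one positive and one negative test case for a hypothetical withdraw function; the function and its rules are assumptions made purely for illustration.

```python
import unittest

# Hypothetical unit under test: a withdrawal must be positive and must not exceed the balance.
def withdraw(balance, amount):
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal amount")
    return balance - amount

class WithdrawTests(unittest.TestCase):
    def test_positive_valid_withdrawal(self):
        # Positive case: the system does what it is supposed to do.
        self.assertEqual(withdraw(100, 40), 60)

    def test_negative_overdraw_rejected(self):
        # Negative case: the system refuses what it is not supposed to do.
        with self.assertRaises(ValueError):
            withdraw(100, 500)

if __name__ == "__main__":
    unittest.main()
```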
Basic Functionality of Units
This section describes how the independent functionality of each unit is tested, excluding any communication between the unit and other units. The interface part is out of scope for this test level. Apart from the above sections, the following sections are addressed, very specific to unit testing:
Unit Testing Tools
Priority of Program units
Naming convention for test cases
Status reporting mechanism
Regression test approach
ETVX criteria
Integration Test Plan
The integration test plan is the overall plan for carrying out the activities in the integration test level, which contains the following sections.
What is to be tested?
This section clearly specifies which kinds of interfaces fall under the scope of testing (internal and external interfaces) and how requests and responses are to be exercised. It need not go deep into technical details, but the general approach to how the interfaces are triggered is explained.
Sequence of Integration
When there are multiple modules present in an application, the sequence in which they are to be integrated is specified in this section. Here the dependencies between the modules play a vital role. If a unit B has to be executed, it may need data that is fed by unit A and unit X. In this case, units A and X have to be integrated first, and then, using that data, unit B has to be tested. This has to be stated for the whole set of units in the program. Given this correctly, the testing activities slowly build up the product, unit by unit, integrating them along the way.
System Test Plan {STP}
The system test plan is the overall plan for carrying out the system test level activities. In system testing, apart from testing the functional aspects of the system, some special testing activities are carried out, such as stress testing. The following sections are normally present in a system test plan.
What is to be tested?
This section defines the scope of system testing, very specific to the project. Normally, system testing is based on the requirements; all requirements are to be verified in the scope of system testing. This covers the functionality of the product. Apart from this, any special testing to be performed is also stated here.
Functional Groups and the Sequence
The requirements can be grouped in terms of the functionality. Based on this, there may be priorities also among the functional groups. For example, in a banking application, anything related to customer accounts can be grouped into one area; anything related to inter-branch transactions may be grouped into one area etc. Same way for the product being tested, these areas are to be mentioned here and the suggested sequences of testing of these areas, based on the priorities are to be described.
Acceptance Test Plan {ATP}
The client performs the acceptance testing at their own site. It will be very similar to the system test performed by the software development unit. Since the client decides the format and testing methods as part of acceptance testing, there is no definite way to predict how they will carry out the testing, but it will not differ much from system testing. Assume that all the rules applicable to the system test can be applied to acceptance testing as well.
Since this is just one level of testing done by the client for the overall product, it may include test cases including the unit and integration test level details.

A sample Test Plan Outline along with their description is as shown below:
Test Plan Outline
1. BACKGROUND - This item summarizes the functions of the application system and the tests to be performed.
2. INTRODUCTION
3. ASSUMPTIONS - Indicates any anticipated assumptions which will be made while testing the application.
4. TEST ITEMS - List each of the items (programs) to be tested.
5. FEATURES TO BE TESTED - List each of the features (functions or requirements) which will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED - Explicitly lists each feature, function, or requirement which won't be tested and why not.
7. APPROACH - Describe the data flows and test philosophy (simulation or live execution, etc.). This section also mentions all the approaches which will be followed at the various stages of the test execution.
8. ITEM PASS/FAIL CRITERIA - Blanket statement; itemized list of expected output and tolerances.
9. SUSPENSION/RESUMPTION CRITERIA - Must the test run from start to completion? Under what circumstances may it be resumed in the middle? Establish check-points in long tests.
10. TEST DELIVERABLES - What, besides software, will be delivered? Test report, test software.
11. TESTING TASKS - Functional tasks (e.g., equipment set up), administrative tasks.
12. ENVIRONMENTAL NEEDS - Security clearance, office space & equipment, hardware/software requirements.
13. RESPONSIBILITIES - Who does the tasks in Section 11? What does the user do?
14. STAFFING & TRAINING
15. SCHEDULE
16. RESOURCES
17. RISKS & CONTINGENCIES
18. APPROVALS
The schedule details of the various test passes, such as Unit tests, Integration tests and System tests, should be clearly mentioned along with the estimated efforts.
Risk Analysis:
A risk is a potential for loss or damage to an organization from materialized threats. Risk analysis attempts to identify all the risks and then quantify the severity of the risks. A threat, as we have seen, is a possible damaging event. If it occurs, it exploits a vulnerability in the security of a computer based system.
Risk Identification:
1. Software Risks: Knowledge of the most common risks associated with software development, and the platform you are working on.
2. Business Risks: The most common risks associated with the business using the software.
3. Testing Risks: Knowledge of the most common risks associated with software testing for the platform you are working on, the tools being used, and the test methods being applied.
4. Premature Release Risk: Ability to determine the risk associated with releasing unsatisfactory or untested software products.
5. Risk Methods: Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products and processes; assessing their likelihood, and initiating strategies to test those risks.
Traceability means that you would like to be able to trace back and forth how and where any work product fulfills the directions of the preceding (source-) product. The matrix deals with the where, while the how you have to do yourself, once you know the where.
Take e.g. the Requirement of User Friendliness (UF). Since UF is a complex concept, it is not solved by just one design-solution and it is not solved by one line of code. Many partial design-solutions may contribute to this Requirement and many groups of lines of code may contribute to it.
A Requirements-Design Traceability Matrix puts on one side (e.g. left) the sub-requirements that together are supposed to solve the UF requirement, along with other (sub-) requirements. On the other side (e.g. top) you specify all design solutions. Now you can connect on the cross points of the matrix, which design solutions solve (more, or less) any requirement. If a design solution does not solve any requirement, it should be deleted, as it is of no value.
Having this matrix, you can check whether any requirement has at least one design solution and by checking the solution(s) you may see whether the requirement is sufficiently solved by this (or the set of) connected design(s).
If you have to change any requirement, you can see which designs are affected. And if you change any design, you can check which requirements may be affected and see what the impact is.
In a Design-Code Traceability Matrix you can do the same to keep track of how and which code solves a particular design, and how changes in design or code affect each other.
A traceability matrix:
Demonstrates that the implemented system meets the user requirements.
Serves as a single source for tracking purposes.
Identifies gaps in the design and testing.
Prevents delays in the project timeline, which can be brought about by having to backtrack to fill the gaps.
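A requirements-design traceability matrix can be kept as something as simple as a table of booleans. The sketch below (with made-up requirement and design identifiers) shows the two checks discussed above: finding a requirement with no design solution, and finding a design element that satisfies no requirement.

```python
# Hypothetical requirements-design traceability matrix: rows are requirements,
# columns are design solutions, True marks "this design contributes to this requirement".
matrix = {
    "UF-1 consistent menus": {"D-10 menu component": True,  "D-20 audit log": False},
    "UF-2 undo last action": {"D-10 menu component": False, "D-20 audit log": False},
    "SEC-1 log all changes": {"D-10 menu component": False, "D-20 audit log": True},
}

# Requirements with no design solution: gaps that must be filled.
uncovered = [req for req, row in matrix.items() if not any(row.values())]

# Design solutions that satisfy no requirement: candidates for removal.
designs = next(iter(matrix.values())).keys()
unused = [d for d in designs if not any(row[d] for row in matrix.values())]

print("Requirements without a design solution:", uncovered)  # ['UF-2 undo last action']
print("Design solutions tied to no requirement:", unused)    # []
```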


Software Testing Life Cycle:
The test development life cycle contains the following components:
Requirements
Use Case Document
Test Plan
Test Case
Test Case execution
Report Analysis
Bug Analysis
Bug Reporting
A use case is a typical interaction scenario from a user's perspective, used for system requirements studies or testing; in other words, "an actual or realistic example scenario". A use case describes the use of a system from start to finish. Use cases focus attention on aspects of a system useful to people outside of the system itself.
Users of a program are called users or clients.
Users of an enterprise are called customers, suppliers, etc.
Use Case:
A collection of possible scenarios between the system under discussion and external actors, characterized by the goal the primary actor has toward the system's declared responsibilities, showing how the primary actor's goal might be delivered or might fail.
Use cases are goals (use cases and goals are used interchangeably) that are made up of scenarios. Scenarios consist of a sequence of steps to achieve the goal; each step in a scenario is a sub (or mini) goal of the use case. As such each sub goal represents either another use case (subordinate use case) or an autonomous action that is at the lowest level desired by our use case decomposition.
This hierarchical relationship is needed to properly model the requirements of a system being developed. A complete use case analysis requires several levels. In addition, it is important to understand the level at which a use case is operating and the scope it is addressing. The level and scope are important to assure that the language and granularity of scenario steps remain consistent within the use case.
There are two scopes that use cases are written from: Strategic and System. There are also three levels: Summary, User and Sub-function.

Scopes: Strategic and System
Strategic Scope:
The goal (use case) is a strategic goal with respect to the system. These goals are goals of value to the organization. The use case shows how the system is used to benefit the organization. These strategic use cases will eventually use some of the same lower level (subordinate) use cases.
System Scope:
Use cases at system scope are bounded by the system under development. The goals represent specific functionality required of the system. The majority of the use cases are at system scope. These use cases are often steps in strategic level use cases
Levels: Summary Goal, User Goal and Sub-function
Sub-function Level Use Case:
A sub goal or step is below the main level of interest to the user. Examples are "logging in" and "locate a device in a DB". Always at System Scope.
User Level Use Case:
This is the level of greatest interest. It represents a user task or elementary business process. A user level goal addresses the question "Does your job performance depend on how many of these you do in a day?". For example "Create Site View" or "Create New Device" would be user level goals, but "Log In to System" would not. Always at System Scope.
Summary Level Use Case:
Written for either strategic or system scope. They represent collections of user level goals. For example, the summary goal "Configure Data Base" might include as a step the user level goal "Add Device to database". Either at System or Strategic Scope.

Test Documentation
Test documentation is a required tool for managing and maintaining the testing process. Documents produced by testers should answer the following questions:
What to test? Test Plan
How to test? Test Specification
What are the results? Test Results Analysis Report
Bug Life cycle:
In entomology (the study of real, living bugs), the term life cycle refers to the various stages that an insect assumes over its life. If you think back to your high school biology class, you will remember that the life cycle stages for most insects are the egg, larvae, pupae and adult. It seems appropriate, given that software problems are also called bugs, that a similar life cycle system is used to identify their stages of life. Figure 18.2 shows an example of the simplest, and most optimal, software bug life cycle.
This example shows that when a bug is found by a software tester, it's logged and assigned to a programmer to be fixed. This state is called the open state. Once the programmer fixes the code, he assigns it back to the tester and the bug enters the resolved state. The tester then performs a regression test to confirm that the bug is indeed fixed and, if so, closes it out. The bug then enters its final state, the closed state.
In some situations though, the life cycle gets a bit more complicated.
In this case the life cycle starts out the same, with the tester opening the bug and assigning it to the programmer, but the programmer doesn't fix it. He doesn't think it's bad enough to fix and assigns it to the project manager to decide. The project manager agrees with the programmer and places the bug in the resolved state as a "won't-fix" bug. The tester disagrees, looks for and finds a more obvious and general case that demonstrates the bug, reopens it, and assigns it to the programmer to fix. The programmer fixes the bug, resolves it as fixed, and assigns it to the tester. The tester confirms the fix and closes the bug.
You can see that a bug might undergo numerous changes and iterations over its life, sometimes looping back and starting the life all over again. The figure below takes the simple model above and adds to it possible decisions, approvals, and looping that can occur in most projects. Of course every software company and project will have its own system, but this figure is fairly generic and should cover most any bug life cycle that you'll encounter.
The generic life cycle has two additional states and extra connecting lines. The review state is where the project manager or a committee, sometimes called a change control board, decides whether the bug should be fixed. In some projects all bugs go through the review state before they're assigned to the programmer for fixing. In other projects, this may not occur until near the end of the project, or not at all. Notice that the review state can also go directly to the closed state. This happens if the review decides that the bug shouldn't be fixed - it could be too minor, is really not a problem, or is a testing error. The other additional state is deferred. The review may determine that the bug should be considered for fixing at some time in the future, but not for this release of the software.
The additional line from resolved state back to the open state covers the situation where the tester finds that the bug hasn’t been fixed. It gets reopened and the bug’s life cycle repeats.
The two dotted lines that loop from the closed and the deferred states back to the open state rarely occur but are important enough to mention. Since a tester never gives up, it's possible that a bug that was thought to be fixed, tested and closed could reappear. Such bugs are often called regressions. It's possible that a deferred bug could later be proven serious enough to fix immediately. If either of these occurs, the bug is reopened and started through the process again. Most project teams adopt rules for who can change the state of a bug or assign it to someone else. For example, maybe only the project manager can decide to defer a bug, or only a tester is permitted to close a bug. What's important is that once you log a bug, you follow it through its life cycle, don't lose track of it, and provide the necessary information to drive it to being fixed and closed.
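The generic life cycle described above can be captured as a small state machine. The states and allowed transitions below are a sketch following this section's description; real bug-tracking tools add roles, permissions and extra states.

```python
# States and transitions of the generic bug life cycle described above (a sketch).
TRANSITIONS = {
    "open":     {"review", "resolved"},          # logged bug goes to review or straight to a fix
    "review":   {"open", "closed", "deferred"},  # fix it, won't-fix it, or defer it
    "resolved": {"closed", "open"},              # tester confirms the fix, or reopens the bug
    "deferred": {"open"},                        # deferred bug later judged serious enough to fix
    "closed":   {"open"},                        # a regression reopens a closed bug
}

def move(bug, new_state):
    """Apply a state change, refusing transitions the life cycle does not allow."""
    if new_state not in TRANSITIONS[bug["state"]]:
        raise ValueError(f"illegal transition {bug['state']} -> {new_state}")
    bug["state"] = new_state
    return bug

bug = {"id": 101, "state": "open"}
move(bug, "resolved")
move(bug, "closed")
print(bug)  # {'id': 101, 'state': 'closed'}
```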
Bug Report - Why
Communicate bug for reproducibility, resolution, and regression.
Track bug status (open, resolved, closed).
Ensure bug is not forgotten, lost or ignored.
Used to back-create a test case where none existed before.
White box testing
White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that:
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.
The Nature of Software Defects
Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. General processing tends to be well understood while special case processing tends to be prone to errors.
We often believe that a logical path is not likely to be executed when it may be executed on a regular basis. Our unconscious assumptions about control flow and data lead to design errors that can only be detected by path testing. Typographical errors are random.
Basis Path Testing
This method enables the designer to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis set are guaranteed to execute every statement in the program at least once during testing.
Flow Graphs
Flow graphs can be used to represent control flow in a program and can help in the derivation of the basis set. Each flow graph node represents one or more procedural statements. The edges between nodes represent flow of control. An edge must terminate at a node, even if the node does not represent any useful procedural statements. A region in a flow graph is an area bounded by edges and nodes. Each node that contains a condition is called a predicate node. Cyclomatic complexity is a metric that provides a quantitative measure of the logical complexity of a program. It defines the number of independent paths in the basis set and thus provides an upper bound for the number of tests that must be performed.
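As a small worked illustration, the routine below (invented for this purpose) has two if-decisions, i.e. two predicate nodes, so its cyclomatic complexity is 3 by the V(G) = P + 1 formula given in the next section, and three test cases suffice to exercise a basis set of paths.

```python
# Invented example routine with two predicate nodes (the two "if" decisions).
def classify(x):
    if x < 0:          # predicate node 1
        sign = "negative"
    else:
        sign = "non-negative"
    if x % 2 == 0:     # predicate node 2
        parity = "even"
    else:
        parity = "odd"
    return sign, parity

# V(G) = P + 1 = 2 + 1 = 3, so three linearly independent paths form a basis set.
assert classify(-2) == ("negative", "even")       # decisions: true, true
assert classify(-3) == ("negative", "odd")        # decisions: true, false
assert classify(4)  == ("non-negative", "even")   # decisions: false, true
```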
The Basis Set
An independent path is any path through a program that introduces at least one new set of processing statements (it must move along at least one new edge in the path). The basis set is not unique; any number of different basis sets can be derived for a given procedural design. Cyclomatic complexity, V(G), for a flow graph G can be computed in three ways:
1. The number of regions in the flow graph.
2. V(G) = E - N + 2, where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1, where P is the number of predicate nodes.
Deriving Test Cases
1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph. Even without a flow graph, V(G) can be determined by counting the number of conditional statements in the code.
3. Determine a basis set of linearly independent paths. Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set. Each test case is executed and compared to the expected results.
Automating Basis Set Derivation
The derivation of the flow graph and the set of basis paths is amenable to automation. A software tool to do this can be developed using a data structure called a graph matrix. A graph matrix is a square matrix whose size is equivalent to the number of nodes in the flow graph. Each row and column corresponds to a particular node and the matrix entries correspond to the connections (edges) between nodes. By adding a link weight to each matrix entry, more information about the control flow can be captured. In its simplest form, the link weight is 1 if an edge exists and 0 if it does not. But other types of link weights can be represented:
the probability that an edge will be executed,
the processing time expended during link traversal,
the memory required during link traversal, or
the resources required during link traversal.
Graph theory algorithms can be applied to these graph matrices to help in the analysis necessary to produce the basis set.
Loop Testing
This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined:
1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.
Simple Loops
The following tests should be applied to simple loops, where n is the maximum number of allowable passes through the loop (see the sketch following this section):
1. skip the loop entirely,
2. only pass once through the loop,
3. m passes through the loop where m < n,
4. n - 1, n, n + 1 passes through the loop.
Nested Loops
The testing of nested loops cannot simply extend the technique of simple loops, since this would result in a geometrically increasing number of test cases. One approach for nested loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops to typical values.
4. Continue until all loops have been tested.
Concatenated Loops
Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used.
Unstructured Loops
This type of loop should be redesigned, not tested!
Other White Box Techniques
Other white box testing techniques include:
1. Condition testing: exercises the logical conditions in a program.
2. Data flow testing: selects test paths according to the locations of definitions and uses of variables in the program.
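The simple-loop guideline above translates directly into a small set of concrete test inputs. The summing routine below is invented purely for illustration, with n taken as 5, the maximum number of passes the loop is specified to allow.

```python
# Invented loop under test: sums at most n = 5 items (the specified maximum).
def sum_first_five(items):
    total = 0
    for value in items[:5]:
        total += value
    return total

# Loop-testing inputs for a simple loop with maximum n = 5 passes:
cases = {
    "skip the loop":  [],                  # 0 passes
    "one pass":       [1],                 # 1 pass
    "m < n passes":   [1, 2, 3],           # 3 passes
    "n - 1 passes":   [1, 2, 3, 4],        # 4 passes
    "n passes":       [1, 2, 3, 4, 5],     # 5 passes
    "n + 1 passes":   [1, 2, 3, 4, 5, 6],  # attempt 6; the sixth item must be ignored
}
for name, data in cases.items():
    print(name, "->", sum_first_five(data))
```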

Requirements Testing
Usage:
To ensure that system performs correctly
To ensure that correctness can be sustained for a considerable period of time.
The system can be tested for correctness through all phases of the SDLC, but in the case of reliability, the programs must already be in place to make the system operational.
Objective:
Successful implementation of user requirements.
Correctness maintained over a considerable period of time.
Processing of the application complies with the organization's policies and procedures.
Secondary user’s needs are fulfilled:
Security officer
DBA
Internal auditors
Record retention
Comptroller
How to Use
Test conditions are created.
These test conditions are generalized ones, which become test cases as the SDLC progresses until the system is fully operational.
Test conditions are more effective when created from user’s requirements.
If test conditions are created from documents, then any errors in those documents will get incorporated into the test conditions, and testing will not be able to find those errors.
If test conditions are created from other sources (other than documents), error trapping is more effective.
Functional Checklist created.
When to Use
Every application should be Requirement tested
Should start at Requirements phase and should progress till operations and maintenance phase.
The method used to carry out requirements testing and the extent of it are important.
Example
Creating a test matrix to prove that the system requirements as documented are the requirements desired by the user.
Creating a checklist to verify that the application complies with the organizational policies and procedures.

Regression Testing
Usage:
All aspects of the system remain functional after testing.
Change in one segment does not change the functionality of other segment.
Objective:
Determine that system documents remain current.
Determine that system test data and test conditions remain current.
Determine that previously tested system functions perform properly without being affected by changes made in other segments of the application system.
How to Use
Test cases which were used previously for the already tested segment are re-run to ensure that the results of the segment tested currently and the results of the same segment tested earlier are the same.
Test automation is needed to carry out the test transactions (test condition execution); otherwise the process is very time consuming and tedious.
In this kind of testing, cost/benefit should be carefully evaluated, otherwise the effort spent on testing will be high and the payback minimal.
When to Use
When there is a high risk that the new changes may affect the unchanged areas of the application system.
In development process: Regression testing should be carried out after the pre-determined changes are incorporated in the application system.
In the maintenance phase: regression testing should be carried out if there is a high risk that loss may occur when changes are made to the system.
Example
Re-running of previously conducted tests to ensure that the unchanged portion of system functions properly.
Reviewing previously prepared system documents (manuals) to ensure that they are not affected by changes made to the application system.
Disadvantage
Time consuming and tedious if test automation not done
Error Handling Testing
Usage:
It determines the ability of the application system to process incorrect transactions properly.
Errors encompass all unexpected conditions.
In some systems, approximately 50% of the programming effort will be devoted to handling error conditions.
Objective:
Determine that the application system recognizes all expected error conditions.
Determine that accountability for processing errors has been assigned and that procedures provide a high probability that errors will be properly corrected.
Determine that reasonable control is maintained over errors during the correction process.
How to Use
A group of knowledgeable people is required to anticipate what can go wrong in the application system.
All the people knowledgeable about the application need to assemble to integrate their knowledge of the user area, auditing and error tracking.
Then logical test error conditions should be created based on this assimilated information.
When to Use
Throughout SDLC.
Impact from errors should be identified and should be corrected to reduce the errors to acceptable level.
Used to assist in error management process of system development and maintenance.
Example
Create a set of erroneous transactions and enter them into the application system, then find out whether the system is able to identify the problems.
Using iterative testing, enter transactions and trap errors. Correct them. Then enter transactions with errors which were not present in the system earlier.
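A minimal sketch of the first example above: feed deliberately erroneous transactions to a hypothetical validation routine and confirm that each one is recognized and reported rather than silently processed. The transaction format and validation rules are assumptions made for illustration only.

```python
# Hypothetical transaction validator, used only to illustrate error-handling tests.
def validate(txn):
    errors = []
    if txn.get("amount", 0) <= 0:
        errors.append("amount must be positive")
    if not txn.get("account"):
        errors.append("account is missing")
    return errors

# Deliberately erroneous transactions: every one must be flagged, none silently accepted.
bad_transactions = [
    {"account": "A-100", "amount": -50},   # negative amount
    {"account": "",      "amount": 200},   # missing account
    {"amount": 0},                         # both problems at once
]
for txn in bad_transactions:
    errors = validate(txn)
    assert errors, f"erroneous transaction was not detected: {txn}"
    print(txn, "->", errors)
```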
Manual Support Testing
Usage:
It involves testing of all the functions performed by the people while preparing the data and using these data from automated system.
Objective:
Verify manual support documents and procedures are correct.
Determine that manual support responsibility is correct.
Determine that manual support people are adequately trained.
Determine that manual support and the automated segment are properly interfaced.
How to Use
Process evaluated in all segments of SDLC.
Execution of the test can be done in conjunction with normal system testing.
Instead of preparing, executing and entering actual test transactions, the clerical and supervisory personnel can use the results of processing from the application system.
Testing people requires testing the interface between the people and the application system.
When to Use
Verification that manual systems function properly should be conducted throughout the SDLC.
Should not be done at later stages of SDLC.
Best done at installation stage so that the clerical people do not get used to the actual system just before system goes to production.
Example
Provide input personnel with the type of information they would normally receive from their customers and then have them transcribe that information and enter it in the computer.
Users can be provided a series of test conditions and then asked to respond to those conditions. Conducted in this manner, manual support testing is like an examination in which the users are asked to obtain the answer from the procedures and manuals available to them.
Intersystem Testing
Usage:
To ensure that the interconnection between applications functions correctly.
Objective:
Determine that proper parameters and data are correctly passed between the applications.
Documentation for the involved systems is correct and accurate.
Ensure that proper timing and coordination of functions exists between the application systems.
How to Use
Operations of multiple systems are tested.
Multiple systems are run one after another to check that the data passed between them is acceptable and processed properly.
When to Use
When there is change in parameters in application system
For parameters which are erroneous, the risk associated with such parameters decides the extent and type of testing.
Intersystem parameters would be checked / verified after the change or new application is placed in the production.
Example
Develop a test transaction set in one application and pass it to another system to verify the processing.
Entering test transactions in live production environment and then using integrated test facility to check the processing from one system to another.
Verifying that new changes to the parameters in the system being tested are corrected in the documentation.
Disadvantage
Time consuming and tedious if test automation not done
Cost may be expensive if system is run several times iteratively.

Control Testing
Usage:
Control is a management tool to ensure that processing is performed in accordance with what management desires, or the intents of management.
Objective:
Accurate and complete data
Authorized transactions
Maintenance of adequate audit trail of information.
Efficient, effective and economical process.
Process meeting the needs of the user.
How to Use
To test controls risks must be identified.
Testers should have a negative approach, i.e. they should determine or anticipate what can go wrong in the application system.
Develop a risk matrix which identifies the risks, the controls, and the segment within the application system in which each control resides.
When to Use
Should be tested with other system tests.
Example
File reconciliation procedures work.
Manual controls are in place.

Parallel Testing
Usage:
To ensure that the processing of new application (new version) is consistent with respect to the processing of previous application version.
Objective:
Conducting redundant processing to ensure that the new version or application performs correctly.
Demonstrating consistency and inconsistency between 2 versions of the application.
How to Use
Same input data should be run through 2 versions of same application system.
Parallel testing can be done with whole system or part of system (segment).
When to Use
When there is uncertainty regarding the correctness of processing of the new application and the new and old versions are similar.
In financial applications like banking, where there are many similar applications, the processing can be verified for the old and new versions through parallel testing.
Example
Operating new and old version of a payroll system to determine that the paychecks from both systems are reconcilable.
Running the old version of the application to ensure that the functions of the old system work fine with respect to the problems encountered in the new system.
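The payroll example above amounts to running the same input through both versions and reconciling the outputs. The sketch below assumes two interchangeable functions, old_version and new_version, standing in for the two releases; both names and the calculation are placeholders for illustration.

```python
# Placeholder implementations standing in for the old and new release of a payroll calculation.
def old_version(hours, rate):
    return round(hours * rate, 2)

def new_version(hours, rate):
    # The new release is expected to produce the same paycheck for the same input.
    return round(hours * rate, 2)

# Parallel test: run identical inputs through both versions and reconcile the results.
inputs = [(40, 12.50), (37.5, 20.00), (0, 15.00)]
mismatches = [(i, old_version(*i), new_version(*i))
              for i in inputs if old_version(*i) != new_version(*i)]
print("mismatches:", mismatches)  # an empty list means the two versions reconcile
```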

Volume testing
Whichever title you choose (for us volume test) here we are talking about realistically exercising an application in order to measure the service delivered to users at different levels of usage. We are particularly interested in its behavior when the maximum number of users are concurrently active and when the database contains the greatest data volume.
The creation of a volume test environment requires considerable effort. It is essential that the correct level of complexity exists in terms of the data within the database and the range of transactions and data used by the scripted users, if the tests are to reliably reflect the to-be production environment. Once the test environment is built it must be fully utilized. Volume tests offer much more than simple service delivery measurement. The exercise should seek to answer the following questions:
What service level can be guaranteed? How can it be specified and monitored?
Are changes in user behaviour likely? What impact will such changes have on resource consumption and service delivery?
Which transactions/processes are resource hungry in relation to their tasks?
What are the resource bottlenecks? Can they be addressed?
How much spare capacity is there?
The purpose of volume testing is to find weaknesses in the system with respect to its handling of large amounts of data during extended time periods.


Stress testing

The purpose of stress testing is to find defects in the system's capacity to handle large numbers of transactions during peak periods. For example, a script might require users to log in and proceed with their daily activities while, at the same time, requiring that a series of workstations emulating a large number of other systems are running recorded scripts that add, update, or delete from the database.
Performance testing

System performance is generally assessed in terms of response time and throughput rates under differing processing and configuration conditions. To attack the performance problems, several questions should be asked first:
How much application logic should be remotely executed?
How much updating should be done to the database server over the network from the client workstation?
How much data should be sent to each client in each transaction?
According to Hamilton [10], performance problems are most often the result of the client or server being configured inappropriately. The best strategy for improving client-server performance is a three-step process [11]. First, execute controlled performance tests that collect data about volume, stress, and loading. Second, analyze the collected data. Third, examine and tune the database queries and, if necessary, provide temporary data storage on the client while the application is executing.
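A controlled performance test ultimately reduces to measuring response time and throughput under a known load. The sketch below times repeated calls to a stand-in transaction() function; in a real client-server test the call would be a request to the server under test, and the concurrency, sample sizes and percentiles would come from the test plan rather than the values assumed here.

```python
import time
import statistics

# Stand-in for one client-server transaction; replace with a real request in practice.
def transaction():
    time.sleep(0.01)  # simulate roughly 10 ms of server work

# Measure response time and throughput for a fixed number of sequential transactions.
samples = []
start = time.perf_counter()
for _ in range(100):
    t0 = time.perf_counter()
    transaction()
    samples.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

print(f"mean response time : {statistics.mean(samples) * 1000:.1f} ms")
print(f"95th percentile    : {sorted(samples)[94] * 1000:.1f} ms")
print(f"throughput         : {len(samples) / elapsed:.1f} transactions/second")
```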

Unit Testing
In computer programming, a unit test is a method of testing the correctness of a particular module of source code. The idea is to write test cases for every non-trivial function or method in the module so that each test case is separate from the others if possible. This type of testing is mostly done by the developers.
Benefits
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. It provides a written contract that the piece must satisfy. This isolated testing provides four main benefits:
Encourages change: Unit testing allows the programmer to refactor code at a later date and make sure the module still works correctly (regression testing). This encourages programmers to make changes to the code, since it is easy to check whether the piece is still working properly.
Simplifies integration: Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a bottom-up testing style approach. Testing the parts of a program first and then testing the sum of its parts makes integration testing easier.
Documents the code: Unit testing provides a sort of "living document" for the class being tested. Clients looking to learn how to use the class can look at the unit tests to determine how to use the class to fit their needs.
Separation of interface from implementation: Because some classes may have references to other classes, testing a class can frequently spill over into testing another class. A common example of this is classes that depend on a database; in order to test the class, the tester finds herself writing code that interacts with the database. This is a mistake, because a unit test should never go outside of its own class boundary. As a result, the software developer abstracts an interface around the database connection and then implements that interface with a Mock Object. This results in loosely coupled code, minimizing dependencies in the system.
Limitations
It is important to realize that unit testing will not catch every error in the program. By definition, it only tests the functionality of the units themselves. Therefore, it will not catch integration errors, performance problems or any other system-wide issues. In addition, it may not be trivial to anticipate all special cases of input the program unit under study may receive in reality. Unit testing is only effective if it is used in conjunction with other software testing activities.
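As a minimal concrete illustration of the mock-object idea above, the sketch below tests an invented AccountService class with Python's built-in unittest and unittest.mock modules; the mock stands in for the database repository, so the test never leaves the unit's own boundary. The class and method names are assumptions made for the example.

```python
import unittest
from unittest import mock

# Invented class under test: it depends on a repository interface rather than a real database,
# so the unit test can stay inside the class boundary by substituting a mock object.
class AccountService:
    def __init__(self, repository):
        self.repository = repository

    def is_overdrawn(self, account_id):
        return self.repository.get_balance(account_id) < 0

class AccountServiceTest(unittest.TestCase):
    def test_overdrawn_account_detected(self):
        repo = mock.Mock()
        repo.get_balance.return_value = -25     # the mock stands in for the database
        service = AccountService(repo)
        self.assertTrue(service.is_overdrawn("A-1"))
        repo.get_balance.assert_called_once_with("A-1")

if __name__ == "__main__":
    unittest.main()
```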



Dynamic Testing
Dynamic Testing involves working with the software, giving input values and checking if the output is as expected. These are the Validation activities. Unit Tests, Integration Tests, System Tests and Acceptance Tests are a few of the Dynamic Testing methodologies. As we go further, let us understand the various Test Life Cycles and get to know the Testing Terminologies. To understand more about software testing, its various methodologies, tools and techniques, you can download the Software Testing Guide Book.
Difference Between Static and Dynamic Testing: Please refer to the definition of Static Testing to observe the difference between static testing and dynamic testing.
Static Testing
The Verification activities fall into the category of Static Testing. During static testing, you have a checklist to check whether the work you are doing is going as per the set standards of the organization. These standards can be for Coding, Integrating and Deployment. Reviews, Inspections and Walkthroughs are static testing methodologies.
Black box testing
Introduction
Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories:
1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.
Tests are designed to answer the following questions:
1. How is the function's validity tested?
2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?
White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages. Test cases should be derived which
1. reduce the number of additional test cases that must be designed to achieve reasonable testing, and
2. tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.
Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions. Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, then one valid and one invalid equivalence class are defined.
Boundary Value Analysis
This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include values a and b and values just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.
(A short worked example of equivalence partitioning and boundary value analysis follows at the end of this section.)
Cause-Effect Graphing Techniques
Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
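To make the two techniques concrete, suppose a field must accept an integer age between 18 and 65 inclusive (the rule and the function below are invented for the example). Equivalence partitioning gives one valid and two invalid classes; boundary value analysis adds the edges of the valid class and the values just outside them.

```python
# Invented input rule for illustration: a valid age is an integer from 18 to 65 inclusive.
def is_valid_age(age):
    return isinstance(age, int) and 18 <= age <= 65

# Equivalence partitioning: one representative value per class.
assert is_valid_age(30) is True     # valid class: 18..65
assert is_valid_age(5)  is False    # invalid class: below the range
assert is_valid_age(90) is False    # invalid class: above the range

# Boundary value analysis: the edges of the valid range and the values just outside them.
for age, expected in [(17, False), (18, True), (19, True), (64, True), (65, True), (66, False)]:
    assert is_valid_age(age) is expected

print("all equivalence and boundary checks passed")
```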

White box testing
White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.
The Nature of Software Defects
Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. General processing tends to be well understood while special case processing tends to be prone to errors.
We often believe that a logical path is not likely to be executed when it may be executed on a regular basis. Our unconscious assumptions about control flow and data lead to design errors that can only be detected by path testing. Typographical errors are random.
Basis Path Testing
This method enables the designer to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis set are guaranteed to execute every statement in the program at least once during testing.
Flow Graphs
Flow graphs can be used to represent control flow in a program and can help in the derivation of the basis set. Each flow graph node represents one or more procedural statements. The edges between nodes represent flow of control. An edge must terminate at a node, even if the node does not represent any useful procedural statements. A region in a flow graph is an area bounded by edges and nodes. Each node that contains a condition is called a predicate node. Cyclomatic complexity is a metric that provides a quantitative measure of the logical complexity of a program. It defines the number of independent paths in the basis set and thus provides an upper bound for the number of tests that must be performed.
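As a small worked illustration of the metric (the count_positives function and the node numbering are assumptions made up for this sketch), the Python snippet below models a tiny flow graph as an adjacency list and evaluates the V(G) formulas given in the next section.

# Hypothetical unit whose control flow we model: one loop and one decision.
def count_positives(values):
    count = 0                    # node 1: initialisation
    for v in values:             # node 2: loop condition (predicate)
        if v > 0:                # node 3: decision (predicate)
            count += 1           # node 4: increment
    return count                 # node 5: exit

# Flow graph for count_positives as an adjacency list (edges between nodes).
flow_graph = {
    1: [2],      # init -> loop test
    2: [3, 5],   # loop test -> body, or exit when the input is exhausted
    3: [4, 2],   # decision -> increment, or back to the loop test
    4: [2],      # increment -> back to the loop test
    5: [],       # exit
}

edges = sum(len(targets) for targets in flow_graph.values())  # E = 6
nodes = len(flow_graph)                                        # N = 5
predicates = 2                                                 # nodes 2 and 3

print("V(G) = E - N + 2 =", edges - nodes + 2)  # 3
print("V(G) = P + 1     =", predicates + 1)     # 3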
The Basis Set
An independent path is any path through a program that introduces at least one new set of processing statements (it must move along at least one new edge in the path). The basis set is not unique; any number of different basis sets can be derived for a given procedural design. Cyclomatic complexity, V(G), for a flow graph G is equal to:
1. The number of regions in the flow graph.
2. V(G) = E - N + 2, where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1, where P is the number of predicate nodes.
Deriving Test Cases
1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph. Even without a flow graph, V(G) can be determined by counting the number of conditional statements in the code.
3. Determine a basis set of linearly independent paths. Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set. Each test case is executed and compared to the expected results.
Automating Basis Set Derivation
The derivation of the flow graph and the set of basis paths is amenable to automation. A software tool to do this can be developed using a data structure called a graph matrix. A graph matrix is a square matrix whose size is equal to the number of nodes in the flow graph. Each row and column corresponds to a particular node, and the matrix entries correspond to the connections (edges) between nodes. By adding a link weight to each matrix entry, more information about the control flow can be captured. In its simplest form, the link weight is 1 if an edge exists and 0 if it does not. Other types of link weights can also be represented:
· the probability that an edge will be executed,
· the processing time expended during link traversal,
· the memory required during link traversal, or
· the resources required during link traversal.
Graph theory algorithms can be applied to these graph matrices to help in the analysis necessary to produce the basis set.
Loop Testing
This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined:
1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.
Simple Loops
The following tests should be applied to simple loops, where n is the maximum number of allowable passes through the loop:
1. skip the loop entirely,
2. only one pass through the loop,
3. m passes through the loop, where m < n,
4. n - 1, n, and n + 1 passes through the loop.
(A minimal sketch of these cases appears at the end of this white box discussion.)
Nested Loops
The testing of nested loops cannot simply extend the technique for simple loops, since this would result in a geometrically increasing number of test cases. One approach for nested loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops at typical values.
4. Continue until all loops have been tested.
Concatenated Loops
Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used.
Unstructured Loops
This type of loop should be redesigned, not tested!
Other White Box Techniques
Other white box testing techniques include:
1. Condition testing - exercises the logical conditions in a program.
2. Data flow testing - selects test paths according to the locations of definitions and uses of variables in the program.
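As noted under Simple Loops above, here is a minimal sketch of those loop boundary cases in Python. The sum_first function, its cap of n = 5 passes, and the choice of m are assumptions invented purely for illustration.

# Hypothetical unit under test: loops over at most n items of its input.
def sum_first(values, n=5):
    total = 0
    for v in values[:n]:
        total += v
    return total

n = 5
m = 3  # an arbitrary value with m < n
loop_cases = {
    "skip the loop entirely": [],
    "exactly one pass": [1],
    "m passes (m < n)": [1] * m,
    "n - 1 passes": [1] * (n - 1),
    "n passes": [1] * n,
    "n + 1 passes (input longer than the cap)": [1] * (n + 1),
}

for label, data in loop_cases.items():
    print(label, "->", sum_first(data))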

Unit Testing
In computer programming, a unit test is a method of testing the correctness of a particular module of source code. The idea is to write test cases for every non-trivial function or method in the module so that each test case is separate from the others if possible. This type of testing is mostly done by the developers.
Benefits
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. It provides a written contract that the piece must satisfy. This isolated testing provides four main benefits:
Encourages change: Unit testing allows the programmer to refactor code at a later date and make sure the module still works correctly (regression testing). This encourages programmers to make changes to the code, since it is easy to check whether the piece is still working properly.
Simplifies integration: Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a bottom-up testing approach. Testing the parts of a program first and then testing the sum of its parts makes integration testing easier.
Documents the code: Unit testing provides a sort of "living document" for the class being tested. Clients looking to learn how to use the class can look at the unit tests to determine how to use the class to fit their needs.
Separation of interface from implementation: Because some classes may have references to other classes, testing a class can frequently spill over into testing another class. A common example is a class that depends on a database: in order to test the class, the tester finds herself writing code that interacts with the database. This is a mistake, because a unit test should never go outside of its own class boundary. Instead, the software developer abstracts an interface around the database connection and then implements that interface with a Mock Object (a minimal sketch of this idea follows below). This results in loosely coupled code, minimizing dependencies in the system.
Limitations
It is important to realize that unit testing will not catch every error in the program. By definition, it only tests the functionality of the units themselves. Therefore, it will not catch integration errors, performance problems or other system-wide issues. In addition, it may not be trivial to anticipate all the special cases of input the unit under study may receive in reality. Unit testing is only effective if it is used in conjunction with other software testing activities.
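To illustrate the interface and Mock Object idea mentioned above, here is a minimal sketch using Python's built-in unittest module. The UserService class, the UserStore interface and the fake in-memory store are hypothetical names invented for the example; the point is that the unit test never touches a real database.

import unittest

# Hypothetical interface abstracted around the database connection.
class UserStore:
    def get_name(self, user_id):
        raise NotImplementedError

# Hypothetical class under test; it depends only on the interface.
class UserService:
    def __init__(self, store):
        self.store = store

    def greeting(self, user_id):
        return "Hello, " + self.store.get_name(user_id)

# Mock Object: a fake, in-memory implementation used only by the test.
class FakeUserStore(UserStore):
    def get_name(self, user_id):
        return {1: "Alice"}.get(user_id, "stranger")

class UserServiceTest(unittest.TestCase):
    def test_greeting_known_user(self):
        service = UserService(FakeUserStore())
        self.assertEqual(service.greeting(1), "Hello, Alice")

    def test_greeting_unknown_user(self):
        service = UserService(FakeUserStore())
        self.assertEqual(service.greeting(99), "Hello, stranger")

if __name__ == "__main__":
    unittest.main()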

Win Runner
WinRunner is Mercury Interactive's enterprise functional testing tool. It is used to quickly create and run sophisticated automated tests on your application. WinRunner helps you automate the testing process, from test development to execution. You create adaptable and reusable test scripts that challenge the functionality of your application. Prior to a software release, you can run these tests in a single overnight run, enabling you to detect defects and ensure superior software quality.
WinRunner Interview Questions
1) How you used WinRunner in your project?
Ans. Yes, I have been using WinRunner for creating automated scripts for GUI, functional and regression testing of the AUT.
2) Explain the WinRunner testing process?
Ans. The WinRunner testing process involves six main stages:
i. Create the GUI Map file so that WinRunner can recognize the GUI objects in the application being tested.
ii. Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
iii. Debug tests: run tests in Debug mode to make sure they run smoothly.
iv. Run tests: run tests in Verify mode to test your application.
v. View results: determine the success or failure of the tests.
vi. Report defects: if a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.
3) What is contained in the GUI map?
Ans. WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file has a logical name and a physical description.
There are two types of GUI Map files:
i. Global GUI Map file: a single GUI Map file for the entire application.
ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
4) How does WinRunner recognize objects on the application?
Ans. WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested.
5) Have you created test scripts and what is contained in the test scripts?
Ans. Yes, I have created test scripts. A test script contains statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner's visual programming tool, the Function Generator.
6) How does WinRunner evaluate test results?
Ans. Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.
7) Have you performed debugging of the scripts?
Ans. Yes, I have performed debugging of scripts. We can debug a script by executing it in Debug mode. We can also debug a script using the Step, Step Into and Step Out functionalities provided by WinRunner.
8) How do you run your test scripts?
Ans. We run tests in Verify mode to test the application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.
9) How do you analyze results and report the defects?
Ans. Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.
10) What is the use of Test Director software?
Ans. Test Director is Mercury Interactive’s software test management tool. It helps quality assurance personnel plan and organize the testing process. With Test Director you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.
11) How you integrated your automated scripts from Test Director?
Ans. When you work with WinRunner, you can choose to save your tests directly to your Test Director database. Alternatively, while creating a test case in Test Director we can specify whether the script is automated or manual. If it is an automated script, Test Director will build a skeleton for the script that can later be modified into one which can be used to test the AUT.
12) What are the different modes of recording?
Ans. There are two types of recording in WinRunner:
i. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.
ii. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.
13) What is the purpose of loading WinRunner Add-Ins?
Ans. Add-Ins are used in WinRunner to load functions specific to a particular add-in into memory. While creating a script, only the functions in the selected add-ins will be listed in the Function Generator, and while executing the script only the functions in the loaded add-ins can be executed; otherwise WinRunner will give an error message saying it does not recognize the function.
14) What are the reasons that WinRunner fails to identify an object on the GUI?
Ans. WinRunner may fail to identify an object in a GUI for various reasons:
i. The object is not a standard Windows object.
ii. If the browser used is not compatible with the WinRunner version, the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.
15) What do you mean by the logical name of the object?
Ans. An object’s logical name is determined by its class. In most cases, the logical name is the label that appears on an object.
16) If the object does not have a name then what will be the logical name?
Ans. If the object does not have a name then the logical name could be the attached text.
17) What is the difference between the GUI map and GUI map files?
Ans. The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files:
i. Global GUI Map file: a single GUI Map file for the entire application.
ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created. A GUI Map file is a file which contains the windows and the objects learned by WinRunner, with their logical names and physical descriptions.
18) How do you view the contents of the GUI map?
Ans. The GUI Map Editor displays the contents of a GUI Map. We can invoke the GUI Map Editor from the Tools menu in WinRunner. The GUI Map Editor displays the various GUI Map files created and the windows and objects learned into them, with their logical names and physical descriptions.
19) When you create the GUI map do you record all the objects or only specific objects?
Ans. If we are learning a window, then WinRunner automatically learns all the objects in the window; otherwise we identify only those objects in the window which are to be learned, since we will be working with only those objects while creating scripts.
20) What is the purpose of the set_window command?
Ans. Set_Window command sets the focus to the specified window. We use this command to set the focus to the required window before executing tests on a particular window.
Syntax: set_window(<logical_name>, time); where <logical_name> is the logical name of the window and time is the time the execution has to wait until it gets the given window into focus.
21) How do you load a GUI map?
Ans. We can load a GUI Map file by using the GUI_load command.
Syntax: GUI_load(<file_name>);
22) What is the disadvantage of loading the GUI maps through start up scripts?
Ans.
1. If we are using a single GUI Map file for the entire AUT, then the memory used by the GUI Map may be quite high.
2. If there is any change in the object being learned, then WinRunner will not be able to recognize the object, as it is not in the GUI Map file loaded in memory. So we will have to learn the object again, update the GUI Map file and reload it.
23) How do you unload the GUI map?
Ans. We can use GUI_close to unload a specific GUI Map file, or else we can use the GUI_close_all command to unload all the GUI Map files loaded in memory.
Syntax: GUI_close(<file_name>); or GUI_close_all;
24) What actually happens when you load a GUI map?
Ans. When we load a GUI Map file, the information about the windows and the objects with their logical names and physical description are loaded into memory. So when the WinRunner executes a script on a particular window, it can identify the objects using this information loaded in the memory.
25) What is the purpose of the temp GUI map file?
Ans. While recording a script, WinRunner learns objects and windows by itself. These are actually stored in the temporary GUI Map file. We can specify in the General Options whether this temporary GUI Map file should be loaded each time.
26) What is the extension of a GUI map file?
Ans. The extension for a GUI Map file is ".gui".
27) How do you find an object in a GUI map?
Ans. The GUI Map Editor provides Find and Show buttons.
i. To find a particular object from the GUI Map file in the application, select the object and click the Show button. This makes the selected object blink in the application.
ii. To find a particular object in the GUI Map file, click the Find button, which gives the option to select the object in the application. When the object is selected, if it has been learned into the GUI Map file it will be highlighted in the GUI Map file.
28) What different actions are performed by find and show button?
Ans.
1. To find a particular object from the GUI Map file in the application, select the object and click the Show button. This makes the selected object blink in the application.
2. To find a particular object in the GUI Map file, click the Find button, which gives the option to select the object in the application. When the object is selected, if it has been learned into the GUI Map file it will be highlighted in the GUI Map file.
29) How do you identify which files are loaded in the GUI map?
Ans. The GUI Map Editor has a "GUI File" drop-down list displaying all the GUI Map files loaded into memory.
30) How do you modify the logical name or the physical description of the objects in a GUI map?
Ans. You can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor.
31) When do you feel you need to modify the logical name?
Ans. Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long.
32) When it is appropriate to change physical description?
Ans. Changing the physical description is necessary when the property value of an object changes.
33) How WinRunner handles varying window labels?
Ans. We can handle varying window labels using regular expressions. WinRunner uses two "hidden" properties in order to use regular expressions in an object's physical description. These properties are regexp_label and regexp_MSW_class. (A small illustration of the regular expression idea follows question 35 below.)
i. The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a window’s label description.
ii. The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.
34) What is the purpose of regexp_label property and regexp_MSW_class property?
Ans. The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a window’s label description.
The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.
35) How do you suppress a regular expression?
Ans. We can suppress the regular expression of a window by replacing the regexp_label property with the label property.
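To illustrate what regular expression matching buys you in questions 33 to 35 above (independently of WinRunner itself), here is a minimal Python sketch. The window titles and the pattern are assumptions invented for the example; a single pattern stands in for every variant of a varying label, which is the role a regular expression plays in a physical description.

import re

# Hypothetical window labels that vary from run to run.
labels = [
    "Order Details - 101",
    "Order Details - 102",
    "Untitled - Notepad",
]

# One pattern matches every variant of the varying label.
pattern = re.compile(r"Order Details - \d+")

for label in labels:
    print(label, "->", bool(pattern.fullmatch(label)))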
36) How do you copy and move objects between different GUI map files?
Ans. We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed are:
i. Choose Tools > GUI Map Editor to open the GUI Map Editor.
ii. Choose View > GUI Files.
iii. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.
iv. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.
v. In one file, select the objects you want to copy or move. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.
vi. Click Copy or Move.
vii. To restore the GUI Map Editor to its original size, click Collapse.
37) How do you select multiple objects when merging the files?
Ans. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.
38) How do you clear a GUI map file?
Ans. We can clear a GUI Map file using the "Clear All" option in the GUI Map Editor.
39) How do you filter the objects in the GUI map?
Ans. The GUI Map Editor has a Filter option, which provides filtering with three different types of options:
i. Logical name displays only objects with the specified logical name.
ii. Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.
iii. Class displays only objects of the specified class, such as all the push buttons.
40) How do you configure the GUI map?
a. When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties to provide a unique identification of the object.
b. Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner. These objects are therefore assigned to the generic “object” class. When WinRunner records an operation on a custom object, it generates obj_mouse_ statements in the test script.
c. If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.
General Testing Interview Questions
1.What is 'Software Quality Assurance'?
Software QA involves the entire software development process - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.
2.What is 'Software Testing'?
Testing involves operation of a system or application under controlled conditions and evaluating the results (eg, 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.
4.Why is it often hard for management to get serious about quality assurance?
* Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable: In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord.
5.Why does software have bugs?
* Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements). * Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity. * Programming errors - programmers, like anyone else, can make mistakes. * Changing requirements (whether documented or undocumented) - the end-user may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control - see 'What can be done if requirements are changing continuously?' in Part 2 of the FAQ. Also see information about 'agile' approaches such as XP, also in Part 2 of the FAQ. * Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
6.How can new Software QA processes be introduced in an existing organization?
* A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary. * Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand. * For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers. * The most value for effort will often be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation, or in 'agile'-type environments extensive continuous coordination with end-users, (b) design inspections and code inspections, and (c) post-mortems/retrospectives. 7.What is verification? validation? * Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation. 8.What is a 'walkthrough'? * A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.
9.What's an 'inspection'? * An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading thru the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. 10.What kinds of testing should be considered? * Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality. * White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions. * Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses. * Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers. * Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems. * Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.) * System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
* End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. * Sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state. * Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing. * Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time. * Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails. * Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc. * Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans. * Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers. * Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes. * Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems. * Failover testing - typically used interchangeably with 'recovery testing' * Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques. * Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment. * Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it. * Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it. * Context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game. * User acceptance testing - determining if software is satisfactory to an end-user or customer. 
* Comparison testing - comparing software weaknesses and strengths to competing products. * Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers. * Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers. * Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
11.What are 5 common solutions to software development problems?
* Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements. In 'agile'-type environments, continuous coordination with customers/end-users is necessary. * Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out. * Adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug-fixing. 'Early' testing ideally includes unit testing by developers and built-in testing and diagnostic capabilities. * Stick to initial requirements as much as possible - be prepared to defend against excessive changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations. This will provide them a higher comfort level with their requirements decisions and minimize excessive changes later on. * Communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; insure that information/documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes if possible to clarify customers' expectations.
12.What is software 'quality'? * Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, and the development organizations. * Management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free. 13.What is 'good code'? * * 'Good code' is code that works, is bug free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks' code analysis tools, etc. can be used to check for problems and enforce standards. For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation: * Minimize or eliminate use of global variables. * Use descriptive function and method names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions. * Use descriptive variable names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions. * Function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable. * Function descriptions should be clearly spelled out in comments preceding a function's code. * Organize code for readability. * Use whitespace generously - vertically and horizontally. * Each line of code should contain 70 characters max. * One code statement per line. * Coding style should be consistent throughout a program (eg, use of brackets, indentations, naming conventions, etc.) * In adding comments, err on the side of too many rather than too few comments; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code. * No matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing); or if possible a separate flow chart and detailed program documentation. * Make extensive use of error handling procedures and status and error logging. * For C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading.) 
* For C++, keep class methods small, less than 50 lines of code per method is preferable. * For C++, make liberal use of exception handlers. 14.What is 'good design'? * * 'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the on-line help; some common rules-of-thumb include: * The program should act in a way that least surprises the user * It should always be evident to the user what can be done next and how to exit * The program shouldn't let the users do something stupid without warning them. 15.What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help? * SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes. * CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors. * Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable. * Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated. * Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance. * Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high. * Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required. * Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was in Software Quality Assurance. 
* ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a)Q9001-2000 - Quality Management Systems: Requirements; (b)Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c)Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. Also see http://www.iso.ch/ for the latest information. In the U.S. the standards can be purchased via the ASQ web site at http://e-standards.asq.org/ * IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard of Software Unit Testing (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others. * ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality). * Other software development/IT management process assessment methods besides CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT. 16.What is the 'software life cycle'? * The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects. 17.Will automated testing tools make testing easier? * Possibly For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or on-going long-term projects they can be valuable. * A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc. the application might then be retested by just 'playing back' the 'recorded' actions, and comparing the logging results to check effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms. 
* Another common type of approach for automation of functional testing is 'data-driven' or 'keyword-driven' automated testing, in which the test drivers are separated from the data and/or actions utilized in testing (an 'action' would be something like 'enter a value in a text box'). Test drivers can be in the form of automated test tools or custom-written testing software. The data and actions can be more easily maintained - such as via a spreadsheet - since they are separate from the test drivers. The test drivers 'read' the data/action information to perform specified tests. This approach can enable more efficient control, development, documentation, and maintenance of automated tests/test cases. * Other automated tools can include: * Code analyzers - monitor code complexity, adherence to standards, etc. * Coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc. * Memory analyzers - such as bounds-checkers and leak detectors. * Load/performance test tools - for testing client/server and web applications under various load levels. * Web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, a web site's interactions are secure. * Other tools - for test case management, documentation management, bug reporting, and configuration management.
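To make the data-driven/keyword-driven idea above a little more concrete, here is a minimal, hypothetical sketch in Python. The keyword names, the table of steps, and the fake "application" dictionary are all invented for illustration; a real driver would read the rows from a spreadsheet or CSV file and call into a GUI or API automation layer.

# Fake "application" state that the keyword actions act on.
app = {"username": "", "logged_in": False}

# Keyword implementations: each keyword maps to a small driver function.
def enter_text(field, value):
    app[field] = value

def click_login(_field, _value):
    app["logged_in"] = app["username"] != ""

def verify_logged_in(_field, expected):
    assert app["logged_in"] == (expected == "yes"), "login check failed"

keywords = {
    "enter_text": enter_text,
    "click_login": click_login,
    "verify_logged_in": verify_logged_in,
}

# Test data kept separate from the driver, as it would be in a spreadsheet:
# each row is (keyword, field, value).
test_table = [
    ("enter_text", "username", "alice"),
    ("click_login", "", ""),
    ("verify_logged_in", "", "yes"),
]

for keyword, field, value in test_table:
    keywords[keyword](field, value)
print("all steps passed")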
SDLC
Once upon a time, software development consisted of a programmer writing code to solve a problem or automate a procedure. Nowadays, systems are so big and complex that teams of architects, analysts, programmers, testers and users must work together to create the millions of lines of custom-written code that drive our enterprises.
To manage this, a number of system development life cycle (SDLC) models have been created: waterfall; fountain; spiral; build and fix; rapid prototyping; incremental; and synchronize and stabilize.
The oldest of these, and the best known, is the waterfall: a sequence of stages in which the output of each stage becomes the input for the next. These stages can be characterized and divided up in different ways, including the following:
· Project planning, feasibility study: Establishes a high-level view of the intended project and determines its goals.
· Systems analysis, requirements definition: Refines project goals into defined functions and operation of the intended application. Analyzes end-user information needs.
· Systems design: Describes desired features and operations in detail, including screen layouts, business rules, process diagrams, pseudocode and other documentation.
· Implementation: The real code is written here.
· Integration and testing: Brings all the pieces together into a special testing environment, then checks for errors, bugs and interoperability.
· Acceptance, installation, deployment: The final stage of initial development, where the software is put into production and runs actual business.
· Maintenance: What happens during the rest of the software's life: changes, corrections, additions, moves to a different computing platform, and more. This, the least glamorous and perhaps most important step of all, goes on seemingly forever.
SOFTWARE DEVELOPMENT LIFE CYCLE MODELS

I was asked to put together this high-level and traditional software life cycle information as a favor for a friend of a friend, so I thought I might as well share it with everybody.
The General Model
Software life cycle models describe phases of the software cycle and the order in which those phases are executed. There are tons of models, and many companies adopt their own, but all have very similar patterns. The general, basic model is shown below:

Each phase produces deliverables required by the next phase in the life cycle. Requirements are translated into design. Code is produced during implementation that is driven by the design. Testing verifies the deliverable of the implementation phase against requirements.
Requirements
Business requirements are gathered in this phase. This phase is the main focus of the project managers and stakeholders. Meetings with managers, stakeholders and users are held in order to determine the requirements. Who is going to use the system? How will they use the system? What data should be input into the system? What data should be output by the system? These are general questions that get answered during a requirements gathering phase. This produces a nice big list of functionality that the system should provide, which describes the functions the system should perform, the business logic that processes data, what data is stored and used by the system, and how the user interface should work. The overall result describes what the system as a whole should do and how it should perform, not how it is actually going to do it.
Design
The software system design is produced from the results of the requirements phase. Architects have the ball in their court during this phase, and this is the phase in which their focus lies. This is where the details of how the system will work are produced. Architecture (including hardware and software), communication, and software design (UML is produced here) are all part of the deliverables of the design phase.
Implementation
Code is produced from the deliverables of the design phase during implementation, and this is the longest phase of the software development life cycle. For a developer, this is the main focus of the life cycle because this is where the code is produced. Implementation may overlap with both the design and testing phases. Many tools exist (CASE tools) to actually automate the production of code using information gathered and produced during the design phase.
Testing
During testing, the implementation is tested against the requirements to make sure that the product is actually solving the needs addressed and gathered during the requirements phase. Unit tests and system/acceptance tests are done during this phase. Unit tests act on a specific component of the system, while system tests act on the system as a whole.
So in a nutshell, that is a very basic overview of the general software development life cycle model. Now let's delve into some of the traditional and widely used variations.


Waterfall Model
This is the most common and classic of the life cycle models, also referred to as a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed in its entirety before the next phase can begin. At the end of each phase, a review takes place to determine whether the project is on the right path and whether to continue or discard the project. Unlike what I mentioned in the general model, phases do not overlap in a waterfall model.
Waterfall Life Cycle Model
Advantages
Simple and easy to use.
Easy to manage due to the rigidity of the model – each phase has specific deliverables and a review process.
Phases are processed and completed one at a time.
Works well for smaller projects where requirements are very well understood.
Disadvantages
Adjusting scope during the life cycle can kill a project.
No working software is produced until late during the life cycle.
High amounts of risk and uncertainty.
Poor model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Poor model where requirements are at a moderate to high risk of changing.

V-Shaped Model
Just like the waterfall model, the V-Shaped life cycle is a sequential path of execution of processes. Each phase must be completed before the next phase begins. Testing is emphasized in this model more than in the waterfall model. The testing procedures are developed early in the life cycle, before any coding is done, during each of the phases preceding implementation.
Requirements begin the life cycle model just like the waterfall model. Before development is started, a system test plan is created. The test plan focuses on meeting the functionality specified in the requirements gathering.
The high-level design phase focuses on system architecture and design. An integration test plan is created in this phase as well, in order to test the ability of the pieces of the software system to work together.
The low-level design phase is where the actual software components are designed, and unit tests are created in this phase as well.
The implementation phase is, again, where all coding takes place. Once coding is complete, the path of execution continues up the right side of the V where the test plans developed earlier are now put to use.
V-Shaped Life Cycle Model
Advantages
Simple and easy to use.
Each phase has specific deliverables.
Higher chance of success over the waterfall model due to the development of test plans early on during the life cycle.
Works well for small projects where requirements are easily understood.
Disadvantages
Very rigid, like the waterfall model.
Little flexibility and adjusting scope is difficult and expensive.
Software is developed during the implementation phase, so no early prototypes of the software are produced.
Model doesn’t provide a clear path for problems found during testing phases.

Incremental Model
The incremental model is an intuitive approach to the waterfall model. Multiple development cycles take place here, making the life cycle a “multi-waterfall” cycle. Cycles are divided up into smaller, more easily managed iterations. Each iteration passes through the requirements, design, implementation and testing phases.
A working version of software is produced during the first iteration, so you have working software early on during the software life cycle. Subsequent iterations build on the initial software produced during the first iteration.
Incremental Life Cycle Model
Advantages
Generates working software quickly and early during the software life cycle.
More flexible – less costly to change scope and requirements.
Easier to test and debug during a smaller iteration.
Easier to manage risk because risky pieces are identified and handled during its iteration.
Each iteration is an easily managed milestone.
Disadvantages
Each phase of an iteration is rigid and does not overlap with the others.
Problems may arise pertaining to system architecture because not all requirements are gathered up front for the entire software life cycle.

Spiral Model
The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risks and alternate solutions. A prototype is produced at the end of the risk analysis phase. Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.
Spiral Life Cycle Model
Advantages
High amount of risk analysis.
Good for large and mission-critical projects.
Software is produced early in the software life cycle.
Disadvantages
Can be a costly model to use.
Risk analysis requires highly specific expertise.
Project's success is highly dependent on the risk analysis phase.
Doesn't work well for smaller projects.



Testing Types

Acceptance Test
The test performed by users of a new or changed system in order to approve the system and go live.
Active Test
Introducing test data and analyzing the results. Contrast with "passive test" (below).
Ad Hoc Test
Informal testing without a test case.
Age Test (aging)
Evaluating a system's ability to perform in the future. To perform these tests, hardware and/or test data is modified to a future date.
Alpha Test
The first test of newly developed hardware or software in a laboratory setting. When the first round of bugs has been fixed, the product goes into beta testing with actual users. For custom software, the customer may be invited into the vendor's facilities for an alpha test to ensure the client's vision has been interpreted correctly by the developer.
Automated Test
Using software to test software. Automated tests may still require human intervention to monitor stages for analysis or errors.
Beta Test
A test of new or revised hardware or software that is performed by end users at their own facilities under normal operating conditions. Beta testing follows alpha testing. Vendors of packaged software often offer customers the opportunity to beta test new releases or versions, and the beta testing of elaborate products such as operating systems can take months.
Black Box Test
Testing software based on output only, without any knowledge of internal operation. Contrast with "white box test."
Dirty Test
Same as "negative test."


Environment Test
A test of new software that determines whether all transactions flow properly between input, output and storage devices. Typically done by systems programmers, an environment test ensures that all the parts of the system are in place. It does not test for content or validate output, which is performed by quality assurance personnel who develop test suites.
Functional Test
Testing software based on its functional requirements, such as menus and key commands. It ensures that the program works the way it was intended and that all required menu options are present. It also ensures that the program conforms to the industry standards relevant to its environment; for example, in a Windows program, pressing F1 brings up help.
Negative Test
Using invalid input to test a program's error handling (a short sketch appears after this glossary).
Passive Test
Monitoring the results of a running system without introducing any special test data. Contrast with "active test" (above).
Recovery Test
Testing a system's ability to recover from a hardware or software failure.
Regression Test
Testing a program that has been modified, in order to ensure that previously working functions have not been impacted and that no new bugs have been introduced. When a program is enhanced, testing is often done only on the new features; however, adding source code to a program often introduces errors in other routines, and many of the old and stable functions must be retested along with the new ones. Regression testing is one of the most important aspects of software testing, yet it is often overlooked or given scant attention.
Smoke Test
A test of new or repaired equipment performed by simply turning it on: if it smokes, it doesn't work. The term also refers to testing the basic functions of software. It was originally coined in the manufacture of containers and pipes, where smoke was introduced to determine whether there were any leaks.
System Test
Overall testing performed in the lab and in the user environment. See alpha test and beta test.
Test Case
A set of test data and test programs (test scripts) and their expected results. A test case validates one or more system requirements and generates a pass or fail result.
Test Scenario
A set of test cases that ensures the business process flows are tested from end to end. They may be independent tests or a series of tests that follow each other, each dependent on the output of the previous one. The terms "test scenario" and "test case" are often used synonymously.
Test Suite
A collection of test scenarios and/or test cases that are related or that may cooperate with each other.
Unit Test
A test of one component of the system. Contrast with "system test."
User Acceptance Test (UAT)
See "acceptance test" above. The final testing stage performed by users of a new or changed information system. If successful, it signals approval to implement the system live. Cosmetic and other small changes may still be required as a result of the test, but the system is considered stable and to be processing data according to requirements.
White Box Test
Testing software with knowledge of the internal operation. Contrast with "black box test."
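To make the "negative test" entry above concrete, here is a minimal sketch using Python's unittest module; parse_age is a hypothetical function that is supposed to reject invalid input.

import unittest

def parse_age(text):
    # Hypothetical function under test: converts text to an age in years.
    value = int(text)  # raises ValueError for non-numeric input
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

class NegativeTests(unittest.TestCase):
    # Negative tests feed invalid input and expect clean error handling.
    def test_non_numeric_input_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_age("abc")

    def test_negative_age_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_age("-5")

if __name__ == "__main__":
    unittest.main()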

Mutation Testing

A: Mutation testing is testing where our goal is to make the mutant software fail, and thus demonstrate the adequacy of our test cases. How do we perform mutation testing?
Step one: We create a set of mutant software. Each mutant differs from the original software by one mutation, i.e. one single syntax change made to one of its program statements, so each mutant contains one single fault.
Step two: We write and apply test cases to the original software and to the mutant software.
Step three: We evaluate the results based on the following criteria: our test case is inadequate if both the original software and all of the mutant software generate the same output; our test case is adequate if it detects faults in our software, or if at least one mutant generates a different output than the original software does for that test case.
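As a rough illustration of these three steps (the is_even function is made up, and the mutant differs from it by a single operator change):

# Original software.
def is_even(n):
    return n % 2 == 0

# Mutant software: one single syntax change (== becomes !=), i.e. one fault.
def is_even_mutant(n):
    return n % 2 != 0

def run_test_case(func):
    # Step two: apply the same test case to the original and to the mutant.
    return func(4) and not func(7)

# Step three: evaluate the results.
print(run_test_case(is_even))         # True  -> the original passes the test case
print(run_test_case(is_even_mutant))  # False -> the mutant produces a different
                                      # output, so the test case is adequate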

Monkey Testing

A: Monkey testing is random testing performed by automated testing tools (after the latter have been developed by humans). These automated testing tools are considered "monkeys" if they work at random. We call them "monkeys" because it is widely believed that if we allow six monkeys to pound on six typewriters at random for a million years, they will recreate all the works of Isaac Asimov.

There are "smart monkeys" and "dumb monkeys". "Smart monkeys" are valuable for load and stress testing; they will find a significant number of bugs, but are also very expensive to develop. "Dumb monkeys", on the other hand, are inexpensive to develop, are able to do some basic testing, but they will find few bugs. However, the bugs "dumb monkeys" do find will be hangs and crashes, i.e. the bugs you least want to have in your software product. "Monkey testing" can be valuable, but they should not be your only testing.

Why do we compare files?

We compare files for configuration management, revision control, requirement version control, or document version control. Examples are Rational ClearCase, DOORS, PVCS, and CVS. CVS, for example, enables several, often distant, developers to work together on the same source code.
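Outside of those tools, the same kind of comparison can be sketched directly; for example, Python's standard difflib module can show what changed between two revisions of a file (old_version.txt and new_version.txt are just placeholder names).

import difflib

# Placeholder names for two revisions of the same source or document file.
with open("old_version.txt") as old, open("new_version.txt") as new:
    diff = difflib.unified_diff(
        old.readlines(), new.readlines(),
        fromfile="old_version.txt", tofile="new_version.txt")
    print("".join(diff))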


What is stochastic testing?

Stochastic testing is the same as "monkey testing", but "stochastic testing" is a more technical-sounding name for the same testing process.
Stochastic testing is black box, random testing performed by automated testing tools. Stochastic testing is a series of random tests run over time. The software under test typically passes the individual tests, but our goal is to see whether it can keep passing a large number of individual tests over time.

What is PDR?

PDR is an acronym. In the world of software QA or testing, it stands for "peer design review", informally known as "peer review".

What is Peer-Review?
In the most general of terms, peer-review is the act of having another writer read what you have written and respond in terms of its effectiveness. This reader attempts to identify the writing's strengths and weaknesses, and then suggests strategies for revising it. The hope is that not only will the specific piece of writing be improved, but that future writing attempts will also be more successful. Peer-review happens with all types of writing, at any stage of the process, and with all levels of writers.


Why is it that my company requires a PDR?

Your company requires a PDR, because your company wants to be the owner of the very best possible design and documentation. Your company requires a PDR, because when you organize a PDR, you invite, assemble and encourage the company's best experts to voice their concerns as to what should or should not go into your design and documentation, and why.

Please don't be negative. Please do not assume your company is finding fault with your work, or distrusting you in any way. Remember, PDRs are not about you, but about design and documentation. There is a 90+ percent probability your company wants you, likes you and trusts you, because you're a specialist, and because your company hired you after a long and careful selection process.

Your company requires a PDR, because PDRs are useful and constructive. Just about everyone - even corporate chief executive officers (CEOs) - attends PDRs from time to time. When a corporate CEO attends a PDR, he has to listen to "feedback" from shareholders. When a CEO attends a PDR, the meeting is called the "annual shareholders' meeting".


Smoke and sanity testing

There are two related types of tests, smoke and sanity. What exactly are they? Here we go.....
The general definition (related to hardware) of smoke testing is: smoke testing is a safe, harmless procedure of blowing smoke into parts of the sewer and drain lines to detect sources of unwanted leaks and sources of sewer odors.
When it is related to software, the definition is: smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details.
Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
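A minimal smoke/sanity suite, sketched in Python with stand-in functions in place of a real application (the names and behavior below are assumptions, not an actual product), checks only that the most crucial functions respond at all.

import unittest

def connect_to_database():
    # Stand-in for the real application's database connection.
    return True

def login(username, password):
    # Stand-in for the real application's login screen.
    return username == "demo_user" and password == "demo_password"

class SmokeTests(unittest.TestCase):
    # Non-exhaustive checks: is the application up and answering at all?
    def test_database_connection(self):
        self.assertTrue(connect_to_database())

    def test_valid_user_can_log_in(self):
        self.assertTrue(login("demo_user", "demo_password"))

if __name__ == "__main__":
    unittest.main()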
