Monday, September 3, 2007

Software Testing Interview Questions

What is Acceptance Testing?
Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.

What is Accessibility Testing?
Verifying that a product is accessible to people with disabilities (e.g., visual, hearing, motor, or cognitive impairments).

What is Ad Hoc Testing?
A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

What is Agile Testing?
Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

What is Application Binary Interface (ABI)?
A specification defining requirements for portability of applications in binary form across different system platforms and environments.

What is Application Programming Interface (API)?
A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

What is Automated Software Quality (ASQ)?
The use of software tools, such as automated testing tools, to improve software quality.

What is Automated Testing?
Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, and other testing. More formally, it is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

What is Backus-Naur Form?
A metalanguage used to formally describe the syntax of a language.

What is Basic Block?
A sequence of one or more consecutive, executable statements containing no branches.

What is Basis Path Testing?
A white box test case design technique that uses the algorithmic flow of the program to design tests.

What is Basis Set?
The set of tests derived using basis path testing.

What is Baseline?
The point at which some deliverable produced during the software engineering process is put under formal change control.

What will you do during your first day on the job?

What would you like to do five years from now?
Tell me about the worst boss you've ever had.

What are your greatest weaknesses?

What are your strengths?

What is a successful product?

What do you like about Windows?

What is good code?

What are basic, core, practices for a QA specialist?

What do you like about QA?

What has not worked well in your previous QA experience and what would you change?

How will you begin to improve the QA process?

What is the difference between QA and QC?

What is UML and how to use it for testing?

What is Beta Testing?
Testing of a pre-release version of a software product, conducted by customers.

What is Binary Portability Testing?
Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

What is Black Box Testing?
Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

What is Bottom Up Testing?
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

What is Boundary Testing?
Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

What is Bug?
A fault in a program which causes the program to perform in an unintended or unanticipated manner.

What is Boundary Value Analysis?
BVA is similar to Equivalence Partitioning but focuses on "corner cases", i.e. values at or just outside the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
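
As a brief illustration, here is a minimal sketch of boundary value tests in Python; validate_input() and its accepted range of -100 to 1000 are hypothetical, taken from the example above.

```python
import unittest


def validate_input(value):
    """Hypothetical function under test: accepts integers from -100 to 1000 inclusive."""
    return -100 <= value <= 1000


class BoundaryValueTests(unittest.TestCase):
    def test_values_on_and_just_inside_the_boundaries_are_accepted(self):
        for value in (-100, -99, 999, 1000):
            self.assertTrue(validate_input(value), value)

    def test_values_just_outside_the_boundaries_are_rejected(self):
        for value in (-101, 1001):
            self.assertFalse(validate_input(value), value)


if __name__ == "__main__":
    unittest.main()
```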

What is Branch Testing?
Testing in which all branches in the program source code are tested at least once.
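
To make the idea concrete, here is a minimal Python sketch; classify() is a hypothetical function, and the two tests together exercise both branches of its single decision.

```python
import unittest


def classify(age):
    if age >= 18:        # branch taken when age >= 18
        return "adult"
    return "minor"       # branch taken otherwise


class BranchCoverageTests(unittest.TestCase):
    def test_adult_branch(self):
        self.assertEqual(classify(21), "adult")

    def test_minor_branch(self):
        self.assertEqual(classify(12), "minor")


if __name__ == "__main__":
    unittest.main()
```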

What is Breadth Testing?
A test suite that exercises the full functionality of a product but does not test features in detail.

What is CAST?
Computer Aided Software Testing.

What is CMMI?

What do you like about computers?

Do you have a favourite QA book? More than one? Which ones? And why.

What is the responsibility of programmers vs QA?

What are the properties of a good requirement?

How do you test if there is minimal or no documentation about the product?

What are all the basic elements in a defect report?

Is an "A fast database retrieval rate" a testable requirement?

What is software quality assurance?

What is the value of a testing group? How do you justify your work and budget?

What is the role of the test group vis-à-vis documentation, tech support, and so forth?

How much interaction with users should testers have, and why?

How should you learn about problems discovered in the field, and what should you learn from those problems?

What are the roles of glass-box and black-box testing tools?

What issues come up in test automation, and how do you manage them?

What is Capture/Replay Tool?
A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

What is CMM?
The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

What is Cause Effect Graph?
A graphical representation of inputs and their associated output effects, which can be used to design test cases.

What is Code Complete?
Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

What is Code Coverage?
An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

What is Code Inspection?
A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

What is Code Walkthrough?
A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

What is Coding?
The generation of source code.

What is Compatibility Testing?
Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

What is Component?
A minimal software item for which a separate specification is available.

What is Concurrency Testing?
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

What is Conformance Testing?
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

What is Context Driven Testing?
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

What development model should programmers and the test group use?

How do you get programmers to build testability support into their code?

What is the role of a bug tracking system?

What are the key challenges of testing?

Have you ever completely tested any part of a product? How?

Have you done exploratory or specification-driven testing?

Should every business test its software the same way?

Discuss the economics of automation and the role of metrics in testing.

Describe components of a typical test plan, such as tools for interactive products and for database products, as well as cause-and-effect graphs and data-flow diagrams.

When have you had to focus on data integrity?

What are some of the typical bugs you encountered in your last assignment?

How do you prioritize testing tasks within a project?

How do you develop a test plan and schedule?

Describe bottom-up and top-down approaches. When should you begin test planning?

When should you begin testing?

What is Conversion Testing?
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

What is Cyclomatic Complexity?
A measure of the logical complexity of an algorithm, used in white-box testing.
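
For a control flow graph with E edges, N nodes, and P connected components, cyclomatic complexity is commonly computed as V(G) = E - N + 2P. For a single routine this works out to the number of binary decision points plus one, and it gives the number of linearly independent paths, i.e. the size of the basis set used in basis path testing.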

What is Data Dictionary?
A database that contains definitions of all data items defined during analysis.

What is Data Flow Diagram?
A modeling notation that represents a functional decomposition of a system.

What is Data Driven Testing?
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
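
As a minimal sketch, the test below is parameterized by rows from a CSV file rather than by hard-coded values; login() and the data file are hypothetical, and the file is generated inside the test only so the sketch runs on its own.

```python
import csv
import os
import tempfile
import unittest


def login(username, password):
    """Hypothetical function under test."""
    return username == "admin" and password == "secret"


class DataDrivenLoginTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # In real use this CSV would be maintained outside the test code;
        # it is written here only to keep the sketch self-contained.
        cls.data_file = os.path.join(tempfile.gettempdir(), "login_cases.csv")
        with open(cls.data_file, "w", newline="") as handle:
            csv.writer(handle).writerows([
                ["admin", "secret", "pass"],
                ["admin", "wrong", "fail"],
                ["guest", "secret", "fail"],
            ])

    def test_login_with_externally_defined_data(self):
        with open(self.data_file, newline="") as handle:
            for username, password, expected in csv.reader(handle):
                with self.subTest(username=username, password=password):
                    self.assertEqual(login(username, password), expected == "pass")


if __name__ == "__main__":
    unittest.main()
```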

What is Debugging?
The process of finding and removing the causes of software failures.

What is Defect?
Nonconformance to requirements or functional / program specification

What is Dependency Testing?
Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

What is Depth Testing?
A test that exercises a feature of a product in full detail.

What is Dynamic Testing?
Testing software through executing it. See also Static Testing.

What is Emulator?
A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

What is Endurance Testing?
Checks for memory leaks or other problems that may occur with prolonged execution.

What is End-to-End testing?
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

What is Equivalence Class?
A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.

What is Equivalence Partitioning?
A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
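
A minimal sketch in Python, assuming a hypothetical shipping_fee() function: one representative value is tested from each equivalence class of its input.

```python
import unittest


def shipping_fee(weight_kg):
    """Hypothetical function under test."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 5:
        return 4.99
    return 9.99


class EquivalencePartitionTests(unittest.TestCase):
    def test_invalid_partition(self):
        with self.assertRaises(ValueError):
            shipping_fee(-2)                       # representative of weight <= 0

    def test_light_parcel_partition(self):
        self.assertEqual(shipping_fee(3), 4.99)    # representative of 0 < weight <= 5

    def test_heavy_parcel_partition(self):
        self.assertEqual(shipping_fee(20), 9.99)   # representative of weight > 5


if __name__ == "__main__":
    unittest.main()
```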

What is Exhaustive Testing?
Testing which covers all combinations of input values and preconditions for an element of the software under test.

What is Functional Decomposition?
A technique used during planning, analysis and design; creates a functional hierarchy for the software.

What is Functional Specification?
A document that describes in detail the characteristics of the product with regard to its intended features.

What is Functional Testing?
Testing the features and operational behavior of a product to ensure they correspond to its specifications. Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

What is Glass Box Testing?
A synonym for White Box Testing.

Do you know of metrics that help you estimate the size of the testing effort?

How do you scope out the size of the testing effort?

How many hours a week should a tester work?

How should your staff be managed?

How about your overtime?

How do you estimate staff requirements?

What do you do (with the project tasks) when the schedule fails?

How do you handle conflict with programmers?

How do you know when the product is tested well enough?

What characteristics would you seek in a candidate for test-group manager?

What do you think the role of test-group manager should be? Relative to senior management? Relative to other technical groups in the company? Relative to your staff?

How do your characteristics compare to the profile of the ideal manager that you just described?

How does your preferred work style work with the ideal test-manager role that you just described?

What is different between the way you work and the role you described?

Who should you hire in a testing group and why?

What is Gorilla Testing?
Testing one particular module or piece of functionality heavily.

What is Gray Box Testing?
A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification, but using some knowledge of its internal workings.

What is High Order Tests?
Black-box tests conducted once the software has been integrated.

What is Independent Test Group (ITG)?
A group of people whose primary responsibility is software testing.

What is Inspection?
A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).

What is Integration Testing?
Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

What is Installation Testing?
Confirms that the application under test installs correctly on the supported platforms and configurations, including first-time installs, upgrades, and uninstalls, and that it is operational afterwards.

What is Localization Testing?
This term refers to adapting software for a specific locality or market, for example translating the user interface and verifying locale-specific formats and conventions.

What is Loop Testing?
A white box testing technique that exercises program loops.

What is Metric?
A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.

What is Monkey Testing?
Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.

What is Negative Testing?
Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

What is Path Testing?
Testing in which all paths in the program source code are tested at least once.

What is Performance Testing?
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
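
As a rough, hedged sketch of what an automated load driver does, the Python snippet below fires many concurrent calls at a hypothetical request() function and summarizes response times; a real performance test would use a dedicated tool against a real endpoint.

```python
import concurrent.futures
import statistics
import time


def request():
    """Hypothetical operation under load (e.g. an HTTP call or a database query)."""
    time.sleep(0.01)  # stand-in for real work


def timed_request(_):
    start = time.perf_counter()
    request()
    return time.perf_counter() - start


if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
        durations = list(pool.map(timed_request, range(500)))
    print(f"median response time: {statistics.median(durations) * 1000:.1f} ms")
    print(f"worst response time:  {max(durations) * 1000:.1f} ms")
```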

What is Positive Testing?
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

What is Quality Assurance?
All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

What is Quality Audit?
A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

What is Quality Circle?
A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

What is Quality Control?
The operational techniques and the activities used to fulfill and verify requirements of quality.

What is Quality Management?
That aspect of the overall management function that determines and implements the quality policy.

What is Quality Policy?
The overall intentions and direction of an organization as regards quality as formally expressed by top management.

What is Quality System?
The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

What is Race Condition?
A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
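
The Python sketch below shows the classic form of the problem: two threads perform unsynchronized read-modify-write updates on a shared counter, so increments can be lost and the final value is often less than expected. Wrapping the increment in a lock removes the race.

```python
import threading

counter = 0


def increment_many(times):
    global counter
    for _ in range(times):
        counter += 1  # read-modify-write with no lock: not atomic


threads = [threading.Thread(target=increment_many, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # frequently less than 200000 because concurrent updates interleave
```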

What is Ramp Testing?
Continuously raising an input signal until the system breaks down.

What is Recovery Testing?
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

What is Release Candidate?
A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

What is Sanity Testing?
Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.

What is Scalability Testing?
Performance testing focused on ensuring the application under test gracefully handles increases in work load.

What is the role of metrics in comparing staff performance in human resources management?

How do you estimate staff requirements?

What do you do (with the project staff) when the schedule fails?

Describe some staff conflicts you've handled.

Why did you ever become involved in QA/testing?

What is the difference between testing and Quality Assurance?

What was a problem you had in your previous assignment (testing if possible)? How did you resolve it?

What are two of your strengths that you will bring to our QA/testing team?

What do you like most about Quality Assurance/Testing?

What do you like least about Quality Assurance/Testing?

What is the Waterfall Development Method and do you agree with all the steps?

What is the V-Model Development Method and do you agree with this model?

What is Security Testing?
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

What is Smoke Testing?
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

What is Soak Testing?
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

What is Software Requirements Specification?
A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

What is Software Testing?
A set of activities conducted with the intent of finding errors in software.

What is Static Analysis?
Analysis of a program carried out without executing the program.

What is Storage Testing?
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

What is Stress Testing?
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

What is Structural Testing?
Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.

What is System Testing?
Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

What is Testability?
The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

What is Test Bed?
An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

What is Test Case?
Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements testing, test steps, verification steps, prerequisites, outputs, test environment, etc. A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

What is Test Driven Development?
Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.
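
A minimal sketch of the rhythm using Python's unittest; word_count() and its tests are hypothetical. The tests are written first and fail ("red"), then the simplest implementation is added to make them pass ("green").

```python
import unittest


def word_count(text):
    """Hypothetical production code, written only after the tests below had failed."""
    return len(text.split())


class WordCountTests(unittest.TestCase):
    def test_counts_words_separated_by_whitespace(self):
        self.assertEqual(word_count("to be or not to be"), 6)

    def test_empty_string_has_zero_words(self):
        self.assertEqual(word_count(""), 0)


if __name__ == "__main__":
    unittest.main()
```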

What is Test Environment?
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

What is Test First Design?
Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

What is Test Plan?
A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.

What is Test Specification?
A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.

What is Test Suite?
A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
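
A minimal sketch of the idea with Python's unittest; the two test case classes are hypothetical stand-ins for tests of different modules that are grouped and run together as one suite.

```python
import unittest


class ModuleATests(unittest.TestCase):
    def test_something_in_module_a(self):
        self.assertTrue(True)


class ModuleBTests(unittest.TestCase):
    def test_something_in_module_b(self):
        self.assertEqual(1 + 1, 2)


def regression_suite():
    """Group related test cases so they can be run together."""
    loader = unittest.defaultTestLoader
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(ModuleATests))
    suite.addTests(loader.loadTestsFromTestCase(ModuleBTests))
    return suite


if __name__ == "__main__":
    unittest.TextTestRunner().run(regression_suite())
```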

What is Thread Testing?
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

What is Top Down Testing?
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

What is Total Quality Management?
A company commitment to develop a process that achieves high-quality products and customer satisfaction.

What is Traceability Matrix?
A document showing the relationship between Test Requirements and Test Cases.

What is Use Case?
The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

What is Validation?
The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.

What is Verification?
The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

Describe a problem that you had with a test automation tool. How do you plan test automation?

Can test automation improve test effectiveness?

What is data-driven automation?

What are the main attributes of test automation?

Does automation replace manual testing?

How will you choose a tool for test automation? How will you evaluate the tool for test automation?

What are main benefits of test automation?

What could go wrong with test automation?

How would you describe testing activities?

What testing activities might you want to automate?

User Acceptance Testing

For years, User Acceptance Test (UAT) has been synonymous with screen testing. This is logical because for years most of the user experience was what was on the screen. With today’s integrated systems, validating the user experience means going behind the screens to ensure the user’s data has been recorded into downstream applications correctly.
Therefore, UAT needs to be defined as more than simply screens. UAT teams need easy, automated ways of accessing the behind-the-screens information flow for validation. By using Solstice's recording capability to create back-end tests, UAT teams can develop detailed tests to expose critical data that is important to users but is not displayed on screens. Further, once errors are found, the detailed back-end test can pinpoint exactly where and what has caused a problem. Solstice gives UAT teams the increased visibility demanded by today's integrated systems.

Component and End-to-End Testing

Integration projects have a very strong need for a well-planned component testing phase. Integration projects are assembly oriented. We tend to think of system development in terms of "stacks" or Legos®. That is often valid for stand-alone applications. However, if you look at most integrated system diagrams, they look more like Tinker Toys©. Integrated systems need to be assembled and validated one step at a time, just like Tinker Toys©.
Organizations that employ automated integration testing at the unit level can combine unit tests into component tests with simple XP-like folder structures. Where unit tests are not available, Solstice’s recorders can create baseline tests in minutes, including the most complex message structures. As the component tests are built, they can be saved into a central library and used as building blocks for other tests.
Component tests can be incrementally combined until they provide a validation of the entire process flow. Once the flow has been validated, testing the permutation of alternatives that can flow across the path is the next step. Solstice’s data substitution capability makes it simple to use one test case and a file of options to cover all expected paths. Data substitution streamlines testing and test maintenance and makes testing large numbers of alternatives practical.

Integration Testing Solutions for Quality Assurance Teams

QA teams vary in their background and make up. Some QA teams have embraced integration testing. Integration testing veterans have generally approached testing with stubs, harnesses or manual log traces and file dumps. It is a manual, time-consuming process, but it is the only way to access and validate integration data. Other QA teams have been on the boundaries of integration testing and are still focused on the screens and applications. QA teams on the boundaries generally do not have access to the integration layer required to validate integration process flows. For both types of QA approaches, Solstice provides a dramatic improvement in the way business is done.
For QA teams testing their integration manually – the implementation of automated integration testing from Solstice will save substantial time, resources and money. Solstice allows testing teams to access the integration layer for testing. Using an XP-like user interface, Solstice allows testers to capture integration pipes the same way they currently capture and test screens. With a similar test case/test suite paradigm and formatted messages and content, QA teams can accomplish in hours what used to take days and weeks. For QA teams not currently involved in integration testing, Solstice can be a bridge for departments to effectively participate in this increasingly valuable function. Solstice gives QA staff the ability to record and create integration test cases without regular intervention from integration and messaging specialists. Messages tend to be large and complicated, with headers that have little to do with content. Solstice captures both the message and the payload in a familiar XP-like interface that allows testers to see messages on a field-by-field basis. Solstice makes it as simple to test integration as it is to test applications.

Regression Testing

Once an integrated system is assembled, the next challenge is an assessment and creation of a regression test to assure the risk thresholds of an organization are met. Fortune 500 customers report that 50% of their critical data is behind the screens. With so much vital data in message pipes, and with the permutations and combinations of paths through the system growing with every system change, regression testing the integration increments becomes the only way of assuring that integrated systems will work properly. For those teams who have done component testing, individual component and end-to-end tests can be combined into regression suites. For those teams who are starting a regression initiative from scratch, Solstice's recording capability can help them quickly create the test cases and suites to build their regression library. Solstice provides the only repeatable Enterprise Integration Testing tool that tests a process step-by-step as it moves through applications and across protocols. By capturing the message traffic between systems, Solstice's regression tests can isolate problem components, or provide a step-by-step view of an entire process. Each step and its content is saved into baseline tests and suites that can exercise tens of thousands of alternatives with the click of a button.

Compliance Testing

Compliance initiatives are moving into their second and third generation in many organizations, and the focus is shifting to testing and validation. When regulations such as Sarbanes-Oxley, The Patriot Act or HIPAA first emerged, the initial thrust was to document processes.
Documentation revealed flaws that created the second wave of compliance projects, which focused on change management and governance, both of which are fueled by the details of processes and data. These initiatives are shining the spotlight on testing and validation as a critical component in the compliance process. Testing provides several vital compliance components: the assurance that risk has been addressed as well as the traceability of how it was accomplished. Compliance dashboards and reporting are useless without complete and accurate information. Organizations are recognizing that compliance is a ground-up, quality-oriented initiative, not a stand alone project.
Compliance is not purely a data driven effort. Compliance looks at process and transformation as well. With Solstice's focus on step-by-step integrated processes across technologies, it is ideally suited to serve up the end-to-end perspective the compliance initiatives desire. Fortune 500 clients report that more than half of their critical data is behind the screens. Validation and testing techniques must delve into the integration framework to obtain the comprehensive data and process validation. Solstice provides the only enterprise-level, cross-protocol testing solution that provides a repeatable way of validating that this behind-the-screens data meets compliance standards.

Performance Testing

Performance testing is critical for any application, and integrated systems are no exception. The more integrated the system, the more complicated performance testing is. Often the setup of a performance test can take days. Although Solstice is not a performance testing tool, it is very useful in reducing setup time.
There are two primary ways that Solstice makes life easier for performance test teams. First, Solstice can isolate a component instead of having to assemble an entire process and drive a test through a GUI. Because Solstice's tests are captured at each intermediate step, an individual system can be isolated and messages fired directly at that system. Solstice provides a replay throttle that allows users to control the speed at which messages are sent to an application. The distributed architecture of Solstice's protocol server that serves up the messages permits messages to be sent from various geographic locations while centrally coordinated.
Second, Solstice’s Simulation capability can replace connecting systems or databases that are required for process operation but may not be readily available. Eliminating the setup time for associated systems and databases surrounding a test is probably the biggest time saver for performance tests.

Test Life Cycle

1. Requirements stage
2. Test Plan
3. Test Design.
4. Design Reviews
5. Code Reviews
6. Test Cases preparation.
7. Test Execution
8. Test Reports.
9. Bugs Reporting
10. Reworking on patches.
11. Release to production.

Requirements Stage
Normally in many companies, the developers themselves take part in the requirements stage. Especially in product-based companies, a tester should also be involved at this stage, since a tester thinks from the user's perspective, whereas a developer typically does not. A separate panel should be formed for each module, comprising a developer, a tester, and a user. Panel meetings should be scheduled in order to gather everyone's views. All the requirements should be documented properly for further use; this document is called the "Software Requirements Specification".
Test Plan
Without a good plan, no work is a success; successful work always starts with a good plan. The testing process of software likewise requires a good plan. The test plan document is the most important document for bringing in a process-oriented approach. A test plan document should be prepared after the requirements of the project are confirmed. The test plan document must contain the following information:
• Total number of features to be tested
• Testing approaches to be followed
• The testing methodologies
• Number of man-hours required
• Resources required for the whole testing process
• The testing tools that are to be used
• The test cases, etc.
Test Design and Design Reviews
The software design is done in a systematic manner or using UML. The tester can review the design and suggest ideas and any modifications needed.
Code Reviews
Code reviews are similar to unit testing. Once the code is ready for release, the tester should be ready to do unit testing on it, with his or her own unit test cases. Though a developer does the unit testing, a tester must also do it; the developers may overlook some of the minute mistakes in the code, which a tester may find.
Test Execution and Bugs Reporting
Once the unit testing is completed and the code is released to QA, the functional testing is done. A top-level testing is done at the beginning of the testing to find out the top-level failures. If any top-level failures occur, the bugs should be reported to the developer immediately to get the required workaround. The test reports should be documented properly and the bugs have to be reported to the developer after the testing is completed.
Release to Production
Once the bugs are fixed, another release is given to the QA with the modified changes. Regression testing is executed. Once the QA assures the software, the software is released to production. Before releasing to production, another round of top-level testing is done. The testing process is an iterative process. Once the bugs are fixed, the testing has to be done repeatedly. Thus the testing process is an unending process.

DataBase Testing

Database Testing



Why Test an RDBMS?
There are several reasons why you need to develop a comprehensive testing strategy for your RDBMS:
The current state of the art in many organizations is for data professionals to control changes to the database schemas, for developers to visually inspect the database during construction, and to perform some form of formal testing during the test phase at the end of the lifecycle. Unfortunately, none of these approaches prove effective. Application developers will often go around their organization's data management group because they find them too difficult to work with, too slow in the way they work, or sometimes they don't even know they should be working together. The end result is that the teams don't follow the desired data quality procedures and as a result quality suffers.

Uncomfortable Question:
Isn't it time that we stopped talking about data quality and actually started doing something about it?

What to test in an RDBMS.
Black-Box Testing at the Interface
White/Clear-Box Testing Internally Within the Database
O/R mappings (including the meta data)
Incoming data values
Outgoing data values (from queries, stored functions, views ...)
Scaffolding code (e.g. triggers or updateable views) which support refactorings
Typical unit tests for your stored procedures, functions, and triggers
Existence tests for database schema elements (tables, procedures, ...)
View definitions
Referential integrity (RI) rules
Default values for a column
Data invariants for a single column
Data invariants involving several columns

When Should We Test?
Agile software developers take a test-first approach to development: you write a test before you write just enough production code to fulfill that test. The first step is to quickly add a test, basically just enough code to fail. Next you run your tests, often the complete test suite although for the sake of speed you may decide to run only a subset, to ensure that the new test does in fact fail. You then update your functional code to make it pass the new test. The fourth step is to run your tests again. If they fail, you need to update your functional code and retest. Once the tests pass, the next step is to start over.

How to Test
Although you want to keep your database testing efforts as simple as possible, at first you will discover that you have a fair bit of both learning and set up to do. In this section I discuss the need for various database sandboxes in which people will test: in short, if you want to do database testing then you're going to need test databases (sandboxes) to work in. I then overview how to write a database test and more importantly describe setup strategies for database tests. Finally, I overview several database testing tools which you may want to consider.
Database Sandboxes
A common best practice on agile teams is to ensure that developers have their own "sandboxes" to work in; there are several types of sandboxes your team may choose to use. In each sandbox you'll have a copy of the database. In the development sandbox you'll experiment, implement new functionality, refactor existing functionality, and validate your changes through testing; once you're happy with your work you'll eventually promote it to the project integration sandbox. In this sandbox you will rebuild your system and then run all the tests to ensure you haven't broken anything (if so, it's back to the development sandbox). Occasionally, at least once an iteration/cycle, you'll deploy your work to the next level (demo and pre-production testing), and every so often (perhaps once every six to twelve months) into production. The primary advantage of sandboxes is that they help to reduce the risk of technical errors adversely affecting a larger group of people than is absolutely necessary at the time.

Writing Database Tests
There's no magic when it comes to writing a database test; you write one just like you would any other type of test. Database tests are typically a three-step process (a minimal sketch follows this list):
1. Set up the test. You need to put your database into a known state before running tests against it. There are several strategies for doing so.
2. Run the test. Using a database regression testing tool, run your database tests just like you would run your application tests.
3. Check the results. You'll need to be able to do "table dumps" to obtain the current values in the database so that you can compare them against the results which you expected.
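
The sketch below walks through the three steps against an in-memory SQLite database; the customers table and the apply_discount() function under test are hypothetical.

```python
import sqlite3
import unittest


def apply_discount(conn, customer_id, percent):
    """Hypothetical code under test: records a discount for a customer."""
    conn.execute("UPDATE customers SET discount = ? WHERE id = ?", (percent, customer_id))
    conn.commit()


class CustomerDiscountTest(unittest.TestCase):
    def setUp(self):
        # 1. Set up the test: put the database into a known state.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, discount REAL)")
        self.conn.execute("INSERT INTO customers VALUES (1, 0.0)")
        self.conn.commit()

    def test_discount_is_recorded(self):
        # 2. Run the test: exercise the code under test.
        apply_discount(self.conn, customer_id=1, percent=10.0)
        # 3. Check the results: dump the current values and compare to expectations.
        rows = self.conn.execute("SELECT id, discount FROM customers").fetchall()
        self.assertEqual(rows, [(1, 10.0)])

    def tearDown(self):
        self.conn.close()


if __name__ == "__main__":
    unittest.main()
```
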
Setting up Database Tests
To successfully test your database you must first know its exact state, and the best way to do that is to simply put the database into a known state before running your test suite. There are two common strategies for doing this:
1. Fresh start. A common practice is to rebuild the database, including both creation of the schema as well as loading of initial test data, for every major test run (e.g. testing that you do in your project integration or pre-production test sandboxes).
2. Data reinitialization. For testing in developer sandboxes, something that you should do every time you rebuild the system, you may want to forgo dropping and rebuilding the database in favor of simply reinitializing the source data. You can do this either by erasing all existing data and then inserting the initial data values back into the database, or you can simply run updates to reset the data values. The first approach is less risky and may even be faster for large amounts of data.
An important part of writing database tests is the creation of test data. You have several strategies for doing so:
1. Have source test data. You can maintain an external definition of the test data, perhaps in flat files, XML files, or a secondary set of tables. This data would be loaded in from the external source as needed.
2. Test data creation scripts. You develop and maintain scripts, perhaps using data manipulation language (DML) SQL code or simply application source code (e.g. Java or C#), which perform the necessary deletions, insertions, and/or updates required to create the test data (a minimal sketch of such a script follows this list).
3. Self-contained test cases. Each individual test case puts the database into a known state required for the test.
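
The sketch below illustrates the second strategy, a test data creation script, using SQLite; the customers and orders tables and the database file name are hypothetical.

```python
import sqlite3

# DML that resets the test data. In a real sandbox the schema already exists,
# so the CREATE TABLE IF NOT EXISTS lines are here only to keep the sketch self-contained.
TEST_DATA_DML = """
CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
DELETE FROM orders;
DELETE FROM customers;
INSERT INTO customers (id, name) VALUES (1, 'Test Customer');
INSERT INTO orders (id, customer_id, total) VALUES (1, 1, 99.50);
"""


def reset_test_data(db_path):
    """Put the test database's data back into its known initial state."""
    conn = sqlite3.connect(db_path)
    try:
        conn.executescript(TEST_DATA_DML)
        conn.commit()
    finally:
        conn.close()


if __name__ == "__main__":
    reset_test_data("integration_sandbox.db")  # hypothetical sandbox database file
```
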
What Testing Tools Are Available?
I believe that there are several critical features which you need to successfully test RDBMSs.
The testing tools should support the language that you're developing in. For example, for internal database testing, if you're a Microsoft SQL Server developer your T-SQL procedures should likely be tested using some form of T-SQL framework; similarly, Oracle DBAs should have a PL/SQL-based unit testing framework. In addition, you need tools which help you to put your database into a known state, which implies the need not only for test data generation but also for managing that data (like other critical development assets, test data should be under configuration management control).
Table 2. Some database testing tools.

Unit testing tools: tools which enable you to regression test your database. Examples: DBUnit, NDbUnit, OUnit for Oracle (being replaced soon by Qute), SQLUnit, TSQLUnit (for testing T-SQL in MS SQL Server), Visual Studio Team Edition for Database Professionals (includes testing capabilities).

Load testing tools: tools which simulate high usage loads on your database, enabling you to determine whether your system's architecture will stand up to your true production needs. Examples: Empirix, Mercury Interactive, RadView, Rational Suite Test Studio, Web Performance.

Test data generators: developers need test data against which to validate their systems; test data generators can be particularly useful when you need large amounts of data, perhaps for stress and load testing. Examples: Data Factory, Datatect, DTM Data Generator, Turbo Data.

Who Should Test?
During development cycles, the primary people responsible for doing database testing are application developers and agile DBAs. They will typically pair together, and because they are hopefully taking a TDD-approach to development the implication is that they'll be doing database unit testing on a continuous basis. During the release cycle your testers, if you have any, will be responsible for the final system testing efforts and therefore they will also be doing database testing.
The role of your data management (DM) group, or IT management if your organization has no DM group, should be to support your database testing efforts. They should promote the concept that database testing is important, should help people get the requisite training that they require, and should help obtain database testing tools for your organization. As you have seen, database testing is something that is done continuously by the people on development teams, it isn't something that is done by another group (except of course for system testing efforts). In short, the DM group needs to support database testing efforts and then get out of the way of the people who are actually doing the work.
Beware Coupling:
One danger with database regression testing, and with regression testing in general, is coupling between tests. If you put the database into a known state and then run several tests against that known state before resetting it, those tests are potentially coupled to one another. Coupling between tests occurs when one test counts on another one to run successfully so as to put the database into a known state for it. Self-contained test cases do not suffer from this problem, although they may be slower as a result, due to the need for additional initialization steps.

=========================================================================

Test Plan Template

1. OVERVIEW
1.1. PRODUCT NAME
1.2. PRODUCT REVISION
1.3. PROJECT LEADS
1.3.1. Marketing Lead (or other customer representative)
1.3.2. Program Manager
1.3.3. Development Lead
1.3.4. Test Lead
1.3.5. Build and Release Control Engineer
1.3.6. Legal representative
Include names, phone numbers, and email addresses for each. Note that this table will differ for a particular company or group. The goal is to ensure that anyone walking into the company or into the test role can easily identify and contact the people he/she needs to reach.
1.4. TEST PROJECT STAFF
1.4.1. Test requirements designers
1.4.2. Test case designers
1.4.3. Test personnel
1.4.3.1. For manual (i.e. non-automated) tests
1.4.3.2. For automated tests
1.4.3.3. Test automation programmers
1.4.4. Documentation reviewers
1.4.5. Legal reviewer
Include names, phone numbers, and email addresses for each. Note that there may be several people in each role, that one person may be obliged to fill multiple roles, and that some roles (e.g. legal reviewer) won’t be required for all projects.
1.5. PRODUCT OVERVIEW
This could also be “description of the change requirements” for maintenance projects.
1.5.1. Cut and paste from requirements document or specification a brief summary description of the product or change, or describe the project as understood by the developers—if the latter, make sure that there is agreement and sign-off from the customer.
1.6. TRACKING AND REPORTING SYSTEMS
1.6.1. Identify the defect tracking system in use.
1.6.2. Identify the manner and schedule by which defect reports are expected to be delivered to developers.
1.6.3. Identify parties that may have access to the tracking system.
1.6.4. Identify the change control system.
1.6.5. Identify the means by which the team is to be notified of changes to the requirements, the product, the test plan, etc.
2. TESTING SYNOPSIS
2.1. Items to be tested
2.1.1. Refer to the functional requirements that specify the features and functions to be tested. The description of the change need not be excessively detailed when there is a complete description to refer to in some other document. On the other hand, if there is no reasonable specification available, more detail is called for here.
2.2. Items not to be Tested
2.2.1. List the features and functions that will not be covered in this test plan. Identify briefly the reasons for leaving them out.
2.3. System Requirements
2.3.1. This section should be filled out in detail for new projects. For existing maintenance tasks, a simple cross-reference to the document describing existing system requirements is fine. Note any changes to previous system requirements, especially when support for a given product or platform is being dropped.
2.3.2. If there is a system requirement that could be unclear, make it specific; for example, for Web-based projects, identify not only the supported browsers but also the minimum versions of the supported browsers.
2.4. Standards/Reference material
2.4.1. List any standards or other reference material used in the creation of this test plan.
2.4.2. Identify standards for acceptance criteria, defect severity, testable specifications, and so on. (These standards may have to be created, or adapted from time to time; the first use of this test plan will require more work than later iterations.)
2.5. Glossary
2.5.1. In cases where terminology could be unfamiliar or open to interpretation, provide a list defining the unclear terms.
2.5.2. Obtain agreement on these terms from all interested parties.
2.5.3. Note: If no one is forthcoming with the information you need, make something up; they might not have done their jobs from the outset, but they’ll be happy to correct your work! You will have achieved the goal, which is clarity and agreement.
3. TYPES OF TESTING
3.1. ACCEPTANCE TESTING
3.1.1. Detail a set of acceptance criteria—conditions that must be met before testing can begin. A smoke test should represent the bare minimum of acceptance testing.
3.1.2. As noted above, the ideal is to create a separate document for acceptance criteria that can be reused and referred to here. If any particular, specialized test cases not listed in that document will be used, refer to them here.
3.2. FEATURE LEVEL TESTING
This is the real meat of the test plan. Fill in the test categories below, itemizing the categories of tests along with references to the test library or catalog. Individual test cases should not be listed here; test requirements generally should not be either. The details should exist elsewhere and can be cross-referenced.
3.2.1. Task-Oriented Functional Tests
3.2.1.1. This is a detailed section, listing tests requirements for program features against functional specifications, user guides or other design related documents. If there are test matrices available listing these features and their interdependence (and there should be), refer to them.
3.2.2. Forced-Error Tests
3.2.2.1. Provide or refer to a list of all error conditions and messages. Identify the tests that will be run to force the program into error conditions.
3.2.3. Boundary Tests
3.2.3.1. Boundary tests—tests carried out at the lines between valid and invalid input, acceptable and unacceptable system requirements (such as memory, disk space, or timing), and other tests at the limits of performance—are the keys to eliminating duplication of effort. Identify the types of boundary tests that will be carried out. Note that such tests can also fall into the categories outlined below, so this section may be removed, or made a sub-section of those categories.
3.2.4. Integration Tests
3.2.4.1. Identify components or modules that can be combined and tested independently to reduce dependence on system testing. Identify any test harnesses or drivers that need to be developed.
3.2.5. System-Level Tests
3.2.5.1. Specify the tests that will be carried out to fully exercise the program as a whole to ensure that all elements of the integrated system function properly. Note that when unit and integration testing have been properly performed, the dependence upon system testing can be reduced.
3.2.6. Real World User-Level Test
3.2.6.1. In contrast to types of testing designed to find defects, identify tests that will demonstrate the successful functioning of the program as you expect the customer to use it. What type of workflow tests will be run? What type of “real work” will be carried out using the program?
3.2.7. Unstructured Tests
3.2.7.1. Specify the amount of ad-hoc or exploratory testing that will be carried out. Identify the scope and the time associated with this form of testing.
3.2.8. Volume Tests
3.2.8.1. Indicate the types of tests that will be carried out to see how the program deals with very large amounts of data, or with a large demand on timely processing. Note that these tests can rarely be performed without automation; identify the automation tools, test harnesses, or scripts that will be used. Ensure that the programs developed for the test automation effort are accompanied by their own sets of requirements, specifications, and development processes.
3.2.9. Stress Tests
3.2.9.1. Identify the limits under which the program is expected to perform. These may include number of transactions per unit time, timeouts, memory constraints, disk space constraints, and so on. Volume tests and stress tests are closely related; you may consider wrapping both into the same category.
3.2.9.2. How will the product be tested to push the upper functional limits of the program? Will specific tools or test suites be used to carry out stress tests? Ensure that these are reusable.
3.2.10. Performance Tests
3.2.10.1. Refer to the functional requirements that specify acceptable performance. Identify the functions that need to be measured, and the tests needed to show conformance to the requirements.
3.3. REGRESSION TESTING
3.3.1. At each stage of new development or maintenance, a subset of the regression test library should be run, focusing on the feature or function that has changed from the previous version. Unit, integration, and system tests are all viable places for regression testing. For small maintenance fixes, identify this subset. A good version control system can allow the building of older versions of the software for comparative purposes.
3.3.2. In the final phase of a complete development cycle, a full regression test cycle is run. Identify the test case libraries and suites that will be run.
3.3.3. Whether a subset or a full regression test run, existing test scripts, matrices and test cases should be used, whether automation is available or not. Identify the documents that describe the details. Emphasize regression tests for functions that are new or that have changed, for components that have had a history of vulnerability, for high-risk defects, and for previously-fixed severe defects.
3.4. CONFIGURATION AND COMPATIBILITY TESTING
3.4.1. If applicable, identify the types of software and hardware compatibility tests that will be carried out.
3.4.2. List operating systems, software applications, device drivers etc. that the product will be tested with or against.
3.4.3. List hardware environments required for in-house testing.
3.5. DOCUMENTATION TESTING/ONLINE HELP TESTING
3.5.1. Documentation and online help testing will be carried out to verify technical accuracy of documented material.
3.5.2. If a license agreement is included in or displayed by the product, or the portion of it to which this test plan refers, ensure the correct one is being used (see the next item below).
3.6. COPYRIGHTS AND LICENSE AGREEMENT
3.6.1. Identify any copyright notices displayed by the program. Verify that they are accurate and up to date.
3.6.2. In cases where an End-User License Agreement (EULA) is displayed by the program, which EULA will be used in this product? Provide a link to the file. Ensure that it is consistent with the one included in the product.
3.6.3. Receive sign-off from the legal department that this is the correct EULA for this product.
3.7. UTILITY, TOOL KIT, AND COLLATERAL TESTS
3.7.1. If there are any additional products or components to be included in the final product, or on the distribution media, list the types of tests that will be carried out, and the extent to which they shall be performed.
3.8. INSTALL/UNINSTALL TESTS
3.8.1. How will deployment and installation be tested?
3.8.2. How will the uninstallation or rollback process be tested?
3.8.3. Since some form of deployment is required for all software products, what generic installation and uninstallation test catalogs will be used or adapted for these tests?
3.9. CODE COVERAGE
3.9.1. What tools or processes will be used to assure that each line of code is run at least once during testing?
3.9.2. Have the developers performed coverage tests during unit or integration testing? Have they provided the results of these tests? Have they provided source code, test harnesses, or test tools?
3.9.3. Are there plans to cover all code during regression testing? If not, why not?
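One possible way to gather line coverage is sketched below with the coverage.py package; the "tests" directory name is an assumption, and any coverage tool the team already owns would serve equally well:

# A minimal sketch of gathering line coverage with the coverage.py package
# (pip install coverage). The test directory name is a placeholder.
import coverage
import unittest

cov = coverage.Coverage()
cov.start()

# Run the test suite while coverage measurement is active.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.report()  # prints per-file statement coverage; lines never executed stand out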
3.10. YEAR 2000 AND DATE COMPLIANCE
3.10.1. Identify date and time values that are accepted, calculated, and output by the program. Pay attention both to hard dates and timespans.
3.10.2. What tests, if any, will be carried out to make sure the program will continue to work correctly when dates on both sides of the year 2000 are processed?
3.10.3. What tests, if any, will be run to ensure that other forms of date processing are done correctly?
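A sketch of the kind of date boundary cases worth scripting; parse_and_advance is a hypothetical stand-in for the program's own date handling:

import datetime
import unittest

def parse_and_advance(date_string, days):
    """Hypothetical stand-in for the application's date handling."""
    parsed = datetime.datetime.strptime(date_string, "%Y-%m-%d").date()
    return parsed + datetime.timedelta(days=days)

class DateComplianceTests(unittest.TestCase):
    def test_rollover_from_1999_to_2000(self):
        self.assertEqual(parse_and_advance("1999-12-31", 1),
                         datetime.date(2000, 1, 1))

    def test_year_2000_is_a_leap_year(self):
        self.assertEqual(parse_and_advance("2000-02-28", 1),
                         datetime.date(2000, 2, 29))

    def test_timespan_across_the_boundary(self):
        self.assertEqual(parse_and_advance("1999-12-01", 45),
                         datetime.date(2000, 1, 15))

if __name__ == "__main__":
    unittest.main()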
3.11. INTERNATIONALIZATION
3.11.1. For products intended for global markets, what tests will be carried out to make sure the product can be easily localized (that is, adapted for a specific local market)? For products intended for Asian markets, what tests will be performed to verify that the program correctly handles multiple-byte character sets?
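A minimal sketch of a multiple-byte character-set check; the format_greeting function and the sample string are illustrative assumptions:

import unittest

def format_greeting(name):
    """Hypothetical stand-in for a routine that must handle non-ASCII input."""
    return f"Welcome, {name}!"

class MultiByteInputTests(unittest.TestCase):
    def test_japanese_name_survives_round_trip(self):
        name = "山田太郎"  # multi-byte characters must not be truncated or mangled
        result = format_greeting(name)
        self.assertIn(name, result)
        # Verify the value round-trips through UTF-8 without loss.
        self.assertEqual(result.encode("utf-8").decode("utf-8"), result)

if __name__ == "__main__":
    unittest.main()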
4. TEST SCHEDULE AND RESOURCES
4.1. Identify the estimated effort required to execute the test plan. Include both a range and a confidence level. Use the guidelines on pp. 3.23 – 3.44 of the course book to plan and review estimates.
4.2. Identify the resources available to carry out the test plan.
4.3. Identify time or resource constraints that will lead to a risk of the test project falling behind schedule, below expected scope, or below expected quality. Cross-reference this with the Unresolved Issues and Risks section later in this document.
4.4. If any testing is to be handled by another entity, such as another department or a third party test lab, identify them. List names and contact information at the beginning of this document. List the specific tasks they will be assigned to carry out. Include references to contracts with these people, and ensure that contracts are approved and signed.
5. TEST PHASES AND COMPLETION CRITERIA
5.1. Detail the planned test cycles and phases; these should be linked to the development plan for the project. Specify the type of testing being done in each phase. Typically unit testing will be done by the developer of the code, and need not be covered in detail in the test plan. Integration and system testing phases should be detailed here.
5.2. Outline the criteria for assessing the severity of found defects. List expectations for setting the priorities on resolving them. Collaborate with the developer(s), project managers, and the customer representatives on this.
5.3. Identify in advance the criteria that must be fulfilled before each stage of testing can be considered complete. Make these specific, measurable, and decidable; otherwise, expectations will differ and time will be wasted on discussion and debate.
5.4. If there are to be staged releases of system testing – typically alpha for internal releases, beta for limited releases to external test sites, and final releases, sometimes called “gold master” – define them. Define acceptance standards for each phase. Ideally these should be in a separate document that can be referred to here. Bear in mind that the standards set here may be overruled by some authority or another; for example, a product may ship with a higher than satisfactory number of minor defects at the behest of a marketing department or CFO that wants the product released with time as the most important consideration. Be prepared to accept such decisions dispassionately, but also be prepared to record them as failures to fulfill the standards set and agreed upon in advance. Companies and individuals easily forget and repeat mistakes when there is no record of breached agreements and their consequences; people learn and improve more easily when records of successes and failures are available.
6. UNRESOLVED ISSUES AND RISKS
6.1. Identify issues that have yet to be decided as of this draft of the plan. Note these as risks to the schedule, scope, or quality of the test effort.
6.2. Identify other risks that may have an impact on the success of the plan. Use the risks outlined in the course book and the attached speaker notes as a guideline to identifying common risks. Refer also to the Software Project Survival Guide (Steve McConnell), which includes a good list of risks for every phase of development. When assessing risk, don’t be optimistic; the quality of the test plan and the risk assessment is weakened by failure to assess risk realistically.
7. TEST PLAN REVIEW
7.1. Include plans for review of this test plan. Identify the parties to review and approve the document, either within the test group or with another set of developers or test engineers. Look at sample test plan checklists, such as that on p. 3.45 – 3.47 of the course book, or those in Software Project Survival Guide. Use ideas from these checklists to develop your own checklists, appropriate to the size and scope of the product. Identify here the checklist(s) that will be used.
7.2. Meet with developers and customers or customer representatives to ensure that the test plan meets their requirements.

Windows based CheckList

  1. Check List for Windows based applications
  2. Check Analec logo on every page.
  3. Check the tab sequence on every page.
  4. Check the tab sequence on all the pop up screens.
  5. Spellings & grammatical mistakes on all pages.
  6. Spellings & grammatical mistakes on pop up windows.
  7. Tabbing to an entry field with text in it should highlight the entire text in the field.
  8. If a field is disabled (grayed out), it should not receive focus. It should not be possible to select it with either the mouse or the TAB key.
  9. Tab to each button and press SPACE - this should activate the button.
  10. Tab to each button and press RETURN - this should activate the button.
  11. All pop up messages should be centrally aligned on the page.
  12. Use special characters ‘`’, ‘>’, ‘~’, ‘<’ as input in all text boxes.
  13. For each input to the system identify invalid values.
  14. Check for duplicate values for all inputs.
  15. Test with default values of the system.
  16. Range testing for all text boxes.
  17. Check the application with different screen resolutions.
  18. Drop downs should show the complete names of all items in it or there should be a tool tip if the names are too long for the drop down.
  19. Download file window should have the correct file extension and size of the file that is being downloaded.
  20. If no results found cursor should go back to the search box.
  21. All drop downs should start with ‘Please select’ or blank text.
  22. Pressing the Arrow on a drop down should give list of options. This List may be scrollable. You should not be able to type text in the box.
  23. Items should be in alphabetical order with the exception of blank/none which is at the top or the bottom of the list box.
  24. List box should allow a single selection to be chosen, by clicking with the mouse, or using the Up and Down Arrow keys.
  25. Check for a confirmation message before closing the application via the close button at the top right, the system menu at the top left of the title bar, right-clicking the task bar and choosing Close, and Alt+F4 from the keyboard.
  26. Check for pop up messages before deleting any item from the list
  27. Check size for dialogue boxes and alert windows. It should be as per the message on it.
  28. Check for the alignment of labels, buttons and text boxes of the application.
  29. Check all links on all pages.
  30. Check for update conditions and related effects on the system.
  31. Are there explicit date conditions? Current date, future date, invalid dates, range dates, expiry dates.
  32. What can cause corrupt inputs and how does system respond?

Web Check List

Check List for Web based applications

1.Check Company’s logo on every page.
2.Click on logo to check its opening the Company’s home page.
3.Check the tab sequence on every page.
4.Check the tab sequence on all the pop up screens.
5.Tabbing to an entry field with text in it should highlight the entire text in the field.
6.If a field is disabled (grayed out), it should not receive focus.
7.It should not be possible to select it with either the mouse or the TAB key.
8.Tab to each button and press SPACE - this should activate the button.
9.Tab to each button and press RETURN - this should activate the button.
10.All pop up messages should be centrally aligned on the page.
11.Use special characters ‘`’, ‘>’, ‘~’, ‘<’ as input in all text boxes.
12.For each input to the system identify invalid values.
13.Check for duplicate values for all inputs.
14.Test with default values of the system.
15.Spellings & grammatical mistakes on all pages.
16.Spellings & grammatical mistakes on pop up windows.
17.Check for consistent CSS.
18.Range testing and boundary value testing for all text boxes (a boundary-value sketch follows this checklist).
19.Check the text alignment on every page.
20.Check the page name in the browser title bar, the task bar and all other places it appears on the page.
21.Check all the links given on the page.
22.Check the page stability by changing the text size of the web page.
23.Check the stability of the page with different screen resolutions.
24.Secured pages should not open after un-checking ‘SSL & TLS’ through Internet Options.
25.Check login session expiration by clicking the ‘Log out’ tab, then pressing the browser ‘Back’ button and clicking any link. The system should keep the user logged out and show the login screen.
26.Check the session timeout of a web page by leaving the page idle for some time.
27.Drop downs should show the complete names of all items in it or there should be a tool tip if the names are too long for the drop down.
28.Download file window should have the correct file extension and size of the file that is being downloaded.
29.Check for the service failure after removing the encrypted string from the address bar.
30.Check the page after changing the encoding style of the page.
31.Login with different users to the same web page to check the availability of resources.
32.If no results found cursor should go back to the search box.
33.All drop downs should start with ‘Please select’ or blank text.
34.Pressing the Arrow on a drop down should give list of options. This List may be scrollable. You should not be able to type text in the box.
35.Items should be in alphabetical order with the exception of blank/none which is at the top or the bottom of the list box.
36.List box should allow a single selection to be chosen, by clicking with the mouse, or using the Up and Down Arrow keys. However, this depends on the situation.
37.Check for update conditions and related effects to the system.
38.Are there explicit date conditions? Current date, future date, invalid dates, range dates, expiry dates.
39.What can cause corrupt inputs and how does system respond?
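Item 18 above calls for range and boundary value testing; a minimal sketch of deriving such values, assuming a text box that accepts an integer between 1 and 100:

def boundary_values(minimum, maximum):
    """Return the classic boundary-value cases for a numeric input range."""
    return [minimum - 1, minimum, minimum + 1, maximum - 1, maximum, maximum + 1]

def is_valid_quantity(value, minimum=1, maximum=100):
    """Hypothetical validation rule for a quantity text box (assumed range 1-100)."""
    return minimum <= value <= maximum

# Expected outcomes: values just outside the range are rejected,
# values on and just inside the edges are accepted.
for value in boundary_values(1, 100):
    print(value, "accepted" if is_valid_quantity(value) else "rejected")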

Complete Test Cycle

Testing
Testing is a process used to help identify the correctness, completeness and quality of developed computer software. Even so, testing can never completely establish the correctness of computer software. In other words, testing is essentially criticism or comparison: comparing the actual value with the expected one.
There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following rote procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to that of review or inspection, the word testing is connoted to mean the dynamic analysis of the product—putting the product through its paces.
The quality of the application can and normally does vary widely from system to system but some of the common quality attributes include reliability, stability, portability, maintainability and usability. Refer to the ISO standard ISO 9126 for a more complete list of attributes and criteria.
Testing helps in verifying and validating that the software is working as it is intended to work. This involves using static and dynamic methodologies to test the application.
Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that the developer discard preconceived notions of the correctness of his/her developed software.
Software Testing Fundamentals: testing objectives include
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.
Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects; it can only show that software defects are present.

When Testing should start:
Testing early in the life cycle reduces the errors. Test deliverables are associated with every phase of development. The goal of a software tester is to find bugs, find them as early as possible, and make sure they are fixed.
The number one cause of Software bugs is the Specification. There are several reasons specifications are the largest bug producer.
In many instances a Spec simply isn’t written. Other reasons may be that the spec isn’t thorough enough, it’s constantly changing, or it’s not communicated well to the entire team. Planning software is vitally important. If it’s not done correctly bugs will be created.
The next largest source of bugs is the design; that's where the programmers lay the plan for their software. Compare it to an architect creating the blueprint for a building. Bugs occur here for the same reason they occur in the specification: it's rushed, changed, or not well communicated.
Coding errors may be more familiar to you if you are a programmer. Typically these can be traced to software complexity, poor documentation, schedule pressure or just plain dumb mistakes. It's important to note that many bugs that appear on the surface to be programming errors can really be traced to the specification. It's quite common to hear a programmer say, “Oh, so that's what it's supposed to do. If someone had told me that I wouldn't have written the code that way.”
The other category is the catch-all for what is left. Some bugs can be blamed on false positives, conditions that were thought to be bugs but really weren't. There may be duplicate bugs, multiple reports that resulted from the same root cause. Some bugs can be traced to testing errors.
Costs: The costs are logarithmic; that is, they increase tenfold as time increases. A bug found and fixed during the early stages, when the specification is being written, might cost next to nothing, or 10 cents in our example. The same bug, if not found until the software is coded and tested, might cost $1 to $10. If a customer finds it, the cost would easily top $100.
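A small illustration of the tenfold rule using the figures quoted above; the phase labels are illustrative and the dollar amounts are only the example values:

# Illustration of the tenfold cost rule quoted above: each later phase
# multiplies the cost of fixing the same bug by roughly ten.
phases = ["specification", "design", "coding", "testing", "release"]
cost = 0.10  # dollars to fix the bug while the specification is being written
for phase in phases:
    print(f"Bug found during {phase:<13}: ${cost:,.2f}")
    cost *= 10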
When to Stop Testing
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. "When to stop testing" is one of the most difficult questions for a test engineer. Common factors in deciding when to stop are:
Deadlines (release deadlines, testing deadlines.)
Test cases completed with certain percentages passed
Test budget depleted
Coverage of code/functionality/requirements reaches a specified point
The rate at which Bugs can be found is too small
Beta or Alpha Testing period ends
The risk in the project is under acceptable limit.
Practically, we feel that the decision to stop testing is based on the level of risk acceptable to management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resources project, the risk can be estimated simply by:
Measuring Test Coverage.
Number of test cycles.
Number of high priority bugs.
Test Strategy: How we plan to cover the product so as to develop an adequate assessment of quality. A good test strategy is:
Specific
Practical
Justified
The purpose of a test strategy is to clarify the major tasks and challenges of the test project. Test Approach and Test Architecture are other terms commonly used to describe what I’m calling test strategy.
Example of a poorly stated (and probably poorly conceived) test strategy:
"We will use black box testing, cause-effect graphing, boundary testing, and white box testing to test this product against its specification."
Test Strategy: Type of Project, Type of Software, when Testing will occur, Critical Success factors, Tradeoffs.
Test Plan - Why
Identify Risks and Assumptions up front to reduce surprises later.
Communicate objectives to all team members.
Foundation for Test Spec, Test Cases, and ultimately the Bugs we find.
Failing to plan = planning to fail.
Test Plan - What
Derived from Test Approach, Requirements, Project Plan, Functional Spec., and Design Spec.
Details the project-specific Test Approach.
Lists general (high level) Test Case areas.
Includes a testing Risk Assessment.
Includes a preliminary Test Schedule.
Lists Resource requirements.
Test Plan
The test strategy identifies multiple test levels, which are going to be performed for the project. Activities at each level must be planned well in advance and it has to be formally documented. Based on the individual plans only, the individual test levels are carried out.
Entry means the entry criteria for that phase. For example, for unit testing, the coding must be complete; only then can unit testing start. Task is the activity that is performed. Validation is the way in which the progress, correctness and compliance are verified for that phase. Exit states the completion criteria of that phase, after the validation is done. For example, the exit criterion for unit testing is that all unit test cases must pass.
Unit Test Plan {UTP}
The unit test plan is the overall plan to carry out the unit test activities. The lead tester prepares it and it will be distributed to the individual testers, which contains the following sections.
What is to be tested?
The unit test plan must clearly specify the scope of unit testing. In this, normally the basic input/output of the units along with their basic functionality will be tested. In this case mostly the input units will be tested for the format, alignment, accuracy and the totals. The UTP will clearly give the rules of what data types are present in the system, their format and their boundary conditions. This list may not be exhaustive; but it is better to have a complete list of these details.
Sequence of Testing
The sequence of test activities to be carried out in this phase is listed in this section. This includes whether to execute positive test cases first or negative test cases first, whether to execute test cases based on priority, or based on test groups, etc. Positive test cases prove that the system performs what it is supposed to do; negative test cases prove that the system does not perform what it is not supposed to do. Testing of the screens, files, database etc. is to be given in proper sequence.
Basic Functionality of Units
This section describes how the independent functionality of each unit is tested, excluding any communication between the unit and other units. The interface part is out of scope of this test level. Apart from the above sections, the following sections are addressed, very specific to unit testing:
Unit Testing Tools
Priority of Program units
Naming convention for test cases
Status reporting mechanism
Regression test approach
ETVX criteria
Integration Test Plan
The integration test plan is the overall plan for carrying out the activities in the integration test level, which contains the following sections.
What is to be tested?
This section clearly specifies the kinds of interfaces that fall under the scope of testing: internal and external interfaces, with the request and response for each to be explained. This need not go deep into technical details, but the general approach to how the interfaces are triggered should be explained.
Sequence of Integration
When there are multiple modules present in an application, the sequence in which they are to be integrated is specified in this section. Here the dependencies between the modules play a vital role. If unit B has to be executed, it may need the data that is fed by unit A and unit X. In this case, units A and X have to be integrated first, and then, using that data, unit B has to be tested. This has to be stated for the whole set of units in the program. Given this correctly, the testing activities will slowly build the product, unit by unit, and then integrate them.
System Test Plan {STP}
The system test plan is the overall plan for carrying out the system test level activities. In the system test, apart from testing the functional aspects of the system, there are some special testing activities carried out, such as stress testing etc. The following are the sections normally present in a system test plan.
What is to be tested?
This section defines the scope of system testing, very specific to the project. Normally, system testing is based on the requirements. All requirements are to be verified in the scope of system testing. This covers the functionality of the product. Apart from this, any special testing performed is also stated here.
Functional Groups and the Sequence
The requirements can be grouped in terms of the functionality. Based on this, there may be priorities also among the functional groups. For example, in a banking application, anything related to customer accounts can be grouped into one area; anything related to inter-branch transactions may be grouped into one area etc. Same way for the product being tested, these areas are to be mentioned here and the suggested sequences of testing of these areas, based on the priorities are to be described.
Acceptance Test Plan {ATP}
The client performs acceptance testing at their own site. It will be very similar to the system test performed by the software development unit. Since the client is the one who decides the format and testing methods as part of acceptance testing, there is no specific indication of the way they will carry out the testing. But it will not differ much from the system testing. Assume that all the rules which are applicable to the system test can be applied to acceptance testing as well.
Since this is just one level of testing done by the client for the overall product, it may include test cases including the unit and integration test level details.

A sample Test Plan Outline along with their description is as shown below:
Test Plan Outline
1. BACKGROUND - This item summarizes the functions of the application system and the tests to be performed.
2. INTRODUCTION
3. ASSUMPTIONS - Indicates any anticipated assumptions which will be made while testing the application.
4. TEST ITEMS - List each of the items (programs) to be tested.
5. FEATURES TO BE TESTED - List each of the features (functions or requirements) which will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED - Explicitly lists each feature, function, or requirement which won't be tested and why not.
7. APPROACH - Describe the data flows and test philosophy (simulation or live execution, etc.). This section also mentions all the approaches which will be followed at the various stages of the test execution.
8. ITEM PASS/FAIL CRITERIA - Blanket statement; itemized list of expected output and tolerances.
9. SUSPENSION/RESUMPTION CRITERIA - Must the test run from start to completion? Under what circumstances may it be resumed in the middle? Establish check-points in long tests.
10. TEST DELIVERABLES - What, besides software, will be delivered? Test report, test software.
11. TESTING TASKS - Functional tasks (e.g., equipment set up), administrative tasks.
12. ENVIRONMENTAL NEEDS - Security clearance, office space and equipment, hardware/software requirements.
13. RESPONSIBILITIES - Who does the tasks in Section 11? What does the user do?
14. STAFFING & TRAINING
15. SCHEDULE
16. RESOURCES
17. RISKS & CONTINGENCIES
18. APPROVALS
The schedule details of the various test pass such as Unit tests, Integration tests, System Tests should be clearly mentioned along with the estimated efforts.
Risk Analysis:
A risk is a potential for loss or damage to an Organization from materialized threats. Risk Analysis attempts to identify all the risks and then quantify the severity of the risks. A threat as we have seen is a possible damaging event. If it occurs, it exploits vulnerability in the security of a computer based system.
Risk Identification:
1. Software Risks: Knowledge of the most common risks associated with software development, and the platform you are working on.
2. Business Risks: The most common risks associated with the business using the software.
3. Testing Risks: Knowledge of the most common risks associated with software testing for the platform you are working on, the tools being used, and the test methods being applied.
4. Premature Release Risk: The ability to determine the risk associated with releasing unsatisfactory or untested software products.
5. Risk Methods: Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products and processes; assessing their likelihood, and initiating strategies to test those risks.
Traceability means that you would like to be able to trace back and forth how and where any work product fulfills the directions of the preceding (source-) product. The matrix deals with the where, while the how you have to do yourself, once you know the where.
Take e.g. the Requirement of User Friendliness (UF). Since UF is a complex concept, it is not solved by just one design-solution and it is not solved by one line of code. Many partial design-solutions may contribute to this Requirement and many groups of lines of code may contribute to it.
A Requirements-Design Traceability Matrix puts on one side (e.g. left) the sub-requirements that together are supposed to solve the UF requirement, along with other (sub-) requirements. On the other side (e.g. top) you specify all design solutions. Now you can connect on the cross points of the matrix, which design solutions solve (more, or less) any requirement. If a design solution does not solve any requirement, it should be deleted, as it is of no value.
Having this matrix, you can check whether any requirement has at least one design solution and by checking the solution(s) you may see whether the requirement is sufficiently solved by this (or the set of) connected design(s).
If you have to change any requirement, you can see which designs are affected. And if you change any design, you can check which requirements may be affected and see what the impact is.
In a Design-Code Traceability Matrix you can do the same to keep trace of how and which code solves a particular design and how changes in design or code affect each other.
A traceability matrix:
Demonstrates that the implemented system meets the user requirements.
Serves as a single source for tracking purposes.
Identifies gaps in the design and testing.
Prevents delays in the project timeline, which can be brought about by having to backtrack to fill the gaps.
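A minimal sketch of a requirements-to-design traceability matrix held as a simple mapping; the requirement and design identifiers are invented for illustration:

# A requirements-to-design traceability matrix sketched as a mapping.
# Requirement and design identifiers here are invented for illustration.
traceability = {
    "UF-1 (consistent menus)": {"D-07 menu framework"},
    "UF-2 (undo everywhere)":  {"D-11 command history", "D-12 toolbar"},
    "UF-3 (online help)":      set(),   # gap: no design solution yet
}

# Check that every requirement has at least one design solution.
for requirement, designs in traceability.items():
    if not designs:
        print("Gap: no design covers", requirement)

# Impact analysis: which requirements are affected if a design changes?
changed_design = "D-11 command history"
affected = [r for r, designs in traceability.items() if changed_design in designs]
print("Changing", changed_design, "affects:", affected)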


Software Testing Life Cycle: The test development life cycle contains the following components:
Requirements
Use Case Document
Test Plan
Test Case
Test Case execution
Report Analysis
Bug Analysis
Bug Reporting
A use case is a typical interaction scenario from a user's perspective, used for system requirements studies or testing. In other words, it is "an actual or realistic example scenario". A use case describes the use of a system from start to finish. Use cases focus attention on aspects of a system useful to people outside of the system itself.
Users of a program are called users or clients.
Users of an enterprise are called customers, suppliers, etc.
Use Case:
A collection of possible scenarios between the system under discussion and external actors, characterized by the goal the primary actor has toward the system's declared responsibilities, showing how the primary actor's goal might be delivered or might fail.
Use cases are goals (use cases and goals are used interchangeably) that are made up of scenarios. Scenarios consist of a sequence of steps to achieve the goal; each step in a scenario is a sub (or mini) goal of the use case. As such each sub goal represents either another use case (subordinate use case) or an autonomous action that is at the lowest level desired by our use case decomposition.
This hierarchical relationship is needed to properly model the requirements of a system being developed. A complete use case analysis requires several levels. In addition the level at which the use case is operating at it is important to understand the scope it is addressing. The level and scope are important to assure that the language and granularity of scenario steps remain consistent within the use case.
There are two scopes that use cases are written from: Strategic and System. There are also three levels: Summary, User and Sub-function.

Scopes: Strategic and SystemStrategic Scope:
The goal (Use Case) is a strategic goal with respect to the system. These goals are goals of value to the organization. The use case shows how the system is used to benefit the organization. These strategic use cases will eventually use some of the same lower level (subordinate) use cases.
System Scope:
Use cases at system scope are bounded by the system under development. The goals represent specific functionality required of the system. The majority of the use cases are at system scope. These use cases are often steps in strategic level use cases
Levels: Summary Goal, User Goal and Sub-function.
Sub-function Level Use Case:
A sub goal or step is below the main level of interest to the user. Examples are "logging in" and "locate a device in a DB". Always at System Scope.
User Level Use Case:
This is the level of greatest interest. It represents a user task or elementary business process. A user level goal addresses the question "Does your job performance depend on how many of these you do in a day". For example "Create Site View" or "Create New Device" would be user level goals but "Log In to System" would not. Always at System Scope.
Summary Level Use Case:
Written for either strategic or system scope. They represent collections of User Level Goals. For example, the summary goal "Configure Data Base" might include as a step the user level goal "Add Device to database". Either at System or Strategic Scope.

Test Documentation
Test documentation is a required tool for managing and maintaining the testing process. Documents produced by testers should answer the following questions:
What to test? Test Plan
How to test? Test Specification
What are the results? Test Results Analysis Report
Bug Life cycle:
In entomology (the study of real, living bugs), the term life cycle refers to the various stages that an insect assumes over its life. If you think back to your high school biology class, you will remember that the life cycle stages for most insects are the egg, larvae, pupae and adult. It seems appropriate, given that software problems are also called bugs, that a similar life cycle system is used to identify their stages of life. Figure 18.2 shows an example of the simplest, and most optimal, software bug life cycle.
This example shows that when a bug is found by a software tester, it's logged and assigned to a programmer to be fixed. This state is called the open state. Once the programmer fixes the code, he assigns it back to the tester and the bug enters the resolved state. The tester then performs a regression test to confirm that the bug is indeed fixed and, if so, closes it out. The bug then enters its final state, the closed state.
In some situations though, the life cycle gets a bit more complicated.
In this case the life cycle starts out the same, with the tester opening the bug and assigning it to the programmer, but the programmer doesn't fix it. He doesn't think it's bad enough to fix and assigns it to the project manager to decide. The Project Manager agrees with the programmer and places the bug in the resolved state as a “won't-fix” bug. The tester disagrees, looks for and finds a more obvious and general case that demonstrates the bug, reopens it, and assigns it to the programmer to fix. The programmer fixes the bug, resolves it as fixed, and assigns it to the tester. The tester confirms the fix and closes the bug.
You can see that a bug might undergo numerous changes and iterations over its life, sometimes looping back and starting the life all over again. The figure below takes the simple model above and adds to it the possible decisions, approvals, and looping that can occur in most projects. Of course every software company and project will have its own system, but this figure is fairly generic and should cover almost any bug life cycle that you'll encounter.
The generic life cycle has two additional states and extra connecting lines. The review state is where the Project Manager or a committee, sometimes called a Change Control Board, decides whether the bug should be fixed. In some projects all bugs go through the review state before they're assigned to the programmer for fixing. In other projects, this may not occur until near the end of the project, or not at all. Notice that the review state can also go directly to the closed state. This happens if the review decides that the bug shouldn't be fixed: it could be too minor, really not a problem, or a testing error. The other additional state is deferred. The review may determine that the bug should be considered for fixing at some time in the future, but not for this release of the software.
The additional line from resolved state back to the open state covers the situation where the tester finds that the bug hasn’t been fixed. It gets reopened and the bug’s life cycle repeats.
The two dotted lines that loop from the closed and the deferred states back to the open state rarely occur but are important enough to mention. Since a tester never gives up, it's possible that a bug that was thought to be fixed, tested, and closed could reappear. Such bugs are often called regressions. It's possible that a deferred bug could later be proven serious enough to fix immediately. If either of these occurs, the bug is reopened and started through the process again. Most project teams adopt rules for who can change the state of a bug or assign it to someone else. For example, maybe only the Project Manager can decide to defer a bug, or only a tester is permitted to close a bug. What's important is that once you log a bug, you follow it through its life cycle, don't lose track of it, and provide the necessary information to drive it to being fixed and closed.
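The states and transitions described above can also be written down as a small table, which makes the legal moves explicit. The state names follow the text; the transition set itself is an illustrative assumption, since each project defines its own rules:

# The bug life cycle described above, written as a small state-transition table.
# State names follow the text; the transition set is illustrative and each
# project will define its own rules about who may perform each move.
ALLOWED_TRANSITIONS = {
    "open":     {"review", "resolved"},
    "review":   {"open", "resolved", "closed", "deferred"},
    "resolved": {"open", "closed"},   # back to open if the fix fails retest
    "deferred": {"open"},             # reconsidered for a later release
    "closed":   {"open"},             # a regression reopens the bug
}

def move_bug(current_state, new_state):
    """Return the new state if the transition is legal, otherwise raise."""
    if new_state not in ALLOWED_TRANSITIONS.get(current_state, set()):
        raise ValueError(f"illegal transition: {current_state} -> {new_state}")
    return new_state

state = "open"
state = move_bug(state, "resolved")  # programmer fixes the bug
state = move_bug(state, "closed")    # tester confirms the fix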
Bug Report - Why
Communicate bug for reproducibility, resolution, and regression.
Track bug status (open, resolved, closed).
Ensure the bug is not forgotten, lost or ignored.
Used to back-create a test case where none existed before.
The Verification activities fall into the category of Static Testing. During static testing, you have a checklist to check whether the work you are doing is going as per the set standards of the organization. These standards can be for Coding, Integrating and Deployment. Reviews, Inspection's and Walkthrough's are static testing methodologies.
Dynamic Testing involves working with the software, giving input values and checking if the output is as expected. These are the Validation activities. Unit Tests, Integration Tests, System Tests and Acceptance Tests are a few of the Dynamic Testing methodologies. As we go further, let us understand the various test life cycles and get to know the testing terminologies. To understand more about software testing, its various methodologies, tools and techniques, you can refer to the Software Testing Guide Book.
Difference Between Static and Dynamic Testing: Please refer to the definition of Static Testing above to observe the difference between static testing and dynamic testing.
White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.
The Nature of Software Defects
Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. General processing tends to be well understood while special case processing tends to be prone to errors.
We often believe that a logical path is not likely to be executed when it may be executed on a regular basis. Our unconscious assumptions about control flow and data lead to design errors that can only be detected by path testing. Typographical errors are random.
Basis Path Testing
This method enables the designer to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis set are guaranteed to execute every statement in the program at least once during testing.
Flow Graphs
Flow graphs can be used to represent control flow in a program and can help in the derivation of the basis set. Each flow graph node represents one or more procedural statements. The edges between nodes represent flow of control. An edge must terminate at a node, even if the node does not represent any useful procedural statements. A region in a flow graph is an area bounded by edges and nodes. Each node that contains a condition is called a predicate node. Cyclomatic complexity is a metric that provides a quantitative measure of the logical complexity of a program. It defines the number of independent paths in the basis set and thus provides an upper bound for the number of tests that must be performed.
The Basis Set
An independent path is any path through a program that introduces at least one new set of processing statements (it must move along at least one new edge in the path). The basis set is not unique; any number of different basis sets can be derived for a given procedural design. Cyclomatic complexity, V(G), for a flow graph G is equal to:
1. The number of regions in the flow graph.
2. V(G) = E - N + 2, where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1, where P is the number of predicate nodes.
Deriving Test Cases
1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph. Even without a flow graph, V(G) can be determined by counting the number of conditional statements in the code.
3. Determine a basis set of linearly independent paths. Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set. Each test case is executed and compared to the expected results.
Automating Basis Set Derivation
The derivation of the flow graph and the set of basis paths is amenable to automation. A software tool to do this can be developed using a data structure called a graph matrix. A graph matrix is a square matrix whose size is equal to the number of nodes in the flow graph. Each row and column corresponds to a particular node, and the matrix entries correspond to the connections (edges) between nodes. By adding a link weight to each matrix entry, more information about the control flow can be captured. In its simplest form, the link weight is 1 if an edge exists and 0 if it does not. But other types of link weights can be represented:
the probability that an edge will be executed,
the processing time expended during link traversal,
the memory required during link traversal, or
the resources required during link traversal.
Graph theory algorithms can be applied to these graph matrices to help in the analysis necessary to produce the basis set.
Loop Testing
This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined:
1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.
Simple Loops
The following tests should be applied to simple loops, where n is the maximum number of allowable passes through the loop:
1. skip the loop entirely,
2. only one pass through the loop,
3. m passes through the loop where m < n,
4. n - 1, n, and n + 1 passes through the loop.
Nested Loops
The testing of nested loops cannot simply extend the technique for simple loops, since this would result in a geometrically increasing number of test cases. One approach for nested loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops at typical values.
4. Continue until all loops have been tested.
Concatenated Loops
Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used.
Unstructured Loops
This type of loop should be redesigned, not tested!
Other White Box Techniques
Other white box testing techniques include:
1. Condition testing exercises the logical conditions in a program.
2. Data flow testing selects test paths according to the locations of definitions and uses of variables in the program.
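A small worked example of the basis path idea, using an invented two-decision function; the complexity is computed with V(G) = P + 1 from the formulas above:

def classify(a, b):
    """Illustrative function with two decisions (predicate nodes, P = 2)."""
    if a > 0:          # predicate 1
        result = "positive"
    else:
        result = "non-positive"
    if b % 2 == 0:     # predicate 2
        result += "/even"
    return result

# V(G) = P + 1 = 3, so at least three linearly independent paths are needed.
# One possible basis set of test cases exercising those paths:
basis_cases = [
    (1, 2),    # a > 0 true,  b even  -> "positive/even"
    (-1, 2),   # a > 0 false, b even  -> "non-positive/even"
    (1, 3),    # a > 0 true,  b odd   -> "positive"
]
for a, b in basis_cases:
    print(a, b, "->", classify(a, b))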
In computer programming, a unit test is a method of testing the correctness of a particular module of source code. The idea is to write test cases for every non-trivial function or method in the module so that each test case is separate from the others if possible. This type of testing is mostly done by the developers.
Benefits
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. It provides a written contract that the piece must satisfy. This isolated testing provides four main benefits:
Encourages change
Unit testing allows the programmer to refactor code at a later date and make sure the module still works correctly (regression testing). This encourages programmers to make changes to the code, since it is easy to check whether the piece is still working properly.
Simplifies integration
Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a bottom-up testing approach. Testing the parts of a program first, and then testing the sum of its parts, makes integration testing easier.
Documents the code
Unit testing provides a sort of "living document" for the class being tested. Clients looking to learn how to use the class can look at the unit tests to determine how to use the class to fit their needs.
Separation of interface from implementation
Because some classes may have references to other classes, testing a class can frequently spill over into testing another class. A common example of this is classes that depend on a database; in order to test the class, the tester finds herself writing code that interacts with the database. This is a mistake, because a unit test should never go outside of its own class boundary. As a result, the software developer abstracts an interface around the database connection and then implements that interface with a mock object. This results in loosely coupled code, minimizing dependencies in the system.
Limitations
It is important to realize that unit testing will not catch every error in the program. By definition, it only tests the functionality of the units themselves. Therefore, it will not catch integration errors, performance problems or other system-wide issues. In addition, it may not be trivial to anticipate all special cases of input the program unit under study may receive in reality. Unit testing is only effective if it is used in conjunction with other software testing activities.
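A minimal sketch of the mock-object point made above, using Python's built-in unittest with a hand-rolled fake in place of the database-backed collaborator; the class names are invented:

import unittest

class AccountService:
    """Illustrative class whose only collaborator is an injected repository."""
    def __init__(self, repository):
        self.repository = repository

    def balance_after_deposit(self, account_id, amount):
        current = self.repository.get_balance(account_id)
        return current + amount

class FakeRepository:
    """Mock object standing in for the database-backed repository."""
    def get_balance(self, account_id):
        return 100  # canned value; no database is touched

class AccountServiceTest(unittest.TestCase):
    def test_deposit_adds_to_current_balance(self):
        service = AccountService(FakeRepository())
        self.assertEqual(service.balance_after_deposit("A-1", 25), 125)

if __name__ == "__main__":
    unittest.main()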

Requirements Testing
Usage:
To ensure that system performs correctly
To ensure that correctness can be sustained for a considerable period of time.
The system can be tested for correctness through all phases of the SDLC, but in the case of reliability the programs must be in place to make the system operational.
Objective:
Successful implementation of user requirements.
Correctness maintained over a considerable period of time.
Processing of the application complies with the organization’s policies and procedures.
Secondary user’s needs are fulfilled:
Security officer
DBA
Internal auditors
Record retention
Comptroller
How to Use
Test conditions created.
These test conditions are generalized ones, which become test cases as the SDLC progresses, until the system is fully operational.
Test conditions are more effective when created from user’s requirements.
If test conditions are created from documents, then any errors in those documents will be incorporated into the test conditions, and testing will not be able to find those errors.
If test conditions are created from other sources (other than documents), error trapping is more effective.
Functional Checklist created.
When to Use
Every application should be Requirement tested
Should start at Requirements phase and should progress till operations and maintenance phase.
The method used to carry out requirements testing and the extent of it are important.
Example
Creating a test matrix to prove that the system requirements as documented are the requirements desired by the user.
Creating a checklist to verify that the application complies with the organizational policies and procedures.

Regression Testing
Usage:
All aspects of the system remain functional after testing.
Change in one segment does not change the functionality of other segment.
Objective:
Determine that system documents remain current.
Determine that system test data and test conditions remain current.
Determine that previously tested system functions perform properly without being affected, even though changes are made in some other segment of the application system.
How to Use
Test cases that were used previously for the already tested segment are re-run to ensure that the results of the segment tested now and the results of the same segment tested earlier are the same.
Test automation is needed to carry out the test transactions (test condition execution); otherwise the process is very time consuming and tedious.
For this kind of testing, cost/benefit should be carefully evaluated; otherwise the effort spent on testing will be high and the payback minimal.
When to Use
When there is a high risk that new changes may affect unchanged areas of the application system.
In the development process: regression testing should be carried out after the pre-determined changes are incorporated in the application system.
In the maintenance phase: regression testing should be carried out if there is a high risk that loss may occur when changes are made to the system.
Example
Re-running of previously conducted tests to ensure that the unchanged portion of system functions properly.
Reviewing previously prepared system documents (manuals) to ensure that they are not affected after changes are made to the application system.
Disadvantage
Time consuming and tedious if test automation is not done.
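A hedged sketch of the re-run-and-compare idea described under How to Use: results recorded for the previous, already-tested build are compared with the results of the current build. The function and data are placeholders:

# A minimal sketch of automated regression comparison: run the same cases
# against the current build and compare with the results recorded for the
# previous, already-tested build. Names and data are placeholders.
def calculate_discount(order_total):
    """Stand-in for the function under regression test."""
    return round(order_total * 0.1, 2) if order_total >= 100 else 0.0

previous_results = {100: 10.0, 250: 25.0, 99: 0.0}  # recorded from the last release

regressions = []
for order_total, expected in previous_results.items():
    actual = calculate_discount(order_total)
    if actual != expected:
        regressions.append((order_total, expected, actual))

print("Regressions found:", regressions or "none")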
Error Handling Testing
Usage:
It determines the ability of the application system to process incorrect transactions properly.
Errors encompass all unexpected conditions.
In some systems, approximately 50% of the programming effort is devoted to handling error conditions.
Objective:
Determine that the application system recognizes all expected error conditions.
Determine that accountability for processing errors has been assigned and that procedures provide a high probability that errors will be properly corrected.
Determine that reasonable control is maintained over errors during the correction process.
How to Use
A group of knowledgeable people is required to anticipate what can go wrong in the application system.
All the people knowledgeable about the application need to assemble to integrate their knowledge of the user area, auditing and error tracking.
Then logical test error conditions should be created based on this assimilated information.
When to Use
Throughout SDLC.
Impact from errors should be identified and should be corrected to reduce the errors to acceptable level.
Used to assist in error management process of system development and maintenance.
Example
Create a set of erroneous transactions and enter them into the application system, then find out whether the system is able to identify the problems.
Using iterative testing, enter transactions and trap errors. Correct them. Then enter transactions with errors that were not present in the system earlier.
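A minimal sketch of the erroneous-transaction example above; the transaction fields and validation rules are invented for illustration:

# Feed deliberately erroneous transactions to a (hypothetical) validator
# and confirm that each one is recognised and reported, not silently accepted.
def validate_transaction(txn):
    errors = []
    if txn.get("amount", 0) <= 0:
        errors.append("amount must be positive")
    if not txn.get("account"):
        errors.append("account is required")
    return errors

erroneous_transactions = [
    {"account": "A-1", "amount": -50},   # negative amount
    {"account": "", "amount": 10},       # missing account
    {},                                  # empty transaction
]

for txn in erroneous_transactions:
    errors = validate_transaction(txn)
    assert errors, f"error not detected for {txn}"
    print(txn, "->", errors)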
Manual Support Testing
Usage:
It involves testing of all the functions performed by the people while preparing the data and using these data from automated system.
Objective:
Verify that manual support documents and procedures are correct.
Determine that manual support responsibility is correct.
Determine that manual support people are adequately trained.
Determine that manual support and the automated segment are properly interfaced.
How to Use
Process evaluated in all segments of SDLC.
Execution of the tests can be done in conjunction with normal system testing.
Instead of preparing, executing and entering actual test transactions, the clerical and supervisory personnel can use the results of processing from the application system.
Testing the people requires testing the interface between the people and the application system.
When to Use
Verification that manual systems function properly should be conducted throughout the SDLC.
Should not be done at later stages of SDLC.
Best done at installation stage so that the clerical people do not get used to the actual system just before system goes to production.
Example
Provide input personnel with the type of information they would normally receive from their customers and then have them transcribe that information and enter it in the computer.
Users can be provided a series of test conditions and then asked to respond to those conditions. Conducted in this manner, manual support testing is like an examination in which the users are asked to obtain the answer from the procedures and manuals available to them.
Intersystem Testing
Usage:
To ensure that the interconnection between applications functions correctly.
Objective:
Determine that proper parameters and data are correctly passed between the applications.
Documentation for the involved systems is correct and accurate.
Ensure that proper timing and coordination of functions exists between the application systems.
How to Use
Operations of multiple systems are tested.
Multiple systems are run from one another to check that the data passed between them is acceptable and processed properly.
When to Use
When there is a change in parameters in the application system.
The risk associated with erroneous parameters decides the extent and type of testing to be performed.
Intersystem parameters should be checked and verified after the change or after a new application is placed in production.
Example
Develop a test transaction set in one application and pass it to the other system to verify the processing.
Enter test transactions in the live production environment and then use the integrated test facility to check the processing from one system to another.
Verify that new changes to the parameters in the systems being tested are correctly reflected in the documentation.
Disadvantage
Time consuming and tedious if test automation is not done.
Cost may be high if the systems must be run several times iteratively.
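As a loose illustration of passing a test transaction set between two systems, consider the following Python sketch; the export/import functions and the transaction fields are hypothetical and only show the shape of the check.

# Hypothetical sketch: verify that parameters and data survive the hand-off
# between two applications. export_from_system_a / import_into_system_b stand
# in for whatever interfaces the real systems expose.

def export_from_system_a(order_id):
    return {"order_id": order_id, "amount": 125.50, "currency": "USD"}

def import_into_system_b(payload):
    # Toy receiver that echoes back what it stored.
    return dict(payload)

test_set = [1001, 1002, 1003]
for order_id in test_set:
    sent = export_from_system_a(order_id)
    received = import_into_system_b(sent)
    # Every parameter passed between the applications must arrive unchanged.
    assert received == sent, f"Mismatch for order {order_id}: {sent} vs {received}"
print("All intersystem transactions passed correctly")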

Control Testing
Usage:
Control is a management tool to ensure that processing is performed in accordance with the intents or desires of management.
Objective:
Accurate and complete data
Authorized transactions
Maintenance of adequate audit trail of information.
Efficient, effective and economical process.
Process meeting the needs of the user.
How to Use
To test controls risks must be identified.
Testers should take a negative approach, i.e., determine or anticipate what can go wrong in the application system.
Develop a risk matrix, which identifies the risks, the controls, and the segment within the application system in which each control resides.
When to Use
Should be tested with other system tests.
Example
File reconciliation procedures work.
Manual controls are in place.
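The risk matrix mentioned under "How to Use" can be sketched informally as data; the risks, controls and segments below are invented examples rather than anything from the original text.

# Hypothetical risk matrix for control testing: each risk is mapped to the
# control that mitigates it and the application segment where the control lives.
risk_matrix = [
    {"risk": "Unauthorized transaction entered", "control": "User authorization check",  "segment": "Data entry"},
    {"risk": "Incomplete data accepted",         "control": "Mandatory field validation", "segment": "Input validation"},
    {"risk": "Lost audit trail",                 "control": "Transaction logging",        "segment": "Posting"},
]

for row in risk_matrix:
    # A control test case is planned for every identified risk.
    print(f"Risk: {row['risk']:35s} Control: {row['control']:28s} Segment: {row['segment']}")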

Parallel Testing
Usage:
To ensure that the processing of new application (new version) is consistent with respect to the processing of previous application version.
Objective:
Conducting redundant processing to ensure that the new version or application performs correctly.
Demonstrating consistency and inconsistency between 2 versions of the application.
How to Use
Same input data should be run through 2 versions of same application system.
Parallel testing can be done with whole system or part of system (segment).
When to Use
When there is uncertainty regarding the correctness of processing of the new application and the new and old versions are similar.
In financial applications like banking, where there are many similar applications, the processing can be verified for the old and new versions through parallel testing.
Example
Operating the new and old versions of a payroll system to determine that the paychecks from both systems are reconcilable.
Running the old version of the application to confirm that functions which show problems in the new system work correctly in the old one.
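A minimal Python sketch of the payroll example, assuming two hypothetical implementations of the same pay calculation stand in for the old and new versions:

# Hypothetical sketch of parallel testing: the same inputs are run through the
# old and new payroll calculations and the results are reconciled.

def payroll_old(hours, rate):
    return round(hours * rate, 2)

def payroll_new(hours, rate):
    # New version: same business rule, refactored implementation.
    return round(rate * hours, 2)

test_inputs = [(40, 15.0), (37.5, 22.4), (0, 30.0)]
for hours, rate in test_inputs:
    old_pay = payroll_old(hours, rate)
    new_pay = payroll_new(hours, rate)
    # Paychecks from the two versions must reconcile exactly.
    assert old_pay == new_pay, f"Versions disagree for {hours}h @ {rate}: {old_pay} vs {new_pay}"
print("Old and new versions produce identical paychecks for the test set")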

Volume testing
Whichever title you choose (for us volume test) here we are talking about realistically exercising an application in order to measure the service delivered to users at different levels of usage. We are particularly interested in its behavior when the maximum number of users are concurrently active and when the database contains the greatest data volume.
The creation of a volume test environment requires considerable effort. It is essential that the correct level of complexity exists in terms of the data within the database and the range of transactions and data used by the scripted users, if the tests are to reliably reflect the intended production environment. Once the test environment is built it must be fully utilized. Volume tests offer much more than simple service delivery measurement. The exercise should seek to answer the following questions:
What service level can be guaranteed? How can it be specified and monitored? Are changes in user behaviour likely? What impact will such changes have on resource consumption and service delivery? Which transactions/processes are resource hungry in relation to their tasks? What are the resource bottlenecks? Can they be addressed? How much spare capacity is there?
The purpose of volume testing is to find weaknesses in the system with respect to its handling of large amounts of data during extended time periods.
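As a rough sketch of exercising a system at a large data volume (the sqlite table, row count and query below are invented for illustration):

# Hypothetical volume-test sketch: load a large data volume and check that a
# representative query still completes and returns correct results.
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

rows = [(i, float(i % 500)) for i in range(1, 200_001)]   # simulated large volume
conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
conn.commit()

start = time.time()
(count,) = conn.execute("SELECT COUNT(*) FROM orders WHERE amount > 250").fetchone()
elapsed = time.time() - start

print(f"Matched {count} rows in {elapsed:.3f}s at a volume of {len(rows)} records")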


Stress testing

The purpose of stress testing is to find defects in the system's capacity to handle large numbers of transactions during peak periods. For example, a script might require users to login and proceed with their daily activities while, at the same time, requiring that a series of workstations emulating a large number of other systems are running recorded scripts that add, update, or delete from the database.
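A loose sketch of emulating many concurrent transactions during a peak period; the submit_transaction function and the worker counts are hypothetical placeholders for the real add/update/delete operations:

# Hypothetical stress-test sketch: many workers submit transactions at once,
# emulating peak-period load, and failures are counted.
import threading

failures = []
lock = threading.Lock()

def submit_transaction(txn_id):
    # Stand-in for the real add/update/delete operations against the system.
    return txn_id >= 0

def worker(start, count):
    for txn_id in range(start, start + count):
        if not submit_transaction(txn_id):
            with lock:
                failures.append(txn_id)

threads = [threading.Thread(target=worker, args=(i * 1000, 1000)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"Submitted 20,000 transactions concurrently, {len(failures)} failures")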
Performance testing

System performance is generally assessed in terms of response time and throughput rates under differing processing and configuration conditions. To attack performance problems, several questions should be asked first:
How much application logic should be remotely executed?
How much updating should be done to the database server over the network from the client workstation?
How much data should be sent in each transaction?
According to Hamilton [10], performance problems are most often the result of the client or server being configured inappropriately. The best strategy for improving client-server performance is a three-step process [11]. First, execute controlled performance tests that collect data about volume, stress, and loading tests. Second, analyze the collected data. Third, examine and tune the database queries and, if necessary, provide temporary data storage on the client while the application is executing.

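As a rough illustration of the first step, collecting controlled performance measurements, the following Python sketch times a stand-in operation and reports response time and throughput; process_request and the request count are invented for the example:

# Hypothetical performance-test sketch: measure response times and throughput
# for a stand-in operation; in a real test this would call the client/server API.
import time, statistics

def process_request(i):
    return sum(range(1000))          # placeholder work for the operation under test

samples = []
start = time.time()
for i in range(500):
    t0 = time.time()
    process_request(i)
    samples.append(time.time() - t0)
total = time.time() - start

print(f"Average response time: {statistics.mean(samples)*1000:.2f} ms")
print(f"95th percentile: {sorted(samples)[int(0.95*len(samples))]*1000:.2f} ms")
print(f"Throughput: {len(samples)/total:.1f} requests/second")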



Dynamic Testing
Dynamic Testing involves working with the software, giving input values and checking if the output is as expected. These are the Validation activities. Unit Tests, Integration Tests, System Tests and Acceptance Tests are a few of the Dynamic Testing methodologies. As we go further, let us understand the various Test Life Cycles and get to know the Testing Terminologies. To understand more about software testing, various methodologies, tools and techniques, you can download the Software Testing Guide Book.
Difference Between Static and Dynamic Testing: Please refer the definition of Static Testing to observe the difference between the static testing and dynamic testing.
Static Testing
The Verification activities fall into the category of Static Testing. During static testing, you have a checklist to check whether the work you are doing is going as per the set standards of the organization. These standards can be for Coding, Integration and Deployment. Reviews, Inspections and Walkthroughs are static testing methodologies.
Black box testing
Introduction
Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories:
1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.
Tests are designed to answer the following questions:
1. How is the function's validity tested?
2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?
White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages. Test cases should be derived which
1. reduce the number of additional test cases that must be designed to achieve reasonable testing, and
2. tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.
Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.
Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, then one valid and one invalid equivalence class are defined.
Boundary Value Analysis
This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also.
BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.
Cause-Effect Graphing Techniques
Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
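To illustrate the equivalence partitioning and boundary value analysis guidelines above, here is a small Python sketch; the 18-to-65 age rule and the accepts_age function are invented for the example and are not from the original text.

# Hypothetical example: an input field accepts ages in the range 18..65.
# Equivalence partitioning gives one valid class (18-65) and two invalid
# classes (<18 and >65); boundary value analysis adds values at and around
# the edges of the valid class.

def accepts_age(age):
    return 18 <= age <= 65   # toy implementation of the rule under test

equivalence_cases = {17: False, 40: True, 70: False}          # one value per class
boundary_cases    = {17: False, 18: True, 19: True,
                     64: True, 65: True, 66: False}           # around both edges

for age, expected in {**equivalence_cases, **boundary_cases}.items():
    assert accepts_age(age) == expected, f"Unexpected result for age {age}"
print("Equivalence and boundary test cases all behave as expected")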

White box testing
White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.
The Nature of Software Defects
Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. General processing tends to be well understood while special case processing tends to be prone to errors.
We often believe that a logical path is not likely to be executed when it may be executed on a regular basis. Our unconscious assumptions about control flow and data lead to design errors that can only be detected by path testing. Typographical errors are random.
Basis Path Testing
This method enables the designer to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis set are guaranteed to execute every statement in the program at least once during testing.
Flow Graphs
Flow graphs can be used to represent control flow in a program and can help in the derivation of the basis set. Each flow graph node represents one or more procedural statements. The edges between nodes represent flow of control. An edge must terminate at a node, even if the node does not represent any useful procedural statements. A region in a flow graph is an area bounded by edges and nodes. Each node that contains a condition is called a predicate node. Cyclomatic complexity is a metric that provides a quantitative measure of the logical complexity of a program. It defines the number of independent paths in the basis set and thus provides an upper bound for the number of tests that must be performed.
The Basis Set
An independent path is any path through a program that introduces at least one new set of processing statements (must move along at least one new edge in the path). The basis set is not unique. Any number of different basis sets can be derived for a given procedural design. Cyclomatic complexity, V(G), for a flow graph G is equal to
1. The number of regions in the flow graph.
2. V(G) = E - N + 2, where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1, where P is the number of predicate nodes.
Deriving Test Cases
1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph. Even without a flow graph, V(G) can be determined by counting the number of conditional statements in the code.
3. Determine a basis set of linearly independent paths. Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set. Each test case is executed and compared to the expected results.
Automating Basis Set Derivation
The derivation of the flow graph and the set of basis paths is amenable to automation. A software tool to do this can be developed using a data structure called a graph matrix. A graph matrix is a square matrix whose size is equivalent to the number of nodes in the flow graph. Each row and column correspond to a particular node and the matrix corresponds to the connections (edges) between nodes. By adding a link weight to each matrix entry, more information about the control flow can be captured. In its simplest form, the link weight is 1 if an edge exists and 0 if it does not. But other types of link weights can be represented:
* the probability that an edge will be executed,
* the processing time expended during link traversal,
* the memory required during link traversal, or
* the resources required during link traversal.
Graph theory algorithms can be applied to these graph matrices to help in the analysis necessary to produce the basis set.
Loop Testing
This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined:
1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.
Simple Loops
The following tests should be applied to simple loops where n is the maximum number of allowable passes through the loop:
1. skip the loop entirely,
2. only pass once through the loop,
3. m passes through the loop where m < n,
4. n - 1, n, n + 1 passes through the loop.
Nested Loops
The testing of nested loops cannot simply extend the technique of simple loops since this would result in a geometrically increasing number of test cases. One approach for nested loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops to typical values.
4. Continue until all loops have been tested.
Concatenated Loops
Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used.
Unstructured Loops
This type of loop should be redesigned, not tested!
Other White Box Techniques
Other white box testing techniques include:
1. Condition testing - exercises the logical conditions in a program.
2. Data flow testing - selects test paths according to the locations of definitions and uses of variables in the program.
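To make the basis path idea concrete, here is a small illustrative Python sketch; the classify function and its test values are invented for this example and are not from the original text.

# Hypothetical example for basis path testing.
# The function below has two predicate nodes (the two if-statements),
# so its cyclomatic complexity is V(G) = P + 1 = 3, meaning three
# linearly independent paths must be exercised.

def classify(score):
    if score > 100:                  # predicate node 1
        return "invalid"
    if score >= 60:                  # predicate node 2
        return "pass"
    return "fail"

# One test case per basis path:
basis_path_cases = [
    (150, "invalid"),   # path 1: first condition true
    (75,  "pass"),      # path 2: first false, second true
    (30,  "fail"),      # path 3: both conditions false
]

for score, expected in basis_path_cases:
    assert classify(score) == expected, f"Path not behaving as expected for {score}"
print("Every statement and branch of classify() was executed at least once")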

Unit Testing
In computer programming, a unit test is a method of testing the correctness of a particular module of source code. The idea is to write test cases for every non-trivial function or method in the module so that each test case is separate from the others if possible. This type of testing is mostly done by the developers.
Benefits
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. It provides a written contract that the piece must satisfy. This isolated testing provides four main benefits:
Encourages change: Unit testing allows the programmer to refactor code at a later date and make sure the module still works correctly (regression testing). This encourages programmers to make changes to the code, since it is easy to check that the piece is still working properly.
Simplifies integration: Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a bottom-up testing approach. Testing the parts of a program first, and then testing the sum of its parts, makes integration testing easier.
Documents the code: Unit testing provides a sort of "living document" for the class being tested. Clients looking to learn how to use the class can look at the unit tests to determine how to use the class to fit their needs.
Separation of interface from implementation: Because some classes may have references to other classes, testing a class can frequently spill over into testing another class. A common example of this is classes that depend on a database; in order to test the class, the tester finds herself writing code that interacts with the database. This is a mistake, because a unit test should never go outside of its own class boundary. As a result, the software developer abstracts an interface around the database connection and then implements that interface with their own mock object. This results in loosely coupled code, thus minimizing dependencies in the system.
Limitations
It is important to realize that unit testing will not catch every error in the program. By definition, it only tests the functionality of the units themselves. Therefore, it will not catch integration errors, performance problems or other system-wide issues. In addition, it may not be trivial to anticipate all the special cases of input the program unit under study may receive in reality. Unit testing is only effective if it is used in conjunction with other software testing activities.
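The interface-plus-mock-object idea described above can be sketched with Python's standard unittest and unittest.mock modules; the AccountService class and its repository interface are invented for this illustration.

# Hypothetical sketch: the class under test depends on a repository interface,
# and the unit test substitutes a mock so no real database is touched.
import unittest
from unittest.mock import Mock

class AccountService:
    def __init__(self, repository):
        self.repository = repository          # abstracted database access

    def balance_after_deposit(self, account_id, amount):
        current = self.repository.get_balance(account_id)
        return current + amount

class AccountServiceTest(unittest.TestCase):
    def test_deposit_adds_to_existing_balance(self):
        repo = Mock()
        repo.get_balance.return_value = 100   # canned answer, no database needed
        service = AccountService(repo)
        self.assertEqual(service.balance_after_deposit("A-1", 25), 125)
        repo.get_balance.assert_called_once_with("A-1")

if __name__ == "__main__":
    unittest.main()

Because the repository is mocked, the test stays inside the class boundary and never touches a real database, which is the loose-coupling benefit described above.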

Win Runner
WinRunner is Mercury Interactive's enterprise functional testing tool. It is used to quickly create and run sophisticated automated tests on your application. WinRunner helps you automate the testing process, from test development to execution. You create adaptable and reusable test scripts that challenge the functionality of your application. Prior to a software release, you can run these tests in a single overnight run, enabling you to detect defects and ensure superior software quality.
WinRunner Interview Questions
1) How have you used WinRunner in your project?
Ans. Yes, I have been using WinRunner for creating automated scripts for GUI, functional and regression testing of the AUT.
2) Explain the WinRunner testing process?
Ans. The WinRunner testing process involves six main stages:
i. Create GUI Map File so that WinRunner can recognize the GUI objects in the application being tested.
ii. Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
iii. Debug Test: run tests in Debug mode to make sure they run smoothly.
iv. Run Tests: run tests in Verify mode to test your application.
v. View Results: determines the success or failure of the tests.
vi. Report Defects: If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.
3) What is contained in the GUI map?
Ans. WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file will be having a logical name and a physical description.
There are two types of GUI Map files:
i. Global GUI Map file: a single GUI Map file for the entire application.
ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
4) How does WinRunner recognize objects on the application?
Ans. WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested.
5) Have you created test scripts and what is contained in the test scripts?
Ans. Yes I have created test scripts. It contains the statement in Mercury Interactive’s Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner’s visual programming tool, the Function Generator.
6) How does WinRunner evaluate test results?
Ans. Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.
7) Have you performed debugging of the scripts?
Ans. Yes, I have performed debugging of scripts. We can debug the script by executing the script in the debug mode. We can also debug script using the Step, Step Into, Step out functionalities provided by the WinRunner.
8) How do you run your test scripts?
Ans. We run tests in Verify mode to test your application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.
9) How do you analyze results and report the defects?
Ans. Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.
10) What is the use of Test Director software?
Ans. Test Director is Mercury Interactive’s software test management tool. It helps quality assurance personnel plan and organize the testing process. With Test Director you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.
11) How do you integrate your automated scripts with Test Director?
Ans. When you work with WinRunner, you can choose to save your tests directly to your Test Director database. Alternatively, while creating a test case in Test Director, you can specify whether the script is automated or manual. If it is an automated script, Test Director will build a skeleton for the script that can later be modified into one which could be used to test the AUT.
12) What are the different modes of recording?
Ans. There are two types of recording in WinRunner:
i. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.
ii. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.
13) What is the purpose of loading WinRunner Add-Ins?
Ans. Add-Ins are used in WinRunner to load functions specific to the particular add-in to the memory. While creating a script only those functions in the add-in selected will be listed in the function generator and while executing the script only those functions in the loaded add-in will be executed else WinRunner will give an error message saying it does not recognize the function.
14) What are the reasons that WinRunner fails to identify an object on the GUI?
Ans. WinRunner fails to identify an object in a GUI due to various reasons:
i. The object is not a standard windows object.
ii. If the browser used is not compatible with the WinRunner version, the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.
15) What do you mean by the logical name of the object?
Ans. An object’s logical name is determined by its class. In most cases, the logical name is the label that appears on an object.
16) If the object does not have a name then what will be the logical name?
Ans. If the object does not have a name then the logical name could be the attached text.
17) What is the difference between the GUI map and GUI map files?
Ans. The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files:
i. Global GUI Map file: a single GUI Map file for the entire application.
ii. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
A GUI Map file is a file which contains the windows and the objects learned by WinRunner, with their logical names and physical descriptions.
18) How do you view the contents of the GUI map?
Ans. GUI Map editor displays the content of a GUI Map. We can invoke GUI Map Editor from the Tools Menu in WinRunner. The GUI Map Editor displays the various GUI Map files created and the windows and objects learned in to them with their logical name and physical description.
19) When you create the GUI map do you record all the objects or only specific objects?
Ans. If we are learning a window then WinRunner automatically learns all the objects in the window; otherwise we identify only those objects in the window that are to be learned, since we will be working with only those objects while creating scripts.
20) What is the purpose of the set_window command?
Ans. Set_Window command sets the focus to the specified window. We use this command to set the focus to the required window before executing tests on a particular window.
Syntax: set_window(<logical name>, time); The logical name is the logical name of the window and time is the time the execution has to wait till it gets the given window into focus.
21) How do you load the GUI map?
Ans. We can load a GUI Map by using the GUI_load command.
Syntax: GUI_load(<file name>);
22) What is the disadvantage of loading the GUI maps through start up scripts?
Ans. 1. If we are using a single GUI Map file for the entire AUT then the memory used by the GUI Map may be very high.
2. If there is any change in the object being learned then WinRunner will not be able to recognize the object, as it is not in the GUI Map file loaded in memory. So we will have to learn the object again, update the GUI Map file and reload it.
23) How do you unload the GUI map?
Ans. We can use GUI_close to unload a specific GUI Map file, or else we can use the GUI_close_all command to unload all the GUI Map files loaded in memory.
Syntax: GUI_close(<file name>); or GUI_close_all;
24) What actually happens when you load the GUI map?
Ans. When we load a GUI Map file, the information about the windows and the objects with their logical names and physical description are loaded into memory. So when the WinRunner executes a script on a particular window, it can identify the objects using this information loaded in the memory.
25) What is the purpose of the temp GUI map file?
Ans. While recording a script, WinRunner learns objects and windows by itself. These are actually stored in the temporary GUI Map file. We can specify in the General Options whether this temporary GUI Map file should be loaded each time.
26) What is the extension of a GUI map file?
Ans. The extension for a GUI Map file is “.gui”.
27) How do you find an object in a GUI map?
Ans. The GUI Map Editor is provided with Find and Show buttons.
i. To find a particular object in the GUI Map file in the application, select the object and click the Show window. This blinks the selected object.
ii. To find a particular object in a GUI Map file click the Find button, which gives the option to select the object. When the object is selected, if the object has been learned to the GUI Map file it will be focused in the GUI Map file.
28) What different actions are performed by find and show button?
Ans. 1.To find a particular object in the GUI Map file in the application, select the object and click the Show window. This blinks the selected object.
2.To find a particular object in a GUI Map file click the Find button, which gives the option to select the object. When the object is selected, if the object has been learned to the GUI Map file it will be focused in the GUI Map file.
29) How do you identify which files are loaded in the GUI map?
Ans. The GUI Map Editor has a drop down “GUI File” displaying all the GUI Map files loaded into memory.
30) How do you modify the logical name or the physical description of the objects in the GUI map?
Ans. You can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor.
31) When do you feel you need to modify the logical name?
Ans. Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long.
32) When it is appropriate to change physical description?
Ans. Changing the physical description is necessary when the property value of an object changes.
33) How does WinRunner handle varying window labels?
Ans. We can handle varying window labels using regular expressions. WinRunner uses two “hidden” properties in order to use regular expression in an object’s physical description. These properties are regexp_label and regexp_MSW_class.
i. The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a window’s label description.
ii. The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.
34) What is the purpose of regexp_label property and regexp_MSW_class property?
Ans. The regexp_label property is used for windows only. It operates “behind the scenes” to insert a regular expression into a window’s label description.
The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.
35) How do you suppress a regular expression?
Ans. We can suppress the regular expression of a window by replacing the regexp_label property with label property.
36) How do you copy and move objects between different GUI map files?
Ans. We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed are:
i. Choose Tools > GUI Map Editor to open the GUI Map Editor.
ii. Choose View > GUI Files.
iii. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.
iv. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.
v. In one file, select the objects you want to copy or move. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.
vi. Click Copy or Move.
vii. To restore the GUI Map Editor to its original size, click Collapse.
37) How do you select multiple objects while merging the files?
Ans. Use the Shift key and/or Control key to select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.
38) How do you clear a GUI map file?
Ans. We can clear a GUI Map file using the “Clear All” option in the GUI Map Editor.
39) How do you filter the objects in the GUI map?
Ans. The GUI Map Editor has a Filter option. This provides for filtering with 3 different types of options:
i. Logical name displays only objects with the specified logical name.
ii. Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.
iii. Class displays only objects of the specified class, such as all the push buttons.
40) How do you configure the GUI map?
a. When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties to provide a unique identification of the object.
b. Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner. These objects are therefore assigned to the generic “object” class. When WinRunner records an operation on a custom object, it generates obj_mouse_ statements in the test script.
c. If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.
General Testing Interview Questions
1.What is 'Software Quality Assurance'?
Software QA involves the entire software development process - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'. (See the Books section for a list of useful books on Software Quality Assurance.)
2.What is 'Software Testing'?
Testing involves operation of a system or application under controlled conditions and evaluating the results (eg, 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.
4.Why is it often hard for management to get serious about quality assurance?
* Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable: In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord.
5.Why does software have bugs?
* Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements). * Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity. * Programming errors - programmers, like anyone else, can make mistakes. * Changing requirements (whether documented or undocumented) - the end-user may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control - see 'What can be done if requirements are changing continuously?' in Part 2 of the FAQ. Also see information about 'agile' approaches such as XP, also in Part 2 of the FAQ. * Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
6.How can new Software QA processes be introduced in an existing organization?
* A lot depends on the size of the organization and the risks involved. For large organizations with high-risk (in terms of lives or property) projects, serious management buy-in is required and a formalized QA process is necessary. * Where the risk is lower, management and organizational buy-in and QA implementation may be a slower, step-at-a-time process. QA processes should be balanced with productivity so as to keep bureaucracy from getting out of hand. * For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects. A lot will depend on team leads or managers, feedback to developers, and ensuring adequate communications among customers, managers, developers, and testers. * The most value for effort will often be in (a) requirements management processes, with a goal of clear, complete, testable requirement specifications embodied in requirements or design documentation, or in 'agile'-type environments extensive continuous coordination with end-users, (b) design inspections and code inspections, and (c) post-mortems/retrospectives.
7.What is verification? validation?
* Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.
8.What is a 'walkthrough'?
* A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.
9.What's an 'inspection'?
* An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading thru the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report.
10.What kinds of testing should be considered?
* Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality. * White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions. * Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses. * Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers. * Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems. * Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.) * System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
* End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. * Sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state. * Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing. * Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time. * Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails. * Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc. * Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans. * Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers. * Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes. * Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems. * Failover testing - typically used interchangeably with 'recovery testing' * Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques. * Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment. * Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it. * Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it. * Context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game. * User acceptance testing - determining if software is satisfactory to an end-user or customer. 
* Comparison testing - comparing software weaknesses and strengths to competing products. * Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers. * Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers. * Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources. 11.What are 5 common problems in the software development process? * Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements. In 'agile'-type environments, continuous coordination with customers/end-users is necessary. * Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out. * Adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug-fixing. 'Early' testing ideally includes unit testing by developers and built-in testing and diagnostic capabilities. * Stick to initial requirements as much as possible - be prepared to defend against excessive changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, work closely with customers/end-users to manage expectations. This will provide them a higher comfort level with their requirements decisions and minimize excessive changes later on. * Communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; insure that information/documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes if possible to clarify customers' expectations.
12.What is software 'quality'? * Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, and the development organizations. * Management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free. 13.What is 'good code'? * * 'Good code' is code that works, is bug free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks' code analysis tools, etc. can be used to check for problems and enforce standards. For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation: * Minimize or eliminate use of global variables. * Use descriptive function and method names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions. * Use descriptive variable names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions. * Function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable. * Function descriptions should be clearly spelled out in comments preceding a function's code. * Organize code for readability. * Use whitespace generously - vertically and horizontally. * Each line of code should contain 70 characters max. * One code statement per line. * Coding style should be consistent throughout a program (eg, use of brackets, indentations, naming conventions, etc.) * In adding comments, err on the side of too many rather than too few comments; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code. * No matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing); or if possible a separate flow chart and detailed program documentation. * Make extensive use of error handling procedures and status and error logging. * For C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading.) 
* For C++, keep class methods small, less than 50 lines of code per method is preferable. * For C++, make liberal use of exception handlers. 14.What is 'good design'? * * 'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the on-line help; some common rules-of-thumb include: * The program should act in a way that least surprises the user * It should always be evident to the user what can be done next and how to exit * The program shouldn't let the users do something stupid without warning them. 15.What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help? * SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes. * CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors. * Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable. * Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated. * Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance. * Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high. * Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required. * Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was in Software Quality Assurance. 
* ISO = 'International Organization for Standardization' - the ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements; (b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. Also see http://www.iso.ch/ for the latest information. In the U.S. the standards can be purchased via the ASQ web site at http://e-standards.asq.org/
* IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as the 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), the 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), the 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
* ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).
* Other software development/IT management process assessment methods besides CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.

16. What is the 'software life cycle'?
* The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

17. Will automated testing tools make testing easier?
* Possibly. For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or on-going long-term projects, they can be valuable.
* A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc., the application might then be retested by just 'playing back' the 'recorded' actions and comparing the logged results to check the effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task. Note that there are record/playback tools for text-based interfaces also, and for all types of platforms.
* Another common approach to automating functional testing is 'data-driven' or 'keyword-driven' automated testing, in which the test drivers are separated from the data and/or actions utilized in testing (an 'action' would be something like 'enter a value in a text box'). Test drivers can be in the form of automated test tools or custom-written testing software. The data and actions can be more easily maintained - such as via a spreadsheet - since they are separate from the test drivers. The test drivers 'read' the data/action information to perform the specified tests. This approach can enable more efficient control, development, documentation, and maintenance of automated tests/test cases. (A minimal sketch of the idea appears after this list.)
* Other automated tools can include:
* Code analyzers - monitor code complexity, adherence to standards, etc.
* Coverage analyzers - these tools check which parts of the code have been exercised by a test, and may be oriented to code statement coverage, condition coverage, path coverage, etc.
* Memory analyzers - such as bounds-checkers and leak detectors.
* Load/performance test tools - for testing client/server and web applications under various load levels.
* Web test tools - to check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
* Other tools - for test case management, documentation management, bug reporting, and configuration management.
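To illustrate the data-driven idea, here is a minimal sketch in Python; the calculate_discount function, the business rule it implements, and the in-line data table are all hypothetical, and a real framework would usually read the rows from a spreadsheet or CSV file maintained separately from the driver:

import unittest

# Hypothetical application function under test.
def calculate_discount(order_total, customer_type):
    # Assumed business rule: gold customers get 10%, silver 5%, on orders of 100 or more.
    if customer_type == "gold" and order_total >= 100:
        return 0.10
    if customer_type == "silver" and order_total >= 100:
        return 0.05
    return 0.0

# Data rows that would normally live in a tester-maintained spreadsheet or CSV:
# (order_total, customer_type, expected_discount)
TEST_DATA = [
    (150, "gold", 0.10),
    (150, "silver", 0.05),
    (50, "gold", 0.00),
]

class DataDrivenDiscountTest(unittest.TestCase):
    def test_discount_table(self):
        # The driver loops over the data; adding a test case means adding a
        # data row, not writing new test code.
        for order_total, customer_type, expected in TEST_DATA:
            with self.subTest(total=order_total, type=customer_type):
                self.assertAlmostEqual(
                    calculate_discount(order_total, customer_type), expected)

if __name__ == "__main__":
    unittest.main()

The same separation is what commercial keyword-driven tools provide on a larger scale: the driver stays stable while testers maintain the data.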
SDLC
Once upon a time, software development consisted of a programmer writing code to solve a problem or automate a procedure. Nowadays, systems are so big and complex that teams of architects, analysts, programmers, testers and users must work together to create the millions of lines of custom-written code that drive our enterprises.
To manage this, a number of system development life cycle (SDLC) models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and stabilize.
The oldest of these, and the best known, is the waterfall: a sequence of stages in which the output of each stage becomes the input for the next. These stages can be characterized and divided up in different ways, including the following:
· Project planning, feasibility study: Establishes a high-level view of the intended project and determines its goals.
· Systems analysis, requirements definition: Refines project goals into defined functions and operation of the intended application. Analyzes end-user information needs.
· Systems design: Describes desired features and operations in detail, including screen layouts, business rules, process diagrams, pseudocode and other documentation.
· Implementation: The real code is written here.
· Integration and testing: Brings all the pieces together into a special testing environment, then checks for errors, bugs and interoperability.
· Acceptance, installation, deployment: The final stage of initial development, where the software is put into production and runs actual business.
· Maintenance: What happens during the rest of the software's life: changes, corrections, additions, moves to a different computing platform, and more. This, the least glamorous and perhaps most important step of all, goes on seemingly forever.
SOFTWARE DEVELOPMENT LIFE CYCLE MODELS

I was asked to put together this high-level and traditional software life cycle information as a favor for a friend of a friend, so I thought I might as well share it with everybody.
The General Model
Software life cycle models describe phases of the software cycle and the order in which those phases are executed. There are tons of models, and many companies adopt their own, but all have very similar patterns. The general, basic model is shown below:

Each phase produces deliverables required by the next phase in the life cycle. Requirements are translated into design. Code is produced during implementation that is driven by the design. Testing verifies the deliverable of the implementation phase against requirements.
Requirements
Business requirements are gathered in this phase. This phase is the main focus of the project managers and stakeholders. Meetings with managers, stakeholders and users are held in order to determine the requirements. Who is going to use the system? How will they use the system? What data should be input into the system? What data should be output by the system? These are general questions that get answered during the requirements gathering phase. This produces a nice big list of functionality that the system should provide, which describes functions the system should perform, business logic that processes data, what data is stored and used by the system, and how the user interface should work. The overall result describes the system as a whole and what it does, not how it is actually going to do it.
Design
The software system design is produced from the results of the requirements phase. Architects have the ball in their court during this phase, and this is where their focus lies. This is where the details of how the system will work are produced. The architecture (including hardware and software), communication, and software design (UML is produced here) are all part of the deliverables of the design phase.
Implementation
Code is produced from the deliverables of the design phase during implementation, and this is the longest phase of the software development life cycle. For a developer, this is the main focus of the life cycle because this is where the code is produced. Implementation may overlap with both the design and testing phases. Many tools exist (CASE tools) to actually automate the production of code using information gathered and produced during the design phase.
Testing
During testing, the implementation is tested against the requirements to make sure that the product is actually solving the needs addressed and gathered during the requirements phase. Unit tests and system/acceptance tests are done during this phase. Unit tests act on a specific component of the system, while system tests act on the system as a whole.
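As a rough illustration of the difference, a unit test in Python's unittest style might look like the sketch below; the add_line_item function is hypothetical, and a system/acceptance test of the same product would instead exercise the assembled application end to end against the requirements.

import unittest

# Hypothetical component under test: one small piece of an ordering system.
def add_line_item(cart, name, price, quantity):
    # Append an item to the cart and return the new cart total.
    cart.append({"name": name, "price": price, "quantity": quantity})
    return sum(item["price"] * item["quantity"] for item in cart)

class AddLineItemUnitTest(unittest.TestCase):
    # Unit test: exercises one component in isolation, not the whole system.
    def test_total_is_updated(self):
        cart = []
        total = add_line_item(cart, "widget", 2.50, 4)
        self.assertEqual(total, 10.0)
        self.assertEqual(len(cart), 1)

if __name__ == "__main__":
    unittest.main()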
So in a nutshell, that is a very basic overview of the general software development life cycle model. Now let's delve into some of the traditional and widely used variations.


Waterfall Model
This is the most common and classic of life cycle models, also referred to as a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed in its entirety before the next phase can begin. At the end of each phase, a review takes place to determine if the project is on the right path and whether to continue with or discard the project. Unlike what I mentioned in the general model, phases do not overlap in a waterfall model.
Waterfall Life Cycle Model
Advantages
Simple and easy to use.
Easy to manage due to the rigidity of the model – each phase has specific deliverables and a review process.
Phases are processed and completed one at a time.
Works well for smaller projects where requirements are very well understood.
Disadvantages
Adjusting scope during the life cycle can kill a project.
No working software is produced until late during the life cycle.
High amounts of risk and uncertainty.
Poor model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Poor model where requirements are at a moderate to high risk of changing.

V-Shaped Model
Just like the waterfall model, the V-Shaped life cycle is a sequential path of execution of processes. Each phase must be completed before the next phase begins. Testing is emphasized in this model more than in the waterfall model, though. The testing procedures are developed early in the life cycle, before any coding is done, during each of the phases preceding implementation.
Requirements begin the life cycle model just like the waterfall model. Before development is started, a system test plan is created. The test plan focuses on meeting the functionality specified in the requirements gathering.
The high-level design phase focuses on system architecture and design. An integration test plan is also created in this phase in order to test the ability of the pieces of the software system to work together.
The low-level design phase is where the actual software components are designed, and unit tests are created in this phase as well.
The implementation phase is, again, where all coding takes place. Once coding is complete, the path of execution continues up the right side of the V where the test plans developed earlier are now put to use.
V-Shaped Life Cycle Model
Advantages
Simple and easy to use.
Each phase has specific deliverables.
Higher chance of success over the waterfall model due to the development of test plans early on during the life cycle.
Works well for small projects where requirements are easily understood.
Disadvantages
Very rigid, like the waterfall model.
Little flexibility; adjusting scope is difficult and expensive.
Software is developed during the implementation phase, so no early prototypes of the software are produced.
Model doesn’t provide a clear path for problems found during testing phases.

Incremental Model
The incremental model is an intuitive approach to the waterfall model. Multiple development cycles take place here, making the life cycle a “multi-waterfall” cycle. Cycles are divided up into smaller, more easily managed iterations. Each iteration passes through the requirements, design, implementation and testing phases.
A working version of software is produced during the first iteration, so you have working software early on during the software life cycle. Subsequent iterations build on the initial software produced during the first iteration.
Incremental Life Cycle Model
Advantages
Generates working software quickly and early during the software life cycle.
More flexible – less costly to change scope and requirements.
Easier to test and debug during a smaller iteration.
Easier to manage risk because risky pieces are identified and handled during their own iterations.
Each iteration is an easily managed milestone.
Disadvantages
Each phase of an iteration is rigid, and the phases do not overlap each other.
Problems may arise pertaining to system architecture because not all requirements are gathered up front for the entire software life cycle.

Spiral Model
The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral. Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risks and alternate solutions. A prototype is produced at the end of the risk analysis phase. Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral. In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.
Spiral Life Cycle Model
Advantages
· High amount of risk analysis
· Good for large and mission-critical projects.
· Software is produced early in the software life cycle.
Disadvantages
· Can be a costly model to use.
· Risk analysis requires highly specific expertise.
· Project’s success is highly dependent on the risk analysis phase.
· Doesn’t work well for smaller projects.



Testing Types

Acceptance Test
The test performed by users of a new or changed system in order to approve the system and go live.
Active Test
Introducing test data and analyzing the results. Contrast with "passive test" (below).
Ad Hoc Test
Informal testing without a test case.
Age Test (aging)
Evaluating a system's ability to perform in the future. To perform these tests, hardware and/or test data is modified to a future date.
Alpha Test
The first testing of a product in the lab. Then comes beta testing.
The first test of newly developed hardware or software in a laboratory setting. When the first round of bugs has been fixed, the product goes into beta test with actual users. For custom software, the customer may be invited into the vendor's facilities for an alpha test to ensure the client's vision has been interpreted properly by the developer.
Automated Test
Using software to test software. Automated tests may still require human intervention to monitor stages for analysis or errors.
Beta Test
Testing by end users. Follows alpha testing. A test of new or revised hardware or software that is performed by users at their facilities under normal operating conditions. Beta testing follows alpha testing. Vendors of packaged software often offer their customers the opportunity of beta testing new releases or versions, and the beta testing of elaborate products such as operating systems can take months.
Black Box Test
Testing software based on output only without any knowledge of internal operation. Contrast with "white box test."
Dirty Test
Same as "negative test."


Environment Test
A test of new software that determines whether all transactions flow properly between input, output and storage devices. Typically done by systems programmers, an environment test ensures that all the parts of the system are in place. It does not test for content or validate output, which is performed by quality assurance personnel who develop test suites.
Functional Test
Testing functional requirements of software, such as menus and key commands. Testing software based on its functional requirements. It ensures that the program physically works the way it was intended and all required menu options are present. It also ensures that the program conforms to the industry standards relevant to that environment; for example, in a Windows program, pressing F1 brings up help.
Negative Test
Using invalid input to test a program's error handling.
Passive Test
Monitoring the results of a running system without introducing any special test data. Contrast with "active test" (above).
Recovery Test
Testing a system's ability to recover from a hardware or software failure.
Regression Test
To test revised software to see if previously working functions were impacted. In software development, testing a program that has been modified in order to ensure that additional bugs have not been introduced. When a program is enhanced, testing is often done only on the new features. However, adding source code to a program often introduces errors in other routines, and many of the old and stable functions must be retested along with the new ones. Regression testing is one of the most important aspects of software testing and is often overlooked or given scant attention.
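One common safeguard is to keep the tests from earlier releases (and from previously fixed bugs) in the suite and re-run them after every change. A minimal sketch in Python, with a hypothetical format_name function that gained a middle-name feature; the old behavior must keep passing:

import unittest

# Hypothetical function that was enhanced to support middle names;
# the original first/last behavior must not break.
def format_name(first, last, middle=None):
    if middle:
        return f"{last}, {first} {middle[0]}."
    return f"{last}, {first}"

class FormatNameRegressionTest(unittest.TestCase):
    def test_existing_behavior_still_works(self):
        # Regression check: this assertion passed before the enhancement
        # and must continue to pass after it.
        self.assertEqual(format_name("Ada", "Lovelace"), "Lovelace, Ada")

    def test_new_feature(self):
        self.assertEqual(format_name("Ada", "Lovelace", "King"), "Lovelace, Ada K.")

if __name__ == "__main__":
    unittest.main()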
Smoke Test
Turn it on and see what happens. A test of new or repaired equipment by turning it on. If it smokes... guess what... it doesn't work! The term also refers to testing the basic functions of software. The term was originally coined in the manufacture of containers and pipes, where smoke was introduced to determine if there were any leaks.
System Test
Overall testing in the lab and in the user environment. See alpha test and beta test.
Test Case
A set of test data, test programs and expected results. A set of test data and test programs (test scripts) and their expected results. A test case validates one or more system requirements and generates a pass or fail.
Test Scenario
A set of test cases. A set of test cases that ensure that the business process flows are tested from end to end. They may be independent tests or a series of tests that follow each other, each dependent on the output of the previous one. The terms "test scenario" and "test case" are often used synonymously.
Test Suite
A collection of test cases and/or test scenarios. A collection of test scenarios and/or test cases that are related or that may cooperate with each other.
Unit Test
A test of one component of the system. Contrast with "system test."
User Acceptance Test (UAT)
See "acceptance test" above.
The final testing stage by users of a new or changed information system. If successful, it signals the approval to implement the system live. Cosmetic and other small changes may still be required as a result of the test, but the system is considered stable and processing data according to requirements.
White Box Test
Testing software with knowledge of the internal operation. Contrast with "black box test."

Mutation Testing

A: Mutation testing is testing in which our goal is to make the mutant software fail, and thus demonstrate the adequacy of our test cases. How do we perform mutation testing?
Step one: We create a set of mutant software. In other words, each mutant software differs from the original software by one mutation, i.e. one single syntax change made to one of its program statements, i.e. each mutant software contains one single fault.
Step two: We write and apply test cases to the original software and to the mutant software.
Step three: We evaluate the results based on the following criteria: our test case is inadequate if both the original software and all mutant software generate the same output; our test case is adequate if it detects faults in our software, or if at least one mutant software generates a different output than the original software does for our test case.
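Here is a toy illustration of those three steps in Python; the is_adult function, the hand-written mutant, and the single test case are hypothetical, and real mutation tools generate the mutants automatically rather than by hand:

# Step one: the original function and one mutant that differs by a single
# syntax change (>= mutated to >).
def is_adult(age):
    return age >= 18            # original statement

def is_adult_mutant(age):
    return age > 18             # mutant: one operator changed, one seeded fault

# Step two: apply the same test case to the original and to the mutant.
def test_case(func):
    return func(18) is True     # expectation: age 18 counts as an adult

original_passes = test_case(is_adult)        # True  -> test passes on the original
mutant_passes = test_case(is_adult_mutant)   # False -> test fails on the mutant

# Step three: evaluate. The test case is adequate for this mutant if the mutant
# produces a different output than the original (the mutant is "killed").
if original_passes and not mutant_passes:
    print("Mutant killed: the test case detected the seeded fault.")
else:
    print("Mutant survived: the test case is inadequate for this mutation.")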

Monkey Testing

A: Monkey testing is random testing performed by automated testing tools (after the latter are developed by humans). These automated testing tools are considered "monkeys", if they work at random. We call them "monkeys" because it is widely believed that if we allow six monkeys to pound on six typewriters at random, for a million years, they will recreate all the works of Isaac Asimov.

There are "smart monkeys" and "dumb monkeys". "Smart monkeys" are valuable for load and stress testing; they will find a significant number of bugs, but are also very expensive to develop. "Dumb monkeys", on the other hand, are inexpensive to develop, are able to do some basic testing, but they will find few bugs. However, the bugs "dumb monkeys" do find will be hangs and crashes, i.e. the bugs you least want to have in your software product. "Monkey testing" can be valuable, but they should not be your only testing.

Why we compare files?

We compare files for purposes of configuration management, revision control, requirements version control, or document version control. Example tools are Rational ClearCase, DOORS, PVCS, and CVS. CVS, for example, enables several, often distant, developers to work together on the same source code.
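Outside of a full revision-control tool, the same kind of comparison can be done directly; here is a minimal sketch using Python's standard difflib module, with hypothetical file names and contents:

import difflib

# Hypothetical: two revisions of the same requirements document.
old_revision = ["The system shall support 100 users.\n",
                "Reports are generated nightly.\n"]
new_revision = ["The system shall support 500 users.\n",
                "Reports are generated nightly.\n",
                "Reports can be exported to PDF.\n"]

# Produce a unified diff, the same style of output revision-control tools show.
for line in difflib.unified_diff(old_revision, new_revision,
                                 fromfile="requirements_v1.txt",
                                 tofile="requirements_v2.txt"):
    print(line, end="")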


What is stochastic testing?

Stochastic testing is the same as "monkey testing", but "stochastic testing" is a more technical-sounding name for the same testing process.
Stochastic testing is black box testing, random testing, performed by automated testing tools. Stochastic testing is a series of random tests over time. The software under test typically passes the individual tests, but our goal is to see if it can pass a large number of individual tests.

What is PDR?

PDR is an acronym. In the world of software QA or testing, it stands for "peer design review", informally known as "peer review".

What is Peer-Review?
In the most general of terms, peer-review is the act of having another writer read what you have written and respond in terms of its effectiveness. This reader attempts to identify the writing's strengths and weaknesses, and then suggests strategies for revising it. The hope is that not only will the specific piece of writing be improved, but that future writing attempts will also be more successful. Peer-review happens with all types of writing, at any stage of the process, and with all levels of writers.


Why is that my company requires a PDR?

Your company requires a PDR, because your company wants to be the owner of the very best possible design and documentation. Your company requires a PDR, because when you organize a PDR, you invite, assemble and encourage the company's best experts to voice their concerns as to what should or should not go into your design and documentation, and why.

Please don't be negative. Please do not assume your company is finding fault with your work, or distrusting you in any way. Remember, PDRs are not about you, but about design and documentation. There is a 90+ percent probability your company wants you, likes you, and trusts you, because you're a specialist, and because your company hired you after a long and careful selection process.

Your company requires a PDR, because PDRs are useful and constructive. Just about everyone - even corporate chief executive officers (CEOs) - attends PDRs from time to time. When a corporate CEO attends a PDR, he has to listen for "feedback" from shareholders. When a CEO attends a PDR, the meeting is called the "annual shareholders' meeting".


Smoke and sanity testing

There are two related test types, smoke and sanity. What are they exactly? Here we go...
The general definition (related to hardware) of smoke testing is: smoke testing is a safe, harmless procedure of blowing smoke into parts of the sewer and drain lines to detect sources of unwanted leaks and sources of sewer odors.
When it is related to software, the definition is: smoke testing is non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details.
Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
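As a rough illustration, a smoke/sanity suite is usually a handful of fast, shallow checks run before any deeper testing begins; the sketch below is in Python, and the module import and in-memory database stand in for a real application module and database connection:

import sqlite3
import unittest

class SmokeTest(unittest.TestCase):
    # Fast, shallow checks of the most crucial functions only.

    def test_application_module_imports(self):
        # Crudest possible check: the product can be loaded at all.
        import json     # stand-in for the application's own top-level module
        self.assertTrue(hasattr(json, "loads"))

    def test_database_connectivity(self):
        # Connectivity only; no business logic or data validation.
        connection = sqlite3.connect(":memory:")
        cursor = connection.execute("SELECT 1")
        self.assertEqual(cursor.fetchone()[0], 1)
        connection.close()

if __name__ == "__main__":
    unittest.main()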
