Friday, November 16, 2007
How to Create a Requirements Traceability Matrix (RTM)
How to Create a Requirements Traceability Matrix
Introduction
A successful project cannot be achieved unless the project manager has an excellent organizational skill set. Information must be readily available on demand. A good project manager will be able to identify what works and what is broken in an instant. A requirements traceability matrix is an invaluable tool for accomplishing this.
Things You'll Need
· Project deliverables
· Business requirements catalog
· Use cases
Steps:
Step One:
Create a template. There are many on the web from which to choose. The project manager, sponsor and decision makers will thank you when they receive information in a consistent and logical format.
Step Two:
Transfer data from your Business Requirements Catalog. At a bare minimum, you will need the exact requirements identified in the Business Requirements Catalog, which you should have already created.
Step Three:
Identify the requirement with a unique ID. The business requirements document should have already assigned an identifier that you will use in this matrix. If not, you will create one now and insert it next to the applicable requirement.
Step Four:
Copy the Use Case ID into the traceability matrix. You may or may not have used use cases to develop your requirements. If you did, each use case will have an identifier. Transfer that ID to this matrix so you can see from which data or scenario the requirement was born.
Step Five:
Insert the System Requirements Specification (SRS) ID into the traceability matrix. You might not be the actual author of the SRS, but there must be a line on the matrix to trace the business requirement to the corresponding system requirement needed.
Step Six:
Insert the testing data into the traceability matrix. There are many different testing methods and procedures that can be used in any project. The traceability matrix must account for the types of tests used in this project. It should clearly indicate the specific test type, the date tested and the pass/fail outcome.
Step Seven:
Review your data. Your matrix should now clearly show each deliverable requirement from conception through testing. This ensures that nothing gets moved into production haphazardly, and when asked, the Project Manager has this information at the ready.
Example:
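A minimal illustration of what a finished matrix might look like is shown below. The IDs, requirement text, dates and results are invented purely for illustration, and the exact columns should follow whatever template you chose in Step One.

Req ID | Business Requirement          | Use Case ID | SRS ID  | Test Case ID | Test Type  | Date Tested | Result
BR-001 | User can reset their password | UC-05       | SRS-012 | TC-101       | Functional | 12-Nov-2007 | Pass
BR-002 | Report can be exported to PDF | UC-09       | SRS-020 | TC-115       | System     | 13-Nov-2007 | Fail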
Posted by Naveen Verma at 2:09 AM 1 comments
Thursday, November 15, 2007
Psychology of Software Testing
Software Testing - Psychology of Software Testing
The purpose of this section is to explore differences in perspective between tester and developer (buyer & builder) and explain some of the difficulties management and staff face when working together to develop and test computer software.
Different mindsets?
We have already discussed that one of the primary purposes of testing is to find faults in software, i.e., it can be perceived as a destructive process. The development process, on the other hand, is a naturally creative one, and experience shows that staff working in development have a different mindset from that of testers. We would never argue that one group is intellectually superior to the other, merely that they view systems development from another perspective.
A developer is looking to build new and exciting software based on the user's requirements and really wants it to work (first time if possible). He or she will work long hours and is usually highly motivated and very determined to do a good job.
A tester, however, is concerned that the user really does get a system that does what they want, is reliable and doesn't do things it shouldn't. He or she will also work long hours looking for faults in the software, but will often find the job frustrating as their destructive talents take their toll on the poor developers. At this point there is often much friction between developer and tester: the developer wants to finish the system, but the tester wants all the faults in the software fixed before their work is done.
In summary
Developers
-Are perceived as very creative - they write code without which there would be no system!
-Are often highly valued within an organization.
-Are sent on relevant industry training courses to gain recognized qualifications.
-Are rarely good communicators (sorry guys)!
-Can often specialize in just one or two skills (e.g. VB, C++, JAVA, SQL).
Testers
-Are perceived as destructive - only happy when they are finding faults!
-Are often not valued within the organization.
-Usually do not have any industry-recognized qualifications (until now).
-Usually require good communication skills, tact & diplomacy.
-Normally need to be multi-talented (technical, testing, team skills).
Communication between developer and tester
It is vitally important that the tester can explain and report a fault to the developer in a professional manner to ensure the fault gets fixed. The tester must not antagonize the developer. Tact and diplomacy are essential, even if you've been up all night trying to test the wretched software.
Posted by Naveen Verma at 4:18 AM 2 comments
Monday, October 22, 2007
Start Software Testing with:
Start Software Testing with All Five Essentials in Place
Five essential elements are required for successful software testing. If any one of the five is missing or inadequate, the test effort will most likely fall far short of what could otherwise be achieved. Exploring these five essentials can help improve the effectiveness and efficiency of any software testing program.
Here are the five essential test elements:
-A test strategy that indicates which types of testing, and how much of each, will work best at finding the defects lurking in the software.
-A testing plan of the actual testing tasks that will need to be executed to carry out the test strategy.
-Test cases that have been prepared in advance in the form of detailed examples you will use to check that the software actually meets its requirements.
-Test data consisting of both input test data and database test data to use while you are executing your test cases, and
-A test environment which you will use to carry out your testing.
Test Strategy
The purpose of testing is to find defects, not to pass easy tests. A test strategy describes which types of testing seem best to do, the proposed sequence in which to perform them, and the optimum amount of effort to put into each test objective to make testing most effective. A test strategy is based on the prioritized requirements and any other available information about what is important to the customers.
Because there are always time and resource constraints, a test strategy faces up to this reality and outlines how to make the best use of whatever resources are available to locate most of the worst defects. Without a test strategy, a software company is apt to waste time on less fruitful testing and miss using some of the most powerful testing options. The test strategy should be created at about the middle of the design phase, as soon as the requirements have settled down.
Testing Plan
A testing plan is simply that part of a project plan that deals with the testing tasks. It details who will do which tasks: starting when, ending when, taking how much effort and depending on which other tasks. It provides a complete list of all the things that need to be done for testing, including all the preparation work during all of the phases before testing. It shows the dependencies among the tasks, making the critical path clear and avoiding surprises. The details of a testing plan can be filled in starting as soon as the test strategy is completed. Both the test strategy and testing plan are subject to change as the project evolves. If modification is necessary, start with the strategy first, and then the testing plan.
Test Cases
Test cases (and automated test scripts if called for by the strategy) are prepared based on the strategy which outlines how much of each type of testing to do. Test cases are developed based on prioritized requirements and acceptance criteria for the software, keeping in mind the customer's emphasis on quality dimensions and the project's latest risk assessment of what could go wrong. Except for a small amount of ad hoc testing, all test cases should be prepared in advance of the start of testing.
There are many different approaches to developing test cases. Test case development is an activity performed in parallel with software development. It is just as difficult to do a good job of coming up with test cases as it is to program the system itself. In addition to figuring out what steps to take to test the system, the requirements and business rules need to be known well enough to predict exactly what the expected results should be. Without expected results to compare to actual results, it will be impossible to say whether a test will pass or fail. A good test case checks to make sure requirements are being met and has a good chance of uncovering defects.
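As a small sketch of this idea in Python (the discount rule, function name and figures below are invented purely for illustration), a test case states the expected result up front so the actual result can be compared against it:

import unittest

def order_total(amount):
    # Hypothetical business rule: orders over 100 get a 10% discount.
    if amount > 100:
        return round(amount * 0.9, 2)
    return amount

class TestOrderTotal(unittest.TestCase):
    def test_discount_applied_above_threshold(self):
        # Expected result (180.0) was worked out in advance from the business rule.
        self.assertEqual(order_total(200), 180.0)

    def test_no_discount_at_threshold(self):
        self.assertEqual(order_total(100), 100)

if __name__ == "__main__":
    unittest.main()

If the actual total ever differs from the expected total, the test fails and a defect has been uncovered.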
Test Data
In addition to the steps to perform to execute test cases, there also is a need to systematically come up with test data to use. This often means sets of names, addresses, product orders, or whatever other information the system uses. Since query functions, change functions and delete functions are probably going to be tested, a starting database of records will be needed in addition to the examples to input. Consider how many times those doing the testing might need to go back to the starting point of the database to restart the testing, and how many new customer names will be needed for all the testing in the plan. Test data development is usually done simultaneously with test case development.
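As a rough sketch of the same idea (the records and the database interface below are assumptions for illustration, not part of any particular tool), keeping the seed data in one place makes it easy to reset the database to a known starting point before each test run:

# Illustrative seed data; a real project would hold far more records.
SEED_CUSTOMERS = [
    {"name": "Alice Example", "address": "1 Test Street"},
    {"name": "Bob Sample", "address": "2 Demo Avenue"},
]

def reset_test_database(db):
    # 'db' is assumed to expose clear() and insert(table, record);
    # substitute the real data-access layer used by the project.
    db.clear()
    for customer in SEED_CUSTOMERS:
        db.insert("customers", customer)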
Test Environment
Obviously a place and the right equipment will be needed to do the testing. Unless the software is very simple, one PC will not suffice. A test system with all of the components of the environment on which the software will eventually be used is best. Test environments may be scaled-down versions of the real thing, but all the parts need to be there for the system to actually run.
Building a test environment usually involves setting aside separate regions on mainframe computers and/or servers, networks and PCs that can be dedicated to the test effort and that can be reset to restart testing as often as needed. Sometimes lab rooms of equipment are set aside, especially for performance or usability testing. A wish list of components that will be needed is part of the test strategy, which then needs to be reality checked as part of the test planning process. Steps to set up the environment are part of the testing plan and need to be completed before testing begins.
Conclusion: All Five Needed
For those who want to improve their software testing or those who are new to software testing, it is essential to make sure you have all five of these testing elements in place. Many testers struggle with inadequate resources, undocumented requirements and lack of involvement with the development process early in the software development life cycle. Pushing for all five of the essentials, with the proper timing, is one way to significantly improve the effectiveness of testing as an essential part of software engineering.
Posted by Naveen Verma at 1:37 AM 1 comments
Thursday, October 11, 2007
Software Testing framework
Software Testing framework
Testing plays an important role in today's System Development Life Cycle. During Testing, we follow a systematic procedure to uncover defects at various stages of the life cycle.
This framework is aimed at introducing the reader to the various Test Types, Test Phases, Test Models and Test Metrics, and at providing guidance on how to perform effective Testing in the project.
Here is an outline of the Framework:
1. Introduction.
2. Verification and Validation Strategies.
3. Testing Types.
4. Test Phases.
5. Metrics.
6. Test Models.
7. Defect Tracking Process.
8. Test Process for a Project.
9. Deliverables.
Posted by Naveen Verma at 5:43 AM 0 comments
Monday, October 8, 2007
Bug Life Cycle
Life cycle of a BUG:
As defects move through the system they are given various states. At each state there are a number of possible transitions to other states. The life cycle of a bug is outlined below:
NEW
-to ASSIGNED by acceptance
-to RESOLVED by analysis and maybe fixing
-to NEW by reassignment
ASSIGNED - The owner, i.e. the person referenced by Assigned-To, has accepted this bug as something they need to work on.
-to NEW by reassignment
-to RESOLVED by analysis and maybe fixing
REOPENED - Was once resolved but has been reopened.
-to NEW by reassignment
-to ASSIGNED by acceptance
-to RESOLVED by analysis and maybe fixing
RESOLVED - Has been resolved (e.g. fixed, deemed unfixable, etc.; see the "resolution" field).
-to REOPENED by reopening
-to VERIFIED by verification
-to CLOSED by closing
VERIFIED - The resolution has been approved by QA.
-to CLOSED when the product ships
-to REOPENED by reopening
CLOSED - Over and done with.
-to REOPENED by reopening
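The transitions above can be captured as a simple lookup table and checked before a status change is saved. The Python sketch below (with an illustrative helper name) simply mirrors the list above:

# Allowed bug status transitions, mirroring the life cycle described above.
ALLOWED_TRANSITIONS = {
    "NEW": {"ASSIGNED", "RESOLVED", "NEW"},
    "ASSIGNED": {"NEW", "RESOLVED"},
    "REOPENED": {"NEW", "ASSIGNED", "RESOLVED"},
    "RESOLVED": {"REOPENED", "VERIFIED", "CLOSED"},
    "VERIFIED": {"CLOSED", "REOPENED"},
    "CLOSED": {"REOPENED"},
}

def can_transition(current_status, new_status):
    # Returns True only if the life cycle allows moving from current_status to new_status.
    return new_status in ALLOWED_TRANSITIONS.get(current_status, set())

print(can_transition("RESOLVED", "VERIFIED"))  # True
print(can_transition("CLOSED", "ASSIGNED"))    # False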
Posted by Naveen Verma at 2:02 AM 1 comments
Wednesday, October 3, 2007
Difference between Bug Tracking and Testing
Bug Tracking:
Bug tracking means receiving and filing bugs reported against a software project, and tracking those bugs until they are fixed. We use a BTS (Bug Tracking System) to track them properly. Most major software projects have their own BTS, the source code of which is often available for use by other projects.
Testing:
Software Testing is the process used to help identify the correctness, completeness, security, and quality of developed computer software.
Posted by Naveen Verma at 11:43 PM 0 comments
Monday, September 17, 2007
Quality Assurance Software Testing levels:
Testing is applied to different types of targets, in different stages or levels of work effort. These levels are typically distinguished by the roles that are best skilled to design and conduct the tests and by the techniques that are most appropriate for testing at each level. It is important to ensure that a balance of focus is retained across these different work efforts.
Developer Testing
Developer testing denotes the aspects of test design and implementation most appropriate for the team of developers to undertake. In most cases, test execution initially occurs with the developer-testing group who designed and implemented the test, but it is good practice for the developers to create their tests in such a way as to make them available to independent testing groups for execution.
Independent Testing
Independent testing denotes the test design and implementation most appropriately performed by someone who is independent from the team of developers. In most cases, test execution initially occurs with the independent testing group that designed and implemented the test, but the independent testers should create their tests to make them available to the developer testing groups for execution.
The other levels include:
Independent Stakeholder Testing - testing that is based on the needs and concerns of various stakeholders.
Unit Testing - focuses on verifying the smallest testable elements of the software.
Integration Testing - ensures that the components in the implementation model operate properly when combined to execute a use case.
System Testing - usually the target is the system's end-to-end functioning elements.
Acceptance Testing - verifies that the software is ready and that it can be used by end users to perform those functions and tasks for which the software was built.
Posted by Naveen Verma at 10:59 PM 1 comments
Key Measures of Quality Assurance Software Testing
The key measures of a test include Coverage and Quality. Test Coverage is the measurement of testing completeness. It is based on the coverage of testing expressed by the coverage of test requirements and test cases or by the coverage of executed code. Test coverage includes requirements based coverage and code based coverage. Quality is a measure of the reliability, stability, and performance of the target-of-test (system or application-under-test). Quality is based on evaluating test results and analyzing change requests (defects) identified during testing.
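As a minimal worked example (the numbers are made up): if 40 out of 50 test requirements are exercised by at least one executed test case, requirements-based coverage is 40 / 50 = 80%. The same calculation as a small Python sketch:

def requirements_coverage(covered_requirements, total_requirements):
    # Percentage of test requirements covered by at least one executed test case.
    return 100.0 * covered_requirements / total_requirements

print(requirements_coverage(40, 50))  # 80.0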
Posted by Naveen Verma at 10:58 PM 0 comments
Advantages of Quality Assurance Software Testing
The most effective way to reduce risk is to start testing early in the development cycle and to test iteratively, with every build. With this approach, defects are removed as the features are implemented. The testing of the application is completed shortly after the final features are coded, and as a result the product is ready for release much earlier. Additionally, the knowledge of what features are completed (i.e. both coded and tested) affords management greater control over the entire process and promotes effective execution of the business strategy. Testing with every iteration may require some additional upfront planning between developers and testers, and a more earnest effort to design for testability; but these are both inherently positive undertakings, and the rewards are substantial.
There are several key advantages gained by testing early and with every build to close the quality gap quickly:
Risk is identified and reduced in the primary stages of development instead of in the closing stages.
Repairs to problems are less costly.
The release date can be more accurately predicted throughout the project.
Test results can be traced back to individual requirements.
The product can be shipped sooner.
The business strategy can be executed more effectively.
Transparency is established.
Artifacts can be reused for regression testing.
The process is not bound to any particular vendor.
Posted by Naveen Verma at 10:58 PM 2 comments