Thursday, August 13, 2009

Black Box Testing Vs White Box Testing

Black Box Testing

Black Box Testing refers to the technique of testing a system with no knowledge of the internals of the system. Black Box testers do not have access to the source code and are oblivious of the system architecture.

A Black Box tester typically interacts with a system through a user interface, providing inputs and examining outputs without knowing how or where the inputs were operated upon.

In Black Box testing, target software is exercised over a range of inputs and the outputs are observed for correctness.
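As a minimal sketch (in Python, with an invented withdraw function standing in for the system under test), a black-box test pairs inputs with expected outputs and never references the internals:

```python
import unittest

def withdraw(balance: float, amount: float) -> float:
    """Stand-in for the system under test. A Black Box tester sees only
    the signature and observable behaviour, never this body."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class BlackBoxWithdrawTest(unittest.TestCase):
    def test_valid_withdrawal_returns_new_balance(self):
        # Input -> expected output, with no reference to internals.
        self.assertEqual(withdraw(100, 40), 60)

    def test_overdraft_is_rejected(self):
        with self.assertRaises(ValueError):
            withdraw(100, 150)

if __name__ == "__main__":
    unittest.main()
```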

Advantages

  1. Efficient Testing- Well-suited and efficient for large code segments or units
  2. Unbiased Testing- Clearly separates the user's perspective from the developer's perspective.
  3. Non-intrusive- Code access not required
  4. Easy to execute- can be carried out by moderately skilled testers with no knowledge of implementation, programming language, operating systems or networks.
Disadvantages

  1. Localized Testing- Limited code path coverage since only a limited number of test inputs are actually tested.
  2. Inefficient Test Authoring- Without implementation information, exhaustive input coverage would take forever and would require tremendous resources.
  3. Blind Coverage- Cannot target specific code segments, even though some may be more error-prone than others.
White Box Testing

White Box testing refers to the technique of testing a system with knowledge of the internals of the system. White Box testers have access to the source code and are aware of the system architecture.

A White Box tester typically analyzes source code, derives test cases from knowledge of the source code and finally targets specific code paths to achieve a certain level of code coverage.

A White Box tester, with access to implementation details, can readily craft efficient test cases that exercise boundary conditions.
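As a sketch of why that access helps, suppose the tester can read an invented credit-limit check. Knowing the exact comparison operators tells the tester which values sit on either side of each boundary:

```python
import unittest

def credit_ok(limit: int) -> bool:
    # Implementation visible to the White Box tester (invented for this sketch).
    return 10_000 <= limit <= 15_000

class WhiteBoxBoundaryTest(unittest.TestCase):
    def test_both_sides_of_each_boundary(self):
        # Test values derived from the <= operators in the source,
        # not from the specification alone.
        self.assertFalse(credit_ok(9_999))
        self.assertTrue(credit_ok(10_000))
        self.assertTrue(credit_ok(15_000))
        self.assertFalse(credit_ok(15_001))

if __name__ == "__main__":
    unittest.main()
```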

Advantages

  1. Increased Effectiveness- Cross-checking design decisions and assumptions against source code quickly pinpoints faulty assumptions.
  2. Full code pathway capable- All the possible code pathways can be tested including error handling, resource dependencies and additional internal code logic/flow.
  3. Early Defect Identification- Analyzing source code and developing test based on the implementation details enables testers to find programming errors quickly.
  4. Reveal hidden code flaws- Access to source code improves understanding of the program and helps uncover unintended hidden behaviour of program modules.
Disadvantages

  1. Difficult to scale- requires skilled and expert testers, with intimate knowledge of target system, testing tools and coding languages.
  2. Difficult to maintain- requires specialized tools such as source code analyzers, debuggers and fault injectors.
  3. Highly Intrusive- Requires code modification to be done using interactive debuggers or by actually changing the source code. This may be adequate for small programs, but it does not scale well to larger applications.

Waterfall Model Vs V-Model

Waterfall Model

Sequential software development process in which progress is seen as flowing steadily downwards from requirements analysis to design, construction, testing and maintenance.

Phases

  1. Requirements Specification
  2. Design
  3. Construction (Coding)
  4. Integration
  5. Testing and Debugging
  6. Installation
  7. Maintenance
The Waterfall model maintains that one should move to a phase only when its preceding phase is completed and perfected. To follow this model, one proceeds from one phase to the next in a purely sequential manner.

Advantages

  1. Each phase must be 100% complete and correct before proceeding to the next. The premise is that time spent early on making sure requirements and design are correct saves much time and effort later.
  2. It emphasizes documentation, such as requirements documents, design documents and source code, so new team members, or even entirely new teams, can familiarize themselves with the project by reading the documents.
  3. It follows a simple approach and is more disciplined.
  4. It is suited to software projects that are stable, with unchanging requirements.
Disadvantages

  1. It is impossible to get one phase of a product's life-cycle perfected before moving on to the next phase.
  2. Clients may change their requirements after a design is finished, so the entire design must be modified to accommodate the new requirements.


V-Model


The V-Model is a software development process which can be seen as an extension of the Waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase to form the typical V shape. The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing.

Testing activities like test designing start at the beginning of the project, well before coding, and therefore save a huge amount of project time.

Phases

The verification phases are on the left-hand side of the V, the coding phase is at the bottom of the V, and the validation phases are on the right-hand side.

Verification Phases

  1. Requirements Analysis
  2. System Design
  3. Architecture Design
  4. Module Design
Validation Phases

  1. Unit Testing
  2. Integration Testing
  3. System Testing
  4. User Acceptance Testing


Friday, May 29, 2009

Bug Life Cycle

A bug can be defined as abnormal behaviour of the software. No software exists without bugs. The elimination of bugs from software depends upon the efficiency of the testing done on it.

Bug Life Cycle

In the software development process, a bug has a life cycle. A bug must go through this life cycle to be closed, and it attains different states along the way. The states in the life cycle are listed below (a small sketch modeling them in code follows the list)-

  1. New
  2. Open
  3. Assign
  4. Test
  5. Verified
  6. Deferred
  7. Reopened
  8. Duplicate
  9. Rejected
  10. Closed
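Purely as an illustration, the states and a plausible subset of the transitions described below can be modeled as a small state machine in Python; the transition table here is an assumption about one workflow, not a prescription of any particular bug tracker:

```python
from enum import Enum

class BugState(Enum):
    NEW = "New"
    OPEN = "Open"
    ASSIGN = "Assign"
    TEST = "Test"
    VERIFIED = "Verified"
    DEFERRED = "Deferred"
    REOPENED = "Reopened"
    DUPLICATE = "Duplicate"
    REJECTED = "Rejected"
    CLOSED = "Closed"

# Assumed, illustrative subset of legal transitions; real bug trackers
# define their own workflow rules.
TRANSITIONS = {
    BugState.NEW: {BugState.OPEN, BugState.REJECTED, BugState.DUPLICATE},
    BugState.OPEN: {BugState.ASSIGN},
    BugState.ASSIGN: {BugState.TEST, BugState.DEFERRED},
    BugState.TEST: {BugState.VERIFIED, BugState.REOPENED},
    BugState.VERIFIED: {BugState.CLOSED},
    BugState.REOPENED: {BugState.ASSIGN},
}

def move(current: BugState, target: BugState) -> BugState:
    """Allow a state change only along a permitted transition."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target

# Example: a bug that is opened, fixed, verified and closed.
state = BugState.NEW
for nxt in (BugState.OPEN, BugState.ASSIGN, BugState.TEST,
            BugState.VERIFIED, BugState.CLOSED):
    state = move(state, nxt)
```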
New

When the bug is posted for the first time, its state will be 'New'. This means that the bug is not yet approved.

Open

After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes the state to 'Open'.

Assign

Once the lead changes the state to 'Open', he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to 'Assign'.

Test

Once the developer fixes the bug, he has to assign it to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to 'Test'.

Rejected

If the developer feels that the bug is not genuine, he rejects the bug, changing the status of the bug to 'Rejected'.

Deferred

A bug whose state is changed to 'Deferred' is expected to be fixed in a subsequent release.

Duplicate

If the same bug is reported twice, or two bugs describe the same issue, the state of one of them is changed to 'Duplicate'.

Verified

Once the bug is fixed and the status is changed to 'Test', the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to 'Verified'.

Reopened

If the bug still exists even after the bug is fixed by the developer, the tester changes the status to 'Reopened'. The bug traverses the life cycle once again.

Closed

Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to 'Closed'. This state means that the bug is fixed, tested and approved.

Priority Vs Severity

Priority

It describes the importance of a bug and the order in which bugs should be fixed. It is used by developers to prioritize their work.

Severity

Indicates the impact each defect has on testing efforts or users and administrators of the application under test. This information is used by developers and management as the basis for assigning priority of work on defects.

Priority is business whereas severity is technical.

Business Priority- How important is it to the business that the bug be fixed?
Technical Severity- How important is it to fix the bug from a technical perspective?

Thursday, May 28, 2009

Testing Maturity Model (TMM)

TMM defines a set of levels that form a testing maturity hierarchy. Each level represents a stage in the evolution toward a mature testing process and comprises a set of maturity goals along with the activities, tasks and responsibilities needed to support them.

Testing Maturity Model recommends practices that allow organisations to improve their testing process.

Why TMM?

Testing is a critical component of the software development process; it is one of the most challenging and costly process activities, and it provides strong support for the production of quality software.

TMM contains a set of maturity levels through which an organisation can progress toward testing process maturity, a set of recommended practices at each level of maturity, and an assessment model that will allow organizations to evaluate and improve their testing process.

Unlike the Software Engineering Institute's Capability Maturity Model (CMM), the TMM specifically addresses issues important to test managers, test specialists and software quality assurance staff.

Testing Maturity Model provides a set of levels and an assessment model and presents a set of recommended practices that allow organizations to improve their testing process.

The levels in TMM can be roughly stated as:

Level 1: Initial

Testing is a chaotic, ad hoc process that is not distinguished from debugging.

Level 2: Phase Definition

Initiating a test plan and developing testing goals.

Level 3: Integration

Integrating testing into the software life cycle and monitoring the test process.

Level 4: Management & Measurement

Establishing a test measurement program through reviews and software quality evaluation.

Level 5: Optimization, Defect Prevention & Quality Control

Applying process data for defect prevention and quality control.

Software Process, SDLC & STLC

Software Process

Software process deals with the technical and management issues of developing software. It specifies the abstract set of activities that should be performed to go from user needs to the final product.

Software process follows the PDCA cycle.

PDCA Cycle

1) Plan (P): Devise a plan. Define your objective and determine the strategy and supporting methods required to achieve it.

2) Do (D): Execute the plan. Create the conditions and perform the necessary training to carry out the plan.

3) Check (C): Check the results to determine whether work is progressing according to the plan and whether the expected results are being obtained.

4) Act (A): Take the necessary action if work is not being performed according to plan or if the results are not what was anticipated.

SDLC

The software development life cycle consists of the following phases.
  • Requirements Analysis
  • Design
  • Development
  • Testing
  • Implementation
  • Maintenance

Requirements Analysis

The main objective of requirements analysis is to produce a document that properly specifies all requirements of the customer. The Software Requirements Specification (SRS) is the primary output of this phase. Proper requirements analysis and specification are critical to a successful project.


Design Process

The development process is the process by which the user requirements are elicited and software satisfying these requirements is designed, built, tested and delivered to the customer.

High-Level Design (System Design)

High-Level Design is the phase of the life cycle in which a logical view of the solution is developed. The participants in this phase are the design team, the review team and the customer. The entry criterion is that the SRS document has been reviewed and approved. The exit criterion is that the high-level design documents have been reviewed and approved.

Low-Level Design (Detailed Design)

During the detailed design phase, the view of the application developed during high-level design is broken down into modules and programs. The participants in this phase are the members of the design team. The entry criterion is that the high-level design documents have been reviewed and authorized. The input is the high-level design documents, and the output is the functional specification document and unit test plans. The exit criterion is that the program specifications and unit test plans have been reviewed and authorized.


Development (Coding)

During the build phase, the detailed design is used to produce the required programs in a programming language. This stage produces the source code and databases, following the appropriate coding standards. The output of this phase is the subject of subsequent testing and validation. The participants are the members of the team and the team leader. The entry criterion is that the program specifications have been reviewed and authorized.


Testing

Unit Testing

Unit testing is the testing carried out on individual modules/programs immediately after they are coded. The exit criterion is that all test cases in the unit test plan are successfully executed.

Integration Testing

Integration is the systematic approach of building the complete software specified in the design from unit-tested modules. During this phase, tests are also conducted to find defects associated with interfacing. Integration is performed in the order specified in the integration plan, and the corresponding test cases for each integration step are executed. The entry criterion is that the high-level design documents have been reviewed and authorized. The exit criterion is that the integration plan and integration test plan have been reviewed and authorized.

System Testing

System testing is an activity to validate the software product against the requirements specification. This stage is intended to find defects that can be exposed only by testing the entire system. The entry criterion is that the SRS document has been reviewed and authorized. Preparation of the system test plan is often done in parallel with coding. The exit criterion is that the system test plans have been reviewed and authorized.


Acceptance & Installation(Implementation)

Acceptance and installation is the phase in the software life cycle in which a software product is integrated into its operational environment and tested in this environment to ensure that it performs as required. This phase includes getting the software accepted and installing the software at the customer site.

Acceptance Testing is the formal testing conducted by the customer according to the acceptance test plan prepared earlier, followed by analysis of the test results to determine whether the system satisfies the acceptance criteria. When the results satisfy the acceptance criteria, the user accepts the software. Installation involves placing the accepted software in the actual production environment.

The main inputs are the tested software and the acceptance criteria document. The exit criterion is that the customer signs off the acceptance letter, and the main output is the installed software.


STLC

The Software Testing Life Cycle consists of the following phases:

  1. Test Plan Preparation
  2. Test Case Design
  3. Test Execution & Test Log Preparation
  4. Defect Tracking
  5. Test Report Preparation

Recovery Testing

Recovery testing is testing conducted to determine how well an application recovers from crashes, hardware failures and other similar problems.

Alpha Testing

Alpha Testing is a type of acceptance testing done in the presence of the customer at the developer's site. The customer performs the test in an environment similar to the actual working environment of the software.

Beta Testing

Type of acceptance testing done by the customer in a live application of the software at the end user's site in an environment not controlled by the developer.

Ad hoc Testing

Ad hoc testing is testing in which the focus is not only on planned test case execution, but also on unplanned testing.

Sanity Testing

Sanity Testing is a narrow regression test that focuses on one or a few areas of functionality. Usually unscripted, it is used to determine whether a small section of the application still works after a minor change, verifying that basic functionality works properly and requirements are met.

Smoke Testing

Smoke Testing is testing all areas of the application without getting into too many details. It is scripted, either using a written set of tests or automated tests. Smoke testing is conducted to ensure whether the most crucial functions of a program are working. It is designed to touch every part of the application in a cursory way.

End To End Testing

Testing the application from start to finish after integrating all the modules.

Monkey Testing

Monkey testing simulates the behaviour of a monkey jumping from one tree to another in search of better fruit, assuming that all fruits are similar. It is automated testing done randomly, without typical user specifications. Monkey testing is used to simulate how customers will use the software in real time.

Retesting

After any change is made to the application, testing only the particular module and not the entire application is called retesting.

Regression Testing

Testing done to ensure that enhancements or defect fixes made to the software work properly and do not affect existing functionality. Compared to retesting, regression testing exercises the entire application to ensure that changes made in a particular module do not affect previously working modules.

Compatibility Testing

Testing conducted on the application to evaluate the application's compatibility with the computing environment.

Positive Testing

Positive testing is testing done to verify known test conditions.

Negative Testing

Negative testing is testing the application for improper conditions and invalid inputs.
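A small sketch showing both sides, using an invented parse_age function: the positive case verifies a known-good condition, while the negative cases feed improper and invalid inputs and expect rejection:

```python
import unittest

def parse_age(text: str) -> int:
    """Invented function under test: parse a plausible human age."""
    value = int(text)  # raises ValueError on non-numeric input
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

class AgeTests(unittest.TestCase):
    def test_positive_valid_input(self):
        # Positive testing: a known-good input behaves as expected.
        self.assertEqual(parse_age("42"), 42)

    def test_negative_invalid_inputs(self):
        # Negative testing: improper conditions and invalid inputs.
        for bad in ("-1", "abc", "999"):
            with self.assertRaises(ValueError):
                parse_age(bad)

if __name__ == "__main__":
    unittest.main()
```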

Grey Box Testing

Grey Box testing refers to the technique of testing a system with limited knowledge of the internals of the system. It involves having access to internal data structures and algorithms for the purpose of designing test cases, but testing at the user or black box level.

Quality, QC & QA

Quality Principles

What is quality?
  • Quality is defined as meeting the customer's requirements the first time and every time.
  • Quality is much more than the absence of defects; it is what enables us to meet customers' expectations.
  • Quality can only be seen through the eyes of the customer.
Why is quality important?
  • Quality is the most important factor affecting an organisation's performance and business.
  • Quality is the way to achieve improved productivity and competitiveness in any organisation.
Quality Assurance & Quality Control

Quality Assurance
  • It is a planned and systematic set of activities necessary to provide adequate confidence that the products and services will conform to specified requirements and meet user needs.
  • Quality Assurance is preventing errors/defects.
  • Quality Assurance is process-based.
Quality Control
  • It is the process by which product quality is compared with applicable standards and the action taken when non-conformance is detected.
  • Quality control is detecting errors and work is done to ensure that the product conforms to standards/requirements.
  • Quality Control is product-based.

Testing Terms

Difference between Test Plan and Test Case

Test Plan

It is the road map or the entire set of activities planned in the testing phase of an application.

Test Case

It is the set of procedures executed on a system in order to find defects/errors.

Difference between Bug, Failure and Fault

Bug

A bug is any deviation between the expected results and the actual results.

Failure

Failure is any defect which occurs after the execution of the project.

Fault

Fault is any defect identified before execution or completion of the project.

Difference between Version, Variant and Release

Version


An instance of a system which is functionally distinct in some way from other system instances.

Variant

An instance of a system which is functionally identical to, but non-functionally distinct from, other instances of the system.

Release

An instance of a system, which is distributed to users outside of the development team.

Difference between Wrong, Missing & Extra (Defect)


A defect is a variance from a desired product attribute, either from the product specifications (producer view) or from customer/user expectations (customer view).

Defects are of the following three categories:

Wrong

The specifications have been implemented incorrectly; the defect is a variance from the customer/user specification.

Missing

A specified or wanted requirement is not in the built product. This can be a variance from the specification, or the specification may simply not have been implemented.

Extra

A requirement incorporated into the product that was not specified. This is always a variance from the specification, but it may be an attribute desired by the users of the product.

Wednesday, May 27, 2009

Testing Techniques

Black Box Testing

Black Box testing validates that the software meets the functional requirements irrespective of the paths of execution taken to meet each requirement. It is conducted on integrated, functional components whose design integrity has been verified through the completion of traceable white box tests.

Black Box testing treats the software as a 'black box', without any knowledge of its internal implementation. It is specification-based testing which aims to test the functionality of the software according to the applicable requirements. The tester feeds data into the test object and sees only its output. This level of testing usually requires thorough test cases to be provided; the tester then simply verifies that for a given input, the output value 'is' or 'is not' the same as the expected value specified in the test case.

When creating black box test cases, the input data used is critical. Three successful techniques for managing input data are:
  • Equivalence Partitioning
  • Boundary Value Analysis
  • Error Guessing
Equivalence Partitioning

This technique is used to reduce the number of test cases to a necessary minimum and to select the right test cases out of a larger class rather than undertaking exhaustive testing of each value of the larger class.

For example, a program which edits credit limits within a given range ($10,000-$15,000) would have three equivalence classes: less than $10,000, between $10,000 and $15,000, and greater than $15,000. The values within one partition are considered to be equivalent, so it is sufficient to select one value from each partition. Thus the number of test cases can be reduced considerably.
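A sketch of those three partitions as tests, with an invented within_credit_limit function standing in for the credit-limit edit; one representative value is picked per class:

```python
import unittest

def within_credit_limit(amount: int) -> bool:
    """Invented stand-in for the $10,000-$15,000 credit-limit edit."""
    return 10_000 <= amount <= 15_000

class EquivalencePartitionTest(unittest.TestCase):
    # One representative value stands in for each whole partition.
    def test_below_valid_partition(self):
        self.assertFalse(within_credit_limit(5_000))   # class: < $10,000

    def test_valid_partition(self):
        self.assertTrue(within_credit_limit(12_500))   # class: $10,000-$15,000

    def test_above_valid_partition(self):
        self.assertFalse(within_credit_limit(20_000))  # class: > $15,000

if __name__ == "__main__":
    unittest.main()
```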

Boundary Value Analysis

The technique that consists of developing test cases and data that focus on the input and output boundaries of a given function.

For the above example, the boundary values would be the low boundary plus or minus one ($9,999 and $10,001), on the boundary ($10,000 and $15,000), and the upper boundary plus or minus one ($14,999 and $15,001).
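Extending the same invented stand-in, those six boundary values become a compact table-driven test:

```python
import unittest

def within_credit_limit(amount: int) -> bool:
    return 10_000 <= amount <= 15_000  # same invented edit as above

class BoundaryValueTest(unittest.TestCase):
    def test_boundary_values(self):
        # Just below, on, and just above each boundary of the valid range.
        cases = {9_999: False, 10_000: True, 10_001: True,
                 14_999: True, 15_000: True, 15_001: False}
        for amount, expected in cases.items():
            self.assertEqual(within_credit_limit(amount), expected, amount)

if __name__ == "__main__":
    unittest.main()
```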

Error Guessing

This technique is based on the theory that test cases can be developed based upon the intuition and experience of the test engineer.

For example, in a test case where the input is a date, the test engineer may try February 29, 2000.
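That date example can be written directly as a test. The is_valid_date function below is invented and delegates to the standard library so the sketch runs; note that February 29, 2000 is in fact a valid date (2000 was a leap year), which is precisely why an experienced tester guesses it: naive century rules get it wrong.

```python
import unittest
from datetime import date

def is_valid_date(year: int, month: int, day: int) -> bool:
    """Invented validator under test, backed by the standard library."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

class ErrorGuessingTest(unittest.TestCase):
    def test_guessed_trouble_spots(self):
        self.assertTrue(is_valid_date(2000, 2, 29))   # 2000 IS a leap year
        self.assertFalse(is_valid_date(1900, 2, 29))  # 1900 was not
        self.assertFalse(is_valid_date(2009, 4, 31))  # April has 30 days

if __name__ == "__main__":
    unittest.main()
```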

Remark

Specification-based testing is necessary, but it is insufficient to guard against certain risks, because the tester doesn't know how the software being tested was actually constructed.


White Box Testing

White Box testing examines the basic program structure and derives the test data from the program logic, ensuring that all statements and conditions have been executed at least once.

The tester has access to the internal data structures and algorithms and the code that implements them. The tester validates that the software design is valid and that the software was built according to the specified design.

Code Coverage

The test designer can create test cases to ensure that all statements in the program have been executed at least once. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.

Code coverage includes decision coverage, statement coverage and condition coverage.
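A tiny invented example of the difference between statement and decision coverage:

```python
def classify(n: int) -> str:
    if n < 0:
        return "negative"
    return "non-negative"

# classify(-1) alone exercises only the True outcome of the decision and
# leaves the final return statement unexecuted. Adding classify(5) brings
# both statement and decision coverage to 100% for this function.
assert classify(-1) == "negative"
assert classify(5) == "non-negative"
```

Tools such as coverage.py can report which statements and branches a test suite actually executed.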

Incremental Testing

Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide an early feedback to software developers. It involves adding unit-tested programs to a given module or component one by one, and testing each result and combination.

Types

Top-down- Begins testing from the top of the module hierarchy and works down to the bottom.

Bottom-up- Begins testing from the bottom of the hierarchy and works up to the top.

Thread Testing

Thread testing is often used during early integration testing. It is done by testing a string of units that accomplish a specific function in the application.

Thread testing and incremental testing are usually utilized together. Units can undergo incremental testing until enough units are integrated and a single business function can be performed threading through the integrated components.

Testing Levels

Testing Process

The recent practice in software testing is to start testing at the same moment the project starts and to continue it as an ongoing process until the project finishes. The levels in software testing can be differentiated as
  1. Unit Testing
  2. Integration Testing
  3. System Testing
  4. Acceptance Testing
Unit Testing

Unit testing is testing in which an individual unit/module of the software is tested in isolation from the other parts of the program.
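A minimal, self-contained sketch using Python's standard unittest module; the is_even function is invented purely to give us an isolated unit to test:

```python
import unittest

def is_even(n: int) -> bool:
    """Invented unit under test, isolated from the rest of the program."""
    return n % 2 == 0

class IsEvenTest(unittest.TestCase):
    def test_even_number(self):
        self.assertTrue(is_even(4))

    def test_odd_number(self):
        self.assertFalse(is_even(7))

if __name__ == "__main__":
    unittest.main()
```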

Integration Testing

Integration testing refers to the testing in which individual software units are combined and the communication interfaces between them are tested.

Types
  • Big Bang Testing
  • Bottom Up Testing
  • Top Down Testing
Big Bang Testing- Big Bang Testing is the type of integration testing in which every module is first unit tested in isolation from every other module; all the modules are then combined at once and tested.

Bottom Up Integration Testing- In Bottom Up testing, lower level modules are tested, then the next set of higher level modules are tested with the previously tested lower modules.

Top Down Integration Testing- In Top Down testing, higher level modules are tested first, then the next set of lower level modules are tested with the previously tested higher modules.
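As a sketch of the top-down flavour (all module names invented), a higher-level module can be tested before its lower-level dependency exists by substituting a stub for that dependency:

```python
import unittest
from unittest.mock import Mock

class OrderService:
    """Higher-level module, tested first in top-down integration."""
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway  # lower-level dependency

    def place_order(self, amount: float) -> str:
        return "placed" if self.payment_gateway.charge(amount) else "declined"

class TopDownIntegrationTest(unittest.TestCase):
    def test_order_against_stubbed_lower_module(self):
        # The real payment module is not integrated yet; a stub stands in.
        stub_gateway = Mock()
        stub_gateway.charge.return_value = True
        self.assertEqual(OrderService(stub_gateway).place_order(25.0), "placed")
        stub_gateway.charge.assert_called_once_with(25.0)

if __name__ == "__main__":
    unittest.main()
```

In bottom-up integration the mirror image applies: a test driver exercises the lower-level modules before their higher-level callers exist.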

System Testing

System testing is the testing conducted on a complete, integrated system in accordance with its specified requirements.

Acceptance Testing

Acceptance testing is the testing conducted by the client/user to evaluate the system as per the business requirements.

Software Testing

Testing Principles

  • Testing is a process of executing a program with the intent of finding an error.
  • A good test case is one that has a high probability of finding an as-yet undiscovered error.
  • A successful test is one that uncovers an as-yet undiscovered error.
  • All tests should be traceable to customer requirements.
  • Tests should be planned long before testing begins.
  • The Pareto Principle applies to software testing, which states that 80% of all errors uncovered during testing will likely be traceable to 20% of all program components.
  • Testing should begin 'in the small' and progress toward testing 'in the large'.
  • Exhaustive testing is not possible.

What is Software Testing?


Software testing is a process of evaluating a system by manual or automated means to verify that it satisfies specified requirements, or to identify differences between expected and actual results.

Why Software Testing?

Effective software testing helps deliver quality software products that satisfy user requirements, needs and expectations. If testing is not done properly, defects are found during operation, resulting in high maintenance costs and user dissatisfaction.

Objectives of software testing
  • To find bugs
  • To find bugs as early as possible
  • To make sure that the bugs get fixed

Manual & Automated Testing

Merits & Demerits

Manual Testing

Merits
  • Cost-effective
  • Easily understood as it involves more documentation
Demerits
  • Consumes more time
  • More work is to be done by testers
Automated Testing

Merits
  • Consumes less time
  • Less work to be done by testers
  • Can be resumed by anyone even if testing is stopped half-way
Demerits
  • Testing tools are costly