Monday, 3 June 2013

Black Box Testing

Black box testing is a testing technique that ignores the internal mechanism of the system and focuses on the outputs generated in response to selected inputs and execution conditions. It is also called functional testing.

Black-box testing methods include:

1. Equivalence partitioning 
Equivalence partitioning (also called Equivalence Class Partitioning or ECP) is a software testing technique that divides the input data of a software unit into partitions of equivalent data from which test cases can be derived.

This method is typically used to reduce the total number of test cases to a small, manageable set that still covers the requirements.
E.g.: If you are testing an input box that accepts numbers from 1 to 1000, there is no point in writing a thousand test cases for all 1000 valid input numbers plus further test cases for every invalid value.

Using the equivalence partitioning method, the test data above can be divided into three classes of input data. Each test case is a representative of its class.

Valid partition: The input data class containing all valid inputs. Pick a single value from the range 1 to 1000 as a valid test case; any other value between 1 and 1000 should produce the same result, so one test case for valid input data is sufficient.

Invalid partition 1: The input data class containing all values below the lower limit, i.e. any value below 1, as an invalid input data test case.

Invalid partition 2: The input data class containing any value greater than 1000, representing the third input class.
      ........-3,-2,-1,0   |   1,2,3.......999,1000   |   1001,1002,.........
      ---------------------|--------------------------|----------------------
       invalid partition 1        valid partition         invalid partition 2
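
As a rough sketch of how these partitions translate into test data, the following Python snippet checks one representative value per class against a hypothetical accepts_number() validator (the function name and the chosen values are assumptions for illustration, not part of any particular system):

    # Equivalence partitioning sketch for an input field accepting 1..1000.
    # accepts_number() is a hypothetical validator standing in for the real
    # input-field logic under test.
    def accepts_number(value):
        return 1 <= value <= 1000

    # One representative value per equivalence class is enough.
    test_data = {
        "invalid partition 1 (below 1)":    (-5,   False),
        "valid partition (1 to 1000)":      (500,  True),
        "invalid partition 2 (above 1000)": (1500, False),
    }

    for name, (value, expected) in test_data.items():
        actual = accepts_number(value)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: {name} -> input {value}, expected {expected}, got {actual}")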

2. Boundary value analysis:

Boundary Value Analysis (BVA) is a software testing technique used to verify behaviour at the boundaries of input ranges. In many applications, errors occur at the boundaries of input values, and boundary value analysis is used to identify errors at those boundaries.

Boundary value analysis is mostly used when checking a range of numbers as input. For each range there are two boundaries, the lower boundary and the upper boundary; the boundaries are the beginning and end of each valid partition. We should write test cases that validate the program's functionality at the boundaries, and with values just inside and just outside the boundaries.

Consider a test scenario where we need to test an input field that accepts numbers in the range 1 to 100. Most of the errors are likely to occur at the boundaries of the input values. To test this scenario:

Test cases for input box accepting numbers between 1 and 100 using Boundary value analysis:

1) Test cases with test data exactly at the boundaries of the input domain, i.e. values 1 and 100 in our case.
0,1,2..................99,100,101

2) Test data with values just below the boundaries of the input domain, i.e. values 0 and 99.
0,1,2..................99,100,101

3) Test data with values just above the boundaries of the input domain, i.e. values 2 and 101.
0,1,2..................99,100,101

Boundary value analysis is often considered a part of negative testing.

By checking the above conditions we can find hidden errors at the input boundary values.
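
The same idea can be turned into a small Python sketch that generates the boundary test values for the 1 to 100 field described above; the accepts_number() validator is again a hypothetical stand-in for the field under test:

    # Boundary value analysis sketch for an input field accepting 1..100.
    def boundary_values(lower, upper):
        # Values at, just inside, and just outside each boundary.
        return sorted({lower - 1, lower, lower + 1, upper - 1, upper, upper + 1})

    def accepts_number(value, lower=1, upper=100):
        # Hypothetical validator standing in for the field under test.
        return lower <= value <= upper

    for value in boundary_values(1, 100):
        expected = 1 <= value <= 100
        actual = accepts_number(value)
        status = "PASS" if actual == expected else "FAIL"
        print(f"input {value:4d}: expected {expected}, got {actual}, {status}")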

3. All-pairs testing
4. State transition tables
5. Decision table testing
6. Use case testing
7. Exploratory testing and specification-based testing

Wednesday, 22 May 2013

Introduction to Software Testing


What is Software Testing?
  • Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.
  • (IEEE) - Software testing is the process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software item.
  • Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects).
  • Software testing is a process used to identify the correctness, completeness, and quality of developed computer software.
  • Software testing can be stated as the process of validating and verifying that a computer program/application/product:
             meets the requirements that guided its design and development,
             works as expected,
             can be implemented with the same characteristics, and
             satisfies the needs of stakeholders.


Software testing, depending on the testing method employed, can be implemented at any time in the development process. Traditionally most of the test effort occurs after the requirements have been defined and the coding process has been completed, but in the Agile approaches most of the test effort is on-going. As such, the methodology of the test is governed by the chosen software development methodology.

Testing can never completely identify all the defects within software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against oracles—principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.

A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions but can only establish that it does not function properly under specific conditions. 

The scope of software testing often includes examination of the code itself and execution of that code in various environments and conditions, as well as assessment of its qualities: does it do what it is supposed to do, and does it do what it needs to do?

In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.


Testing includes a set of activities conducted with the intent of finding errors in software so that they can be corrected before the product is released to end users.

Software testing is an activity that checks whether the actual results match the expected results and helps ensure that the software system is as free of defects as possible.

Importance of Testing: 
  • A China Airlines Airbus A300 crashed due to a software bug on April 26, 1994, killing 264 people.
  • Software bugs can potentially cause monetary and human loss; history is full of such examples.
  • In 1985, Canada's Therac-25 radiation therapy machine malfunctioned due to a software bug and delivered lethal radiation doses to patients, leaving 3 people dead and 3 others critically injured.
  • In April 1999, a software bug caused the failure of a $1.2 billion military satellite launch, the costliest accident in history.
  • In May 1996, a software bug caused the bank accounts of 823 customers of a major U.S. bank to be credited with 920 million US dollars.
  • As these examples show, testing is important because software bugs can be expensive or even dangerous.



Software Quality


Quality:
  • Quality can be defined as meeting customer’s requirements.
  • Fitness for purpose
  • Conformance to requirements.
  • Fit for use

Quality covers various aspects such as:
  • Free from bugs.
  • Within budget.
  • Within schedule.

Quality can be achieved consistently by using,
  1. Quality Assurance.
  2. Quality Control.

Quality Assurance:
   QA is a planned and systematic activity that provides confidence that the software product will conform to the specified requirements and meet user needs.
  1. It's a proactive approach.
  2. It involves defining and implementing processes and measurements.
  3. It includes audits of the quality management system against a standard.
  4. The CMMI model is widely used to implement QA.

Quality Control:
  •    Quality control (QC) is a procedure or set of procedures intended to ensure that a manufactured product or performed service adheres to a defined set of quality criteria or meets the requirements of the client or customer.
  •    It is the actual operational technique used to ensure that the quality of the product meets the requirements of the client or customer.


Monday, 20 May 2013

Software Testing


  • Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.
  • (IEEE) - Software testing is the process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software item.
Testing Techniques
                               

Testing Methods:
  1. Black box Testing
  2. White box Testing
Testing Types:
  • A test type is focused on a particular test objective, which could be: the testing of a function to be performed by the component or system; a non-functional quality characteristic, such as reliability or usability; the structure or architecture of the component or system; or testing related to changes.
  • Functional Testing - Testing of functional characteristics.
  • Non-Functional Testing - Testing of non-functional characteristics of the software product (reliability, scalability, etc.); see the sketch after this list.
  • Structural Testing - Testing of software structure/architecture.
  • Confirmation And Regression Testing - Testing related to changes in the software product.
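
As a rough illustration of the difference between a functional and a non-functional check, the Python sketch below asserts both a correct result and a response-time budget for a hypothetical search() function (the function, data, and budget are invented for this example):

    import time

    def search(catalog, term):
        # Hypothetical function under test: returns items containing the term.
        return [item for item in catalog if term in item]

    catalog = ["red shirt", "blue shirt", "red hat"]

    # Functional check: does the feature produce the expected result?
    assert search(catalog, "red") == ["red shirt", "red hat"]

    # Non-functional check: does it stay within an (assumed) response-time budget?
    start = time.perf_counter()
    search(catalog, "red")
    elapsed = time.perf_counter() - start
    assert elapsed < 0.1, f"search took {elapsed:.4f}s, budget is 0.1s"

    print("functional and non-functional checks passed")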


Testing Levels:

1. Unit / Component Testing
Unit Testing is a level of the software testing process where individual units/components of a software/system are tested. The purpose is to validate that each unit of the software performs as designed.
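
For example, a minimal unit test using Python's built-in unittest module; the add() function here is an invented, deliberately simple unit under test:

    import unittest

    def add(a, b):
        # The unit under test: a deliberately simple example function.
        return a + b

    class TestAdd(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()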

2. Integration Testing
Integration Testing is a level of the software testing process where individual units are combined and tested as a group. The purpose of this level of testing is to expose faults in the interaction between integrated units.
Common approaches to Integration Testing include the Bottom-Up approach, the Top-Down approach, and the Big Bang approach.
       I. Bottom-Up Testing:
            Bottom-Up Testing is an approach to integration testing where the lowest-level components are tested first and then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy has been tested.
            In this approach testing proceeds from the sub-modules to the main module; if the main module is not yet developed, a temporary program called a DRIVER is used to simulate it.
       II. Top-Down Testing:
            Top-Down Testing is an approach to integration testing where the top-level modules are tested first and the branches of each module are tested step by step until the end of the related module.
            In this approach testing proceeds from the main module to the sub-modules; if a sub-module is not yet developed, a temporary program called a STUB is used to simulate it (see the sketch after this list).
       III. Big Bang:
            In this approach, all or most of the developed modules are coupled together to form a complete software system, or a major part of it, which is then used for integration testing. The Big Bang method can save time by avoiding drivers and stubs, but it makes it harder to isolate the cause of a failure.
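
As a minimal sketch of how a STUB might stand in for an undeveloped sub-module during top-down integration testing (the module names and values here are invented for illustration):

    # Top-down integration sketch: the main module is ready, but the real
    # tax-calculation sub-module is not, so a STUB stands in for it.
    def tax_stub(amount):
        # Stub: returns a fixed, predictable value instead of real logic.
        return 10.0

    def checkout_total(amount, tax_calculator):
        # Main module under test; the sub-module is passed in so it can be stubbed.
        return amount + tax_calculator(amount)

    # Integration test of the main module against the stub.
    assert checkout_total(100.0, tax_stub) == 110.0
    print("checkout_total integrates correctly with the stubbed sub-module")

In bottom-up testing the roles are reversed: a DRIVER is written to call the finished sub-modules in place of the missing main module.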

3. System Testing
System Testing is a level of the software testing process where a complete, integrated system/software is tested. The purpose of this test is to evaluate the system’s compliance with the specified requirements.

4. Acceptance testing
Acceptance Testing is a level of the software testing process where a system is tested for acceptability. The purpose of this test is to evaluate the system’s compliance with the business requirements and assess whether it is acceptable for delivery.


Testing Process/Test Cycle:
1) Requirement Analysis
During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (client, business analyst, technical leads, system architects, etc.) to understand the requirements in detail. Requirements can be either functional (defining what the software must do) or non-functional (defining system qualities such as performance, security, and availability).

2) Test Planning

3) Test Design

4) Test Environment Setup

5) Test Execution

6) Test Reporting

Test Plan:

      A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.


Test Procedure:
      A document providing detailed instructions for the execution of one or more test cases.

Test Suite:
      A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may, for example, be several Test Suites for a particular product. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
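
For instance, with Python's unittest module, related test cases can be grouped into a suite and run together; the test classes below are placeholders used only to show the grouping:

    import unittest

    class LoginTests(unittest.TestCase):
        def test_login_placeholder(self):
            self.assertTrue(True)          # placeholder check

    class CheckoutTests(unittest.TestCase):
        def test_checkout_placeholder(self):
            self.assertEqual(2 + 2, 4)     # placeholder check

    def build_suite():
        loader = unittest.TestLoader()
        suite = unittest.TestSuite()
        suite.addTests(loader.loadTestsFromTestCase(LoginTests))
        suite.addTests(loader.loadTestsFromTestCase(CheckoutTests))
        return suite

    if __name__ == "__main__":
        unittest.TextTestRunner(verbosity=2).run(build_suite())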

Test Bed:
      An execution environment configured for testing. It may consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.


Verification & Validation:

Verification: Are we building the product right?
      Verification is concerned with evaluating a work product, component or system to determine whether it meets the requirements set. In fact, verification focuses on the question 'Is the deliverable built according to the specification?’.

Validation: Are we building the right product?
      Validation is concerned with evaluating a work product, component or system to determine whether it meets the user needs and requirements. Validation focuses on the question 'Is the deliverable fit for purpose?'.

Defect Terminologies:
1. Fault:
   A fault is a condition that causes the system to fail to perform its required function.

2. Failure:   
   A failure is the inability of the software to perform a required function according to its specification.

3. Bug:
   It is defined as a coding error that causes an unexpected defect, fault, flaw, or imperfection in a computer program.

4. Error:
   A human action that produces an incorrect result (see the sketch at the end of this section).

5. Defect:
   A condition in a software product which does not meet a software requirement (as stated in the requirement specifications) or end-user expectations. In other words, a defect is an error in coding or logic that causes a program to malfunction or to produce incorrect/unexpected results.
   A flaw in a component or system that can cause the component or system to fail to perform its required function.

6. Severity:
   The degree of impact that a defect has on the development or operation of a component or system. (ISTQB)

7. Priority:
   It indicates the importance or urgency of fixing a defect. Though priority may be initially set by the Software Tester, it is usually finalized by the Project/Product Manager.
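
To make the error/fault/failure distinction concrete, here is a small illustrative sketch; the average() function is invented for this example:

    def average(numbers):
        # Error: the programmer mistakenly divides by a hard-coded 2.
        # Fault/defect: that mistake now lives in the code as an incorrect statement.
        return sum(numbers) / 2

    # Failure: when the faulty code is executed, the observed result deviates
    # from the required result (expected 3.0, actual 6.0).
    result = average([1, 2, 3, 6])
    print(f"expected 3.0, got {result}")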