
Open-box testing

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Prahlad balaji (talk | contribs) at 03:27, 4 September 2020 (Added {{Uncategorized}} tag (using Twinkle✧)). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
Illustration of Closed Box Testing discipline

Closed box testing (also known as black box testing, behavioral testing and specification-based testing[1]) describes a software validation and verification discipline that is typically performed by software testing professionals who do not directly access or contribute to the source code that comprises a project. The various types of testing activities within this discipline focus mainly on the objective analysis of a software application's inputs and outputs against the requirements and specifications of what the project is expected to do.

Since closed box testing practitioners are well aware of what the software is supposed to do but are not focused on how it is produced, these individuals are considered to be interacting with "a closed box" system. Software testing professionals working within this discipline know that a particular input has an expected output but are not fully aware of how the software produces that output.[2]

Because the specified, expected functionality of the software is tested without knowledge of internal code structure or implementation details, various cognitive biases, particularly confirmation bias, are minimized and objective, third-party analysis is enabled.

The term closed box testing is an attempt to provide a more practical description and illustration of this software testing discipline, as well as an intentional departure from the racial connotations of the long-used "black box" and "white box" terminology.

Disciplines within closed box testing

The following commonly referenced types of functional and non-functional testing activities often comprise the closed box testing disciplines within organizations that follow modern software engineering practices:

Types of functional testing

Functional testing deals with the functional requirements or specifications of a program. Different actions or functions of the system are tested by exercising the application's inputs and comparing the actual results with the expected outputs. An example of functional testing would be verifying that an application's login functionality works correctly.
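The login example above can be sketched as a minimal closed box functional test. The `login()` function here is a hypothetical stand-in for the system under test; the tester only exercises its inputs and compares its outputs against the specification, with no knowledge of the implementation.

```python
# Hypothetical system under test: in real closed box testing the tester
# would call this through the application's interface, not read its code.
def login(username, password):
    return username == "alice" and password == "s3cret"

def test_login():
    # Valid credentials are specified to succeed.
    assert login("alice", "s3cret") is True
    # Any invalid combination is specified to fail.
    assert login("alice", "wrong") is False
    assert login("", "") is False

test_login()
```

The test asserts only on observable behavior (input in, result out), which is the defining characteristic of the discipline.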

Among the types of testing that occur within this discipline are:

Types of non-functional testing

Non-functional testing checks non-functional aspects of a program, including the software's performance, usability, reliability, etc. These disciplines validate and verify the readiness of a system against non-functional parameters that are not addressed by functional testing. An example of non-functional testing would be verifying that all functionality within an application's login process consistently completes within two seconds.

Among the types of testing that occur within this discipline are:

Common closed box testing techniques

Equivalence Partitioning

In this technique, also known as Equivalence Class Partitioning (ECP), input values to the system are divided into different partitions or groups based on the similarity of their expected outcomes. Test cases can then be derived so that, instead of using each and every possible input value, practitioners use any one value from a partition/group to verify the expected outcome. In this way, testers can maintain appropriate test coverage while reducing time spent and rework costs.
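A minimal sketch of the idea, using a hypothetical `ticket_price()` function whose valid ages fall into three partitions (child, adult, senior). One representative value per partition stands in for every value in that partition.

```python
# Hypothetical function under test with three equivalence partitions.
def ticket_price(age):
    if not 0 <= age <= 120:
        raise ValueError("age out of range")
    if age < 13:
        return 5   # child partition: ages 0-12
    if age < 65:
        return 10  # adult partition: ages 13-64
    return 7       # senior partition: ages 65-120

# One representative per partition instead of all 121 valid ages:
assert ticket_price(8) == 5    # any child age behaves the same
assert ticket_price(30) == 10  # any adult age behaves the same
assert ticket_price(70) == 7   # any senior age behaves the same
```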

Boundary Value Analysis

Tests within the boundary value analysis technique are designed to include representatives of boundary values in a range. Both valid and invalid inputs are tested to verify that the software is working as expected.

Using this technique to test an input where values from 1 to 100 are expected to work correctly, values on the minimum and maximum edges of an equivalence partition would be tested. In this example testers would use the values 0, 1, 2, 99, 100 and 101 (1−1, 1 and 1+1 at the minimum edge, as well as 100−1, 100 and 100+1 at the maximum edge) instead of using all the values from 1 to 100.
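The 1-to-100 example can be sketched directly, with a hypothetical `accepts()` predicate probed only at the six boundary values.

```python
# Hypothetical validator under test: accepts values from 1 to 100.
def accepts(value):
    return 1 <= value <= 100

# Boundary values (min-1, min, min+1, max-1, max, max+1) and expectations:
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected, f"unexpected result for {value}"
```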

Decision Table Testing

The decision table technique, also known as the cause-effect table, is used for testing system behavior under different input combinations. In this systematic approach, the different input combinations and their corresponding system behavior are captured in the form of a table.

Using a tabular form helps testers deal with different combinations of inputs and their associated expected outputs. It is considered especially helpful in test design as it focuses on logical diagramming and supports considering the various effects of different input combinations.
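A decision table for a hypothetical login form can be expressed as data and driven directly, one row per input combination:

```python
# Decision table: each row pairs an input combination with the expected
# behavior of a hypothetical login form.
decision_table = [
    # (valid_username, valid_password, expected_outcome)
    (True,  True,  "grant access"),
    (True,  False, "show error"),
    (False, True,  "show error"),
    (False, False, "show error"),
]

# Hypothetical behavior under test.
def login_outcome(valid_username, valid_password):
    return "grant access" if valid_username and valid_password else "show error"

for valid_user, valid_pw, expected in decision_table:
    assert login_outcome(valid_user, valid_pw) == expected
```

Because every combination appears as a row, gaps in the specification (untested combinations) become visible at a glance.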

State Transition Testing

This technique is used to verify the different states of the system under test, analyzing how changes in input conditions cause state changes or output changes in the running software. It enables testers to analyze the behavior of an application under different input conditions. Testers can provide positive and negative input values and record the overall system behavior.
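As a sketch, consider a hypothetical account that locks after three failed login attempts. The test drives a sequence of inputs and asserts on the state after each transition:

```python
# Hypothetical state machine: active -> logged_in on success,
# active -> locked after three consecutive failures.
class Account:
    def __init__(self):
        self.state = "active"
        self.failures = 0

    def attempt_login(self, correct):
        if self.state == "locked":
            return self.state  # locked is a terminal state here
        if correct:
            self.failures = 0
            self.state = "logged_in"
        else:
            self.failures += 1
            if self.failures >= 3:
                self.state = "locked"
        return self.state

acct = Account()
assert acct.attempt_login(False) == "active"  # 1st failure
assert acct.attempt_login(False) == "active"  # 2nd failure
assert acct.attempt_login(False) == "locked"  # 3rd failure locks the account
assert acct.attempt_login(True) == "locked"   # correct password no longer helps
```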

Error Guessing

Using this technique, testers use their experience and expertise with an application's behavior and functionality to guess which error-prone areas might be impacted by code changes made to the project. Many defects can be found using error guessing in the areas where developers usually make mistakes.

Common mistakes that application engineers forget to handle from time to time include:

  • Division by zero
  • Not properly handling null values within text fields
  • User-initiated file upload processes where no attachment is provided
  • File uploads smaller or larger than the size limits supported by the software
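The common mistakes listed above translate directly into error-guessing checks. The functions below are hypothetical stand-ins showing the kinds of defensive behavior a tester would probe for:

```python
# Guess 1: division by zero.
def safe_divide(a, b):
    if b == 0:
        return None  # guarded rather than raising ZeroDivisionError
    return a / b

# Guess 2: null values in text fields.
def normalize_name(name):
    if name is None:
        return ""
    return name.strip()

# Guesses 3 and 4: missing or out-of-limit file uploads.
MAX_UPLOAD_BYTES = 1024  # hypothetical limit

def validate_upload(data):
    if not data:
        return "no attachment provided"
    if len(data) > MAX_UPLOAD_BYTES:
        return "file too large"
    return "ok"

assert safe_divide(10, 0) is None
assert normalize_name(None) == ""
assert validate_upload(b"") == "no attachment provided"
assert validate_upload(b"x" * 2000) == "file too large"
assert validate_upload(b"hello") == "ok"
```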

Graph-Based Testing

Software testing professionals using this technique, also known as state based testing, first build a graph model for the program under test and then try to cover certain elements in the graph model with valuable test cases. From this object graph, each object relationship is identified and test cases are written accordingly to discover potential errors.

Use case testing

The use case software testing technique is based on the identification of test cases that cover the entire system, from start to end, on a transaction-by-transaction basis. Use cases describe the interactions between users and a software application, so the technique is considered ‘user-oriented’ rather than ‘system-oriented’. Use case testing helps to identify gaps in a software application that might not be found by testing individual software components.

Comprehensive use of this discipline involves using both "happy path"/"sunny day" positive test cases as well as "unhappy path"/"rainy day" negative test cases that ensure all aspects of the software are working as intended.

Happy path / sunny day use cases

  • These are the primary cases that are most likely to occur when everything goes well within the project. These positive use cases are typically given a higher priority than the other cases.

Unhappy path / rainy day use cases

  • These are often defined as the various edge cases that exist within the operation of the software. They are typically given a lower priority than the positive test cases.

User story testing

The user story testing discipline is based on knowing what the product's users will experience in the real world. Requirements for functionality within the software are written down as "user stories" that are typically one or two lines long. A user story is intended to be the simplest statement possible about a single function or feature to be performed within the running application. A simple example of a user story is:

As a (user role/customer), I want to (goal to be accomplished) so that I can (reason for the goal).

Benefits of the discipline

Practitioners of closed box testing disciplines are typically removed from directly contributing to, reading, or having an in-depth understanding of the source code that comprises the project they are testing. Within this regimen, testers have the freedom to assess the reliability of a project independently of any knowledge of the project's specific programming languages and source code.

Third-party, unprejudiced validation of the work of others is a long-established best practice throughout many industries.

Biases within the disciplines

There are many well-documented articles on the various cognitive biases[3][4][5] that impact the various software testing disciplines, especially as internet-enabled businesses move at speeds that did not exist prior to the commercialization of the global computer network.

The entirety of the Software Development Life Cycle (SDLC) involves human beings, so it is inherently impossible to certify any non-trivial software project as 100% bug-free. Test automation is a tool that can provide cost efficiencies and provide confidence that previously developed and tested software continues to perform correctly after an addition or change to the project.

Using code to methodically and highly repetitively test code can be a highly effective way to exclude personal opinions and judgments, whether perceived or previously observed, about who developed the functionality to be tested, which part of the product they believe contains the most challenges, or other factors that can impact objective evaluations of a project. What to automate, when to automate, or even whether one really needs automation are crucial decisions which the testing (or development) team must make.[6]

Importance of discipline

When malfunctions are discovered in the later stages of a software project, it is usually significantly more expensive to fix them. Advocates of these practices understand the significant cost savings over time associated with identifying and locating as many issues as possible as early as possible.

Closed box testing disciplines look into the real-world, runtime use of a product once it has been installed and made available for software testing professionals to verify. When combined with the shift-left practices that enable open box testing, and with the code quality disciplines that use static analysis and pre-installation verification techniques, teams are able to share overall responsibility for product quality and use the underlying project to empower collaboration.

Exploratory testing

This discipline is widely used within progressive agile software development methodologies. Its practitioners are typically passionate about the aspects of discovery, investigation, learning, personal freedom and responsibility that comprise this thinking-based approach. Whereas the execution of previously scripted tests is a more repetitive, non-thinking activity of comparing actual results with expected results, exploratory testing recognizes that highly repetitive tasks and automation have distinct limits.

Among the commonly realized benefits of exploratory testing, also known as session based testing,[8] are that the investigative process helps find more bugs than standard testing, that bugs normally ignored by other testing techniques are often uncovered, that the imaginations of testers are often expanded, and that it can overcome the limitations of scripted testing.[7]

Agile software practices

Within projects enabled by Agile software development, teams are asked to reduce the length of time of software delivery while continuously improving the quality of each release of their software. At the same time, there is typically increased pressure to reduce overall testing costs.

Agile testing promotes that all members of a cross-functional team, with special expertise contributed by testers, work at a sustainable pace and deliver the desired business value at frequent intervals. Investments in these disciplines provide for the long-term reliability and maintainability of the overall software project by emphasizing the "whole-team" approach to "baking quality in".

Illustration of Open Box and Closed Box Testing disciplines

References

  1. ^ Jerry Gao; H.-S. J. Tsao; Ye Wu (2003). Testing and Quality Assurance for Component-based Software. Artech House. pp. 170–. ISBN 978-1-58053-735-3.
  2. ^ Patton, Ron (2005). Software Testing (2nd ed.). Indianapolis: Sams Publishing. ISBN 978-0672327988.
  3. ^ "Cognitive Bias In Software Testing: Why Do Testers Miss Bugs?". Retrieved 1 September 2020.
  4. ^ Salman, Iflaah; Turhan, Burak; Vegas, Sira. "A controlled experiment on time pressure and confirmation bias in functional software testing". Retrieved 18 December 2018.
  5. ^ Ben Salem, Malek. "Cognitive biases in software testing and quality assurance". Retrieved 26 June 2019.
  6. ^ Brian Marick. "When Should a Test Be Automated?". StickyMinds.com. Retrieved 2009-08-20.
  7. ^ "What is Exploratory Testing? Techniques with Examples".
  8. ^ Bach, Jonathan (November 2000). "Session-Based Test Management" (PDF).