
User:J-at-ywalters-dot-net/sandbox



Closed box testing

Illustration of Closed Box Testing discipline

Closed box testing (also known as black box testing or behavioral testing) describes a software validation and verification discipline that is typically performed by software testing professionals who do not directly access or contribute to the source code that comprises a project. The testing activities within this discipline focus on the objective analysis of an application's inputs and outputs against the requirements and specifications of what the project is expected to do.

Because closed box testing practitioners are well aware of what the software is supposed to do but not of how it is produced, these individuals are considered to be interacting with a "closed box" system. Software testing professionals practicing this discipline know that a particular input returns a certain, expected output but are not fully aware of how the software produces that output.[1]

Because the specified, expected functionality of the software is tested without knowledge of its internal code structure or implementation details, various cognitive biases, particularly confirmation bias, are minimized and objective, third-party analysis is enabled.

This term is an attempt to provide a more practical description and illustration of this software testing discipline, as well as an intentional departure from the racial connotations of the long-used "black box" and "white box" terminology.

Types of closed box testing

The following commonly referenced types of testing activities often comprise closed box testing disciplines within organizations that follow modern software engineering practices:

Functional Testing

This type deals with the functional requirements or specifications of an application. Different actions or functions of the system are tested by providing input and comparing the actual output with the expected output.
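
The following minimal sketch illustrates this input-to-output comparison. The calculate_shipping_cost function and the pricing rule it checks are hypothetical, included only so the example is self-contained and runnable.

    # A minimal closed box style functional check (hypothetical example).
    # The function is exercised purely through its inputs and outputs,
    # with no reference to how it is implemented internally.

    def calculate_shipping_cost(weight_kg):
        """Stand-in implementation so the sketch is runnable."""
        return 5.00 if weight_kg <= 1 else 5.00 + (weight_kg - 1) * 2.50

    def test_shipping_cost_matches_specification():
        # Assumed requirement: 1 kg costs 5.00, each extra kg adds 2.50.
        assert calculate_shipping_cost(1) == 5.00
        assert calculate_shipping_cost(3) == 10.00

    test_shipping_cost_matches_specification()
    print("functional checks passed")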

Smoke Testing

Sanity Testing

Integration Testing

System Testing

Regression Testing

User Acceptance Testing

Non-Functional Testing

Usability Testing

Load Testing

Performance Testing

Compatibility Testing

Stress Testing

Scalability Testing

Closed box testing techniques

Equivalence Partitioning

In this technique, also known as Equivalence Class Partitioning (ECP), input values to the system are divided into partitions or groups based on the similarity of their expected outcomes. Test cases can then be derived from these partitions: instead of using every possible input value, practitioners can use any one value from a partition to verify the expected outcome. In this way, testers can maintain appropriate test coverage while reducing time spent and rework costs.
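
A minimal sketch of the idea follows. The validate_age function and the 18 to 65 acceptance range are assumptions made purely for illustration, with one representative value drawn from each partition.

    # Equivalence partitioning sketch (hypothetical function and ranges).

    def validate_age(age):
        """Stand-in implementation so the sketch is runnable."""
        return 18 <= age <= 65

    # One representative value per partition stands in for the whole group.
    representatives = {
        "below valid range": (10, False),   # any value under 18
        "within valid range": (40, True),   # any value from 18 to 65
        "above valid range": (80, False),   # any value over 65
    }

    for partition, (value, expected) in representatives.items():
        assert validate_age(value) == expected, partition
    print("each partition behaves as expected")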

Boundary Value Analysis

In this technique, testers focus on the values at boundaries, since many applications have been found to have a high number of issues at those boundaries.

A boundary is a value near the limit where the behavior of the system changes. In boundary value analysis, both valid and invalid inputs are tested to uncover these issues.

Example: To test a field where values from 1 to 100 are expected to work correctly, boundary value analysis would have testers use 1-1, 1, 1+1, 100-1, 100, and 100+1 instead of using all the values from 1 to 100; that is, the values 0, 1, 2, 99, 100, and 101 would be used.
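
A minimal sketch of this example follows; the accepts_value function is a hypothetical stand-in for the field being tested.

    # Boundary value analysis sketch for a field that should accept 1-100.

    def accepts_value(value):
        """Stand-in implementation so the sketch is runnable."""
        return 1 <= value <= 100

    # Values just below, at, and just above each boundary.
    boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

    for value, expected in boundary_cases.items():
        assert accepts_value(value) == expected, f"unexpected result for {value}"
    print("boundary checks passed")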

Decision Table Testing

State Transition Testing

Error Guessing

Graph-Based Testing Methods

Comparison Testing

Illustration of Closed Box and Open Box Testing disciplines

See also

Open box testing

Illustration of Open Box Testing discipline

Open box testing (also known as white box testing, clear box testing, glass box testing, transparent box testing, and structural testing) describes a software validation and verification discipline that is typically performed by the application engineers who have directly contributed to a project's source code. Individuals who have direct access to source code are considered to be able to "see within the open box", and with that ability comes a significant understanding of the computer programming that comprises the project.

This term is an attempt to provide a more practical description and illustration of this software testing discipline, as well as an intentional departure from the racial connotations of the long-used "white box" and "black box" terminology.

Types of open box testing

The following commonly referenced types of testing activities often comprise open box testing disciplines within organizations that follow modern software engineering practices:

  • Unit testing: Verifies that the smallest piece of code that can be logically isolated is consistently working as designed and expected. Usually unit testing verifies that a single, cohesive function accepts the planned input and consistently produces the expected output (a minimal sketch appears after this list).
  • Component testing: Also known as program or module testing, this discipline involves verifying that larger, individual pieces of a program that make use of multiple, smaller units of work are working correctly without running the entirety of the overall project. This practice is performed after the smaller and often considerably faster unit tests.
  • Integration testing: Multiple, individual software modules are combined and tested as a group. This type of testing occurs after unit testing and before the application is fully built and installed for other types of testing activities. Integration testing takes as its input modules that have been unit and component tested, groups them into larger aggregates, and performs more comprehensive and extensive system verifications.
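
The following is a minimal unit testing sketch using Python's standard unittest module; the word_count function and its behavior are hypothetical, chosen only to show a single, logically isolated unit being verified.

    import unittest

    def word_count(text):
        """Return the number of whitespace-separated words in text."""
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        def test_counts_a_simple_sentence(self):
            self.assertEqual(word_count("closed box testing"), 3)

        def test_empty_string_has_no_words(self):
            self.assertEqual(word_count(""), 0)

    if __name__ == "__main__":
        unittest.main()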

Biases within discipline

The ability to directly read and understand the source code enables certain activities but also creates some bias challenges.

Practitioners of open box testing disciplines are subject to various cognitive biases, particularly confirmation bias, as it is impractical for people to objectively evaluate the quality of their own work. Third-party, unprejudiced validation of the work of others is a long-established best practice throughout many industries.

We suck at testing our own code. We suck so badly at it that it has led to entire professions like Quality Assurance analysts and test engineers. We're not necessarily bad at coding, but we're notoriously bad at finding our own bugs.
– Confirmation Bias: How your brain wants to wreck your code [2]

Importance of discipline

When malfunctions are discovered in the later stages of a software project, they are usually significantly more expensive to fix. Advocates of these practices, understanding the significant cost savings over time, encourage developers to identify and locate as many issues as they can at an early stage of development and then to automate the process of validating every subsequent code change. Articles like "Unit Testing: Time Consuming but Product Saving"[3] illustrate the importance and potential savings of early error identification.

Open box testing disciplines, when done well, can also make it significantly easier for developers to deal with a relatively unfamiliar piece of code. Code written by other programmers becomes more manageable because highly effective open box tests stop inadvertently introduced problems from propagating through the development process.

By writing open box tests, code creators can communicate the intent of the functionality they have created. By reading previously created tests, others get to see how the author expected the code to be used, and possibly more importantly, how it was intended not to be used.
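
As a hedged sketch of this idea, the hypothetical parse_port function below is documented by two tests: one showing its intended use and one showing that a misuse is expected to be rejected.

    import unittest

    def parse_port(value):
        """Stand-in implementation so the sketch is runnable."""
        port = int(value)
        if not 1 <= port <= 65535:
            raise ValueError(f"port out of range: {port}")
        return port

    class ParsePortTest(unittest.TestCase):
        def test_intended_use(self):
            # Shows how the author expects the function to be called.
            self.assertEqual(parse_port("8080"), 8080)

        def test_out_of_range_input_is_rejected(self):
            # Shows how the function is intended not to be used.
            with self.assertRaises(ValueError):
                parse_port("70000")

    if __name__ == "__main__":
        unittest.main()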

Code that is easy to test can also be easier to understand. Succinct tests can and often do lead to succinct code. When done well, open box tests make the software development process far more predictable and repeatable over time, but they are not the be-all and end-all of any comprehensive approach to software quality.

Shift-left testing

The open box testing disciplines are among the central tenets of the shift-left testing approach to software development. As part of this modern "test early and often" approach, developer responsibilities are incorporated into the overall testing cycle earlier than ever before. Focusing on finding and remediating software defects as early as possible within the Software Development Life Cycle (SDLC) has profound benefits for organizations that support it, because quality is clearly acknowledged as a shared responsibility and testing is prevalent throughout the process. Significant cost savings over time are often the return on investment for teams that are highly successful at these practices, but it is important not to try to "boil the ocean" by pursuing unreasonable levels of test coverage.

Quality assurance practices involve identifying bugs, fixing them, and ensuring that previously working functionality has not been inadvertently changed, as significant issues can dramatically damage a company's reputation. For example, many car companies have borne reputational and financial damage because of recalled vehicles whose parts were not properly tested. The use of open box testing practices within the shift-left testing disciplines is a proactive investment in a product's quality.

Illustration of Closed Box Testing discipline

Open box testing activities are typically performed prior to an official, fully integrated installation of a piece of software and before an official, objective testing cycle begins. Post-installation, a wide variety of closed box testing disciplines are performed by different types of software testing professionals.

Agile software practices

Within projects that practice Agile software development, teams are asked to shorten software delivery times while continuously improving the quality of each release. At the same time, there is typically increased pressure to reduce overall testing costs.

Highly capable application engineers who demonstrate significant open box testing skills typically produce well designed units of work that are well covered with meaningful, open box type tests. Investments in these types of tests provide for the long-term reliability, maintainability, and comprehensive documentation of the expected functionality within the project by ensuring that the units of work continue to function correctly over time.

Numerous surveys and studies over the past decade illustrate that software engineers frequently spend large portions of their time working with, maintaining, and improving existing code.[4] Software maintenance related code changes are particularly risky when project contributors do not have an appropriate level of open box type tests in place to demonstrate that functionality that worked correctly before a change continues to work correctly after it.

The ability to review the specifications, designs, and coding implementations that comprise the internal logic and structure of the underlying application enables teams to create sustainable projects that support the addition of, or transition to, future project contributors.

All software testing disciplines involve identifying flaws and errors in the application code that must be fixed. The processes that are undertaken provide confidence that the functionality of the software has been analyzed and verified to be correct.

Continuous integration

In software engineering, continuous integration (CI) means the repeated application of quality control processes to every small, discrete change or addition. A fundamental tenet of this practice is that the project's automated tests are first run locally and then run on a remote system to provide fast feedback to the project's contributors and to prevent the advancement of code that is known to be functioning incorrectly.
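
As a minimal sketch of such a gate, the script below runs a project's automated test suite and exits with the suite's status, so a remote CI system can block changes whose tests fail. The use of pytest as the test runner is an assumption for illustration, not part of any particular CI product.

    import subprocess
    import sys

    # Run the project's automated tests; a non-zero exit status signals the
    # CI system to stop the change from advancing (hypothetical setup).
    result = subprocess.run([sys.executable, "-m", "pytest", "--quiet"])
    sys.exit(result.returncode)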

Open box tests are the straightforward first line of defense that ensures that code changes do not introduce unintended consequences within the project. The intertwined nature of open box testing disciplines and the many derivatives of continuous integration practices has led to a wide variety of articles about how essential open box type tests are. Articles like "Continuous Integration is Absurd without Unit Testing"[5] and "Unit Tests, How to Write Testable Code and Why it Matters"[6] illustrate how these two disciplines are inextricably linked.

Illustration of Open Box and Closed Box Testing disciplines

See also

Closed box testing

References

  1. ^ Patton, Ron (2005). Software Testing (2nd ed.). Indianapolis: Sams Publishing. ISBN 978-0672327988.
  2. ^ Eland, Matt. "Confirmation Bias: How your brain wants to wreck your code". Retrieved 12 September 2019.
  3. ^ Riggins, Jennifer. "Unit Testing: Time Consuming but Product Saving". Retrieved 22 December 2017.
  4. ^ Grams, Chris. "How Much Time Do Developers Spend Actually Writing Code?". Retrieved 15 October 2019.
  5. ^ Mackay, Adam. "Continuous Integration is Absurd without Unit Testing". Retrieved 16 July 2019.
  6. ^ Kolodiy, Sergey. "Unit Tests, How to Write Testable Code and Why it Matters". Retrieved 14 January 2020.