Talk:Software testing/Archive 1
- How about (external) links to Ward Cunningham's wiki? His wiki has a lot of software development stuff: patterns, methodology, etc.
- I added a bit about test-driven code, which indirectly refers to the bulk of the discussion on Ward's wiki. JimD 04:48, 2005 Jan 9 (UTC)
I also added a link to Ivars Peterson and a reference to his book. I feel a little awkward doing so as I'm not trying to endorse it in particular; it just happens to be the resource that I thought appropriate to that juncture.
I'm also too tired to go back through and clean up my edits more and do additional research at the moment; but I feel like the work I did was better submitted, even in rough form, than discarded. So we'll see what the rest of the wikipedian community makes of it. :) Edit Boldly (In particular I know the Latin is awkward. It sticks out like a sore thumb. Fix it. Please) JimD 04:48, 2005 Jan 9 (UTC)
Removed paragraph on being able to prove things in a court of law. Almost all software companies disclaim liability for buggy software, and except for a few life-critical pieces of software, the prospect of being sued isn't a strong motivating factor. Also, except for life-critical software, most software testing does not seek to eliminate all defects, since doing so is generally too expensive to be worthwhile.
- However, I did engage in an assignment once, the objective of which was to test whether the software might leave the supplier exposed to an anti-trust lawsuit. The software processed data feeds containing financial data, and the licence required that this data be capable of being processed by competitors' systems. Without using any of these systems, we had to verify that the software and documentation didn't breach this licence. Matt Stan 01:04, 10 Jan 2005 (UTC)
Software testing, like software engineering and methodologies, is largely defined by common practices and fashions.
Despite companies disclaiming liability for buggy software, many of those disclaimers have not been upheld in court. Cem Kaner has suggested this a few times. --Walter Görlitz 20:49, 12 Nov 2004 (UTC)
The Gamma Testing discussion is off-beat. This might be a better description: http://www.smartcomputing.com/editorial/dictionary/detail.asp?guid=&searchtype=1&DicID=10215&RefType=Dictionary
"Some cynics say..." -- Really, does someone have a reference for this? If not, I suggest deleting the entire discussion of gamma testing.
Controversy Section, etc.
It's about time we recognized that prominent people in the industry have very different views of testing. I confess, I was tempted to throw this entire software testing article out and rewrite it without the gratuitous references to weak testing folklore such as black box and white box testing-- an idea that has almost no content, conveys no skill or insight, and merely takes up valuable room that could be used to discuss the bona fide skills and techniques of software testing. (If that sounds arrogant to you then my point is proven: there is a lot of disagreement...)
But, in the spirit of wikidom, rather than tear the article up, I added the section on controversy, and I just added the second paragraph which introduces the notion of investigation and questioning as central to testing.
I intend to come back periodically and morph this entry into something I think is useful, unless there's a lot of push back, in which case we should incorporate the controversy into the article, I believe. Or establish new articles where the various schools of thought can play alone.
-- JamesBach
- I write software for a career, and have been in the business for 40 years. Everyone who writes software has a responsibility to provide assurance that the software will work correctly, but we usually work for managers who have some kind of budget for how much work is justifiable. The most common rule of thumb is "good enough." The methods of testing that I have used have evolved over time, based on what I have learned in my profession and from experience. Some major considerations:
- Ideally we want a model of the production data that can test all possibilities, so that if anything goes wrong because the software is not yet perfected, what is damaged is only the model, the test data that was copied from the real production data. 99% of the time, having a test database, or a model that is representative of the real data, is an expense that the managers do not support.
- Before testing software that is to update files, it is smart to make a backup of the files that are to be updated, so that if anything goes wrong, we can recover from the backup.
- While we expect and hope for certain changes to the data, it is always possible that we will get unexpected updates where they are not wanted, so we need tools that can compare the before and after data to identify exactly what changed as a result of the tests, as sketched just below.
User:AlMac|(talk) 08:25, 31 January 2006 (UTC)
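A minimal sketch of the kind of before-and-after comparison described in the last point above, assuming the data of interest can be exported to flat key/value records; the file names and field names here are hypothetical:

 import csv

 def load_snapshot(path, key_field):
     """Load a CSV export into a dict keyed by the record's key field."""
     with open(path, newline="") as f:
         return {row[key_field]: row for row in csv.DictReader(f)}

 def diff_snapshots(before, after):
     """Report records added, removed, or changed between two snapshots."""
     added = sorted(set(after) - set(before))
     removed = sorted(set(before) - set(after))
     changed = sorted(k for k in set(before) & set(after) if before[k] != after[k])
     return added, removed, changed

 # Hypothetical snapshots taken before and after the test run.
 before = load_snapshot("accounts_before.csv", key_field="account_id")
 after = load_snapshot("accounts_after.csv", key_field="account_id")
 added, removed, changed = diff_snapshots(before, after)
 print("added:", added)
 print("removed:", removed)
 print("changed:", changed)

Comparing whole records this way surfaces the unexpected updates as well as the ones being hoped for, which is the point of the last consideration above.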
Test Scripts versus Test Cases
This article perpetuates the confusion between test cases and test scripts. It would be best if someone could point out that common usage is not "test script" but test case and test scenario. A test script is usually used in automated testing, such as functional GUI tools (WinRunner, Silk Test, Rational Robot, etc.) and unit tools (XUnit, ANT, etc.). --Walter Görlitz 14:45, 28 August 2005 (UTC)
I have attempted to address this in the test cases section, but common usage is not test case and test scenario, it is test case and test suite. Scenario tests are not necessarily related to traditional test cases. --Walter Görlitz 18:23, 20 October 2005 (UTC)
- I disagree with --Walter Görlitz with respect to "test script." It is not unusual, before testing, to write out some kind of an outline of the planned test: what we will be checking on, what we are looking for, how we will be able to tell if something went wrong, and what we will do about it. 90% of my tests are on production data, and I have had the good fortune to be able to run them at a time when, if something goes badly wrong, I can recover the data to what it was before the test started. This written outline of the plan of action, and of how the data is to be recovered if the test goes badly, is a "test script." User:AlMac|(talk) 08:29, 31 January 2006 (UTC)
- Disagree all you want. A test script has two definitions, and the one you have defined falls into neither of them. You have described a Test plan, or possibly a test strategy: how you plan to do the testing. A script could be a written test case, or it could be used for automated testing. Feel free to use your derivative form, but it's not common usage. --Walter Görlitz 23:48, 2 February 2006 (UTC)
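For illustration only, here is a minimal sketch of the automated sense of "test script" described above, written as an xUnit-style test case; Python's unittest stands in for the unit tools mentioned, and the function under test is hypothetical:

 import unittest

 def apply_discount(price, percent):
     """Hypothetical function under test."""
     if not 0 <= percent <= 100:
         raise ValueError("percent must be between 0 and 100")
     return round(price * (1 - percent / 100), 2)

 class ApplyDiscountTest(unittest.TestCase):
     # Each method is one executable test case; the module as a whole is
     # the sort of artifact usually meant by "test script" in automation.
     def test_typical_discount(self):
         self.assertEqual(apply_discount(100.00, 15), 85.00)

     def test_invalid_percent_is_rejected(self):
         with self.assertRaises(ValueError):
             apply_discount(100.00, 150)

 if __name__ == "__main__":
     unittest.main()

A written outline of what to check and how to recover afterwards is, by contrast, closer to a test plan, as noted in the reply above.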
Regarding the Custodiet Ipsos Custodes section
It was my understanding that a Heisenbug is a defect that occurs only when the software is run in release mode, but stops occurring when it is run in debug mode. The act of observation changes the nature of the application, and the defect disappears. --Walter Görlitz 18:23, 20 October 2005 (UTC)
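To illustrate the general idea rather than settle the definition, here is a minimal sketch of a timing-dependent defect whose visibility can change under observation; whether it reproduces at all depends on the interpreter and the machine, which is exactly the point:

 import threading

 counter = 0  # shared state, deliberately not protected by a lock

 def worker(iterations):
     global counter
     for _ in range(iterations):
         value = counter           # read
         # print(value)            # "observing" here changes the timing
         counter = value + 1       # write back; interleavings can lose updates

 threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
 for t in threads:
     t.start()
 for t in threads:
     t.join()

 print("expected 400000, got", counter)  # often less when the race bites

Uncommenting the debug print alters the scheduling, so the lost updates may become more or less frequent, or vanish entirely, much as a debug build can hide a defect that only shows up in a release build.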
Certification by ISO 9001??
Section 13 states that "No certification is based on a widely accepted body of knowledge."
Isn't there the ISO 9001 quality assurance certification? If someone knows more details about ISO 9001, it would be cool if he could take a look and change the section if it's needed. Thank you. :)
(For tracking: Don't remove this comment until this question is 100% clear.)
- There is an ISO 9000 certification specifically related to computer software quality, and one related to computer security assurance. There are similar systems in other nations, which are in the process of being combined. There has also been legislation in the USA called Sarbanes-Oxley, or SOX for short, which mandates a software change methodology under which changes are approved and tested. Hundreds of companies have been audited to make sure they are compliant with these standards. User:AlMac|(talk) 08:19, 31 January 2006 (UTC)
The real scope of testing
You said: "In other words, testing is nothing but criticism or comparison, that is comparing the actual value with expected one." I must say that TESTING is much more than that. First of all, testing is about improving (ensuring) the quality of a product.
I would add that Testing or Quality Analysis also provides more than the comparison of actual and expected. It also provides customer-facing/business-logic testing to help ensure that the product being created is really meeting the needs of the customer...by meeting the requirements.
The real problem with trying to "define" software testing is that you first need to understand the many aspects of software testing. So far I haven't discovered any articles that attempt to cover these.
I'd say put this under the controversy section, and while you're at it, you might as well put the rest of the section there as well. I have to say that the starting page for Software testing was definitely not to my liking, and not differentiating between Software Testing and Software Quality Assurance is a bad start.
- I agree. That's like claiming that medicine is nothing but comparing observed symptoms to diseases. There's more to both medicine and software testing than that. The fact that you cannot get an absolute guarantee is irrelevant. You can't get absolute certainty out of anything.--RLent 16:39, 20 February 2006 (UTC)
Software Reliability Engineering (SRE)
This article is weak. SRE is not mentioned. There is no mention of any testing practice designed to measure mean time to failure (MTTF). (The ambiguous Load Testing is mentioned, but with no focus on an MTTF goal.)
No mention of the Cleanroom-related controversy, in which the Cleanroom advocates promoted SRE as the only form of testing, with no coverage testing.
No mention of the fact that practical coverage testing cannot provide reliability, since it is not practical to cover all the places where a bug can lurk, and coverage testing includes no attempt to measure the failure rate of the remaining bugs. (By reliability, I mean a high MTTF, i.e. a low failure rate.) (BTW, Dick Hamlet proved that perfect coverage testing is equivalent to a formal proof of correctness, in "Partition Testing Does Not Inspire Confidence".)
Need a discussion of what the goals of testing are. Need to discuss the strengths and weaknesses of various methods in reaching these goals. Two kinds of goals: goals for the testing process, like line coverage, and goals for the delivered system, like a high MTTF (a rough numeric sketch of an MTTF estimate follows below).
Need a discussion of how to determine when to stop testing.
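As a rough sketch of the kind of measurement being asked for here, MTTF can be estimated from the failure times observed during a reliability test run; the timestamps below are made up for illustration:

 # Hypothetical failure times, in hours of accumulated test execution.
 failure_times = [12.5, 31.0, 44.2, 70.8, 96.1]

 # Inter-failure intervals, measured from the start of the run.
 starts = [0.0] + failure_times[:-1]
 intervals = [end - start for start, end in zip(starts, failure_times)]
 mttf_estimate = sum(intervals) / len(intervals)

 print("observed failures:", len(failure_times))
 print("estimated MTTF: %.1f hours" % mttf_estimate)

Such an estimate only means something if the test workload resembles the operational profile, which is the heart of the SRE argument above; coverage counts by themselves say nothing about the failure rate of the bugs that remain.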
- Isn't "mean time to failure" more of a hardware than software concept?
- I have worked for small companies for approximately 40 years, where most of the time I report to a manager whose expertise is outside of computing.
- The purpose of testing has been multi-fold:
- Does the software do what was requested by whoever asked for it, outside of the computer staff (me)?
- Do we see room for obvious improvements, and have those improvements been made successfully?
- Is the software user-friendly and intuitively obvious to operate, such that the risk of human error in using it is as low as we can make it?
- When humans enter bad data, does the software catch that in a manner that is easy to resolve?
- "The time to stop testing" is when the software is working satisfactorily, we collectively do not see how to improve it further, and other software projects have become more important to work on.
User:AlMac|(talk) 08:39, 31 January 2006 (UTC)
Mislinks
I've removed the link to 'testers' from the fifth paragraph of the 'Introduction' section, as it linked to an inappropriate article with no disambiguator. It's difficult to see how a separate section on 'testers' would be justifiable anyway. --Dazzla 15:58, 5 January 2006 (UTC)
Recent addition
Testing analysis can be also measured the bugs reported from site or from customer in the deployed application for the customer requirement. If bugs are related to functional then it is better to again review the functional test case because may be there is possibility some testing or functional scenarios are missed out.
I may be dim today, but I don't see what this means. It certainly needs to be rewritten in better English. --David.alex.lamb 20:27, 24 February 2006 (UTC)