Talk:Software testing/Archive 1
Introduction of this page is an insult to our craft
I find the introduction to this page a bit of an insult to the craft of software testing. To state that a good test is one that finds an error is totally misleading and incorrect. Tests can be good if they prove that an error is not present, or if they prove that the system is functionally compliant. To make such a sweeping and incorrect statement only serves to lower the status of Wikipedia.
Additionally, I would strongly disagree with the citation of ISO and IEEE standards as providing any form of "complete" list. A comprehensive list, yes, but the term "complete" here is totally inaccurate; there are many people who disagree with these standards. IEEE 829 is a good example of where our field is completely divided. Perhaps it's wrong to attempt to sum up our craft in Wikipedia. It's a profession, and it's wrong to try to pin it down to such an inane and narrow subset of views.
Finally, to try to mix the terms software testing and quality assurance is simply incorrect. It implies that testing is something by which we can assure quality; not so. It's a means of assessing quality.
Links to Cunningham's wiki
- How about (external) links to Ward Cunningham's wiki? His wiki has a lot of software development stuff: patterns, methodology, etc.
- I added a bit about test-driven code, which indirectly refers to the brunt of the discussion on Ward's wiki. JimD 04:48, 2005 Jan 9 (UTC)
I also added a link to Ivars Peterson and a reference to his book. I feel a little awkward doing so, as I'm not trying to endorse it in particular; it just happens to be the resource that I thought appropriate at that juncture.
I'm also too tired to go back through and clean up my edits more and do additional research at the moment; but I feel like the work I did was better submitted, even in rough form, than discarded. So we'll see what the rest of the Wikipedian community as a whole makes of it. :) Edit Boldly (In particular I know the Latin is awkward. It sticks out like a sore thumb. Fix it. Please) JimD 04:48, 2005 Jan 9 (UTC)
Removed paragraph on being able to prove things in a court of law. Almost all software companies disclaim liability for buggy software, and except for a few life-critical pieces of software, the prospect of being sued isn't a strong motivating factor. Also, except for life-critical software, most software testing does not seek to eliminate all defects, since this is generally too expensive to be worthwhile.
- However, I did engage in an assignment once, the objective of which was to test whether the software might leave the supplier exposed to an anti-trust lawsuit. The software processed data feeds containing financial data, and the licence required that this data be capable of being processed by competitors' systems. Without using any of these systems, we had to verify that the software and documentation didn't breach this licence. Matt Stan 01:04, 10 Jan 2005 (UTC)
Software testing, like software engineering and methodologies, is largely defined by common practices and fashions.
Despite companies disclaiming liability for buggy software, many of those disclaimers have not been upheld in court. Cem Kaner has suggested this a few times. --Walter Görlitz 20:49, 12 Nov 2004 (UTC)
Gamma Testing discussion is off-beat. This might be a better description: http://www.smartcomputing.com/editorial/dictionary/detail.asp?guid=&searchtype=1&DicID=10215&RefType=Dictionary
"Some cynics say..." -- Really, does someone have a reference for this? If not, i suggest deleting entire discussion of gamma testing.
Fault vs. failure vs. error vs. defect
The current version of the article differentiates "fault" from "failure" in a comprehensive way. However, it does not differentiate both concepts from "error" and "defect". I think this is an important observation that has been overlooked so far. --Antonielly 18:31, 16 March 2006 (UTC)
Controversy Section, etc.
It's about time we recognized that prominent people in the industry have very different views of testing. I confess, I was tempted to throw this entire software testing article out and rewrite it without the gratuitous references to weak testing folklore such as black box and white box testing -- an idea that has almost no content, conveys no skill or insight, and merely takes up valuable room that could be used to discuss the bona fide skills and techniques of software testing. (If that sounds arrogant to you then my point is proven: there is a lot of disagreement...)
But, in the spirit of wikidom, rather than tear the article up, I added the section on controversy, and I just added the second paragraph which introduces the notion of investigation and questioning as central to testing.
I intend to come back periodically and morph this entry into something I think is useful, unless there's a lot of push back, in which case we should incorporate the controversy into the article, I believe. Or establish new articles where the various schools of thought can play alone.
-- JamesBach
- I write software for a career, and have been in the business for 40 years. Everyone who writes software has a responsibility to address assurance that the software will work correctly, but we usually work for managers who have some kind of budget for how much work is justifiable. The most common rule of thumb is "good enough." The methods of testing that I have used have evolved over time, based on what I have learned in my profession and from experience. Some major considerations:
- Ideally we want to have a model of the production data that can test all possibilities, so that if anything goes wrong, because the software is not yet perfected, what is damaged is only the model -- the test data that was copied from the real production data. 99% of the time, having a test database, or a model that is representative of the real data, is an expense that managers do not support.
- Before testing software that is to update files, it is smart to make a backup of the files that are to be updated, so that if anything goes wrong, we can recover from the backup.
- While we are expecting, hoping for, certain changes to the data, it is always possible that we will get unexpected updates where they are not wanted, so we need to have tools that can compare the before and after data to identify exactly what did change as a result of the tests (a sketch of such a comparison follows below).
User:AlMac|(talk) 08:25, 31 January 2006 (UTC)
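A minimal sketch of the kind of before/after comparison described above, assuming the snapshots are CSV exports keyed by an ID column; the file names and the record_id field are hypothetical:

import csv

def load_snapshot(path, key_field="record_id"):
    """Load a CSV snapshot of the data into a dict keyed by record ID."""
    with open(path, newline="") as f:
        return {row[key_field]: row for row in csv.DictReader(f)}

def diff_snapshots(before, after):
    """Report records added, removed, or changed between two snapshots."""
    added = [k for k in after if k not in before]
    removed = [k for k in before if k not in after]
    changed = [k for k in before if k in after and before[k] != after[k]]
    return added, removed, changed

# Hypothetical snapshot files taken before and after the test run.
before = load_snapshot("accounts_before.csv")
after = load_snapshot("accounts_after.csv")
added, removed, changed = diff_snapshots(before, after)
print(f"added: {added}\nremoved: {removed}\nchanged: {changed}")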
Whoever authored/edited this portion of the entry:
"The self-declared members of the Context-Driven School..."
...clearly does not hold that group in very high esteem. Perhaps dropping "self-declared" would minimize the bias of that statement. 66.195.137.2 14:59, 24 March 2006 (UTC)
- I, James Bach, wrote that. I am a founder of the Context-Driven School. It's not disparaging, it's just honest. We are self-declared. Maybe there's a better way to say it, though? User:JamesBach
Test Scripts versus Test Cases
This article perpetuates the confusion between test cases and test scripts. It would be best if someone could point out that common usage is not "test script" but test case and test scenario. A test script is usually used in automated testing, such as functional GUI tools (WinRunner, Silk Test, Rational Robot, etc.) and unit tools (XUnit, ANT, etc.). --Walter Görlitz 14:45, 28 August 2005 (UTC)
I have attempted to address this in the test cases section, but common usage is not test case and test scenario, it is test case and test suite. Scenario tests are not necessarily related to traditional test cases. --Walter Görlitz 18:23, 20 October 2005 (UTC)
- I disagree with --Walter Görlitz with respect to "test script." It is not unusual, before testing, to write out some kind of outline of the planned test: what will we be checking on, what are we looking for, how will we be able to tell if something went wrong, and what will we do about it? 90% of my tests are on production data, and I have had the good fortune to be able to run them at a time when, if something goes badly wrong, I can recover the data to what it was before the test started. This written outline of the plan of action, and of how the data is to be recovered if the test goes badly, is a "test script." User:AlMac|(talk) 08:29, 31 January 2006 (UTC)
- Disagree all you want. A test script has two definitions and the one you have defined falls into neither of them. You have described a Test plan or possibly a test strategy: how you plan to do the testing. A script could be a written test case or it could be used for automated testing. Feel free to use your derivative form though, but it's not common usage. --Walter Görlitz 23:48, 2 February 2006 (UTC)
- Guys, common usage varies with community. In my circle, we use "test script" as a synonym for test procedure, which means a set of instructions for executing a test. It may or may not be automated. That issue is disambiguated in context. I can't speak for the whole testing universe on this, but then again, neither can anyone else. So, maybe if you have a variation you want to talk about, then talk about it. -- User:JamesBach
- I use the following definitions. Test Plan - An outline of the approach to test the product, describing in general terms the features and improvements, schedule, resources and responsibilities, test lab, etc. etc. Test Matrix - An outline of the test cases to be written during the release cycle (referred to above in the discussion as a Test Script). Test Case - A document or paragraph outlining the steps required to query the issue being tested. Test Script - an automated test case executed by whichever application delights you. Test Suite - The suite of manual and automated tests executed against the release. Methylgrace 19:44, 30 August 2006 (UTC)
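To illustrate the distinction drawn in the definitions above, here is a minimal automated test script using Python's unittest, in which each test method is one test case and the file as a whole is the script; the apply_discount function under test is hypothetical:

import unittest

def apply_discount(price, percent):
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    """Each test method below is a single test case."""

    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()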
Regarding the Custodiet Ipsos Custodes section
It was my understanding that a Heisenbug was a defect that occurred only when the software was run in release mode, but stopped occurring in debug mode. The act of observation changes the nature of the application, and the defect disappears. --Walter Görlitz 18:23, 20 October 2005 (UTC)
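Release-versus-debug behavior is one classic cause; timing is another. As a minimal Python sketch (a race condition, and certainly not the only kind of Heisenbug), the following program can lose updates, and adding instrumentation such as print() calls inside the loop can perturb thread timing enough to make the symptom vanish, much as switching to a debug build can:

import threading

counter = 0

def worker():
    global counter
    for _ in range(100_000):
        # Read-modify-write is not atomic; updates from the two
        # threads can interleave and be lost.
        counter += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000, but the race may print less; "observing" the loop
# with print() or a debugger can change the timing and hide the bug.
print(counter)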
Certification by ISO 9001??
In section 13 it is stated that "No certification is based on a widely accepted body of knowledge."
Isn't there the ISO 9001 quality assurance certification? If someone knows more details about ISO 9001, it would be cool if they could take a look and change the section if it's needed. Thank you. :)
(For tracking: Don't remove this comment until this question is 100% clear.)
- There is an ISO 9000 certification specifically related to computer software quality, and one related to computer security assurance. There are similar systems in other nations, which are in the process of being combined. There has also been legislation in the USA called Sarbanes-Oxley, or SOX for short, which mandates a software change methodology in which changes are approved and tested. Hundreds of companies have been audited to make sure they are compliant with these standards. User:AlMac|(talk) 08:19, 31 January 2006 (UTC)
- The certifications you speak of have absolutely nothing to do with testing skill. In fact, they have little to do with anything that should matter to people who want excellent software testing. Both ISO 9001 and Sarbanes-Oxley are simply mechanisms by which large consulting companies suck money from other large corporations. We should be ashamed that our craft is so manipulated. Besides, neither certification has anything to do with a body of knowledge, widely accepted or not. -- User:JamesBach
The real scope of testing
You said: "In other words, testing is nothing but criticism or comparison, that is comparing the actual value with expected one." I must say that TESTING is much more than that. First of all, testing is about improving (ensuring) the quality of a product.
I would add that testing, or quality analysis, also provides more than the comparison of actual and expected. It also provides customer-facing/business-logic testing, to help ensure that the product being created is really meeting the needs of the customer...by meeting the requirements.
The real problem with trying to "define" software testing is that you first need to understand the many aspects of software testing. So far I haven't discovered any articles that attempt to cover these.
I'd say put this under the controversy section, and while you're at it, you might as well put the rest of the section there as well. I have to say that the starting page for Software testing was definitely not to my liking, and not differentiating between software testing and software quality assurance is a bad start.
- I agree. That's like claiming that medicine is nothing but comparing observed symptoms to diseases. There's more to both medicine and software testing than that. The fact that you cannot get an absolute guarantee is irrelevant. You can't get absolute certainty out of anything.--RLent 16:39, 20 February 2006 (UTC)
- I disagree. I think testing can be about improving the product, but only if you're not a tester. A tester's job is to observe, compare, infer, report, but not to improve anything. Testers who consider themselves paladins of quality get marginalized. That way lies madness, friends. In any case, if you believe that, then you are a representative of a particular school of testing theory, as am I a representative of a different school. By all means, say whatever you want to say, but don't claim to speak for all of us. I think that's a big problem with Wikipedia. How are we to write entries for controversial subjects? Testing is controversial. We just have to deal with that. User:JamesBach
Software Reliability Engineering (SRE)
This article is weak. SRE is not mentioned. There is no mention of any testing practice designed to measure mean time to failure (MTTF). (The ambiguous load testing is mentioned, but with no focus on an MTTF goal.)
No mention of the Cleanroom-related controversy, in which the Cleanroom advocates argued for SRE as the only form of testing, with no coverage testing.
No mention of the fact that practical coverage testing cannot provide reliability, since it is not practical to cover all the places where a bug can lurk, and coverage testing includes no attempt to measure the failure rate of the remaining bugs. (By reliability, I mean a high MTTF.) (BTW, Dick Hamlet proved that perfect coverage testing is equivalent to a formal proof of correctness, in "Partition Testing does not Inspire Confidence")
Need a discussion of what the goals of testing are. Need to discuss the strengths and weaknesses of various methods in reaching these goals. Two kinds of goals: goals for the testing process, like line coverage, and goals for the delivered system, like a high MTTF.
Need a discussion of how to determine when to stop testing.
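On the measurement side, a minimal sketch of estimating MTTF from inter-failure times observed during a test run; the timestamps here are hypothetical:

# Hypothetical failure timestamps, in hours into a reliability test run.
failure_times = [3.2, 7.9, 15.4, 26.0, 41.5]

# Inter-failure gaps; widening gaps over the run suggest reliability growth.
gaps = [b - a for a, b in zip([0.0] + failure_times[:-1], failure_times)]

mttf = sum(gaps) / len(gaps)
print(f"observed MTTF: {mttf:.1f} hours over {len(gaps)} failures")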
- Isn't "mean time to failure" more of a hardware than software concept?
- I have worked for small companies for approx 40 years, where most of the time I report to a manager who is outside of computer expertise.
- The purpose of testing has been multi-fold:
- Does the software do what was requested by whoever asked for it, outside of the computer staff? (me)
- Do we see room for obvious improvements, and have those improvements been made successfully?
- Is the software user-friendly and intuitively obvious to operate, such that the risk of human error in using it is as low as we can make it?
- When humans enter bad data, does the software catch that in a manner that is easy to resolve?
- "The time to stop testing" is when the software is working satisfactorily, collectively we not see how to further improve it, other software projects have become more important to be working on.
User:AlMac|(talk) 08:39, 31 January 2006 (UTC)
Mislinks
I've removed the link to 'testers' from the fifth paragraph of the 'Introduction' section, as it linked to an inappropriate article with no disambiguator. It's difficult to see how a separate section on 'testers' would be justifiable anyway. --Dazzla 15:58, 5 January 2006 (UTC)
Recent addition
Testing analysis can be also measured the bugs reported from site or from customer in the deployed application for the customer requirement. If bugs are releated to functional then it is better to again review the functional test case because may be there is possibility some testing or functional scanrios are missed out.
I may be dim today, but I don't see what this means. It certainly needs to be rewritten in better English. --David.alex.lamb 20:27, 24 February 2006 (UTC)
Beta test(ing)
This might be a little confusing to readers: Beta testing redirects to this article, yet Beta test redirects to Development stage. They should be more consistent, because I happened upon the latter article after omitting the "-ing" suffix out of curiosity. – Minh Nguyễn (talk, contribs) 07:51, 23 April 2006 (UTC)
Copyvio
I've just removed a large section which was copied straight from [1]. If people could keep an eye out in case it gets added back, I'd appreciate it. Shimgray | talk | 16:32, 23 April 2006 (UTC)
External Links
I was wondering what people's views are about what should, and should not, go in the External links section. There are many open source test tools, such as Bugzilla and the test case management tool QaTraq, which are very relevant to software testing. As a contributor to one of these tools, I personally feel there is nothing wrong with listing these tools in the External links section.
Personally I feel it's both useful and informative to add links to tools like these in the External Links section. However, before adding a link again I'd like to know if other people consider this type of link useful and informative under the Software Testing article. William Echlin
- I would not like to see such links added. Wikipedia is not a vehicle for advertising (regardless of whether or not the thing being advertised is open source). Wikipedia is also not a directory of links. Start adding links for a few tools, and pretty soon everyone wants their favorite tools added to the list — a list that will soon grow to dominate the article. Style guidance for the use of external links can also be found here.
- There are plenty of other sites that provide directory services. Why not just include a few relevant links to such sites, such as the Open Directory Project directory of software testing products and tools, or SourceForge's testing category? These sites are likely to be far more comprehensive than anything that might be added here, and also far more likely to stay up to date. --Allan McInnes (talk) 21:31, 26 May 2006 (UTC)
- You make some good points there. I see now that this is not the right place for individual links to tools. Perhaps a single link to OpenSourceTesting.org would be worth considering as well. Thank you for pointing me in the right direction to the 'Style guidance for External Links' too. I had been looking for something along these lines. You make a good custodian of this topic. William Echlin 08:06, 27 May 2006 (UTC)
- Thank you. I think a link to OpenSourceTesting.org would be fine. I'll add that, and a few of the directory links I mentioned, to the article. --Allan McInnes (talk) 16:58, 27 May 2006 (UTC)
Quotes
Some quotes were removed, and I cannot seem to find the discussion that went with it, nor can I really see the reason or need for removal. Being new to looking at the wiki editing backgrounds, I was wondering if this is common practice. If the removal is done based on a single person's view, could I edit it back (and gain little)? The main reason for this question is the quote "Software Testers: Depraved minds, usefully employed." -- Rex Black, which I found very accurate and recognizable, and which was removed. --Keeton 08:19, 11 July 2006 (UTC)
- I removed the quotes in question because the quotes section seemed to be getting large (the quote section is largely deprecated on Wikipedia, and is generally supposed to be kept small if it exists at all). The removed quotes were all credited to people who are apparently not even prominent enough to warrant a Wikipedia article (i.e. they were red-links). The removal was a bold move on my part. If you believe the quotes in question are useful and important, feel free to revert my changes. --Allan McInnes (talk) 14:29, 11 July 2006 (UTC)
Alpha Testing
I don't believe the description of alpha testing concurs with the definition that I understood, and that appears to be backed up by googling, which is that alpha testing involves inviting customers/end-users to test the software on the developer's site. This being the distinction from beta-testing, which involves testing by customers on their own site. Testing by developers/in-house test team is, as I understand it, separate from alpha testing (and ideally done before alpha testing). Can anyone provide authoritative references that support the existing definition? --Michig 09:37, 17 July 2006 (UTC)
- Generally, alpha testing is the software prototype stage, when the software is first able to run. It will not have all the intended functionality, but it will have core functions and will be able to accept inputs and generate outputs. In-depth software reliability testing, installation testing, and documentation testing are not done at alpha test time, as the software is only a prototype.
- Digitalfunda 11:27, 26 September 2006 (UTC)
- Perhaps the confusion is between alpha versions and alpha testing. The difference between Alpha and Beta testing is how the testing is carried out, and not necessarily how 'complete' the software is (though of course, later stages of testing will generally correspond to software being more complete). I have found several authoritative sources that describe alpha testing as testing by customers on the developer's site, and none that describe it as initial testing by developers/test staff. --Michig 12:41, 26 September 2006 (UTC)
Yeah, that's correct; however, different companies treat alpha testing differently. I have been in this domain for the last 4.5 years, and I have seen both alpha versions and alpha testing. As you have confirmed this, in this case I would like you to add the details regarding alpha testing so that everyone can benefit. Digitalfunda 06:05, 27 September 2006 (UTC)
Levels
Regression testing can be performed at unit, module, system or project level.
What is the difference between unit level and module level? (and may someone explain system level and project level as well?) Thanks, --Abdull 14:34, 27 July 2006 (UTC)
Qualification
What does the qualification of a product actually mean? Does it mean that it installs and functions with the new product? I am currently trying to qualify a product with SQL 2005 and want to broaden my scope.
Certification
This section seems to be the subject of controversy recently. While the statement "No certification currently offered actually requires the applicant to demonstrate the ability to test software" is true of probably most of the available certifications (and the same criticism could be levelled at many areas of IT certification), the ISEB Practitioner-level certification would appear to be different, as according to the BCS website "The Practitioner Certificate is for experienced testing practitioners. This certificate demonstrates a depth of knowledge of testing topics and the ability to perform testing activities in practice." Are they just making this up, or does the article need to be changed to reflect this certification? --Michig 10:17, 5 September 2006 (UTC)
- We need to change this article to a certain extent, as these days certifications like CSTE are becoming a must if you are in the field of software testing.
- I have updated this section with a list of popular certifications. Digitalfunda 06:22, 27 September 2006 (UTC)
- I know that for the ISTQB Certified Tester Advanced Level, professional work experience is needed before you can take the Advanced Level, but for the Foundation Level no prerequisites are necessary. More info can be found on these websites: [2] [3]
- --Erkan Yilmaz 02:08, 30 September 2006 (UTC)
- Digitalfunda, I think that with your updates, Certification deserves its own section. It doesn't make sense anymore to have it in the Controversy section. DRogers 14:48, 3 October 2006 (UTC)
I have added a new article on CSTE; let's see the response it gets. The discussion page is open for peer reviews. Digitalfunda 03:52, 13 October 2006 (UTC)
- see my edits on CSTE; hope they help you to improve it. :-) --Erkan Yilmaz 16:33, 12 October 2006 (UTC)
Also divided the certifications into exam-based and education-based. This is from:
Dr. Magdy Hanna, IIST Chairman & CEO (2006): The Value of Certification for Software Test and Quality Professionals
--Erkan Yilmaz 16:33, 12 October 2006 (UTC)
- removed Certification from the Controversy section; I don't remember any controversy surrounding them. Digitalfunda 03:52, 13 October 2006 (UTC)
Code Coverage
I think the section on code coverage is worthwhile information, but maybe it could be moved to the code coverage article, with this article still linking to it, or to the white box testing article. Any input? DRogers 14:41, 3 October 2006 (UTC)
Regression testing
I'm not so sure of the significance of having a section for regression testing. All the information in that section is in the article on the subject. Maybe we could work it into another section. DRogers 17:55, 3 October 2006 (UTC)
Gray box testing
I'd like to see a better explanation of "seeding the database". I also disagree with the sentence "It can also be used of testers who know the internal workings or algorithm of the software...". Doesn't that describe white box exactly?
Also, how do I suggest a cleanup of the Talk page? DRogers 16:52, 5 October 2006 (UTC)
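For context, "seeding the database" usually means loading known rows before a test so that expected results are predictable, and then verifying the application's behavior against that known state. A minimal sketch using Python's sqlite3; the table and values are hypothetical:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Seed the database with known rows so the expected output is predictable.
seed_rows = [(1, "alice"), (2, "bob")]
conn.executemany("INSERT INTO users VALUES (?, ?)", seed_rows)
conn.commit()

# Gray-box check: exercise a query path, then verify the result against
# the seeded data directly.
rows = conn.execute("SELECT id, name FROM users ORDER BY id").fetchall()
assert rows == seed_rows, f"unexpected rows: {rows}"
print("seeded data verified")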
RE: Editorial Viability Assessment
Software is defined by a specification -- unless it's hacked. Software test and measurement consists of applying stimulus against the product specification to determine defects in the embodiment. If that's not an editorial function, I don't know what is. Perhaps you object to the abstraction? Perhaps you object to the application of pre-checkin evaluation to preclude regression introduction? I've been in the software test engineering game for over 10 years now. The technique I outline has been successfully applied in very large and very small software factories. That no curriculum previously exists describing it should not interfere with discussion of its effectiveness.
Can the material be moved somewhere? I suppose so. But where does it belong? Seems appropriate to outline additional test engineering techniques under the subject of software testing. --Rmstein 19:37, 5 October 2006 (UTC)
- I would say that it belongs in an article of the same name if anywhere at all. The Software Testing article is already getting too large and would be much better just giving an overview of the main aspects of software testing, and I don't believe Editorial Viability Assessment is sufficiently notable to be in the main article. The (large) image isn't particularly helpful, either.--Michig 19:49, 5 October 2006 (UTC)
- Given my experience, I'd say I'm still at the introductory level. I haven't come across this terminology in this industry yet. That tells me that, like you said, maybe this is at the wrong level of abstraction. So maybe it belongs in its own article. But I'm not sure. Can you cite some sources or list some references, external links, etc.? DRogers 20:01, 5 October 2006 (UTC)
- I ripped out the content and stuck it in a separate page, placing a link to it in the software testing page. Appreciate the feedback. I have not encountered any technical discussion of this abstraction in the literature. One sees a lot about test-driven development, agile techniques, etc., which may or may not mirror the pre-checkin/post-integration evaluation technique I discuss. It depends on the software factory size primarily -- fewer talented authors are preferred to many mediocre authors. I've applied this methodology in large and small factories to stabilize releases. If the bits aren't stable, you've got a software toxic waste dump to manage. An editorial technique saves a little on kitty litter. A lot of shops seem to churn and burn their customers, products, and organizational participants. Save for the product monopolies, a broken software factory that cannot embrace a proactive means to enforce continuous release-ready bit maintenance is usually doomed to the bit-bucket (in a globalized economy at least). -- Rmstein 12:24, 6 October 2006 (UTC)
Exploratory vs. Scripted
Did you mean to say exploratory versus systematic software testing, or, better yet, to call exploratory by its real name, ad-hoc testing? The word "scripted" as it pertains to software testing is the past tense of writing an automated script or test case. When explaining or describing Madonna's life choices in Wikipedia I could understand the use of the word "misunderstood." As Sergeant Friday said in the TV show, just the facts, ma'am, just the facts.
MichaelDeady 22:07, 9 October 2006 (UTC)
Both exploratory and scripted approaches can be systematic. The approach I choose (exploratory or scripted) often has little to do with how systematic I am, and has more to do with how I want to structure my work and how much freedom I want as a tester. Just because I write test cases down (scripted) doesn't mean I'm methodical in my test design, coverage, or execution. It just means I wrote my tests down. Likewise, just because I say I'm doing exploratory testing, it doesn't mean I'm methodical in my test design, coverage, or execution. How systematic you are is not dictated by the approach.
Some exploratory testing is ad-hoc, but not all ad-hoc testing is exploratory. There are specific skills and tactics that occur in exploratory testing that may or may not appear in ad-hoc testing: modeling, resourcing, questioning, chartering, observing, manipulating, pairing, generating and elaborating, overproduction/abandonment/recovery, refocusing, alternating, collaborating, branching and backtracking, conjecturing, recording, and reporting. For descriptions, do a Google search on 'exploratory testing dynamics' and read a little about what ET is and how people actually do it.
-Mike Kelly
If the term exploratory can be used to explain ad-hoc testing, the same rule applies: a person may perhaps say that scripted testing could be called by its proper name of systematic testing. You could also call what I do exploratory when I write test plans, cases, and risk assessments. But when I place the word "exploratory" in the aforementioned context, it means an overall savings of time and money.
I just wanted to point out the incorrect use of the word "scripted" with a little bit of flair. I believe the correct statement should be along the lines of "Exploratory vs. Systematic". Both methodologies have their good and bad points.
Just as you stated above that exploratory goes much further into process than just saying ad-hoc, the same can be said about systematic when the word "scripting" is used to describe overall processes. "Systematic test" is unique in that it defines a test to include more than a procedure performed after the system is fully assembled. Systematic testing includes test planning, test case design, test implementation, and test execution. The key of systematic testing is that the time and effort exerted on fixing problems is sharply decreased by early detection. But more importantly, the test process helps to put all the issues on the table so that fewer open items remain in the later development stages.
Just as if we were writing white papers, it really just boils down to power words and semantics.
MichaelDeady 15:53, 11 October 2006 (UTC)
Roles in software testing
Hi everybody, after getting a friendly reminder from Pascal.Tesson I added the roles of software testing here, since the term software testers leads here. The phases and goals of testing can be seen in:
- Gelperin, D., and Hetzel, B. (1988): "The Growth of Software Testing," CACM, Vol. 31, No. 6
The roles of software testers I have from:
- Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veendendal. (2005). Certified Tester - Foundation Level Syllabus - Version 2005, International Software Testing Qualifications Board (ISTQB), Möhrendorf, Germany. (PDF; 0,424 MB).
I am also thinking of adding the phases of software testing here - let's see where I can best add them. Searching... --Erkan Yilmaz 17:37, 12 October 2006 (UTC)
History of software testing
So, added one viewpoint of the history. This is from:
- Gelperin, D., and Hetzel, B. (1988): "The Growth of Software Testing," CACM, Vol. 31, No. 6
- a summary of the first can also be found in: Laycock, G. T. (1993): "The Theory and Practice of Specification Based Software Testing," University of Sheffield Department of Computer Science
What do you think, should we add one of these into the references? --Erkan Yilmaz 17:50, 12 October 2006 (UTC)
- We should probably add both, no? DRogers 20:25, 12 October 2006 (UTC)
- Works for me, DRogers; will add both. Added - the 2nd is also published freely by Laycock :-) have fun reading
- --Erkan Yilmaz 21:12, 12 October 2006 (UTC)