Talk:Defensive programming
WikiProject Computing: Start-class
Random notes
...copied from article, "Please expand this article. These random notes should be changed to a more coherent article." Badanedwa 22:52, May 1, 2004 (UTC)
humm
This article seems somewhat obsolete and feels too Unix-oriented. Maybe it should be edited to rework this Unix-oriented style and then have more present-day material added. OK, maybe I am wrong. What do other people think about the quality of this article? --Tei 15:09, 19 July 2005 (UTC)
I agree that this article needs cleaning up. - brenneman(t)(c) 05:24, 6 August 2005 (UTC)
Bug Free
How does defensive programming differ from the standards of good software development? Now, I have been a computer programmer for over 40 years and fully agree that there is an enormous volume of crud out there, as we can see from the volume of patches coming from major software suppliers to fix problems that should never have been there in the first place. Yet most everything I am seeing in this article describes standards that nearly every programmer should adhere to, with variations in how to implement them in each programming language, but few actually do. User:AlMac|(talk) 18:11, 18 January 2006 (UTC)
I do NOT adhere to these rules
I see defensive programming as a method that can be used when it is reasonable both to expect intentional and accidental misuse of the code and to be able to counteract it. The problem is that it makes it easy to hide a problem without alerting anyone about it. It is one design choice among others. Another approach, if defensive programming is not used, is to let the application crash, or in some other less drastic way to make the call invalid; in other words, to enforce the correct use of the piece of software in question.
The idea that defensive programming is simply good programming is wrong; it is one way of designing the code. The question is how to define the term properly. —Preceding unsigned comment added by 137.61.234.225 (talk • contribs)
- You didn't specify the programming language you use. In languages where an exception would be thrown and caught, and the traceback would be logged, sure. But in lower-level languages, most notably C, ignoring error returns will often lead to hard-to-debug situations much later in the code, possibly having overwritten the stack along the way. Of course it depends on how well thought-out your error-handling policy is; I personally consider it easier to log a message and attempt to abort the current operation as gracefully as is realistic. I realize the error-handling code paths are much more likely to have bugs, since they are not exercised very often, but that is not a much worse situation. -- intgr 17:14, 3 October 2006 (UTC)
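(For illustration, a minimal C sketch of the error-return checking described above. The function load_config, the file name, and the messages are hypothetical examples, not taken from the article.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

/* Defensive variant: the error return of fopen() is checked immediately,
 * a message is logged, and the current operation is abandoned gracefully
 * instead of continuing with a NULL handle. */
static int load_config(const char *path)   /* hypothetical helper */
{
    FILE *f = fopen(path, "r");
    if (f == NULL) {
        fprintf(stderr, "load_config: cannot open %s: %s\n",
                path, strerror(errno));
        return -1;   /* abort only this operation, not the whole program */
    }

    char line[256];
    while (fgets(line, sizeof line, f) != NULL) {
        /* ... parse each configuration line ... */
    }

    fclose(f);
    return 0;
}

int main(void)
{
    if (load_config("example.conf") != 0) {
        /* The failure is reported here, not far away in unrelated code. */
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}

If the fopen() return value were ignored instead, the first symptom might only appear later, for example as a crash inside fgets() on the NULL stream, which is much harder to trace back to the original cause.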
Removed section
I've removed the "Other examples" section, introduced by Beno1000 on 01:24, 29 September 2008 in this edit, since, besides being rather awkwardly tacked onto the end of the article, it actually contains some factual errors. (In particular, division by zero does not normally produce buffer overruns.) In case someone wants to try making something usable out of it, here's the content of the section as it was when I removed it:
“A rather common and infamous programming error is division by zero. Normally, this will cause a buffer overrun as the program tries to calculate to infinity. Some programs will detect this abnormality and quit gracefully, while others will hang or crash outright and others still will continue to run, but abnormally. However, this can be prevented by a simple if statement which will return an error message to the user, as in the pseudocode below.

If inputted data is zero then
    display an error message saying "Cannot divide by zero"
Else
    Divide the inputted data by the number stored in the number buffer (variable)”
—Ilmari Karonen (talk) 20:37, 3 October 2008 (UTC)
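(A note for anyone reworking the removed text: in C, for instance, integer division by zero is undefined behaviour and on most platforms aborts the process with SIGFPE; it does not produce a buffer overrun. A defensive check along the lines of the quoted pseudocode might look like the following sketch, in which the function and variable names are purely illustrative.)

#include <stdio.h>

/* Defensive division: validate the divisor before dividing, instead of
 * letting the program reach undefined behaviour. */
int safe_divide(int numerator, int divisor, int *result)   /* illustrative */
{
    if (divisor == 0) {
        fprintf(stderr, "Cannot divide by zero\n");
        return -1;   /* report the error to the caller */
    }
    *result = numerator / divisor;
    return 0;
}

int main(void)
{
    int quotient;
    if (safe_divide(10, 0, &quotient) == 0) {
        printf("%d\n", quotient);
    }
    return 0;
}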
Murphy's Law
Citing Murphy's Law as justification for defensive programming is ridiculous. It is not a physical law; it is a humorous observation about life. The case can be argued much better using terms like risk. Nczempin (talk) 16:15, 21 October 2008 (UTC)