
Wikipedia:Featured article candidates/Parallel computing

From Wikipedia, the free encyclopedia

previous FAC

This is one of mine. I spent some time last year improving it, and I think it's up to FA status. I previously nominated it, and I believe all of the substantive suggestions have been addressed. Raul654 (talk) 07:40, 18 April 2008 (UTC)

  • Comment, a very well written (and accessible) introduction to parallel computing. Sections such as the one covering GPGPUs seem a bit too choppy due to the one-liner paragraphs, and could do with some editing to ensure better flow. Specialized hardware like systolic arrays seems to have been left out. Also, the hardware described seems to focus a bit too much on von Neumann machines; tackling parallelism from an anti-machine point of view seems to be absent. I am not exactly sure if they need to be in an introductory article, but it does seem serious enough to hold back a support (I am open to discussion, though). But overall, it is a very good article. Now we need one that tackles parallel/concurrent programming (from a programming language/compiler point of view). --soum talk 09:17, 18 April 2008 (UTC)
    • The article focuses on von Neumann machines because more-or-less all modern computers are von Neumann machines. Anti-machines (that is, FPGAs and the like) are a relatively new area of parallel computing, covered in the "Reconfigurable computing with field-programmable gate arrays" section. This is appropriate weight, given that they are rather small players in the field. Systolic arrays are mentioned briefly in the Flynn's taxonomy section, but I don't go into detail because nobody ever figured out what they were useful for. Raul654 (talk) 17:54, 18 April 2008 (UTC)
      • I would have to agree with Raul here that the discussion of other computing architectures is given sufficient space. I had a similar objection at the first FAC, and I made some small additions to try to rephrase some parts from von Neumann-specific terms into a more generic dependency discussion - perhaps that could be done a bit more in some places (the "Instruction-level parallelism" section comes to mind), but in general I think the von Neumann focus is OK. henrik (talk) 21:11, 21 April 2008 (UTC)

<s>Oppose</s> Support: much improved! TONY (talk) 13:44, 26 April 2008 (UTC) <s>The writing needs a thorough cleanse. This article could scrub up very nicely, though. Here are random examples from the top that demonstrate the density of issues throughout.</s>

Comments

All other links checked out okay. Ealdgyth - Talk 14:21, 18 April 2008 (UTC)

Oppose. I agree with Tony's comment about the article needing a thorough seeing to. There are way too many MoS glitches and prose problems as it stands. The article is also severely under-referenced, with too many sections being completely unreferenced.

  • "Traditionally, computer software has been written for serial computation". Traditionally? Which tradition is that? The article only discusses parallelism in digital computing; it ought to be made clear that analogue or quantum computing, for instance, are not covered.
      • Analog computing may or may not be parallel (depending on the design). Regardless, it went the way of the dinosaur over 40 years ago.
      • Quantum computing and DNA computing both might be parallel (at least theoretically) but, much like cold fusion, both of them are in their proto-infancy. Neither of them has ever produced a single useful result. (The most complicated quantum computing program I've heard of factors numbers up to 10.) By "traditionally", I'm referring to the fact that 99% (or more) of the source code out there is sequential. Raul654 (talk) 02:32, 19 April 2008 (UTC)
  • "Only one instruction may execute at a given time – after that instruction is finished, the next is executed." Seems to ignore pipelining, in which a processor will work on several instructions in parallel, although admittedly only executing one at a time.
  • "The total runtime of a program is proportional to the total number of instructions multiplied by the average time per instruction." Proportional to? Isn't it equal to? Total runtime? Total number of instructions?
  • "... advancements in computer architecture were done by doubling computer word size—the amount of information the processor can execute per cycle." Advancements? Were done? Increased word size increases the extent of available memory; it doesn't per se increase the amount of information that can be processed per cycle. What does "information" mean in this context anyway?
    • "advancements in computer architecture were done" - I suppose this could be rephrased. "Advancements in computer architecture were driven by doubling..."
    • "Increased word size increases the extent of available memory" - true. Just one quick caveat here - increases in word size do not increase the amount of memory; they increase the amount of *addressable* memory. Raul654 (talk) 06:32, 19 April 2008 (UTC)
    • "it doesn't per se increase the amount of information that can be processed per cycle" - very, very, very false. An 8-bit processor processes data in chunks (called "words" - see Word (computing)) of 8 bits; a 32-bit processor processes data in chunks of 32 bits. It can do 4 times as much computation in a cycle as an 8-bit processor can. Raul654 (talk) 05:07, 19 April 2008 (UTC)
      • Please don't try to patronise me. The number of bits assigned to the address of the data to be worked on does not of itself increase the amount of data that can be worked on per cycle. --Malleus Fatuorum (talk) 01:04, 24 April 2008 (UTC)
        • I'm not patronizing you - you're completely and totally wrong. The word size is not just the number of bits used to address memory; it's also the size of the registers inside the processor. If you do "add R1, R2, R3" on an 8-bit processor, it adds two 8-bit registers and stores the value into a third 8-bit register; if you do it on a 32-bit processor, it adds two 32-bit registers and stores the result into a 32-bit register -- 4 times as much work in a single cycle (see the sketch below this thread). Raul654 (talk) 01:08, 24 April 2008 (UTC)
          • I don't believe that you fully understand what you're talking about. My view remains that this article should not be featured, for both prose and technical reasons. Others may decide for themselves, but the article would have to improve dramatically before I'd consider supporting it. --Malleus Fatuorum (talk) 01:35, 24 April 2008 (UTC)
            • The article is correct as it stands. The "error" you have described is a result of your misconception that a word pertains only to the size of the address space. It does not. Raul654 (talk) 02:18, 24 April 2008 (UTC)

These are just some examples of what needs to be addressed in this article; there are very many more. --Malleus Fatuorum (talk) 01:56, 19 April 2008 (UTC)
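
To spell out the word-size point from the thread above: below is a minimal sketch (an invented illustration, not code from the article) of the byte-at-a-time arithmetic an 8-bit processor has to perform to add two 32-bit numbers, work that a 32-bit processor completes with a single add instruction. The function name and test values are made up for the example.

def add32_on_8bit(a, b):
    # Simulate an 8-bit ALU adding two 32-bit values: four partial
    # 8-bit additions, with the carry propagated between the bytes.
    result, carry = 0, 0
    for i in range(4):
        partial = ((a >> (8 * i)) & 0xFF) + ((b >> (8 * i)) & 0xFF) + carry
        carry = partial >> 8  # carry into the next byte
        result |= (partial & 0xFF) << (8 * i)
    return result

# A 32-bit machine does all of this as one add; the 8-bit machine needs
# four dependent partial adds to reach the same 32-bit result.
assert add32_on_8bit(0x12345678, 0x0ABCDEF0) == (0x12345678 + 0x0ABCDEF0) & 0xFFFFFFFF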

  • Oppose: This important article needs further improvement before it is ready for FA status.
Questions: What style guidelines is the proposer using for notes and referencing? We see pg and pgs as abbreviations for page and pages (with what warrant, from WP guidelines or elsewhere?). We see et al. sometimes in italics, sometimes lacking its full stop. We see quoted material surrounded by quote marks but also italicised – or just italicised. We see an opening double quote mark matched with a closing single quote mark. Some citations end with a full stop, while similar ones end without. Spaces are apparently inserted or omitted as if they were optional decorations, as in July 1998, 19(2):pgs 97–113(17). (What does the (17) mean, by the way?) I am surprised that I find no specific comment on formatting of references, above. I will oppose until some effort is made to fix it. I might help to fix it, once I see that the problem is taken seriously and worked on.
¡ɐɔıʇǝoNoetica!T23:59, 21 April 2008 (UTC)
Noetica, can you pls point us to the guideline that deals with pgs vs pp. and the correct usage of et al? As I recall, when we last fought out et al at MOS, there was no conclusion. SandyGeorgia (Talk) 03:17, 22 April 2008 (UTC)
My reading of the guidelines for page numbers (WP:CITATION and related MOSpages) turns up only inconsistencies and a failure to address the issue. There are examples with no abbreviation at all ("93–107"), with "p." or "pp." and a space ("pp. 93–107"), with "p." or "pp." but no space ("pp.93–107"). Editors also use "p" or "pp" with or without a space ("pp 93–107"; "pp93–107"), or "page" or "pages" ("pages 93–107"), or more rarely (and without support from style guides) "pg" or "pgs", with or without a space or a full stop ("pgs.93–107"; "pgs 93–107"; etc.).
Myself I recommend only the first two of these styles. They are the only ones that major style guides advocate: ("93–107"; "pp. 93–107", preferably done with a hard space: "pp. 93–107").
In particular, here I have asked why "pg" has been used. No reputable style guide that I know of gives it express support, and most well-edited articles do not use it.
But what loses me immediately is editors' inconsistency. In the present article we see this with "et al." (which almost all authorities want unitalicised and with a full stop). If an article comes before us here with three versions of the thing, I cannot think that the proposer is serious. In the present case, I have already shown that I am ready to help, once I can see that the proposer is paying attention.
¡ɐɔıʇǝoNoetica!T04:13, 22 April 2008 (UTC)
Well, I actually went in and did the cleanup you requested myself for Raul, since I have never encountered this kind of oppose before at FAC, and there are no guidelines. I did all I could; if you still see something, it should be minor enough that you might address it yourself. When there's no guideline, it's hard to know how to fix something, and even after all our discussion at MOS, I don't know how to use et al, because we came to no conclusion in those acrimonious MoS discussions. It's hard to ask an editor to fix something that has no Wiki guideline. SandyGeorgia (Talk) 04:26, 22 April 2008 (UTC)
Thanks! Raul654 (talk) 21:32, 22 April 2008 (UTC)
Tony is satisfied on prose; User:Epbr123 reviewed MoS, and I can't detect any other MoS deficiencies. SandyGeorgia (Talk) 16:51, 28 April 2008 (UTC)
  • Comment: I found this article excessively detailed and technical, and I have a bachelor's in Computer Science from MIT. For example, my eyes glazed over at the explanation of the power consumption equation; I don't see why it is necessary to include this instead of simply noting that increasing frequency increases power consumption.
I don't know if this is a consideration for FA's, so I will not vote. --Slashem (talk) 18:41, 22 April 2008 (UTC)
    • It should surprise no one that the article is technical - it's a highly technical topic. The question is accessibility, and several reviewers have explicitly noted that it is accessible to laypeople (user:soum's comment above; user:Awadewit's comment during the previous FAC). Raul654 (talk) 21:28, 22 April 2008 (UTC)
      • Perhaps you could answer my specific example. When you are done, I have more. --Slashem (talk) 21:32, 22 April 2008 (UTC)
        • I give the equation and the conclusion drawn from it because it is more pedagogically sound than simply giving the conclusion. (It also happens to be a rather important equation for computer engineers - one of the few really important equations in computer engineering, actually.) Raul654 (talk) 21:34, 22 April 2008 (UTC)
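
For readers following the exchange, the equation in question - assuming it is the standard CMOS dynamic power formula usually given in this context - is

P = C \times V^2 \times F

where P is power, C is the capacitance switched per clock cycle, V is voltage, and F is the processor frequency. Increasing the frequency therefore increases power consumption linearly, and since higher frequencies generally also demand higher voltage, which enters as a square, the combined effect is much worse than linear.
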
"Pedagogically sound?" I don't use Wikipedia as a textbook, not to mention this is hypertext. Your audience is not composed of computer engineers. --Slashem (talk) 21:40, 22 April 2008 (UTC)[reply]
Yes, it is pedagogically sound, meaning 'this is a good way of making the information comprehensible'. As for the audience, I'm aware they are not computer engineers. But as I have already said, several laypeople (like Awadewit, an english-lit major) have already said they found it accessible. Thus, I conclude that I am doing it correctly. Raul654 (talk) 21:44, 22 April 2008 (UTC)[reply]
Fine, you don't value my opinion, I won't give it to you again. --Slashem (talk) 21:47, 22 April 2008 (UTC)[reply]
It's not that I don't value your opinion. However, the one specific suggestion you have given - that I should remove the equation (or at least that was the clear implication of your comments) - would in my opinion not be an improvement. Do you have any more specific suggestions? Raul654 (talk) 21:58, 22 April 2008 (UTC)[reply]
Apparently we have a philosophical disagreement, which may perhaps be best explored on the talk page. --Slashem (talk) 22:05, 22 April 2008 (UTC)[reply]
See Relational database for an example of an article which describes a technical topic in an accessible way while leaving more detailed and technical aspects to other articles which can be linked to. --Slashem (talk) 21:40, 22 April 2008 (UTC)[reply]

BTW if you want me to shut up, you can just admit it's not a consideration for FA's, since this is the FAC page. You don't have to try to argue about the facts, the way Bush tried to deny Global Warming. --Slashem (talk) 21:43, 22 April 2008 (UTC)

  • I think this is a good article on a topic that needed coverage here. I've merged some or all of the choppy parastubs and gone through it, leaving a few inline queries. Happy to change to support when they're dealt with. TONY (talk) 06:43, 23 April 2008 (UTC)

Replying to Tony's first inline query:

No program can run more quickly than the longest chain of dependent calculations (known as the [[critical path]]), <!--fix the next clause: doesn't make sense-->since the fact that the dependent calculations force an execution order. <!--And the next sentence ...-->Fortunately, most algorithms do not consist of a long chain of dependent calculations and little else; opportunities usually exist to execute independent calculations in parallel.

Let's say you have something that looks like this:

A = something
B = f(A)
C = f(B)
D = f(C)
E = f(D)

You have to know A before you calculate B, calculate B before C, calculate C before D, etc. That is what the first sentence means. The second sentence says that in real life, this is not a common situation. It's more common to see something like this:

A = something
B1 = f(A)
B2 = f(A)
C1 = f(B1)
C2 = f(B1+A)
C3 = f(B2)
C4 = f(B1+B2)
D1 = f(C1)
D2 = f(C2+B2)
D3 = f(C3+B2)
D4 = f(C4+B2)
E1 = f(D4)

The first example consisted of one chain of dependent operations with nothing else to do - there was no opportunity for parallelism. The second example also has a critical path (the longest chain of operations that must be executed one after another - here, I think, A -> B1/B2 -> C4 -> D4 -> E1), but unlike the first example it has lots of other things to do as well, so it will parallelize much better. Any suggestions for how to rephrase the article to make this clearer? Raul654 (talk) 07:00, 23 April 2008 (UTC)
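
To make the point concrete in runnable form, here is a minimal sketch (the helper f and the thread pool are invented for the illustration; with CPython's global interpreter lock a process pool would be needed for real CPU-bound speedup, but the dependency structure is what matters here). The B and C calculations of the second example depend only on values that are already computed, so they can be submitted concurrently, whereas in the first example every call has to wait for the one before it:

from concurrent.futures import ThreadPoolExecutor

def f(x):
    return x + 1  # stand-in for some expensive calculation

A = 1
with ThreadPoolExecutor() as pool:
    # B1 and B2 depend only on A, so both can run at the same time.
    fb1, fb2 = pool.submit(f, A), pool.submit(f, A)
    B1, B2 = fb1.result(), fb2.result()
    # All four C calculations depend only on values already in hand,
    # so they too can run in parallel with one another.
    fcs = [pool.submit(f, B1), pool.submit(f, B1 + A),
           pool.submit(f, B2), pool.submit(f, B1 + B2)]
    C1, C2, C3, C4 = [fc.result() for fc in fcs]
    # The D row parallelizes the same way; E1 = f(D4) then finishes the
    # critical path A -> B -> C4 -> D4 -> E1, the only part of the work
    # that cannot be overlapped with anything else.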

I think the second clause raised by Tony merely has a piece of misplaced text that renders it confusing. It could be:
  • No program can run more quickly than the longest chain of dependent calculations (known as the critical path), since dependent calculations force an execution order. However, most algorithms do not consist of a long chain of dependent calculations; opportunities usually exist to execute independent calculations in parallel.
or
  • No program can run more quickly than the longest chain of dependent calculations (known as the critical path), since calculations that depend upon prior calculations in the chain must be executed in order. However, most algorithms do not consist of only a long chain of dependent calculations; there are usually opportunities to execute independent calculations in parallel.
Sorry, I'm not a good word nerd; Tony might improve it. SandyGeorgia (Talk) 07:12, 23 April 2008 (UTC)
I used Sandy's second paragraph from above. I also went over all the things Tony commented on - most were good; I tweaked one or two of them. I think I've addressed all of the above objections now. Raul654 (talk) 00:42, 24 April 2008 (UTC)

For the record, I do not believe there are any remaining unaddressed objections. Raul654 (talk) 17:32, 26 April 2008 (UTC)

  • Support. Fulfills the FA criteria. Some comments, though: (a) "Only recently, with the advent of x86-64 architectures..." - for a topic that dates quickly (for those of us not in the know), could a more quantitative date or date range be used? (b) I don't see anything from the "Hardware" section in the lead. (c) "Automatic parallelization of a sequential program by a compiler is the "holy grail" of parallel computing." - It may be an obvious/shared feeling in the computing world, but here I think it best to cite/attribute such grand statements. maclean 19:18, 27 April 2008 (UTC)