Talk:Parallel computing
This is the talk page for discussing improvements to the Parallel computing article. This is not a forum for general discussion of the article's subject.
Parallel computing is a featured article; it (or a previous version of it) has been identified as one of the best articles produced by the Wikipedia community. Even so, if you can update or improve it, please do so.
This article appeared on Wikipedia's Main Page as Today's featured article on March 18, 2009.
Current status: Featured article
WikiProject Computing: CompSci, FA-class, Top-importance
Spoken Wikipedia
This page has archives. Sections older than 60 days may be automatically archived by Lowercase sigmabot III when more than 4 sections are present.
Application Checkpointing
Should the paragraph about Application Checkpointing be in this article about parallel computing?
I think it's not a core part of parallel computing but a part of the way applications work and store their state. Jan Hoeve (talk) 19:33, 8 March 2010 (UTC)
- Fault tolerance is a major (though often overlooked) part of parallel computing, and checkpointing is a major part of fault tolerance. So yes, it definitely belongs here. Raul654 (talk) 20:07, 8 March 2010 (UTC)
I came to the page to read the article and was also confused as to why checkpointing was there. It seems very out of place, and while fault tolerance may be important to parallelism, this isn't an article about fault tolerance mechanisms. It would be more logical to mention that parallelism has a strong need for fault tolerance and then link to other pages on the topic. 66.134.120.148 (talk) 01:27, 23 April 2011 (UTC)
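To ground the discussion above: below is a minimal sketch of what application-level checkpointing looks like in practice. The file name, checkpoint interval, and state layout are illustrative assumptions for this example, not drawn from any particular checkpointing library.

```python
# Minimal sketch of application-level checkpointing (illustrative only).
# A long-running computation periodically serializes its state so that,
# after a crash, it can resume from the last checkpoint instead of from zero.
import os
import pickle

CHECKPOINT = "state.ckpt"  # hypothetical path, chosen for this example

def load_state():
    """Resume from the last checkpoint if one exists."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"next_index": 0, "partial_sum": 0}

def save_state(state):
    """Write to a temp file, then rename, so a crash mid-write
    never corrupts the previous checkpoint."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def run(n_items, checkpoint_every=1000):
    state = load_state()
    for i in range(state["next_index"], n_items):
        state["partial_sum"] += i * i      # stand-in for real work
        state["next_index"] = i + 1
        if state["next_index"] % checkpoint_every == 0:
            save_state(state)
    save_state(state)
    return state["partial_sum"]

if __name__ == "__main__":
    print(run(10_000))
```

The mechanism itself is not parallel, which is perhaps the source of the confusion here; it matters to parallel computing mainly because long-running multi-node jobs make individual failures likely.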
Article quality
What a pleasant surprise. A Wikipedia article on advanced computing that is actually in good shape. The article structure is (surprise) logical, and I see no major errors in it. But the sub-articles it points to are often low quality, e.g. Automatic parallelization, Application checkpointing, etc.
The hardware aspects are handled better here than the software issues, however. The Algorithmic methods section could do with a serious rework.
Yet a few logical errors remain even in the hardware aspects, e.g. computer clusters are treated as not massively parallel, a claim invalidated by the K computer, of course.
The programming paradigms template used here is, however, in hopeless shape, and I will remove it, given that it is a sad spot on an otherwise nice article. History2007 (talk) 22:40, 8 February 2012 (UTC)
Babbage and parallelism
"The origins of true (MIMD) parallelism go back to Federico Luigi, Conte Menabrea and his "Sketch of the Analytic Engine Invented by Charles Babbage".[45][46][47]"
Not that I can see. This single mention refers to a system that does not appear in any other work, did not appear in Babbage's designs, and appears to be nothing more than "it would be nice if..." Well of course it would be. Unless someone has a much better reference, one that suggests how this was to work, I remain highly skeptical that the passage is correct in any way. Babbage's design did have parallelism in the ALU (which is all it was) but that is not parallel computing in the modern sense of the term. Maury Markowitz (talk) 14:25, 25 February 2015 (UTC)
Dear Maury Markowitz,
Forgive me for reverting a recent edit you made to the parallel computing article.
You are right that Babbage's machine had a parallel ALU, but it did not have parallel instructions or operands, and so it does not meet the modern definition of the term "parallel computing".
However, at least one source says "The earliest reference to parallelism in computer design is thought to be in General L. F. Menabrea's publication ... It does not appear that this ability to perform parallel operation was included in the final design of Babbage's calculating engine" -- Hockney and Jesshope, p. 8. (Are they referring to the phrase "give several results at the same time" in (Augusta's translation of) Menabrea's article?)
So my understanding is that this source says the modern idea of parallel computing does go back at least to Menabrea's article, even though the idea of parallel computing was only a brief tangent in Menabrea's article, whose main topic was a machine that does not meet the modern definition of parallel computing.
Perhaps that source is wrong. Can we find any sources that disagree? The first paragraph of the WP:VERIFY policy seems to encourage presenting what the various sources say, even when it is obvious that some of them are wrong. (Like many other aspects of Wikipedia, that aspect of "WP:VERIFY" strikes me as crazy at first, but then months later I start to think it's a good idea).
The main problem I have with that sentence is that it implies that only MIMD qualifies as "true parallelism". So if systolic arrays (MISD) and the machines from MasPar and Thinking Machines Corporation (SIMD) don't qualify as true parallel computing, but they are not sequential computing (SISD) either, then what are they? Is the "MIMD" part of the sentence supported by any sources? --DavidCary (talk) 07:04, 26 February 2015 (UTC)
- The "idea" may indeed date back to Menabrea's article, in the same way that flying to the Moon dates to Lucian's 79BC story about a sun-moon war. I think we do the reader a major disservice if we suggest that Menabrea musings were any more serious than Lucian's. Typically I handle these sorts of claims in this fashion...
- "Menabrea's article on Babbage's Analytical Engine contains a passage musing about the potential performance improvements that might be achieved if the machine was able to perform calculations on several numbers at the same time. This appears to be the first historical mention of the concept of computing parallelism, although Menabrea does not explain how it might be achieved, and Babbage's designs did not include any sort of functionality along these lines."
- That statement is factually true and clearly explains the nature of the passage. Frankly, I think this sort of trivia is precisely the sort of thing we should expunge from the Wiki (otherwise we'd have mentions of Tesla in every article), but if you think it's worthwhile to add, let's do so in a form that makes it clear. Maury Markowitz (talk) 14:44, 26 February 2015 (UTC)
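On the SIMD/MIMD question raised above, here is a rough programming-model analogy, with NumPy vectorization standing in for SIMD and independent worker processes standing in for MIMD. It is a sketch of the distinction only, and makes no claim about how any of the historical machines mentioned in this thread actually worked.

```python
# Rough programming-model analogy for the SIMD/MIMD distinction
# (illustrative only; not a statement about any particular hardware).
import numpy as np
from multiprocessing import Pool

def simd_style(data):
    # SIMD flavor: one instruction stream applied to many data elements.
    return data * data

def square(x):
    return x * x

def cube(x):
    return x * x * x

def mimd_style(a, b):
    # MIMD flavor: independent instruction streams on independent data;
    # each worker can run entirely different code.
    with Pool(2) as pool:
        r1 = pool.apply_async(square, (a,))
        r2 = pool.apply_async(cube, (b,))
        return r1.get(), r2.get()

if __name__ == "__main__":
    print(simd_style(np.arange(4)))  # [0 1 4 9]
    print(mimd_style(3, 2))          # (9, 8)
```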
Redirects from Concurrent language
Can we agree that parallel computing isn't the same as concurrent computing? See https://en.wikipedia.org/wiki/Concurrent_computing#Introduction — Preceding unsigned comment added by Mister Mormon (talk • contribs) 16:47, 3 February 2016 (UTC)
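A minimal illustration of the difference, assuming CPython: the threads below are concurrent (interleaved on one core by the global interpreter lock) while the processes can run genuinely in parallel, so only the latter speed up CPU-bound work. The worker function and counts are arbitrary choices for this sketch.

```python
# Concurrency vs. parallelism in one file (CPython assumed).
# Threads here are *concurrent*: the GIL interleaves them, so CPU-bound
# work gains nothing. Processes are *parallel*: separate interpreters
# can run on multiple cores at the same time.
import time
from threading import Thread
from multiprocessing import Process

def busy(n=10_000_000):
    total = 0
    for i in range(n):
        total += i

def timed(worker_cls, label):
    workers = [worker_cls(target=busy) for _ in range(2)]
    start = time.perf_counter()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    timed(Thread, "2 threads (concurrent, not parallel)")
    timed(Process, "2 processes (parallel)")
```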
Parallel computing is not the same as asynchronous programming
I'm concerned that asynchronous programming redirects to this page. Asynchronous programming/computing is not the same as parallel computing. For example, JavaScript engines are asynchronous but single-threaded (web workers aside), meaning that tasks do not actually run in parallel, though they are still asynchronous. To my knowledge, that is how NodeJS, browsers, and Nginx all work: single-threaded, yet asynchronous, and so not parallel. — Preceding unsigned comment added by 2605:A601:64C:9B01:7083:DDD:19AF:B6B7 (talk) 03:40, 14 February 2016 (UTC)
- There is an article about Asynchrony (computer science), but asynchronous programming redirects to parallel computing instead. Jarble (talk) 18:38, 5 September 2016 (UTC)
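To illustrate the point the comments above are making, here is a sketch of the single-threaded event-loop model, using Python's asyncio as a stand-in for a JavaScript engine; the task names and delays are arbitrary. Both tasks are asynchronous, yet every line executes on the same thread, so nothing runs in parallel.

```python
# Single-threaded asynchrony: the event-loop model described above,
# sketched with asyncio. Tasks interleave by taking turns on one thread.
import asyncio
import threading

async def task(name, delay):
    for i in range(3):
        await asyncio.sleep(delay)  # yields control to the event loop
        print(f"{name} step {i} on {threading.current_thread().name}")

async def main():
    # Both coroutines are "in flight" at once, interleaved by the loop.
    await asyncio.gather(task("A", 0.10), task("B", 0.15))

if __name__ == "__main__":
    asyncio.run(main())  # every line reports the same (main) thread
```

Run it and every step reports MainThread: asynchronous, yet single-threaded, which is exactly the NodeJS/browser behavior described in the comment above.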