Talk:Concurrent computing

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Allan McInnes (talk | contribs) at 23:24, 1 October 2008 (Types of concurrency: futures). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
WikiProject Computing (Start-class)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Start-class on Wikipedia's content assessment scale.
This article has not yet received a rating on the project's importance scale.

WikiProject Computer science (Start-class, Mid-importance)
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Start-class on Wikipedia's content assessment scale.
This article has been rated as Mid-importance on the project's importance scale.

See Talk:Concurrent programming language for earlier discussions on concurrent programming languages, as well as discussion on the merge into Concurrent computing. --Allan McInnes 07:03, 27 January 2006 (UTC)[reply]


What is concurrency control?

The concurrent computing article currently claims

"However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources require the use of concurrency control, or non-blocking algorithms."

Currently, the concurrency control article seems to be talking *only* about databases.

What about all the algorithms listed in Category:Concurrency control that involve blocking, but do not involve databases? Algorithms such as serializing tokens and mutual exclusion and monitor (synchronization)? Aren't they also perfectly valid ways of dealing with this sort of problem?

We need to either

  • Change this "concurrent computing" article to add another category of solutions: "These sorts of problems with shared resources require the use of concurrency control, ((new category here that includes monitors, etc.)), or non-blocking algorithms." Or
  • Change the concurrency control article to discuss all blocking algorithms (whether or not they involve a database). Or
  • Change this "concurrent computing" article to *remove* a category of solutions ("These sorts of problems with shared resources require the use of concurrency control"), and change the "concurrency control" article to discuss all the algorithms in Category:Concurrency control, blocking and non-blocking.

What is a good name for this "new category" (things like monitors, mutexes, serializing tokens, etc)?

Or is there a better 4th option?

--70.189.73.224 15:08, 23 September 2006 (UTC)[reply]
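To make the shared-resource problem above concrete, here is a minimal Python sketch of the withdrawal race the article describes, and how mutual exclusion (one of the blocking techniques in question) prevents it. The balance and amounts are invented for illustration; without the lock, both threads could pass the balance check before either debits the account.

```python
import threading

balance = 100                # shared account balance (illustrative value)
lock = threading.Lock()

def withdraw(amount):
    global balance
    with lock:               # mutual exclusion: check and debit atomically
        if balance >= amount:
            balance -= amount

# Two concurrent withdrawals against the same account.
threads = [threading.Thread(target=withdraw, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)               # with the lock, only one withdrawal succeeds: 0
```

Without the `with lock:` line, an unlucky interleaving lets both threads see `balance >= 100` and the account goes negative, which is exactly the over-withdrawal the article warns about.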

Types of concurrency

At least according to the CTM book, the three variants are

  • Shared-memory concurrency
  • Message-passing concurrency
  • Dataflow concurrency

(i.e. dataflow is an independent branch) Blufox (talk) 09:32, 1 October 2008 (UTC)[reply]

Dataflow variables are a kind of future. The listing of shared memory and message-passing in the article is referring to forms of explicit communication between threads (as opposed to the presumably "implicit" communication inherent in a future). The implied dichotomy between futures and other methods of communication is perhaps not correct, and it's certainly not well explained in the article. If you'd like to take a stab at rewriting it, I'd be interested to see what you come up with. --Allan McInnes (talk) 23:24, 1 October 2008 (UTC)[reply]
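For what it's worth, the three variants discussed above can be sketched side by side in Python (the names and values here are made up for illustration): shared memory via a lock-guarded variable, message passing via a queue, and dataflow via a future, whose blocking `result()` call matches the point that dataflow variables are a kind of future.

```python
import threading
import queue
from concurrent.futures import ThreadPoolExecutor

# 1. Shared-memory concurrency: threads communicate by reading and
#    writing a shared variable, guarded here by a lock.
counter = {"value": 0}
lock = threading.Lock()

def increment():
    with lock:
        counter["value"] += 1

# 2. Message-passing concurrency: threads share nothing and communicate
#    only by sending messages through a channel (here, a Queue).
mailbox = queue.Queue()

def producer():
    mailbox.put("hello")

t1 = threading.Thread(target=increment)
t2 = threading.Thread(target=producer)
t1.start(); t2.start()
t1.join(); t2.join()
msg = mailbox.get()

# 3. Dataflow/future-style concurrency: a future stands for a value being
#    computed concurrently; result() blocks until the value is bound.
with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(lambda: 21 * 2)
    answer = fut.result()

print(counter["value"], msg, answer)
```

The first two styles make communication explicit (a shared location, a message send); in the third, synchronization is implicit in reading the dataflow variable, which is the distinction the article currently leaves unexplained.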

Clearer definition

Can someone clean up the definition so someone can actually understand the difference between parallel and concurrent computing?

Parallel computing is the execution of the exact same algorithm at the same time. Concurrent computing is the execution of multiple, but not necessarily the same, pieces of software at the same time.

That's the only difference. Parallel computing IS concurrent computing. But concurrent computing is not always parallel computing. Although unlikely, they don't need to interact either.—The preceding unsigned comment was added by 142.167.85.107 (talk) 07:42, 16 February 2007 (UTC).[reply]

That's wrong. Parallelism is a property of computers, meaning that the control flow executes on multiple pathways in parallel. Concurrency, on the other hand, is a property of programs and programming languages, meaning they have constructs designed to exploit parallelism. Note that concurrent programs can run on ordinary, sequential computers as well -- simply by `simulating' parallelism, for example with multithreading -- and that ordinary sequential programs can exploit parallelism if the compiler performs appropriate transformations/optimizations. So parallelism and concurrency are closely related, but each expresses a different view of what one might call the same underlying idea.

The article currently blurs the distinction between parallelism and concurrency, which is very unfortunate, as the distinction is widely agreed upon in computer science and very useful from an educational and engineering point of view. Without this distinction, it becomes very difficult to talk concisely and precisely about things like realizing concurrency in the presence or absence of different forms of parallelism. I would be glad if the article could be modified to take this into account; however, I would leave this change to someone who is more familiar with the article.

To reinforce my point, consider the following: the term `hardware parallelism' exists, while `hardware concurrency' does not, and things like the pi-calculus are consistently referred to as designed for `concurrent' systems (the Wikipedia article itself follows this notion).
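The point that a concurrent program can run on a sequential computer by simulating parallelism can be illustrated with a small Python sketch (the scheduler and task names are invented for illustration): two tasks interleave on a single thread under a trivial round-robin scheduler, so the program is concurrent even though nothing executes in parallel.

```python
# Two concurrent tasks interleaved on a single sequential control flow,
# using generators as coroutines and a round-robin "scheduler".
def task(name, steps):
    for i in range(steps):
        yield f"{name}{i}"        # yield control back to the scheduler

def run(tasks):
    trace = []
    while tasks:
        t = tasks.pop(0)
        try:
            trace.append(next(t))  # run the task until its next yield
            tasks.append(t)        # re-enqueue it: round-robin scheduling
        except StopIteration:
            pass                   # task finished; drop it
    return trace

trace = run([task("A", 2), task("B", 2)])
print(trace)                       # ['A0', 'B0', 'A1', 'B1']
```

The interleaved trace shows the two tasks making progress "at the same time" from the program's point of view, while the underlying execution remains strictly sequential: concurrency without hardware parallelism.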