
Dataflow


Software architecture

Dataflow is a software paradigm based on the idea of disconnecting computational actors into stages (pipelines) that can execute concurrently. Dataflow can also be called stream processing or reactive programming.[1]
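As an illustration of the pipeline idea, the following sketch (in Python, with stage names invented for this example and not taken from the article) shows computational actors separated into stages that run concurrently and exchange data only through queues.

    # Minimal sketch of a dataflow pipeline: each stage runs in its own
    # thread and communicates with the next stage only through a queue.
    import threading
    import queue

    def producer(out_q):
        for x in range(5):
            out_q.put(x)          # emit data tokens downstream
        out_q.put(None)           # sentinel: end of stream

    def square_stage(in_q, out_q):
        while (x := in_q.get()) is not None:
            out_q.put(x * x)      # transform each token as it arrives
        out_q.put(None)

    def consumer(in_q):
        while (x := in_q.get()) is not None:
            print(x)

    q1, q2 = queue.Queue(), queue.Queue()
    threads = [
        threading.Thread(target=producer, args=(q1,)),
        threading.Thread(target=square_stage, args=(q1, q2)),
        threading.Thread(target=consumer, args=(q2,)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Because each stage only reads from its input queue and writes to its output queue, the stages have no shared state and can overlap their execution, which is the essence of the dataflow/stream-processing style.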

There have been multiple data-flow/stream processing languages of various forms (see Stream processing). Data-flow hardware (see Dataflow architecture) is an alternative to the classic Von Neumann architecture. The most obvious example of data-flow programming is the subset known as reactive programming with spreadsheets. As a user enters new values, they are instantly transmitted to the next logical "actor" or formula for calculation.
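The spreadsheet behaviour described above can be sketched in software as follows (Python; the Cell and FormulaCell classes are hypothetical and used only for illustration). When an input cell receives a new value, every formula cell that depends on it is recomputed automatically.

    # Minimal sketch of spreadsheet-style reactive dataflow:
    # changing a cell pushes the new value to its dependent formulas.
    class Cell:
        def __init__(self, value=0):
            self.value = value
            self.observers = []       # formula cells that read this cell

        def set(self, value):
            self.value = value
            for formula in self.observers:
                formula.recompute()   # propagate the change downstream

    class FormulaCell(Cell):
        def __init__(self, func, *inputs):
            super().__init__()
            self.func, self.inputs = func, inputs
            for cell in inputs:
                cell.observers.append(self)
            self.recompute()

        def recompute(self):
            self.set(self.func(*(c.value for c in self.inputs)))

    a, b = Cell(1), Cell(2)
    total = FormulaCell(lambda x, y: x + y, a, b)   # like "=A1+B1"
    a.set(10)                                       # total.value updates to 12
    print(total.value)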

Distributed data flows have also been proposed as a programming abstraction that captures the dynamics of distributed multi-party protocols. The data-centric perspective characteristic of dataflow programming promotes a high-level, functional style of specification and simplifies formal reasoning about system components.

Hardware architecture

Hardware architectures for dataflow were a major topic in computer architecture research in the 1970s and early 1980s. Jack Dennis of MIT pioneered the field of static dataflow architectures. Designs that use conventional memory addresses as data dependency tags are called static dataflow machines.

Designs that use content-addressable memory to match data dependency tags were called dynamic dataflow machines by Arvind. These machines use tags in memory to expose parallelism as data flows between the components of the computer.
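The tag-matching idea can be sketched in software as follows (Python; the matching store and operand names are hypothetical and do not describe any particular machine). An instruction fires only once operand tokens carrying the same tag have all arrived, so independent activations with different tags can proceed in parallel.

    # Minimal sketch of tag matching in a dynamic dataflow machine:
    # tokens accumulate in a matching store keyed by tag, and an
    # instruction fires when all of its operands for that tag are present.
    from collections import defaultdict

    matching_store = defaultdict(dict)   # tag -> {operand slot: value}

    def arrive(tag, slot, value, arity, fire):
        """Record a token; fire the instruction once all operands for this tag arrive."""
        matching_store[tag][slot] = value
        if len(matching_store[tag]) == arity:
            operands = matching_store.pop(tag)
            fire(operands)

    def add_instruction(operands):
        print("fired:", operands["left"] + operands["right"])

    # Tokens for two independent activations (tags 0 and 1) may arrive interleaved;
    # each activation fires on its own, which is where the parallelism comes from.
    arrive(tag=0, slot="left",  value=3, arity=2, fire=add_instruction)
    arrive(tag=1, slot="left",  value=7, arity=2, fire=add_instruction)
    arrive(tag=0, slot="right", value=4, arity=2, fire=add_instruction)   # tag 0 fires: 7
    arrive(tag=1, slot="right", value=8, arity=2, fire=add_instruction)   # tag 1 fires: 15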

References

  1. ^ A Short Intro to Stream Processing
  2. ^ Carbone, P., Katsifodimos, A., Ewen, S., Markl, V., Haridi, S., et al. (2015). Apache Flink: Stream and batch processing in a single engine. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering, 36(4).