Neural processing unit

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Fmadd (talk | contribs) at 16:06, 16 June 2016 (creating this page speculatively in response to the AI accelerator redirect discussion. it might be overkill? It just repeats the information you get from the category, I don't know why they object to category redirects.). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
As of 2016, AI accelerators are an emerging class of microprocessor designed to accelerate artificial neural networks, machine vision, and other machine learning algorithms for robotics, the Internet of things, and other data-intensive or sensor-driven tasks. They are frequently manycore designs, mirroring the massively parallel nature of biological neural networks.
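The kind of workload described above can be illustrated with a toy sketch (not vendor code; the sizes and values are arbitrary): a fully connected neural-network layer reduces to a matrix-vector product plus a nonlinearity, exactly the data-parallel arithmetic that manycore accelerators spread across their cores.

```python
import numpy as np

# Hypothetical toy layer: 4 outputs, 8 inputs. Real networks are far
# larger, but the arithmetic pattern is the same dense matrix product.
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 8))
inputs = rng.standard_normal(8)

# ReLU(W x): every output element is an independent dot product,
# which is why the computation parallelises so well.
activations = np.maximum(weights @ inputs, 0.0)
print(activations.shape)
```

Each of the four output activations can be computed independently, so a chip with many simple cores can evaluate them all at once.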

They are distinct from GPUs, which are commonly used for the same role, in that they lack fixed-function units for graphics and generally focus on low-precision arithmetic. Some past architectures, such as the Cell microprocessor, have exhibited attributes that overlap significantly with AI accelerators: low-precision arithmetic, a dataflow architecture, and favouring throughput over latency.
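The low-precision arithmetic mentioned above can be sketched as follows (an illustrative assumption, not any particular vendor's scheme): float32 values are quantised to int8 with a per-tensor scale, the dot product runs entirely in integers, as an accelerator's int8 multiply-accumulate units would, and the result is rescaled back to floating point.

```python
import numpy as np

def quantize(x, scale):
    # Map floats onto the signed 8-bit range [-127, 127].
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

rng = np.random.default_rng(1)
w = rng.uniform(-1, 1, 16).astype(np.float32)
a = rng.uniform(-1, 1, 16).astype(np.float32)

# Per-tensor scales chosen so the largest magnitude maps to 127.
sw, sa = float(np.abs(w).max()) / 127, float(np.abs(a).max()) / 127

# Integer dot product; widen to int32 first so the accumulator
# cannot overflow, as hardware int8 MAC units do.
qdot = int(quantize(w, sw).astype(np.int32) @ quantize(a, sa).astype(np.int32))

approx = qdot * sw * sa   # dequantised low-precision result
exact = float(w @ a)      # full-precision reference
print(abs(approx - exact))
```

The printed quantisation error is small relative to the exact value, while the integer multiply-accumulates are far cheaper in silicon area and energy than their float32 equivalents.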

As of 2016, vendors are pushing their own terms (each hoping its design will dominate, as happened with the world's adoption of Nvidia's term GPU for "graphics accelerators"), and there is no consensus on the boundary between these devices; however, several examples clearly aim to fill this new space.

Examples

* Movidius Myriad 2: at its heart a manycore VLIW AI accelerator, complemented with video fixed-function units.
* SpiNNaker: a manycore ARM design specialised for simulating a large neural network.
* TrueNorth: the most unconventional example, a manycore design based on spiking neurons rather than conventional arithmetic.