Neural processing unit
As of 2016, AI accelerators are an emerging class of microprocessor designed to accelerate artificial neural networks, machine vision and other machine learning algorithms for robotics, the Internet of things and other data-intensive or sensor-driven tasks. They are frequently manycore designs, mirroring the massively parallel nature of biological neural networks.
They are distinct from GPUs, which are commonly used for the same role, in that they lack fixed-function units for graphics and generally focus on lower-precision arithmetic. Other past architectures, such as the Cell microprocessor, have exhibited attributes that overlap significantly with AI accelerators: low-precision arithmetic, a dataflow architecture, and an emphasis on throughput over latency.
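To illustrate the lower-precision arithmetic mentioned above, the following sketch (in Python with NumPy; the symmetric scale factors and layer sizes are illustrative assumptions, not taken from any particular accelerator) quantizes a 32-bit floating-point dot product down to 8-bit integers, the precision level favoured by several of the designs listed below.

```python
import numpy as np

def quantize_int8(x):
    """Map float32 values to int8 with a per-tensor scale factor (illustrative)."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Illustrative weights and activations for one neural-network layer.
rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 128)).astype(np.float32)
activations = rng.standard_normal(128).astype(np.float32)

qw, w_scale = quantize_int8(weights)
qa, a_scale = quantize_int8(activations)

# Accumulate the int8 products in 32-bit integers, as accelerator matrix
# units typically do, then rescale the result back to float32.
acc = qw.astype(np.int32) @ qa.astype(np.int32)
approx = acc.astype(np.float32) * w_scale * a_scale

exact = weights @ activations
print(np.max(np.abs(approx - exact)))  # small quantization error
```

The appeal for hardware is that an 8-bit multiply-accumulate unit needs far less silicon area and energy than a 32-bit floating-point one, which is why such designs trade precision for throughput.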
As of 2016, vendors are pushing their own terms (in the hope that their designs will dominate, as happened with the world's adoption of Nvidia's term GPU for "graphics accelerators"), and there is no consensus on the boundary between these devices; however, several examples clearly aim to fill this new space.
Examples
- Movidius Myriad 2 - at its heart a manycore VLIW AI accelerator, complemented by video fixed-function units.
- Tensor processing unit - presented as an accelerator for Google's TensorFlow framework, which is extensively used for convolutional neural networks. It focuses on a high volume of 8-bit precision arithmetic.
- TrueNorth - the most unconventional example, a manycore design based on spiking neurons rather than conventional arithmetic (a simple spiking-neuron sketch follows this list).
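As a rough illustration of the spiking approach mentioned in the TrueNorth entry, the sketch below simulates a generic leaky integrate-and-fire neuron (not TrueNorth's actual digital neuron model; threshold, leak and input statistics are illustrative assumptions): instead of computing multiply-accumulate sums every cycle, the neuron accumulates weighted input events over time and emits a binary spike when a threshold is crossed.

```python
import numpy as np

def simulate_lif(input_spikes, weights, threshold=1.0, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron over discrete time steps.

    input_spikes: (T, N) array of 0/1 events from N presynaptic neurons.
    weights:      (N,) synaptic weights.
    Returns the list of time steps at which the neuron fired.
    """
    potential = 0.0
    fired_at = []
    for t, spikes in enumerate(input_spikes):
        # Integrate weighted input events, with the membrane potential leaking over time.
        potential = leak * potential + np.dot(spikes, weights)
        if potential >= threshold:
            fired_at.append(t)
            potential = 0.0  # reset after emitting a spike
    return fired_at

rng = np.random.default_rng(1)
spikes = (rng.random((100, 32)) < 0.1).astype(np.float32)  # sparse random input events
weights = rng.random(32).astype(np.float32) * 0.2
print(simulate_lif(spikes, weights))
```

Because activity is event-driven and communication consists of single-bit spikes, such designs can be very sparse and power-efficient, which is the main motivation behind neuromorphic accelerators of this kind.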