
oneAPI (compute acceleration)


oneAPI is a cross-industry initiative for an open, standards-based unified programming model that creates a common developer experience across compute accelerator architectures. Its objective is to deliver an efficient, performant programming model that eliminates the need for developers to maintain separate code bases, multiple programming languages, and different tools and workflows for each architecture.

The oneAPI Specification

The oneAPI specification[1] extends existing developer programming models to enable a diverse set of hardware through a data-parallel language, a set of library APIs, and a low-level hardware interface to support cross-architecture programming. It builds upon industry standards and provides an open, cross-platform developer stack.

The Language – Data Parallel C++

DPC++[2] is an open, cross-architecture language built upon the ISO C++ and Khronos SYCL standards. DPC++ extends these standards with features such as sub-group operations and unified shared memory to support offload across a broad range of computing architectures and processors, including CPUs and accelerators. These extensions are contributed back to the respective standards bodies.
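
For illustration, the following is a minimal sketch of a DPC++/SYCL kernel that offloads a data-parallel loop using unified shared memory. It assumes the SYCL 2020 header and namespace conventions (<sycl/sycl.hpp> and the sycl:: namespace); early DPC++ releases used <CL/sycl.hpp> instead.

// Minimal DPC++/SYCL sketch: offload a loop using unified shared memory (USM).
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    sycl::queue q;                    // queue on the default device
    constexpr size_t n = 1024;

    // USM allocation visible to both host and device.
    int *data = sycl::malloc_shared<int>(n, q);

    // Offload a data-parallel loop to the selected device.
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        size_t idx = i[0];
        data[idx] = static_cast<int>(idx) * 2;
    }).wait();

    std::printf("data[10] = %d\n", data[10]); // expect 20
    sycl::free(data, q);
    return 0;
}

Because malloc_shared returns memory accessible from both host and device, the host can read the results directly without explicit buffer or copy calls.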

The oneAPI Libraries

The set of APIs spans several domains that benefit from acceleration, including an interface for deep learning; general libraries for linear algebra math, video, and media processing; and others.

Library Name | Short Name | Description
oneAPI DPC++ Library | oneDPL | Algorithms and functions to speed DPC++ kernel programming
oneAPI Math Kernel Library | oneMKL | Math routines including matrix algebra, FFT, and vector math
oneAPI Data Analytics Library | oneDAL | Machine learning and data analytics functions
oneAPI Deep Neural Network Library | oneDNN | Neural network functions for deep learning training and inference
oneAPI Collective Communications Library | oneCCL | Communication patterns for distributed deep learning
oneAPI Threading Building Blocks | oneTBB | Threading and memory management template library
oneAPI Video Processing Library | oneVPL | Real-time video encode, decode, transcode, and processing
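
As an illustration of one of the libraries listed above, the following minimal sketch uses oneTBB's parallel_for to split a loop across worker threads. It assumes the classic <tbb/tbb.h> header, which oneTBB continues to provide (current releases also ship an <oneapi/tbb.h> alias).

// Minimal oneTBB sketch: scale a vector in parallel with tbb::parallel_for.
#include <tbb/tbb.h>
#include <vector>
#include <cstdio>

int main() {
    std::vector<float> v(1'000'000, 1.0f);

    // blocked_range splits the index space into chunks that TBB's
    // scheduler distributes across worker threads.
    tbb::parallel_for(tbb::blocked_range<size_t>(0, v.size()),
                      [&](const tbb::blocked_range<size_t> &r) {
                          for (size_t i = r.begin(); i != r.end(); ++i)
                              v[i] *= 2.0f;
                      });

    std::printf("v[0] = %f\n", v[0]); // expect 2.0
    return 0;
}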

The Hardware Abstraction Layer

oneAPI Level Zero, the low-level hardware interface, defines a set of capabilities and services that allow languages, libraries, and runtimes to interface with accelerator hardware, spanning solutions from consumer devices to deep/machine learning and HPC-class systems.
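
A minimal sketch of programming against this interface is shown below: enumerating drivers and devices through the Level Zero C API, compiled here as C++. It assumes the <level_zero/ze_api.h> header of a released Level Zero implementation; early drafts of the specification may use different entry points.

// Minimal Level Zero sketch: enumerate drivers and count their devices.
#include <level_zero/ze_api.h>
#include <cstdio>
#include <vector>

int main() {
    // Initialize the driver layer (flags = 0 selects the default driver types).
    if (zeInit(0) != ZE_RESULT_SUCCESS) {
        std::printf("Level Zero initialization failed\n");
        return 1;
    }

    // First call queries the driver count, second call fills the handles.
    uint32_t driverCount = 0;
    zeDriverGet(&driverCount, nullptr);
    std::vector<ze_driver_handle_t> drivers(driverCount);
    zeDriverGet(&driverCount, drivers.data());

    // Ask each driver how many devices it exposes.
    for (ze_driver_handle_t driver : drivers) {
        uint32_t deviceCount = 0;
        zeDeviceGet(driver, &deviceCount, nullptr);
        std::printf("driver exposes %u device(s)\n", deviceCount);
    }
    return 0;
}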

Intel Reference Implementation

Intel has released a oneAPI beta product[3] that serves as a reference implementation of the specification and adds migration, analysis, and debugging tools.

  1. ^ "The oneAPI Specification". oneAPI.{{cite web}}: CS1 maint: url-status (link)
  2. ^ "Data Parallel C++: Mastering DPC++ for Programming of Heterogeneous Systems Using C++ and SYCL". Apress.{{cite web}}: CS1 maint: url-status (link)
  3. ^ "Intel oneAPI Product". Intel oneAPI Toolkits.{{cite web}}: CS1 maint: url-status (link)