OpenVINO

From Wikipedia, the free encyclopedia

Developer(s): Intel Corporation
Initial release: May 16, 2018
Stable release: 2023.3 / January 2024[1]
Repository: github.com/openvinotoolkit/openvino
Written in: C++, Python
Operating system: Cross-platform
License: Apache License 2.0
Website: docs.openvino.ai

OpenVINO is an open-source software toolkit for optimizing and deploying deep learning models. It enables programmers to develop scalable and efficient AI solutions with relatively few lines of code. Supported models can be imported from several existing frameworks and span multiple categories, including large language models, computer vision, and generative AI.

Actively developed by Intel, it prioritizes high-performance inference on Intel hardware but also supports ARM/ARM64 processors[2] and encourages contributors to add new devices to the portfolio.

OpenVINO is cross-platform and free for use under Apache License 2.0.[3]

Overview

The high-level OpenVINO pipeline consists of two steps: first, a trained (or publicly available) model is converted into Intermediate Representation (IR) files by the Model Optimizer; second, inference is executed by the Inference Engine on a specified device.

OpenVINO ships sample applications for several task types, including classification, object detection, style transfer, and speech recognition, and inference can be tried on publicly available models. Pre-trained models cover a variety of tasks, such as:

  • classification
  • segmentation 
  • object detection 
  • face recognition 
  • human pose estimation 
  • monocular depth estimation
  • image inpainting
  • style transfer
  • action recognition
  • colorization

OpenVINO model format

OpenVINO IR[4] is the default format used to run inference. It is saved as a set of two files, *.bin and *.xml, containing weights and topology, respectively. It is obtained by converting a model from one of the supported frameworks, using the application's API or a dedicated converter.

Models of the supported formats may also be used for inference directly, without prior conversion to OpenVINO IR. Such an approach is more convenient but offers fewer optimization options and lower performance, since the conversion is performed automatically before inference.

The supported model formats are:[5]

  • PyTorch
  • TensorFlow
  • TensorFlow Lite
  • ONNX
  • PaddlePaddle

Programming language

OpenVINO is written in C++ and Python.

OS support

OpenVINO runs on the following desktop operating systems: Windows, Linux, and macOS.[citation needed]

References

  1. ^ "Release Notes for Intel Distribution of OpenVINO toolkit 2023.3". January 2024.
  2. ^ "OpenVINO Compatibility and Support". OpenVINO Documentation. 24 January 2024.
  3. ^ "License". OpenVINO repository. 16 October 2018.
  4. ^ "OpenVINO IR". www.docs.openvino.ai. 2 February 2024.
  5. ^ "OpenVINO Model Preparation". OpenVINO Documentation. 24 January 2024.