Inference Engine Developer Guide

https://software.intel.com/en-us/articles/OpenVINO-InferEngine
November 19, 2018

Deployment Challenges

Deploying deep learning networks from the training environment to embedded platforms for inference is a complex task that introduces technical challenges, such as:

  • Several deep learning frameworks are widely used in the industry, such as Caffe*, TensorFlow*, and MXNet*.
  • Training deep learning networks is typically performed in data centers or server farms, while inference often takes place on embedded platforms that are optimized for performance and power consumption.

These platforms are typically limited from the software perspective:

  • programming languages
  • third party dependencies
  • memory consumption
  • supported operating systems

and the platforms are limited from the hardware perspective:

  • restricted sets of supported data types
  • limited power envelope

Because of these limitations, it is usually not recommended, and sometimes not possible, to use the original training framework for inference. As an alternative, use dedicated inference APIs that are optimized for specific hardware platforms.

For these reasons, ensuring the accuracy of the transformed networks can be a complex task.
