Getting error when trying to use OpenVINOExecutionProvider #473

Open
NickM-27 opened this issue Oct 11, 2024 · 3 comments
@NickM-27

Describe the issue

We run a Python service in a Docker container; this service runs ONNX models, including support for the OpenVINO execution provider. The service is started and supervised by S6. We have found that running the model with

import onnxruntime as ort

ort.InferenceSession(
    "/config/model_cache/jinaai/jina-clip-v1/vision_model_fp16.onnx",
    providers=['OpenVINOExecutionProvider', 'CPUExecutionProvider'],
    provider_options=[{'cache_dir': '/config/model_cache/openvino/ort', 'device_type': 'GPU'}, {}],
)

results in the error:

EP Error /onnxruntime/onnxruntime/core/session/provider_bridge_ort.cc:1637 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_openvino.so with error: /usr/local/lib/python3.9/dist-packages/onnxruntime/capi/libopenvino_onnx_frontend.so.2430: undefined symbol: _ZN2ov3Any4Base9to_stringB5cxx11Ev

However, when running exactly the same code in a python3 shell (as opposed to in the main Python process), it works correctly. I am hoping to understand whether there is any significance to this error and whether it might indicate what is going wrong.
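One way to narrow this down (a sketch, not from the original thread): an undefined-symbol failure like this usually means the dynamic loader bound libopenvino_onnx_frontend.so against a different libopenvino build in one context than in the other, so comparing the loader-relevant environment and the resolved dependencies in both contexts may show the difference. The library path below is copied from the error message above.

import os
import subprocess

# Print loader-relevant environment variables; diff these between the
# interactive shell and the S6-launched process.
for var in ("LD_LIBRARY_PATH", "LD_PRELOAD", "PYTHONPATH"):
    print(var, "=", os.environ.get(var))

# ldd shows which libopenvino.so the frontend library actually binds to;
# run this in both contexts and diff the output. Path taken from the
# error message above (assumption: the file exists in both contexts).
lib = "/usr/local/lib/python3.9/dist-packages/onnxruntime/capi/libopenvino_onnx_frontend.so.2430"
print(subprocess.run(["ldd", lib], capture_output=True, text=True).stdout)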

To reproduce

import onnxruntime as ort

ort.InferenceSession(
    "/config/model_cache/jinaai/jina-clip-v1/vision_model_fp16.onnx",
    providers=['OpenVINOExecutionProvider', 'CPUExecutionProvider'],
    provider_options=[{'cache_dir': '/config/model_cache/openvino/ort', 'device_type': 'GPU'}, {}],
)

Urgency

No response

Platform

Linux

OS Version

Debian

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

onnxruntime-openvino 1.19.*

ONNX Runtime API

Python

Architecture

X64

Execution Provider

OpenVINO

Execution Provider Library Version

openvino 2024.3.*

@sfatimar

@jatinwadhwa921 please take a look

@jatinwadhwa921

Could you please share the exact environment setup you're using? I attempted to replicate the issue on my end, but everything worked correctly.

Here’s what I did:

Created a Docker image with Ubuntu 22.04 as the base.
Installed the necessary packages:
pip install openvino==2024.3.0
pip install onnxruntime-openvino==1.19.0

Tested the setup with a sample script, where I used the OpenVINO execution provider as shown below:

import onnxruntime as ort

ort.set_default_logger_severity(0)
sess = ort.InferenceSession("<model_name.onnx>", providers=ort.get_available_providers())
sess.set_providers(['OpenVINOExecutionProvider'], [{'device_type': '<device>'}])

The model executed without any issues inside the Docker container.

@NickM-27
Author

NickM-27 commented Nov 4, 2024

This is being run in https://github.com/blakeblackshear/frigate, but reproducing that would require a lot of setup.

The base image here is Debian 11, with the Intel drivers being pulled from Intel's apt repository.

As I said, if I get a shell in the Docker container and run the above Python code, it works correctly. However, the same code fails with the posted error when run in the Python process that was started via S6.

To be clear, I was mostly hoping the error message would be a signal of something that is missing (a permission, a resource, etc.) and would perhaps point to a change in the way S6 starts the process that would fix it. For now the workaround is to hand ONNX models directly to OpenVINO and skip onnxruntime in this case.
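For reference, a minimal sketch of that workaround using the OpenVINO Python API (an assumption on my part that this matches the setup, based on openvino 2024.x; the model path is the one from the report):

import openvino as ov

core = ov.Core()
# read_model accepts ONNX files directly, so onnxruntime is bypassed entirely.
model = core.read_model("/config/model_cache/jinaai/jina-clip-v1/vision_model_fp16.onnx")
# Compile for the same GPU target used with the execution provider above.
compiled = core.compile_model(model, device_name="GPU")
# compiled(inputs) then runs inference synchronously on the compiled model.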
