diff --git a/docs/articles_en/openvino-workflow/model-preparation/convert-model-pytorch.rst b/docs/articles_en/openvino-workflow/model-preparation/convert-model-pytorch.rst
index 62cfdf05f2b11f..fc2637aba9139e 100644
--- a/docs/articles_en/openvino-workflow/model-preparation/convert-model-pytorch.rst
+++ b/docs/articles_en/openvino-workflow/model-preparation/convert-model-pytorch.rst
@@ -206,14 +206,16 @@ Here is an example of how to convert a model obtained with ``torch.export``:
 Converting a PyTorch Model from Disk
 ####################################
 
-PyTorch provides the capability to save models in two distinct formats: ``torch.jit.ScriptModule`` and ``torch.export.ExportedProgram``.
-Both formats can be saved to disk as standalone files, enabling them to be reloaded independently of the original Python code.
+PyTorch can save models in two formats: ``torch.jit.ScriptModule`` and ``torch.export.ExportedProgram``.
+Both formats may be saved to disk as standalone files and reloaded later, independently of the
+original Python code.
 
 ExportedProgram Format
 ++++++++++++++++++++++
 
-The ``ExportedProgram`` format is saved on disk using `torch.export.save() `__.
-Below is an example of how to convert an ``ExportedProgram`` from disk:
+You can save the ``ExportedProgram`` format using
+`torch.export.save() `__.
+Here is an example of how to convert it:
 
 .. tab-set::
 
@@ -236,8 +238,9 @@ Below is an example of how to convert an ``ExportedProgram`` from disk:
 ScriptModule Format
 +++++++++++++++++++
 
-`torch.jit.save() `__ serializes ``ScriptModule`` object on disk.
-To convert the serialized ``ScriptModule`` format, run ``convert_model`` function with ``example_input`` parameter as follows:
+`torch.jit.save() `__ serializes
+the ``ScriptModule`` object to disk. To convert the serialized ``ScriptModule`` format, run
+the ``convert_model`` function with the ``example_input`` parameter as follows:
 
 .. code-block:: py
    :force:
 
@@ -252,15 +255,15 @@ To convert the serialized ``ScriptModule`` format, run ``convert_model`` functio
 Exporting a PyTorch Model to ONNX Format
 ########################################
 
-An alternative method of converting PyTorch models is exporting a PyTorch model to ONNX with
-``torch.onnx.export`` first and then converting the resulting ``.onnx`` file to OpenVINO Model
-with ``openvino.convert_model``. It can be considered as a backup solution if a model cannot be
-converted directly from PyTorch to OpenVINO as described in the above chapters. Converting through
-ONNX can be more expensive in terms of code, conversion time, and allocated memory.
+An alternative method of converting a PyTorch model is to export it to ONNX first
+(with ``torch.onnx.export``) and then convert the resulting ``.onnx`` file to the OpenVINO IR
+model (with ``openvino.convert_model``). It should be considered a backup solution if a model
+cannot be converted directly, as described previously. Converting through ONNX can be more
+expensive in terms of code overhead, conversion time, and allocated memory.
 
 1. Refer to the `Exporting PyTorch models to ONNX format `__ guide to learn how to export models from PyTorch to ONNX.
-2. Follow :doc:`Convert an ONNX model ` chapter to produce OpenVINO model.
+2. Follow the :doc:`Convert an ONNX model ` guide to produce OpenVINO IR.
 
 Here is an illustration of using these two steps together:
 
diff --git a/docs/articles_en/openvino-workflow/torch-compile.rst b/docs/articles_en/openvino-workflow/torch-compile.rst
index 8c6016bfd4742f..d398704a819edc 100644
--- a/docs/articles_en/openvino-workflow/torch-compile.rst
+++ b/docs/articles_en/openvino-workflow/torch-compile.rst
@@ -5,7 +5,8 @@ PyTorch Deployment via "torch.compile"
 
 The ``torch.compile`` feature enables you to use OpenVINO for PyTorch-native applications.
 It speeds up PyTorch code by JIT-compiling it into optimized kernels.
-By default, Torch code runs in eager-mode, but with the use of ``torch.compile`` it goes through the following steps:
+By default, Torch code runs in eager mode, but with the use of ``torch.compile`` it goes
+through the following steps:
 
 1. **Graph acquisition** - the model is rewritten as blocks of subgraphs that are either:
diff --git a/docs/dev/pypi_publish/pypi-openvino-rt.md b/docs/dev/pypi_publish/pypi-openvino-rt.md
index 854984ed2a0734..642eb12d65e8f9 100644
--- a/docs/dev/pypi_publish/pypi-openvino-rt.md
+++ b/docs/dev/pypi_publish/pypi-openvino-rt.md
@@ -6,8 +6,8 @@
 Intel® Distribution of OpenVINO™ toolkit is an open-source toolkit for optimizing and deploying
 AI inference. It can be used to develop applications and solutions based on deep learning tasks,
 such as: emulation of human vision, automatic speech recognition, natural language processing,
-recommendation systems, etc. It provides high-performance and rich deployment options, from
-edge to cloud.
+recommendation systems, image generation, etc. It provides high-performance and rich deployment
+options, from edge to cloud.
 
 If you have chosen a model, you can integrate it with your application through OpenVINO™
 and deploy it on various devices. The OpenVINO™ Python package includes a set of libraries for easy
@@ -26,7 +26,7 @@ versions. The complete list of supported hardware is available on the
 
 ## Install OpenVINO™
 
-### Step 1. Set Up Python Virtual Environment
+### Step 1. Set up Python virtual environment
 
 Use a virtual environment to avoid dependency conflicts. To create a virtual environment,
 use the following commands:
 
 ```sh
 python3 -m venv openvino_env
 ```
 
 > **NOTE**: On Linux and macOS, you may need to [install pip](https://pip.pypa.io/en/stable/installation/).
 
-### Step 2. Activate the Virtual Environment
+### Step 2. Activate the virtual environment
 
 On Windows:
 ```sh
@@ -55,24 +55,23 @@ On Linux and macOS:
 source openvino_env/bin/activate
 ```
 
-### Step 3. Set Up and Update PIP to the Highest Version
+### Step 3. Set up pip and update it to the latest version
 
-Run the command below:
+Run the command:
 ```sh
 python -m pip install --upgrade pip
 ```
 
-### Step 4. Install the Package
+### Step 4. Install the package
 
-Run the command below:
-
- ```sh
- pip install openvino
- ```
+Run the command:
+```sh
+pip install openvino
+```
 
-### Step 5. Verify that the Package Is Installed
+### Step 5. Verify that the package is installed
 
-Run the command below:
+Run the command:
 ```sh
 python -c "from openvino import Core; print(Core().available_devices)"
 ```
 
@@ -88,22 +87,22 @@ If installation was successful, you will see the list of available devices.
 
 Description
 
- OpenVINO Runtime
+ OpenVINO Runtime `openvino package`
 OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common
 API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime
 API to read PyTorch, TensorFlow, TensorFlow Lite, ONNX, and PaddlePaddle models and execute
 them on preferred devices. OpenVINO Runtime uses a plugin architecture and includes the
 following plugins:
- CPU,
- GPU,
- Auto Batch,
- Auto,
- Hetero,
+ CPU,
+ GPU,
+ Auto Batch,
+ Auto,
+ Hetero,
 
- OpenVINO Model Converter (OVC)
+ OpenVINO Model Converter (OVC) `ovc`
 OpenVINO Model Converter converts models that were trained in popular frameworks to a
 format usable by OpenVINO components.
 Supported frameworks include ONNX, TensorFlow,
@@ -111,7 +110,7 @@ If installation was successful, you will see the list of available devices.
 
- Benchmark Tool
+ Benchmark Tool `benchmark_app`
 Benchmark Application** allows you to estimate deep learning inference performance on
 supported devices for synchronous and asynchronous modes.
 
@@ -122,8 +121,8 @@ If installation was successful, you will see the list of available devices.
 
 ## Troubleshooting
 
-For general troubleshooting steps and issues, see
-[Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/2024/get-started/troubleshooting-install-config.html).
+For general troubleshooting, see the
+[Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/2025/get-started/troubleshooting-install-config.html).
 The following sections also provide explanations to several error messages.
 
 ### Errors with Installing via PIP for Users in China
 
@@ -145,11 +144,11 @@
 the [C++ redistributable (.exe)](https://aka.ms/vs/17/release/vc_redist.x64.exe)
 You can also view a full download list on the
 [official support page](https://docs.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist).
-### ImportError: libpython3.8.so.1.0: cannot open shared object file: No such file or directory
+### ImportError: libpython3.10.so.1.0: cannot open shared object file: No such file or directory
 
 To resolve missing external dependency on Ubuntu*, execute the following command:
 ```sh
-sudo apt-get install libpython3.8
+sudo apt-get install libpython3.10
 ```
 
 ## Additional Resources
 
@@ -159,7 +158,7 @@ sudo apt-get install libpython3.8
 - [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks)
 - [OpenVINO Installation Selector Tool](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html)
 
-Copyright © 2018-2024 Intel Corporation
+Copyright © 2018-2025 Intel Corporation
 
 > **LEGAL NOTICE**: Your use of this software and any required dependent software (the “Software
 Package”) is subject to the terms and conditions of the
 [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.html) for the Software Package,
diff --git a/docs/sphinx_setup/_static/html/footer.html b/docs/sphinx_setup/_static/html/footer.html
index d2ed2756109938..75e046fa72fefc 100644
--- a/docs/sphinx_setup/_static/html/footer.html
+++ b/docs/sphinx_setup/_static/html/footer.html
@@ -106,7 +106,7 @@