[DOCS] update pypi to 2025 and text adjustment for two PRs (#28249)
Co-authored-by: Andrzej Kopytko <[email protected]>
kblaszczak-intel and akopytko authored Jan 3, 2025
1 parent 4ebb6ed commit 0fb9e72
Showing 4 changed files with 44 additions and 41 deletions.
@@ -206,14 +206,16 @@ Here is an example of how to convert a model obtained with ``torch.export``:
Converting a PyTorch Model from Disk
####################################

-PyTorch provides the capability to save models in two distinct formats: ``torch.jit.ScriptModule`` and ``torch.export.ExportedProgram``.
-Both formats can be saved to disk as standalone files, enabling them to be reloaded independently of the original Python code.
+PyTorch can save models in two formats: ``torch.jit.ScriptModule`` and ``torch.export.ExportedProgram``.
+Both formats may be saved to disk as standalone files and reloaded later, independently of the
+original Python code.

ExportedProgram Format
++++++++++++++++++++++

-The ``ExportedProgram`` format is saved on disk using `torch.export.save() <https://pytorch.org/docs/stable/export.html#serialization>`__.
-Below is an example of how to convert an ``ExportedProgram`` from disk:
+You can save the ``ExportedProgram`` format using
+`torch.export.save() <https://pytorch.org/docs/stable/export.html#serialization>`__.
+Here is an example of how to convert it:

.. tab-set::
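
For instance, a program saved with ``torch.export.save()`` can be reloaded and converted
as follows (a minimal sketch; the file name and device are illustrative):

.. code-block:: py

   import torch
   import openvino as ov

   # Reload the ExportedProgram serialized earlier with torch.export.save()
   exported_program = torch.export.load("exported_program.pt2")

   # Convert it to an OpenVINO model and compile for a target device
   ov_model = ov.convert_model(exported_program)
   compiled_model = ov.compile_model(ov_model, "CPU")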

@@ -236,8 +238,9 @@ Below is an example of how to convert an ``ExportedProgram`` from disk:
ScriptModule Format
+++++++++++++++++++

-`torch.jit.save() <https://pytorch.org/docs/stable/generated/torch.jit.save.html>`__ serializes ``ScriptModule`` object on disk.
-To convert the serialized ``ScriptModule`` format, run ``convert_model`` function with ``example_input`` parameter as follows:
+`torch.jit.save() <https://pytorch.org/docs/stable/generated/torch.jit.save.html>`__ serializes
+the ``ScriptModule`` object to disk. To convert the serialized ``ScriptModule`` format, run
+the ``convert_model`` function with the ``example_input`` parameter as follows:

.. code-block:: py
:force:
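
   # A minimal sketch of this step (the file name and input shape below are
   # illustrative, not taken from the original example):
   import torch
   import openvino as ov

   # Reload the ScriptModule serialized earlier with torch.jit.save()
   scripted_model = torch.jit.load("scripted_model.pt")

   # example_input gives the converter a concrete input to trace the model with
   ov_model = ov.convert_model(scripted_model, example_input=torch.randn(1, 3, 224, 224))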
@@ -252,15 +255,15 @@ To convert the serialized ``ScriptModule`` format, run ``convert_model`` function
Exporting a PyTorch Model to ONNX Format
########################################

-An alternative method of converting PyTorch models is exporting a PyTorch model to ONNX with
-``torch.onnx.export`` first and then converting the resulting ``.onnx`` file to OpenVINO Model
-with ``openvino.convert_model``. It can be considered as a backup solution if a model cannot be
-converted directly from PyTorch to OpenVINO as described in the above chapters. Converting through
-ONNX can be more expensive in terms of code, conversion time, and allocated memory.
+An alternative method of converting a PyTorch model is to export it to ONNX first
+(with ``torch.onnx.export``) and then convert the resulting ``.onnx`` file to the OpenVINO IR
+model (with ``openvino.convert_model``). It should be considered a backup solution if a model
+cannot be converted directly, as described previously. Converting through ONNX can be more
+expensive in terms of code overhead, conversion time, and allocated memory.

1. Refer to the `Exporting PyTorch models to ONNX format <https://pytorch.org/docs/stable/onnx.html>`__
guide to learn how to export models from PyTorch to ONNX.
-2. Follow :doc:`Convert an ONNX model <convert-model-onnx>` chapter to produce OpenVINO model.
+2. Follow the :doc:`Convert an ONNX model <convert-model-onnx>` guide to produce OpenVINO IR.

Here is an illustration of using these two steps together:

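For example (a minimal sketch, assuming a torchvision ResNet-50; the file names and input
shape are illustrative):

.. code-block:: py

   import torch
   import torchvision
   import openvino as ov

   model = torchvision.models.resnet50(weights="DEFAULT")

   # Step 1: export the PyTorch model to ONNX
   dummy_input = torch.randn(1, 3, 224, 224)
   torch.onnx.export(model, dummy_input, "model.onnx")

   # Step 2: convert the ONNX file to an OpenVINO model and save it as IR
   ov_model = ov.convert_model("model.onnx")
   ov.save_model(ov_model, "model.xml")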
3 changes: 2 additions & 1 deletion docs/articles_en/openvino-workflow/torch-compile.rst
@@ -5,7 +5,8 @@ PyTorch Deployment via "torch.compile"

The ``torch.compile`` feature enables you to use OpenVINO for PyTorch-native applications.
It speeds up PyTorch code by JIT-compiling it into optimized kernels.
-By default, Torch code runs in eager-mode, but with the use of ``torch.compile`` it goes through the following steps:
+By default, Torch code runs in eager mode, but with the use of ``torch.compile`` it goes
+through the following steps:

1. **Graph acquisition** - the model is rewritten as blocks of subgraphs that are either:

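Enabling the OpenVINO backend is typically a one-line change. Below is a minimal sketch
(the module is a stand-in for a real model):

.. code-block:: py

   import torch
   import openvino.torch  # registers the "openvino" backend for torch.compile

   model = torch.nn.Linear(8, 4)  # stand-in for a real model
   compiled_model = torch.compile(model, backend="openvino")

   # The first call triggers graph acquisition and compilation
   output = compiled_model(torch.randn(1, 8))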
53 changes: 26 additions & 27 deletions docs/dev/pypi_publish/pypi-openvino-rt.md
@@ -6,8 +6,8 @@
Intel® Distribution of OpenVINO™ toolkit is an open-source toolkit for optimizing and deploying
AI inference. It can be used to develop applications and solutions based on deep learning tasks,
such as: emulation of human vision, automatic speech recognition, natural language processing,
-recommendation systems, etc. It provides high-performance and rich deployment options, from
-edge to cloud.
+recommendation systems, image generation, etc. It provides high-performance and rich deployment
+options, from edge to cloud.

If you have chosen a model, you can integrate it with your application through OpenVINO™ and
deploy it on various devices. The OpenVINO™ Python package includes a set of libraries for easy
@@ -26,7 +26,7 @@ versions. The complete list of supported hardware is available on the
## Install OpenVINO™

-### Step 1. Set Up Python Virtual Environment
+### Step 1. Set up Python virtual environment

Use a virtual environment to avoid dependency conflicts. To create a virtual environment, use
the following commands:
@@ -43,7 +43,7 @@ python3 -m venv openvino_env

> **NOTE**: On Linux and macOS, you may need to [install pip](https://pip.pypa.io/en/stable/installation/).
-### Step 2. Activate the Virtual Environment
+### Step 2. Activate the virtual environment

On Windows:
```sh
@@ -55,24 +55,23 @@
source openvino_env/bin/activate
```

-### Step 3. Set Up and Update PIP to the Highest Version
+### Step 3. Set up PIP and update it to the highest version

-Run the command below:
+Run the command:
```sh
python -m pip install --upgrade pip
```

-### Step 4. Install the Package
+### Step 4. Install the package

-Run the command below: <br>
-
-```sh
-pip install openvino
-```
+Run the command:
+```sh
+pip install openvino
+```

-### Step 5. Verify that the Package Is Installed
+### Step 5. Verify that the package is installed

-Run the command below:
+Run the command:
```sh
python -c "from openvino import Core; print(Core().available_devices)"
```
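
On a machine where only the CPU device is available, for example, the output may look
like this (the exact list varies per machine):
```sh
['CPU']
```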
@@ -88,30 +87,30 @@ If installation was successful, you will see the list of available devices.
<th>Description</th>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference.html">OpenVINO Runtime</a></td>
<td><a href="https://docs.openvino.ai/2025/openvino-workflow/running-inference.html">OpenVINO Runtime</a></td>
<td>`openvino` package</td>
<td>OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common
API to deliver inference solutions on the platform of your choice. Use the OpenVINO
Runtime API to read PyTorch, TensorFlow, TensorFlow Lite, ONNX, and PaddlePaddle models
and execute them on preferred devices. OpenVINO Runtime uses a plugin architecture and
includes the following plugins:
<a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/cpu-device.html">CPU</a>,
<a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html">GPU</a>,
<a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching.html">Auto Batch</a>,
<a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection.html">Auto</a>,
<a href="https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution.html">Hetero</a>,
<a href="https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/cpu-device.html">CPU</a>,
<a href="https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.html">GPU</a>,
<a href="https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/automatic-batching.html">Auto Batch</a>,
<a href="https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection.html">Auto</a>,
<a href="https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution.html">Hetero</a>,
</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/2024/openvino-workflow/model-preparation.html#convert-a-model-in-cli-ovc">OpenVINO Model Converter (OVC)</a></td>
<td><a href="https://docs.openvino.ai/2025/openvino-workflow/model-preparation.html#convert-a-model-in-cli-ovc">OpenVINO Model Converter (OVC)</a></td>
<td>`ovc`</td>
<td>OpenVINO Model Converter converts models that were trained in popular frameworks to a
format usable by OpenVINO components. </br>Supported frameworks include ONNX, TensorFlow,
TensorFlow Lite, and PaddlePaddle.
</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html">Benchmark Tool</a></td>
<td><a href="https://docs.openvino.ai/2025/learn-openvino/openvino-samples/benchmark-tool.html">Benchmark Tool</a></td>
<td>`benchmark_app`</td>
<td>The Benchmark Application allows you to estimate deep learning inference performance on
supported devices for synchronous and asynchronous modes.
@@ -122,8 +121,8 @@
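As a quick illustration of the two command-line tools described above (a hedged sketch;
file names are illustrative):
```sh
# Convert an ONNX model to OpenVINO IR with OVC (writes model.xml and model.bin)
ovc model.onnx

# Estimate inference performance of the resulting IR on CPU
benchmark_app -m model.xml -d CPU
```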

## Troubleshooting

-For general troubleshooting steps and issues, see
-[Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/2024/get-started/troubleshooting-install-config.html).
+For general troubleshooting, see the
+[Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/2025/get-started/troubleshooting-install-config.html).
The following sections also provide explanations of several error messages.

### Errors with Installing via PIP for Users in China
@@ -145,11 +144,11 @@ the [C++ redistributable (.exe)](https://aka.ms/vs/17/release/vc_redist.x64.exe)
You can also view a full download list on the
[official support page](https://docs.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist).

-### ImportError: libpython3.8.so.1.0: cannot open shared object file: No such file or directory
+### ImportError: libpython3.10.so.1.0: cannot open shared object file: No such file or directory

To resolve the missing external dependency on Ubuntu*, execute the following command:
```sh
-sudo apt-get install libpython3.8
+sudo apt-get install libpython3.10
```

## Additional Resources
@@ -159,7 +158,7 @@
- [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks)
- [OpenVINO Installation Selector Tool](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html)

-Copyright © 2018-2024 Intel Corporation
+Copyright © 2018-2025 Intel Corporation
> **LEGAL NOTICE**: Your use of this software and any required dependent software (the
“Software Package”) is subject to the terms and conditions of the
[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.html) for the Software Package,
2 changes: 1 addition & 1 deletion docs/sphinx_setup/_static/html/footer.html
@@ -106,7 +106,7 @@
<div class="footer-list">
<div class="footer-link-list">
<ul class="no-bullets">
-<lh>©2024 Intel Corporation</lh>
+<lh>©2025 Intel Corporation</lh>
</ul>
<ul class="no-bullets">
<lh>OpenVINO</lh>
