doc fixes (openvinotoolkit#6438)

* doc fixes

* doc fix

* doc fix
ntyukaev authored Jun 29, 2021
1 parent c2e8c3b commit cca5778
Showing 28 changed files with 48 additions and 40 deletions.
2 changes: 1 addition & 1 deletion docs/IE_DG/Model_caching_overview.md
@@ -2,7 +2,7 @@

## Introduction

As described in [Inference Engine Introduction](inference_engine_intro.md), the common application flow consists of the following steps:
As described in [Inference Engine Developer Guide](Deep_Learning_Inference_Engine_DevGuide.md), the common application flow consists of the following steps:

1. **Create Inference Engine Core object**

2 changes: 1 addition & 1 deletion docs/IE_DG/inference_engine_intro.md
@@ -5,7 +5,7 @@
This Guide provides an overview of the Inference Engine describing the typical workflow for performing
inference of a pre-trained and optimized deep learning model and a set of sample applications.

> **NOTE:** Before you perform inference with the Inference Engine, your models should be converted to the Inference Engine format using the Model Optimizer or built directly at run time using the nGraph API. To learn how to use the Model Optimizer, refer to the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). To learn about the pre-trained and optimized models delivered with the OpenVINO™ toolkit, refer to [Pre-Trained Models](@ref omz_models_intel_index).
> **NOTE:** Before you perform inference with the Inference Engine, your models should be converted to the Inference Engine format using the Model Optimizer or built directly at run time using the nGraph API. To learn how to use the Model Optimizer, refer to the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). To learn about the pre-trained and optimized models delivered with the OpenVINO™ toolkit, refer to [Pre-Trained Models](@ref omz_models_group_intel).
After you have used the Model Optimizer to create an Intermediate Representation (IR), use the Inference Engine to infer the result for a given input data.
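
As a rough illustration of that last step, the bundled `hello_classification` sample can run inference on an IR from the command line. A minimal sketch, assuming the samples are already built and using placeholder file paths:

```sh
# Set up the environment, then run the C++ hello_classification sample on a
# converted model. Model path, image path, and device name are placeholders.
source /opt/intel/openvino_2021/bin/setupvars.sh
./hello_classification ~/ir/squeezenet1.1.xml ~/images/car.png CPU
```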

3 changes: 3 additions & 0 deletions docs/doxygen/ie_docs.xml
@@ -97,6 +97,9 @@ limitations under the License.
<tab type="user" title="opset2 Specification" url="@ref openvino_docs_ops_opset2"/>
<tab type="user" title="opset1 Specification" url="@ref openvino_docs_ops_opset1"/>
</tab>
<tab type="usergroup" title="Broadcast Rules For Elementwise Operations" url="@ref openvino_docs_ops_broadcast_rules">
<tab type="usergroup" title="Broadcast Rules For Elementwise Operations" url="@ref openvino_docs_ops_broadcast_rules"/>
</tab>
<tab type="usergroup" title="Operations Specifications" url="">
<tab type="user" title="Abs-1" url="@ref openvino_docs_ops_arithmetic_Abs_1"/>
<tab type="user" title="Acos-1" url="@ref openvino_docs_ops_arithmetic_Acos_1"/>
5 changes: 5 additions & 0 deletions docs/doxygen/openvino_docs.xml
@@ -158,6 +158,9 @@ limitations under the License.
<xi:include href="omz_docs.xml" xpointer="omz_models">
<xi:fallback/>
</xi:include>
<tab type="user" title="Dataset Preparation Guide" url="@ref omz_data_datasets"/>
<tab type="user" title="Intel's Pre-Trained Models Device Support" url="@ref omz_models_intel_device_support"/>
<tab type="user" title="Public Pre-Trained Models Device Support" url="@ref omz_models_public_device_support"/>
<xi:include href="omz_docs.xml" xpointer="omz_demos">
<xi:fallback/>
</xi:include>
@@ -205,6 +208,8 @@ limitations under the License.
<tab type="user" title="MetaPublish Listeners" url="@ref gst_samples_gst_launch_metapublish_listener"/>
</tab>
<tab type="user" title="gvapython Sample" url="@ref gst_samples_gst_launch_gvapython_face_detection_and_classification_README"/>
<tab type="user" title="Action Recognition Sample" url="@ref gst_samples_gst_launch_action_recognition_README"/>
<tab type="user" title="Human Pose Estimation Sample" url="@ref gst_samples_gst_launch_human_pose_estimation_README"/>
</tab>
<tab type="user" title="Draw Face Attributes C++ Sample" url="@ref gst_samples_cpp_draw_face_attributes_README"/>
<tab type="user" title="Draw Face Attributes Python Sample" url="@ref gst_samples_python_draw_face_attributes_README"/>
2 changes: 1 addition & 1 deletion docs/get_started/get_started_linux.md
@@ -522,7 +522,7 @@ source /opt/intel/openvino_2021/bin/setupvars.sh

## <a name="syntax-examples"></a> Typical Code Sample and Demo Application Syntax Examples

This section explains how to build and use the sample and demo applications provided with the toolkit. You will need CMake 3.10 or later installed. Build details are on the [Inference Engine Samples](../IE_DG/Samples_Overview.md) and [Demo Applications](@ref omz_demos_README) pages.
This section explains how to build and use the sample and demo applications provided with the toolkit. You will need CMake 3.10 or later installed. Build details are on the [Inference Engine Samples](../IE_DG/Samples_Overview.md) and [Demo Applications](@ref omz_demos) pages.

To build all the demos and samples:
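
One typical invocation, shown here only as an illustrative sketch that assumes the default `/opt/intel/openvino_2021` install layout and the convenience scripts shipped with the toolkit and Open Model Zoo:

```sh
# Illustrative only: build the Inference Engine samples and the Open Model Zoo demos.
cd /opt/intel/openvino_2021/deployment_tools/inference_engine/samples/cpp
./build_samples.sh
cd /opt/intel/openvino_2021/deployment_tools/open_model_zoo/demos
./build_demos.sh
```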

2 changes: 1 addition & 1 deletion docs/get_started/get_started_macos.md
@@ -476,7 +476,7 @@ source /opt/intel/openvino_2021/bin/setupvars.sh

## <a name="syntax-examples"></a> Typical Code Sample and Demo Application Syntax Examples

This section explains how to build and use the sample and demo applications provided with the toolkit. You will need CMake 3.13 or later installed. Build details are on the [Inference Engine Samples](../IE_DG/Samples_Overview.md) and [Demo Applications](@ref omz_demos_README) pages.
This section explains how to build and use the sample and demo applications provided with the toolkit. You will need CMake 3.13 or later installed. Build details are on the [Inference Engine Samples](../IE_DG/Samples_Overview.md) and [Demo Applications](@ref omz_demos) pages.

To build all the demos and samples:

2 changes: 1 addition & 1 deletion docs/get_started/get_started_windows.md
@@ -484,7 +484,7 @@ Below you can find basic guidelines for executing the OpenVINO™ workflow using

## <a name="syntax-examples"></a> Typical Code Sample and Demo Application Syntax Examples

This section explains how to build and use the sample and demo applications provided with the toolkit. You will need CMake 3.10 or later and Microsoft Visual Studio 2017 or 2019 installed. Build details are on the [Inference Engine Samples](../IE_DG/Samples_Overview.md) and [Demo Applications](@ref omz_demos_README) pages.
This section explains how to build and use the sample and demo applications provided with the toolkit. You will need CMake 3.10 or later and Microsoft Visual Studio 2017 or 2019 installed. Build details are on the [Inference Engine Samples](../IE_DG/Samples_Overview.md) and [Demo Applications](@ref omz_demos) pages.

To build all the demos and samples:

2 changes: 1 addition & 1 deletion docs/index.md
@@ -19,7 +19,7 @@ The following diagram illustrates the typical OpenVINO™ workflow (click to see
### Model Preparation, Conversion and Optimization

You can use your framework of choice to prepare and train a deep learning model or just download a pre-trained model from the Open Model Zoo. The Open Model Zoo includes deep learning solutions to a variety of vision problems, including object recognition, face recognition, pose estimation, text detection, and action recognition, at a range of measured complexities.
Several of these pre-trained models are also used in the [code samples](IE_DG/Samples_Overview.md) and [application demos](@ref omz_demos_README). To download models from the Open Model Zoo, use the [Model Downloader](@ref omz_tools_downloader_README) tool.
Several of these pre-trained models are also used in the [code samples](IE_DG/Samples_Overview.md) and [application demos](@ref omz_demos). To download models from the Open Model Zoo, use the [Model Downloader](@ref omz_tools_downloader) tool.

One of the core components of the OpenVINO™ toolkit is the [Model Optimizer](MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md), a cross-platform command-line
tool that converts a trained neural network from its source framework to an open-source, nGraph-compatible [Intermediate Representation (IR)](MO_DG/IR_and_opsets.md) for use in inference operations. The Model Optimizer imports models trained in popular frameworks such as Caffe*, TensorFlow*, MXNet*, Kaldi*, and ONNX* and performs a few optimizations to remove excess layers and group operations when possible into simpler, faster graphs.
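
A minimal end-to-end sketch of these two steps, assuming the default 2021 install path and an arbitrary Open Model Zoo model name:

```sh
# Download a public model with the Model Downloader (model name is illustrative).
cd /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader
python3 downloader.py --name squeezenet1.1 -o ~/models

# Convert the downloaded model to an Intermediate Representation with the Model Optimizer.
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer
python3 mo.py --input_model ~/models/public/squeezenet1.1/squeezenet1.1.caffemodel --output_dir ~/ir
```
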
2 changes: 1 addition & 1 deletion docs/install_guides/installing-openvino-docker-linux.md
@@ -312,7 +312,7 @@ For instructions for previous releases with FPGA Support, see documentation for

## Troubleshooting

If you encounter proxy issues, please set up proxy settings for Docker. See the Proxy section in the [Install the DL Workbench from Docker Hub*](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) topic.
If you encounter proxy issues, please set up proxy settings for Docker. See the Proxy section in the [Install the DL Workbench from Docker Hub*](@ref workbench_docs_Workbench_DG_Run_Locally) topic.
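
One common approach is to pass the proxy settings to `docker build` and `docker run`. A minimal sketch, with a placeholder proxy address and image name:

```sh
# Placeholder proxy address and image name; substitute your own values.
docker build --build-arg http_proxy=http://proxy.example.com:911 \
             --build-arg https_proxy=http://proxy.example.com:912 \
             -t openvino-image .
docker run -e http_proxy=http://proxy.example.com:911 \
           -e https_proxy=http://proxy.example.com:912 \
           openvino-image
```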

## Additional Resources

2 changes: 1 addition & 1 deletion docs/install_guides/installing-openvino-docker-windows.md
@@ -141,7 +141,7 @@ GPU Acceleration in Windows containers feature requires to meet Windows host, Op
## Troubleshooting

If you encounter proxy issues, please set up proxy settings for Docker. See the Proxy section in the [Install the DL Workbench from Docker Hub*](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) topic.
If you encounter proxy issues, please set up proxy settings for Docker. See the Proxy section in the [Install the DL Workbench from Docker Hub*](@ref workbench_docs_Workbench_DG_Run_Locally) topic.

## Additional Resources

2 changes: 1 addition & 1 deletion docs/security_guide/workbench.md
@@ -12,7 +12,7 @@ is only accessible from the machine the Docker container is built on:
application are accessible only from the `localhost` by default.

* When using `docker run` to [start the DL Workbench from Docker
Hub](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub), limit connections for the host IP 127.0.0.1.
Hub](@ref workbench_docs_Workbench_DG_Run_Locally), limit connections for the host IP 127.0.0.1.
For example, limit the connections for the host IP to the port `5665` with the
`-p 127.0.0.1:5665:5665` option, as in the sketch below. Refer to [Container
networking](https://docs.docker.com/config/containers/container-networking/#published-ports) for
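
A minimal sketch of such a restricted start, assuming the public `openvino/workbench` image:

```sh
# Publish port 5665 on the loopback interface only, so the DL Workbench is
# reachable solely from the host machine. Image tag is an assumption.
docker run -it -p 127.0.0.1:5665:5665 openvino/workbench:latest
```
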
@@ -36,7 +36,7 @@ To build the sample, please use instructions available at [Build the Sample Appl

To run the sample, you need to specify a model and an image:

- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.

> **NOTES**:
@@ -82,7 +82,7 @@ This sample is an API example, for any performance measurements please use the d

- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

[ie_core_create]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Core.html#gaab73c7ee3704c742eaac457636259541
@@ -35,7 +35,7 @@ To build the sample, please use instructions available at [Build the Sample Appl

To run the sample, you need to specify a model and an image:

- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.

The sample accepts an uncompressed image in the NV12 color format. To run the sample, you need to
@@ -97,7 +97,7 @@ This sample is an API example, for any performance measurements please use the d

- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

[ie_network_set_color_format]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__Network.html#ga85f3251f1f7b08507c297e73baa58969
@@ -42,7 +42,7 @@ To build the sample, please use instructions available at [Build the Sample Appl

To run the sample, you need to specify a model and an image:

- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.

Running the application with the <code>-h</code> option yields the following usage message:
@@ -141,7 +141,7 @@ This sample is an API example, for any performance measurements please use the d

- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

[ie_infer_request_infer_async]:https://docs.openvinotoolkit.org/latest/ie_c_api/group__InferRequest.html#gad2351010e292b6faec959a3d5a8fb60e
@@ -68,7 +68,7 @@

To run the sample, you need to specify a model and an image:

- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.

> **NOTES**:
@@ -136,7 +136,7 @@ The sample application logs each step in a standard output stream and outputs to

- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
@@ -57,7 +57,7 @@
```

To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.

> **NOTES**:
@@ -107,7 +107,7 @@ The sample application logs each step in a standard output stream and outputs to

- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
@@ -65,7 +65,7 @@
```

To run the sample, you need to specify a model and an image:
- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.

> **NOTES**:
@@ -104,7 +104,7 @@ The sample application logs each step in a standard output stream and creates an

- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
@@ -130,7 +130,7 @@ The sample application logs each step in a standard output stream and outputs to

- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html
@@ -67,7 +67,7 @@

To run the sample, you need to specify a model and an image:

- you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).
- you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
- you can use images from the media files collection available at https://storage.openvinotoolkit.org/data/test_data.

> **NOTES**:
@@ -103,7 +103,7 @@ The sample application logs each step in a standard output stream and creates an

- [Integrate the Inference Engine with Your Application](../../../../../docs/IE_DG/Integrate_with_customer_application_new_API.md)
- [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md)
- [Model Downloader](@ref omz_tools_downloader_README)
- [Model Downloader](@ref omz_tools_downloader)
- [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)

[IECore]:https://docs.openvinotoolkit.org/latest/ie_python_api/classie__api_1_1IECore.html