Commit 599da93

Merge branch 'docusaurus-version' of https://github.com/meislisha/wiki-documents into docusaurus-version

lisha committed Aug 2, 2023
2 parents 9a47b64 + cae0ea1 commit 599da93
Showing 7 changed files with 450 additions and 7 deletions.
Binary file added assets/knowledgebase/knowledge_base5.png
@@ -132,6 +132,7 @@ By creating and using [Templates](https://docs.n3uron.com/docs/platform-template
Just drag and drop the desired object into the templates section and start building your template using [custom properties](https://docs.n3uron.com/docs/platform-templates-custom-properties), [inheritance](https://docs.n3uron.com/docs/platform-templates-inheritance) and [more](https://docs.n3uron.com/docs/platform-templates-nesting).

<div align="center"><img src="https://files.seeedstudio.com/wiki/Edge_Box/n3uron/gif3.gif" alt="pir" width="600" height="auto" /></div>

### Configure MQTT Client

**Step 1**: Go to Config→Modules, click on the menu and then create a **New Module** named MqttClient.
@@ -11,7 +11,7 @@ last_update:
author: Lakshantha
---

-# Deploy YOLOv8 on NVIDIA Jetson using TensorRT and DeepStream SDK
+# Deploy YOLOv8 on NVIDIA Jetson using TensorRT and DeepStream SDK Support

This guide explains how to deploy a trained AI model to the NVIDIA Jetson platform and perform inference using TensorRT and DeepStream SDK. Here we use TensorRT to maximize inference performance on the Jetson platform.
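As a rough sketch of the workflow this guide walks through, the helper below builds the Ultralytics CLI calls for exporting a checkpoint to a TensorRT engine and then running inference with it. The helper functions and file names are illustrative placeholders, not part of the guide:

```python
# Sketch of the export-then-infer flow this guide covers, expressed as the
# Ultralytics CLI commands it drives. Function names and paths are
# illustrative placeholders.

def export_cmd(model: str, half: bool = True) -> str:
    """Build the `yolo export` call that converts a .pt checkpoint into a
    TensorRT .engine file (FP16 when `half` is set)."""
    cmd = f"yolo export model={model} format=engine"
    if half:
        cmd += " half=True"
    return cmd

def predict_cmd(engine: str, source: str) -> str:
    """Build the `yolo predict` call that runs inference with the engine."""
    return f"yolo detect predict model={engine} source='{source}' show=True"

print(export_cmd("yolov8s.pt"))   # yolo export model=yolov8s.pt format=engine half=True
print(predict_cmd("yolov8s.engine", "0"))
```

The export step runs once per model; the resulting `.engine` file is then reused for all subsequent inference runs on the same device.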

12 changes: 6 additions & 6 deletions docs/Edge/reComputer/Application/YOLOv8-TRT-Jetson.md
@@ -32,7 +32,7 @@ Different computer vision tasks will be introduced here such as:
- [reComputer Jetson](https://www.seeedstudio.com/reComputer-J4012-p-5586.html) or any other NVIDIA Jetson device running JetPack 5.1.1 or higher

:::note
-This wiki has been tested and verified on a [reComputer J4012](https://www.seeedstudio.com/reComputer-J4012-p-5586.html) powered by NVIDIA Jetson orin NX 16GB module
+This wiki has been tested and verified on a [reComputer J4012](https://www.seeedstudio.com/reComputer-J4012-p-5586.html) and a [reComputer Industrial J4012](https://www.seeedstudio.com/reComputer-Industrial-J4012-p-5684.html) powered by the NVIDIA Jetson Orin NX 16GB module
:::

## Flash JetPack to Jetson
@@ -241,7 +241,7 @@ If you face any errors when executing the above commands, try adding "device=0"
" style={{width:1000, height:'auto'}}/></div>

:::note
-The above is run on a reComputer J4012 and uses YOLOv8s model trained with 640x640 input and uses TensorRT FP16 precision.
+The above is run on a reComputer J4012/reComputer Industrial J4012 and uses the YOLOv8s model trained with 640x640 input at TensorRT FP16 precision.
:::

</TabItem>
@@ -341,7 +341,7 @@ If you face any errors when executing the above commands, try adding "device=0"
" style={{width:1000, height:'auto'}}/></div>

:::note
-The above is run on a reComputer J4012 and uses YOLOv8s-cls model trained with 224x224 input and uses TensorRT FP16 precision. Also, make sure to pass the argument **imgsz=224** inside the inference command with TensorRT exports because the inference engine accepts 640 image size by default when using TensorRT models.
+The above is run on a reComputer J4012/reComputer Industrial J4012 and uses the YOLOv8s-cls model trained with 224x224 input at TensorRT FP16 precision. Also, make sure to pass the argument **imgsz=224** inside the inference command with TensorRT exports, because the inference engine defaults to a 640 image size when using TensorRT models.
:::
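Because a TensorRT engine is built for a fixed input resolution, the inference command must pass the exported size whenever it differs from the 640 default. A minimal sketch of that rule (the helper name and file names are hypothetical):

```python
# Hypothetical helper illustrating the imgsz rule above: append imgsz to the
# CLI call only when the engine was exported at a non-default resolution.

DEFAULT_IMGSZ = 640  # size the inference engine assumes for TensorRT models

def classify_predict_cmd(engine: str, source: str, imgsz: int) -> str:
    cmd = f"yolo classify predict model={engine} source='{source}'"
    if imgsz != DEFAULT_IMGSZ:
        cmd += f" imgsz={imgsz}"  # e.g. 224 for a 224x224 YOLOv8s-cls export
    return cmd

print(classify_predict_cmd("yolov8s-cls.engine", "image.jpg", 224))
```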

</TabItem>
@@ -440,7 +440,7 @@ If you face any errors when executing the above commands, try adding "device=0"
" style={{width:1000, height:'auto'}}/></div>

:::note
-The above is run on a reComputer J4012 and uses YOLOv8s-seg model trained with 640x640 input and uses TensorRT FP16 precision.
+The above is run on a reComputer J4012/reComputer Industrial J4012 and uses the YOLOv8s-seg model trained with 640x640 input at TensorRT FP16 precision.
:::

</TabItem>
@@ -933,7 +933,7 @@ yolo detect predict model=<your_model.pt> source='0' show=True

### Preparation

-We have done performance benchmarks for all computer vision tasks supported by YOLOv8 running on reComputer J4012 powered by NVIDIA Jetson Orin NX 16GB module.
+We have done performance benchmarks for all computer vision tasks supported by YOLOv8 running on the reComputer J4012/reComputer Industrial J4012 powered by the NVIDIA Jetson Orin NX 16GB module.

Included in the samples directory is a command-line wrapper tool called [trtexec](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#trtexec). trtexec is a tool to use TensorRT without having to develop your own application. The trtexec tool has three main purposes:

@@ -981,7 +981,7 @@ However, if you want **INT8** precision which offers better performance than **FP16**
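The FP16 and INT8 trtexec benchmark builds discussed in this section can be sketched as a small command builder. The flags (`--onnx`, `--saveEngine`, `--fp16`, `--int8`, `--calib`) follow the trtexec documentation; the file names are placeholders:

```python
# Sketch of building trtexec benchmark commands for FP16 and INT8 engines.
# File names are placeholders; INT8 additionally needs a calibration cache.

def trtexec_cmd(onnx, precision="fp16", calib=None):
    engine = onnx.rsplit(".", 1)[0] + ".engine"   # e.g. yolov8s.onnx -> yolov8s.engine
    cmd = f"trtexec --onnx={onnx} --saveEngine={engine}"
    if precision == "fp16":
        cmd += " --fp16"
    elif precision == "int8":
        cmd += " --int8"
        if calib:
            cmd += f" --calib={calib}"  # calibration cache required for INT8
    return cmd

print(trtexec_cmd("yolov8s.onnx"))
print(trtexec_cmd("yolov8s.onnx", precision="int8", calib="calib.cache"))
```

INT8 trades a small amount of accuracy for throughput, which is why the calibration step is needed before the benchmark numbers below are comparable.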

### Results

-Below we summarize the results that we get from all the four computer vision tasks running on reComputer J4012.
+Below we summarize the results from all four computer vision tasks running on the reComputer J4012/reComputer Industrial J4012.

<div style={{textAlign:'center'}}><img src="https://files.seeedstudio.com/wiki/YOLOV8-TRT/45.png" style={{width:1000, height:'auto'}}/></div>
Expand Down

0 comments on commit 599da93
