Commit 6dad271

vis

tp-nan committed Jan 10, 2024
1 parent 46e009a commit 6dad271
Showing 6 changed files with 65 additions and 29 deletions.
19 changes: 19 additions & 0 deletions docs/Intra-node/extensible_backend.mdx
@@ -95,6 +95,25 @@ model(input)
assert(input["result"] == b"123")
```

Alternatively, you can pass the C++ source inline:
```python
tp.utils.cpp_extension.load_filter(
    name='Skip',
    sources='status forward(dict data){return status::Skip;}',
    sources_header="")

tp.utils.cpp_extension.load_backend(
    name='identity',
    sources='void forward(dict data){(*data)["result"] = (*data)["data"];}',
    sources_header="")

model = tp.pipe({"backend": 'identity'})
input = {"data": 2}
model(input)
assert input["result"] == 2
```

## Binding with Python
When using Python as the front-end language, the back-end is called from Python and the results are returned to Python, requiring type conversion.
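The conversion contract can be illustrated with a pure-Python stand-in for the `identity` backend defined earlier (a sketch only; the real conversion happens in C++ through `any`):

```python
def identity_forward(data: dict) -> None:
    # Mirrors the C++ backend: (*data)["result"] = (*data)["data"];
    data["result"] = data["data"]

# An int survives the Python round trip unchanged:
input = {"data": 2}
identity_forward(input)
assert input["result"] == 2

# bytes pass through unchanged as well, matching the b"123" example above:
input = {"data": b"123"}
identity_forward(input)
assert input["result"] == b"123"
```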
### From Python Types to Any {#py2any}
8 changes: 4 additions & 4 deletions docs/installation.mdx
@@ -145,12 +145,12 @@ For more examples, see [Showcase](./showcase/showcase.mdx).
## Customizing Dockerfile {#selfdocker}

Refer to the [example Dockerfile](https://github.com/torchpipe/torchpipe/blob/main/docker/trt9.1.base). After downloading [TensorRT](https://github.com/NVIDIA/TensorRT/tree/release/9.1#optional---if-not-using-tensorrt-container-specify-the-tensorrt-ga-release-build-path) in advance, you can compile the corresponding base image.
```
# put TensorRT-9.1.0.4.Linux.x86_64-gnu.cuda-11.8.tar.gz into thirdparty/
```bash
# put TensorRT-9.*.Linux.x86_64-gnu.cuda-11.8.tar.gz into thirdparty/

# docker build --network=host -f docker/trt9.1.base -t torchpipe:base_trt-9.1 .
# docker build --network=host -f docker/trt9.base -t torchpipe:base_trt-9 .

# docker run --rm --network=host --gpus=all --ulimit memlock=-1 --ulimit stack=67108864 --privileged=true -v `pwd`:/workspace -it torchpipe:base_trt-9.1 /bin/bash
# docker run --rm --network=host --gpus=all --ulimit memlock=-1 --ulimit stack=67108864 --privileged=true -v `pwd`:/workspace -it torchpipe:base_trt-9 /bin/bash

```
Base images compiled in this way have smaller sizes than NGC PyTorch images. Please note that `_GLIBCXX_USE_CXX11_ABI==0`.
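Because the image is built with `_GLIBCXX_USE_CXX11_ABI==0`, any C++ extension compiled against it must use the same flag. Assuming PyTorch is installed, you can check which ABI your local build uses:

```python
import torch

# True means the build uses _GLIBCXX_USE_CXX11_ABI=1; False means 0.
print(torch.compiled_with_cxx11_abi())
```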
22 changes: 11 additions & 11 deletions docs/tools/vis.mdx
@@ -4,25 +4,25 @@ title: Configuration Visualizing
type: explainer
---

(From v0.4.0)
We provide a simple web-based visualization feature for configuration files.
## Environment Setup

```bash
apt-get update
apt install graphviz
pip install pydot gradio
pip install gradio
```

## Usage {#parameter}

`torchpipe.utils.vis [-h] [--port PORT] [--save] toml`

:::tip Parameters
- **--save** - Whether to save the graph as an SVG image. The image is saved in the current directory, using the TOML file's name with an `.svg` extension.
:::
Removed:
```python
import torchpipe as tp

a = tp.parse_toml("examples/ppocr/ocr.toml")
tp.utils.Visual(a).launch()
```

Added:
## Example
```bash
python -m torchpipe.utils.vis your.toml  # --port 2211
```
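For reference, a minimal TOML of the kind the tool renders might look like the sketch below; the node names are hypothetical, and the `backend`/`next` keys describe a two-node pipeline in the style of the torchpipe examples:

```toml
[decode]
backend = "DecodeMat"
next = "resize"

[resize]
backend = "ResizeMat"
```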




@@ -99,6 +99,26 @@ model(input)
assert(input["result"] == b"123")
```


Alternatively, you can pass the C++ source inline:
```python
tp.utils.cpp_extension.load_filter(
    name='Skip',
    sources='status forward(dict data){return status::Skip;}',
    sources_header="")

tp.utils.cpp_extension.load_backend(
    name='identity',
    sources='void forward(dict data){(*data)["result"] = (*data)["data"];}',
    sources_header="")

model = tp.pipe({"backend": 'identity'})
input = {"data": 2}
model(input)
assert input["result"] == 2
```

## Binding with Python
When using Python as the front-end language, the backend is called from Python and the results are returned to Python, which requires type conversion.
### From Python Types to Any {#py2any}
@@ -133,13 +133,13 @@ print(input["result"].shape)  # if the call fails, this key is guaranteed to be absent, even if the input contained it

## Customizing Dockerfile {#selfdocker}

Refer to the [example Dockerfile](https://github.com/torchpipe/torchpipe/blob/main/docker/trt9.1.base). After downloading [TensorRT](https://github.com/NVIDIA/TensorRT/tree/release/9.1#optional---if-not-using-tensorrt-container-specify-the-tensorrt-ga-release-build-path) in advance, you can build the corresponding base image.
Refer to the [example Dockerfile](https://github.com/torchpipe/torchpipe/blob/main/docker/trt9.base). After downloading [TensorRT](https://github.com/NVIDIA/TensorRT/tree/release/9.1#optional---if-not-using-tensorrt-container-specify-the-tensorrt-ga-release-build-path) in advance, you can build the corresponding base image.
```bash
# put TensorRT-9.1.0.4.Linux.x86_64-gnu.cuda-11.8.tar.gz into thirdparty/
# put TensorRT-9.*.Linux.x86_64-gnu.cuda-11.8.tar.gz into thirdparty/

# docker build --network=host -f docker/trt9.1.base -t torchpipe:base_trt-9.1 .
# docker build --network=host -f docker/trt9.base -t torchpipe:base_trt-9 .

# docker run --rm --network=host --gpus=all --ulimit memlock=-1 --ulimit stack=67108864 --privileged=true -v `pwd`:/workspace -it torchpipe:base_trt-9.1 /bin/bash
# docker run --rm --network=host --gpus=all --ulimit memlock=-1 --ulimit stack=67108864 --privileged=true -v `pwd`:/workspace -it torchpipe:base_trt-9 /bin/bash

```
Base images built this way are smaller than the NGC PyTorch images. Note that they use `_GLIBCXX_USE_CXX11_ABI==0`.
17 changes: 7 additions & 10 deletions i18n/zh/docusaurus-plugin-content-docs/current/tools/vis.mdx
@@ -3,26 +3,23 @@ id: vis
title: Configuration File Visualization
type: explainer
---
Effective from version 0.4.4

We provide a simple web-based visualization feature for configuration files.

## Environment Setup
```bash
apt-get update
apt install graphviz
pip install pydot gradio

pip install gradio
```

## Usage {#parameter}

`torchpipe.utils.vis [-h] [--port PORT] [--save] toml`

:::tip Parameters
- **--save** - Whether to save the graph as an SVG image. The image is saved in the current directory, using the TOML file's name with an `.svg` extension.
:::
Removed:
```python
import torchpipe as tp

a = tp.parse_toml("examples/ppocr/ocr.toml")
tp.utils.Visual(a).launch()
```

Added:
## Example
```bash
python -m torchpipe.utils.vis your.toml  # --port 2211
```
