From 6dad271d9e19586bd139edb65aea499b14ad7dd6 Mon Sep 17 00:00:00 2001 From: tp-nan Date: Wed, 10 Jan 2024 16:26:27 +0800 Subject: [PATCH] vis --- docs/Intra-node/extensible_backend.mdx | 19 ++++++++++++++++ docs/installation.mdx | 8 +++---- docs/tools/vis.mdx | 22 +++++++++---------- .../current/Intra-node/extensible_backend.mdx | 20 +++++++++++++++++ .../current/installation.mdx | 8 +++---- .../current/tools/vis.mdx | 17 ++++++-------- 6 files changed, 65 insertions(+), 29 deletions(-) diff --git a/docs/Intra-node/extensible_backend.mdx b/docs/Intra-node/extensible_backend.mdx index 4d9cbe3..90c275e 100755 --- a/docs/Intra-node/extensible_backend.mdx +++ b/docs/Intra-node/extensible_backend.mdx @@ -95,6 +95,25 @@ model(input) assert(input["result"] == b"123") ``` +Or you can: +```python +tp.utils.cpp_extension.load_filter( + name = 'Skip', + sources='status forward(dict data){return status::Skip;}', + sources_header="") + + + +tp.utils.cpp_extension.load_backend( + name = 'identity', + sources='void forward(dict data){(*data)["result"] = (*data)["data"];}', + sources_header="") +model = tp.pipe({"backend":'identity'}) +input = {"data":2} +model(input) +assert input["result"] == 2 +``` + ## Binding with Python When using Python as the front-end language, the back-end is called from Python and the results are returned to Python, requiring type conversion. ### From Python Types to Any {#py2any} diff --git a/docs/installation.mdx b/docs/installation.mdx index 6438d5f..bf838fd 100755 --- a/docs/installation.mdx +++ b/docs/installation.mdx @@ -145,12 +145,12 @@ For more examples, see [Showcase](./showcase/showcase.mdx). ## Customizing Dockerfile {#selfdocker} Refer to the [example Dockerfile](https://github.com/torchpipe/torchpipe/blob/main/docker/trt9.1.base). After downloading [TensorRT](https://github.com/NVIDIA/TensorRT/tree/release/9.1#optional---if-not-using-tensorrt-container-specify-the-tensorrt-ga-release-build-path) in advance, you can compile the corresponding base image. -``` -# put TensorRT-9.1.0.4.Linux.x86_64-gnu.cuda-11.8.tar.gz into thirdparty/ +```bash +# put TensorRT-9.*.Linux.x86_64-gnu.cuda-11.8.tar.gz into thirdparty/ -# docker build --network=host -f docker/trt9.1.base -t torchpipe:base_trt-9.1 . +# docker build --network=host -f docker/trt9.base -t torchpipe:base_trt-9 . -# docker run --rm --network=host --gpus=all --ulimit memlock=-1 --ulimit stack=67108864 --privileged=true -v `pwd`:/workspace -it torchpipe:base_trt-9.1 /bin/bash +# docker run --rm --network=host --gpus=all --ulimit memlock=-1 --ulimit stack=67108864 --privileged=true -v `pwd`:/workspace -it torchpipe:base_trt-9 /bin/bash ``` Base images compiled in this way have smaller sizes than NGC PyTorch images. Please note that `_GLIBCXX_USE_CXX11_ABI==0`. diff --git a/docs/tools/vis.mdx b/docs/tools/vis.mdx index 97a7bc3..60a1035 100755 --- a/docs/tools/vis.mdx +++ b/docs/tools/vis.mdx @@ -4,25 +4,25 @@ title: Configuration Visualizing type: explainer --- +(From v0.4.0) We provide a simple web-based visualization feature for configuration files. ## Environment Setup ```bash -apt-get update -apt install graphviz -pip install pydot gradio +pip install gradio ``` ## Usage {#parameter} - `torchpipe.utils.vis [-h] [--port PORT] [--save] toml` -:::tip Parameters -- **--save** - Whether to save the graph as an SVG image. The image will be saved in the current directory with a different file extension than the TOML file. 
-::: +```python +import torchpipe as tp +a=tp.parse_toml("examples/ppocr/ocr.toml") -## Example -```bash -python -m torchpipe.utils.vis your.toml # --port 2211 -``` \ No newline at end of file +tp.utils.Visual(a).launch() +``` + + + + \ No newline at end of file diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/Intra-node/extensible_backend.mdx b/i18n/zh/docusaurus-plugin-content-docs/current/Intra-node/extensible_backend.mdx index d477a10..535c845 100755 --- a/i18n/zh/docusaurus-plugin-content-docs/current/Intra-node/extensible_backend.mdx +++ b/i18n/zh/docusaurus-plugin-content-docs/current/Intra-node/extensible_backend.mdx @@ -99,6 +99,26 @@ model(input) assert(input["result"] == b"123") ``` + +Or you can: +```python +tp.utils.cpp_extension.load_filter( + name = 'Skip', + sources='status forward(dict data){return status::Skip;}', + sources_header="") + + + +tp.utils.cpp_extension.load_backend( + name = 'identity', + sources='void forward(dict data){(*data)["result"] = (*data)["data"];}', + sources_header="") +model = tp.pipe({"backend":'identity'}) +input = {"data":2} +model(input) +assert input["result"] == 2 +``` + ## 与python的绑定 以python为前端语言时,会从python中调用后端,并且将结果返回到python中,需要进行类型转换。 ### 从python类型到any {#py2any} diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/installation.mdx b/i18n/zh/docusaurus-plugin-content-docs/current/installation.mdx index b24d414..e66baff 100755 --- a/i18n/zh/docusaurus-plugin-content-docs/current/installation.mdx +++ b/i18n/zh/docusaurus-plugin-content-docs/current/installation.mdx @@ -133,13 +133,13 @@ print(input["result"].shape) # 失败则此键值一定不存在,即使输入 ## 自定义dockerfile {#selfdocker} -参考[示例dockerfile](https://github.com/torchpipe/torchpipe/blob/main/docker/trt9.1.base),预先下载[TensorRT](https://github.com/NVIDIA/TensorRT/tree/release/9.1#optional---if-not-using-tensorrt-container-specify-the-tensorrt-ga-release-build-path)后可编译相关基础环境镜像。 +参考[示例dockerfile](https://github.com/torchpipe/torchpipe/blob/main/docker/trt9.base),预先下载[TensorRT](https://github.com/NVIDIA/TensorRT/tree/release/9.1#optional---if-not-using-tensorrt-container-specify-the-tensorrt-ga-release-build-path)后可编译相关基础环境镜像。 ```bash -# put TensorRT-9.1.0.4.Linux.x86_64-gnu.cuda-11.8.tar.gz into thirdparty/ +# put TensorRT-9.*.Linux.x86_64-gnu.cuda-11.8.tar.gz into thirdparty/ -# docker build --network=host -f docker/trt9.1.base -t torchpipe:base_trt-9.1 . +# docker build --network=host -f docker/trt9.base -t torchpipe:base_trt-9 . -# docker run --rm --network=host --gpus=all --ulimit memlock=-1 --ulimit stack=67108864 --privileged=true -v `pwd`:/workspace -it torchpipe:base_trt-9.1 /bin/bash +# docker run --rm --network=host --gpus=all --ulimit memlock=-1 --ulimit stack=67108864 --privileged=true -v `pwd`:/workspace -it torchpipe:base_trt-9 /bin/bash ``` 这种方式编译出的基础镜像比NGC pytorch镜像体积更小. 
需要注意,其`_GLIBCXX_USE_CXX11_ABI==0` diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/tools/vis.mdx b/i18n/zh/docusaurus-plugin-content-docs/current/tools/vis.mdx index 7feeee8..87ca247 100755 --- a/i18n/zh/docusaurus-plugin-content-docs/current/tools/vis.mdx +++ b/i18n/zh/docusaurus-plugin-content-docs/current/tools/vis.mdx @@ -3,26 +3,23 @@ id: vis title: 配置文件可视化 type: explainer --- +从0.4.4版本开始生效 针对配置文件,我们提供了简单的网页可视化功能。 ## 环境准备 ```bash -apt-get update -apt install graphviz -pip install pydot gradio + +pip install gradio ``` ## 使用方法 {#parameter} - `torchpipe.utils.vis [-h] [--port PORT] [--save] toml` -:::tip 参数 -- **--save** - 是否将图保存为svg图片。图片将保存在当前目录下,与toml文件(后缀不同) -::: +```python +import torchpipe as tp +a=tp.parse_toml("examples/ppocr/ocr.toml") -## 示例 -```bash -python -m torchpipe.utils.vis your.toml # --port 2211 +tp.utils.Visual(a).launch() ``` \ No newline at end of file
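
The inline-compilation snippet added to `extensible_backend.mdx` above relies on `torchpipe` already being imported as `tp`. Below is a minimal self-contained sketch of that example, assuming torchpipe is installed and that `tp.utils.cpp_extension.load_filter` / `load_backend` compile the inline C++ source strings at runtime exactly as shown in the patch; note the `Skip` filter is only registered here and is not attached to any node.

```python
import torchpipe as tp

# Register a filter named 'Skip', compiled from an inline C++ source string.
# It is only registered; wiring it into a node is a separate configuration step.
tp.utils.cpp_extension.load_filter(
    name='Skip',
    sources='status forward(dict data){return status::Skip;}',
    sources_header="")

# Register a backend named 'identity' that copies the "data" entry to "result".
tp.utils.cpp_extension.load_backend(
    name='identity',
    sources='void forward(dict data){(*data)["result"] = (*data)["data"];}',
    sources_header="")

# Build a single-node pipe on the freshly compiled backend and run it in place.
model = tp.pipe({"backend": 'identity'})
input = {"data": 2}
model(input)
assert input["result"] == 2
```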