This is an adapter which implements the internal model-mesh model management API for Triton Inference Server.
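For orientation, the management API in question is a gRPC service that model-mesh calls to load and unload models on the runtime. Below is a rough Go-style sketch of that surface; the method and message names are assumptions modeled on model-mesh's model-runtime protocol, not copied from this repository.

```go
// sketch.go — an illustrative sketch of the model management surface this
// adapter implements on behalf of Triton. Names are assumptions based on
// model-mesh's model-runtime gRPC protocol, not taken from this repository.
package sketch

import "context"

// Placeholder request/response messages; the real ones are protobuf-generated.
type (
	LoadModelRequest      struct{ ModelId, ModelType, ModelPath string }
	LoadModelResponse     struct{ SizeInBytes uint64 }
	UnloadModelRequest    struct{ ModelId string }
	UnloadModelResponse   struct{}
	RuntimeStatusRequest  struct{}
	RuntimeStatusResponse struct{ Status string }
)

// ModelRuntime is the management API model-mesh calls on each runtime adapter.
type ModelRuntime interface {
	// LoadModel loads a model into Triton and reports the memory it consumes.
	LoadModel(ctx context.Context, req *LoadModelRequest) (*LoadModelResponse, error)
	// UnloadModel unloads a previously loaded model.
	UnloadModel(ctx context.Context, req *UnloadModelRequest) (*UnloadModelResponse, error)
	// RuntimeStatus reports readiness and capacity of the runtime.
	RuntimeStatus(ctx context.Context, req *RuntimeStatusRequest) (*RuntimeStatusResponse, error)
}
```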
- Clone the repository:

  ```
  $ git clone https://github.com/kserve/modelmesh-runtime-adapter.git
  $ cd modelmesh-runtime-adapter/model-mesh-triton-adapter
  ```
- Pull the Triton Serving Docker image:

  ```
  $ docker pull nvcr.io/nvidia/tritonserver:20.09-py3
  ```
- Run the Triton Serving container with model data mounted:

  By default, the Triton Serving container exposes port 8000 for HTTP and port 8001 for gRPC. Use the following command to forward the container's 8000 to your workstation's 8000 and the container's 8001 to your workstation's 8001 (a readiness-check sketch follows this step):

  ```
  $ docker run -p 8000:8000 -p 8001:8001 \
      -v $(pwd)/examples/models:/models \
      nvcr.io/nvidia/tritonserver:20.09-py3 \
      tritonserver --model-store=/models \
      --model-control-mode=explicit \
      --strict-model-config=false \
      --strict-readiness=false
  ```
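  As a quick sanity check that the container is up, something like the following polls Triton's v2 HTTP/REST readiness endpoint on the mapped HTTP port. This is a minimal sketch, not part of the repository; note that with `--strict-readiness=false` the server reports ready even before models are loaded.

  ```go
  // readiness_check.go — polls Triton's v2 readiness endpoint via the
  // port mapping above until it returns 200 or we give up.
  package main

  import (
  	"fmt"
  	"net/http"
  	"time"
  )

  func main() {
  	const url = "http://localhost:8000/v2/health/ready"

  	for attempt := 0; attempt < 10; attempt++ {
  		resp, err := http.Get(url)
  		if err == nil {
  			resp.Body.Close()
  			if resp.StatusCode == http.StatusOK {
  				fmt.Println("Triton is ready")
  				return
  			}
  		}
  		time.Sleep(2 * time.Second)
  	}
  	fmt.Println("Triton did not become ready in time")
  }
  ```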
- Set up your Golang, gRPC, and Protocol Buffers development environment locally:

  Follow the [gRPC Go Quick Start Guide](https://grpc.io/docs/languages/go/quickstart/).
- Run the Triton adapter with the following (a configuration sketch follows this step):

  ```
  $ export ROOT_MODEL_DIR=$(pwd)/examples/models
  $ export CONTAINER_MEM_REQ_BYTES=268435456 # 256MB
  $ go run main.go
  ```
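  The two variables above configure the adapter: `ROOT_MODEL_DIR` points at the directory containing model data, and `CONTAINER_MEM_REQ_BYTES` expresses the memory allocated to the serving container as an integer byte count (an interpretation based on the variable names; check the adapter source for how they are actually used). A minimal, hypothetical sketch of reading such configuration:

  ```go
  // config_sketch.go — an illustrative sketch (not the adapter's actual
  // code) of reading the two environment variables exported above.
  package main

  import (
  	"fmt"
  	"os"
  	"strconv"
  )

  func main() {
  	// ROOT_MODEL_DIR: directory the adapter scans for model data.
  	modelDir := os.Getenv("ROOT_MODEL_DIR")

  	// CONTAINER_MEM_REQ_BYTES: container memory as an integer byte count
  	// (268435456 = 256 * 1024 * 1024).
  	memBytes, err := strconv.ParseInt(os.Getenv("CONTAINER_MEM_REQ_BYTES"), 10, 64)
  	if err != nil {
  		fmt.Fprintln(os.Stderr, "CONTAINER_MEM_REQ_BYTES must be an integer byte count:", err)
  		os.Exit(1)
  	}

  	fmt.Printf("serving models from %s with a %d-byte memory budget\n", modelDir, memBytes)
  }
  ```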
- Test the adapter with this client from another terminal (a rough sketch of what such a client does follows below):

  ```
  $ go run triton/adapter_client/adapter_client.go
  ```
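  For a sense of what a test client like this does, here is a rough sketch of calling the adapter's management API over gRPC. The import path, message fields, model ID, and the adapter port (8085) are all assumptions, not confirmed from the repository; see `triton/adapter_client/adapter_client.go` for the authoritative version.

  ```go
  // client_sketch.go — a hypothetical client exercising the adapter's
  // management API. All names below marked "assumed" are illustrative.
  package main

  import (
  	"context"
  	"log"
  	"time"

  	"google.golang.org/grpc"
  	"google.golang.org/grpc/credentials/insecure"

  	mmesh "github.com/kserve/modelmesh-runtime-adapter/internal/proto/mmesh" // assumed import path
  )

  func main() {
  	// Assumed default adapter port; check main.go for the actual value.
  	conn, err := grpc.Dial("localhost:8085", grpc.WithTransportCredentials(insecure.NewCredentials()))
  	if err != nil {
  		log.Fatalf("failed to connect to adapter: %v", err)
  	}
  	defer conn.Close()

  	client := mmesh.NewModelRuntimeClient(conn)
  	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
  	defer cancel()

  	// Check that the adapter reports itself ready before loading anything.
  	status, err := client.RuntimeStatus(ctx, &mmesh.RuntimeStatusRequest{})
  	if err != nil {
  		log.Fatalf("runtimeStatus failed: %v", err)
  	}
  	log.Printf("runtime status: %v", status)

  	// Ask the adapter to load one of the example models (model ID assumed).
  	resp, err := client.LoadModel(ctx, &mmesh.LoadModelRequest{ModelId: "example-model"})
  	if err != nil {
  		log.Fatalf("loadModel failed: %v", err)
  	}
  	log.Printf("model loaded, reported size: %d bytes", resp.SizeInBytes)
  }
  ```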