Update mlflow to avoid CVE-2024-27132 and CVE-2024-27133 (nv-morpheus#1609)

* Avoids two related CVEs involving MLflow
* Update mlflow-triton documentation 
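
A minimal way to confirm an environment picked up the patched range is sketched below; this is illustrative only (not part of this PR) and assumes both `mlflow` and the `packaging` helper are importable:

```python
# Illustrative sketch (not part of this PR): check that the installed MLflow
# falls inside the patched range >=2.10.0,<3 used throughout this change.
from packaging.version import Version

import mlflow

installed = Version(mlflow.__version__)
assert Version("2.10.0") <= installed < Version("3"), (
    f"mlflow {installed} is outside the patched range")
print(f"mlflow {installed} includes the fixes for CVE-2024-27132 / CVE-2024-27133")
```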

## By Submitting this PR I confirm:
- I am familiar with the [Contributing Guidelines](https://github.com/nv-morpheus/Morpheus/blob/main/docs/source/developer_guide/contributing.md).
- When the PR is ready for review, new or existing tests cover these changes.
- When the PR is ready for review, the documentation is up to date with these changes.

Authors:
  - David Gardner (https://github.com/dagardner-nv)

Approvers:
  - Michael Demoret (https://github.com/mdemoret-nv)

URL: nv-morpheus#1609
dagardner-nv authored Apr 10, 2024
1 parent 52f77f5 commit 0db1086
Showing 10 changed files with 19 additions and 19 deletions.
2 changes: 1 addition & 1 deletion ci/conda/recipes/morpheus/meta.yaml
@@ -91,7 +91,7 @@ outputs:
- grpcio # Version determined from cudf
- libmrc
- libwebp>=1.3.2 # Required for CVE mitigation: https://nvd.nist.gov/vuln/detail/CVE-2023-4863
- mlflow>=2.2.1,<3
- mlflow>=2.10.0,<3
- mrc
- networkx>=2.8
- numpydoc =1.5.*
2 changes: 1 addition & 1 deletion conda/environments/all_cuda-121_arch-x86_64.yaml
@@ -56,7 +56,7 @@ dependencies:
- librdkafka>=1.9.2,<1.10.0a0
- libtool
- libwebp=1.3.2
- mlflow=2.9.2
- mlflow>=2.10.0,<3
- mrc=24.03
- myst-parser=0.18.1
- nbsphinx
2 changes: 1 addition & 1 deletion conda/environments/dev_cuda-121_arch-x86_64.yaml
@@ -45,7 +45,7 @@ dependencies:
- isort
- librdkafka>=1.9.2,<1.10.0a0
- libtool
- mlflow=2.9.2
- mlflow>=2.10.0,<3
- mrc=24.03
- myst-parser=0.18.1
- nbsphinx
2 changes: 1 addition & 1 deletion conda/environments/examples_cuda-121_arch-x86_64.yaml
@@ -27,7 +27,7 @@ dependencies:
- jsonpatch>=1.33
- kfp
- libwebp=1.3.2
- mlflow=2.9.2
- mlflow>=2.10.0,<3
- networkx=2.8.8
- newspaper3k=0.2
- nodejs=18.*
2 changes: 1 addition & 1 deletion conda/environments/runtime_cuda-121_arch-x86_64.yaml
@@ -16,7 +16,7 @@ dependencies:
- elasticsearch==8.9.0
- feedparser=6.0.10
- grpcio=1.59
- mlflow=2.9.2
- mlflow>=2.10.0,<3
- networkx=2.8.8
- numpydoc=1.5
- nvtabular=23.08.00
4 changes: 2 additions & 2 deletions dependencies.yaml
@@ -253,7 +253,7 @@ dependencies:
- elasticsearch==8.9.0
- feedparser=6.0.10
- grpcio=1.59
- mlflow=2.9.2
- mlflow>=2.10.0,<3
- networkx=2.8.8
- nvtabular=23.08.00
- pydantic
@@ -301,7 +301,7 @@ dependencies:
- dask=2023.12.1
- distributed=2023.12.1
- kfp
- mlflow=2.9.2
- mlflow>=2.10.0,<3
- papermill=2.4.0
- s3fs=2023.12.2

2 changes: 1 addition & 1 deletion examples/digital_fingerprinting/production/conda_env.yml
@@ -27,7 +27,7 @@ dependencies:
- distributed
- kfp
- librdkafka
- mlflow>=2.2.1,<3
- mlflow>=2.10.0,<3
- nodejs=18.*
- nvtabular=23.06
- papermill
@@ -24,7 +24,7 @@ RUN apt update && \
rm -rf /var/cache/apt/* /var/lib/apt/lists/*

# Install python packages
RUN pip install "mlflow >=2.2.1,<3" boto3 pymysql pyyaml
RUN pip install "mlflow >=2.10.0,<3" boto3 pymysql pyyaml

# We run on port 5000
EXPOSE 5000
18 changes: 9 additions & 9 deletions models/mlflow/README.md
@@ -22,8 +22,8 @@ are included for publishing TensorRT, ONNX and FIL models to your MLflow Model R

## Requirements

* MLflow (tested on 1.24.0)
* Python (tested on 3.8)
* MLflow (tested on 2.11.3)
* Python (tested on 3.11)

## Install Triton Docker Image

@@ -89,7 +89,7 @@ Create an MLflow container with a volume mounting the Triton model repository:
```bash
docker run -it -v /opt/triton_models:/triton_models \
--env TRITON_MODEL_REPO=/triton_models \
--env MLFLOW_TRACKING_URI=localhost:5000 \
--env MLFLOW_TRACKING_URI="http://localhost:5000" \
--gpus '"device=0"' \
--net=host \
--rm \
@@ -115,29 +115,29 @@ The `publish_model_to_mlflow` script is used to publish `triton` flavor models t
```
python publish_model_to_mlflow.py \
--model_name sid-minibert-onnx \
--model_directory <path-to-morpheus-models-repo>/models/triton-model-repo/sid-minibert-onnx \
--model_directory /triton_models/triton-model-repo/sid-minibert-onnx \
--flavor triton
```
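
As a quick follow-up (a sketch only, not part of the README shown here), the registry can be queried to confirm the model was published; this assumes `MLFLOW_TRACKING_URI` points at the tracking server started above:

```python
# Hedged sketch: confirm the published model is visible in the MLflow Model Registry.
# Assumes MLFLOW_TRACKING_URI is exported, e.g. http://localhost:5000.
from mlflow.tracking import MlflowClient

client = MlflowClient()
model = client.get_registered_model("sid-minibert-onnx")
print(model.name, [v.version for v in model.latest_versions])
```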

## Deployments

The Triton `mlflow-triton-plugin` is installed on this container and can be used to deploy your models from MLflow to Triton Inference Server. The following are examples of how the plugin is used with the `sid-minibert-onnx` model that we published to MLflow above. For more information about the
`mlflow-triton-plugin`, refer to Triton's [documentation](https://github.com/triton-inference-server/server/tree/r23.01/deploy/mlflow-triton-plugin)
`mlflow-triton-plugin`, refer to Triton's [documentation](https://github.com/triton-inference-server/server/tree/r24.03/deploy/mlflow-triton-plugin)

### Create Deployment

To create a deployment use the following command

##### CLI
```
mlflow deployments create -t triton --flavor triton --name sid-minibert-onnx -m models:/sid-minibert-onnx/1
mlflow deployments create -t triton --flavor triton --name sid-minibert-onnx -m "models:/sid-minibert-onnx/1"
```

##### Python API
```
from mlflow.deployments import get_deploy_client
client = get_deploy_client('triton')
client.create_deployment("sid-minibert-onnx", " models:/sid-minibert-onnx/1", flavor="triton")
client.create_deployment("sid-minibert-onnx", "models:/sid-minibert-onnx/1", flavor="triton")
```
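
To double-check that the deployment reached Triton (a sketch, not part of the plugin documentation; the List Deployments section further down shows the supported commands), the same client can enumerate what the plugin manages:

```python
# Hedged sketch: list the deployments the Triton plugin currently manages
# and check that the one created above is present.
from mlflow.deployments import get_deploy_client

client = get_deploy_client('triton')
names = [d["name"] for d in client.list_deployments()]
assert "sid-minibert-onnx" in names, "deployment not found on the Triton server"
print(names)
```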

### Delete Deployment
@@ -158,14 +158,14 @@ client.delete_deployment("sid-minibert-onnx")

##### CLI
```
mlflow deployments update -t triton --flavor triton --name sid-minibert-onnx -m models:/sid-minibert-onnx/2
mlflow deployments update -t triton --flavor triton --name sid-minibert-onnx -m "models:/sid-minibert-onnx/1"
```

##### Python API
```
from mlflow.deployments import get_deploy_client
client = get_deploy_client('triton')
client.update_deployment("sid-minibert-onnx", "models:/sid-minibert-onnx/2", flavor="triton")
client.update_deployment("sid-minibert-onnx", "models:/sid-minibert-onnx/1", flavor="triton")
```

### List Deployments
2 changes: 1 addition & 1 deletion models/mlflow/docker/Dockerfile
@@ -44,7 +44,7 @@ RUN sed -i 's/conda activate base/conda activate mlflow/g' ~/.bashrc
SHELL ["/opt/conda/bin/conda", "run", "-n", "mlflow", "/bin/bash", "-c"]

ARG TRITON_DIR=/mlflow/triton-inference-server
ARG TRITON_VER=r24.01
ARG TRITON_VER=r24.03

RUN mkdir ${TRITON_DIR} && \
cd ${TRITON_DIR} && \
