Commit

Update stable version to 0.24.1
deliahu committed Dec 13, 2020
1 parent c4c65fc commit 35f1291
Showing 7 changed files with 43 additions and 43 deletions.
26 changes: 13 additions & 13 deletions docs/aws/install.md
@@ -82,19 +82,19 @@ The docker images used by the Cortex cluster can also be overridden, although th
<!-- CORTEX_VERSION_BRANCH_STABLE -->
```yaml
-image_operator: quay.io/cortexlabs/operator:0.24.0
-image_manager: quay.io/cortexlabs/manager:0.24.0
-image_downloader: quay.io/cortexlabs/downloader:0.24.0
-image_request_monitor: quay.io/cortexlabs/request-monitor:0.24.0
-image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:0.24.0
-image_metrics_server: quay.io/cortexlabs/metrics-server:0.24.0
-image_inferentia: quay.io/cortexlabs/inferentia:0.24.0
-image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:0.24.0
-image_nvidia: quay.io/cortexlabs/nvidia:0.24.0
-image_fluentd: quay.io/cortexlabs/fluentd:0.24.0
-image_statsd: quay.io/cortexlabs/statsd:0.24.0
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.24.0
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.24.0
+image_operator: quay.io/cortexlabs/operator:0.24.1
+image_manager: quay.io/cortexlabs/manager:0.24.1
+image_downloader: quay.io/cortexlabs/downloader:0.24.1
+image_request_monitor: quay.io/cortexlabs/request-monitor:0.24.1
+image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:0.24.1
+image_metrics_server: quay.io/cortexlabs/metrics-server:0.24.1
+image_inferentia: quay.io/cortexlabs/inferentia:0.24.1
+image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:0.24.1
+image_nvidia: quay.io/cortexlabs/nvidia:0.24.1
+image_fluentd: quay.io/cortexlabs/fluentd:0.24.1
+image_statsd: quay.io/cortexlabs/statsd:0.24.1
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.24.1
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.24.1
```
## Advanced
14 changes: 7 additions & 7 deletions docs/gcp/install.md
@@ -48,11 +48,11 @@ The docker images used by the Cortex cluster can also be overridden, although th
<!-- CORTEX_VERSION_BRANCH_STABLE -->
```yaml
-image_operator: quay.io/cortexlabs/operator:0.24.0
-image_manager: quay.io/cortexlabs/manager:0.24.0
-image_downloader: quay.io/cortexlabs/downloader:0.24.0
-image_statsd: quay.io/cortexlabs/statsd:0.24.0
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.24.0
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.24.0
-image_pause: quay.io/cortexlabs/pause:0.24.0
+image_operator: quay.io/cortexlabs/operator:0.24.1
+image_manager: quay.io/cortexlabs/manager:0.24.1
+image_downloader: quay.io/cortexlabs/downloader:0.24.1
+image_statsd: quay.io/cortexlabs/statsd:0.24.1
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.24.1
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.24.1
+image_pause: quay.io/cortexlabs/pause:0.24.1
```
2 changes: 1 addition & 1 deletion docs/guides/self-hosted-images.md
@@ -30,7 +30,7 @@ set -euo pipefail
# user set variables
ecr_region="us-west-2"
aws_account_id="620970939130" # example account ID
-cortex_version="0.24.0"
+cortex_version="0.24.1"

source_registry="quay.io/cortexlabs"
destination_ecr_prefix="cortexlabs"
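The mirroring loop that uses these variables is elided from this hunk. As a hedged illustration only, the sketch below shows how variables like these would typically compose the source and destination image references; the `destination_registry` format is an assumption based on the standard ECR registry naming scheme (`<account>.dkr.ecr.<region>.amazonaws.com/<prefix>`), and the loop body shown here is not part of this diff.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Same user-set variables as the script above
ecr_region="us-west-2"
aws_account_id="620970939130" # example account ID
cortex_version="0.24.1"
source_registry="quay.io/cortexlabs"
destination_ecr_prefix="cortexlabs"

# Assumed: standard ECR private registry hostname format
destination_registry="${aws_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/${destination_ecr_prefix}"

# For one hypothetical image, print the source and destination references
# that a pull/tag/push loop would operate on
image="operator:${cortex_version}"
echo "${source_registry}/${image}"
echo "${destination_registry}/${image}"
```

Running this prints `quay.io/cortexlabs/operator:0.24.1` followed by `620970939130.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/operator:0.24.1`, which is the shape of reference a `docker pull` / `docker tag` / `docker push` sequence would consume.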
8 changes: 4 additions & 4 deletions docs/workloads/batch/configuration.md
@@ -11,7 +11,7 @@
path: <string> # path to a python file with a PythonPredictor class definition, relative to the Cortex root (required)
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.24.0 or quay.io/cortexlabs/python-predictor-gpu:0.24.0 based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.24.1 or quay.io/cortexlabs/python-predictor-gpu:0.24.1 based on compute)
env: <string: string> # dictionary of environment variables
networking:
endpoint: <string> # the endpoint for the API (default: <api_name>)
@@ -44,8 +44,8 @@
batch_interval: <duration> # the maximum amount of time to spend waiting for additional requests before running inference on the batch of requests
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.24.0)
-tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:0.24.0 or quay.io/cortexlabs/tensorflow-serving-cpu:0.24.0 based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.24.1)
+tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:0.24.1 or quay.io/cortexlabs/tensorflow-serving-cpu:0.24.1 based on compute)
env: <string: string> # dictionary of environment variables
networking:
endpoint: <string> # the endpoint for the API (default: <api_name>)
@@ -74,7 +74,7 @@
...
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:0.24.0 or quay.io/cortexlabs/onnx-predictor-cpu:0.24.0 based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:0.24.1 or quay.io/cortexlabs/onnx-predictor-cpu:0.24.1 based on compute)
env: <string: string> # dictionary of environment variables
networking:
endpoint: <string> # the endpoint for the API (default: <api_name>)
8 changes: 4 additions & 4 deletions docs/workloads/realtime/configuration.md
@@ -22,7 +22,7 @@
threads_per_process: <int> # the number of threads per process (default: 1)
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.24.0 or quay.io/cortexlabs/python-predictor-gpu:0.24.0 based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.24.1 or quay.io/cortexlabs/python-predictor-gpu:0.24.1 based on compute)
env: <string: string> # dictionary of environment variables
networking:
endpoint: <string> # the endpoint for the API (aws only) (default: <api_name>)
@@ -82,8 +82,8 @@
threads_per_process: <int> # the number of threads per process (default: 1)
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.24.0)
-tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:0.24.0 or quay.io/cortexlabs/tensorflow-serving-cpu:0.24.0 based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.24.1)
+tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-gpu:0.24.1 or quay.io/cortexlabs/tensorflow-serving-cpu:0.24.1 based on compute)
env: <string: string> # dictionary of environment variables
networking:
endpoint: <string> # the endpoint for the API (aws only) (default: <api_name>)
@@ -138,7 +138,7 @@
threads_per_process: <int> # the number of threads per process (default: 1)
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:0.24.0 or quay.io/cortexlabs/onnx-predictor-cpu:0.24.0 based on compute)
+image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-gpu:0.24.1 or quay.io/cortexlabs/onnx-predictor-cpu:0.24.1 based on compute)
env: <string: string> # dictionary of environment variables
networking:
endpoint: <string> # the endpoint for the API (aws only) (default: <api_name>)
26 changes: 13 additions & 13 deletions docs/workloads/system-packages.md
@@ -47,19 +47,19 @@ mkdir my-api && cd my-api && touch Dockerfile
Cortex's base Docker images are listed below. Depending on the Cortex Predictor and compute type specified in your API configuration, choose one of these images to use as the base for your Docker image:

<!-- CORTEX_VERSION_BRANCH_STABLE x12 -->
-* Python Predictor (CPU): `quay.io/cortexlabs/python-predictor-cpu-slim:0.24.0`
+* Python Predictor (CPU): `quay.io/cortexlabs/python-predictor-cpu-slim:0.24.1`
* Python Predictor (GPU): choose one of the following:
-  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.24.0-cuda10.0-cudnn7`
-  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.24.0-cuda10.1-cudnn7`
-  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.24.0-cuda10.1-cudnn8`
-  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.24.0-cuda10.2-cudnn7`
-  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.24.0-cuda10.2-cudnn8`
-  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.24.0-cuda11.0-cudnn8`
-  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.24.0-cuda11.1-cudnn8`
-* Python Predictor (Inferentia): `quay.io/cortexlabs/python-predictor-inf-slim:0.24.0`
-* TensorFlow Predictor (CPU, GPU, Inferentia): `quay.io/cortexlabs/tensorflow-predictor-slim:0.24.0`
-* ONNX Predictor (CPU): `quay.io/cortexlabs/onnx-predictor-cpu-slim:0.24.0`
-* ONNX Predictor (GPU): `quay.io/cortexlabs/onnx-predictor-gpu-slim:0.24.0`
+  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.24.1-cuda10.0-cudnn7`
+  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.24.1-cuda10.1-cudnn7`
+  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.24.1-cuda10.1-cudnn8`
+  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.24.1-cuda10.2-cudnn7`
+  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.24.1-cuda10.2-cudnn8`
+  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.24.1-cuda11.0-cudnn8`
+  * `quay.io/cortexlabs/python-predictor-gpu-slim:0.24.1-cuda11.1-cudnn8`
+* Python Predictor (Inferentia): `quay.io/cortexlabs/python-predictor-inf-slim:0.24.1`
+* TensorFlow Predictor (CPU, GPU, Inferentia): `quay.io/cortexlabs/tensorflow-predictor-slim:0.24.1`
+* ONNX Predictor (CPU): `quay.io/cortexlabs/onnx-predictor-cpu-slim:0.24.1`
+* ONNX Predictor (GPU): `quay.io/cortexlabs/onnx-predictor-gpu-slim:0.24.1`

Note: the images listed above use the `-slim` suffix; Cortex's default API images are not `-slim`, since they have additional dependencies installed to cover common use cases. If you are building your own Docker image, starting with a `-slim` Predictor image will result in a smaller image size.

@@ -69,7 +69,7 @@ The sample Dockerfile below inherits from Cortex's Python CPU serving image, and
```dockerfile
# Dockerfile

-FROM quay.io/cortexlabs/python-predictor-cpu-slim:0.24.0
+FROM quay.io/cortexlabs/python-predictor-cpu-slim:0.24.1

RUN apt-get update \
&& apt-get install -y tree \
2 changes: 1 addition & 1 deletion get-cli.sh
@@ -16,7 +16,7 @@

set -e

-CORTEX_VERSION_BRANCH_STABLE=0.24.0
+CORTEX_VERSION_BRANCH_STABLE=0.24.1
CORTEX_INSTALL_PATH="${CORTEX_INSTALL_PATH:-/usr/local/bin/cortex}"

# replace ~ with the home directory path
