Update version to 0.29.0
deliahu committed Feb 16, 2021
1 parent ada69b9 commit 1ae8849
Showing 35 changed files with 114 additions and 114 deletions.
2 changes: 1 addition & 1 deletion build/build-image.sh
@@ -19,7 +19,7 @@ set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

-CORTEX_VERSION=master
+CORTEX_VERSION=0.29.0

image=$1

2 changes: 1 addition & 1 deletion build/cli.sh
@@ -19,7 +19,7 @@ set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

-CORTEX_VERSION=master
+CORTEX_VERSION=0.29.0

arg1=${1:-""}
upload="false"
2 changes: 1 addition & 1 deletion build/push-image.sh
@@ -17,7 +17,7 @@

set -euo pipefail

-CORTEX_VERSION=master
+CORTEX_VERSION=0.29.0

image=$1

4 changes: 2 additions & 2 deletions charts/Chart.yaml
@@ -2,5 +2,5 @@ apiVersion: v2
name: cortex
description: A Helm chart for installing Cortex
type: application
-version: 0.1.0 # CORTEX_VERSION
-appVersion: "master" # CORTEX_VERSION
+version: 0.29.0 # CORTEX_VERSION
+appVersion: "0.29.0" # CORTEX_VERSION
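
The `# CORTEX_VERSION` comments on the changed lines look like markers for scripted version bumps. A minimal sketch of how such a bump could work (hypothetical tooling, not necessarily the repo's actual release script):

```shell
# Hypothetical sketch: rewrite only lines tagged with the CORTEX_VERSION marker,
# leaving untagged occurrences of the old string untouched.
old="master"
new="0.29.0"
tmp="$(mktemp)"
cat > "$tmp" <<EOF
appVersion: "master" # CORTEX_VERSION
image: quay.io/cortexlabs/operator:master
EOF
# The sed address /# CORTEX_VERSION/ restricts the substitution to marked lines
sed -i.bak "/# CORTEX_VERSION/ s/${old}/${new}/" "$tmp"
cat "$tmp"
```

Only the first line changes; the unmarked `operator:master` line is left alone, which is why a bump commit like this one must update unmarked references (e.g. in docs) separately.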
4 changes: 2 additions & 2 deletions charts/charts/networking/Chart.yaml
@@ -2,5 +2,5 @@ apiVersion: v2
name: networking
description: A Helm chart for setting up Cortex's networking dependencies
type: application
-version: 0.1.0 # CORTEX_VERSION
-appVersion: "master" # CORTEX_VERSION
+version: 0.29.0 # CORTEX_VERSION
+appVersion: "0.29.0" # CORTEX_VERSION
2 changes: 1 addition & 1 deletion charts/charts/networking/charts/api-ingress/values.yaml
@@ -186,7 +186,7 @@ global:
hub: quay.io/cortexlabs

# Default tag for Istio images.
-  tag: master # CORTEX_VERSION
+  tag: "0.29.0" # CORTEX_VERSION

# Specify image pull policy if default behavior isn't desired.
# Default behavior: latest images will be Always else IfNotPresent.
4 changes: 2 additions & 2 deletions charts/charts/networking/values.yaml
@@ -89,7 +89,7 @@ istio-discovery:
rollingMaxUnavailable: 25%

hub: quay.io/cortexlabs
-    tag: master # CORTEX_VERSION
+    tag: "0.29.0" # CORTEX_VERSION

# Can be a full hub/image:tag
image: istio-pilot
@@ -128,7 +128,7 @@ global:
hub: quay.io/cortexlabs

# Default tag for Istio images.
-  tag: master # CORTEX_VERSION
+  tag: "0.29.0" # CORTEX_VERSION

# Comma-separated minimum per-scope logging level of messages to output, in the form of <scope>:<level>,<scope>:<level>
# The control plane has different scopes depending on component, but can configure default log level across all components
40 changes: 20 additions & 20 deletions charts/values.yaml
@@ -11,30 +11,30 @@ cortex:
project: ""

# CORTEX_VERSION
-  image_operator: quay.io/cortexlabs/operator:master
-  image_manager: quay.io/cortexlabs/manager:master
-  image_downloader: quay.io/cortexlabs/downloader:master
-  image_request_monitor: quay.io/cortexlabs/request-monitor:master
-  image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:master
-  image_metrics_server: quay.io/cortexlabs/metrics-server:master
-  image_inferentia: quay.io/cortexlabs/inferentia:master
-  image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:master
-  image_nvidia: quay.io/cortexlabs/nvidia:master
-  image_fluent_bit: quay.io/cortexlabs/fluent-bit:master
-  image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
-  image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
-  image_google_pause: quay.io/cortexlabs/pause:master
-  image_prometheus: quay.io/cortexlabs/prometheus:master
-  image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:master
-  image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:master
-  image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:master
-  image_grafana: quay.io/cortexlabs/grafana:master
+  image_operator: quay.io/cortexlabs/operator:0.29.0
+  image_manager: quay.io/cortexlabs/manager:0.29.0
+  image_downloader: quay.io/cortexlabs/downloader:0.29.0
+  image_request_monitor: quay.io/cortexlabs/request-monitor:0.29.0
+  image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:0.29.0
+  image_metrics_server: quay.io/cortexlabs/metrics-server:0.29.0
+  image_inferentia: quay.io/cortexlabs/inferentia:0.29.0
+  image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:0.29.0
+  image_nvidia: quay.io/cortexlabs/nvidia:0.29.0
+  image_fluent_bit: quay.io/cortexlabs/fluent-bit:0.29.0
+  image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.29.0
+  image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.29.0
+  image_google_pause: quay.io/cortexlabs/pause:0.29.0
+  image_prometheus: quay.io/cortexlabs/prometheus:0.29.0
+  image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:0.29.0
+  image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:0.29.0
+  image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:0.29.0
+  image_grafana: quay.io/cortexlabs/grafana:0.29.0

networking:
istio-discovery:
pilot:
hub: quay.io/cortexlabs
-      tag: master # CORTEX_VERSION
+      tag: "0.29.0" # CORTEX_VERSION

# Can be a full hub/image:tag
image: istio-pilot
@@ -48,7 +48,7 @@ global:
hub: quay.io/cortexlabs

# Default tag for Istio images.
-    tag: master # CORTEX_VERSION
+    tag: "0.29.0" # CORTEX_VERSION

proxy:
image: istio-proxy
2 changes: 1 addition & 1 deletion dev/registry.sh
@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-CORTEX_VERSION=master
+CORTEX_VERSION=0.29.0

set -eo pipefail

8 changes: 4 additions & 4 deletions docs/clients/install.md
@@ -9,10 +9,10 @@ pip install cortex
```

<!-- CORTEX_VERSION_README x2 -->
-To install or upgrade to a specific version (e.g. v0.28.0):
+To install or upgrade to a specific version (e.g. v0.29.0):

```bash
-pip install cortex==0.28.0
+pip install cortex==0.29.0
```

To upgrade to the latest version:
@@ -25,8 +25,8 @@ pip install --upgrade cortex

<!-- CORTEX_VERSION_README x2 -->
```bash
-# For example to download CLI version 0.28.0 (Note the "v"):
-$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.28.0/get-cli.sh)"
+# For example to download CLI version 0.29.0 (Note the "v"):
+$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.29.0/get-cli.sh)"
```

By default, the Cortex CLI is installed at `/usr/local/bin/cortex`. To install the executable elsewhere, export the `CORTEX_INSTALL_PATH` environment variable to your desired location before running the command above.
2 changes: 1 addition & 1 deletion docs/clients/python.md
@@ -88,7 +88,7 @@ Deploy an API.

**Arguments**:

-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/ for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.29/ for schema.
- `predictor` - A Cortex Predictor class implementation. Not required for TaskAPI/TrafficSplitter kinds.
- `task` - A callable class/function implementation. Not required for RealtimeAPI/BatchAPI/TrafficSplitter kinds.
- `requirements` - A list of PyPI dependencies that will be installed before the predictor class implementation is invoked.
34 changes: 17 additions & 17 deletions docs/clusters/aws/install.md
@@ -89,21 +89,21 @@ The docker images used by the Cortex cluster can also be overridden, although th
<!-- CORTEX_VERSION_BRANCH_STABLE -->
```yaml
-image_operator: quay.io/cortexlabs/operator:master
-image_manager: quay.io/cortexlabs/manager:master
-image_downloader: quay.io/cortexlabs/downloader:master
-image_request_monitor: quay.io/cortexlabs/request-monitor:master
-image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:master
-image_metrics_server: quay.io/cortexlabs/metrics-server:master
-image_inferentia: quay.io/cortexlabs/inferentia:master
-image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:master
-image_nvidia: quay.io/cortexlabs/nvidia:master
-image_fluent_bit: quay.io/cortexlabs/fluent-bit:master
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
-image_prometheus: quay.io/cortexlabs/prometheus:master
-image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:master
-image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:master
-image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:master
-image_grafana: quay.io/cortexlabs/grafana:master
+image_operator: quay.io/cortexlabs/operator:0.29.0
+image_manager: quay.io/cortexlabs/manager:0.29.0
+image_downloader: quay.io/cortexlabs/downloader:0.29.0
+image_request_monitor: quay.io/cortexlabs/request-monitor:0.29.0
+image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:0.29.0
+image_metrics_server: quay.io/cortexlabs/metrics-server:0.29.0
+image_inferentia: quay.io/cortexlabs/inferentia:0.29.0
+image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:0.29.0
+image_nvidia: quay.io/cortexlabs/nvidia:0.29.0
+image_fluent_bit: quay.io/cortexlabs/fluent-bit:0.29.0
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.29.0
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.29.0
+image_prometheus: quay.io/cortexlabs/prometheus:0.29.0
+image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:0.29.0
+image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:0.29.0
+image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:0.29.0
+image_grafana: quay.io/cortexlabs/grafana:0.29.0
```
24 changes: 12 additions & 12 deletions docs/clusters/gcp/install.md
@@ -71,16 +71,16 @@ The docker images used by the Cortex cluster can also be overridden, although th
<!-- CORTEX_VERSION_BRANCH_STABLE -->
```yaml
-image_operator: quay.io/cortexlabs/operator:master
-image_manager: quay.io/cortexlabs/manager:master
-image_downloader: quay.io/cortexlabs/downloader:master
-image_request_monitor: quay.io/cortexlabs/request-monitor:master
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
-image_google_pause: quay.io/cortexlabs/google-pause:master
-image_prometheus: quay.io/cortexlabs/prometheus:master
-image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:master
-image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:master
-image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:master
-image_grafana: quay.io/cortexlabs/grafana:master
+image_operator: quay.io/cortexlabs/operator:0.29.0
+image_manager: quay.io/cortexlabs/manager:0.29.0
+image_downloader: quay.io/cortexlabs/downloader:0.29.0
+image_request_monitor: quay.io/cortexlabs/request-monitor:0.29.0
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.29.0
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.29.0
+image_google_pause: quay.io/cortexlabs/google-pause:0.29.0
+image_prometheus: quay.io/cortexlabs/prometheus:0.29.0
+image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:0.29.0
+image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:0.29.0
+image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:0.29.0
+image_grafana: quay.io/cortexlabs/grafana:0.29.0
```
8 changes: 4 additions & 4 deletions docs/clusters/kubernetes/install.md
@@ -25,8 +25,8 @@ Note that installing Cortex on your Kubernetes cluster will not provide some of

<!-- CORTEX_VERSION_BRANCH_STABLE x3 -->
```bash
-wget https://s3-us-west-2.amazonaws.com/get-cortex/master/charts/cortex-master.tar.gz
-tar -xzf cortex-master.tar.gz
+wget https://s3-us-west-2.amazonaws.com/get-cortex/0.29.0/charts/cortex-0.29.0.tar.gz
+tar -xzf cortex-0.29.0.tar.gz
```

### Create a bucket in S3
@@ -132,8 +132,8 @@ Note that installing Cortex on your Kubernetes cluster will not provide some of
<!-- CORTEX_VERSION_BRANCH_STABLE x3 -->
```bash
-wget https://s3-us-west-2.amazonaws.com/get-cortex/master/charts/cortex-master.tar.gz
-tar -xzf cortex-master.tar.gz
+wget https://s3-us-west-2.amazonaws.com/get-cortex/0.29.0/charts/cortex-0.29.0.tar.gz
+tar -xzf cortex-0.29.0.tar.gz
```

### Create a bucket in GCS
8 changes: 4 additions & 4 deletions docs/workloads/batch/configuration.md
@@ -19,7 +19,7 @@ predictor:
path: <string> # path to a python file with a PythonPredictor class definition, relative to the Cortex root (required)
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:master or quay.io/cortexlabs/python-predictor-gpu:master-cuda10.2-cudnn8 based on compute)
+  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/python-predictor-cpu:0.29.0 or quay.io/cortexlabs/python-predictor-gpu:0.29.0-cuda10.2-cudnn8 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -45,8 +45,8 @@ predictor:
batch_interval: <duration> # the maximum amount of time to spend waiting for additional requests before running inference on the batch of requests
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:master)
-  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:master or quay.io/cortexlabs/tensorflow-serving-gpu:master based on compute)
+  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/tensorflow-predictor:0.29.0)
+  tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.29.0 or quay.io/cortexlabs/tensorflow-serving-gpu:0.29.0 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -67,7 +67,7 @@ predictor:
...
config: <string: value> # arbitrary dictionary passed to the constructor of the Predictor (can be overridden by config passed in job submission) (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-cpu:master or quay.io/cortexlabs/onnx-predictor-gpu:master based on compute)
+  image: <string> # docker image to use for the Predictor (default: quay.io/cortexlabs/onnx-predictor-cpu:0.29.0 or quay.io/cortexlabs/onnx-predictor-gpu:0.29.0 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
4 changes: 2 additions & 2 deletions docs/workloads/batch/predictors.md
@@ -143,7 +143,7 @@ class TensorFlowPredictor:
```

<!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` to your Predictor's constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.29/pkg/cortex/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

When multiple models are defined using the Predictor's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`).
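The multi-model pattern described in this doc can be sketched as follows. `FakeTensorFlowClient` is a stand-in invented here for illustration (the real `tensorflow_client` is injected by Cortex and talks to TensorFlow Serving), and the pre/post-processing steps are assumed:

```python
# Sketch of the Predictor pattern: the client is saved as an instance variable,
# and predict() passes a model name as the second argument.
# FakeTensorFlowClient is NOT part of the Cortex API; it only mimics
# tensorflow_client.predict(model_input, model_name) for this example.

class FakeTensorFlowClient:
    def predict(self, model_input, model_name=None):
        # A real client forwards the input to TensorFlow Serving; here we echo
        # a trivial "prediction" so the sketch is runnable.
        return {"model": model_name, "prediction": len(model_input["text"])}

class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        # Save the injected client as an instance variable, as described above
        self.client = tensorflow_client

    def predict(self, payload):
        # Preprocess the JSON payload, select a model by name, postprocess
        model_input = {"text": payload["text"].lower()}
        result = self.client.predict(model_input, "text-generator")
        return result["prediction"]

predictor = TensorFlowPredictor(FakeTensorFlowClient(), config={})
print(predictor.predict({"text": "Hello World"}))  # → 11
```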

Expand Down Expand Up @@ -204,7 +204,7 @@ class ONNXPredictor:
```

<!-- CORTEX_VERSION_MINOR -->
Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/cortex/serve/cortex_internal/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
Cortex provides an `onnx_client` to your Predictor's constructor. `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.29/pkg/cortex/serve/cortex_internal/lib/client/onnx.py) that manages an ONNX Runtime session to make predictions using your model. It should be saved as an instance variable in your Predictor, and your `predict()` function should call `onnx_client.predict()` to make an inference with your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

When multiple models are defined using the Predictor's `models` field, the `onnx_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(model_input, "text-generator")`).
