Update version to 0.35.0
deliahu committed May 11, 2021
1 parent 3d17e8b · commit 1309717
Showing 30 changed files with 90 additions and 90 deletions.
2 changes: 1 addition & 1 deletion build/build-image.sh
@@ -19,7 +19,7 @@ set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

-CORTEX_VERSION=master
+CORTEX_VERSION=0.35.0

image=$1

2 changes: 1 addition & 1 deletion build/cli.sh
@@ -19,7 +19,7 @@ set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

-CORTEX_VERSION=master
+CORTEX_VERSION=0.35.0

arg1=${1:-""}
upload="false"
2 changes: 1 addition & 1 deletion build/push-image.sh
@@ -17,7 +17,7 @@

set -euo pipefail

-CORTEX_VERSION=master
+CORTEX_VERSION=0.35.0

host=$1
image=$2
2 changes: 1 addition & 1 deletion dev/export_images.sh
@@ -20,7 +20,7 @@ set -euo pipefail
ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

# CORTEX_VERSION
-cortex_version=master
+cortex_version=0.35.0

# user set variables
ecr_region=$1
2 changes: 1 addition & 1 deletion dev/registry.sh
@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-CORTEX_VERSION=master
+CORTEX_VERSION=0.35.0

set -eo pipefail

8 changes: 4 additions & 4 deletions docs/clients/install.md
@@ -9,10 +9,10 @@ pip install cortex
```

<!-- CORTEX_VERSION_README x2 -->
-To install or upgrade to a specific version (e.g. v0.34.0):
+To install or upgrade to a specific version (e.g. v0.35.0):

```bash
-pip install cortex==0.34.0
+pip install cortex==0.35.0
```

To upgrade to the latest version:
@@ -25,8 +25,8 @@ pip install --upgrade cortex

<!-- CORTEX_VERSION_README x2 -->
```bash
-# For example to download CLI version 0.34.0 (Note the "v"):
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.34.0/get-cli.sh)"
+# For example to download CLI version 0.35.0 (Note the "v"):
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.35.0/get-cli.sh)"
```

By default, the Cortex CLI is installed at `/usr/local/bin/cortex`. To install the executable elsewhere, export the `CORTEX_INSTALL_PATH` environment variable to your desired location before running the command above.
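For reference, the `CORTEX_INSTALL_PATH` behavior described above can be exercised as follows (a minimal sketch; the `~/bin` destination is an assumption):

```bash
# Sketch: install the CLI to ~/bin instead of /usr/local/bin (path assumed)
export CORTEX_INSTALL_PATH="$HOME/bin"
bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/v0.35.0/get-cli.sh)"
```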
12 changes: 6 additions & 6 deletions docs/clients/python.md
@@ -93,7 +93,7 @@ Deploy API(s) from a project directory.

**Arguments**:

-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/ for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/ for schema.
- `project_dir` - Path to a python project.
- `force` - Override any in-progress api updates.
- `wait` - Streams logs until the APIs are ready.
@@ -115,7 +115,7 @@ Deploy a Realtime API.

**Arguments**:

-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/realtime-apis/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/realtime-apis/configuration for schema.
- `handler` - A Cortex Handler class implementation.
- `requirements` - A list of PyPI dependencies that will be installed before the handler class implementation is invoked.
- `conda_packages` - A list of Conda dependencies that will be installed before the handler class implementation is invoked.
@@ -139,7 +139,7 @@ Deploy an Async API.

**Arguments**:

-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/async-apis/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/async-apis/configuration for schema.
- `handler` - A Cortex Handler class implementation.
- `requirements` - A list of PyPI dependencies that will be installed before the handler class implementation is invoked.
- `conda_packages` - A list of Conda dependencies that will be installed before the handler class implementation is invoked.
@@ -162,7 +162,7 @@ Deploy a Batch API.

**Arguments**:

-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/batch-apis/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/batch-apis/configuration for schema.
- `handler` - A Cortex Handler class implementation.
- `requirements` - A list of PyPI dependencies that will be installed before the handler class implementation is invoked.
- `conda_packages` - A list of Conda dependencies that will be installed before the handler class implementation is invoked.
@@ -184,7 +184,7 @@ Deploy a Task API.

**Arguments**:

-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/task-apis/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/task-apis/configuration for schema.
- `task` - A callable class implementation.
- `requirements` - A list of PyPI dependencies that will be installed before the handler class implementation is invoked.
- `conda_packages` - A list of Conda dependencies that will be installed before the handler class implementation is invoked.
@@ -206,7 +206,7 @@ Deploy a Task API.

**Arguments**:

-- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/master/workloads/realtime-apis/traffic-splitter/configuration for schema.
+- `api_spec` - A dictionary defining a single Cortex API. See https://docs.cortex.dev/v/0.35/workloads/realtime-apis/traffic-splitter/configuration for schema.


**Returns**:
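Taken together, the documented arguments suggest a client workflow like the following (a hedged sketch only — the environment name, spec fields, and the shape of the `deploy` call are assumptions based on the argument lists above, not confirmed API):

```python
# Sketch of the documented deploy flow; names below are assumptions
import cortex

cx = cortex.client("aws")  # environment name assumed

api_spec = {
    "name": "iris-classifier",
    "kind": "RealtimeAPI",
    # full schema: https://docs.cortex.dev/v/0.35/workloads/realtime-apis/configuration
}

# "Deploy API(s) from a project directory" per the first hunk above
cx.deploy(api_spec, project_dir=".", force=False, wait=True)
```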
2 changes: 1 addition & 1 deletion docs/clusters/advanced/self-hosted-images.md
@@ -19,7 +19,7 @@ Clone the Cortex repo using the release tag corresponding to your version (which
<!-- CORTEX_VERSION_README -->

```bash
-export CORTEX_VERSION=0.34.0
+export CORTEX_VERSION=0.35.0
git clone --depth 1 --branch v$CORTEX_VERSION https://github.com/cortexlabs/cortex.git
```

52 changes: 26 additions & 26 deletions docs/clusters/management/create.md
@@ -96,30 +96,30 @@ The docker images used by the cluster can also be overridden. They can be config
<!-- CORTEX_VERSION_BRANCH_STABLE -->
```yaml
-image_operator: quay.io/cortexlabs/operator:master
-image_controller_manager: quay.io/cortexlabs/controller-manager:master
-image_manager: quay.io/cortexlabs/manager:master
-image_downloader: quay.io/cortexlabs/downloader:master
-image_request_monitor: quay.io/cortexlabs/request-monitor:master
-image_async_gateway: quay.io/cortexlabs/async-gateway:master
-image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:master
-image_metrics_server: quay.io/cortexlabs/metrics-server:master
-image_inferentia: quay.io/cortexlabs/inferentia:master
-image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:master
-image_nvidia: quay.io/cortexlabs/nvidia:master
-image_fluent_bit: quay.io/cortexlabs/fluent-bit:master
-image_istio_proxy: quay.io/cortexlabs/istio-proxy:master
-image_istio_pilot: quay.io/cortexlabs/istio-pilot:master
-image_prometheus: quay.io/cortexlabs/prometheus:master
-image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:master
-image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:master
-image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:master
-image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:master
-image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:master
-image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:master
-image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:master
-image_grafana: quay.io/cortexlabs/grafana:master
-image_event_exporter: quay.io/cortexlabs/event-exporter:master
-image_enqueuer: quay.io/cortexlabs/enqueuer:master
-image_kubexit: quay.io/cortexlabs/kubexit:master
+image_operator: quay.io/cortexlabs/operator:0.35.0
+image_controller_manager: quay.io/cortexlabs/controller-manager:0.35.0
+image_manager: quay.io/cortexlabs/manager:0.35.0
+image_downloader: quay.io/cortexlabs/downloader:0.35.0
+image_request_monitor: quay.io/cortexlabs/request-monitor:0.35.0
+image_async_gateway: quay.io/cortexlabs/async-gateway:0.35.0
+image_cluster_autoscaler: quay.io/cortexlabs/cluster-autoscaler:0.35.0
+image_metrics_server: quay.io/cortexlabs/metrics-server:0.35.0
+image_inferentia: quay.io/cortexlabs/inferentia:0.35.0
+image_neuron_rtd: quay.io/cortexlabs/neuron-rtd:0.35.0
+image_nvidia: quay.io/cortexlabs/nvidia:0.35.0
+image_fluent_bit: quay.io/cortexlabs/fluent-bit:0.35.0
+image_istio_proxy: quay.io/cortexlabs/istio-proxy:0.35.0
+image_istio_pilot: quay.io/cortexlabs/istio-pilot:0.35.0
+image_prometheus: quay.io/cortexlabs/prometheus:0.35.0
+image_prometheus_config_reloader: quay.io/cortexlabs/prometheus-config-reloader:0.35.0
+image_prometheus_operator: quay.io/cortexlabs/prometheus-operator:0.35.0
+image_prometheus_statsd_exporter: quay.io/cortexlabs/prometheus-statsd-exporter:0.35.0
+image_prometheus_dcgm_exporter: quay.io/cortexlabs/prometheus-dcgm-exporter:0.35.0
+image_prometheus_kube_state_metrics: quay.io/cortexlabs/prometheus-kube-state-metrics:0.35.0
+image_prometheus_node_exporter: quay.io/cortexlabs/prometheus-node-exporter:0.35.0
+image_kube_rbac_proxy: quay.io/cortexlabs/kube-rbac-proxy:0.35.0
+image_grafana: quay.io/cortexlabs/grafana:0.35.0
+image_event_exporter: quay.io/cortexlabs/event-exporter:0.35.0
+image_enqueuer: quay.io/cortexlabs/enqueuer:0.35.0
+image_kubexit: quay.io/cortexlabs/kubexit:0.35.0
```
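For example, a cluster configuration might override a single default image with a self-hosted copy (a hypothetical excerpt; the ECR registry URL and surrounding fields are assumptions):

```yaml
# Hypothetical cluster.yaml excerpt — only the overridden image differs from the defaults
cluster_name: cortex
region: us-west-2
image_operator: 123456789012.dkr.ecr.us-west-2.amazonaws.com/cortexlabs/operator:0.35.0
```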
6 changes: 3 additions & 3 deletions docs/workloads/async/configuration.md
@@ -26,7 +26,7 @@ handler:
shell: <string> # relative path to a shell script for system package installation (default: dependencies.sh)
config: <string: value> # arbitrary dictionary passed to the constructor of the Handler class (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/python-handler-cpu:master, quay.io/cortexlabs/python-handler-gpu:master-cuda10.2-cudnn8, or quay.io/cortexlabs/python-handler-inf:master based on compute)
+image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/python-handler-cpu:0.35.0, quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.2-cudnn8, or quay.io/cortexlabs/python-handler-inf:0.35.0 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -55,8 +55,8 @@ handler:
signature_key: # name of the signature def to use for prediction (required if your model has more than one signature def)
config: <string: value> # arbitrary dictionary passed to the constructor of the Handler class (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/tensorflow-handler:master)
-tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:master, quay.io/cortexlabs/tensorflow-serving-gpu:master, or quay.io/cortexlabs/tensorflow-serving-inf:master based on compute)
+image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/tensorflow-handler:0.35.0)
+tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.35.0, quay.io/cortexlabs/tensorflow-serving-gpu:0.35.0, or quay.io/cortexlabs/tensorflow-serving-inf:0.35.0 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
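A minimal API spec relying on the defaults above might look like this (a sketch; the name, file path, and `type` field value are assumptions):

```yaml
# Hypothetical minimal Async API spec; unset fields fall back to the defaults documented above
- name: my-async-api
  kind: AsyncAPI
  handler:
    type: python      # assumed field value
    path: handler.py  # assumed project layout
```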
2 changes: 1 addition & 1 deletion docs/workloads/async/models.md
@@ -44,7 +44,7 @@ class Handler:
<!-- CORTEX_VERSION_MINOR -->

Cortex provides a `tensorflow_client` to your handler's constructor. `tensorflow_client` is an instance
-of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/python/serve/cortex_internal/lib/client/tensorflow.py)
+of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.35/python/serve/cortex_internal/lib/client/tensorflow.py)
that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as
an instance variable in your handler class, and your `handle_async()` function should call `tensorflow_client.predict()` to make
an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions
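The pattern that paragraph describes looks roughly like this (a sketch; the payload shape and exact constructor signature are assumptions):

```python
# Sketch of an Async API handler using tensorflow_client, per the paragraph above
class Handler:
    def __init__(self, tensorflow_client, config):
        # save the client as an instance variable for use at request time
        self.client = tensorflow_client

    def handle_async(self, payload):
        # preprocessing of the JSON payload would go here (payload shape assumed)
        return self.client.predict(payload)
```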
6 changes: 3 additions & 3 deletions docs/workloads/batch/configuration.md
@@ -19,7 +19,7 @@ handler:
path: <string> # path to a python file with a Handler class definition, relative to the Cortex root (required)
config: <string: value> # arbitrary dictionary passed to the constructor of the Handler class (can be overridden by config passed in job submission) (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/python-handler-cpu:master or quay.io/cortexlabs/python-handler-gpu:master-cuda10.2-cudnn8 based on compute)
+image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/python-handler-cpu:0.35.0 or quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.2-cudnn8 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
@@ -49,8 +49,8 @@ handler:
batch_interval: <duration> # the maximum amount of time to spend waiting for additional requests before running inference on the batch of requests
config: <string: value> # arbitrary dictionary passed to the constructor of the Handler class (can be overridden by config passed in job submission) (optional)
python_path: <string> # path to the root of your Python folder that will be appended to PYTHONPATH (default: folder containing cortex.yaml)
-image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/tensorflow-handler:master)
-tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:master or quay.io/cortexlabs/tensorflow-serving-gpu:master based on compute)
+image: <string> # docker image to use for the handler (default: quay.io/cortexlabs/tensorflow-handler:0.35.0)
+tensorflow_serving_image: <string> # docker image to use for the TensorFlow Serving container (default: quay.io/cortexlabs/tensorflow-serving-cpu:0.35.0 or quay.io/cortexlabs/tensorflow-serving-gpu:0.35.0 based on compute)
env: <string: string> # dictionary of environment variables
log_level: <string> # log level that can be "debug", "info", "warning" or "error" (default: "info")
shm_size: <string> # size of shared memory (/dev/shm) for sharing data between multiple processes, e.g. 64Mi or 1Gi (default: Null)
2 changes: 1 addition & 1 deletion docs/workloads/batch/models.md
@@ -55,7 +55,7 @@ class Handler:
```

<!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` to your Handler class' constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/python/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your `handle_batch()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `handle_batch()` function as well.
+Cortex provides a `tensorflow_client` to your Handler class' constructor. `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.35/python/serve/cortex_internal/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container to make predictions using your model. It should be saved as an instance variable in your Handler class, and your `handle_batch()` function should call `tensorflow_client.predict()` to make an inference with your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `handle_batch()` function as well.

When multiple models are defined using the Handler's `models` field, the `tensorflow_client.predict()` method expects a second argument `model_name` which must hold the name of the model that you want to use for inference (for example: `self.client.predict(payload, "text-generator")`). There is also an optional third argument to specify the model version.

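Combining the two paragraphs above, a multi-model batch handler looks roughly like this (a sketch; the constructor signature is an assumption, and "text-generator" mirrors the docs' own example):

```python
# Sketch of a Batch API handler selecting a named model
class Handler:
    def __init__(self, tensorflow_client, config):
        self.client = tensorflow_client

    def handle_batch(self, payload):
        # the second argument selects the model when multiple models are configured
        return self.client.predict(payload, "text-generator")
```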
4 changes: 2 additions & 2 deletions docs/workloads/debugging.md
@@ -12,10 +12,10 @@ For example:
cortex prepare-debug cortex.yaml iris-classifier

> docker run -p 9000:8888 \
> -e "CORTEX_VERSION=0.34.0" \
> -e "CORTEX_VERSION=0.35.0" \
> -e "CORTEX_API_SPEC=/mnt/project/iris-classifier.debug.json" \
> -v /home/ubuntu/iris-classifier:/mnt/project \
-> quay.io/cortexlabs/python-handler-cpu:0.34.0
+> quay.io/cortexlabs/python-handler-cpu:0.35.0
```

Make a request to the api container:
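Such a request might look like this (a sketch; port 9000 follows the `-p 9000:8888` mapping above, and the payload is an assumed iris-classifier input):

```bash
curl localhost:9000 \
  -X POST -H "Content-Type: application/json" \
  -d '{"sepal_length": 5.1, "sepal_width": 3.5, "petal_length": 1.4, "petal_width": 0.2}'
```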
24 changes: 12 additions & 12 deletions docs/workloads/dependencies/images.md
@@ -11,25 +11,25 @@ mkdir my-api && cd my-api && touch Dockerfile
Cortex's base Docker images are listed below. Depending on the Cortex Handler and compute type specified in your API configuration, choose one of these images to use as the base for your Docker image:

<!-- CORTEX_VERSION_BRANCH_STABLE x10 -->
-* Python Handler (CPU): `quay.io/cortexlabs/python-handler-cpu:master`
+* Python Handler (CPU): `quay.io/cortexlabs/python-handler-cpu:0.35.0`
* Python Handler (GPU): choose one of the following:
-* `quay.io/cortexlabs/python-handler-gpu:master-cuda10.0-cudnn7`
-* `quay.io/cortexlabs/python-handler-gpu:master-cuda10.1-cudnn7`
-* `quay.io/cortexlabs/python-handler-gpu:master-cuda10.1-cudnn8`
-* `quay.io/cortexlabs/python-handler-gpu:master-cuda10.2-cudnn7`
-* `quay.io/cortexlabs/python-handler-gpu:master-cuda10.2-cudnn8`
-* `quay.io/cortexlabs/python-handler-gpu:master-cuda11.0-cudnn8`
-* `quay.io/cortexlabs/python-handler-gpu:master-cuda11.1-cudnn8`
-* Python Handler (Inferentia): `quay.io/cortexlabs/python-handler-inf:master`
-* TensorFlow Handler (CPU, GPU, Inferentia): `quay.io/cortexlabs/tensorflow-handler:master`
+* `quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.0-cudnn7`
+* `quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.1-cudnn7`
+* `quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.1-cudnn8`
+* `quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.2-cudnn7`
+* `quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda10.2-cudnn8`
+* `quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda11.0-cudnn8`
+* `quay.io/cortexlabs/python-handler-gpu:0.35.0-cuda11.1-cudnn8`
+* Python Handler (Inferentia): `quay.io/cortexlabs/python-handler-inf:0.35.0`
+* TensorFlow Handler (CPU, GPU, Inferentia): `quay.io/cortexlabs/tensorflow-handler:0.35.0`

The sample `Dockerfile` below inherits from Cortex's Python CPU serving image, and installs 3 packages. `tree` is a system package and `pandas` and `rdkit` are Python packages.

<!-- CORTEX_VERSION_BRANCH_STABLE -->
```dockerfile
# Dockerfile

-FROM quay.io/cortexlabs/python-handler-cpu:master
+FROM quay.io/cortexlabs/python-handler-cpu:0.35.0

RUN apt-get update \
&& apt-get install -y tree \
@@ -47,7 +47,7 @@ If you need to upgrade the Python Runtime version on your image, you can follow
```Dockerfile
# Dockerfile

-FROM quay.io/cortexlabs/python-handler-cpu:master
+FROM quay.io/cortexlabs/python-handler-cpu:0.35.0

# upgrade python runtime version
RUN conda update -n base -c defaults conda
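Once such a Dockerfile is written, building and pushing it follows the usual Docker workflow (a sketch; the registry URL and tag are assumptions):

```bash
docker build . -t registry.example.com/my-api:0.35.0
docker push registry.example.com/my-api:0.35.0
```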