Update version to 0.14.0
deliahu committed Mar 5, 2020
1 parent 1539585 commit f978586
Showing 23 changed files with 53 additions and 58 deletions.
16 changes: 6 additions & 10 deletions README.md
@@ -4,10 +4,6 @@ Cortex is an open source platform for deploying machine learning models as produ

<br>

-<!-- Delete on release branches -->
-<!-- CORTEX_VERSION_README_MINOR -->
-[install](https://cortex.dev/install) • [tutorial](https://cortex.dev/iris-classifier) • [docs](https://cortex.dev) • [examples](https://github.com/cortexlabs/cortex/tree/0.13/examples) • [we're hiring](https://angel.co/cortex-labs-inc/jobs) • [email us](mailto:[email protected]) • [chat with us](https://gitter.im/cortexlabs/cortex)<br><br>

<!-- Set header Cache-Control=no-cache on the S3 object metadata (see https://help.github.com/en/articles/about-anonymized-image-urls) -->
![Demo](https://d1zqebknpdh033.cloudfront.net/demo/gif/v0.13_2.gif)

@@ -33,7 +29,7 @@ Cortex is designed to be self-hosted on any AWS account. You can spin up a clust
<!-- CORTEX_VERSION_README_MINOR -->
```bash
# install the CLI on your machine
-$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.13/get-cli.sh)"
+$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.14/get-cli.sh)"

# provision infrastructure on AWS and spin up a cluster
$ cortex cluster up
@@ -140,8 +136,8 @@ The CLI sends configuration and code to the cluster every time you run `cortex d
## Examples of Cortex deployments

<!-- CORTEX_VERSION_README_MINOR x5 -->
-* [Sentiment analysis](https://github.com/cortexlabs/cortex/tree/0.13/examples/tensorflow/sentiment-analyzer): deploy a BERT model for sentiment analysis.
-* [Image classification](https://github.com/cortexlabs/cortex/tree/0.13/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
-* [Search completion](https://github.com/cortexlabs/cortex/tree/0.13/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
-* [Text generation](https://github.com/cortexlabs/cortex/tree/0.13/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
-* [Iris classification](https://github.com/cortexlabs/cortex/tree/0.13/examples/sklearn/iris-classifier): deploy a scikit-learn model to classify iris flowers.
+* [Sentiment analysis](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/sentiment-analyzer): deploy a BERT model for sentiment analysis.
+* [Image classification](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
+* [Search completion](https://github.com/cortexlabs/cortex/tree/0.14/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
+* [Text generation](https://github.com/cortexlabs/cortex/tree/0.14/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
+* [Iris classification](https://github.com/cortexlabs/cortex/tree/0.14/examples/sklearn/iris-classifier): deploy a scikit-learn model to classify iris flowers.
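The hunk above notes that the CLI sends configuration and code to the cluster on every `cortex deploy`. As a rough, hypothetical illustration of that configuration (the schema is not part of this diff, so the field names and S3 path below are assumptions):

```yaml
# cortex.yaml — hypothetical sketch, not taken from this commit
- name: iris-classifier
  predictor:
    type: tensorflow                    # python and onnx predictor types also exist (see the docs changed below)
    path: predictor.py                  # implementation of the Predictor class
    model: s3://my-bucket/iris/export   # assumed S3 path to an exported model
```

Running `cortex deploy` from the directory containing such a file would send it, along with the referenced code, to the cluster.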
2 changes: 1 addition & 1 deletion build/build-image.sh
@@ -19,7 +19,7 @@ set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

-CORTEX_VERSION=master
+CORTEX_VERSION=0.14.0

dir=$1
image=$2
2 changes: 1 addition & 1 deletion build/cli.sh
@@ -19,7 +19,7 @@ set -euo pipefail

ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. >/dev/null && pwd)"

-CORTEX_VERSION=master
+CORTEX_VERSION=0.14.0

arg1=${1:-""}
upload="false"
2 changes: 1 addition & 1 deletion build/push-image.sh
@@ -17,7 +17,7 @@

set -euo pipefail

-CORTEX_VERSION=master
+CORTEX_VERSION=0.14.0

image=$1

42 changes: 21 additions & 21 deletions docs/cluster-management/config.md
@@ -43,28 +43,28 @@ instance_volume_size: 50
log_group: cortex

# whether to use spot instances in the cluster (default: false)
-# see https://cortex.dev/v/master/cluster-management/spot-instances for additional details on spot configuration
+# see https://cortex.dev/v/0.14/cluster-management/spot-instances for additional details on spot configuration
spot: false

# docker image paths
-image_python_serve: cortexlabs/python-serve:master
-image_python_serve_gpu: cortexlabs/python-serve-gpu:master
-image_tf_serve: cortexlabs/tf-serve:master
-image_tf_serve_gpu: cortexlabs/tf-serve-gpu:master
-image_tf_api: cortexlabs/tf-api:master
-image_onnx_serve: cortexlabs/onnx-serve:master
-image_onnx_serve_gpu: cortexlabs/onnx-serve-gpu:master
-image_operator: cortexlabs/operator:master
-image_manager: cortexlabs/manager:master
-image_downloader: cortexlabs/downloader:master
-image_request_monitor: cortexlabs/request-monitor:master
-image_cluster_autoscaler: cortexlabs/cluster-autoscaler:master
-image_metrics_server: cortexlabs/metrics-server:master
-image_nvidia: cortexlabs/nvidia:master
-image_fluentd: cortexlabs/fluentd:master
-image_statsd: cortexlabs/statsd:master
-image_istio_proxy: cortexlabs/istio-proxy:master
-image_istio_pilot: cortexlabs/istio-pilot:master
-image_istio_citadel: cortexlabs/istio-citadel:master
-image_istio_galley: cortexlabs/istio-galley:master
+image_python_serve: cortexlabs/python-serve:0.14.0
+image_python_serve_gpu: cortexlabs/python-serve-gpu:0.14.0
+image_tf_serve: cortexlabs/tf-serve:0.14.0
+image_tf_serve_gpu: cortexlabs/tf-serve-gpu:0.14.0
+image_tf_api: cortexlabs/tf-api:0.14.0
+image_onnx_serve: cortexlabs/onnx-serve:0.14.0
+image_onnx_serve_gpu: cortexlabs/onnx-serve-gpu:0.14.0
+image_operator: cortexlabs/operator:0.14.0
+image_manager: cortexlabs/manager:0.14.0
+image_downloader: cortexlabs/downloader:0.14.0
+image_request_monitor: cortexlabs/request-monitor:0.14.0
+image_cluster_autoscaler: cortexlabs/cluster-autoscaler:0.14.0
+image_metrics_server: cortexlabs/metrics-server:0.14.0
+image_nvidia: cortexlabs/nvidia:0.14.0
+image_fluentd: cortexlabs/fluentd:0.14.0
+image_statsd: cortexlabs/statsd:0.14.0
+image_istio_proxy: cortexlabs/istio-proxy:0.14.0
+image_istio_pilot: cortexlabs/istio-pilot:0.14.0
+image_istio_citadel: cortexlabs/istio-citadel:0.14.0
+image_istio_galley: cortexlabs/istio-galley:0.14.0
```
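One practical implication of the change above: on a release branch every image tag is pinned to the release, so a custom cluster configuration should reference the same tags as the CLI version it is used with. A partial sketch using only keys that appear in the diff above (values illustrative, other fields omitted):

```yaml
# cluster.yaml — partial, illustrative; see the full key list in the diff above
log_group: cortex
spot: false
image_operator: cortexlabs/operator:0.14.0
image_manager: cortexlabs/manager:0.14.0
```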
4 changes: 2 additions & 2 deletions docs/cluster-management/install.md
@@ -12,7 +12,7 @@ See [cluster configuration](config.md) to learn how you can customize your clust
<!-- CORTEX_VERSION_MINOR -->
```bash
# install the CLI on your machine
-$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.14/get-cli.sh)"

# provision infrastructure on AWS and spin up a cluster
$ cortex cluster up
@@ -38,7 +38,7 @@ your cluster is ready!

```bash
# clone the Cortex repository
-git clone -b master https://github.com/cortexlabs/cortex.git
+git clone -b 0.14 https://github.com/cortexlabs/cortex.git

# navigate to the TensorFlow iris classification example
cd cortex/examples/tensorflow/iris-classifier
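The tutorial continues beyond the lines shown in this hunk; a plausible next step, using standard Cortex CLI commands (the exact wording of the omitted lines is not part of this diff), would be:

```bash
# deploy the example API to the cluster
cortex deploy

# check the API's status and endpoint once it is live
cortex get iris-classifier
```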
2 changes: 1 addition & 1 deletion docs/cluster-management/update.md
@@ -22,7 +22,7 @@ cortex cluster update
cortex cluster down

# update your CLI
-bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
+bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.14/get-cli.sh)"

# confirm version
cortex version
4 changes: 2 additions & 2 deletions docs/deployments/onnx.md
@@ -67,7 +67,7 @@ You can log information about each request by adding a `?debug=true` parameter t
An ONNX Predictor is a Python class that describes how to serve your ONNX model to make predictions.

<!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` and a config object to initialize your implementation of the ONNX Predictor class. The `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session and helps make predictions using your model. Once your implementation of the ONNX Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `onnx_client.predict()` to make an inference against your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` and a config object to initialize your implementation of the ONNX Predictor class. The `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.14/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session and helps make predictions using your model. Once your implementation of the ONNX Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `onnx_client.predict()` to make an inference against your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

## Implementation

@@ -133,6 +133,6 @@ requests==2.22.0
```

<!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in the [onnx-serve Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-serve/Dockerfile) (for CPU) or the [onnx-serve-gpu Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/onnx-serve-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in the [onnx-serve Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/onnx-serve/Dockerfile) (for CPU) or the [onnx-serve-gpu Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/onnx-serve-gpu/Dockerfile) (for GPU).

If your application requires additional dependencies, you can [install additional Python packages](../dependency-management/python-packages.md) or [install additional system packages](../dependency-management/system-packages.md).
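For readers following the paragraph updated above, here is a minimal sketch of an ONNX Predictor class consistent with that description; the payload key and the pre/post-processing comments are illustrative, not taken from this commit:

```python
# predictor.py — minimal sketch based on the interface described above
class ONNXPredictor:
    def __init__(self, onnx_client, config):
        self.client = onnx_client  # ONNXClient wrapping the ONNX Runtime session
        self.config = config       # arbitrary config passed through from the API spec

    def predict(self, payload):
        model_input = payload["input"]             # preprocessing of the JSON payload goes here
        prediction = self.client.predict(model_input)
        return prediction                          # postprocessing of the prediction goes here
```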
2 changes: 1 addition & 1 deletion docs/deployments/python.md
@@ -171,6 +171,6 @@ xgboost==0.90
```

<!-- CORTEX_VERSION_MINOR x2 -->
-The pre-installed system packages are listed in the [python-serve Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-serve/Dockerfile) (for CPU) or the [python-serve-gpu Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/python-serve-gpu/Dockerfile) (for GPU).
+The pre-installed system packages are listed in the [python-serve Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/python-serve/Dockerfile) (for CPU) or the [python-serve-gpu Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/python-serve-gpu/Dockerfile) (for GPU).

If your application requires additional dependencies, you can [install additional Python packages](../dependency-management/python-packages.md) or [install additional system packages](../dependency-management/system-packages.md).
4 changes: 2 additions & 2 deletions docs/deployments/tensorflow.md
@@ -68,7 +68,7 @@ You can log information about each request by adding a `?debug=true` parameter t
A TensorFlow Predictor is a Python class that describes how to serve your TensorFlow model to make predictions.

<!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` and a config object to initialize your implementation of the TensorFlow Predictor class. The `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container via gRPC to make predictions using your model. Once your implementation of the TensorFlow Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `tensorflow_client.predict()` to make an inference against your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` and a config object to initialize your implementation of the TensorFlow Predictor class. The `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.14/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container via gRPC to make predictions using your model. Once your implementation of the TensorFlow Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `tensorflow_client.predict()` to make an inference against your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.

## Implementation

@@ -128,6 +128,6 @@ tensorflow==2.1.0
```

<!-- CORTEX_VERSION_MINOR -->
-The pre-installed system packages are listed in the [tf-api Dockerfile](https://github.com/cortexlabs/cortex/tree/master/images/tf-api/Dockerfile).
+The pre-installed system packages are listed in the [tf-api Dockerfile](https://github.com/cortexlabs/cortex/tree/0.14/images/tf-api/Dockerfile).

If your application requires additional dependencies, you can [install additional Python packages](../dependency-management/python-packages.md) or [install additional system packages](../dependency-management/system-packages.md).
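As with the ONNX example above, a minimal TensorFlow Predictor sketch consistent with the paragraph updated in this file; anything beyond the documented `__init__`/`predict` interface is illustrative:

```python
# predictor.py — minimal sketch based on the interface described above
class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        self.client = tensorflow_client  # TensorFlowClient talking to TensorFlow Serving over gRPC
        self.config = config

    def predict(self, payload):
        prediction = self.client.predict(payload)  # payload preprocessing could happen before this call
        return prediction
```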
2 changes: 1 addition & 1 deletion docs/packaging-models/tensorflow.md
@@ -1,7 +1,7 @@
# TensorFlow

<!-- CORTEX_VERSION_MINOR -->
-Export your trained model and upload the export directory, or a checkpoint directory containing the export directory (which is usually the case if you used `estimator.train_and_evaluate`). An example is shown below (here is the [complete example](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/sentiment-analyzer)):
+Export your trained model and upload the export directory, or a checkpoint directory containing the export directory (which is usually the case if you used `estimator.train_and_evaluate`). An example is shown below (here is the [complete example](https://github.com/cortexlabs/cortex/blob/0.14/examples/tensorflow/sentiment-analyzer)):

```python
import tensorflow as tf
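The Python snippet that begins above is truncated in this view. As a hedged illustration of the export-and-upload workflow it describes — using the Keras SavedModel API rather than the estimator-based sentiment-analyzer example linked in the doc:

```python
import tensorflow as tf

# build (or load) a trained model, then export it in SavedModel format
model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="softmax", input_shape=(4,)),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
tf.saved_model.save(model, "export/1")  # versioned subdirectory, as TensorFlow Serving expects
```

The `export/` directory can then be uploaded (for example with `aws s3 sync export/ s3://my-bucket/my-model/`, bucket name illustrative) and referenced from the API configuration.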
2 changes: 1 addition & 1 deletion docs/summary.md
@@ -4,7 +4,7 @@
* [Install](cluster-management/install.md)
* [Tutorial](../examples/sklearn/iris-classifier/README.md)
* [GitHub](https://github.com/cortexlabs/cortex)
-* [Examples](https://github.com/cortexlabs/cortex/tree/master/examples) <!-- CORTEX_VERSION_MINOR -->
+* [Examples](https://github.com/cortexlabs/cortex/tree/0.14/examples) <!-- CORTEX_VERSION_MINOR -->
* [Chat with us](https://gitter.im/cortexlabs/cortex)
* [Email us](mailto:[email protected])
* [We're hiring](https://angel.co/cortex-labs-inc/jobs)
2 changes: 1 addition & 1 deletion examples/tensorflow/image-classifier/inception.ipynb
@@ -204,7 +204,7 @@
},
"source": [
"<!-- CORTEX_VERSION_MINOR -->\n",
"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/image-classifier) for how to deploy the model as an API."
"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/image-classifier) for how to deploy the model as an API."
]
}
]
2 changes: 1 addition & 1 deletion examples/tensorflow/iris-classifier/tensorflow.ipynb
@@ -289,7 +289,7 @@
},
"source": [
"<!-- CORTEX_VERSION_MINOR -->\n",
"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/iris-classifier) for how to deploy the model as an API."
"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/iris-classifier) for how to deploy the model as an API."
]
}
]
2 changes: 1 addition & 1 deletion examples/tensorflow/sentiment-analyzer/bert.ipynb
@@ -1000,7 +1000,7 @@
},
"source": [
"<!-- CORTEX_VERSION_MINOR -->\n",
"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/sentiment-analyzer) for how to deploy the model as an API."
"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/sentiment-analyzer) for how to deploy the model as an API."
]
}
]
4 changes: 2 additions & 2 deletions examples/tensorflow/text-generator/gpt-2.ipynb
@@ -346,7 +346,7 @@
},
"source": [
"<!-- CORTEX_VERSION_MINOR x2 -->\n",
"We also need to upload `vocab.bpe` and `encoder.json`, so that the [encoder](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/text-generator/encoder.py) in the [Predictor](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/text-generator/predictor.py) can encode the input text before making a request to the model."
"We also need to upload `vocab.bpe` and `encoder.json`, so that the [encoder](https://github.com/cortexlabs/cortex/blob/0.14/examples/tensorflow/text-generator/encoder.py) in the [Predictor](https://github.com/cortexlabs/cortex/blob/0.14/examples/tensorflow/text-generator/predictor.py) can encode the input text before making a request to the model."
]
},
{
@@ -376,7 +376,7 @@
},
"source": [
"<!-- CORTEX_VERSION_MINOR -->\n",
"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/text-generator) for how to deploy the model as an API."
"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/0.14/examples/tensorflow/text-generator) for how to deploy the model as an API."
]
}
]
2 changes: 1 addition & 1 deletion examples/xgboost/iris-classifier/xgboost.ipynb
@@ -237,7 +237,7 @@
},
"source": [
"<!-- CORTEX_VERSION_MINOR -->\n",
"That's it! See the [example](https://github.com/cortexlabs/cortex/tree/master/examples/xgboost/iris-classifier) for how to deploy the model as an API."
"That's it! See the [example](https://github.com/cortexlabs/cortex/tree/0.14/examples/xgboost/iris-classifier) for how to deploy the model as an API."
]
}
]
2 changes: 1 addition & 1 deletion get-cli.sh
@@ -16,7 +16,7 @@

set -e

-CORTEX_VERSION_BRANCH_STABLE=master
+CORTEX_VERSION_BRANCH_STABLE=0.14.0

case "$OSTYPE" in
darwin*) parsed_os="darwin" ;;
3 changes: 1 addition & 2 deletions manager/install.sh
@@ -16,13 +16,12 @@

set -e

-CORTEX_VERSION=master
+CORTEX_VERSION=0.14.0
EKSCTL_TIMEOUT=45m

arg1="$1"

function ensure_eks() {
-# Cluster statuses: https://github.com/aws/aws-sdk-go/blob/master/service/eks/api.go#L2785
set +e
cluster_info=$(eksctl get cluster --name=$CORTEX_CLUSTER_NAME --region=$CORTEX_REGION -o json)
cluster_info_exit_code=$?
4 changes: 2 additions & 2 deletions pkg/consts/consts.go
@@ -17,8 +17,8 @@ limitations under the License.
package consts

var (
CortexVersion = "master" // CORTEX_VERSION
CortexVersionMinor = "master" // CORTEX_VERSION_MINOR
CortexVersion = "0.14.0" // CORTEX_VERSION
CortexVersionMinor = "0.14" // CORTEX_VERSION_MINOR

MaxClassesPerTrackerRequest = 20 // cloudwatch.GetMetricData can get up to 100 metrics per request, avoid multiple requests and have room for other stats
)
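The `CORTEX_VERSION` and `CORTEX_VERSION_MINOR` markers visible throughout this diff flag every line that must be rewritten when a release branch is cut. A generic way to audit that no placeholder was missed (a sketch, not Cortex's actual release tooling):

```bash
# list any lines that still carry the development placeholder next to a version marker
grep -rn "CORTEX_VERSION" --include='*.go' --include='*.py' --include='*.sh' --include='*.md' . \
  | grep "master"
```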
2 changes: 1 addition & 1 deletion pkg/workloads/cortex/client/cortex/client.py
@@ -44,7 +44,7 @@ def __init__(self, aws_access_key_id, aws_secret_access_key, operator_url):
self.aws_access_key_id = aws_access_key_id
self.aws_secret_access_key = aws_secret_access_key
self.headers = {
"CortexAPIVersion": "master", # CORTEX_VERSION
"CortexAPIVersion": "0.14.0", # CORTEX_VERSION
"Authorization": "CortexAWS {}|{}".format(
self.aws_access_key_id, self.aws_secret_access_key
),
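For context, the `__init__` signature in the hunk header above takes AWS credentials and the operator URL; a hypothetical usage sketch (the class name, import path, and endpoint are assumptions, not shown in this diff):

```python
from cortex.client import Client  # import path and class name are assumptions

client = Client(
    aws_access_key_id="AKIA...",                                 # placeholder credentials
    aws_secret_access_key="...",
    operator_url="https://abc123.elb.us-west-2.amazonaws.com",   # example operator endpoint
)
```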
2 changes: 1 addition & 1 deletion pkg/workloads/cortex/client/setup.py
@@ -16,7 +16,7 @@

setup(
name="cortex",
version="master", # CORTEX_VERSION
version="0.14.0", # CORTEX_VERSION
description="",
author="Cortex Labs",
author_email="[email protected]",
2 changes: 1 addition & 1 deletion pkg/workloads/cortex/consts.py
@@ -12,4 +12,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.

CORTEX_VERSION = "master"
CORTEX_VERSION = "0.14.0"
