Merge pull request #17 from Jooho/0.12.rc0-sync
Bump odh 0.12.rc0 to rhoai main branch
Jooho authored May 21, 2024
2 parents 7933988 + 54c1e00 commit 1824bf9
Showing 30 changed files with 867 additions and 149 deletions.
4 changes: 1 addition & 3 deletions .github/workflows/build.yml
@@ -77,12 +77,10 @@ jobs:
          # print env vars for debugging
          cat "$GITHUB_ENV"
      - name: Build and push runtime image
        uses: docker/build-push-action@v4
        with:
          # for linux/s390x, maven errors due to missing io.grpc:protoc-gen-grpc-java:exe:linux-s390_64:1.51.1
          platforms: linux/amd64,linux/arm64/v8,linux/ppc64le
          platforms: linux/amd64,linux/arm64/v8,linux/ppc64le,linux/s390x
          target: runtime
          push: ${{ github.event_name == 'push' }}
          tags: ${{ env.IMAGE_NAME }}:${{ env.VERSION }}
87 changes: 87 additions & 0 deletions .github/workflows/codeql.yml
@@ -0,0 +1,87 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"

on:
  push:
    branches: ["main"]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: ["main"]
  schedule:
    - cron: '45 8 * * *'

jobs:
  analyze:
    name: Analyze
    # Runner size impacts CodeQL analysis time. To learn more, please see:
    #   - https://gh.io/recommended-hardware-resources-for-running-codeql
    #   - https://gh.io/supported-runners-and-hardware-resources
    #   - https://gh.io/using-larger-runners
    # Consider using larger runners for possible analysis time improvements.
    runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
    timeout-minutes: ${{ (matrix.language == 'swift' && 120) || 360 }}
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: ["java-kotlin", "python"]
        # CodeQL supports [ 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'swift' ]
        # Use only 'java-kotlin' to analyze code written in Java, Kotlin or both
        # Use only 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
        # Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Set up Java 17
        uses: actions/setup-java@v3
        with:
          java-version: '17'
          distribution: 'temurin'

      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v2
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.

          # For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
          # queries: security-extended,security-and-quality

      # Autobuild attempts to build any compiled languages (C/C++, C#, Go, Java, or Swift).
      # If this step fails, then you should remove it and run the build manually (see below)
      - name: Autobuild
        uses: github/codeql-action/autobuild@v2

      # ℹ️ Command-line programs to run using the OS shell.
      # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun

      # If the Autobuild fails above, remove it and uncomment the following three lines,
      # modifying them (or adding more) as needed to build your code; refer to the example below for guidance.

      # - run: |
      #     echo "Run, Build Application using script"
      #     ./location_of_script_within_repo/buildscript.sh

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v2
        with:
          category: "/language:${{matrix.language}}"
82 changes: 82 additions & 0 deletions .github/workflows/create-release-tag.yml
@@ -0,0 +1,82 @@
name: Create Tag and Release changelog superrelease noparam

on:
  workflow_dispatch:
    inputs:
      tag_name:
        description: 'Tag name for the new release'
        required: true

permissions:
  contents: write
  packages: write
  pull-requests: write

jobs:
  fetch-tag:
    runs-on: ubuntu-latest
    outputs:
      old_tag: ${{ steps.get_tag.outputs.old_tag_name }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          ref: ${{ github.ref }}
          fetch-depth: 0

      - name: Get latest tag
        id: get_tag
        run: |
          echo "old_tag_name=$(git ls-remote --tags origin | awk -F'/' '{print $3}' | grep -v '{}' | sort -V | tail -n1)" >> $GITHUB_OUTPUT
      - name: print tag
        id: print_tag
        run: |
          echo "Old Tag=${{ steps.get_tag.outputs.old_tag_name }}"
          echo "NEW_TAG=${{ github.event.inputs.tag_name }}" >> $GITHUB_ENV
          echo "$(basename ${{ github.ref }})"
      - name: Check if tag exists
        id: check_tag
        run: |
          import sys
          import subprocess
          tag_name = "${{ github.event.inputs.tag_name }}"
          command = ['git', 'tag', '-l', tag_name]
          output = subprocess.check_output(command, stderr=subprocess.STDOUT)
          if output.decode() != "":
              print(f"Error: Tag '{tag_name}' already exists.", file=sys.stderr)
              sys.exit(1)
          else:
              print(f"Tag '{tag_name}' does not exist.")
        shell: python
        continue-on-error: false

      - name: Create Tag
        id: create_tag
        run: |
          git config --global user.email "[email protected]"
          git config --global user.name "GitHub Actions"
          git tag -a ${{ github.event.inputs.tag_name }} -m "Prepare for ODH release ${{ github.event.inputs.tag_name }}"
          git push origin ${{ github.event.inputs.tag_name }}
  changelog:
    name: Changelog
    needs: fetch-tag
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          ref: ${{ github.ref }}

      - name: Create Release
        uses: softprops/action-gh-release@v2
        with:
          token: ${{ github.token }}
          tag_name: ${{ github.event.inputs.tag_name }}
          prerelease: false
          draft: false
          files: bin/*
          generate_release_notes: true
          name: ${{ github.event.inputs.tag_name }}
4 changes: 4 additions & 0 deletions CONTRIBUTING.md
@@ -3,6 +3,10 @@
We'd love to accept your patches and contributions to this project. There are
just a few small guidelines you need to follow.

## Developer guide

Check out the [developer guide](developer-guide.md) to learn about development practices for the project.

## Code reviews

All submissions, including submissions by project members, require review. We
47 changes: 7 additions & 40 deletions README.md
@@ -1,50 +1,17 @@
[![Build](https://github.com/kserve/modelmesh/actions/workflows/build.yml/badge.svg?branch=main)](https://github.com/kserve/modelmesh/actions/workflows/build.yml)

# ModelMesh

The ModelMesh framework is a mature, general-purpose model serving management/routing layer designed for high-scale, high-density and frequently-changing model use cases. It works with existing or custom-built model servers and acts as a distributed LRU cache for serving runtime models.

See [these charts](https://github.com/kserve/modelmesh/files/8854091/modelmesh-jun2022.pdf) for more information on supported features and design details.

For full Kubernetes-based deployment and management of ModelMesh clusters and models, see the [ModelMesh Serving](https://github.com/kserve/modelmesh-serving) repo. This includes a separate controller and provides K8s custom resource based management of ServingRuntimes and InferenceServices along with common, abstracted handling of model repository storage and ready-to-use integrations with some existing OSS model servers.

### Quick-Start

1. Wrap your model-loading and invocation logic in the [model-runtime.proto](./src/main/proto/current/model-runtime.proto) gRPC service interface (see the server sketch after this list)
- `runtimeStatus()` - called only during startup to obtain some basic configuration parameters from the runtime, such as version, capacity, model-loading timeout
- `loadModel()` - load the specified model into memory from backing storage, returning when complete
- `modelSize()` - determine size (mem usage) of previously-loaded model. If very fast, can be omitted and provided instead in the response from `loadModel`
- `unloadModel()` - unload previously loaded model, returning when complete
- Use a separate, arbitrary gRPC service interface for model inferencing requests. It can have any number of methods and they are assumed to be idempotent. See [predictor.proto](src/test/proto/predictor.proto) for a very simple example.
- The methods of your custom applier interface will be called only for already fully-loaded models.
2. Build a grpc server docker container which exposes these interfaces on localhost port 8085 or via a mounted unix domain socket
3. Extend the [Kustomize-based Kubernetes manifests](config) to use your docker image, and with appropriate mem and cpu resource allocations for your container
4. Deploy to a Kubernetes cluster as a regular Service, which will expose [this grpc service interface](./src/main/proto/current/model-mesh.proto) via kube-dns (you do not implement this yourself), and consume it using a gRPC client of your choice from your upstream service components
- `registerModel()` and `unregisterModel()` for registering/removing models managed by the cluster
- Any custom inferencing interface methods to invoke a previously-registered model, making sure to set a `mm-model-id` or `mm-vmodel-id` metadata header (or `-bin` suffix equivalents for UTF-8 ids); see the client sketch below
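The following is a minimal, non-authoritative Python sketch of steps 1, 2 and 4. The generated stub modules (`model_runtime_pb2*`, `model_mesh_pb2*`, `predictor_pb2*`), message and field names, service address, and the `my-model` id are illustrative assumptions rather than names defined by this repository; the real stubs come from compiling the protos above (e.g. with `grpcio-tools`).

```python
# Server-side sketch (steps 1-2). Stub module, message and field names are assumptions;
# regenerate them from model-runtime.proto before using anything like this.
from concurrent import futures
import grpc
import model_runtime_pb2 as mr_pb2          # hypothetical generated messages
import model_runtime_pb2_grpc as mr_grpc    # hypothetical generated service stubs

class MyModelRuntime(mr_grpc.ModelRuntimeServicer):
    def runtimeStatus(self, request, context):
        # Called at startup: report capacity, model-loading timeout, version, etc.
        return mr_pb2.RuntimeStatusResponse()

    def loadModel(self, request, context):
        # Load the requested model from backing storage into memory; return when complete.
        return mr_pb2.LoadModelResponse()

    def modelSize(self, request, context):
        # Report the memory footprint of an already-loaded model.
        return mr_pb2.ModelSizeResponse()

    def unloadModel(self, request, context):
        # Free the model's resources; return when complete.
        return mr_pb2.UnloadModelResponse()

server = grpc.server(futures.ThreadPoolExecutor(max_workers=8))
mr_grpc.add_ModelRuntimeServicer_to_server(MyModelRuntime(), server)
server.add_insecure_port("localhost:8085")   # port ModelMesh expects per step 2
server.start()
server.wait_for_termination()
```

On the calling side, an upstream client registers a model once and then routes inference requests to it with the `mm-model-id` header:

```python
# Client-side sketch (step 4). Service address, stub names and request fields are
# assumptions; use the stubs generated from model-mesh.proto and your inference proto.
import grpc
import model_mesh_pb2, model_mesh_pb2_grpc      # hypothetical stubs for model-mesh.proto
import predictor_pb2, predictor_pb2_grpc        # hypothetical stubs for your inference proto

channel = grpc.insecure_channel("modelmesh-service:8033")   # hypothetical kube-dns address
mm = model_mesh_pb2_grpc.ModelMeshStub(channel)
mm.registerModel(model_mesh_pb2.RegisterModelRequest(modelId="my-model"))

predictor = predictor_pb2_grpc.PredictorStub(channel)
response = predictor.predict(
    predictor_pb2.PredictRequest(),              # your request payload
    metadata=[("mm-model-id", "my-model")],      # routes the call to the registered model
)
```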

### Deployment and Upgrades

Prerequisites:

- An etcd cluster (shared or otherwise)
- A Kubernetes namespace with the etcd cluster connection details configured as a secret key in [this json format](https://github.com/IBM/etcd-java/blob/master/etcd-json-schema.md)
- Note that if provided, the `root_prefix` attribute _is_ used as a key prefix for all of the framework's use of etcd; a rough sketch of this connection JSON follows this list
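As a non-authoritative illustration, the sketch below composes a connection-config JSON and stores it as a Kubernetes secret with `kubectl`. Apart from `endpoints` and `root_prefix`, the field values, secret name, and key name are assumptions; consult the linked etcd-java schema for the authoritative format.

```python
# Sketch only: endpoint value, secret name (model-serving-etcd) and key (etcd_connection)
# are illustrative assumptions to be replaced with your own deployment's names.
import json
import subprocess

etcd_config = {
    "endpoints": "https://etcd.example.svc:2379",  # assumption: your etcd endpoints
    "root_prefix": "modelmesh",                    # optional key prefix (see note above)
}

with open("etcd-config.json", "w") as f:
    json.dump(etcd_config, f)

# Store the JSON under a secret key that your deployment manifests reference.
subprocess.run([
    "kubectl", "create", "secret", "generic", "model-serving-etcd",
    "--from-file=etcd_connection=etcd-config.json",
], check=True)
```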

From an operational standpoint, ModelMesh behaves just like any other homogeneous clustered microservice. This means it can be deployed, scaled, migrated and upgraded as a regular Kubernetes deployment without any special coordination needed, and without any impact to live service usage.

In particular, the procedure for live-upgrading either the framework container or the service runtime container is the same: change the image version in the deployment config yaml and then apply it with `kubectl apply -f model-mesh-deploy.yaml`.
For more information on supported features and design details, see [these charts](https://github.com/kserve/modelmesh/files/8854091/modelmesh-jun2022.pdf).

### Build
## Get Started

Sample build:
To learn more about and get started with the ModelMesh framework, check out [the documentation](/docs).

```bash
GIT_COMMIT=$(git rev-parse HEAD)
BUILD_ID=$(date '+%Y%m%d')-$(git rev-parse HEAD | cut -c -5)
IMAGE_TAG_VERSION="dev"
IMAGE_TAG=${IMAGE_TAG_VERSION}-$(git branch --show-current)_${BUILD_ID}
## Developer guide

docker build -t modelmesh:${IMAGE_TAG} \
--build-arg imageVersion=${IMAGE_TAG} \
--build-arg buildId=${BUILD_ID} \
--build-arg commitSha=${GIT_COMMIT} .
```
Use the [developer guide](developer-guide.md) to learn about development practices for the project.