update codetrans model
Signed-off-by: Xinyao Wang <[email protected]>
XinyaoWa committed Oct 23, 2024
1 parent a3f9811 commit b8389dc
Showing 11 changed files with 11 additions and 11 deletions.
2 changes: 1 addition & 1 deletion CodeTrans/README.md
@@ -28,7 +28,7 @@ By default, the LLM model is set to a default value as listed below:

| Service | Model |
| ------- | ----------------------------- |
-| LLM     | HuggingFaceH4/mistral-7b-grok |
+| LLM     | mistralai/Mistral-7B-Instruct-v0.3 |

Change the `LLM_MODEL_ID` in `docker_compose/set_env.sh` for your needs.
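For example, to pick a different model in a Docker Compose deployment (a minimal sketch; it assumes the commands run from `CodeTrans/docker_compose` and that the compose files read `LLM_MODEL_ID` from the environment):

```bash
# Override the default model before bringing the stack up.
# The plain `docker compose up -d` below is illustrative; use the compose file
# that matches your hardware (Xeon or Gaudi).
source set_env.sh
export LLM_MODEL_ID="mistralai/Mistral-7B-Instruct-v0.3"
docker compose up -d
```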

2 changes: 1 addition & 1 deletion CodeTrans/docker_compose/intel/cpu/xeon/README.md
@@ -59,7 +59,7 @@ By default, the LLM model is set to a default value as listed below:

| Service | Model |
| ------- | ----------------------------- |
-| LLM     | HuggingFaceH4/mistral-7b-grok |
+| LLM     | mistralai/Mistral-7B-Instruct-v0.3 |

Change the `LLM_MODEL_ID` below for your needs.

2 changes: 1 addition & 1 deletion CodeTrans/docker_compose/intel/hpu/gaudi/README.md
@@ -51,7 +51,7 @@ By default, the LLM model is set to a default value as listed below:

| Service | Model |
| ------- | ----------------------------- |
-| LLM     | HuggingFaceH4/mistral-7b-grok |
+| LLM     | mistralai/Mistral-7B-Instruct-v0.3 |

Change the `LLM_MODEL_ID` below for your needs.

2 changes: 1 addition & 1 deletion CodeTrans/docker_compose/set_env.sh
@@ -4,7 +4,7 @@
# SPDX-License-Identifier: Apache-2.0


export LLM_MODEL_ID="HuggingFaceH4/mistral-7b-grok"
export LLM_MODEL_ID="mistralai/Mistral-7B-Instruct-v0.3"
export TGI_LLM_ENDPOINT="http://${host_ip}:8008"
export MEGA_SERVICE_HOST_IP=${host_ip}
export LLM_SERVICE_HOST_IP=${host_ip}
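Once the stack is running, the TGI endpoint configured by `TGI_LLM_ENDPOINT` can be exercised directly as a quick sanity check (a sketch; it assumes the TGI container is already serving on port 8008 and that `host_ip` is exported in the current shell):

```bash
# Minimal smoke test against the TGI /generate endpoint.
curl "http://${host_ip}:8008/generate" \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"inputs": "Translate this Python to Go: print(\"hello\")", "parameters": {"max_new_tokens": 64}}'
```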
2 changes: 1 addition & 1 deletion CodeTrans/kubernetes/intel/README.md
@@ -14,7 +14,7 @@ By default, the LLM model is set to a default value as listed below:

|Service |Model |
|---------|-------------------------|
-|LLM      |HuggingFaceH4/mistral-7b-grok|
+|LLM      |mistralai/Mistral-7B-Instruct-v0.3|

Change the `MODEL_ID` in `codetrans.yaml` for your needs.
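For instance, to switch the model and redeploy (a sketch; it assumes `kubectl` is pointed at the target cluster and that `codetrans.yaml` is the manifest in the current directory):

```bash
# Point the manifest at a different model, then (re)apply it.
# MY_MODEL is a placeholder for any TGI-compatible Hugging Face model ID.
MY_MODEL="mistralai/Mistral-7B-Instruct-v0.3"
sed -i "s|MODEL_ID: .*|MODEL_ID: ${MY_MODEL}|" codetrans.yaml
kubectl apply -f codetrans.yaml
```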

2 changes: 1 addition & 1 deletion CodeTrans/kubernetes/intel/README_gmc.md
@@ -13,7 +13,7 @@ By default, the LLM model is set to a default value as listed below:

|Service |Model |
|---------|-------------------------|
-|LLM      |HuggingFaceH4/mistral-7b-grok|
+|LLM      |mistralai/Mistral-7B-Instruct-v0.3|

Change the `MODEL_ID` in `codetrans_xeon.yaml` for your needs.

@@ -29,6 +29,6 @@ spec:
internalService:
serviceName: tgi-service
config:
-MODEL_ID: HuggingFaceH4/mistral-7b-grok
+MODEL_ID: mistralai/Mistral-7B-Instruct-v0.3
endpoint: /generate
isDownstreamService: true
@@ -64,7 +64,7 @@ metadata:
app.kubernetes.io/version: "2.1.0"
app.kubernetes.io/managed-by: Helm
data:
MODEL_ID: "HuggingFaceH4/mistral-7b-grok"
MODEL_ID: "mistralai/Mistral-7B-Instruct-v0.3"
PORT: "2080"
HF_TOKEN: "insert-your-huggingface-token-here"
http_proxy: ""
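The `HF_TOKEN` placeholder above must be replaced with a real Hugging Face token before the weights can be pulled; `mistralai/Mistral-7B-Instruct-v0.3` is a gated model, so the token should belong to an account that has accepted its terms. A hedged sketch of fixing this on an already-deployed ConfigMap follows; `CONFIGMAP_NAME` is a stand-in, since the ConfigMap's name is not shown in this diff:

```bash
# Patch the deployed ConfigMap with a real token instead of the placeholder.
kubectl patch configmap CONFIGMAP_NAME \
  --type merge \
  -p '{"data":{"HF_TOKEN":"'"${HF_TOKEN}"'"}}'
```

Restarting the TGI pod afterwards may be needed for the new value to take effect.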
@@ -29,6 +29,6 @@ spec:
internalService:
serviceName: tgi-gaudi-svc
config:
-MODEL_ID: HuggingFaceH4/mistral-7b-grok
+MODEL_ID: mistralai/Mistral-7B-Instruct-v0.3
endpoint: /generate
isDownstreamService: true
@@ -64,7 +64,7 @@ metadata:
app.kubernetes.io/version: "2.1.0"
app.kubernetes.io/managed-by: Helm
data:
MODEL_ID: "HuggingFaceH4/mistral-7b-grok"
MODEL_ID: "mistralai/Mistral-7B-Instruct-v0.3"
PORT: "2080"
HF_TOKEN: "insert-your-huggingface-token-here"
http_proxy: ""
2 changes: 1 addition & 1 deletion supported_examples.md
@@ -73,7 +73,7 @@ This document introduces the supported examples of GenAIExamples. The supported

| Framework | LLM | Serving | HW | Description |
| ------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------- | --------------------------------------------------------------- | ----------- | ---------------- |
-| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [HuggingFaceH4/mistral-7b-grok](https://huggingface.co/HuggingFaceH4/mistral-7b-grok) | [TGI](https://github.com/huggingface/text-generation-inference) | Xeon/Gaudi2 | Code Translation |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) | [TGI](https://github.com/huggingface/text-generation-inference) | Xeon/Gaudi2 | Code Translation |

### DocSum

