[NOTE] The following values must be set before you can deploy: `HUGGINGFACEHUB_API_TOKEN`.
You can also customize the `MODEL_ID` if needed.
Make sure you have created the directory `/mnt/opea-models` to cache the model on the node where the CodeTrans workload is running. Otherwise, modify the `codetrans.yaml` file and change `model-volume` to a directory that exists on the node.
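For reference, a host-path model volume in a manifest like `codetrans.yaml` typically looks along these lines; the field values below are an illustrative sketch, so check them against the actual manifest:

```yaml
# Illustrative sketch of the model cache volume (names are assumptions;
# verify against the real codetrans.yaml before editing).
volumes:
  - name: model-volume
    hostPath:
      path: /mnt/opea-models   # change this if you use a different directory on the node
      type: Directory
```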
By default, the LLM model is set to the value listed below:

| Service | Model |
| ------- | ----- |
| LLM | HuggingFaceH4/mistral-7b-grok |
Change the `MODEL_ID` in `codetrans.yaml` as needed.
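One way to make that change non-interactively is an in-place `sed` edit. The snippet below is a hedged sketch against a synthetic manifest fragment (the replacement model name is an example only); in practice you would run the same `sed` against `codetrans.yaml`:

```shell
# Synthetic manifest fragment for demonstration
# (in practice, run sed directly against codetrans.yaml).
cat > /tmp/codetrans-demo.yaml <<'EOF'
env:
  - name: MODEL_ID
    value: "HuggingFaceH4/mistral-7b-grok"
EOF

# Swap the default model for one of your choice (example name only)
NEW_MODEL="mistralai/Mistral-7B-Instruct-v0.3"
sed -i "s|HuggingFaceH4/mistral-7b-grok|${NEW_MODEL}|g" /tmp/codetrans-demo.yaml

grep "value" /tmp/codetrans-demo.yaml
```

Note that `sed -i` as written assumes GNU sed (Linux); on macOS the flag takes an argument (`sed -i ''`).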
To deploy on Xeon:

```
cd GenAIExamples/CodeTrans/kubernetes/intel/cpu/xeon/manifests
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" codetrans.yaml
kubectl apply -f codetrans.yaml
```
To deploy on Gaudi:

```
cd GenAIExamples/CodeTrans/kubernetes/intel/hpu/gaudi/manifests
export HUGGINGFACEHUB_API_TOKEN="YourOwnToken"
sed -i "s/insert-your-huggingface-token-here/${HUGGINGFACEHUB_API_TOKEN}/g" codetrans.yaml
kubectl apply -f codetrans.yaml
```
To verify the installation, run `kubectl get pod` to make sure all pods are running.

Then run `kubectl port-forward svc/codetrans 7777:7777` to expose the CodeTrans service for access.

Open another terminal and run the following command to verify that the service is working:
```
curl http://localhost:7777/v1/codetrans \
    -H 'Content-Type: application/json' \
    -d '{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n    fmt.Println(\"Hello, World!\");\n}"}'
```
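Because the inline JSON above is easy to mistype, one optional approach is to keep the request body in a file and validate it before sending. A minimal sketch (the file path is arbitrary):

```shell
# Store the CodeTrans request body in a file
# (same fields as the inline payload used with curl above).
cat > /tmp/codetrans_request.json <<'EOF'
{"language_from": "Golang", "language_to": "Python", "source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n    fmt.Println(\"Hello, World!\")\n}"}
EOF

# Fail fast if the payload is not well-formed JSON
python3 -m json.tool /tmp/codetrans_request.json > /dev/null && echo "payload OK"
```

curl can then send the file with `-d @/tmp/codetrans_request.json` instead of the inline string.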
To consume the service through nginx, run the command below, where `${host_ip}` is the external IP of your server:
```
curl http://${host_ip}:30789/v1/codetrans \
    -H 'Content-Type: application/json' \
    -d '{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n    fmt.Println(\"Hello, World!\");\n}"}'
```