From 03564bc8c0fcee526ccf91224f4d304a8bf75570 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Fri, 8 Nov 2024 04:47:21 +0000
Subject: [PATCH] [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci
---
 FaqGen/docker_compose/amd/gpu/rocm/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/FaqGen/docker_compose/amd/gpu/rocm/README.md b/FaqGen/docker_compose/amd/gpu/rocm/README.md
index bbabeb43a..ae06d4c09 100644
--- a/FaqGen/docker_compose/amd/gpu/rocm/README.md
+++ b/FaqGen/docker_compose/amd/gpu/rocm/README.md
@@ -2,7 +2,7 @@
 
 ### Required Models
 
-Default model is "meta-llama/Meta-Llama-3-8B-Instruct". Change "LLM_MODEL_ID" in environment variables below if you want to use another model.
+Default model is "meta-llama/Meta-Llama-3-8B-Instruct". Change "LLM_MODEL_ID" in environment variables below if you want to use another model. For gated models, you also need to provide a [HuggingFace token](https://huggingface.co/docs/hub/security-tokens) in the "HUGGINGFACEHUB_API_TOKEN" environment variable.
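
For context, the two variables named in the updated README sentence are the ones a user would set before deploying the compose stack. Below is a minimal sketch in the export style such READMEs typically use; the placeholder token value and the surrounding workflow are assumptions, not part of this patch.

```bash
# Sketch only: set the model ID and, for gated models, a HuggingFace access token
# before bringing up the FaqGen compose stack. "hf_xxx" is a placeholder value.
export LLM_MODEL_ID="meta-llama/Meta-Llama-3-8B-Instruct"
export HUGGINGFACEHUB_API_TOKEN="hf_xxx"   # only needed for gated models such as Meta-Llama-3
```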