diff --git a/FaqGen/docker_compose/amd/gpu/rocm/README.md b/FaqGen/docker_compose/amd/gpu/rocm/README.md
index bbabeb43a..ae06d4c09 100644
--- a/FaqGen/docker_compose/amd/gpu/rocm/README.md
+++ b/FaqGen/docker_compose/amd/gpu/rocm/README.md
@@ -2,7 +2,7 @@
 ### Required Models
 
-Default model is "meta-llama/Meta-Llama-3-8B-Instruct". Change "LLM_MODEL_ID" in environment variables below if you want to use another model.
+Default model is "meta-llama/Meta-Llama-3-8B-Instruct". Change "LLM_MODEL_ID" in environment variables below if you want to use another model. For gated models, you also need to provide a [HuggingFace token](https://huggingface.co/docs/hub/security-tokens) in the "HUGGINGFACEHUB_API_TOKEN" environment variable.
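
For context, a minimal sketch of how the two variables mentioned in the changed paragraph might be set before launching the compose stack. The variable names come from the README; the token value is a placeholder, and the exact set-up steps live elsewhere in that document.

```bash
# Model served by the LLM backend (default from the README).
export LLM_MODEL_ID="meta-llama/Meta-Llama-3-8B-Instruct"

# Needed only for gated models such as Meta-Llama-3; placeholder value.
export HUGGINGFACEHUB_API_TOKEN="<your-huggingface-access-token>"
```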