
Commit

vllm version
mobicham committed Dec 23, 2023
1 parent 2bca9f1 commit 3585e25
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion Readme.md
@@ -100,7 +100,7 @@ HQQModelForCausalLM.quantize_model_(model, quant_config=quant_config)
```

### VLLM 🚀
-By default, VLLM is not installed to avoid CUDA version problems. Make sure you install the right version that matches your CUDA settings:
+By default, VLLM is not installed to avoid CUDA version problems. Make sure you install the right version that matches your CUDA settings (vllm <= 0.2.2):
https://docs.vllm.ai/en/latest/getting_started/installation.html

After installation, you can quantize VLLM models as follows:
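For context, the quantization example that the diff truncates presumably follows the same pattern as the `HQQModelForCausalLM.quantize_model_` call visible in the hunk header above. Below is a minimal sketch only, assuming HQQ exposes a vLLM wrapper class (here called `HQQLLM` in `hqq.engine.vllm`) alongside its `BaseQuantizeConfig` helper; the wrapper's name, its constructor signature, and the model id are illustrative assumptions, not taken from this commit:

```python
# Sketch of quantizing a vLLM model with HQQ; wrapper class/method names are
# assumptions mirroring the HF-engine call shown in the diff context above.
# Requires a vLLM build matching your CUDA setup, capped per this commit at
# vllm <= 0.2.2, e.g. installed via: pip install "vllm<=0.2.2"
from hqq.core.quantize import BaseQuantizeConfig
from hqq.engine.vllm import HQQLLM  # assumed module path / class name

# Example quantization settings (4-bit, group size 64)
quant_config = BaseQuantizeConfig(nbits=4, group_size=64)

# Hypothetical model id used purely for illustration
model = HQQLLM(model="mistralai/Mistral-7B-Instruct-v0.2")
model.quantize_model_(quant_config=quant_config)  # mirrors the HF-engine call
```

The `vllm <= 0.2.2` cap introduced in this commit suggests the wrapper had only been validated against vLLM releases up to that version at the time.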
