Please follow the primary README.md of this repo.
Windows users may stumble when installing the package `triton`. You can choose to run on CPU, without `xformers` and `triton` installed. To use CUDA, please refer to issue#24 to try to solve the problem of `triton` installation.
You can follow the steps below to run on a CPU or MPS device.
- Install torch (Preview/Nightly version).

  ```shell
  # MPS acceleration is available on macOS 12.3+
  pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu
  ```

  Check more details in the official document.
- The packages `triton` and `xformers` are not needed, since they only work with CUDA. Remove them from the requirements. Your requirements.txt should look like:

  ```
  # requirements.txt
  pytorch_lightning==1.4.2
  einops
  open-clip-torch
  omegaconf
  torchmetrics==0.6.0
  opencv-python-headless
  scipy
  matplotlib
  lpips
  gradio
  chardet
  transformers
  facexlib
  ```

  Then install the dependencies:

  ```shell
  pip install -r requirements.txt
  ```
- Run the inference script and specify `--device cpu` or `--device mps`. Using MPS can accelerate your inference. You can specify `--tiled` and related arguments to avoid OOM.