Replies: 3 comments 1 reply
-
I implemented a version of HF cache support this weekend and it seems to work. There's more to be done, but the models can be queried in the cache and loaded. I'll issue a pull request if others are interested.
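For anyone wondering what "queried in the cache" can look like in practice: the huggingface_hub package already ships a public cache-scanning API. A minimal sketch using that API (this is not necessarily how the pull request does it):

```python
# Sketch: list locally cached models via huggingface_hub's cache scanner.
# Assumes `huggingface_hub` is installed; the output formatting is illustrative.
from huggingface_hub import scan_cache_dir

cache_info = scan_cache_dir()  # defaults to ~/.cache/huggingface/hub
for repo in cache_info.repos:
    if repo.repo_type == "model":
        for revision in repo.revisions:
            # snapshot_path is the directory a loader could read weights from
            print(repo.repo_id, revision.commit_hash[:8], revision.snapshot_path)
```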
-
To whoever is looking for a quick solution: I've got a little hack here, until the codebase properly loads the cached models.

If you are using Linux or WSL2, create a script (I called mine `make_models_dir.py`):

```python
import os
import sys
from pathlib import Path
import shutil


def main(base_dir):
    base_path = Path(base_dir)
    # Make sure base_dir exists
    if not base_path.exists():
        os.mkdir(base_dir)
    # Clear out everything in base_dir
    for item in base_path.iterdir():
        if item.is_file():
            os.remove(item)
        else:
            shutil.rmtree(item)
    # Iterate over cached repos (directories starting with 'models--')
    # and symlink each snapshot directory into base_dir
    for org_repo_path in Path(os.path.join(Path.home(), '.cache/huggingface/hub')).glob('models--*'):
        org_repo_name = org_repo_path.stem[len("models--"):]
        hash_path = next((org_repo_path / 'snapshots').iterdir())
        hash_value = hash_path.stem
        symlink_path = base_path / f"{org_repo_name}--{hash_value}"
        os.symlink(hash_path, symlink_path)


if __name__ == "__main__":
    main(sys.argv[1])
```

Then I called it like this:

```
python3 make_models_dir.py ~/dev/huggingface-models-sym-link
```

And after that, start your webui like this:

```
./start_linux.sh --model-dir ~/dev/huggingface-models-sym-link
```

If you are using Windows: go run this in WSL2. If you just HAVE TO run it in Windows:

```
./start_windows.sh --model-dir <cache-dir>\models--ORG--REPO\snapshots
```

That gives you a slightly awkward models list, but at least you know what it is. Worked for me. Enjoy 🤗
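As a quick sanity check after running the script, you can confirm the symlinks actually resolve to snapshot directories. A small sketch, assuming the same example directory as above:

```python
# Sketch: verify the symlinks created by make_models_dir.py resolve to real
# snapshot directories. The directory below is the example path used earlier.
from pathlib import Path

model_dir = Path.home() / "dev/huggingface-models-sym-link"
for link in sorted(model_dir.iterdir()):
    target = link.resolve()
    status = "ok" if target.is_dir() else "BROKEN"
    print(f"{link.name} -> {target} [{status}]")
```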
-
I've been using Python to fine-tune various models with LoRA on an Ubuntu 22.04 LTS system.
The transformers library caches the models in:
`~/.cache/huggingface`
which I believe is the standard location for caching. I have hundreds of gigabytes of models.
Is it possible to set up oobabooga to use the existing Hugging Face cache instead of downloading duplicate copies to
`text-generation-webui/models`?
Likewise for local LoRA fine-tunes? We don't want to push our LoRAs to a public repository. I tried entering the path to my local LoRA in the "Download custom model or LoRA" field on the off chance that it would work, but it does not (it appears to only query the public Hugging Face repositories).
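(For reference, this is roughly what we do directly in Python today: a rough sketch with placeholder model name and paths, using plain transformers + peft rather than anything oobabooga-specific. The base model is reused from ~/.cache/huggingface if it's already there, and the LoRA is attached from a local directory without touching the Hub.)

```python
# Sketch with placeholder names: "org/base-model" and the adapter path are not real.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("org/base-model")      # resolved from the HF cache if present
tokenizer = AutoTokenizer.from_pretrained("org/base-model")
model = PeftModel.from_pretrained(base, "/path/to/my-local-lora")  # local adapter dir (adapter_config.json etc.)
```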