
The last step, python gradio_app.py, reports an error #91

Open
lgkt opened this issue Jun 12, 2024 · 6 comments

Comments

@lgkt

lgkt commented Jun 12, 2024

(omost) PS D:\ai\Omost> python gradio_app.py
D:\ai\Omost\lib_omost\pipeline.py:64: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
alphas_cumprod = torch.tensor(np.cumprod(alphas, axis=0), dtype=torch.float32)
Unload to CPU: AutoencoderKL
Unload to CPU: CLIPTextModel
Unload to CPU: CLIPTextModel
Unload to CPU: UNet2DConditionModel
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.55s/it]
WARNING:root:Some parameters are on the meta device device because they were offloaded to the cpu.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
WARNING:accelerate.big_modeling:You shouldn't move a model that is dispatched using accelerate hooks.
Traceback (most recent call last):
File "D:\ai\Omost\gradio_app.py", line 87, in <module>
memory_management.unload_all_models(llm_model)
File "D:\ai\Omost\lib_omost\memory_management.py", line 67, in unload_all_models
return load_models_to_gpu([])
File "D:\ai\Omost\lib_omost\memory_management.py", line 42, in load_models_to_gpu
m.to(cpu)
File "C:\Users\lgkt\AppData\Roaming\Python\Python310\site-packages\accelerate\big_modeling.py", line 455, in wrapper
raise RuntimeError("You can't move a model that has some modules offloaded to cpu or disk.")
RuntimeError: You can't move a model that has some modules offloaded to cpu or disk.

@xhoxye

xhoxye commented Jun 12, 2024

(screenshot: QQ截图20240612081831)

If you're using the original code, you can try other PRs, or modify it yourself:

https://github.com/xhoxye/Omost_CN

@lgkt
Author

lgkt commented Jun 12, 2024

It's the original code. I'm not good at coding, so I don't know how to change it... I just want to use Omost, and the time limit on the HF Space is too restrictive. If it's your PR, it probably has the same problem, right? @xhoxye

@lgkt
Author

lgkt commented Jun 12, 2024

"D:\ai\Omost\lib_omost\memory_management.py"这个文件我倒是找到了,但是咋改呢
@lllyasviel 敏神

@xhoxye

xhoxye commented Jun 12, 2024

@lgkt
I don't have this problem when I use it myself, so I'm not sure whether it's an issue with the original code, since it is only a demo. What are your computer specs?

#81

This is my PR.

@lgkt
Author

lgkt commented Jun 12, 2024

I have a 3070 with 8 GB of VRAM. I looked at your PR, and the memory management file is the same as the original one.
@xhoxye

@takishun0326

I resolved the issue by specifying a particular GPU in the code.

There were others experiencing a similar problem, and I believe the cause was that a GPU that did not meet the 8 GB memory requirement was being selected. In such cases, you can specify a GPU with more than 8 GB of VRAM in gradio_app.py, for example with os.environ["CUDA_VISIBLE_DEVICES"] = "0" to select GPU 0, or resolve it by some other method.
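
A minimal sketch of that workaround, assuming a machine with more than one CUDA GPU (this is not the official Omost fix): the environment variable has to be set before torch, or any lib_omost module that imports it, is loaded, and the index "0" below is only an example; substitute the index of a card with at least 8 GB of VRAM.

# Minimal sketch of the workaround above: pin the process to one suitable GPU
# before torch or lib_omost are imported, by putting these lines at the very top
# of gradio_app.py. "0" is only an example index.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch  # imported after the env var is set, so only the chosen GPU is visible

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Using {props.name} ({props.total_memory / 1024**3:.1f} GB VRAM)")
else:
    print("No CUDA device visible; check the CUDA_VISIBLE_DEVICES value.")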
