I configured the environment strictly following section 1.1 (environment configuration).
First I hit a TypeError, after which I updated the transformers package.
I then ran `python examples/generate_lora.py --base_model zjunlp/knowlm-13b-zhixi --run_ie_cases` and ran out of GPU memory.
So I changed the command to `CUDA_VISIBLE_DEVICES=0,1 python examples/generate_lora.py --base_model zjunlp/knowlm-13b-zhixi --run_ie_cases`, but it printed the same message; GPU 1 does not seem to be used at all. The server has two TITAN cards with 48 GB of memory combined.
I then changed the command to `CUDA_VISIBLE_DEVICES=0,1 python examples/generate_lora.py --base_model zjunlp/knowlm-13b-zhixi --run_ie_cases --multi_gpu`, which produced the result below.
If I follow the error message and run `pip install protobuf==3.20.0`, the command then fails with the error below.
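(For the out-of-memory step, one generic way to spread a 13B checkpoint across both cards is transformers' `device_map="auto"` loading, backed by accelerate. This is a minimal sketch under that assumption, not the actual loading code in `examples/generate_lora.py`:

```python
# Hedged sketch: shard the base model across all visible GPUs using
# accelerate's automatic device map. Assumes transformers + accelerate
# are installed; NOT the repo's own generate_lora.py logic.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "zjunlp/knowlm-13b-zhixi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # fp16 halves memory vs. fp32
    device_map="auto",          # shards layers across all visible GPUs
)
print(model.hf_device_map)      # shows which layers landed on which GPU
```

If `hf_device_map` only mentions `cuda:0`, the second card is not being picked up at all.)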
Hi, the protobuf version in our environment is 5.26.1. If that still doesn't work, please tell me your transformers version number so that I can reproduce your problem; our current environment uses transformers 4.41.2. :)
| Package | Version |
| --- | --- |
| protobuf | 5.26.1 |
| transformers | 4.41.2 |
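To double-check what is actually installed, a quick generic check (standard library only, nothing project-specific) is:

```python
# Print the installed versions of the two packages discussed above.
from importlib.metadata import version

for pkg in ("protobuf", "transformers"):
    print(pkg, version(pkg))
```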
Hi, I installed protobuf 5.26.1, but I still get `RecursionError: maximum recursion depth exceeded`. My transformers version is 4.40.2.
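(One blunt, generic workaround sometimes tried for this class of RecursionError, e.g. during slow-to-fast tokenizer conversion, is raising CPython's recursion limit before loading. This is only a guess, not a confirmed fix for this issue:

```python
import sys

# Hypothetical workaround: CPython's default recursion limit is 1000;
# raising it may get past the crash, but it does not fix an underlying
# protobuf/transformers version mismatch.
sys.setrecursionlimit(10000)
```

Aligning transformers with the maintainers' 4.41.2 may be the more direct thing to try first, since 4.40.2 differs from their environment.)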
Did you solve this in the end? I'm running into the same problem.
No.