vllm 0.6.4.post1 support #15

Open

To0nyZ opened this issue Nov 21, 2024 · 1 comment

To0nyZ commented Nov 21, 2024

I added telechat.py under vllm/model_executor/models/ and updated it for the removal of is_hip

(relevant vLLM change: vllm-project/vllm@4e2d95e#diff-e3f867c9588601222d74d39110d8e48cc43d5f6107436150faf1749a3d091419R211)

I registered the model in vllm/model_executor/models/registry.py with "TeleChatForCausalLM": ("telechat", "TeleChatForCausalLM"), # telechat

It still fails with the following error:
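For context, each entry in that registry maps the architecture string from the checkpoint's config.json to a (module, class) pair that vLLM imports lazily. A stdlib-only sketch of that lookup pattern (the dict and function names here are illustrative, not vLLM's actual internals):

```python
import importlib

# Illustrative registry: architecture name -> (module name, class name).
# In vLLM the real table lives in vllm/model_executor/models/registry.py.
_MODELS = {
    "TeleChatForCausalLM": ("telechat", "TeleChatForCausalLM"),
}

def resolve_model_cls(architecture: str,
                      package: str = "vllm.model_executor.models"):
    """Return the model class registered for `architecture`."""
    if architecture not in _MODELS:
        raise ValueError(f"Architecture {architecture!r} is not registered")
    module_name, class_name = _MODELS[architecture]
    # Import the module lazily and pull the class off it.
    module = importlib.import_module(f"{package}.{module_name}")
    return getattr(module, class_name)
```

If the architecture string in the model's config.json does not exactly match the registry key, the model is treated as unsupported.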

```
Traceback (most recent call last):
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 197, in build_async_engine_client_from_engine_args
    engine_config = engine_args.create_engine_config()
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/engine/arg_utils.py", line 959, in create_engine_config
    model_config = self.create_model_config()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/engine/arg_utils.py", line 891, in create_model_config
    return ModelConfig(
           ^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/config.py", line 264, in __init__
    supported_tasks, task = self._resolve_task(task, self.hf_config)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/config.py", line 347, in _resolve_task
    selected_task = next(iter(supported_tasks_lst))
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
StopIteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 643, in <module>
    uvloop.run(run_server(args))
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/uvloop/__init__.py", line 105, in run
    return runner.run(wrapper())
           ^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 609, in run_server
    async with build_async_engine_client(args) as engine_client:
  File "/root/miniconda3/envs/textgen/lib/python3.11/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 113, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
  File "/root/miniconda3/envs/textgen/lib/python3.11/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
RuntimeError: async generator raised StopIteration
ERROR 11-21 11:05:16 engine.py:366]
Traceback (most recent call last):
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 114, in from_engine_args
    engine_config = engine_args.create_engine_config()
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/engine/arg_utils.py", line 959, in create_engine_config
    model_config = self.create_model_config()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/engine/arg_utils.py", line 891, in create_model_config
    return ModelConfig(
           ^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/config.py", line 264, in __init__
    supported_tasks, task = self._resolve_task(task, self.hf_config)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/config.py", line 347, in _resolve_task
    selected_task = next(iter(supported_tasks_lst))
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
StopIteration
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/root/miniconda3/envs/textgen/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/root/miniconda3/envs/textgen/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 368, in run_mp_engine
    raise e
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 114, in from_engine_args
    engine_config = engine_args.create_engine_config()
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/engine/arg_utils.py", line 959, in create_engine_config
    model_config = self.create_model_config()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/engine/arg_utils.py", line 891, in create_model_config
    return ModelConfig(
           ^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/config.py", line 264, in __init__
    supported_tasks, task = self._resolve_task(task, self.hf_config)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/miniconda3/envs/textgen/lib/python3.11/site-packages/vllm/config.py", line 347, in _resolve_task
    selected_task = next(iter(supported_tasks_lst))
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
StopIteration
```
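The failure mode in the tracebacks can be reproduced with the standard library alone: `_resolve_task` ends in `next(iter(supported_tasks_lst))`, so if the registered model advertises no supported tasks the list is empty and a bare `StopIteration` escapes; raised inside an async generator (as in `build_async_engine_client`), Python converts it to the `RuntimeError: async generator raised StopIteration` seen above. A minimal sketch (function names here are illustrative, not vLLM's):

```python
import asyncio
import contextlib

def resolve_task(supported_tasks: list) -> str:
    # Mirrors the failing line in vllm/config.py: pick the first task.
    return next(iter(supported_tasks))

# A model that advertises at least one task resolves fine:
assert resolve_task(["generate"]) == "generate"

# An architecture with no supported tasks raises bare StopIteration:
try:
    resolve_task([])
except StopIteration:
    print("StopIteration: empty task list")

# Inside an async generator the interpreter wraps that StopIteration
# into a RuntimeError -- the second traceback above:
@contextlib.asynccontextmanager
async def build_client():
    resolve_task([])  # StopIteration escapes the async generator frame
    yield "client"

async def main():
    try:
        async with build_client():
            pass
    except RuntimeError as exc:
        print(f"caught: {exc}")

asyncio.run(main())
```

So the `RuntimeError` is only a symptom; the root cause is that the new `TeleChatForCausalLM` registration resolves to no supported task.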

shunxing12345 (Contributor) commented:

vLLM already supports TeleChat2. You can pull the latest vLLM code from the official repository, install it, and use TeleChat2 with that build.
