[V1] Add RayExecutor support for AsyncLLM (api server) #11712
@@ -23,6 +23,7 @@
 from vllm.v1.engine.detokenizer import Detokenizer
 from vllm.v1.engine.processor import Processor
 from vllm.v1.executor.abstract import Executor
+from vllm.v1.executor.ray_utils import initialize_ray_cluster

 logger = init_logger(__name__)
@@ -150,7 +151,11 @@ def _get_executor_cls(cls, vllm_config: VllmConfig) -> Type[Executor]:
         executor_class: Type[Executor]
         distributed_executor_backend = (
             vllm_config.parallel_config.distributed_executor_backend)
-        if distributed_executor_backend == "mp":
+        if distributed_executor_backend == "ray":

Reviewer: Instead of repeating this logic in both […]
Author: Good catch. Yeah, we should unify them. cc @ruisearch42

+            initialize_ray_cluster(vllm_config.parallel_config)

Reviewer: This is called in […]

+            from vllm.v1.executor.ray_executor import RayExecutor
+            executor_class = RayExecutor
+        elif distributed_executor_backend == "mp":
             from vllm.v1.executor.multiproc_executor import MultiprocExecutor
             executor_class = MultiprocExecutor
         else:
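The dispatch pattern in this hunk can be sketched independently of vLLM: pick an executor class from a config string, and defer each backend's import until that backend is actually selected, so an uninstalled optional dependency never breaks the common path. The names below (`get_executor_cls`, the stub executor classes) are illustrative stand-ins, not vLLM's actual API.

```python
# Sketch of lazy, config-driven executor selection (hypothetical names).
# Each factory performs its "import" only when its backend is chosen.

def _load_ray_executor():
    # In vLLM this branch would do:
    #   from vllm.v1.executor.ray_executor import RayExecutor
    class RayExecutor:  # stand-in so the sketch is self-contained
        backend = "ray"
    return RayExecutor

def _load_mp_executor():
    # In vLLM:
    #   from vllm.v1.executor.multiproc_executor import MultiprocExecutor
    class MultiprocExecutor:  # stand-in
        backend = "mp"
    return MultiprocExecutor

# Registry maps backend names to deferred-import factories.
_FACTORIES = {"ray": _load_ray_executor, "mp": _load_mp_executor}

def get_executor_cls(backend: str):
    try:
        return _FACTORIES[backend]()
    except KeyError:
        raise ValueError(
            f"Unsupported distributed executor backend: {backend!r}") from None

print(get_executor_cls("ray").backend)  # ray
print(get_executor_cls("mp").backend)   # mp
```

A registry like this also addresses the reviewer's duplication concern: both the sync and async engines could share one selection function instead of repeating the if/elif chain.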
Reviewer: This should be a lazy import, right? Doesn't this import Ray?

Reply: This will import Ray, but it won't error out if Ray is not installed. It errors out only when `initialize_ray_cluster` is called.