[CI/Build] Print running script to enhance CI log readability #10594

Merged: 4 commits, Nov 24, 2024
Changes shown from 1 commit
11 changes: 11 additions & 0 deletions .buildkite/test-pipeline.yaml
@@ -52,6 +52,7 @@ steps:
 - tests/worker
 - tests/test_lazy_torch_compile.py
 commands:
+- echo "Running test_lazy_torch_compile.py..." # print running script to enhance CI log readability
 - python3 test_lazy_torch_compile.py
 - pytest -v -s mq_llm_engine # MQLLMEngine
 - pytest -v -s async_engine # AsyncLLMEngine
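Each added `echo` line prints the name of the script that is about to run, so a failure in the CI log can be attributed to the right script at a glance. For comparison, here is a minimal sketch of the generic shell alternative, `set -x` command tracing; this is hypothetical and not part of the PR, which prefers targeted echoes because tracing labels every command, including lines that need no labeling:

```bash
# Hypothetical alternative sketch: bash command tracing.
# `set -x` makes the shell print each command before executing it,
# which labels the log automatically but also traces commands
# (installs, pytest invocations) that need no extra labeling.
set -x
python3 test_lazy_torch_compile.py
pytest -v -s mq_llm_engine
```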
@@ -182,15 +183,25 @@ steps:
 - examples/
 commands:
 - pip install awscli tensorizer # for llava example and tensorizer test
+- echo "Running offline_inference.py..." # print running script to enhance CI log readability
 - python3 offline_inference.py
+- echo "Running cpu_offload.py..."
 - python3 cpu_offload.py
+- echo "Running offline_inference_chat.py..."
 - python3 offline_inference_chat.py
+- echo "Running offline_inference_with_prefix.py..."
 - python3 offline_inference_with_prefix.py
+- echo "Running llm_engine_example.py..."
 - python3 llm_engine_example.py
+- echo "Running offline_inference_vision_language.py..."
 - python3 offline_inference_vision_language.py
+- echo "Running offline_inference_vision_language_multi_image.py..."
 - python3 offline_inference_vision_language_multi_image.py
+- echo "Running tensorize_vllm_model.py..."
 - python3 tensorize_vllm_model.py --model facebook/opt-125m serialize --serialized-directory /tmp/ --suffix v1 && python3 tensorize_vllm_model.py --model facebook/opt-125m deserialize --path-to-tensors /tmp/vllm/facebook/opt-125m/v1/model.tensors
+- echo "Running offline_inference_encoder_decoder.py..."
 - python3 offline_inference_encoder_decoder.py
+- echo "Running offline_profile.py..."
 - python3 offline_profile.py --model facebook/opt-125m

 - label: Prefix Caching Test # 9min
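Since the second hunk repeats the same echo-then-run pattern ten times, one way to keep the pipeline terser would be a small wrapper function. A minimal sketch follows, assuming a bash step; the `run_example` helper is hypothetical and not part of this PR:

```bash
#!/usr/bin/env bash
# Hypothetical helper sketch: print the script name, then run it.
# Mirrors the echo-then-run pattern this PR adds to .buildkite/test-pipeline.yaml.
run_example() {
  echo "Running $1..."   # same log-line format the PR emits
  python3 "$@"           # forward the script name plus any extra flags
}

run_example offline_inference.py
run_example cpu_offload.py
run_example offline_profile.py --model facebook/opt-125m
```

The trade-off is that inlining the `echo` lines, as the PR does, keeps each Buildkite command self-contained and visible in the pipeline definition itself, at the cost of some repetition.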