
Commit

[Doc] Use shell code-blocks and fix section headers (vllm-project#9508)
Signed-off-by: Rafael Vasquez <[email protected]>
rafvasq authored Oct 22, 2024
1 parent ca30c3c commit f7db5f0
Showing 3 changed files with 23 additions and 23 deletions.
8 changes: 4 additions & 4 deletions docs/source/getting_started/debugging.rst
@@ -107,15 +107,15 @@ If GPU/CPU communication cannot be established, you can use the following Python
If you are testing with a single node, adjust ``--nproc-per-node`` to the number of GPUs you want to use:

-.. code-block:: shell
+.. code-block:: console
-NCCL_DEBUG=TRACE torchrun --nproc-per-node=<number-of-GPUs> test.py
+$ NCCL_DEBUG=TRACE torchrun --nproc-per-node=<number-of-GPUs> test.py
If you are testing with multi-nodes, adjust ``--nproc-per-node`` and ``--nnodes`` according to your setup and set ``MASTER_ADDR`` to the correct IP address of the master node, reachable from all nodes. Then, run:

-.. code-block:: shell
+.. code-block:: console
-NCCL_DEBUG=TRACE torchrun --nnodes 2 --nproc-per-node=2 --rdzv_backend=c10d --rdzv_endpoint=$MASTER_ADDR test.py
+$ NCCL_DEBUG=TRACE torchrun --nnodes 2 --nproc-per-node=2 --rdzv_backend=c10d --rdzv_endpoint=$MASTER_ADDR test.py
If the script runs successfully, you should see the message ``sanity check is successful!``.

34 changes: 17 additions & 17 deletions docs/source/getting_started/installation.rst
@@ -7,14 +7,14 @@ Installation
vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.1) binaries.

Requirements
-===========================
+============

* OS: Linux
-* Python: 3.8 -- 3.12
+* Python: 3.8 - 3.12
* GPU: compute capability 7.0 or higher (e.g., V100, T4, RTX20xx, A100, L4, H100, etc.)

Install released versions
-===========================
+=========================

You can install vLLM using pip:

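The install commands themselves sit in the collapsed region below. As a rough sketch, assuming a fresh environment and the released PyPI package (the environment name is arbitrary):

.. code-block:: console

   $ conda create -n myenv python=3.10 -y   # optional: isolate vLLM in its own environment
   $ conda activate myenv
   $ pip install vllm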
@@ -51,9 +51,9 @@ You can install vLLM using pip:
.. _install-the-latest-code:

Install the latest code
-=========================
+=======================

-LLM inference is a fast-evolving field, and the latest code may contain bug fixes, performance improvements, and new features that are not released yet. To allow users to try the latest code without waiting for the next release, vLLM provides wheels for Linux running on x86 platform with cuda 12 for every commit since v0.5.3. You can download and install the latest one with the following command:
+LLM inference is a fast-evolving field, and the latest code may contain bug fixes, performance improvements, and new features that are not released yet. To allow users to try the latest code without waiting for the next release, vLLM provides wheels for Linux running on a x86 platform with CUDA 12 for every commit since ``v0.5.3``. You can download and install it with the following command:

.. code-block:: console
@@ -66,7 +66,7 @@ If you want to access the wheels for previous commits, you can specify the commi
$ export VLLM_COMMIT=33f460b17a54acb3b6cc0b03f4a17876cff5eafd # use full commit hash from the main branch
$ pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/${VLLM_COMMIT}/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
-Note that the wheels are built with Python 3.8 abi (see `PEP 425 <https://peps.python.org/pep-0425/>`_ for more details about abi), so **they are compatible with Python 3.8 and later**. The version string in the wheel file name (``1.0.0.dev``) is just a placeholder to have a unified URL for the wheels. The actual versions of wheels are contained in the wheel metadata.
+Note that the wheels are built with Python 3.8 ABI (see `PEP 425 <https://peps.python.org/pep-0425/>`_ for more details about ABI), so **they are compatible with Python 3.8 and later**. The version string in the wheel file name (``1.0.0.dev``) is just a placeholder to have a unified URL for the wheels. The actual versions of wheels are contained in the wheel metadata.

Another way to access the latest code is to use the docker images:

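The pull command itself is collapsed here. Shape-wise it is just a ``docker pull`` of the per-commit CI image; the image name below is a hypothetical placeholder, and the collapsed lines contain the real one:

.. code-block:: console

   $ export VLLM_COMMIT=33f460b17a54acb3b6cc0b03f4a17876cff5eafd   # full commit hash from the main branch
   $ docker pull <vllm-ci-image-repo>:${VLLM_COMMIT}               # <vllm-ci-image-repo> is a placeholder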
@@ -77,17 +77,17 @@
These docker images are used for CI and testing only, and they are not intended for production use. They will be expired after several days.

-Latest code can contain bugs and may not be stable. Please use it with caution.
+The latest code can contain bugs and may not be stable. Please use it with caution.

.. _build_from_source:

Build from source
-==================
+=================

.. _python-only-build:

Python-only build (without compilation)
-----------------------------------------
+---------------------------------------

If you only need to change Python code, you can simply build vLLM without compilation.

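The commands for the Python-only build are in the collapsed region below. Based on the ``python_only_dev.py`` workflow referenced further down, they presumably amount to something like:

.. code-block:: console

   $ pip install vllm                                     # install a released wheel first
   $ git clone https://github.com/vllm-project/vllm.git   # then check out the source
   $ cd vllm
   $ python python_only_dev.py                            # link the source tree over the installed package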
@@ -122,22 +122,22 @@ Once you have finished editing or want to install another vLLM wheel, you should
$ python python_only_dev.py --quit-dev
-The script with ``--quit-dev`` flag will:
+The ``--quit-dev`` flag will:

* Remove the symbolic link from the current directory to the vLLM package.
* Restore the original vLLM package from the backup.

-If you update the vLLM wheel and want to rebuild from the source and make further edits, you will need to start `all above <#python-only-build>`_ over again.
+If you update the vLLM wheel and rebuild from the source to make further edits, you will need to repeat the `Python-only build <#python-only-build>`_ steps again.

.. note::

There is a possibility that your source code may have a different commit ID compared to the latest vLLM wheel, which could potentially lead to unknown errors.
-It is recommended to use the same commit ID for the source code as the vLLM wheel you have installed. Please refer to `the above section <#install-the-latest-code>`_ for instructions on how to install a specified wheel.
+It is recommended to use the same commit ID for the source code as the vLLM wheel you have installed. Please refer to `the section above <#install-the-latest-code>`_ for instructions on how to install a specified wheel.
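
For example, a sketch that pins both to the same commit, reusing the commit hash and wheel URL pattern shown earlier:

.. code-block:: console

   $ export VLLM_COMMIT=33f460b17a54acb3b6cc0b03f4a17876cff5eafd   # same hash for the wheel and the checkout
   $ git checkout ${VLLM_COMMIT}
   $ pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/${VLLM_COMMIT}/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl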

Full build (with compilation)
----------------------------------
+-----------------------------

-If you want to modify C++ or CUDA code, you'll need to build vLLM from source. This can take several minutes:
+If you want to modify C++ or CUDA code, you'll need to build vLLM from source. This can take several minutes:

.. code-block:: console
@@ -153,7 +153,7 @@ If you want to modify C++ or CUDA code, you'll need to build vLLM from source. T
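The build commands themselves sit in the collapsed region just above. A minimal sketch, assuming a CUDA-capable Linux host:

.. code-block:: console

   $ git clone https://github.com/vllm-project/vllm.git
   $ cd vllm
   $ pip install -e .   # compiles the C++/CUDA kernels, so expect this to take a while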


Use an existing PyTorch installation
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are scenarios where the PyTorch dependency cannot be easily installed via pip, e.g.:

* Building vLLM with PyTorch nightly or a custom PyTorch build.
@@ -171,7 +171,7 @@ To build vLLM using an existing PyTorch installation:
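The commands for this step are collapsed above. A sketch of the general idea, assuming the repository ships a ``use_existing_torch.py`` helper and a build requirements file (both assumptions here), is to drop the pinned torch requirement and build without build isolation:

.. code-block:: console

   $ git clone https://github.com/vllm-project/vllm.git
   $ cd vllm
   $ python use_existing_torch.py             # assumed helper: strips the pinned torch dependency
   $ pip install -r requirements-build.txt    # assumed file name for build-time requirements
   $ pip install -e . --no-build-isolation    # reuse the PyTorch already installed in the environment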
Troubleshooting
-~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~

To avoid your system being overloaded, you can limit the number of compilation jobs
to be run simultaneously, via the environment variable ``MAX_JOBS``. For example:
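The example itself is collapsed below; it is presumably along these lines:

.. code-block:: console

   $ export MAX_JOBS=6   # cap the number of parallel compilation jobs
   $ pip install -e .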
@@ -207,7 +207,7 @@ Here is a sanity check to verify that the CUDA Toolkit is correctly installed:
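The sanity check referenced in the hunk header above is also collapsed; presumably it boils down to verifying that ``nvcc`` is reachable and matches ``CUDA_HOME``:

.. code-block:: console

   $ nvcc --version                    # nvcc should be on your PATH
   $ ${CUDA_HOME}/bin/nvcc --version   # and CUDA_HOME should point at the same toolkit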
Unsupported OS build
-----------------------
+--------------------

vLLM can fully run only on Linux but for development purposes, you can still build it on other systems (for example, macOS), allowing for imports and a more convenient development environment. The binaries will not be compiled and won't work on non-Linux systems.

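The rest of this section is collapsed, but the usual approach (sketched here as an assumption) is to disable the target device before installing so that no binaries are compiled:

.. code-block:: console

   $ export VLLM_TARGET_DEVICE=empty
   $ pip install -e .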
4 changes: 2 additions & 2 deletions docs/source/models/vlm.rst
@@ -247,9 +247,9 @@ A full code example can be found in `examples/openai_api_client_for_multimodal.p

By default, the timeout for fetching images through http url is ``5`` seconds. You can override this by setting the environment variable:

-.. code-block:: shell
+.. code-block:: console
-export VLLM_IMAGE_FETCH_TIMEOUT=<timeout>
+$ export VLLM_IMAGE_FETCH_TIMEOUT=<timeout>
.. note::
There is no need to format the prompt in the API request since it will be handled by the server.
