In this directory, you will find examples of how you can apply BigDL-LLM INT4 optimizations on Aquila2 models. For illustration purposes, we use BAAI/AquilaChat2-7B as a reference Aquila2 model.
Note: If you want to download the Hugging Face Transformers model, please refer to here.
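If you prefer to fetch the checkpoint ahead of time, one option is huggingface_hub's snapshot_download — a minimal sketch only, as an assumption; the linked instructions may describe a different method:

```python
# Pre-download the AquilaChat2-7B checkpoint to the local Hugging Face cache.
# (Illustrative sketch; not part of the example scripts in this directory.)
from huggingface_hub import snapshot_download

local_path = snapshot_download(repo_id="BAAI/AquilaChat2-7B")
print(local_path)  # this folder can later be passed to --repo-id-or-model-path
```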
BigDL-LLM optimizes the Transformers model in INT4 precision at runtime, and thus no explicit conversion is needed.
To run these examples with BigDL-LLM, we have some recommended requirements for your machine; please refer to here for more information.
In the example generate.py, we show a basic use case for an Aquila2 model to predict the next N tokens using the generate() API, with BigDL-LLM INT4 optimizations.
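The core of generate.py looks roughly like the sketch below (a simplified illustration, not the exact script): the model is loaded with `load_in_4bit=True`, which performs the runtime INT4 conversion mentioned above, and then `generate()` is called. The chat prompt format is taken from the Sample Output section below.

```python
import torch
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "BAAI/AquilaChat2-7B"

# load_in_4bit=True converts the model's linear layers to INT4 at load time,
# so no separate conversion step is needed.
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# AquilaChat2 chat prompt format, as shown in the Sample Output section
prompt = "<|startofpiece|>AI是什么?<|endofpiece|>"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=False))
```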
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to here.
After installing conda, create a Python environment for BigDL-LLM:
```bash
conda create -n llm python=3.9 # recommend to use Python 3.9
conda activate llm

pip install --pre --upgrade bigdl-llm[all] # install the latest bigdl-llm nightly build with 'all' option
```
After setting up the Python environment, you can run the example by following the steps below.
Note: When loading the model in 4-bit, BigDL-LLM converts linear layers in the model into INT4 format. In theory, a model with X billion parameters saved in 16-bit requires approximately 2X GB of memory to load, and ~0.5X GB of memory for further inference. For example, the 7B AquilaChat2 model needs roughly 14 GB to load its 16-bit checkpoint and about 3.5 GB during INT4 inference.
Please select the appropriate size of the Aquila2 model based on the capabilities of your machine.
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```bash
python ./generate.py --prompt 'AI是什么?'
```
More information about arguments can be found in the Arguments Info section. The expected output can be found in the Sample Output section.
For optimal performance on a server, it is recommended to set several environment variables (refer to here for more information) and run the example with all the physical cores of a single socket.
E.g. on Linux,
```bash
# set BigDL-Nano env variables
source bigdl-nano-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py --prompt 'AI是什么?'
```
More information about arguments can be found in the Arguments Info section. The expected output can be found in the Sample Output section.
#### Arguments Info

In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path`: str, argument defining the Hugging Face repo id for the Aquila2 model to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'BAAI/AquilaChat2-7B'`.
- `--prompt`: str, argument defining the prompt to be inferred (with integrated prompt format for chat). It defaults to `'AI是什么?'`.
- `--n-predict`: int, argument defining the max number of tokens to predict. It defaults to `32`.
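For example, to generate up to 64 tokens with an explicit repo id and prompt, the documented arguments can be combined like this:

```bash
python ./generate.py --repo-id-or-model-path BAAI/AquilaChat2-7B --prompt 'AI是什么?' --n-predict 64
```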
#### Sample Output

```log
Inference time: xxxx s
-------------------- Prompt --------------------
<|startofpiece|>AI是什么?<|endofpiece|>
-------------------- Output --------------------
<|startofpiece|>AI是什么?<|endofpiece|>人工智能(Artificial Intelligence,简称AI)是计算机科学中一个极为重要的研究领域,旨在让计算机模仿人类的智能,包括学习、推理、识别物体
```