# WizardCoder-Python

In this directory, you will find examples of how to apply BigDL-LLM INT4 optimizations to WizardCoder-Python models. For illustration purposes, we use `WizardLM/WizardCoder-Python-7B-V1.0` as the reference WizardCoder-Python model.

## 0. Requirements

To run these examples with BigDL-LLM, your machine should meet some recommended requirements; please refer to here for more information.

## Example: Predict Tokens using `generate()` API

In the example generate.py, we show a basic use case for a WizardCoder-Python model to predict the next N tokens using the `generate()` API, with BigDL-LLM INT4 optimizations.
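Under the hood, the script follows the standard BigDL-LLM transformers-style flow: load the model with 4-bit optimization, then generate as usual. The sketch below is a minimal, hypothetical reconstruction of that flow (the actual generate.py in this directory may differ in argument handling and timing code); the prompt string mirrors the format shown in the sample output further down.

```python
# Minimal sketch of the generate.py flow, assuming BigDL-LLM's
# transformers-style API; details may differ from the real script.
import torch
from bigdl.llm.transformers import AutoModelForCausalLM  # BigDL-LLM drop-in
from transformers import AutoTokenizer

model_path = "WizardLM/WizardCoder-Python-7B-V1.0"

# load_in_4bit=True converts the model's linear layers to INT4 at load time
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# WizardCoder-style instruction prompt, as shown in the sample output below
prompt = ("Below is an instruction that describes a task. "
          "Write a response that appropriately completes the request.\n\n"
          "### Instruction:\ndef print_hello_world():\n\n### Response:")

input_ids = tokenizer.encode(prompt, return_tensors="pt")
with torch.inference_mode():
    output = model.generate(input_ids, max_new_tokens=64)  # predict 64 tokens
print(tokenizer.decode(output[0], skip_special_tokens=True))
```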

### 1. Install

We suggest using conda to manage the environment:

```bash
conda create -n llm python=3.9
conda activate llm

pip install --pre --upgrade bigdl-llm[all]  # install the latest bigdl-llm nightly build with the 'all' option
```
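As a quick sanity check (not part of the documented steps), you can verify that the BigDL-LLM transformers API imports cleanly:

```bash
# confirm the bigdl-llm installation is importable
python -c "from bigdl.llm.transformers import AutoModelForCausalLM; print('bigdl-llm OK')"
```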

### 2. Run

```bash
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```

Arguments info (an explicit example invocation follows this list):

- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id for the WizardCoder-Python model to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'WizardLM/WizardCoder-Python-7B-V1.0'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with the integrated prompt format for chat). It defaults to `'def print_hello_world():'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `64`.
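For example, an invocation that spells out all three defaults explicitly looks like this:

```bash
python ./generate.py --repo-id-or-model-path 'WizardLM/WizardCoder-Python-7B-V1.0' \
                     --prompt 'def print_hello_world():' \
                     --n-predict 64
```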

Note: When loading the model in 4-bit, BigDL-LLM converts the linear layers in the model into INT4 format. In theory, an XB model saved in 16-bit requires approximately 2X GB of memory for loading, and ~0.5X GB of memory for further inference. For example, the 7B reference model needs roughly 14 GB to load in 16-bit, but only about 3.5 GB for INT4 inference.

Please select the appropriate size of the WizardCoder-Python model based on the capabilities of your machine.

#### 2.1 Client

On a client Windows machine, it is recommended to run directly with full utilization of all cores:

```bash
python ./generate.py
```

#### 2.2 Server

For optimal performance on a server, it is recommended to set several environment variables (refer to here for more information) and to run the example with all the physical cores of a single socket.

E.g. on Linux,

```bash
# set BigDL-Nano env variables
source bigdl-nano-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py
```
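If you are unsure how many physical cores each socket has, one way to check on Linux (assuming the standard util-linux `lscpu` is available) is:

```bash
# report physical cores per socket and the socket count
lscpu | grep -E 'Core\(s\) per socket|Socket\(s\)'
```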

#### 2.3 Sample Output

````log
Inference time: xxxx s
-------------------- Prompt --------------------
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
def print_hello_world():

### Response:
-------------------- Output --------------------
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
def print_hello_world():

### Response:Here's the code for the `print_hello_world()` function:

```python
def print_hello_world():
    print("Hello, World!")
```

This function simply prints the string "Hello, World!" to the console. You
````