OpenVINO™ GenAI is a library of the most popular Generative AI model pipelines, optimized execution methods, and samples that run on top of the highly performant OpenVINO Runtime.
The library is designed for PC and laptop execution and is optimized for low resource consumption. It requires no external dependencies to run generative models, as it already includes all the core functionality (e.g. tokenization via openvino-tokenizers).
Follow these blogs to set up your first hands-on experience with the C++ and Python samples.
The OpenVINO™ GenAI library provides lightweight C++ and Python APIs to run the following generative scenarios:
- Text generation using Large Language Models, for example, chat with a local LLaMA model
- Image generation using diffusion models, for example, generation with Stable Diffusion models
- Speech recognition using Whisper family models
- Text generation using large visual (vision-language) models, for instance, image analysis with the LLaVA or MiniCPM model families
The library efficiently supports LoRA adapters for text and image generation scenarios (see the sketch after this list):
- Load multiple adapters per model
- Select active adapters for every generation
- Mix multiple adapters with coefficients via alpha blending
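For example, a minimal Python sketch of the adapter API, loosely following the LoRA text generation sample (the adapter file, model path, prompt, and alpha value are illustrative placeholders):
import openvino_genai as ov_genai

# Placeholder paths: any OpenVINO-converted LLM plus a LoRA adapter in safetensors format
adapter = ov_genai.Adapter("adapter_model.safetensors")
pipe = ov_genai.LLMPipeline(
    "./TinyLlama-1.1B-Chat-v1.0/", "CPU",
    adapters=ov_genai.AdapterConfig(adapter)  # register the adapter(s) at pipeline creation
)

# Blend the adapter in with a chosen alpha coefficient for this generation
print(pipe.generate("My story begins with", max_new_tokens=100,
                    adapters=ov_genai.AdapterConfig(adapter, 0.75)))

# Run the same pipeline without adapters by passing an empty config
print(pipe.generate("My story begins with", max_new_tokens=100,
                    adapters=ov_genai.AdapterConfig()))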
All scenarios run on top of OpenVINO Runtime, which supports inference on CPU, GPU, and NPU. See here for the platform support matrix.
The OpenVINO™ GenAI library provides a transparent way to use state-of-the-art generation optimizations:
- Speculative decoding, which employs two models of different sizes and uses the large model to periodically verify and correct the results of the small model. See here for a more detailed overview; a minimal sketch follows this list
- A KVCache token eviction algorithm that reduces the size of the KVCache by pruning less impactful tokens
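As an illustration, a minimal Python sketch of enabling speculative decoding with the GenAI API (the model directories are placeholders; see the speculative decoding sample for complete code):
import openvino_genai as ov_genai

# Placeholder directories: a large main model and a small draft model sharing the same tokenizer
pipe = ov_genai.LLMPipeline(
    "./main_model_ov/", "CPU",
    draft_model=ov_genai.draft_model("./draft_model_ov/", "CPU")
)

config = ov_genai.GenerationConfig()
config.max_new_tokens = 100
config.num_assistant_tokens = 5  # number of candidate tokens the draft model proposes per step

print(pipe.generate("The Sun is yellow because", config))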
Additionally, the OpenVINO™ GenAI library implements a continuous batching approach for using OpenVINO within LLM serving. The continuous batching library can be used in LLM serving frameworks and supports the following features:
- Prefix caching, which internally caches fragments of previous generation requests together with the corresponding KVCache entries and reuses them when queries repeat. See here for a more detailed overview; a minimal sketch follows below
The continuous batching functionality is used within OpenVINO Model Server (OVMS) to serve LLMs; see here for more details.
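For illustration, a minimal sketch of enabling the continuous batching backend with prefix caching from Python, assuming a SchedulerConfig with the fields shown can be passed to LLMPipeline via the scheduler_config property (the model path is a placeholder; see the continuous batching samples for complete code):
import openvino_genai as ov_genai

# Assumed SchedulerConfig fields: cache_size (in GB) and enable_prefix_caching
scheduler_config = ov_genai.SchedulerConfig()
scheduler_config.cache_size = 2
scheduler_config.enable_prefix_caching = True

# Passing a scheduler config is assumed to switch the pipeline to the continuous batching backend
pipe = ov_genai.LLMPipeline("./TinyLlama-1.1B-Chat-v1.0/", "CPU",
                            scheduler_config=scheduler_config)

# Requests that share a prefix (e.g. the same system prompt) can now reuse cached KVCache blocks
print(pipe.generate("The Sun is yellow because", max_new_tokens=100))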
# Installing OpenVINO GenAI via pip
pip install openvino-genai
# Install optimum-intel to be able to download, convert and optimize LLMs from Hugging Face
# Optimum is not required to run models, only to convert and compress
pip install optimum-intel@git+https://github.com/huggingface/optimum-intel.git
# (Optional) Install (TBD) to be able to download models from Model Scope
For more examples, check out our LLM Inference Guide.
# (Basic) Download the TinyLlama-1.1B-Chat-v1.0 model and convert it to OpenVINO format
optimum-cli export openvino --model "TinyLlama/TinyLlama-1.1B-Chat-v1.0" --weight-format fp16 --trust-remote-code "TinyLlama-1.1B-Chat-v1.0"
# (Recommended) Download the TinyLlama-1.1B-Chat-v1.0 model, convert it to OpenVINO format, and compress the weights to int4
optimum-cli export openvino --model "TinyLlama/TinyLlama-1.1B-Chat-v1.0" --weight-format int4 --trust-remote-code "TinyLlama-1.1B-Chat-v1.0"
import openvino_genai as ov_genai
# Run the model on CPU; GPU and NPU are possible options as well
pipe = ov_genai.LLMPipeline("./TinyLlama-1.1B-Chat-v1.0/", "CPU")
print(pipe.generate("The Sun is yellow because", max_new_tokens=100))
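The same pipeline also supports chat mode and token streaming; a short sketch along the lines of the chat sample:
import openvino_genai as ov_genai

pipe = ov_genai.LLMPipeline("./TinyLlama-1.1B-Chat-v1.0/", "CPU")

# Chat mode keeps the conversation history inside the pipeline;
# the streamer callback prints tokens as soon as they are generated.
pipe.start_chat()
pipe.generate("What is OpenVINO?", max_new_tokens=100,
              streamer=lambda subword: print(subword, end="", flush=True))
pipe.finish_chat()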
The code below requires installation of the C++ compatible package (see here for more details).
#include "openvino/genai/llm_pipeline.hpp"
#include <iostream>
int main(int argc, char* argv[]) {
    std::string models_path = argv[1];
    ov::genai::LLMPipeline pipe(models_path, "CPU");
    std::cout << pipe.generate("The Sun is yellow because", ov::genai::max_new_tokens(100)) << '\n';
}
See here
For more examples, check out our LLM Inference Guide.
# (Basic) Download the MiniCPM-V-2_6 model and convert it to OpenVINO format
optimum-cli export openvino --model openbmb/MiniCPM-V-2_6 --trust-remote-code --weight-format fp16 MiniCPM-V-2_6
# (Recommended) Same as above, but with compression: the language model is compressed to int4, other model components are compressed to int8
optimum-cli export openvino --model openbmb/MiniCPM-V-2_6 --trust-remote-code --weight-format int4 MiniCPM-V-2_6
See Visual Language Chat for a demo application.
Run the following command to download a sample image:
curl -O "https://storage.openvinotoolkit.org/test_data/images/dog.jpg"
import numpy as np
import openvino as ov
import openvino_genai as ov_genai
from PIL import Image
# Choose GPU instead of CPU in the line below to run the model on Intel integrated or discrete GPU
pipe = ov_genai.VLMPipeline("./MiniCPM-V-2_6/", "CPU")
image = Image.open("dog.jpg")
image_data = np.array(image.getdata()).reshape(1, image.size[1], image.size[0], 3).astype(np.uint8)
image_data = ov.Tensor(image_data)
prompt = "Can you describe the image?"
print(pipe.generate(prompt, image=image_data, max_new_tokens=100))
The code below requires installation of the C++ compatible package (see here for more details). See Visual Language Chat for a demo application.
#include "openvino/genai/visual_language/pipeline.hpp"
#include "load_image.hpp"
#include <iostream>
int main(int argc, char* argv[]) {
    std::string models_path = argv[1];
    ov::genai::VLMPipeline pipe(models_path, "CPU");
    ov::Tensor rgb = utils::load_image(argv[2]);
    // Text prompt for the model
    std::string prompt = "Can you describe the image?";
    std::cout << pipe.generate(
        prompt,
        ov::genai::image(rgb),
        ov::genai::max_new_tokens(100)
    ) << '\n';
}
See here
For more examples, check out our LLM Inference Guide.
# Download the dreamlike-anime-1.0 model and convert it to OpenVINO format
optimum-cli export openvino --model dreamlike-art/dreamlike-anime-1.0 --weight-format fp16 dreamlike_anime_1_0_ov/FP16
# You can also use INT8 hybrid quantization to further optimize the model and reduce inference latency
optimum-cli export openvino --model dreamlike-art/dreamlike-anime-1.0 --weight-format int8 --dataset conceptual_captions dreamlike_anime_1_0_ov/INT8
from PIL import Image
import openvino_genai
device = 'CPU' # GPU can be used as well
pipe = openvino_genai.Text2ImagePipeline("./dreamlike_anime_1_0_ov/INT8", device)
image_tensor = pipe.generate(
"cyberpunk cityscape like Tokyo New York with tall buildings at dusk golden hour cinematic lighting",
width=512,
height=512,
num_inference_steps=20
)
image = Image.fromarray(image_tensor.data[0])
image.save("image.bmp")
The code below requires installation of the C++ compatible package (see here for additional setup details, or this blog for full instructions: How to Build OpenVINO™ GenAI APP in C++).
#include "openvino/genai/image_generation/text2image_pipeline.hpp"
#include "imwrite.hpp"
int main(int argc, char* argv[]) {
    const std::string models_path = argv[1], prompt = argv[2];
    const std::string device = "CPU";  // GPU can be used as well

    ov::genai::Text2ImagePipeline pipe(models_path, device);
    ov::Tensor image = pipe.generate(prompt,
        ov::genai::width(512),
        ov::genai::height(512),
        ov::genai::num_inference_steps(20));

    imwrite("image.bmp", image, true);
}
import numpy as np
from PIL import Image
import openvino_genai
import openvino as ov
device = 'CPU' # GPU can be used as well
pipe = openvino_genai.Image2ImagePipeline("./dreamlike_anime_1_0_ov/INT8", device)
image = Image.open("small_city.jpg")
image_data = np.array(image.getdata()).reshape(1, image.size[1], image.size[0], 3).astype(np.uint8)
image_data = ov.Tensor(image_data)
image_tensor = pipe.generate(
"cyberpunk cityscape like Tokyo New York with tall buildings at dusk golden hour cinematic lighting",
image=image_data,
strength=0.8
)
image = Image.fromarray(image_tensor.data[0])
image.save("image.bmp")
The code below requires installation of the C++ compatible package (see here for additional setup details, or this blog for full instructions: How to Build OpenVINO™ GenAI APP in C++).
#include "openvino/genai/image_generation/image2image_pipeline.hpp"
#include "load_image.hpp"
#include "imwrite.hpp"
int main(int argc, char* argv[]) {
    const std::string models_path = argv[1], prompt = argv[2], image_path = argv[3];
    const std::string device = "CPU";  // GPU can be used as well

    ov::Tensor image = utils::load_image(image_path);

    ov::genai::Image2ImagePipeline pipe(models_path, device);
    ov::Tensor generated_image = pipe.generate(prompt, image, ov::genai::strength(0.8f));

    imwrite("image.bmp", generated_image, true);
}
import argparse
import numpy as np
from PIL import Image
import openvino_genai
import openvino as ov

def read_image(path: str) -> ov.Tensor:
    pic = Image.open(path).convert("RGB")
    image_data = np.array(pic.getdata()).reshape(1, pic.size[1], pic.size[0], 3).astype(np.uint8)
    return ov.Tensor(image_data)

parser = argparse.ArgumentParser()
parser.add_argument("model_dir")
args = parser.parse_args()

device = 'CPU'  # GPU can be used as well
pipe = openvino_genai.InpaintingPipeline(args.model_dir, device)
image = read_image("image.jpg")
mask_image = read_image("mask.jpg")
image_tensor = pipe.generate(
"Face of a yellow cat, high resolution, sitting on a park bench",
image=image,
mask_image=mask_image
)
image = Image.fromarray(image_tensor.data[0])
image.save("image.bmp")
The code below requires installation of the C++ compatible package (see here for additional setup details, or this blog for full instructions: How to Build OpenVINO™ GenAI APP in C++).
#include "openvino/genai/image_generation/inpainting_pipeline.hpp"
#include "load_image.hpp"
#include "imwrite.hpp"
int main(int argc, char* argv[]) {
    const std::string models_path = argv[1], prompt = argv[2];
    const std::string device = "CPU";  // GPU can be used as well

    ov::Tensor image = utils::load_image(argv[3]);
    ov::Tensor mask_image = utils::load_image(argv[4]);

    ov::genai::InpaintingPipeline pipe(models_path, device);
    ov::Tensor generated_image = pipe.generate(prompt, image, mask_image);

    imwrite("image.bmp", generated_image, true);
}
See here
For more examples, check out our LLM Inference Guide.
NOTE: The Whisper pipeline requires preprocessing of the audio input (to adjust the sampling rate and normalize it).
# Download the whisper-base model and convert it to OpenVINO format
optimum-cli export openvino --trust-remote-code --model openai/whisper-base whisper-base
NOTE: This sample is a simplified version of the full sample that is available here
import openvino_genai
import librosa
def read_wav(filepath):
    raw_speech, samplerate = librosa.load(filepath, sr=16000)
    return raw_speech.tolist()
device = "CPU" # GPU can be used as well
pipe = openvino_genai.WhisperPipeline("whisper-base", device)
raw_speech = read_wav("sample.wav")
print(pipe.generate(raw_speech))
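The pipeline can also return per-segment timestamps; a short sketch, assuming the return_timestamps option and chunk fields used in the full sample:
# Continuing from the pipeline above: also request segment timestamps
result = pipe.generate(raw_speech, return_timestamps=True)
print(result)
for chunk in result.chunks:
    print(f"timestamps: [{chunk.start_ts:.2f}, {chunk.end_ts:.2f}] text: {chunk.text}")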
NOTE: This sample is a simplified version of the full sample that is available here
#include <filesystem>
#include <iostream>

#include "audio_utils.hpp"
#include "openvino/genai/whisper_pipeline.hpp"

int main(int argc, char* argv[]) {
    std::filesystem::path models_path = argv[1];
    std::string wav_file_path = argv[2];
    std::string device = "CPU";  // GPU can be used as well

    ov::genai::WhisperPipeline pipeline(models_path, device);
    ov::genai::RawSpeechInput raw_speech = utils::audio::read_wav(wav_file_path);

    std::cout << pipeline.generate(raw_speech, ov::genai::max_new_tokens(100)) << '\n';
}
See here
- List of supported models (NOTE: models can work, but have not been tried yet)
- OpenVINO LLM Inference Guide
- Optimum-intel and OpenVINO
The OpenVINO™ GenAI repository is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.