llama : first attempt to implement vision API (WIP) #9687
base: master
Conversation
gguf-py/gguf/constants.py
@@ -178,6 +178,28 @@ class Adapter:
    TYPE       = "adapter.type"
    LORA_ALPHA = "adapter.lora.alpha"

class Vision:
    # only support vision.type = "clip" for now
Probably better to be very specific here that you are supporting ViT. HuggingFace has also moved away from calling everything by generic terms. Also, I don't think the purpose is to support actual CLIP inference.
Yeah, agree. I think for now it's safer to call this clip-vit to reflect that the base implementation is openai/clip-vit-*. Atm it's quite complicated for me to drop the clip_ prefix in all functions. But hey, at least the file name is now llama-vision.{cpp|h} instead of llama-clip, which should reflect that we can support ViT and also other things to come in the future.
// 1) decode a single token (BOS) at position 0 - without this, the image batch below misbehaves
n_cur = 0;
llama_batch_clear(batch);
llama_batch_add(batch, 1, 0, { 0 }, false);
llama_decode(ctx, batch);

// 2) decode the 576 image embeddings at positions [t1.size(), t1.size() + 576)
n_cur = t1.size();
llama_batch_clear(batch);
llama_batch batch0 = {int32_t(576), nullptr, _test_get_img_embd(ctx), nullptr, nullptr, nullptr, nullptr, n_cur, 1, 0, };
llama_decode(ctx, batch0);

// 3) decode the first text part t1 at positions [0, t1.size())
n_cur = 0;
llama_batch_clear(batch);
for (auto t : t1) { llama_batch_add(batch, t, n_cur, { 0 }, false); n_cur++; }
llama_decode(ctx, batch);

// 4) decode the second text part t2 after the image, requesting logits for the last token
n_cur = t1.size() + 576;
llama_batch_clear(batch);
printf("pos %d\n", n_cur);
for (auto t : t2) { llama_batch_add(batch, t, n_cur, { 0 }, false); n_cur++; }
batch.logits[batch.n_tokens - 1] = true;
@ggerganov I observed something weird here that I can't understand, so I would like to ask if you know why.
What I'm trying to do here is to test if I can call llama_decode on different parts of the sentence out-of-order. That means instead of evaluating text,image,text, I try evaluating image,text,text. In theory, this should work because the position is set correctly for all input tokens/embeddings; just the placement in the KV cache is different.
But in reality, llama_decode on a batch containing embd doesn't work correctly if the KV cache is empty. In this example, I need to evaluate at least one token first for it to work.
I'd be happy to provide more info to help pinpoint the problem. Thank you.
But in reality, llama_decode on a batch containing embd doesn't work correctly if the KV cache is empty.
What error do you get?
There is no error, but the generated text is quite irrelevant.
- Expected output: The image features a beautiful view of the Eiffel Tower
- Without the first single-token llama_decode: The image features a cityscape with a tall building prominently visible in the center. The building appears to be lit up, possibly at night or during
--> clearly wrong because I provided a daytime image
Please note that if the code blocks follow the order text,image,text, then it works correctly (generates the expected output).
I have a feeling that the KV for the embedding is truncated somehow, but I can't find a way to verify this. I tried playing with llama_kv_cache_view but the output indicates all tokens are in their correct place.
show code
// inspect the first cells, the cells around the end of the image embeddings,
// and the cells around the current position
llama_kv_cache_view v = llama_kv_cache_view_init(ctx, 1);
llama_kv_cache_view_update(ctx, &v);
printf("n_cells %d\n", v.n_cells);
for (int i = 0; i < 16; i++) {
    printf("cell %d --> pos %d (seq %d)\n", i, v.cells[i].pos, v.cells_sequences[i]);
}
for (int i = 570; i < 570+16; i++) {
    printf("cell %d --> pos %d (seq %d)\n", i, v.cells[i].pos, v.cells_sequences[i]);
}
for (int i = n_cur-8; i < n_cur+8; i++) {
    printf("cell %d --> pos %d (seq %d)\n", i, v.cells[i].pos, v.cells_sequences[i]);
}
(I'm uploading the model so that you can test it. Will drop a link here when it's available)
Link to gguf: https://huggingface.co/ngxson/test-llava-will-be-deleted-soon/blob/main/model.gguf
Command: ./llama-simple -m ../models/llava-1.5-7b-hf/model.gguf -c 1024 -v
Edit: you also need this image placed under ../models/eiffel-tower-3349075_1280.jpg:
https://cdn.sdnews.com/wp-content/uploads/20230705142505/eiffel-tower-3349075_1280.jpg
I don't think that text,image,text is equivalent to image,text,text regardless of whether the positions are correct. But before we discuss that, it seems to me that the positions are off by one due to the BOS token:
n_cur = 0;
llama_batch_clear(batch);
llama_batch_add(batch, 1, 0, { 0 }, false);
llama_decode(ctx, batch);
n_cur = t1.size();
llama_batch_clear(batch);
llama_batch batch0 = {int32_t(576), nullptr, _test_get_img_embd(ctx), nullptr, nullptr, nullptr, nullptr, 1 + n_cur, 1, 0, };
llama_decode(ctx, batch0);
n_cur = 0;
llama_batch_clear(batch);
for (auto t : t1) { llama_batch_add(batch, t, 1 + n_cur, { 0 }, false); n_cur++; }
llama_decode(ctx, batch);
n_cur = t1.size() + 576;
llama_batch_clear(batch);
printf("pos %d\n", n_cur);
for (auto t : t2) { llama_batch_add(batch, t, 1 + n_cur, { 0 }, false); n_cur++; }
batch.logits[batch.n_tokens - 1] = true;
But before we discuss that, it seems to me that the positions are off by one due to the BOS token

Just to clarify a bit: the single-token llama_decode is there just for testing. Indeed, adding it effectively makes the sequence have 2 BOS tokens at the same position 0. But for some reason, the output is the same as with a single BOS: bos,text,image,text (which is the expected output).
So it's just there to highlight the difference between llama_decode(img_embd) on an empty KV cache vs. a KV cache that already holds a single token.
I see, so this is OK in that case.

I don't think that text,image,text is equivalent to image,text,text

For clarity, let's assume the 2 cases of submitting 3 batches in different order:
1. text0, image, text1
2. image, text0, text1
In the first case, when the image batch is evaluated it will "see" the text0 tokens, because they are stored in the KV cache (due to computing the text0 batch before that), and so the image tokens will attend to the text0 tokens when computing the attention.
If instead we first submit the image batch on an empty cache, then the image tokens will not attend to text0. Also, when submitting the second text0 batch, it will not attend to the image tokens because of the causal mask - this is correct.
In both cases the text1 batch is evaluated correctly because it will "see" all the data that it is supposed to. But I think the problem is that in the second case, the image batch is computed incorrectly.
One thing to be careful about is what mask we should use when computing the image batch. There was some relevant discussion before in #7553. But in short, it's possible that the image tokens require a non-causal mask between themselves (i.e. each image token "sees" all other image tokens). I'm not sure if this is the case, but it is something to consider.
Edit: just now I read #7553 (comment) by @abetlen and it appears that for the PaliGemma models, the entire context (both image and text) has to be evaluated with non-causal attention. This makes it necessary to submit everything inside a single ubatch, otherwise tokens from a previous ubatch cannot attend to tokens in a future ubatch.
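For comparison, a rough sketch of submitting the batches in the order text0, image, text1, reusing the helpers and variables from the test snippet above (t1, t2, _test_get_img_embd, 576 image embeddings); in this order the image batch can attend to text0 because text0 is already in the KV cache:

// 1) text0 at positions [0, t1.size())
n_cur = 0;
llama_batch_clear(batch);
for (auto t : t1) { llama_batch_add(batch, t, n_cur, { 0 }, false); n_cur++; }
llama_decode(ctx, batch);

// 2) the 576 image embeddings at positions [t1.size(), t1.size() + 576) - same initializer as batch0 above
llama_batch batch_img = {int32_t(576), nullptr, _test_get_img_embd(ctx), nullptr, nullptr, nullptr, nullptr, n_cur, 1, 0, };
llama_decode(ctx, batch_img);

// 3) text1 after the image, requesting logits for the last token
n_cur = t1.size() + 576;
llama_batch_clear(batch);
for (auto t : t2) { llama_batch_add(batch, t, n_cur, { 0 }, false); n_cur++; }
batch.logits[batch.n_tokens - 1] = true;
llama_decode(ctx, batch);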
Thanks for your response. FYI, I tried printing the inp_KQ_mask and it turns out that's the problem. In the second case image, text0, text1, somehow text1 can't attend to tokens in text0.
What's weird is that if I instead have 4 batches, bos,image,text0,text1, then it works correctly (bos is a batch containing one single BOS token).
I also set n_batch and n_ubatch to 1024 to make sure the image fits into one batch, but that doesn't solve my issue.

One thing to be careful about is what mask we should use when computing the image batch.

In the case of the llava model, let's imagine that the image batch is just a batch containing 576 tokens, so let's call it textI instead of image. My assumption is that text0,textI,text1 should then be equivalent to textI,text0,text1 (?). When generating the next token, we simply add all tokens that we already evaluated to the mask, so it should be an auto-regressive mask.
Perhaps PaliGemma is different because it requires non-causal attention.
I also tried setting llama_set_causal_attn to false before evaluating the image batch, but that doesn't help either.
I don't think that text,image,text is equivalent to image,text,text regardless of whether the positions are correct.

Yeah right, I've just realized that my mathematical model is wrong. Just maintaining the correct position embedding is not enough.
Because attention (with its softmax) is applied in every layer, from the second layer onward each cell in the KV cache already contains information about the tokens before it. Therefore, decoding image first is not possible, since its cells would not contain any information about text0. They must at least be in the same batch for this to work.
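A rough sketch of the dependency (standard causal self-attention, omitting heads, scaling and output projections): the K/V cached for position j at layer \ell is built from the previous layer's output at j, which already mixes in every cached position i <= j:

$$
k_j^{(\ell)} = W_K^{(\ell)} h_j^{(\ell-1)}, \qquad
h_j^{(\ell-1)} = f\Big( \sum_{i \le j} a_{ji}^{(\ell-1)}\, v_i^{(\ell-1)} \Big), \qquad
a_{ji}^{(\ell-1)} = \operatorname{softmax}_{i \le j}\big( q_j^{(\ell-1)} \cdot k_i^{(\ell-1)} \big)
$$

So if text0 is not in the cache when the image batch is computed, its contribution is missing from the image tokens' cached K/V in every layer above the first.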
Thank you again for the explanation! (And sorry for making noise)
For now, I think a hacky way would be to internally populate image tokens into the batch in llama_decode. I'm also thinking about the possibility of extending llm_build_inp_embd to accept both token and embedding input at the same time. What do you think? (also cc @abetlen since that's what your PR #7553 aims to resolve)
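Purely as a hypothetical illustration of the "both token and embedding input at the same time" idea - this is not an existing llama.cpp API and all names below are made up:

#include "llama.h"

// hypothetical sketch only: each batch entry carries either a token id or a pre-computed embedding
struct llama_batch_entry {
    llama_token   token;   // used when embd == nullptr
    const float * embd;    // if non-null, points to n_embd floats replacing the token embedding
    llama_pos     pos;
    llama_seq_id  seq_id;
};
// llm_build_inp_embd would then look up the token embedding when embd == nullptr and copy
// the provided vector otherwise, so text and image inputs could share a single ubatch.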
    image_output = std::move(padded_image);
}

static void normalize_image_u8_to_f32(const clip_image_u8 src, clip_image_f32 dst, const std::array<float, 3> & mean, const std::array<float, 3> & std) {
This might be a root cause for some bugs ;)
Yes, probably missing clip_image_f32 &; thanks for pointing this out.
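For reference, the fix would presumably just pass both images by reference, something like:

// presumed fix: take src by const reference and dst by reference instead of by value
static void normalize_image_u8_to_f32(const clip_image_u8 & src, clip_image_f32 & dst,
        const std::array<float, 3> & mean, const std::array<float, 3> & std);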
Forgive me for (probably) going off-topic, but would it be possible to use a CLIP model with llama.cpp to compute text embeddings for the "classic" purpose of matching text with images?

I'm not 100% sure, but in theory, everything is possible with the correct model weights and the correct compute graph.

I took a shot at rebasing this branch, as there have been quite a few changes upstream that affect this PR and I wanted to try this API out. I wanted to share the rebased repo in case it would be useful/save time. I moved the example to a new example named simple-vision as the

@ngxson are you still planning on working on this? Having this as a first step towards llama-3.2-vision would be very useful.

I'm not actively working on it, but will continue soon. Ref discussion: #10381
(Hopefully) fix #8010
Important
This is still WIP, only the simple example is working. Collaborators are encouraged to discuss and give feedback on this.
Motivation
Currently, the vision capability is provided by the llava example, which is a CLIP implementation in ggml. While it's a good start, the API needs some refactoring to be cleaner and more future-proof.
Inspired by the current rework of the sampling API, I propose moving the CLIP implementation into the main libllama, providing users a stable, easy-to-use API like what we did for llama_encode.
The goals of this refactoring are:
- llama-cli accepts image input

The no-goals:
- llama-server support. It will be another PR

Plan
- libllama
- convert_hf_to_gguf.py to support llava --> not an ideal implementation, but kinda works
- llama_model and llama_context to hold vision-related data
- llama-vision.{cpp|h}
- llama-cli
Implementation
Naming scheme
For metadata, we will add the vision.* namespace:
- vision.type: the type of vision encoder. We only support "clip" for now (not sure if there are any other implementations out there)
- vision.*: other params for vision encoding, for example patch size, image size, etc.
- vision.clip.*: CLIP-related params

Example:
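For illustration only - vision.type follows the spec above, but the other key names and the values (taken from the CLIP-ViT-L/14-336 encoder used by LLaVA 1.5) are assumed here, not final:

vision.type = "clip"
vision.image_size = 336
vision.patch_size = 14
vision.clip.embedding_length = 1024
vision.clip.block_count = 24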
For the tensor naming scheme, we will prefix all vision-related tensors with v.enc.*. For example:
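Illustrative tensor names under this scheme (the exact names are assumed here, mirroring a standard ViT encoder):

v.enc.embd.patch      // patch embedding
v.enc.embd.pos        // position embedding
v.enc.blk.0.attn_q    // attention query of the first encoder block
v.enc.blk.0.attn_k
v.enc.blk.0.attn_v
v.enc.blk.0.ffn_up
v.enc.blk.0.ffn_down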
API

libllama will be responsible for: