perplexity : fix kv cache handling for hellaswag (ggerganov#4981)
ggml-ci
ggerganov authored Jan 16, 2024
1 parent c37b347 commit 959ef0c
Showing 1 changed file with 1 addition and 0 deletions: examples/perplexity/perplexity.cpp
@@ -428,6 +428,7 @@ static std::vector<float> hellaswag_evaluate_tokens(
     for (size_t i_chunk = 0; i_chunk < n_chunk; ++i_chunk) {
         size_t n_tokens = tokens.size() - i_chunk * n_batch;
         n_tokens = std::min(n_tokens, size_t(n_batch));
+        llama_kv_cache_seq_rm(ctx, 0, n_past, -1);
         if (llama_decode(ctx, llama_batch_get_one(tokens.data() + i_chunk * n_batch, n_tokens, n_past, 0))) {
             fprintf(stderr, "%s : failed to eval\n", __func__);
             return {};
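The one-line fix inserts a llama_kv_cache_seq_rm call before each llama_decode: it removes the KV-cache entries for sequence 0 at positions n_past and beyond, so tokens cached while a previous HellaSwag ending was evaluated are not reused when the next ending is decoded starting from the same n_past. The following is a minimal sketch of the surrounding loop, assuming the llama.cpp C API as of this commit (llama_kv_cache_seq_rm, llama_batch_get_one, llama_decode, llama_get_logits); the function name and the abridged body are illustrative, not a verbatim copy of perplexity.cpp.

// Sketch of the per-chunk decode loop with the added cache clearing.
// Assumes the llama.cpp C API of early 2024; names mirror the surrounding
// hellaswag_evaluate_tokens(), but the body is abridged for illustration.
#include <algorithm>
#include <cstdio>
#include <vector>

#include "llama.h"

static std::vector<float> hellaswag_evaluate_tokens_sketch(
        llama_context * ctx, std::vector<llama_token> & tokens,
        int n_past, int n_batch, int n_vocab) {
    std::vector<float> result;
    result.reserve(tokens.size() * n_vocab);

    // number of n_batch-sized chunks needed to cover all tokens
    const size_t n_chunk = (tokens.size() + n_batch - 1) / n_batch;

    for (size_t i_chunk = 0; i_chunk < n_chunk; ++i_chunk) {
        size_t n_tokens = tokens.size() - i_chunk * n_batch;
        n_tokens = std::min(n_tokens, size_t(n_batch));

        // The added call: remove KV-cache entries for sequence 0 at positions
        // [n_past, inf), so nothing cached while evaluating a previous ending
        // is reused when this ending is decoded at the same positions.
        llama_kv_cache_seq_rm(ctx, 0, n_past, -1);

        if (llama_decode(ctx, llama_batch_get_one(
                tokens.data() + i_chunk * n_batch, n_tokens, n_past, 0))) {
            fprintf(stderr, "%s : failed to eval\n", __func__);
            return {};
        }

        // Collect the logits for this chunk and advance the position counter.
        const float * logits = llama_get_logits(ctx);
        result.insert(result.end(), logits, logits + n_tokens * n_vocab);

        n_past += n_tokens;
    }

    return result;
}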
