baby-llama : rename llama_layer to baby_llama_layer
This commit renames the struct llama_layer to baby_llama_layer in the
baby-llama example. This is to avoid a symbol conflict with the
llama_layer struct in llama.cpp.

I ran into this while investigating an issue related to the baby-llama
example. Stepping through the code, I noticed the following in
`init_model`:
```console
$ gdb --args ./llama-baby-llama
Reading symbols from ./llama-baby-llama...
(gdb) br init_model
Breakpoint 1 at 0x251767: file examples/baby-llama/baby-llama.cpp,
line 190.
(gdb) r
...
(gdb)
204	    model->layers.resize(n_layer);
```
If we inspect the size of `model->layers` we see that it is 0:
```console
(gdb) p model->layers.size()
$1 = 0
```
And also `n_layer` is 1:
```console
(gdb) p n_layer
$3 = 1
```
And we can inspect the type of `model->layers`:
```console
(gdb) ptype model->layers
type = std::vector<llama_layer>
```

Now, if we step over the resize call, we see something interesting:
```console
(gdb) p model->layers.size()
$2 = 12
```
I also added two print statements to show `n_layer` and
`model->layers.size()`:
```console
n_layer: 1
layers.size(): 2049638230412172414
```

I later realized that there is a symbol conflict: llama.cpp also defines
a `llama_layer`, and its object file (`src/llama.o`) is linked into the
`llama-baby-llama` binary:
```console
/usr/bin/ccache c++ -std=c++11 -fPIC -O0 -g -Wall -Wextra -Wpedantic
-Wcast-qual -Wno-unused-function -Wmissing-declarations
-Wmissing-noreturn -pthread -fopenmp  -march=native -mtune=native
-Wno-array-bounds -Wno-format-truncation -Wextra-semi -Iggml/include
-Iggml/src -Iinclude -Isrc -Icommon -D_XOPEN_SOURCE=600 -D_GNU_SOURCE
-D_GLIBCXX_ASSERTIONS -DGGML_USE_OPENMP -DGGML_USE_LLAMAFILE
ggml/src/llamafile/sgemm.o ggml/src/ggml.o ggml/src/ggml-alloc.o
ggml/src/ggml-backend.o ggml/src/ggml-quants.o ggml/src/ggml-aarch64.o
src/llama.o src/llama-vocab.o src/llama-grammar.o src/llama-sampling.o
src/unicode.o src/unicode-data.o common/common.o common/arg.o
common/log.o common/console.o common/ngram-cache.o common/sampling.o
common/train.o common/build-info.o common/json-schema-to-grammar.o
examples/baby-llama/baby-llama.o -o llama-baby-llama -g
```
This could be worked around by renaming the `llama_layer` in
baby-llama.cpp to `baby_llama_layer`, or by wrapping the declarations in
baby-llama.cpp in a namespace. I initially considered not linking llama.o
into the llama-baby-llama binary, but the baby-llama example uses
`train.h`, which depends on llama.cpp, so I opted to rename the struct.

After the rename, the resize call works as expected:
```console
(gdb) p model->layers.size()
$1 = 0
(gdb) ptype model->layers
type = std::vector<baby_llama_layer>
(gdb) p model->layers.size()
$2 = 1
```
danbev committed Sep 20, 2024
1 parent 6026da5 commit 0940460
Showing 1 changed file with 2 additions and 2 deletions.
```diff
--- a/examples/baby-llama/baby-llama.cpp
+++ b/examples/baby-llama/baby-llama.cpp
@@ -105,7 +105,7 @@ struct llama_hparams_lora {
     }
 };
 
-struct llama_layer {
+struct baby_llama_layer {
     // normalization
     struct ggml_tensor * attention_norm;
 
@@ -169,7 +169,7 @@ struct llama_model {
     struct ggml_tensor * norm;
     struct ggml_tensor * output;
 
-    std::vector<llama_layer> layers;
+    std::vector<baby_llama_layer> layers;
 };
 
 struct llama_model_lora {
```
