What happened?
Running llamafile with --gpu amd -ngl 9999 fails with the following error in the hipcc call:
clang: error: invalid target ID 'native'; format is a processor name followed by an optional colon-delimited list of features followed by an enable/disable sign (e.g., 'gfx908:sramecc+:xnack-')
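The 'native' target ID appears to come from llamafile falling back to --offload-arch=native after GPU architecture detection fails (hipInfo is not found, see the log below). As a quick check that the GPU and ROCm stack are visible at all, rocminfo can print the gfx target the card reports; this is only a minimal diagnostic, assuming a standard ROCm install with rocminfo on $PATH:

$ rocminfo | grep -i gfx

On a working setup this prints a processor name such as gfx1030 (example value only), which is the kind of target ID hipcc expects in place of 'native'.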
Version
llamafile v0.8.11
What operating system are you seeing the problem on?
Linux
Relevant log output
Full command invocation log:
$ ./llava-v1.5-7b-q4.llamafile --gpu amd -ngl 9999 --nobrowser
import_cuda_impl: initializing gpu module...
extracting /zip/llama.cpp/ggml.h to /home/username/.llamafile/v/0.8.11/ggml.h
extracting /zip/llamafile/compcap.cu to /home/username/.llamafile/v/0.8.11/compcap.cu
extracting /zip/llamafile/llamafile.h to /home/username/.llamafile/v/0.8.11/llamafile.h
extracting /zip/llamafile/tinyblas.h to /home/username/.llamafile/v/0.8.11/tinyblas.h
extracting /zip/llamafile/tinyblas.cu to /home/username/.llamafile/v/0.8.11/tinyblas.cu
extracting /zip/llama.cpp/ggml-impl.h to /home/username/.llamafile/v/0.8.11/ggml-impl.h
extracting /zip/llama.cpp/ggml-cuda.h to /home/username/.llamafile/v/0.8.11/ggml-cuda.h
extracting /zip/llama.cpp/ggml-alloc.h to /home/username/.llamafile/v/0.8.11/ggml-alloc.h
extracting /zip/llama.cpp/ggml-common.h to /home/username/.llamafile/v/0.8.11/ggml-common.h
extracting /zip/llama.cpp/ggml-backend.h to /home/username/.llamafile/v/0.8.11/ggml-backend.h
extracting /zip/llama.cpp/ggml-backend-impl.h to /home/username/.llamafile/v/0.8.11/ggml-backend-impl.h
extracting /zip/llama.cpp/ggml-cuda.cu to /home/username/.llamafile/v/0.8.11/ggml-cuda.cu
get_rocm_bin_path: note: hipInfo not found on $PATH
get_rocm_bin_path: note: $HIP_PATH/bin/hipInfo does not exist
get_rocm_bin_path: note: /opt/rocm/bin/hipInfo does not exist
llamafile_log_command: /usr/bin/rocminfo
get_amd_offload_arch_flag: error: hipInfo returned non-zero exit status
llamafile_log_command: hipcc -O3 -fPIC -shared --offload-arch=native -march=native -mtune=native -DGGML_USE_HIPBLAS -Wno-return-type -Wno-unused-result -Wno-unused-function -Wno-expansion-to-defined -DIGNORE0 -DGGML_BUILD=1 -DGGML_SHARED=1 -DGGML_MULTIPLATFORM -DGGML_CUDA_DMMV_X=32 -DK_QUANTS_PER_ITERATION=2 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -DGGML_CUDA_MMV_Y=1 -DGGML_USE_CUBLAS -DGGML_MINIMIZE_CODE_SIZE -o /home/username/.llamafile/v/0.8.11/ggml-rocm.so.s5vsqw /home/username/.llamafile/v/0.8.11/ggml-cuda.cu -lhipblas -lrocblas
clang: error: invalid target ID 'native'; format is a processor name followed by an optional colon-delimited list of features followed by an enable/disable sign (e.g., 'gfx908:sramecc+:xnack-')
Compile: warning: hipcc returned nonzero exit status
extracting /zip/ggml-rocm.so to /home/username/.llamafile/v/0.8.11/ggml-rocm.so
link_cuda_dso: note: dynamically linking /home/username/.llamafile/v/0.8.11/ggml-rocm.so
link_cuda_dso: warning: libamdhip64.so.6: cannot open shared object file: No such file or directory: failed to load library
fatal error: support for --gpu amd was explicitly requested, but it wasn't available
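Two things fail in the log above: the hipcc compile (because of the 'native' target ID) and the fallback to the prebuilt ggml-rocm.so, which cannot load libamdhip64.so.6. A minimal check for the second failure, assuming a typical ROCm layout under /opt/rocm, is to ask the dynamic linker whether the HIP runtime library is visible:

$ ldconfig -p | grep amdhip64
$ ls /opt/rocm/lib/libamdhip64.so*

If neither command finds the library, the HIP runtime is either not installed or not on the loader path, which would explain the final "support for --gpu amd was explicitly requested, but it wasn't available" error.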