Comparison with Qualcomm AI Hub model #7411
Labels
module: qnn
Related to Qualcomm's QNN delegate
partner: qualcomm
For backend delegation, kernels, demo, etc. from the 3rd-party partner, Qualcomm
🐛 Describe the bug
I ran Llama-v3.2-3B-Chat (precision W4A16) from Qualcomm AI Hub Models on a Snapdragon 8 Gen 3 device, achieving 20 tokens/s.
For comparison, I ran inference with the Llama-3.2-3B model quantized to W4A16 using ExecuTorch with the QNN backend on the same device, and observed only 10 tokens/s.
Could you provide insights into what might be causing this performance difference? Are there issues with how ExecuTorch handles quantized models that could explain this gap?
Any guidance or suggestions would be greatly appreciated!