Please update the NPU driver to the latest version to fully utilize the library features.
PyPI package: https://pypi.org/project/intel-npu-acceleration-library/1.4.0/
## What's Changed
- Add doc for implementing new operations by @SarahByrneIntel in #79
- Adding power and log softmax operations by @SarahByrneIntel in #80
- Adding support for operations on tensors by @SarahByrneIntel in #81
- Add c++ examples by @alessandropalla in #86
- NPU compilation tutorial by @alessandropalla in #87
- Fix ops and r_ops in case of float and int by @alessandropalla in #88
- Adding support and testing for chunk tensor operation by @SarahByrneIntel in #90
- Make matmul op (@) torch compliant by @alessandropalla in #91
- Update scikit-learn requirement from <=1.5.0 to <=1.5.1 by @dependabot in #93
- Support for Phi-3 MLP layer by @SarahByrneIntel in #84
- Fix OpenSSF scan by @alessandropalla in #99
- Enable npu compile in compiler.py by @xduzhangjiayu in #100
- Dtype mismatch fix for model training by @SarahByrneIntel in #104
- Add the position_embeddings param to LlamaAttention.forward by @Nagico2 in #105
- Add a param in profile_mlp.py to enable or disable graph mode by @xduzhangjiayu in #106
- Add prelu and normalize ops by @alessandropalla in #107
- Add qwen2_math_7b.py to support the Qwen2 Math 7B LLM by @andyyeh75 in #119
- Update scikit-learn requirement from <=1.5.1 to <=1.5.2 by @dependabot in #123
- Fix some issues on CI by @alessandropalla in #130
- Model compiling demo by @SarahByrneIntel in #115
- Add 'Audio-Spectrogram-Transformer' example by @sbasia in #134
- Building on Ubuntu 24.04 by @ytxmobile98 in #129
- Add turbo mode by @alessandropalla in #140
- Reinstate llama tests by @alessandropalla in #141
## New Contributors
- @Nagico2 made their first contribution in #105
- @andyyeh75 made their first contribution in #119
- @sbasia made their first contribution in #134
- @ytxmobile98 made their first contribution in #129
Full Changelog: v1.3.0...v1.4.0