The transformer architecture (https://arxiv.org/pdf/1706.03762) has been instrumental in scaling sequence-based neural networks.
The transformer architecture is the fundamental building block of all LLMs, and the trend toward open-sourcing LLMs and reducing their parameter counts is strong. Support for the transformer architecture and attention heads would therefore be a great addition to hls4ml, with power and latency gains to be expected.
However, the status and features pages do not list it (https://fastmachinelearning.org/hls4ml/status.html).
Given the rationale above, I do not understand why the community has not yet engaged with this work, or why it is not listed in the discussions.
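For concreteness, here is a minimal sketch of the kind of attention block one would want to push through the hls4ml flow. The model definition is illustrative (layer sizes and names are my own assumptions, not an existing hls4ml-supported path); the commented conversion calls at the end are the standard hls4ml Keras entry points.

```python
# Minimal single-block self-attention model in Keras -- an illustrative target
# for hls4ml support, not something hls4ml converts today.
import tensorflow as tf
from tensorflow.keras import layers, Model

seq_len, d_model, num_heads = 16, 32, 4  # assumed toy dimensions

inputs = layers.Input(shape=(seq_len, d_model))
# Self-attention: query, key and value all come from the same input tensor.
attn = layers.MultiHeadAttention(
    num_heads=num_heads, key_dim=d_model // num_heads
)(inputs, inputs)
x = layers.Add()([inputs, attn])        # residual connection
x = layers.LayerNormalization()(x)
outputs = layers.Dense(d_model)(x)
model = Model(inputs, outputs)

# The goal would be to run this through the usual hls4ml conversion flow, e.g.:
# import hls4ml
# config = hls4ml.utils.config_from_keras_model(model, granularity='model')
# hls_model = hls4ml.converters.convert_from_keras_model(model, hls_config=config)
# Today the attention layer is not among the layers listed on the status page.
```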