
Add support for moonshine ASR models #990

Open
1 of 2 tasks
bil-ash opened this issue Oct 24, 2024 · 3 comments
Labels
new model Request a new model

Comments

bil-ash commented Oct 24, 2024

Model description

Please add support for Moonshine ASR models. A recent GitHub commit adds ONNX (Python) support, so I expect porting to JS won't take much effort. However, there is no mention of Transformers usage.

This model is well suited to in-browser use, since it is quite small and claims to use RAM proportional to the length of the audio.

Prerequisites

  • The model is supported in Transformers (i.e., listed here)
  • The model can be exported to ONNX with Optimum (i.e., listed here)

Additional information

No response

Your contribution

None

bil-ash added the new model label Oct 24, 2024
evmaki commented Oct 29, 2024

Hi, Evan from Useful Sensors/Moonshine here. Just letting you know this is on our radar. We're working on Transformers support right now, and we (internally) have our current ONNX models running in the browser with onnxruntime-web. Shouldn't be too difficult to get support added to transformers.js from there.

bil-ash (Author) commented Oct 29, 2024

> Hi, Evan from Useful Sensors/Moonshine here. Just letting you know this is on our radar. We're working on Transformers support right now, and we (internally) have our current ONNX models running in the browser with onnxruntime-web. Shouldn't be too difficult to get support added to transformers.js from there.

@evmaki Not related to this issue, but I am also eagerly waiting for the ability to finetune for supporting a new language.

xenova (Collaborator) commented Oct 29, 2024

> Hi, Evan from Useful Sensors/Moonshine here. Just letting you know this is on our radar. We're working on huggingface/transformers#34474 right now, and we (internally) have our current ONNX models running in the browser with onnxruntime-web. Shouldn't be too difficult to get support added to transformers.js from there.

@evmaki Great to hear! I have been following the ONNX support and it looks like a great start! One issue is that you currently export two versions of the decoder (with and without past key values), leading to weight duplication (more of a problem when running in the browser, since we load the decoder twice).

We were able to solve this in Optimum by adding an If node to the graph and choosing which path to take based on whether the past key values are provided. See here for an example, and here for the code used to merge the two decoders.

I was experimenting with your codebase to pass zero-sized tensors as input, but I get gibberish output.

Either way, once we have Transformers support, I expect this to be much easier to convert (since the input/output signatures should be similar to Whisper's).
