
can the output model be transform to onnx format? #8

Open
dlkht opened this issue Oct 19, 2022 · 3 comments

Comments


dlkht commented Oct 19, 2022

As in the title.

hzhwcmhf (Member) commented

Not supported yet.
I have a plan to achieve quick and effective deployment, but it is not my highest priority (I'm still pursuing higher quality). PRs are welcome.
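For reference, a common starting point for such a conversion is PyTorch's built-in `torch.onnx.export`. The sketch below is a generic illustration only, not part of this repository: the `TinyModel` module, input shape, and axis names are placeholders, and the real model would need to be loaded and traced with its actual input signature.

```python
# Minimal sketch of exporting a PyTorch module to ONNX with torch.onnx.export.
# TinyModel, the dummy input shape, and the axis names are assumptions for
# illustration; they are not the model produced by this repository.
import torch
import torch.nn as nn


class TinyModel(nn.Module):
    def __init__(self, vocab_size=1000, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.proj = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids -> (batch, seq_len, vocab_size) logits
        return self.proj(self.embed(tokens))


model = TinyModel().eval()
dummy_tokens = torch.randint(0, 1000, (1, 16))  # example (batch=1, seq_len=16) input

torch.onnx.export(
    model,
    (dummy_tokens,),
    "model.onnx",
    input_names=["tokens"],
    output_names=["logits"],
    dynamic_axes={
        "tokens": {0: "batch", 1: "seq_len"},
        "logits": {0: "batch", 1: "seq_len"},
    },
    opset_version=14,
)
```

Note that models with data-dependent control flow (such as non-autoregressive decoding loops) often cannot be traced directly and may need a restructured forward pass or scripting, which is presumably part of why this is nontrivial to support out of the box.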

dlkht (Author) commented Oct 20, 2022

Thanks for your answer.
Can the inference model be run on the Android platform?

hzhwcmhf (Member) commented

@dlkht I am afraid that is not in my future plans.

I want to mention that NAT is mainly designed to exploit the parallel computing ability of GPUs, so it may not bring such a big speedup on CPUs or mobile devices. (See Table 1 in https://arxiv.org/pdf/2205.10577.pdf)
