v2.3.0 #25
HolyWu announced in Announcements
- Added `dual` parameter to perform inference in two threads for better performance. Mostly useful for TensorRT, not so useful for CUDA, and not supported for DirectML.

This discussion was created from the release v2.3.0.
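The general two-thread inference pattern behind a `dual` option can be sketched in plain Python. This is only an illustration of the technique with a dummy `infer` function, not the plugin's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def infer(frame):
    # Placeholder for a real inference call (e.g. executing a TensorRT engine).
    return frame * 2

def run_dual(frames, dual=True):
    """Process frames with one or two worker threads.

    With dual=True, two threads overlap work so one can submit the next
    frame while the other's inference is still in flight; with dual=False,
    frames are processed strictly one after another.
    """
    workers = 2 if dual else 1
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves input order regardless of completion order
        return list(pool.map(infer, frames))

print(run_dual([1, 2, 3]))  # -> [2, 4, 6]
```

Overlapping two threads mainly pays off when each inference call releases the GIL and keeps the device busy (as a TensorRT engine execution does), which matches the note that the option helps TensorRT more than CUDA.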