I suspect the problem is that nnfusion has no LSTM op kernel for the CUDA_GPU device type, since there is no lstm.cpp under the folder core/kernels/cuda_gpu/kernels.
🐛 Bug
I have a converted LSTM model located at /data1/v-leiwang3/benchmark/nnfusion_models/lstm.float32.onnx. It can be run with onnxruntime, but compilation fails with nnfusion.

To Reproduce
nnfusion /workspace/v-leiwang3/benchmark/nnfusion_models/lstm.float32.onnx -f onnx -p "batch_size:1;seq_length:512" -fwarmup_step=5 -frun_step=100
(Note: the -p argument is quoted here so the shell does not treat the ; as a command separator.)
I added some extra debug logging myself to locate the problem; it seems execution gets stuck at the LSTM node. Any suggestions?
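As a quick sanity check that the exported graph really contains an LSTM node (so the failure is the missing kernel rather than a conversion issue), the op types in the model can be counted with the `onnx` Python package. This is only a sketch: the helper assumes the standard `ModelProto`/`GraphProto` layout, and the model path is the one from this report.

```python
from collections import Counter

def count_op_types(graph):
    """Count op types in an ONNX graph-like object: anything whose
    .node sequence exposes .op_type (e.g. onnx.ModelProto.graph)."""
    return Counter(node.op_type for node in graph.node)

# Usage against a real model (requires `pip install onnx`):
#   import onnx
#   model = onnx.load("/data1/v-leiwang3/benchmark/nnfusion_models/lstm.float32.onnx")
#   print(count_op_types(model.graph))  # an "LSTM" entry confirms the diagnosis
```

If "LSTM" shows up in the counts while nnfusion has no matching CUDA_GPU kernel, the compile failure is expected.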