Hello, sorry I missed this issue when it came in; I was on vacation.
Note that Torch is currently not really supported: the existing code is an old, somewhat hacky attempt that relies on copying data back and forth between CPU and GPU.
But we are actively working on proper Torch integration again. It won't make the upcoming release, but it should be ready some time in autumn. The idea is that you will then be able to use high-level ODL operators directly inside PyTorch computations (entirely on GPU if desired) and auto-differentiate through them as if you were working with native Torch tensors.
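To make the "copying back and forth" concrete, here is a minimal sketch of the pattern such a bridge uses: a `torch.autograd.Function` that moves tensors to CPU, runs a NumPy operator, and copies the result back. The matrix `M` is a stand-in for the real ODL operator, not part of the original code:

```python
import numpy as np
import torch

# Stand-in NumPy "operator"; in the real setup this would be the ODL fp/fbp call.
M = np.arange(12, dtype=np.float64).reshape(3, 4)

class NumpyOp(torch.autograd.Function):
    """Wrap a CPU NumPy operator so it can sit inside a torch graph.

    Tensors are copied to CPU, run through NumPy, and copied back --
    the kind of round trip the old ODL/Torch bridge performs.
    """

    @staticmethod
    def forward(ctx, x):
        x_np = x.detach().cpu().numpy()
        return torch.from_numpy(M @ x_np).to(x.device)

    @staticmethod
    def backward(ctx, grad):
        g_np = grad.detach().cpu().numpy()
        # For a linear operator, the gradient is its adjoint (transpose).
        return torch.from_numpy(M.T @ g_np).to(grad.device)

x = torch.ones(4, dtype=torch.float64, requires_grad=True)
y = NumpyOp.apply(x)
y.sum().backward()
print(x.grad)  # the adjoint applied to ones(3), via the CPU round trip
```

The round trip works, but every call pays for two device transfers, which is exactly what the planned native integration avoids.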
# ODL's PyTorch bridge; initialization() and build_gemotry() are my own helpers (not shown)
from odl.contrib import torch as odl_torch

para_ini = initialization()
fp, fbp, op_norm = build_gemotry(para_ini)

op_modfp = odl_torch.OperatorModule(fp)           # forward projection
op_modfbp = odl_torch.OperatorModule(fbp)         # filtered backprojection (FBP)
op_modpT = odl_torch.OperatorModule(fp.adjoint)   # adjoint, i.e. unfiltered backprojection
Above is the code that sets up my projection and filtered backprojection, both for ndarray inputs (plain ODL) and tensor inputs (via OperatorModule). The reconstruction looks fine when I apply the FBP to the projection on the CPU with ndarrays, but the result is much worse when I process it on the GPU with tensors through op_modpT. Is this caused by a problem in how I wrote the code?
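One thing worth checking, independent of CPU vs GPU: op_modpT wraps fp.adjoint, which is the plain unfiltered backprojection, not the FBP, so its output is expected to look much worse (blurrier) than the fbp result. A minimal NumPy sketch, using a random matrix A as a stand-in for the projection operator, shows that the adjoint satisfies the adjoint identity but is not an inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))   # stand-in for the forward projection fp
x = rng.standard_normal(4)
y = rng.standard_normal(6)

# Adjoint identity: <A x, y> == <x, A^T y> -- this is what fp.adjoint computes.
print(np.isclose((A @ x) @ y, x @ (A.T @ y)))

# But A^T is not an inverse: backprojecting the projection does not recover x.
x_bp = A.T @ (A @ x)
print(np.allclose(x_bp, x))  # False: unfiltered backprojection distorts the image
```

So comparing fbp on the CPU against fp.adjoint on the GPU compares two different operators; op_modfbp would be the like-for-like comparison.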