Autodiff backend #2476
-
Hi,
-
Autodiff implements `Backend`, and so does `NdArray`. But the Autodiff backend has an inner backend that knows how to do the computation (Autodiff is just for computing gradients). A device is not a backend but is associated with a backend:

```rust
pub trait Backend: // <-- A backend
    FloatTensorOps<Self>
    + BoolTensorOps<Self>
    + IntTensorOps<Self>
    // ...
    + 'static
{
    /// Device type.
    type Device: DeviceOps; // <-- has an associated device type
}
```

```rust
type Backend = Autodiff<Wgpu>; // <-- specify backend

let device = WgpuDevice::default(); // <-- specify device
let x: Tensor<Backend, 1> = Tensor::from_floats([3.0], &device);
let mut y: Tensor<Backend, 1> = Tensor::from_floats([2.0], &device);
```
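To make the "Autodiff is just for computing gradients" point concrete, here is a minimal sketch of using the wrapped backend end to end; the `require_grad`/`backward`/`grad` calls follow Burn's tensor API, and the specific expression is just illustrative:

```rust
use burn::backend::wgpu::WgpuDevice;
use burn::backend::{Autodiff, Wgpu};
use burn::tensor::Tensor;

// Autodiff wraps the inner Wgpu backend, which does the actual computation.
type Backend = Autodiff<Wgpu>;

fn main() {
    let device = WgpuDevice::default();

    // Mark x as requiring gradients so Autodiff tracks its operations.
    let x: Tensor<Backend, 1> = Tensor::from_floats([3.0], &device).require_grad();
    let y = x.clone() * x.clone(); // y = x^2

    // backward() is only available because Backend is an autodiff backend.
    let grads = y.backward();

    // The gradient comes back as a tensor on the inner (Wgpu) backend: dy/dx = 2x = 6.
    let x_grad = x.grad(&grads).unwrap();
    println!("{x_grad}");
}
```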
-
As "A device is not a backend but is associated with a backend", can burn infer the backend from the device ? |
-
I tested the following code:

```rust
let device = NdArrayDevice::default();
let x: Tensor<NdArray, 1> = Tensor::from_floats([3.0], &device);
let x: Tensor<Autodiff<NdArray>, 1> = Tensor::from_floats([3.0], &device);
```

Both work.
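That both lines compile with the same device value is exactly why the backend can't be inferred from the device: the backend is a compile-time type parameter, while the device is only a value of its associated `Device` type. A hypothetical sketch (the `make` function is illustrative, not from the thread) that makes the direction of inference visible:

```rust
use burn::backend::ndarray::NdArrayDevice;
use burn::backend::{Autodiff, NdArray};
use burn::prelude::*; // brings Backend, Tensor, etc. into scope

// The backend B is a type parameter chosen by the caller; the device is just
// a value of the associated type B::Device. The same device value can be
// paired with every backend whose Device type matches.
fn make<B: Backend>(device: &B::Device) -> Tensor<B, 1> {
    Tensor::from_floats([3.0], device)
}

fn main() {
    let device = NdArrayDevice::default();
    // The caller must name the backend explicitly; it cannot be deduced from `device`.
    let _plain = make::<NdArray>(&device);
    let _tracked = make::<Autodiff<NdArray>>(&device);
}
```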
-
Yep! The device can be inferred from the backend, but not the other way around. `NdArray` is a concrete backend implementation, and `Autodiff` also implements `Backend` but uses the inner backend to perform the tensor operations, adding auto-differentiation capabilities to the backend used. Autodiff is intended to be used as a decorator/wrapper, which means you have to be explicit about when to use it. This is intentional: you specify explicitly when you want to track the operations graph; otherwise you'd be wasting resources for nothing 🙂

See also: #2415
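A sketch of that decorator pattern in practice: keep the wrapper only where you need gradient tracking, and drop to the inner backend when you don't. This uses Burn's `Tensor::inner()`, which is available on autodiff-backend tensors; the type aliases are illustrative:

```rust
use burn::backend::ndarray::NdArrayDevice;
use burn::backend::{Autodiff, NdArray};
use burn::tensor::Tensor;

// Train with the decorator so the operations graph is tracked...
type TrainBackend = Autodiff<NdArray>;
// ...but run inference on the plain backend, paying no autodiff overhead.
type InferBackend = NdArray;

fn main() {
    let device = NdArrayDevice::default();

    let x: Tensor<TrainBackend, 1> =
        Tensor::from_floats([2.0], &device).require_grad();

    // inner() strips the autodiff decorator, yielding a tensor on NdArray.
    let x_infer: Tensor<InferBackend, 1> = x.inner();
    println!("{x_infer}");
}
```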