Pytorch. Can Autograd Be Used When The Final Tensor Has More Than A Single Value In It?
Can autograd be used when the final tensor has more than a single value in it? I tried the following:
x = torch.tensor([4.0, 5.0], requires_grad=True)
y = x ** 2
print(y)
y.backward()
Solution 1:
See https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#gradients
Calling
y.backward()
is the same as calling
y.backward(torch.tensor(1.0))
Usually the output is a scalar, so a gradient of 1.0 is passed to backward by default. However, since your output y is not a scalar but a one-dimensional tensor with two elements, you need to pass a gradient tensor of the same shape:
y.backward(torch.tensor([1.0,1.0]))
This will give the expected result, with x.grad being tensor([ 8., 10.]).
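For completeness, here is a minimal self-contained sketch putting the pieces together. It uses only the tensors already defined in the question; torch.ones_like(y) is just a convenient way to build the all-ones gradient tensor shown above:
import torch

x = torch.tensor([4.0, 5.0], requires_grad=True)
y = x ** 2  # y = tensor([16., 25.]), a non-scalar output

# y.backward() alone would fail because grad can only be implicitly
# created for scalar outputs; pass a gradient tensor with y's shape.
y.backward(torch.ones_like(y))  # equivalent to y.backward(torch.tensor([1.0, 1.0]))

print(x.grad)  # tensor([ 8., 10.]), since dy/dx = 2 * x
Passing a vector of ones here simply sums the contributions of each element of y, which is why x.grad matches the elementwise derivative 2 * x.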