# one of the variables needed for gradient computation has been modified by an inplace operation


## Problem :

I am trying to compute a loss on the Jacobian of the network, but I encountered the following error:

> RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

## Solution :

Please note that `grad_output.zero_()` is in-place, and so is `grad_output[:, i-1] = 0`. In-place means "modify a tensor directly instead of returning a new one with the modifications applied". An out-of-place alternative, which zeroes out column index 1 (the second column) with `torch.where`, looks like this:

```python
import torch

t = torch.randn(3, 3)
ixs = torch.arange(3, dtype=torch.int64)

# Out-of-place: torch.where returns a new tensor; t itself is untouched.
zeroed = torch.where(ixs[None, :] == 1, torch.tensor(0.), t)

print(zeroed)
# tensor([[-0.6616,  0.0000,  0.7329],
#         [ 0.8961,  0.0000, -0.1978],
#         [ 0.0798,  0.0000, -1.2041]])

print(t)
# tensor([[-0.6616, -1.6422,  0.7329],
#         [ 0.8961, -0.9623, -0.1978],
#         [ 0.0798, -0.7733, -1.2041]])
```

Notice that `t` retains the values it had before, while `zeroed` holds the result you want.
