This is part of the code in my train.py:
output_c1[output_c1 > 0.5] = 1.
output_c1[output_c1 < 0.5] = 0.
dice = DiceLoss()
loss = dice(output_c1, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
The error:
Traceback (most recent call last):
File "D:\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3296, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-78553e2886de>", line 1, in <module>
runfile('F:/experiment_code/U-net/train.py', wdir='F:/experiment_code/U-net')
File "D:\pycharm\PyCharm Community Edition 2019.1.1\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "D:\pycharm\PyCharm Community Edition 2019.1.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "F:/experiment_code/U-net/train.py", line 99, in <module>
loss.backward()
File "D:\Anaconda3\lib\site-packages\torch\tensor.py", line 107, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "D:\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 2, 224, 224]], which is output 0 of SigmoidBackward, is at version 2; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
When I comment out these two lines of code:
output_c1[output_c1 > 0.5] = 1.
output_c1[output_c1 < 0.5] = 0.
then it runs fine. I think the error comes from these lines, but I don't know how to fix it.
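For context, a minimal sketch of why this fails and one common workaround (variable names and shapes here are illustrative, not taken from the actual train.py): `output_c1[...] = 1.` modifies the sigmoid's output tensor in place, but autograd saved that tensor to compute the sigmoid's backward pass, hence the "modified by an inplace operation" error. Note also that hard thresholding has zero gradient everywhere, so even an out-of-place threshold would stop gradients; the usual approach is to compute a soft Dice loss on the raw probabilities and threshold only a detached copy for metrics.

```python
import torch

# Illustrative stand-in for the network's sigmoid output (shape is arbitrary).
x = torch.randn(4, 2, 8, 8, requires_grad=True)
output_c1 = torch.sigmoid(x)

# Threshold a DETACHED copy for metrics/visualization only;
# no gradient flows through this tensor, and the original is untouched.
hard = (output_c1.detach() > 0.5).float()

# Soft Dice loss on the un-thresholded probabilities keeps the graph intact.
# (Hand-rolled here for self-containment; the post's DiceLoss is assumed similar.)
labels = torch.randint(0, 2, (4, 2, 8, 8)).float()
intersection = (output_c1 * labels).sum()
loss = 1 - (2 * intersection + 1) / (output_c1.sum() + labels.sum() + 1)

loss.backward()  # succeeds: sigmoid's saved output was never modified in place
```

If binarized predictions are genuinely needed inside the loss, a straight-through trick (`hard = output_c1 + (hard - output_c1).detach()`) is sometimes used, but for Dice the soft version above is standard.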