train.py, line 119 in b1648aa:
    loss.backward()
I'm getting the following error when running:
['train.py', '--config=config_128.json', '--rank=0', '--group_name=group_2024_06_19-220542']
/home/exouser/DiffWave-Vocoder/train.py:61: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if key is not "T":
output directory exp/ch128_T200_betaT0.02/logs/checkpoint
Data loaded
WaveNet_vocoder Parameters: 6.914297M
Successfully loaded model at iteration 1000000
Traceback (most recent call last):
File "/home/exouser/DiffWave-Vocoder/train.py", line 183, in
train(num_gpus, args.rank, args.group_name, **train_config)
File "/home/exouser/DiffWave-Vocoder/train.py", line 119, in train
loss.backward()
File "/home/exouser/.local/lib/python3.10/site-packages/torch/_tensor.py", line 525, in backward
torch.autograd.backward(
File "/home/exouser/.local/lib/python3.10/site-packages/torch/autograd/init.py", line 267, in backward
_engine_run_backward(
File "/home/exouser/.local/lib/python3.10/site-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2, 128, 16000]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
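
For reference, the failure mode in the message is reproducible in isolation: relu() saves its output tensor for the backward pass, so any later in-place edit of that tensor (`+=`, `relu_()`, `masked_fill_()`, ...) bumps its version counter and triggers exactly this error. Below is a minimal sketch (hypothetical code, not from this repo) that reproduces it, together with the anomaly-detection switch the error message suggests for locating the real culprit in train.py:

```python
import torch
import torch.nn.functional as F

# Enable anomaly detection first, as the error message suggests. On the
# failing backward(), PyTorch then prints a second traceback pointing at
# the forward-pass op whose output was later modified in place.
torch.autograd.set_detect_anomaly(True)

# Minimal reproduction (hypothetical, not DiffWave code): relu() saves its
# output for gradient computation; an in-place add bumps that tensor's
# version from 0 to 1, matching the "is at version 1; expected version 0"
# complaint above.
x = torch.randn(2, 128, 16000, requires_grad=True)
h = F.relu(x)        # ReluBackward0 saves h for the backward pass
h += 1.0             # in-place modification of the saved tensor
h.sum().backward()   # raises the same RuntimeError as in the traceback
```

Given that the offending tensor is "output 0 of ReluBackward0", the usual suspects are an `inplace=True` ReLU or an in-place `+=` applied to a ReLU output somewhere in the model's forward pass; rewriting those out-of-place (`x = x + skip`, `inplace=False`) is the typical fix. Separately, the SyntaxWarning at train.py:61 just wants `if key != "T":` instead of `is not`; it's unrelated to the crash.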