Commit b691302

Author: Leandro von Werra
Commit message: fix distributed optimizer
1 parent: e0b644b

File tree

1 file changed: 0 additions, 3 deletions

megatron/training.py

Lines changed: 0 additions & 3 deletions

@@ -391,9 +391,6 @@ def setup_model_and_optimizer(model_provider_func,
         torch.distributed.barrier()
         timers('load-checkpoint').stop()
         timers.log(['load-checkpoint'])
-        # This is critical when only model is loaded. We should make sure
-        # main parameters are also updated.
-        optimizer.reload_model_params()
     else:
         args.iteration = 0
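The deleted comment refers to a pattern common in mixed-precision training: the optimizer keeps full-precision "main" copies of the model's low-precision parameters, so after a checkpoint load that overwrites only the model weights, those main copies must be resynced or the optimizer keeps stepping stale values. Below is a minimal, framework-free sketch of that idea; `MixedPrecisionOptimizerSketch` and its fields are hypothetical illustrations, not Megatron-LM's actual optimizer class.

```python
# Hypothetical sketch of the "reload model params" pattern, using plain
# Python lists in place of fp16/fp32 tensors.
class MixedPrecisionOptimizerSketch:
    def __init__(self, model_params):
        # Low-precision weights the model computes with (shared references).
        self.model_params = model_params
        # Full-precision "main" copies the optimizer actually updates.
        self.main_params = [list(p) for p in model_params]

    def reload_model_params(self):
        # Resync the main copies from the (possibly checkpoint-loaded)
        # model weights, in place.
        for main, model in zip(self.main_params, self.model_params):
            main[:] = model


weights = [[0.1, 0.2]]
opt = MixedPrecisionOptimizerSketch(weights)
weights[0][:] = [1.0, 2.0]   # simulate a checkpoint load overwriting the model
opt.reload_model_params()
print(opt.main_params[0])    # [1.0, 2.0]
```

Without the resync, `main_params` would still hold `[0.1, 0.2]` after the simulated load; the call brings the optimizer's copies back in line with the model.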
