DDP gradient checkpointing crashes

Hello,

I am using the training script to fine-tune a wav2vec2 model for classification.

I am using DDP on two GPUs:

python -m torch.distributed.run --nproc_per_node 2 run_audio_classification.py

(I use torch.distributed.run because torch.distributed.launch fails for me)

All else being equal, facebook/wav2vec2-base works with gradient_checkpointing set to True; however, the large model crashes unless the option is either set to False or removed.

gradient_checkpointing works for both models if using a single GPU, so the issue seems to be DDP-related.
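
For context, this is roughly how the option is wired up, if I understand the example script correctly (a sketch; the Trainer enables checkpointing on the model when the flag is set, and the model name below is just the one I am testing with):

from transformers import AutoModelForAudioClassification, TrainingArguments

# --gradient_checkpointing True from the command line ends up here ...
training_args = TrainingArguments(
    output_dir="wav2vec2-cls",   # illustrative output path
    gradient_checkpointing=True,
)

# ... and the Trainer then calls model.gradient_checkpointing_enable()
# on the loaded model before wrapping it in DDP.
model = AutoModelForAudioClassification.from_pretrained("facebook/wav2vec2-large")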

Is this behaviour to be expected?

This is the error message:

Traceback (most recent call last):
  File "/home/emoman/Downloads/mosei/messai/run_audio_classification.py", line 606, in <module>
    main()
  File "/home/emoman/Downloads/mosei/messai/run_audio_classification.py", line 580, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/transformers/trainer.py", line 1553, in train
    return inner_training_loop(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/transformers/trainer.py", line 1835, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/transformers/trainer.py", line 2690, in training_step
    self.accelerator.backward(loss)
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/accelerate/accelerator.py", line 1923, in backward
    loss.backward(**kwargs)
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/torch/autograd/function.py", line 274, in apply
    return user_fn(self, *args)
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/torch/utils/checkpoint.py", line 157, in backward
    torch.autograd.backward(outputs_with_grad, args_with_grad)
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 392 has been marked as ready twice. This means that multiple autograd engine  hooks have fired for this particular parameter during this iteration. You can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print parameter names for further debugging.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 1313298) of binary: /home/emoman/Downloads/stable/bin/python
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/torch/distributed/run.py", line 798, in <module>
    main()
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/emoman/Downloads/stable/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

Duplicate:

Try this, it worked for me: model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant":False})
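
In case it helps, newer transformers releases also expose this through TrainingArguments, so the example script's flag can be kept instead of calling the method by hand (a sketch; check that your installed version accepts gradient_checkpointing_kwargs):

from transformers import TrainingArguments

# The Trainer forwards these kwargs when it enables checkpointing on the model,
# which is equivalent to calling model.gradient_checkpointing_enable(...) yourself.
training_args = TrainingArguments(
    output_dir="wav2vec2-cls",   # illustrative output path
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},
)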

@Owos could you tell me exactly where I should place the line model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant":False})?
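
For reference, a minimal sketch of where such a call usually goes in a Trainer-based script: right after the model is loaded and before the Trainer is constructed (the names and values below are illustrative, not copied from run_audio_classification.py):

from transformers import AutoModelForAudioClassification, Trainer, TrainingArguments

model = AutoModelForAudioClassification.from_pretrained(
    "facebook/wav2vec2-large",
    num_labels=2,   # illustrative label count
)

# Switch to the non-reentrant checkpoint implementation before the Trainer
# wraps the model in DDP, so the reducer does not mark parameters ready twice.
model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})

training_args = TrainingArguments(
    output_dir="wav2vec2-cls",   # illustrative output path
    # Note: if --gradient_checkpointing True is also passed on the command line,
    # the Trainer may re-enable checkpointing with its own defaults, so either
    # drop that flag or pass the same kwargs through TrainingArguments instead.
)

trainer = Trainer(model=model, args=training_args)
# ... then trainer.train() as usual, with train_dataset, eval_dataset,
# feature extractor and metrics passed in as the script already does.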