I've been trying to figure out the nature of the DeepSpeed integration, especially with respect to Hugging Face Accelerate.
It seems that the Trainer uses Accelerate to facilitate DeepSpeed. But when I look at the documentation, it seems that we still launch with the DeepSpeed launcher or the PyTorch distributed launcher.
It doesn't mention using the Accelerate launcher, which confuses me, since the Trainer uses Accelerate to facilitate the DeepSpeed integration.
As for the TRL library, it seems that it also uses Accelerate for its trainers, but there the official way to launch training is the Accelerate launcher.
> It seems that the Trainer uses Accelerate to facilitate DeepSpeed. But when I look at the documentation, it seems that we still launch with the DeepSpeed launcher or the PyTorch distributed launcher.
You can still launch the script with the deepspeed and torchrun commands. However, since the Trainer refactor, the Trainer backend relies entirely on Accelerate. The accelerate launch command is there to simplify your life: you only need to remember one command, and it is easy to set up the parameters using accelerate config.
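For illustration, here is a minimal sketch of three equivalent ways to launch the same Trainer-based script (train.py and ds_config.json are placeholder names, and 8 GPUs is just an example):

```bash
# 1. DeepSpeed launcher
deepspeed --num_gpus=8 train.py --deepspeed ds_config.json

# 2. PyTorch distributed launcher
torchrun --nproc_per_node=8 train.py --deepspeed ds_config.json

# 3. Accelerate launcher (picks up the settings saved by `accelerate config`, if any)
accelerate launch train.py --deepspeed ds_config.json
```

In all three cases the same Trainer code runs; the launcher only decides how the worker processes are spawned.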
> It doesn't mention using the Accelerate launcher, which confuses me, since the Trainer uses Accelerate to facilitate the DeepSpeed integration.
Check the doc here! But I agree with you, this section is well hidden.
> As for the TRL library, it seems that it also uses Accelerate for its trainers, but there the official way to launch training is the Accelerate launcher.
The TRL trainers rely on the transformers Trainer, which is why you can use Accelerate there as well. We should probably update our docs to put more emphasis on Accelerate, indeed.
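As a rough sketch (sft.py and deepspeed_zero3.yaml are placeholder names for your own training script and Accelerate config, not files shipped with TRL), launching a TRL training script looks just like launching any other Trainer-based script:

```bash
# One-time interactive setup; the answers are saved to a default config file.
accelerate config

# Or pass an explicit config file when launching the (placeholder) TRL script.
# sft.py would build a TRL trainer, which inherits from transformers.Trainer,
# so the same launching story applies.
accelerate launch --config_file deepspeed_zero3.yaml sft.py
```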
I'm wondering if it's still interchangeable with the DeepSpeed launcher, and if not, what the nature of the facilitation is.
Under the hood, accelerate launch uses the DeepSpeed launcher. Same for PyTorch distributed.
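To make that concrete, here is a minimal sketch (the config values are illustrative and train.py is a placeholder): setting distributed_type: DEEPSPEED in the Accelerate config is what tells accelerate launch to go down the DeepSpeed path.

```bash
# Write a minimal Accelerate config that selects DeepSpeed (values are illustrative).
cat > accelerate_ds.yaml << 'EOF'
compute_environment: LOCAL_MACHINE
distributed_type: DEEPSPEED
deepspeed_config:
  zero_stage: 2
  gradient_accumulation_steps: 1
num_machines: 1
num_processes: 4
mixed_precision: bf16
EOF

# accelerate launch reads the config and hands the job to the DeepSpeed launching path.
accelerate launch --config_file accelerate_ds.yaml train.py
```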
> Under the hood, accelerate launch uses the DeepSpeed launcher. Same for PyTorch distributed.
Ah, that clears up a lot for me. Regarding ‘Same for PyTorch distributed’: does that mean that accelerate launch uses PyTorch distributed? Or that PyTorch distributed uses the DeepSpeed launcher?