Issues when using `accelerate` with `fp16`

Hi,

I’m trying to use the accelerate module to parallelize my model training, but I’m having trouble using it when training models with fp16. If I load the model with torch_dtype=torch.float16, I get ValueError: Attempting to unscale FP16 gradients. But if I don’t load the model in half precision, I get a CUDA out of memory error. Below are the details of the problem:

I’m fine-tuning a 2.7B causal language model, stanford-crfm/BioMedLM, on a single A100 40GB GPU (I will eventually work with a much larger model, but I want to use this one to test my training process and make sure everything works as expected). I started with a training script that uses neither accelerate nor Trainer. I can train the model successfully when I load it in half precision with:

# here device = 'cuda'
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(device)

I have to load the model in half precision, otherwise I get a CUDA out of memory error. I simplified my script and uploaded it here as a demonstration. When the model is loaded in half precision, training uses about 27GB of the 40GB of GPU memory, so there is plenty of room left.

Now I want to utilize the accelerate module (potentially with deepspeed for larger models) in my training script. I made the following changes:

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
accelerator = Accelerator(cpu=False, mixed_precision='fp16')
...
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader
)
...
# in the training loop, I updated `loss.backward()` to:
accelerator.backward(loss)

Here is the updated script. I also configured accelerate with accelerate config; the default_config.yaml can be found in the same Gist.

Now when I launch the script on the same machine with accelerate launch --fp16 <script_path>, I get ValueError: Attempting to unscale FP16 gradients. So I removed torch_dtype=torch.float16 from the model loading and relied on accelerate to downcast the model weights to half precision, but then I get a CUDA out of memory error.

To summarize:

  1. I can train the model successfully when loading it with torch_dtype=torch.float16 and not using accelerate.
  2. With accelerate, I cannot load the model with torch_dtype=torch.float16. It gives ValueError: Attempting to unscale FP16 gradients.
  3. If I don’t load the model with torch_dtype=torch.float16 and use fp16 with accelerate, I get a CUDA out of memory error.

So my question is: how can I train this model on a single A100 40GB GPU with accelerate?


Same issue, have you solved it?

I encountered a similar problem, and the fix for me was keeping the unet in fp32 (and casting the rest to fp16). Example:

    import torch
    from accelerate import Accelerator
    from diffusers import StableDiffusionPipeline

    accelerator = Accelerator(mixed_precision='fp16')

    # load the whole pipeline in full precision
    pipe = StableDiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5',
                                                   torch_dtype=torch.float32)
    unet = pipe.components['unet']  # this is full precision (the part being trained)
    noise_scheduler = pipe.components['scheduler']
    tokenizer = pipe.components['tokenizer']
    vae = pipe.components['vae'].requires_grad_(False)
    text_encoder = pipe.components['text_encoder'].requires_grad_(False)

    # cast the frozen components to fp16 to save memory
    text_encoder.to(accelerator.device, dtype=torch.float16)
    vae.to(accelerator.device, dtype=torch.float16)
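(Presumably this works because the trainable unet keeps fp32 weights, so the GradScaler has fp32 gradients to unscale, while the frozen vae and text_encoder can safely be cast to fp16 since they never receive gradients.)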


Notice that in the original question the model was loaded in fp16 due to OOM. You started with fp32, which might not be an option.

From ValueError : Attemting to unscale fp16 Gradients - #13 by Kroshtan - PyTorch Forums: “If you are using amp you are not supposed to call model.half() or model.to(torch.float16); let autocast perform the casting.”

This works for me
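For reference, a minimal sketch of that pattern applied to the original setup (untested; the optimizer choice and gradient checkpointing are assumptions to help the fp32 weights plus optimizer state fit into 40GB, and a smaller batch size, an 8-bit optimizer, or DeepSpeed ZeRO may still be needed):

    import torch
    from accelerate import Accelerator
    from transformers import AutoModelForCausalLM

    accelerator = Accelerator(mixed_precision='fp16')

    # Load in full precision: autocast runs the forward pass in fp16 while the
    # master weights and gradients stay fp32, which is what the GradScaler expects.
    model = AutoModelForCausalLM.from_pretrained('stanford-crfm/BioMedLM')

    # Assumption: gradient checkpointing (and possibly a smaller batch size) to make
    # room for the fp32 weights and optimizer state.
    model.gradient_checkpointing_enable()

    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    model, optimizer = accelerator.prepare(model, optimizer)

    # in the training loop:
    #   outputs = model(**batch)
    #   loss = outputs.loss
    #   accelerator.backward(loss)
    #   optimizer.step()
    #   optimizer.zero_grad()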