Inflated GPU memory footprint of model prepared via accelerate

While analysing the GPU memory footprint of a few models 'prepared' via accelerator.prepare(), I found that they occupy more than twice the GPU memory of the same models loaded directly onto the device. What could be the reason for this inflated memory footprint?

I am using a single V100 32 GB GPU, and this is how my config file looks:

compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false

Here are my scripts:

Loading via Accelerator:

from transformers import AutoModelForCausalLM
from accelerate import Accelerator

accelerator = Accelerator()
model_pretrained_ckpt_path_hf = 'Salesforce/codegen-2B-multi'
model = AutoModelForCausalLM.from_pretrained(model_pretrained_ckpt_path_hf)
# prepare() moves the model to the device and applies any distributed wrapping
model = accelerator.prepare(model)
input("Model has been put into memory")  # pause so memory usage can be inspected

Normal loading:

from transformers import AutoModelForCausalLM

model_pretrained_ckpt_path_hf = 'Salesforce/codegen-2B-multi'
# Load the model and move it to the GPU directly, without accelerate
model = AutoModelForCausalLM.from_pretrained(model_pretrained_ckpt_path_hf).to("cuda")
input("Model has been put into memory")  # pause so memory usage can be inspected

Same problem. Did you make any progress in resolving it?

@tongyx361 I stopped passing my model through accelerator.prepare(). I could afford to do this as I was only doing inference, but I didn't find a solution to the underlying issue.
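A minimal sketch of that workaround, assuming inference only: the model is moved to the process's device by hand and prepare() is never called, so no distributed wrapper is created around it:

from transformers import AutoModelForCausalLM
from accelerate import Accelerator

accelerator = Accelerator()
model = AutoModelForCausalLM.from_pretrained('Salesforce/codegen-2B-multi')
# Place the raw model on this process's device instead of calling prepare()
model = model.to(accelerator.device)
model.eval()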

Thanks! But how do you do distributed inference without prepare()?

@tongyx361 Check this out: Data Parallel Multi GPU Inference
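A condensed sketch of the approach from that thread, assuming accelerate >= 0.20 (for split_between_processes) and a script started with accelerate launch; the prompts and generation settings here are illustrative only:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from accelerate import Accelerator

accelerator = Accelerator()
tokenizer = AutoTokenizer.from_pretrained('Salesforce/codegen-2B-multi')
model = AutoModelForCausalLM.from_pretrained('Salesforce/codegen-2B-multi')
model = model.to(accelerator.device)  # no prepare(), so no distributed wrapper
model.eval()

prompts = ["def fibonacci(n):", "def quicksort(arr):"]  # illustrative inputs

# Each process receives a disjoint shard of the prompt list, so every GPU
# runs generation on its own subset of the data.
with accelerator.split_between_processes(prompts) as subset:
    for prompt in subset:
        inputs = tokenizer(prompt, return_tensors="pt").to(accelerator.device)
        with torch.no_grad():
            out = model.generate(**inputs, max_new_tokens=32)
        print(tokenizer.decode(out[0], skip_special_tokens=True))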


Thanks a lot!!!