While analysing the GPU memory footprint of a few models "prepared" via accelerator.prepare(), I found that the prepared models occupy more than twice the GPU memory of the same models loaded normally onto the device. What could be the reason for this inflated memory footprint?
I am using a single V100 32 GB GPU, and this is how my config file looks:
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
Here are my scripts:
Loading via Accelerator:
from transformers import AutoModelForCausalLM
from accelerate import Accelerator
accelerator = Accelerator()
model_pretrained_ckpt_path_hf = 'Salesforce/codegen-2B-multi'
model = AutoModelForCausalLM.from_pretrained(model_pretrained_ckpt_path_hf)
model = accelerator.prepare(model)
input("Model has been put into memory")
Normal loading:
from transformers import AutoModelForCausalLM
from accelerate import Accelerator
model_pretrained_ckpt_path_hf = 'Salesforce/codegen-2B-multi'
model = AutoModelForCausalLM.from_pretrained(model_pretrained_ckpt_path_hf).to("cuda")
input("Model has been put into memory")