Loading the LLaMA 30B Model: Memory Issue

Hi,
I am trying to load the LLaMA 30B model for my research. I downloaded it from Meta and converted it to HF weights using the conversion code from HF. When I try to load the model with the following code:

model = transformers.AutoModelForCausalLM.from_pretrained(
        model_args.model_name_or_path,
        cache_dir=training_args.cache_dir,
    )

I get the following error:

Loading checkpoint shards:   0%|                                                                                                                                                                                                                             | 0/7 [00:00<?, ?it/s]WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 36704 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 36706 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 36707 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 36708 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 36710 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 36712 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 36715 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 1 (pid: 36705) of binary: /usr/bin/python
Traceback (most recent call last):
  File "/usr/local/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch==2.1.0a0+fe05266', 'console_scripts', 'torchrun')())
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 

During model loading, it uses almost 1005 GB of memory.

I loaded the 7B model earlier, and it worked fine.
Is the issue that my DGX cannot handle such a large model with its 1005 GB of memory, or am I missing something here?

I would really appreciate any suggestions.

Hi, I had the same OOM trouble when loading Llama2-70b.
Have you solved this problem?
I would appreciate your suggestions.

I was able to solve the problem.

I was using torchrun to load the model, and nproc_per_node was set to 8.
So while loading, the system was creating 8 instances of the same model, one per process. At roughly 120 GB per fp32 copy of a 30B-parameter model, that is close to 1 TB, which exhausted the memory and crashed the program.

If this is the case for you, you can try setting the value to 4 or 2 and see what happens.
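On top of that, from_pretrained loads the weights in full fp32 unless you tell it otherwise. As a further option, here is a minimal sketch (reusing the model_args / training_args names from the snippet above, and assuming a transformers version that supports these arguments; low_cpu_mem_usage may need the accelerate package installed) that loads each copy in half precision and avoids an extra in-memory duplicate while the shards are applied:

import torch
import transformers

# Sketch: load the checkpoint shards in fp16 (~2 bytes per parameter instead
# of ~4) and skip the extra full-size copy that the default loading path keeps
# in memory while the state dict is applied.
model = transformers.AutoModelForCausalLM.from_pretrained(
        model_args.model_name_or_path,   # same path variable as in the snippet above
        cache_dir=training_args.cache_dir,
        torch_dtype=torch.float16,
        low_cpu_mem_usage=True,
    )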

Let me know if this solves the issue.
