When I run `python convert_llama_weights_to_hf.py --input_dir /home/hadoop-kg-llm-ddpt/llama/ --model_size 70B --output_dir /home/hadoop-kg-llm-ddpt/llama/Llama-2-70b-chat-hf` to convert Llama 2 70B, the process is killed partway through:

```
Fetching all parameters from the checkpoint at /xx/xxx/llama/70B. Killed
```
Also, I noticed this note in the script:

> Important note: you need to be able to host the whole model in RAM to execute this script (even if the biggest versions come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM).
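For the 70B model that means roughly 70 × 10⁹ parameters × 2 bytes (fp16/bf16) ≈ 140 GB for the weights alone, before the script's own overhead, so any machine with less free RAM than that will be OOM-killed exactly as above.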

Still, I'd like to ask: when there isn't enough RAM, is there any option that lets the script use disk instead of memory to finish the job?
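As far as I can tell the script doesn't expose such a flag, but two generic workarounds might help. The first needs no code changes: create a large swap file (`fallocate` + `mkswap` + `swapon`) so the kernel pages to disk instead of killing the process; conversion becomes slow but can finish. The second is to make the checkpoint loading memory-mapped. Below is a minimal sketch, not the script's actual code, assuming PyTorch 2.1+ (which added `mmap=True` to `torch.load`); the shard path is hypothetical:

```python
import torch

# Sketch only: open one checkpoint shard memory-mapped (PyTorch >= 2.1),
# so tensor data stays on disk and is paged into RAM lazily on access
# instead of being read eagerly into memory.
shard_path = "/xx/xxx/llama/70B/consolidated.00.pth"  # hypothetical path

state_dict = torch.load(shard_path, map_location="cpu", mmap=True)

# Peak RAM now tracks the tensors you actually touch, not the whole
# shard, which is what makes a larger-than-RAM checkpoint manageable.
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
```

If the `torch.load` calls inside convert_llama_weights_to_hf.py were changed this way (and the converted weights written out shard by shard), peak RAM could drop well below the full model size, but I haven't verified this against the current script.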

Did you find a solution by any chance? I ran into the same problem.