ValueError: You are trying to offload the whole model to the disk. Please use the `disk_offload` function instead

I am trying to run a GitHub project on my computer.

GitHub Repo that I am trying to run

This is the code snippet that is causing errors.

Steps I took to replicate the project:

  1. Cloned the repository.
  2. Generated a Hugging Face access token.
  3. Added offload_folder and offload_state_dict after reading the Hugging Face guide on loading huge models.
# Imports used by this snippet (HuggingFacePipeline comes from LangChain)
from huggingface_hub import login
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline


def load_llm():
    """
    Load the LLM
    """
    # Model ID
    repo_id = 'meta-llama/Llama-2-7b-chat-hf'
    login(token="hf_xxxxxxxx")
    # Load the model
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        device_map='auto',
        load_in_4bit=False,
        token=True,
        offload_folder=r"C:\Users\DHRUV\Desktop\New folder\Law-GPT",
        offload_state_dict=True
    )

    # Load the tokenizer
    tokenizer = AutoTokenizer.from_pretrained(
        repo_id,
        use_fast=True
    )

    # Create pipeline
    pipe = pipeline(
        'text-generation',
        model=model,
        tokenizer=tokenizer,
        max_length=512
    )

    # Load the LLM
    llm = HuggingFacePipeline(pipeline=pipe)

    return llm

This is the error I am facing; please help:

Token will not been saved to git credential helper. Pass `add_to_git_credential=True` if you want to set the git credential as well.
Token is valid (permission: read).
Your token has been saved to C:\Users\DHRUV\.cache\huggingface\token
Login successful
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\DHRUV\Desktop\New folder\Law-GPT\app.py", line 5, in <module>
    chain = qa_pipeline()
  File "C:\Users\DHRUV\Desktop\New folder\Law-GPT\utils.py", line 100, in qa_pipeline
    llm = load_llm()
  File "C:\Users\DHRUV\Desktop\New folder\Law-GPT\utils.py", line 44, in load_llm
    model = AutoModelForCausalLM.from_pretrained(
  File "C:\Users\DHRUV\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\transformers\models\auto\auto_factory.py", line 566, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\DHRUV\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\transformers\modeling_utils.py", line 3773, in from_pretrained
    dispatch_model(model, **device_map_kwargs)
  File "C:\Users\DHRUV\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\accelerate\big_modeling.py", line 438, in dispatch_model
    raise ValueError(
ValueError: You are trying to offload the whole model to the disk. Please use the `disk_offload` function instead.

I have the same issue trying to load GPT-NeoXT-Chat-Base-20B.
Hopefully someone chimes in here. For now my best guess is to follow the instruction in the error message (rough sketch at the end of this post).

Find this file: C:\Users\DHRUV\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\accelerate\big_modeling.py

On line 436, change `model.to(device)` to `model.disk_offload(device)`.

Let me know if that works. I'm using a Colab notebook and struggling to get the file updated.
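
For what it's worth, `disk_offload` is a standalone function in accelerate rather than a method on the model, so calling it directly would look more like this (untested sketch; "offload" is just a placeholder directory):

from accelerate import disk_offload

# Attach hooks that keep the model's weights in a folder on disk and
# page them back in layer by layer during the forward pass
disk_offload(model, offload_dir="offload")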

And that is not the answer. :(

Yeah, I already tried it. It did not work. 🥲

I fixed it this morning. I modified the initialization method of my chat model so that it checks whether offloading to disk is actually necessary and, if so, uses the `disk_offload` function, which resolves the ValueError. Here's a comparison of the original and modified code, with comments:

Original Code - ChatModel init function:

# Set device to GPU with specified id
device = torch.device('cuda', gpu_id)   
if max_memory is None:
    # Load model onto one device
    self._model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16, device_map="auto")
    self._model.to(device)
else:
    # Load model configuration
    config = AutoConfig.from_pretrained(model_name)
    with init_empty_weights():
        # Initialize model with empty weights
        model_from_conf = AutoModelForCausalLM.from_config(config)
    model_from_conf.tie_weights()
    # Create a device_map from max_memory
    device_map = infer_auto_device_map(
        model_from_conf, max_memory=max_memory, 
        no_split_module_classes=["GPTNeoXLayer"], dtype="float16"
    )
    # Load the model with the above device_map
    self._model = AutoModelForCausalLM.from_pretrained(
        model_name, device_map=device_map, offload_folder="offload",
        offload_state_dict=True, torch_dtype=torch.float16
    )
self._tokenizer = AutoTokenizer.from_pretrained(model_name)

Modified Code - ChatModel init function:

# Needed imports: torch; AutoConfig, AutoModelForCausalLM, AutoTokenizer
# from transformers; disk_offload, infer_auto_device_map, init_empty_weights
# from accelerate

# Selects GPU if available, else CPU
device = torch.device('cuda', gpu_id) if torch.cuda.is_available() else torch.device('cpu')  
# Load model configuration
config = AutoConfig.from_pretrained(model_name)  
with init_empty_weights():
    # Initialize model with empty weights
    self._model = AutoModelForCausalLM.from_config(config)  
# Create device map based on memory constraints
device_map = infer_auto_device_map(
    self._model, max_memory=max_memory, no_split_module_classes=["GPTNeoXLayer"], dtype="float16"
)  
# Determine if offloading is needed
needs_offloading = any(device == 'disk' for device in device_map.values())  
if needs_offloading:
    # Load model for offloading
    self._model = AutoModelForCausalLM.from_pretrained(
        model_name, device_map=device_map, offload_folder="offload",
        offload_state_dict=True, torch_dtype=torch.float16
    )  
    offload_directory = "../offload/"
    # Offload model to disk
    disk_offload(model=self._model, offload_dir=offload_directory)  
else:
    # Load model normally to specified device
    self._model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16
    )
    self._model.to(device)  
# Initialize tokenizer
self._tokenizer = AutoTokenizer.from_pretrained(model_name)  
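
If you want to map this back onto the load_llm above, the same check would look roughly like the sketch below. It is untested: I am assuming "LlamaDecoderLayer" is the right no-split class for Llama-2, and I'm reusing the offload folder from your snippet.

from accelerate import disk_offload, infer_auto_device_map, init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

repo_id = 'meta-llama/Llama-2-7b-chat-hf'

# Build an empty-weight copy of the model just to infer a device map
config = AutoConfig.from_pretrained(repo_id)
with init_empty_weights():
    empty_model = AutoModelForCausalLM.from_config(config)
empty_model.tie_weights()

device_map = infer_auto_device_map(
    empty_model, no_split_module_classes=["LlamaDecoderLayer"], dtype="float16"
)

if any(d == 'disk' for d in device_map.values()):
    # Some weights do not fit in memory: load with offloading enabled,
    # mirroring the modified code above
    model = AutoModelForCausalLM.from_pretrained(
        repo_id, device_map=device_map,
        offload_folder=r"C:\Users\DHRUV\Desktop\New folder\Law-GPT",
        offload_state_dict=True,
    )
    disk_offload(model=model, offload_dir="offload")
else:
    # Everything fits: load normally with the inferred device map
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map=device_map)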

Thanks for the reply, will try it.
Also, Happy New Year, brother.

Same to you! 🎆