How to set model_path to another directory?

Hello! I am trying to follow these instructions to have both GPU and CPU offloading.

Below is the code I am using:

Set the quantization config with llm_int8_enable_fp32_cpu_offload set to True:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)

device_map = {
    "transformer.word_embeddings": 0,
    "transformer.word_embeddings_layernorm": 0,
    "lm_head": "cpu",
    "transformer.h": 0,
    "transformer.ln_f": 0,
}

model_path = "decapoda-research/llama-7b-hf"
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map=device_map,
    quantization_config=quantization_config,
)

My problem is that model_path points at the Hugging Face Hub and a specific model there. I also have these models downloaded, but in another path:

C:\Windows\System32\text-generation-webui\models

When I run the code above it downloads the model again. How do I point model_path at my already downloaded models?
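Something like this is what I am after, reusing the device_map and quantization_config from above (the llama-7b-hf subfolder name is just my guess at how the local copy is laid out):

from transformers import AutoModelForCausalLM

# Hypothetical local folder under my text-generation-webui models directory
local_model_path = r"C:\Windows\System32\text-generation-webui\models\llama-7b-hf"

model_8bit = AutoModelForCausalLM.from_pretrained(
    local_model_path,
    device_map=device_map,
    quantization_config=quantization_config,
)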

Even when I let the download run, I get this error

Traceback (most recent call last):
  File "C:\Windows\System32\text-generation-webui\server7b.py", line 33, in <module>
    model_8bit = AutoModelForCausalLM.from_pretrained(
  File "C:\Users\justi\miniconda3\envs\textgen\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\justi\miniconda3\envs\textgen\lib\site-packages\transformers\modeling_utils.py", line 2643, in from_pretrained
    ) = cls._load_pretrained_model(
  File "C:\Users\justi\miniconda3\envs\textgen\lib\site-packages\transformers\modeling_utils.py", line 2966, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "C:\Users\justi\miniconda3\envs\textgen\lib\site-packages\transformers\modeling_utils.py", line 662, in _load_state_dict_into_meta_model
    raise ValueError(f"{param_name} doesn't have any device set.")
ValueError: model.layers.0.self_attn.q_proj.weight doesn't have any device set.

Looks like your device_map isn't covering something called model.layers.0.self_attn.q_proj.weight.
Let's cast a wider net, maybe.

Create the device map dictionary with a default device assignment, to catch anything not specified:

import torch

# Fall back to the GPU for anything not listed explicitly
default_device = 0
device_map = {name: default_device for name, _ in model.named_parameters()}

Assign specific devices to parameters:

device_map.update({
    "transformer.word_embeddings": torch.device("cuda:0"),
    "transformer.word_embeddings_layernorm": torch.device("cuda:0"),
    "lm_head": torch.device("cpu"),
    "transformer.h": torch.device("cuda:0"),
    "transformer.ln_f": torch.device("cuda:0"),
})
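One more thing worth flagging: the error message mentions model.layers..., and the LLaMA implementation in transformers doesn't use the transformer.* names at all; its top-level modules are model.embed_tokens, model.layers, model.norm and lm_head. So a device map keyed on those names (just a sketch, not tested on your setup) would look more like:

device_map = {
    "model.embed_tokens": 0,
    "model.layers": 0,
    "model.norm": 0,
    "lm_head": "cpu",
}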

If you want to get specific, maybe something like this:

from transformers import AutoModelForCausalLM

model_path = "decapoda-research/llama-7b-hf"
model = AutoModelForCausalLM.from_pretrained(model_path, local_files_only=True)

Get the names of the model parameters:

param_names = [name for name, _ in model.named_parameters()]

Print the parameter names:

for name in param_names:
    print(name)

This iterates over model.named_parameters() and keeps each parameter's name, which gives you the exact names to use for device assignment.
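If you only need coarse placement, you could also collapse those names down to their top-level prefixes, which is usually enough for a device map; a quick sketch:

# Unique top-level module names, e.g. "model" and "lm_head" for a LLaMA checkpoint
top_level_modules = sorted({name.split(".")[0] for name, _ in model.named_parameters()})
print(top_level_modules)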

Maybe have it check for the file first:

import os
from transformers import AutoModelForCausalLM

model_name = "llama-7b-hf"
model_directory = "models"

Check if the model folder exists in the specified directory:

model_file = os.path.join(model_directory, model_name)
if os.path.exists(model_file):
    model_path = model_file
    print("Model found in the directory. Using the local copy.")
else:
    model_path = "decapoda-research/llama-7b-hf"  # fall back to the Hub repo id
    print("Model not found in the directory. Downloading the model from the repository.")

Load the model:

model = AutoModelForCausalLM.from_pretrained(model_path)
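And to tie it back to the original snippet, you could then pass that model_path together with the quantization config and whatever device map you settled on above; treat this as a sketch rather than a drop-in:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)

model = AutoModelForCausalLM.from_pretrained(
    model_path,                      # local folder if it was found, Hub repo id otherwise
    device_map=device_map,           # the device map built earlier
    quantization_config=quantization_config,
)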