We are trying to deploy a quantized Llama 3.1 70B model (from Hugging Face, quantized with bitsandbytes). The quantization step itself works fine: the model's memory footprint is what we expect and test predictions are correct. The problem is that after saving the quantized model and then loading it back, we get:
ValueError: Supplied state dict for layers.0.mlp.down_proj.weight does not contain bitsandbytes__* and possibly other quantized_stats components.
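To be clear about what "works fine" means: before saving, we check the memory footprint and run a short generation on the freshly quantized model. A minimal sketch of those checks is below; it uses the `model_4bit` and `tokenizer` objects from the full code further down, and the prompt and `max_new_tokens` value are just illustrative, not our exact test.

```python
# Sanity checks on the in-memory quantized model (prompt is illustrative).
print(f"Memory footprint: {model_4bit.get_memory_footprint() / 1e9:.2f} GB")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model_4bit.device)
outputs = model_4bit.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Both of these behave as expected, so the issue only appears at reload time.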
What we do is:
- Save the quantized model using the usual save_pretrained(save_dir)
- Try to load the model using AutoModel.from_pretrained, passing the save_dir and the same quantization_config used when creating the model.
Here is the code:
model_id = "meta-llama/Meta-Llama-3.1-70B-Instruct"
cache_dir = "/home/ec2-user/SageMaker/huggingface_cache"
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
)
model_4bit = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16,
quantization_config=quantization_config,
low_cpu_mem_usage=True,
offload_folder="offload",
offload_state_dict=True,
cache_dir=cache_dir
)
tokenizer = AutoTokenizer.from_pretrained(model_id,cache_dir=cache_dir)
pt_save_directory = "test_directory"
tokenizer.save_pretrained(pt_save_directory,)
model_4bit.save_pretrained(pt_save_directory)
## test load it
loaded_model = AutoModel.from_pretrained(pt_save_directory,
quantization_config=quantization_config
)
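Once the reload succeeds, the check we would use to confirm the saved checkpoint matches the original is simply comparing memory footprints again; this is only a sketch of that intended comparison, not code we can currently run past the error above.

```python
# Intended post-reload check: the reloaded model should report roughly
# the same 4-bit footprint as the original in-memory quantized model.
print(f"original : {model_4bit.get_memory_footprint() / 1e9:.2f} GB")
print(f"reloaded : {loaded_model.get_memory_footprint() / 1e9:.2f} GB")
```

Any idea what we are missing when saving or reloading the quantized checkpoint?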