Llama2-70b SafetensorError: Error while deserializing header: HeaderTooLarge

I tried to load llama-2-70b-chat-hf with the following code:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

base_model = "/llm/llama2-2-70b-chat-hf"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=True,
    device_map={"": 0},
    use_safetensors=True,
)

and the error shown below is raised:
in load_state_dict(checkpoint_file)
    462     """
    463     Reads a PyTorch checkpoint file, returning properly formatted errors if they arise.
    464     """
    465     if checkpoint_file.endswith(".safetensors") and is_safetensors_available():
--> 466         with safe_open(checkpoint_file, framework="pt") as f:
    467             metadata = f.metadata()

SafetensorError: Error while deserializing header: HeaderTooLarge

I tried re-downloading the safetensors files, but the error persists.
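In case it helps with diagnosis: HeaderTooLarge means the 8-byte little-endian length prefix at the start of a .safetensors file decodes to an implausibly large header size, which usually happens when a shard is not a real weights file (for example a git-lfs pointer or an HTML error page left by an interrupted download). Below is a minimal sketch, assuming the shards live under the same local path as base_model above, that reads each shard's length prefix to spot such files.

# Check each shard's safetensors header; LLAMA_DIR is assumed to match
# the base_model path used above.
import json
import struct
from pathlib import Path

LLAMA_DIR = Path("/llm/llama2-2-70b-chat-hf")

for shard in sorted(LLAMA_DIR.glob("*.safetensors")):
    size = shard.stat().st_size
    with shard.open("rb") as f:
        prefix = f.read(8)
        if len(prefix) < 8:
            print(f"{shard.name}: file too small ({size} bytes)")
            continue
        # A valid safetensors file starts with an 8-byte little-endian
        # integer giving the length of the JSON header that follows.
        (header_len,) = struct.unpack("<Q", prefix)
        if header_len > size:
            # A huge value here usually means the shard is not real weights,
            # e.g. a git-lfs pointer or an HTML error page.
            print(f"{shard.name}: header length {header_len} > file size {size} -> likely corrupt or incomplete")
            continue
        header = json.loads(f.read(header_len))
        print(f"{shard.name}: OK, {len(header)} entries, {size / 1e9:.2f} GB")

If any shard is flagged, comparing its on-disk size against the sizes listed on the model page (or in model.safetensors.index.json) should confirm whether the download is incomplete.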