Issue while loading fine-tuned gemma2

I'm getting this warning while loading a fine-tuned Gemma2-2b model, and getting weird outputs too when I run it:
Some weights of the model checkpoint at <model_dir> were not used when initializing Gemma2ForCausalLM


In the HF library, this message often appears with both image-generation models and LLMs. Since it is only a warning, it probably does not interfere with operation by itself.
I think the cause of the strange output lies elsewhere.
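One way to check whether the warning matters is to see exactly which checkpoint weights were skipped. A minimal diagnostic sketch, assuming `transformers` is installed and using `<model_dir>` as a placeholder for your checkpoint path:

```python
# Diagnostic sketch: list which checkpoint weights were NOT used at load time.
# "<model_dir>" is a placeholder for your fine-tuned checkpoint directory.
from transformers import AutoModelForCausalLM


def unused_checkpoint_keys(model_dir: str):
    """Load a model and return the checkpoint keys that were skipped.

    With output_loading_info=True, from_pretrained returns the model plus a
    dict containing "unexpected_keys" (weights in the checkpoint that the
    model architecture did not use) and "missing_keys" (the reverse).
    """
    model, loading_info = AutoModelForCausalLM.from_pretrained(
        model_dir, output_loading_info=True
    )
    return loading_info["unexpected_keys"]
```

If the unused keys are all adapter weights (e.g. names containing `lora_`), the checkpoint is a PEFT adapter being loaded as if it were a full model, which would explain both the warning and the weird outputs.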

@amanpreetsingh459 I’m facing the same issue. How did you resolve it?


I resolved it with the following steps:

  1. Push the adapter weights (from the trainer):
    trainer.push_to_hub("your_huggingface_dir")

  2. Push the base model as well:
    base_model.push_to_hub("your_huggingface_dir")

  3. Then load the model with:
    model_finetuned = AutoModelForCausalLM.from_pretrained(
        finetuned_model_name,
        device_map="auto",
        torch_dtype=torch.bfloat16,
    )
