Problem deploying on the Hugging Face Hub

Hi!
I fine-tuned an NLLB model on my data using LoRA, but after uploading it to the Hugging Face Hub I couldn't deploy it, and Hugging Face is not showing any deploy button for me.
These are the files in the model repo that I uploaded.
Can someone help me with this issue?

Hi,

It looks like the repository only contains the adapter weights, not the full model.

To load the entire model (base model with adapter weights), you can use the AutoPeftModelForSeq2SeqLM class:

from peft import AutoPeftModelForSeq2SeqLM

base_model_id = "facebook/nllb-200-distilled-600M"
adapter_model_id = "your-adapter-repo"

peft_model = AutoPeftModelForSeq2SeqLM.from_pretrained(base_model_id, adapter_model_id)

# merge_and_unload returns a regular NLLB model that you can use for inference
model = peft_model.merge_and_unload()
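
Once merged, you can also push the full model to the Hub; a standalone transformers checkpoint (rather than an adapter-only repo) is typically what the Hub's deploy/inference options expect. A minimal sketch, assuming you also push the tokenizer; "your-username/nllb-merged" is a hypothetical repo name:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# push the merged model and its tokenizer to a new Hub repository
model.push_to_hub("your-username/nllb-merged")  # hypothetical repo name
tokenizer.push_to_hub("your-username/nllb-merged")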

I got this error. My base model is NLLB-200 3.3B:
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ali/veirtual_env/mahdi.py", line 6, in <module>
    peft_model = AutoPeftModelForSeq2SeqLM.from_pretrained(base_model_id, adapter_model_id)
  File "/home/ali/checkpoint/model-training/vnev/lib/python3.10/site-packages/peft/auto.py", line 69, in from_pretrained
    peft_config = PeftConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "/home/ali/checkpoint/model-training/vnev/lib/python3.10/site-packages/peft/config.py", line 109, in from_pretrained
    raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'")
ValueError: Can't find 'adapter_config.json' at 'facebook/nllb-200-3.3B'
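
The error means PEFT is looking for adapter_config.json inside the base model repo (facebook/nllb-200-3.3B), because the base model id was passed as the first argument. AutoPeftModelForSeq2SeqLM.from_pretrained expects the adapter repository as its first argument; it then reads the base model id from the adapter's adapter_config.json. A minimal sketch, assuming "your-username/your-adapter-repo" is a placeholder for your actual adapter repo:

from peft import AutoPeftModelForSeq2SeqLM

# point at the adapter repository, which contains adapter_config.json;
# the base model (facebook/nllb-200-3.3B) recorded there is loaded automatically
adapter_model_id = "your-username/your-adapter-repo"  # hypothetical repo name

peft_model = AutoPeftModelForSeq2SeqLM.from_pretrained(adapter_model_id)

# merge the LoRA weights into the base model for plain inference
model = peft_model.merge_and_unload()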