CLIP-like models do not support .add_adapter method

Hi,

The following works for me:

from transformers import AutoModelForImageClassification
from peft import LoraConfig

config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["query", "value"],  # attention projections in ViT
    lora_dropout=0.1,
    bias="none",
    modules_to_save=["classifier"],  # keep the classification head trainable
)
model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")

model.add_adapter(config)

# Inspect the injected LoRA parameters
for name, param in model.named_parameters():
    print(name, param.shape)

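For intuition about the `r` and `lora_alpha` values in the config above, here is a toy NumPy sketch of the update LoRA computes (an illustration only, not the PEFT internals): a frozen weight `W` is augmented with a low-rank term scaled by `lora_alpha / r`, and `B` is zero-initialized so the adapted layer starts out identical to the base layer.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, lora_alpha = 8, 8, 4, 4

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # trainable, random init
B = np.zeros((d_out, r))                # trainable, zero init

x = rng.standard_normal(d_in)

base = W @ x
adapted = base + (lora_alpha / r) * (B @ (A @ x))

# With B zero-initialized, the adapter is a no-op before training.
print(np.allclose(base, adapted))  # True
```

Only `A` and `B` are trained, which is why so few parameters show up as trainable after `add_adapter`.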
The `add_adapter` method works thanks to the PEFT integration in Transformers: PEFT integrations.

See also this tutorial: Image classification using LoRA.