Trainable weights in AutoModel and comparison with LoRA

Hi all,

If I use RobertaForSequenceClassification or AutoModelForSequenceClassification,
are all the weights trained, or is only the new classification head trained?
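
Here is a minimal sketch of how I am checking which parameters get gradients (num_labels=2 is just a placeholder for my task):

```python
from transformers import AutoModelForSequenceClassification

# num_labels=2 is a placeholder; the head is freshly initialized either way
model = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / total: {total:,}")
# If trainable == total, the whole encoder is being fine-tuned,
# not just the new classification head.
```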

Also,

I am noticing that for roberta-large, PEFT LoRA underperforms full fine-tuning
with AutoModelForSequenceClassification on GLUE benchmark data (my rough LoRA setup is sketched below the results).

PEFT LoRA, lr=2e-4, epochs=10: {'accuracy': 0.8897058823529411, 'f1': 0.9168207024029574}

Full fine-tuning with AutoModelForSequenceClassification, lr=2e-5, epochs=10: {'accuracy': 0.9044117647058824, 'f1': 0.9312169312169313}
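
For reference, this is roughly my LoRA setup. The lr and epochs match the run above; r, lora_alpha, and lora_dropout here are placeholder values, not necessarily the exact ones I used:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)

# Placeholder adapter hyperparameters, shown for illustration
peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # also keeps the classifier head trainable
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()  # only LoRA matrices + classifier get gradients
```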

Is something wrong, or is LoRA only useful for larger models?