Is there any way I can get the output from the Whisper-Large LoraModel encoder?

I am trying to get the output from the encoder of the pretrained Whisper model, but I am running into an error. This is what I have tried:

from transformers import WhisperProcessor, WhisperForConditionalGeneration
from peft import prepare_model_for_int8_training
from peft import LoraConfig, PeftModel, LoraModel, get_peft_model
config = LoraConfig(r=32, lora_alpha=64, target_modules=["q_proj", "v_proj"], lora_dropout=0.05, bias="none")

# load model and processor
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2", load_in_8bit=True, device_map="auto")
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
model = prepare_model_for_int8_training(model)
model = get_peft_model(model, config)
encoder = model.encoder() ### getting error here

The error I am getting:

AttributeError: 'WhisperForConditionalGeneration' object has no attribute 'encoder'

Any kind of help would be greatly appreciated.

Thank you


I am having the same issue. Did you come across a solution?

In this case it becomes model.model.encoder, because WhisperForConditionalGeneration does not expose an encoder attribute directly: it wraps a WhisperModel in its .model attribute, and the encoder lives on that inner model. Also note the original line calls model.encoder() with parentheses; the encoder is a submodule (an attribute), not a method.
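A minimal sketch of grabbing encoder outputs. Two assumptions here: it uses model.get_encoder(), which seq2seq Transformers models provide and which also resolves through a PEFT wrapper (avoiding the model.model chain entirely), and it builds a tiny randomly initialized Whisper from a config so the sketch runs without downloading weights; in the original setup you would load "openai/whisper-large-v2" instead.

```python
import torch
from transformers import WhisperConfig, WhisperForConditionalGeneration

# Tiny random config so this runs without downloads; in practice use
# WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
config = WhisperConfig(
    d_model=64,
    encoder_layers=2, decoder_layers=2,
    encoder_attention_heads=2, decoder_attention_heads=2,
    encoder_ffn_dim=128, decoder_ffn_dim=128,
)
model = WhisperForConditionalGeneration(config)

# get_encoder() returns the encoder submodule; it forwards through
# attribute lookup, so it also works after get_peft_model() wraps the model.
encoder = model.get_encoder()

# The encoder takes log-mel features of shape (batch, num_mel_bins, 3000),
# i.e. what WhisperProcessor produces for 30 s of audio.
features = torch.randn(1, config.num_mel_bins, 3000)
with torch.no_grad():
    out = encoder(features)

# The conv front-end halves the 3000 frames to 1500 positions.
print(out.last_hidden_state.shape)  # torch.Size([1, 1500, 64])
```

The same call on the PEFT-wrapped model from the question (model.get_encoder()) sidesteps the AttributeError without having to count wrapper layers.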