Assume that I’m using the PEGASUS model for conditional generation.
Is it possible to run the encoder and decoder parts separately, without manually writing a generation script?
For the encoder part, I know I can do something like this to get the last hidden state:
from transformers import PegasusForConditionalGeneration

model = PegasusForConditionalGeneration.from_pretrained(...)
# `input` is the dict returned by the tokenizer; this runs only the encoder
encoder_output = model.model.encoder(**input)["last_hidden_state"]
I then want to use encoder_output for something else and, optionally, run the decoder of the model for sequence generation. The rest of the model consists of the decoder and the lm_head for token generation, but I haven't found an easy way to run just that generation step. I don't want to simply call model.generate(...) again, because that would rerun the encoder, which is a waste of time.
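Concretely, what I'm hoping for is something along these lines. This is only a rough sketch of the workflow I have in mind: the checkpoint name google/pegasus-xsum is just an example, and I don't actually know whether generate() accepts a precomputed encoder_outputs argument like this, which is exactly what I'm asking about.

import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
from transformers.modeling_outputs import BaseModelOutput

model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")
tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")

input = tokenizer("Some long article to summarize.", return_tensors="pt")

# Run the encoder once and keep its last hidden state
with torch.no_grad():
    last_hidden = model.model.encoder(**input)["last_hidden_state"]

# ... use last_hidden for something else here ...

# Hoped-for usage: hand the precomputed encoder output back to generate()
# so that only the decoder and lm_head run (not sure this kwarg is supported)
encoder_outputs = BaseModelOutput(last_hidden_state=last_hidden)
generated = model.generate(
    attention_mask=input["attention_mask"],
    encoder_outputs=encoder_outputs,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))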
Is this easily achievable with existing methods in the transformers library? Thanks!