Hello!
I recently figured out a way to prompt summarization from encoder/decoder models like BART using the generate() function. Normally the only control over prompting that this function gives us appears to be the starting token for generation (decoder_start_token_id).
However, we can see that the GenerationMixin does accept a **kwargs
which gets forwarded to the underlying model!
In BART’s case, the forward method takes another parameter, decoder_input_ids, which looks like it lets you supply a custom sequence of token ids to the decoder (effectively seeding its query vectors) for that forward call.
Using these together, we can make generate() start the beam search from both the encoded input and a prompt to the decoder:
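For concreteness, here is a minimal sketch of what I mean, assuming the Hugging Face transformers library; the model name, article, and prompt text are just illustrative placeholders:

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer


def build_decoder_prompt(decoder_start_id, bos_id, prompt_ids):
    # BART's decoder normally starts generation from </s><s>; we append
    # our prompt tokens after that prefix so beam search continues from them.
    return [decoder_start_id, bos_id] + list(prompt_ids)


def prompted_summary(model_name, article, prompt):
    tokenizer = BartTokenizer.from_pretrained(model_name)
    model = BartForConditionalGeneration.from_pretrained(model_name)

    inputs = tokenizer(article, return_tensors="pt")
    # Tokenize the prompt without special tokens so we control the prefix ourselves.
    prompt_ids = tokenizer(prompt, add_special_tokens=False).input_ids
    decoder_input_ids = torch.tensor(
        [build_decoder_prompt(model.config.decoder_start_token_id,
                              tokenizer.bos_token_id, prompt_ids)]
    )
    summary_ids = model.generate(
        inputs.input_ids,
        decoder_input_ids=decoder_input_ids,  # forwarded through **kwargs to forward()
        num_beams=4,
        max_length=60,
    )
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)


# Usage (downloads the model):
# prompted_summary("facebook/bart-large-cnn", some_article_text, "The study shows")
```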
My question is: is this process doing what I think it is? The results seem to make sense, and I can loosely verify the method by prompting with only "<s>", which gives output identical to the defaults for generate(). Is this a good way to prompt summarization with the BART model?
Thanks in advance!