How to use inputs_embeds in generate()?

I have fine-tuned a T5 model to accept a sequence of custom embeddings as input. That is, I pass inputs_embeds instead of input_ids to the model’s forward method.

However, I’m unable to use inputs_embeds with T5ForConditionalGeneration.generate(). It complains that bos_token_id has to be given when input_ids is not provided, but even if I provide a bos_token_id, it still doesn’t run. I considered running the encoder separately, but there is no way to pass the encoder output to generate() either.

It would be very useful if generate() could accept inputs_embeds or encoder output, so that we could use the decoding strategies provided by GenerationMixin.


Not an expert, but I just followed the example from this notebook: https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb, and it works for me.

I guess you can use the model_kwargs argument of .generate(): put encoder_outputs into model_kwargs, where encoder_outputs is the encoder output of an encoder-decoder model (like T5 or BART). I looked at the code and think this is the way.

Hi, did you solve this problem? I have run into the same one, but it seems there is still no solution.

Just encountered this problem too… does anyone have a solution? Thanks!

I got something like this working. It runs, but I have yet to see meaningful results via the “custom embedding → T5 generate” route… it would be great if someone who has managed to train with inputs_embeds properly could share some advice…

output_sequences = model.t5_model.generate(inputs_embeds=i_embed,
                                           num_beams=2,
                                           min_length=0,
                                           max_length=128,
                                           repetition_penalty=2.5,
                                           length_penalty=1.0,
                                           early_stopping=True)