Hello, I am struggling with generating a sequence of tokens using model.generate() with inputs_embeds.
For my research, I have to use inputs_embeds (word-embedding vectors) instead of input_ids (token indices) as the input to the GPT-2 model.
I want to use model.generate(), which is a convenient tool for generating a sequence of tokens, but it has no inputs_embeds argument. I tried to edit `transformers/generation_utils.py`, but it was not easy to figure out which lines I should change.
Is there any way to easily generate tokens with the same default hyperparameter settings as model.generate()? Any help would be appreciated.
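One workaround I have seen suggested is to skip model.generate() and write the greedy-decoding loop yourself: run a forward pass on inputs_embeds, take the argmax of the logits at the last position, look up that token's embedding, append it to the input, and repeat. Below is a minimal pure-Python sketch of that loop; `embedding_table` and `toy_model` are toy stand-ins of my own (in real code they would correspond to `model.get_input_embeddings()` and a call like `model(inputs_embeds=...)`), so treat this as a sketch of the pattern, not working GPT-2 code.

```python
# Toy sketch of greedy decoding driven by embeddings instead of token ids.
# In real transformers code the loop body would look roughly like:
#   out = model(inputs_embeds=embeds)
#   next_id = out.logits[:, -1, :].argmax(-1)
#   embeds = torch.cat([embeds, wte(next_id).unsqueeze(1)], dim=1)

VOCAB = 4
# One-hot "embeddings" standing in for the model's embedding matrix.
embedding_table = [[float(i == j) for j in range(VOCAB)] for i in range(VOCAB)]

def toy_model(inputs_embeds):
    """Stand-in for model(inputs_embeds=...): returns logits for the last position.
    This toy simply predicts (last token id + 1) mod VOCAB."""
    last = inputs_embeds[-1]
    last_id = max(range(VOCAB), key=lambda j: last[j])
    logits = [0.0] * VOCAB
    logits[(last_id + 1) % VOCAB] = 1.0
    return logits

def greedy_generate(inputs_embeds, max_new_tokens):
    embeds = list(inputs_embeds)
    generated = []
    for _ in range(max_new_tokens):
        logits = toy_model(embeds)                             # forward pass on embeddings
        next_id = max(range(VOCAB), key=lambda j: logits[j])   # greedy argmax
        generated.append(next_id)
        embeds.append(embedding_table[next_id])                # feed the new token's embedding back in
    return generated

print(greedy_generate([embedding_table[0]], 3))  # → [1, 2, 3]
```

The drawback is that this only gives greedy search; sampling, beam search, and the other generate() defaults (temperature, top-k, repetition penalty, etc.) would have to be reimplemented on top of the same loop.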