I have trained a custom transformer model in PyTorch without using any of the Transformers APIs, and the checkpoint has been saved. How can I use the generate() method from GenerationMixin to auto-regressively generate tokens with my own model?
I think this could be solved by using the from_pretrained() method to load the checkpoint of my trained model. If so, how should I prepare the checkpoint?
Hi, I want to achieve the same thing. I have a custom encoder (PyTorch) and I want to use one of the pretrained decoders from Hugging Face. I managed to do this with a DecoderWrapper and train it with my own training loop via the PyTorch APIs. However, I also want to be able to leverage the generate() (i.e. decoding) method from GenerationMixin, and since my model inherits directly from nn.Module it doesn't have it.
It might be possible to inherit from PreTrainedModel and PretrainedConfig from Hugging Face and rewire the model, but I'm not sure what the best practice here would be.
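Something like the following is what I had in mind, i.e. wrapping the existing module in a PreTrainedModel subclass so that generate() becomes available. This is only an untested sketch: MyCustomDecoder, MyConfig and MyModelForCausalLM are made-up names standing in for the real model, and depending on the transformers version GenerationMixin may also need to be inherited explicitly (done below just in case).

```python
import torch
import torch.nn as nn
from transformers import GenerationMixin, PretrainedConfig, PreTrainedModel
from transformers.modeling_outputs import CausalLMOutput


class MyCustomDecoder(nn.Module):
    """Placeholder for the existing custom decoder; the real one comes from its own checkpoint."""

    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lm_head = nn.Linear(hidden_size, vocab_size)

    def forward(self, input_ids):
        # Returns logits of shape (batch, seq_len, vocab_size)
        return self.lm_head(self.embed(input_ids))


class MyConfig(PretrainedConfig):
    model_type = "my_custom_decoder"  # made-up model type

    def __init__(self, vocab_size=32000, hidden_size=512, **kwargs):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        super().__init__(**kwargs)


class MyModelForCausalLM(PreTrainedModel, GenerationMixin):
    config_class = MyConfig

    def __init__(self, config):
        super().__init__(config)
        self.decoder = MyCustomDecoder(config.vocab_size, config.hidden_size)

    def forward(self, input_ids, attention_mask=None, **kwargs):
        # generate() only needs next-token logits; no KV cache in this sketch.
        logits = self.decoder(input_ids)
        return CausalLMOutput(logits=logits)

    def prepare_inputs_for_generation(self, input_ids, **kwargs):
        # Called by generate() before every forward pass; without a cache we
        # simply feed the whole sequence back in at each step.
        return {"input_ids": input_ids}
```

As far as I can tell, generate() mainly relies on forward() returning an output with a logits field of shape (batch, seq_len, vocab_size) and on prepare_inputs_for_generation() assembling the inputs for each decoding step.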
I haven't thought much about just mapping the checkpoints over via from_pretrained(), but I suppose it would look roughly like the sketch below.
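Again untested; the checkpoint path, directory name, and token ids are placeholders:

```python
config = MyConfig(vocab_size=32000, hidden_size=512, pad_token_id=0, eos_token_id=2)
model = MyModelForCausalLM(config)

# Map the existing PyTorch checkpoint onto the wrapped module
# (the key names have to line up with the wrapper's submodule).
# state_dict = torch.load("my_checkpoint.pt", map_location="cpu")
# model.decoder.load_state_dict(state_dict)

# Optionally save once in the Hugging Face layout so from_pretrained() works afterwards.
# model.save_pretrained("my_hf_checkpoint")
# model = MyModelForCausalLM.from_pretrained("my_hf_checkpoint")

prompt = torch.tensor([[1, 5, 7]])  # placeholder token ids
generated = model.generate(prompt, max_new_tokens=10, do_sample=False)
print(generated)
```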
Any assistance appreciated. Thanks!