Warm-started encoder-decoder models (Bert2Gpt2 and Bert2Bert)

One more question, please.

I pushed the model to the hub:
Ayham/roberta_gpt2_summarization_cnn_dailymail

It gives great results; I really appreciate your assistance.

But when I try to use the model's Inference API, it gives me the following message:

```
Can't load tokenizer using from_pretrained, please update its configuration: Can't load tokenizer for 'Ayham/roberta_gpt2_summarization_cnn_dailymail'. Make sure that:
- 'Ayham/roberta_gpt2_summarization_cnn_dailymail' is a correct model identifier listed on 'https://huggingface.co/models' (make sure 'Ayham/roberta_gpt2_summarization_cnn_dailymail' is not a path to a local directory with something else, in that case)
- or 'Ayham/roberta_gpt2_summarization_cnn_dailymail' is the correct path to a directory containing relevant tokenizer files
```

Why?!

Can I save the tokenizer files explicitly? If yes, which one should I save: the encoder tokenizer (RoBERTa) or the decoder one (GPT-2)?
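For reference, this is roughly what I have in mind (a minimal sketch; I'm assuming `push_to_hub` is the right way to upload tokenizer files to the existing repo, and I don't know which of the two tokenizers the API expects):

```python
from transformers import AutoTokenizer

# Sketch: push one tokenizer's files (vocab, merges, tokenizer config)
# to the model repo so the Inference API finds them next to the weights.

# Option A: the encoder tokenizer (RoBERTa)?
encoder_tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder_tokenizer.push_to_hub("Ayham/roberta_gpt2_summarization_cnn_dailymail")

# Option B: or the decoder tokenizer (GPT-2)?
# decoder_tokenizer = AutoTokenizer.from_pretrained("gpt2")
# decoder_tokenizer.push_to_hub("Ayham/roberta_gpt2_summarization_cnn_dailymail")
```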

If there is another way to get the Inference API to return results, please help me.

Thank you in advance :slight_smile: