Perplexity of BlenderBot

Hello!

I have implemented Longformer’s self-attention in BlenderBot small and fine-tuned it. My question is: can I apply this guide about the perplexity of fixed-length models (Perplexity of fixed-length models — transformers 4.5.0.dev0 documentation) to BlenderBot, even though it is a different Transformer than GPT-2?
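
For reference, this is roughly what the guide does for GPT-2 (a decoder-only model), simplified; the `gpt2` checkpoint, window size, and stride are just the guide's defaults:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

encodings = tokenizer("some long evaluation text ...", return_tensors="pt")
max_length = model.config.n_positions  # 1024 for GPT-2
stride = 512

nlls = []
for i in range(0, encodings.input_ids.size(1), stride):
    begin_loc = max(i + stride - max_length, 0)
    end_loc = min(i + stride, encodings.input_ids.size(1))
    trg_len = end_loc - i  # tokens actually scored in this window
    input_ids = encodings.input_ids[:, begin_loc:end_loc]
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100  # ignore the overlapping context tokens

    with torch.no_grad():
        outputs = model(input_ids, labels=target_ids)
        nlls.append(outputs.loss * trg_len)

ppl = torch.exp(torch.stack(nlls).sum() / end_loc)
```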

In addition, I am building a chatbot, so I have used BlenderbotSmallForConditionalGeneration rather than the CausalLM variant. Would it make a difference which one I use for evaluating perplexity (both have language modelling heads)? A sketch of what I had in mind is below.
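
To make it concrete, this is the kind of thing I had in mind for the seq2seq model; it is only a rough sketch, using the stock `facebook/blenderbot_small-90M` checkpoint and a toy context/reply pair as stand-ins for my fine-tuned model and evaluation data:

```python
import torch
from transformers import (
    BlenderbotSmallForConditionalGeneration,
    BlenderbotSmallTokenizer,
)

name = "facebook/blenderbot_small-90M"  # placeholder for my fine-tuned checkpoint
tokenizer = BlenderbotSmallTokenizer.from_pretrained(name)
model = BlenderbotSmallForConditionalGeneration.from_pretrained(name)
model.eval()

context = "hello, how are you today?"          # encoder input (dialogue history)
reply = "i am doing well, thanks for asking."  # decoder target

inputs = tokenizer(context, return_tensors="pt")
labels = tokenizer(reply, return_tensors="pt").input_ids

with torch.no_grad():
    # The seq2seq LM head scores the reply tokens given the encoder input;
    # loss is the mean negative log-likelihood per target token.
    loss = model(**inputs, labels=labels).loss

ppl = torch.exp(loss)  # per-token perplexity of the reply
print(ppl.item())
```

In other words, I would just exponentiate the mean cross-entropy over the reply tokens, without any sliding window on the decoder side, and I am not sure whether that is comparable to the fixed-length procedure from the guide.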

Thank you so much! :hugs:

My question is: what should I modify in order to adapt that guide's approach to an encoder-decoder model like BlenderBot?