Multi-turn dialogue using DialoGPT with the Hosted Inference API

I'm trying to use DialoGPT-large via the Hosted Inference API for a chatbot demo, but I'm having trouble generating decent multi-turn dialogue.

As an example, when I post the following to the API endpoint:

I heard you won the cricket match. <|endoftext|> I did! <|endoftext|> Awesome. Who did you play against? <|endoftext|> I played against the Aussies. <|endoftext|> Wow ! Was it a tough game? <|endoftext|> It was a tough game. It went on till the last over. They almost won. <|endoftext|> Where was the match? <|endoftext|>

The API just spits the prompt back out at me:

I heard you won the cricket match. <|endoftext|> I did! <|endoftext|> Awesome. Who did you play against? <|endoftext|> I played against the Aussies. <|endoftext|> Wow ! Was it a tough game? <|endoftext|> It was a tough game. It went on till the last over. They almost won. <|endoftext|> Where was the match? <|endoftext|>
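For reference, this is roughly how I'm building and sending the request. It's a sketch: the endpoint URL follows the Inference API pattern for this model, and the token is a placeholder you'd replace with your own.

```python
import json
import urllib.request

# Placeholder values: the URL follows the Inference API pattern for
# DialoGPT-large; the token below is NOT real and must be replaced.
API_URL = "https://api-inference.huggingface.co/models/microsoft/DialoGPT-large"
API_TOKEN = "hf_xxx"  # placeholder

EOS = "<|endoftext|>"

def build_prompt(turns):
    """Join dialogue turns with the end-of-text token (no surrounding spaces)."""
    return EOS.join(turns) + EOS

def query(prompt):
    """POST the prompt to the Hosted Inference API (needs a valid token)."""
    body = json.dumps({"inputs": prompt}).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

history = [
    "I heard you won the cricket match.",
    "I did!",
    "Awesome. Who did you play against?",
]
prompt = build_prompt(history)
# query(prompt) would send the request; not called here without a real token.
```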

This blog post shows someone getting meaningful results from exactly this prompt: https://medium.com/datadriveninvestor/a-simple-contextual-chatbot-to-predict-an-reply-with-pre-trained-dialogpt-model-from-huggingface-f681b550cd60.

Any guidance as to where I’m going wrong would be really appreciated.

Try without the spaces. Works for me.

Hmm, I gave that a go but no luck. I assume you mean no spaces between the end-of-text tokens and the text itself?

Hi @anthonyralston, from the DialoGPT paper, section 3.1:

We first concatenate all dialog turns within a dialogue session into a long text x1, · · · , xN (N is the sequence length), ended by the end-of-text token.

So I don't think you need to add the eos tokens yourself. Just concatenate your history and feed it to the model; that is how the model was trained.
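In other words, the request body would carry the raw concatenated history. A sketch of what I mean (not tested against the API; the joining with single spaces is my assumption):

```python
# Per the suggestion above: concatenate the dialogue history as plain text
# and let the model's tokenizer handle the end-of-text token itself.
history = [
    "I heard you won the cricket match.",
    "I did!",
    "Awesome. Who did you play against?",
    "I played against the Aussies.",
]
prompt = " ".join(history)  # no explicit <|endoftext|> separators
payload = {"inputs": prompt}
```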