Hello, this is a separate question from my last post, using the same example model:
When I repeat the same input to my models, they appear to have a nervous breakdown (repeated exclamation marks over and over; it is a little disturbing, to be honest). I have noticed this with two models now. Any thoughts on how I can fix this? Thank you for your time!
Hello,
You can make use of the temperature parameter at inference time to avoid repetition and add more randomness to your conversations.
I found a nice model card showing how to run inference with DialoGPT. Hope it helps.
There’s a nice blog post by Patrick that explains generative models.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_checkpoint = "microsoft/DialoGPT-medium"  # example DialoGPT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForCausalLM.from_pretrained(model_checkpoint)

for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids,
        max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,  # block any 3-gram from repeating
        do_sample=True,          # sample instead of greedy decoding
        top_k=100,
        top_p=0.7,
        temperature=0.8,
    )

    # pretty print the last output tokens from the bot
    print("Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))