Text generation output keeps repeating the input sentences. Am I missing something?

I fine-tuned gpt2 on a dataset and saved it as a local model.
When I use it, it keeps repeating the input:

import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="./boba",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
outputs = generator(
    "Hello, I'm a language model,",
    max_length=30,
    num_return_sequences=5,
    repetition_penalty=2.0,
)

for o in outputs:
    print(o)

Have you found any solution? I’m facing the same problem, except I’m using Vicuna instead.

Not sure. I’ve changed the model to DialoGPT.

Hi,

By default, greedy decoding is used, so text generation is deterministic (and prone to repetition). You can pass do_sample=True to the generator to get non-deterministic behaviour.

See this blog post for an overview: How to generate text: using different decoding methods for language generation with Transformers
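For example, a minimal sketch of the original snippet with sampling enabled (using the stock gpt2 checkpoint as a stand-in for the local "./boba" model; the temperature and top_p values are illustrative, not prescriptive):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# do_sample=True switches from greedy decoding to sampling,
# so each returned sequence can differ.
# max_new_tokens counts only generated tokens, unlike max_length,
# which includes the prompt.
outputs = generator(
    "Hello, I'm a language model,",
    do_sample=True,
    temperature=0.7,   # lower = more conservative sampling
    top_p=0.9,         # nucleus sampling cutoff
    max_new_tokens=30,
    num_return_sequences=5,
)

for o in outputs:
    print(o["generated_text"])
```

Note that each output still begins with the prompt by default; only the continuation after it is sampled.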