from transformers import pipeline

# gpt2 is loaded here but unused below; t5-small does the text2text generation
generator_gpt2 = pipeline('text-generation', model='gpt2')
text2text_generator = pipeline("text2text-generation", model="t5-small")

# read the source sentences, one per line
with open('./TURK_Original.txt', 'r') as my_file:
    data = my_file.read()
splitting = data.split("\n")

# simplify the first sentence
response = text2text_generator(
    f"Simplify the following text: {splitting[0]}",
    max_length=30,
    num_return_sequences=1,
)[0]['generated_text']

# append the result to the output file
with open('./TURK_GPT2.txt', 'a') as f:
    f.write(response.rstrip('\r\n'))
    f.write('\n')
I’m trying to use Hugging Face’s gpt2 and t5-small models to do text simplification, but I’m getting pretty bad results. My prompt is “Simplify the following text: [text]”, but the model just repeats the input back instead of actually simplifying it. Does anyone have suggestions? Should I give it a few examples so it can do few-shot learning instead of zero-shot learning? Why is it not simplifying at all?
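For reference, here is roughly what I imagine the few-shot version of the prompt would look like. The example pairs are made up for illustration, and I'm not sure t5-small is strong enough to pick up on the pattern, so this is just a sketch of the idea:

```python
# Hypothetical few-shot prompt: prepend a couple of (complex, simple) example
# pairs before the sentence we actually want simplified. The example pairs
# below are invented for illustration.
examples = [
    ("The committee reached a consensus subsequent to deliberation.",
     "The committee agreed after talking."),
    ("He endeavored to ameliorate the situation.",
     "He tried to make things better."),
]

def build_few_shot_prompt(sentence):
    parts = []
    for complex_text, simple_text in examples:
        parts.append(f"Simplify: {complex_text}\nSimple: {simple_text}")
    # the sentence we want the model to complete
    parts.append(f"Simplify: {sentence}\nSimple:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt("The precipitation commenced abruptly.")
# prompt would then be passed to the pipeline in place of the zero-shot prompt
```

Would feeding something like this to the pipeline instead of the single-instruction prompt help?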