AutoNLP - tweaking model's output

Hi there, wondering if anyone has advice, or knows of any documentation, on tweaking the output of fine-tuned AutoNLP models?

I recently made this auto-headline writer: Headline_writer (a Hugging Face Space by chinhon).

It was based on a BART model, fine-tuned on AutoNLP with about 4-5 years’ worth of news stories. I’ve been trying to get the model to output longer headlines, but with limited success.

The answers I’ve found on GitHub and Stack Overflow point to changing max_length, min_length, length_penalty, etc. I tweaked those, but the impact was barely noticeable.
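For reference, here’s a minimal sketch of the kind of length settings I’ve been experimenting with (the values are illustrative, not the ones I actually use); as far as I understand, these parameters take effect when passed to model.generate() itself:

```python
# Example generation settings (illustrative values only).
# min_length / max_length are measured in tokens; with beam search,
# a length_penalty > 1.0 nudges the decoder toward longer outputs.
gen_kwargs = {
    "max_length": 96,
    "min_length": 24,
    "num_beams": 4,
    "length_penalty": 2.0,
    "early_stopping": True,
}

# These would then be splatted into the generate call, e.g.:
# raw_write = model.generate(**batch, **gen_kwargs)
```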

Am I missing something, or is this something “locked in” as a result of the automated fine-tuning process on AutoNLP?

Here’s the code I’m using for the app:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "chinhon/headline_writer"

def headline_writer(text):
    # clean_text is my own preprocessing helper, defined elsewhere in the app
    input_text = clean_text(text)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    with tokenizer.as_target_tokenizer():
        batch = tokenizer(
            input_text,
            truncation=True,
            max_length=1024,
            padding="longest",
            return_tensors="pt",
        )

    raw_write = model.generate(**batch)
    headline = tokenizer.batch_decode(
        raw_write, skip_special_tokens=True, min_length=200, length_penalty=100.1
    )
    return headline[0]
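One thing I’ve also been checking, in case AutoNLP baked generation defaults into the model itself: as I understand it, generate() falls back to whatever max_length, min_length, length_penalty, etc. are stored in the model’s config when you don’t pass them explicitly. A quick sketch of how to inspect those values (using a stock BartConfig here so it runs offline; the real values for chinhon/headline_writer would come from its config.json on the Hub via AutoConfig.from_pretrained):

```python
from transformers import BartConfig

# Stand-in config for illustration; loading
# AutoConfig.from_pretrained("chinhon/headline_writer") would show
# the generation defaults AutoNLP actually saved with the model.
cfg = BartConfig()

# generate() falls back to these config values when no overrides are passed
print(cfg.max_length, cfg.min_length, cfg.length_penalty)
```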

Would appreciate any advice on this.
cc @abhishek

Chin Hon