T5 transformer output fills in only four of the masks

    text = "<extra_id_0> cost of damage in Newton Stewart , one of the areas worst affected , is still being assessed <extra_id_1> work is ongoing in Hawick , but the council is still calling for the council to be compensated by a fee , rather than by an interest payment . <extra_id_2> the first time <extra_id_3> has been <extra_id_4> the last three months , for an issue being taken as an attack on <extra_id_5> . It came after the Tory MP for Canterbury Canterbury , Dr David Cresswell , said the government would be paying back for damage to the community centre on Sunday . The Department of the Environment has blamed the damage of the tornado upon the government but has not made any recommendations about how the government should <extra_id_6> . The Department of the Environment spent £18,200 on weather-related equipment <extra_id_7> spending nearly £300 on water pumps ."
    tokenizer = T5Tokenizer.from_pretrained("t5-large")
    model = T5ForConditionalGeneration.from_pretrained("t5-large")

    input_ids = tokenizer(text, return_tensors="pt").input_ids

    sequence_ids = model.generate(input_ids)
    sequences = tokenizer.batch_decode(sequence_ids)
    return sequences

The input has sentinel tokens up to <extra_id_7>, but the output only goes up to <extra_id_4>. Is there a parameter I can set so that the output is filled in for all the remaining masks as well? Thanks

I think the issue is just that you need to pass either max_length or max_new_tokens to the generate method in order to generate more tokens; by default, generate caps the output at around 20 tokens, which is only enough to cover the first few sentinel spans. Do you get the desired output if you try:

sequence_ids = model.generate(input_ids, max_new_tokens=200)

?
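If that works, you can also split the decoded output on the sentinel tokens to check which <extra_id_N> spans were actually filled. Here is a minimal sketch along those lines; the extract_span_fills helper name and the max_new_tokens=200 value are just illustrative choices, not part of the transformers API:

    import re

    from transformers import T5Tokenizer, T5ForConditionalGeneration

    def extract_span_fills(text, model_name="t5-large", max_new_tokens=200):
        tokenizer = T5Tokenizer.from_pretrained(model_name)
        model = T5ForConditionalGeneration.from_pretrained(model_name)

        input_ids = tokenizer(text, return_tensors="pt").input_ids
        # A larger token budget so generation does not stop before the later sentinels.
        sequence_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)

        # Keep special tokens so the sentinels can be used as delimiters.
        decoded = tokenizer.decode(sequence_ids[0], skip_special_tokens=False)

        # The decoder output looks like "<extra_id_0> span0 <extra_id_1> span1 ...".
        # Split on the sentinels and pair each one with the text that follows it.
        parts = re.split(r"(<extra_id_\d+>)", decoded)
        fills = {}
        for i in range(1, len(parts) - 1, 2):
            sentinel = parts[i]
            fills[sentinel] = parts[i + 1].replace("</s>", "").strip()
        return fills

Calling extract_span_fills(text) on the example above should return a dict keyed by the sentinels the model actually produced, which makes it easy to see whether all eight spans made it into the output.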