Adding a new layer to T5EncoderModel

I need to add a new layer at the end of the T5 encoder for sequence classification. You can find my code below.

    class CustomT5(nn.Module):
        def __init__(self, checkpoint="t5-small", size_last_hidden_state=0, num_labels=1):
            super(CustomT5, self).__init__()
            self.T5 = T5EncoderModel.from_pretrained(checkpoint)  # checkpoint: t5-small

            ## new layer
            self.linear = nn.Linear(size_last_hidden_state, num_labels)  # T5EncoderModel(t5-small) last_hidden_state shape = (1, 15, 512)
            self.sigmoid = nn.Sigmoid()

        def forward(self, size_last_hidden_state, input_ids=None):
            output = self.T5(input_ids=input_ids)
            last_hidden_state = self.linear(output.last_hidden_state.view(size_last_hidden_state))
            label = self.sigmoid(last_hidden_state)
            return label

The problem is that I need to know the size of the encoder's `last_hidden_state`, which depends on the sequence length, so the code doesn't work properly. Could you help me with that?
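One way around this (a sketch, not the only option): size the linear layer by the encoder's hidden dimension instead of the full flattened output. `last_hidden_state` has shape `(batch, seq_len, d_model)`, and `d_model` (512 for t5-small) is available from `self.T5.config` at init time, so the layer size no longer depends on sequence length. Pooling over the sequence dimension then produces a fixed-size vector for any input length. Mean pooling is an assumption here; taking the first token's state would also work.

```python
import torch
import torch.nn as nn
from transformers import T5EncoderModel

class CustomT5(nn.Module):
    def __init__(self, checkpoint="t5-small", num_labels=1):
        super().__init__()
        self.T5 = T5EncoderModel.from_pretrained(checkpoint)
        # d_model is the hidden size (512 for t5-small); it is fixed and
        # known at init time, unlike the sequence length
        self.linear = nn.Linear(self.T5.config.d_model, num_labels)
        self.sigmoid = nn.Sigmoid()

    def forward(self, input_ids=None, attention_mask=None):
        output = self.T5(input_ids=input_ids, attention_mask=attention_mask)
        hidden = output.last_hidden_state  # (batch, seq_len, d_model)
        if attention_mask is not None:
            # mean-pool over real tokens only, ignoring padding positions
            mask = attention_mask.unsqueeze(-1).float()
            pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
        else:
            pooled = hidden.mean(dim=1)  # (batch, d_model)
        return self.sigmoid(self.linear(pooled))

# usage (sketch): model = CustomT5("t5-small")
#                 probs = model(input_ids=ids, attention_mask=mask)
```

With this, `forward` no longer needs a `size_last_hidden_state` argument at all, and the output has shape `(batch, num_labels)` with values in (0, 1).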