How to separately use T5 decoder

I am working on a task in which I need to modify the encoder's output before decoding.
What I would like to do is roughly this:

input_ids = tokenizer("i am trying hard!", return_tensors="pt").input_ids
last_hidden_state = model.encoder(input_ids=input_ids).last_hidden_state
modified_last_hidden_state = modify(last_hidden_state)
outputs = model.decoder(modified_last_hidden_state)
output_sequence = tokenizer.decode(outputs)

I think model.decoder() doesn't actually work the way I want when called like this.
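One alternative that looks promising: generate() accepts precomputed encoder states through its encoder_outputs argument, so the decoding side could be driven from the modified states directly. A minimal sketch, assuming t5-small and a recent transformers version, with modify() as a placeholder for the actual transformation:

import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

def modify(hidden):
    # placeholder: apply the actual transformation here
    return hidden

input_ids = tokenizer("i am trying hard!", return_tensors="pt").input_ids
with torch.no_grad():
    last_hidden_state = model.encoder(input_ids=input_ids).last_hidden_state
    # generate() skips its own encoder pass when encoder_outputs is supplied
    encoder_outputs = BaseModelOutput(last_hidden_state=modify(last_hidden_state))
    outputs = model.generate(encoder_outputs=encoder_outputs, max_new_tokens=32)
output_sequence = tokenizer.decode(outputs[0], skip_special_tokens=True)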

Replying to myself: I think this approach works, since the loss and hidden states come out exactly the same as in the standard forward pass; I will test the full training loop later.
The separated process:
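A minimal sketch along these lines, assuming t5-small. It mirrors what T5ForConditionalGeneration.forward does internally, and the assert at the end is the equality described above (it holds with an identity modify()):

import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()  # no dropout, deterministic

batch = tokenizer("i am trying hard!", return_tensors="pt")
labels = tokenizer("ich bemuehe mich sehr!", return_tensors="pt").input_ids

# encoder pass; modify() would go right after this line
hidden = model.encoder(
    input_ids=batch.input_ids, attention_mask=batch.attention_mask
).last_hidden_state

# decoder pass: teacher forcing on right-shifted labels,
# cross-attending to the (possibly modified) encoder states
decoder_input_ids = model._shift_right(labels)
sequence_output = model.decoder(
    input_ids=decoder_input_ids,
    encoder_hidden_states=hidden,
    encoder_attention_mask=batch.attention_mask,
).last_hidden_state

# T5 rescales before the LM head when the embeddings are tied
if model.config.tie_word_embeddings:
    sequence_output = sequence_output * model.model_dim**-0.5
lm_logits = model.lm_head(sequence_output)
loss = torch.nn.functional.cross_entropy(
    lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1), ignore_index=-100
)

# sanity check: identical to the standard forward pass
standard = model(input_ids=batch.input_ids, attention_mask=batch.attention_mask, labels=labels)
assert torch.allclose(loss, standard.loss)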

Have you found anything on this?
I also want to use the encoder and decoder separately.
My task involves passing the tokenized input ids to the encoder to get the last_hidden_state, passing those embeddings to the decoder to get output tokens, and then decoding those tokens, roughly as in the loop sketched below.
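Concretely, a minimal greedy decoding loop over the separated decoder would look like this (a sketch assuming t5-small; for simplicity it recomputes self-attention at every step with no KV cache, whereas model.generate() does all of this with many more options):

import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

enc = tokenizer("translate English to German: I am trying hard!", return_tensors="pt")

# start from the decoder start token (the pad token for T5) and
# greedily append the argmax token until end-of-sequence
decoder_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)
with torch.no_grad():
    encoder_hidden = model.encoder(**enc).last_hidden_state  # modify these embeddings as needed
    for _ in range(32):
        out = model.decoder(
            input_ids=decoder_ids,
            encoder_hidden_states=encoder_hidden,
            encoder_attention_mask=enc.attention_mask,
        )
        # t5-small ties embeddings, so rescale before the LM head
        logits = model.lm_head(out.last_hidden_state[:, -1] * model.model_dim**-0.5)
        next_id = logits.argmax(-1, keepdim=True)
        decoder_ids = torch.cat([decoder_ids, next_id], dim=-1)
        if next_id.item() == model.config.eos_token_id:
            break

print(tokenizer.decode(decoder_ids[0], skip_special_tokens=True))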

Thank you for sharing.

Any update on this? I mean, does it work like the standard way of fine-tuning?
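For fine-tuning specifically, it may help that T5ForConditionalGeneration.forward accepts precomputed encoder_outputs, so the standard loss computation can be kept unchanged. A sketch, reusing model, tokenizer, batch, labels, and modify() from the snippets above:

from transformers.modeling_outputs import BaseModelOutput

model.train()  # enable dropout for fine-tuning

enc = model.encoder(input_ids=batch.input_ids, attention_mask=batch.attention_mask)
modified = BaseModelOutput(last_hidden_state=modify(enc.last_hidden_state))

# the model skips its own encoder when encoder_outputs is given,
# so the loss matches standard fine-tuning, just on the modified states
outputs = model(
    encoder_outputs=modified,
    attention_mask=batch.attention_mask,
    labels=labels,
)
outputs.loss.backward()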

Hi, is there any update on this?