T5-Base not TorchScriptable

Hi
I have a fine-tuned model based on T5-Base, but in order to use it in production I need to compile it with TorchScript. It seems that the T5-Base model is not scriptable, as can be seen from the following error:
Code:

import torch
from transformers import (
    T5ForConditionalGeneration,
)

# torchscript=True makes the model return plain tuples instead of
# ModelOutput objects, as required for TorchScript
model = T5ForConditionalGeneration.from_pretrained('t5-base', torchscript=True)
torch.jit.script(model)

The error message:

torch.jit.frontend.UnsupportedNodeError: function definitions aren't supported:
  File "/home/pya/anaconda3/envs/ecg_env/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 984
                    use_cache = False
    
                def create_custom_forward(module):
                ~~~ <--- HERE
                    def custom_forward(*inputs):
                        return tuple(module(*inputs, use_cache, output_attentions))

I also tried to fix a few of these issues myself, but the problem goes deeper: there are many incompatibilities with TorchScript. Is there a better way to do this?
Thanks a lot in advance!
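In case it helps anyone hitting the same error: `torch.jit.script` chokes on the nested `create_custom_forward` closures, but `torch.jit.trace` only records tensor operations, so tracing a single forward pass tends to work where scripting fails. Below is a minimal sketch of that approach. It uses a tiny, randomly initialized T5 (the small `T5Config` values are arbitrary, chosen only so the example runs quickly); for a real deployment you would load your fine-tuned checkpoint with `from_pretrained(..., torchscript=True)` instead. Note that `generate()` itself has dynamic control flow and cannot be traced, so the decoding loop has to stay in Python and call the traced forward step.

```python
import torch
from transformers import T5Config, T5ForConditionalGeneration

# Tiny, randomly initialized T5 so the sketch runs quickly.
# torchscript=True makes forward() return tuples instead of ModelOutput
# objects; use_cache=False keeps the traced graph to a single forward step.
config = T5Config(
    d_model=64, d_ff=128, d_kv=16, num_layers=2, num_heads=4,
    vocab_size=512, use_cache=False, torchscript=True,
)
model = T5ForConditionalGeneration(config).eval()

# Dummy inputs fixing the shapes the trace will be specialized to
input_ids = torch.randint(0, config.vocab_size, (1, 8))
attention_mask = torch.ones_like(input_ids)
decoder_input_ids = torch.zeros((1, 1), dtype=torch.long)  # pad token as decoder start

# Trace one encoder+decoder forward pass (positional args match
# T5ForConditionalGeneration.forward: input_ids, attention_mask, decoder_input_ids)
traced = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))

# First element of the output tuple is the LM logits
logits = traced(input_ids, attention_mask, decoder_input_ids)[0]
print(logits.shape)  # torch.Size([1, 1, 512])
```

The trade-off is that a traced module is specialized to the input shapes used at trace time, so in practice you either pad inputs to a fixed length or trace with the largest shapes you expect to serve.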

Did you manage to TorchScript your t5-base model?
I have a similar situation: a fine-tuned t5-base model with great performance, but I need to make it deployable (i.e. faster). Did you perhaps find an alternative solution for deploying the model?

You can use this for faster generation and deployment: PaddleNLP/t5_sample.py at develop · PaddlePaddle/PaddleNLP (github.com)

@gongel Is it faster than the plain PyTorch implementation? I can't understand how it works, since it is documented in Chinese…