Fairseq Roberta to Transformers: torch.nn.modules.module.ModuleAttributeError: 'RobertaModel' object has no attribute 'decoder'

I trained a RoBERTa model with fairseq and I am trying to convert it with the script convert_roberta_original_pytorch_checkpoint_to_pytorch and its function convert_roberta_checkpoint_to_pytorch.
I point it at the checkpoints directory from the fairseq training, with checkpoint_best.pt renamed to model.pt. The dict.txt is also in that directory.
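To make sure the checkpoint itself is fine, I load it with fairseq directly first (a rough sketch; the paths are placeholders for my setup):

from fairseq.models.roberta import RobertaModel

# Sanity check: load the trained checkpoint with fairseq itself.
# The checkpoints directory is assumed to contain model.pt and dict.txt.
roberta = RobertaModel.from_pretrained("/path/to/checkpoints", checkpoint_file="model.pt")
roberta.eval()  # disable dropout for inspection

This loads without errors, so the checkpoint and dictionary seem to be fine.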
However, when running
convert_roberta_original_pytorch_checkpoint_to_pytorch.convert_roberta_checkpoint_to_pytorch('/path/to/checkpoints', 'saving_directory', False)
I get:
torch.nn.modules.module.ModuleAttributeError: 'RobertaModel' object has no attribute 'decoder'
The error is raised at this line of the conversion script:
roberta_sent_encoder = roberta.model.decoder.sentence_encoder
What does this mean? What is missing? How can I get this attribute from my trained fairseq model?

FWIW, it looks like it has been fixed here.
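If I understand the fix correctly, newer fairseq releases renamed the attribute holding the transformer stack from decoder to encoder, so the older conversion script fails on roberta.model.decoder. A rough sketch of the kind of change involved (not the exact patch), handling both attribute names:

# roberta is the fairseq RobertaModel loaded earlier in the conversion script.
model = roberta.model
# Older fairseq checkpoints expose the stack as model.decoder; newer releases use model.encoder.
if hasattr(model, "decoder"):
    roberta_sent_encoder = model.decoder.sentence_encoder
else:
    roberta_sent_encoder = model.encoder.sentence_encoder

Upgrading transformers to a version that includes the fix (or applying the same change locally) should let the conversion run against a recent fairseq checkpoint.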
