Hi, I'm trying to use the opus-mt-es-en pretrained model. I used the code shown under "Use in Transformers" on the Helsinki-NLP/opus-mt-es-en · Hugging Face model page:
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-es-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-es-en")
But when I run it I get this error:
RuntimeError: unexpected EOF, expected 2715505 more bytes. The file might be corrupted.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "Pretrained_model.py", line 7, in <module>
model = MarianMTModel.from_pretrained(model_name)
File "/home/ddelahoz/HuggingFace/venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1289, in from_pretrained
raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for 'Helsinki-NLP/opus-mt-es-en' at '/home/ddelahoz/.cache/huggingface/transformers/fff0d80cd1590357b109efdc2eaf7534e652daf4ed1e4dab107c4742d480aa90.175783185eb9d9b71534371e2d601226f068e277cf6e50851e3ea6f262a4ca30' If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
I tried loading the en-es model the same way and it worked. Am I missing something?
Thanks in advance.