Error Loading google/pegasus-large or pegasus-xsum

This is the environment and code I am running:

Python 3.6.8 (default, Nov 16 2020, 16:55:22)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux

from transformers import pipeline
summarizer = pipeline("summarization",
                      model="google/pegasus-large")

And I get the error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3343, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "", line 3, in <module>
    model="google/pegasus-large")
  File "/usr/local/lib/python3.6/site-packages/transformers/pipelines.py", line 2742, in pipeline
    model = model_class.from_pretrained(model, config=config, **model_kwargs)
  File "/usr/local/lib/python3.6/site-packages/transformers/modeling_auto.py", line 1079, in from_pretrained
    pretrained_model_name_or_path, *model_args, config=config, **kwargs
  File "/usr/local/lib/python3.6/site-packages/transformers/modeling_utils.py", line 923, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/usr/local/lib/python3.6/site-packages/transformers/modeling_bart.py", line 978, in __init__
    base_model = BartModel(config)
  File "/usr/local/lib/python3.6/site-packages/transformers/modeling_bart.py", line 857, in __init__
    self.encoder = BartEncoder(config, self.shared)
  File "/usr/local/lib/python3.6/site-packages/transformers/modeling_bart.py", line 298, in __init__
    config.max_position_embeddings, embed_dim, self.padding_idx
  File "/usr/local/lib/python3.6/site-packages/transformers/modeling_bart.py", line 1344, in __init__
    self.weight = self._init_weight(self.weight)
  File "/usr/local/lib/python3.6/site-packages/transformers/modeling_bart.py", line 1355, in _init_weight
    out[:, 0 : dim // 2] = torch.FloatTensor(np.sin(position_enc[:, 0::2]))  # This line breaks for odd n_pos
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.

I get the same error trying to load google/pegasus-xsum.
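For context, the line the traceback ends on writes into a slice of the position-embedding weight, which is a leaf Parameter with requires_grad=True, and PyTorch 1.8 rejects that. A minimal sketch of the same PyTorch behaviour on its own, outside transformers (the shapes here are made up):

import torch

# A leaf tensor that requires grad, like an nn.Embedding weight.
w = torch.nn.Parameter(torch.empty(5, 4))

# Writing into a slice (a view) of the leaf in-place trips the same check under 1.8:
# RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
w[:, 0:2] = torch.zeros(5, 2)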

I also tried loading the models locally. The config and tokenizer load fine (see the sketch after the snippet below); the error occurs only when loading the model itself, e.g.:

    model = BartForConditionalGeneration.from_pretrained(model_dir,
                                                          config=model_config)
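For reference, the config and tokenizer steps that do succeed look roughly like this (model_dir and the Auto* classes are a sketch of what I ran, not a verbatim copy of my script):

from transformers import AutoConfig, AutoTokenizer

model_dir = "./pegasus-large"                           # local directory with the downloaded files
model_config = AutoConfig.from_pretrained(model_dir)    # loads without error
tokenizer = AutoTokenizer.from_pretrained(model_dir)    # loads without error
# Only the model load shown above raises the RuntimeError.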

From site-packages/torch/version.py:

__version__ = '1.8.1+cu102'
debug = False
cuda = '10.2'
git_version = '56b43f4fec1f76953f15a627694d4bba34588969'
hip = None
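The same versions can also be printed from Python, for anyone who wants to compare environments:

import torch
import transformers

print(torch.__version__)         # 1.8.1+cu102
print(torch.version.cuda)        # 10.2
print(transformers.__version__)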