AutoTokenizer.from_pretrained('google/pegasus-cnn_dailymail') raises ValueError: Couldn't instantiate the backend tokenizer

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('google/pegasus-cnn_dailymail')

error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[18], line 1
----> 1 AutoTokenizer.from_pretrained('google/pegasus-cnn_dailymail')

File c:\Users\Omkar\anaconda3\envs\text_summerization\Lib\site-packages\transformers\models\auto\tokenization_auto.py:745, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
    743 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
    744 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None):
--> 745     return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
    746 else:
    747     if tokenizer_class_py is not None:

File c:\Users\Omkar\anaconda3\envs\text_summerization\Lib\site-packages\transformers\tokenization_utils_base.py:1854, in PreTrainedTokenizerBase.from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, *init_inputs, **kwargs)
   1851 else:
   1852     logger.info(f"loading file {file_path} from cache at {resolved_vocab_files[file_id]}")
-> 1854 return cls._from_pretrained(
   1855     resolved_vocab_files,
   1856     pretrained_model_name_or_path,
   1857     init_configuration,
   1858     *init_inputs,
   1859     token=token,
   1860     cache_dir=cache_dir,
   1861     local_files_only=local_files_only,
   1862     _commit_hash=commit_hash,
   1863     _is_local=is_local,

…

ValueError: Couldn't instantiate the backend tokenizer from one of: (1) a tokenizers library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one.

Try one of the following:

  1. Pass use_fast=False to the from_pretrained method, e.g. tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)

  2. Run pip install sentencepiece, restart the kernel, then run the code again. Both options are sketched below the list.
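
A minimal sketch of both options, assuming sentencepiece has been installed and the kernel restarted; model_name is the checkpoint from the question and the sample sentence is only illustrative:

# Option 2: after `pip install sentencepiece` and a kernel restart,
# the default fast tokenizer can be converted and loaded.
from transformers import AutoTokenizer

model_name = "google/pegasus-cnn_dailymail"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Option 1 (alternative): request the slow tokenizer explicitly.
# Note: for SentencePiece-based checkpoints such as Pegasus, the slow
# tokenizer also relies on the sentencepiece package, so installing it
# is usually needed either way.
# tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)

# Quick smoke test on an illustrative sentence.
print(tokenizer("The quick brown fox jumps over the lazy dog.")["input_ids"])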