T0 Tokenizer Throws Error

I'm trying to use the new T0 model (bigscience/T0pp · Hugging Face), but when I try following the instructions, I get the following error:

from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, GPT2Model, GPT2Config, pipeline
t0_tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")

Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/home/rschaef/CoCoSci-Language-Distillation/CLIP_prefix_caption/clipgpt_venv/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 469, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/rschaef/CoCoSci-Language-Distillation/CLIP_prefix_caption/clipgpt_venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1742, in from_pretrained
    resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
  File "/home/rschaef/CoCoSci-Language-Distillation/CLIP_prefix_caption/clipgpt_venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1858, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/rschaef/CoCoSci-Language-Distillation/CLIP_prefix_caption/clipgpt_venv/lib/python3.7/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 136, in __init__
    **kwargs,
  File "/home/rschaef/CoCoSci-Language-Distillation/CLIP_prefix_caption/clipgpt_venv/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 117, in __init__
    "Couldn't instantiate the backend tokenizer from one of: \n"
ValueError: Couldn't instantiate the backend tokenizer from one of: 
(1) a `tokenizers` library serialization file, 
(2) a slow tokenizer instance to convert or 
(3) an equivalent slow tokenizer class to instantiate and convert. 
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.

Am I missing something from the instructions on that page?

According to python - Transformers v4.x: Convert slow tokenizer to fast tokenizer - Stack Overflow, I need to install the sentencepiece library separately. Is that correct?
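
If so, here is what I plan to try: a minimal sketch, assuming sentencepiece is the only missing dependency (and that the interpreter is restarted after installing it).

# pip install sentencepiece   <- install the missing dependency first, then restart Python
from transformers import AutoTokenizer

# Re-run the line that failed; with sentencepiece available, the slow T5 tokenizer
# should now convert to a fast one without the ValueError above.
t0_tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
print(t0_tokenizer("Hello, T0!").input_ids)  # quick sanity check that encoding works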