I was trying to use the new whisper-large-v3 model and got the following error:
File "/ext3/miniconda3/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2065, in _from_pretrained
raise ValueError(
ValueError: Non-consecutive added token '<|0.02|>' found. Should have index 50365 but has index 50366 in saved vocabulary.
Is this a bug in the model's saved tokenizer files?