Error on creating the Wav2Vec2CTCTokenizer

Hi,

I tried to follow the Google Colab tutorial in order to fine-tune a wav2vec2 model.
I generated my dictionary for the chosen dataset.
Still, I encounter an error when I try to create the tokenizer.

python line: tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("./", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token=" ")
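For context, this is roughly how I generate the vocabulary file beforehand (a minimal sketch with a toy vocab; my real vocab is extracted from the dataset transcripts):

```python
import json

# Toy vocabulary (sketch only -- the real one is built from the dataset).
# The special tokens must match the ones passed to from_pretrained.
vocab = {"[UNK]": 0, "[PAD]": 1, " ": 2, "a": 3, "b": 4}

# from_pretrained("./") expects a file named exactly vocab.json
# in the given directory.
with open("vocab.json", "w") as f:
    json.dump(vocab, f)
```

As far as I understand, if this file is missing, misnamed, or in a different directory than the one passed to from_pretrained, the OSError below is raised.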

response: Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/some_user/anaconda3/envs/some_conda_space/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1764, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for './'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './' is the correct path to a directory containing all relevant files for a Wav2Vec2CTCTokenizer tokenizer.

If I understood it correctly, I cannot go forward with fine-tuning the model, since I don't have a tokenizer available.

Can anyone help in this matter?