Error when fine-tuning Whisper for the Hausa language

Good day. I followed this blog post and used it with a Hausa dataset, and the training completed successfully.

My issue now is that any time I run this code:

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model = WhisperForConditionalGeneration.from_pretrained("valacodes/whisper-small-hausa")
processor = WhisperProcessor.from_pretrained("valacodes/whisper-small-hausa")
```

I get this error:

```
OSError: Can't load tokenizer for 'valacodes/whisper-small-hausa'. If you were trying to load it from '', make sure you don't have a local directory with the same name. Otherwise, make sure 'valacodes/whisper-small-hausa' is the correct path to a directory containing all relevant files for a WhisperTokenizer tokenizer.
```

Please, what might be causing this, and what can I do to rectify it?
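For reference, my guess from the error message is that only the model weights were pushed to the Hub and the tokenizer files are missing from the repo. Here is a minimal sketch of what I was planning to try: rebuilding the processor (which bundles the tokenizer and feature extractor) from the base `openai/whisper-small` checkpoint and saving it alongside the fine-tuned weights. The local directory name `./whisper-small-hausa` is just my assumption of where the checkpoint lives.

```python
from transformers import WhisperProcessor

# Assumption: fine-tuning used the base openai/whisper-small processor,
# configured for Hausa transcription.
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="Hausa", task="transcribe"
)

# Save the tokenizer and feature-extractor files next to the
# fine-tuned weights so the repo contains everything.
processor.save_pretrained("./whisper-small-hausa")

# Then push the missing files to the Hub (requires being logged in):
# processor.push_to_hub("valacodes/whisper-small-hausa")
```

Would that be the right way to fix it, or is there something else I am missing?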