SkinGPT-4 tokenizer loading error (sentencepiece RuntimeError)

(base) PS C:\Users\asus> python demo.py --cfg-path eval_configs/skingpt4_eval_llama2_13bchat.yaml --gpu-id 0
Initializing Chat
tokenizer.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 466k/466k [00:00<00:00, 506kB/s]
Loading VIT
Loading VIT Done
Loading Q-Former
Loading Q-Former Done
Loading LLM tokenizer
C:\ProgramData\anaconda3\Lib\site-packages\transformers\tokenization_utils_base.py:1930: FutureWarning: Calling LlamaTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.
  warnings.warn(
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Traceback (most recent call last):
  File "C:\Users\asus\demo.py", line 60, in <module>
    model = model_cls.from_config(model_config).to('cuda:{}'.format(args.gpu_id))
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\asus\skingpt4\models\skin_gpt4.py", line 245, in from_config
    model = cls(
            ^^^^
  File "C:\Users\asus\skingpt4\models\skin_gpt4.py", line 87, in __init__
    self.llm_tokenizer = LlamaTokenizer.from_pretrained(llm_model, use_fast=False)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\Lib\site-packages\transformers\tokenization_utils_base.py", line 2029, in from_pretrained
    return cls._from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\Lib\site-packages\transformers\tokenization_utils_base.py", line 2261, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\Lib\site-packages\transformers\models\llama\tokenization_llama.py", line 178, in __init__
    self.sp_model = self.get_spm_processor(kwargs.pop("from_slow", False))
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\Lib\site-packages\transformers\models\llama\tokenization_llama.py", line 203, in get_spm_processor
    tokenizer.Load(self.vocab_file)
  File "C:\ProgramData\anaconda3\Lib\site-packages\sentencepiece\__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ProgramData\anaconda3\Lib\site-packages\sentencepiece\__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Internal: D:\a\sentencepiece\sentencepiece\src\sentencepiece_processor.cc(1102) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
I have no clue how to solve this error. I thought I might have set a wrong path in the YAML file, but I checked it and nothing looks wrong there.
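From what I can tell, the FutureWarning near the top ("Calling LlamaTokenizer.from_pretrained() with the path to a single file or url is deprecated") suggests that the llm_model value in the YAML resolves to a single file or URL rather than a local model directory, and the final RuntimeError means sentencepiece could not parse the file it opened as a tokenizer.model protobuf. A minimal sanity check I could run, assuming the YAML is ultimately meant to point at a local Llama-2 checkout (the path below is a placeholder, not my real config value), is to load the tokenizer model directly with sentencepiece and bypass transformers entirely:

    import sentencepiece as spm

    # Placeholder path -- substitute whatever llm_model resolves to in
    # eval_configs/skingpt4_eval_llama2_13bchat.yaml. The file must be the
    # raw SentencePiece protobuf (tokenizer.model), not an HTML error page,
    # a Git LFS pointer stub, or a truncated download.
    sp = spm.SentencePieceProcessor()
    sp.Load(r"C:\models\Llama-2-13b-chat-hf\tokenizer.model")
    print("vocab size:", sp.GetPieceSize())

If this Load raises the same "Internal: ... ParseFromArray" error, the tokenizer.model file itself is corrupt or is not actually a SentencePiece model; if it loads cleanly, the problem is in how the path from the YAML reaches from_pretrained.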