My code looks like this:

from transformers import LlamaTokenizer, AutoTokenizer

tokenizer = LlamaTokenizer.from_pretrained('decapoda-research/llama-13b-hf', use_fast=True)
tokenizer.save_pretrained('./llama_tok')
tokenizer = LlamaTokenizer.from_pretrained('./llama_tok/') # very fast to load
tokenizer = AutoTokenizer.from_pretrained('./llama_tok/') # very slow to load
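To show concrete numbers for "fast" vs "slow", I measure the load time with a small timing helper; this is a minimal sketch, and the two `transformers` calls are shown commented out because they require the downloaded model files and the `./llama_tok` directory from the snippet above:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # Print the wall-clock time taken by the enclosed block.
    start = time.perf_counter()
    yield
    print(f"{label}: {time.perf_counter() - start:.2f}s")

# Usage against the snippet above (not executed here, since it needs
# transformers plus the saved './llama_tok' directory):
# with timed("LlamaTokenizer"):
#     LlamaTokenizer.from_pretrained('./llama_tok/')
# with timed("AutoTokenizer"):
#     AutoTokenizer.from_pretrained('./llama_tok/')
```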
What causes this difference, and how can I fix it?