Hi,
I am new to LLMs and Hugging Face. I tried to use the Auto classes with the Llama 2 (TheBloke/CodeLlama-7B-GGUF) model GGUF file (codellama-7b.q4_K_M.gguf) and was unable to get a tokenizer or model using the following code:
AutoModelForCausalLM.from_pretrained("TheBloke/CodeLlama-7B-GGUF", model_file="codellama-7b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Llama-2-13B-GGUF", model_file="llama-2-13b.Q4_K_M.gguf")
I am getting a ValueError: ValueError: Model file 'codellama-7b.q4_K_M.gguf' not found in '/root/.cache/huggingface/hub/models--TheBloke--CodeLlama-7B-GGUF/snapshots/98596f7f6c318118824bcbee4b0e20010ec510ec'
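As an aside, the cache directory in that error follows the Hub's folder naming convention, where the repo id's `/` is replaced with `--` and prefixed with `models--`. A minimal sketch of that mapping (the helper name `cache_dir_name` is just for illustration):

```python
def cache_dir_name(repo_id: str) -> str:
    # Hub cache folders use "models--{org}--{name}", i.e. "/" becomes "--"
    return "models--" + repo_id.replace("/", "--")

print(cache_dir_name("TheBloke/CodeLlama-7B-GGUF"))
# models--TheBloke--CodeLlama-7B-GGUF
```

So the repo itself was resolved and downloaded into the cache; the error is about the specific `model_file` name not being found inside that snapshot.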
Can you please help: how can I get a tokenizer or model instance using the Auto classes? Using llama.cpp I can load the model, but I am not sure about the tokenizer. Is it the case that GGUF model files cannot work with the Auto classes?