Running GGUF model files using Auto classes

Hi,
I am new to LLMs and Hugging Face. I tried to use the Auto classes with a Llama 2 based model (TheBloke/CodeLlama-7B-GGUF) GGUF file (codellama-7b.q4_K_M.gguf) and was unable to get a tokenizer or model using the following code:
```python
model = AutoModelForCausalLM.from_pretrained("TheBloke/CodeLlama-7B-GGUF", model_file="codellama-7b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Llama-2-13B-GGUF", model_file="llama-2-13b.Q4_K_M.gguf")
```

I am getting a ValueError: `ValueError: Model file 'codellama-7b.q4_K_M.gguf' not found in '/root/.cache/huggingface/hub/models--TheBloke--CodeLlama-7B-GGUF/snapshots/98596f7f6c318118824bcbee4b0e20010ec510ec'`

Can you please help? How can I get a tokenizer or model instance using the Auto classes? Using llama.cpp I can load the model, but I am not sure about the tokenizer. Is it the case that GGUF model files cannot work with the Auto classes?

Why don't you use llama-cpp-python? As far as I know, you can access the model's tokenizer with something like `model.tokenizer()` or `model.tokenize("text".encode("utf-8"))`.
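For example, here is a minimal sketch of that route, assuming `llama-cpp-python` and `huggingface_hub` are installed. The repo id comes from your question; the filename must match the repo's file list exactly (TheBloke's quant files typically use a capital "Q", e.g. `Q4_K_M`), so double-check it on the model page:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the GGUF file from the Hub into the local cache.
# The filename here is an assumption; verify it against the repo's "Files" tab.
model_path = hf_hub_download(
    repo_id="TheBloke/CodeLlama-7B-GGUF",
    filename="codellama-7b.Q4_K_M.gguf",
)

# Load the model; n_gpu_layers offloads layers to the GPU (like gpu_layers above).
llm = Llama(model_path=model_path, n_gpu_layers=50)

# Tokenization is a method on the model itself and takes bytes, not str.
tokens = llm.tokenize("def fib(n):".encode("utf-8"))
text = llm.detokenize(tokens).decode("utf-8")
print(tokens)
print(text)
```

So the tokenizer is not a separate object you fetch with `AutoTokenizer`; the GGUF file embeds the vocabulary, and llama-cpp-python exposes it through the loaded model.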

As I said, I am learning and I want to get the basics right. Looking at code and examples on the internet, the Auto classes let you use different models and are super easy to use. However, there is some concept I am missing, and hence I am unable to use the Auto classes with a specific GGUF file. Thanks for your reply.