Error loading llama_cpp_binaries

I am having a problem loading my first model. As far as I can tell, my localhost is running fine. I downloaded Mistral-7B v0.3 (because I only have 8 GB of RAM), but when I go to load the model, it can't find llama_cpp_binaries. As far as I can tell, I don't need binaries for this. I don't know what I did wrong. Any advice helps. THANK YOU!

```
10:43:58-713195 INFO     Loading "Mistral-7B-Instruct-v0.3.Q4_K_M.gguf"
10:43:58-713195 ERROR    Failed to load the model.
Traceback (most recent call last):
  File "C:\Windows\System32\text-generation-webui\modules\ui_model_menu.py", line 190, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Windows\System32\text-generation-webui\modules\models.py", line 43, in load_model
    output = load_func_map[loader](model_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Windows\System32\text-generation-webui\modules\models.py", line 66, in llama_cpp_server_loader
    from modules.llama_cpp_server import LlamaServer
  File "C:\Windows\System32\text-generation-webui\modules\llama_cpp_server.py", line 12, in <module>
    import llama_cpp_binaries
ModuleNotFoundError: No module named 'llama_cpp_binaries'
```
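In case it helps with debugging, here is a minimal sketch (my own check, not something from the webui itself; the only name taken from the error is `llama_cpp_binaries`) that prints whether the module is visible to whatever Python interpreter runs it:

```python
# Minimal sketch: check whether llama_cpp_binaries is importable
# from a given Python environment. Run it with the same interpreter
# that text-generation-webui uses (e.g. the one in its bundled env).
import importlib.util
import sys

print("Interpreter:", sys.executable)

spec = importlib.util.find_spec("llama_cpp_binaries")
if spec is None:
    print("llama_cpp_binaries is NOT installed in this environment.")
else:
    print("llama_cpp_binaries found at:", spec.origin)
```

If it prints NOT installed, then the package is simply missing from the environment the webui is running in, rather than a problem with the model file.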


This?