How to download a model and run it with Ollama locally?

Hi,

Ollama is a wrapper on top of llama.cpp (GitHub - ggerganov/llama.cpp: LLM inference in C/C++), so it can run any HF model available in the GGUF format straight from the terminal. I would recommend checking it out, along with LM Studio, which provides a nice UI on top of llama.cpp and likewise supports any HF model in GGUF format.
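
For the practical steps, here is a minimal sketch of the usual Ollama workflow. The model name `llama3` and the Hugging Face repo path are just illustrative placeholders; substitute whatever model you actually want:

```
# Download a model from the Ollama library to your machine
ollama pull llama3

# Start an interactive chat session with the downloaded model
ollama run llama3

# Ollama can also pull GGUF models hosted on Hugging Face directly;
# the repo path below is only an example
ollama run hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF
```

The first two commands cover the standard download-and-run flow; the `hf.co/...` form is handy when the model you want isn't in the Ollama library but does have a GGUF upload on Hugging Face.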