How do I download a model and run it locally with Ollama?
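
For concreteness, here is a minimal sketch of what I'm trying to do, assuming the official `ollama` Python client (`pip install ollama`) and `llama3` as a placeholder model name — roughly the scripted equivalent of running `ollama pull llama3` followed by `ollama run llama3` on the CLI:

```python
# Minimal sketch, assuming the Ollama server is already running locally
# (e.g. started with `ollama serve`) and listening on its default port.
import ollama

# Download the model from the Ollama library (like `ollama pull llama3`).
ollama.pull("llama3")

# Send a single prompt to the downloaded model and print the reply.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Hello, what can you do?"}],
)
print(response["message"]["content"])
```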

And what about models with other architectures — can those be converted and run with Ollama as well?
