Isn't there a simpler way to run LLMs / models locally?

Ollama already does most of that. GPT4All might also help; it has API integration and a GUI, so you don't have to rely on terminal commands as much.
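For reference, a typical Ollama session is just a couple of commands (the model name here is only an example; substitute any model from the Ollama library):

```shell
# Download a model, then chat with it locally; Ollama handles serving.
ollama pull llama3.2
ollama run llama3.2 "Explain what a local LLM is in one sentence."
```

Ollama also exposes a local HTTP API (on port 11434 by default), so other tools can integrate with it once `ollama serve` is running.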