How to use a local gemma-7b-it with Jan

I apologize for what might be a very basic question. I’ve scoured the docs, tried this and that, and cannot for the life of me figure out what I’m doing wrong:

I am on a Lubuntu 22.04 LTS system with 40 GB of RAM. I’ve installed the .deb of Jan, and it runs fine. I have gotten it to chat with a non-local API server, one of the defaults.

But when I install a local model (I’m trying gemma-7b-it; not sure if that’s the problem?), I cannot get it to start. I’ve gotten a variety of errors. When I go to Hugging Face, find the model, and click “Run on Jan Local,” I get an authentication error that I cannot figure out. I’ve tried adding API keys, and running the Hugging Face CLI gives me a “token already on your machine” notice; I’ve also gone to Hugging Face and generated the tokens. I can see “Gemma 7B Q4” in my local model list (it does say “slow on my device,” but I’m okay with slow). If I go to My Models, I see it’s “Inactive,” and when I click the kebab menu and choose “Start,” it flashes for a second and says “Gemma failed to start.”
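For reference, here is a little check I put together to see what state things are in. The Hugging Face token path is the CLI’s documented default; the `~/jan` data folder is my assumption for a default Jan install on Linux (yours may differ if you changed it in Settings):

```python
import glob
import os

# The Hugging Face CLI stores its access token here by default; this is
# presumably what the "token already on your machine" notice refers to.
token_file = os.path.expanduser("~/.cache/huggingface/token")
print("HF token file present:", os.path.isfile(token_file))

# A partially downloaded GGUF is one common cause of "failed to start".
# ~/jan is assumed to be Jan's default data folder on Linux.
matches = glob.glob(os.path.expanduser("~/jan/models/*gemma*"))
if matches:
    for path in matches:
        print(path)
else:
    print("no gemma model folder found under ~/jan/models")
```

Both checks come back looking plausible on my machine, which is part of why I’m stuck.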

I’m not finding any usable walkthrough that gives me a grounding in all the different steps, either. So I don’t know what I’ve done wrong: downloaded an incompatible model, failed to set up parameters the model needs to run, failed to authenticate properly, failed to choose the right model for my system…there’s a myriad of things that could be wrong. I also tried a Mistral model and couldn’t see any pricing; it seemed I’d have to sign up first and only afterwards see what I’d pay, so I backed out of that.
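For what it’s worth, my back-of-envelope math says a 4-bit 7B model should fit comfortably in 40 GB of RAM, so I don’t think memory is the issue. (The ~4.5 effective bits per weight for a Q4-style quantization and the ~2 GB runtime overhead are my own rough assumptions, not figures from the Jan docs.)

```python
# Rough memory estimate for gemma-7b-it at Q4 quantization.
# Assumptions: ~4.5 effective bits per weight for a Q4_K-style quant,
# plus ~2 GB for the KV cache and runtime buffers.
params = 7e9              # 7 billion parameters
bits_per_weight = 4.5
weights_gb = params * bits_per_weight / 8 / 1e9
total_gb = weights_gb + 2.0

print(f"weights: ~{weights_gb:.1f} GB, total: ~{total_gb:.1f} GB")
# → weights: ~3.9 GB, total: ~5.9 GB
```

That’s well under 40 GB, so whatever “slow on my device” is warning about, I’d expect the model to at least load.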

I’d love to just have a local model running that I can use mostly for writing and for testing certain prompts. Ideally I’d also like to fine-tune it locally so it’s a bit different from what’s out there online, unique to me.

Thank you for any help.